| Feature | Description |
|---|---|
| ⚡ Seamless setup | One‑command `solo setup` auto‑detects CPU/GPU/RAM and writes an optimised config |
| 📚 Open model registry | Pull weights from Hugging Face, Ollama, or local GGUF bins |
| 🖥️ Cross‑platform | macOS (Apple Silicon & Intel), Linux, Windows 10/11 |
| 🛠️ Configurable framework | Tweak ports, back‑end, quantisation, & device mapping in `~/.solo_server/solo.json` |
| Flag | Description | Default |
|---|---|---|
| `-s, --server` | Back‑end: `ollama`, `vllm`, `llama.cpp` | `ollama` |
| `-m, --model` | Model name or path | — |
| `-p, --port` | HTTP port | `5070` |
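To make the flag semantics concrete, here is a minimal sketch of an equivalent option parser in Python's `argparse`. This is purely illustrative — it mirrors the table above (short/long forms, the three documented back‑ends, and the defaults), not Solo's actual implementation:

```python
import argparse

# Hypothetical re-creation of the documented flags; the real CLI may differ.
parser = argparse.ArgumentParser(prog="solo")
parser.add_argument("-s", "--server",
                    choices=["ollama", "vllm", "llama.cpp"],
                    default="ollama",
                    help="Inference back-end")
parser.add_argument("-m", "--model", default=None,
                    help="Model name or path")
parser.add_argument("-p", "--port", type=int, default=5070,
                    help="HTTP port")

# Explicit flags override the defaults, as in the table.
args = parser.parse_args(["-s", "llama.cpp", "-m", "llama3", "-p", "8080"])
print(args.server, args.model, args.port)  # → llama.cpp llama3 8080
```

Omitting a flag falls back to its default column, e.g. `parser.parse_args([])` yields `server="ollama"` and `port=5070` with no model set.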
`solo setup` writes a machine‑specific config at `~/.solo_server/solo.json`. Edit it manually or rerun the wizard any time.
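For orientation, a hand‑edited config might look something like the fragment below. The key names here are illustrative guesses based on the settings the wizard manages (back‑end, port, quantisation, device mapping) — check your generated `solo.json` for the exact schema:

```json
{
  "server": "ollama",
  "port": 5070,
  "quantisation": "q4_k_m",
  "device_map": "auto"
}
```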