# Ollama model manifest
#
# One model per line. Lines starting with # are ignored.
# Format: <name>:<tag>, or just <name> for the latest tag.
#
# Pull all models:    bash scripts/pull-models.sh
# Pull a specific one: ollama pull <model>

# ─── Primary (main conversation) ───────────────────────────────────────────────
llama3.3:70b

# ─── Alternative primary ───────────────────────────────────────────────────────
qwen2.5:72b

# ─── Fast / low-latency (voice pipeline, quick tasks) ──────────────────────────
qwen2.5:7b

# ─── Code generation ───────────────────────────────────────────────────────────
qwen2.5-coder:32b

# ─── Embeddings (mem0 memory store) ────────────────────────────────────────────
nomic-embed-text
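
# A minimal pull loop over this manifest (a sketch, kept in comments so this
# file stays a valid manifest; assumes the file is saved as models.txt —
# the real scripts/pull-models.sh may differ):
#
#   grep -v '^#' models.txt | awk 'NF {print $1}' | xargs -n1 ollama pull
#
# grep drops comment lines, awk 'NF' skips blanks and takes the first field,
# and xargs runs `ollama pull` once per remaining model name.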