Local LLM runtime. Unlocks bigger models (Llama 3.1 8B, Mistral, Qwen) than the in-browser Llama 3.2 1B.
Pick the build for your OS. The installer is signed by the vendor, so Windows SmartScreen / macOS Gatekeeper will let it run after a click-through.
winget install Ollama.Ollama; Start-Sleep -Seconds 3; ollama pull llama3.1
curl -fsSL https://ollama.com/install.sh | sh && ollama pull llama3.1
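Either installer leaves the Ollama daemon listening at http://localhost:11434. If you'd rather wait for the daemon properly than sleep a fixed 3 seconds, here is a minimal sketch; `wait_for_ollama` is a hypothetical helper, not an Ollama command:

```shell
# Poll the daemon's root endpoint (it answers "Ollama is running" once up).
# wait_for_ollama is a hypothetical helper; 11434 is Ollama's default port.
wait_for_ollama() {
  url="${1:-http://localhost:11434/}"
  tries="${2:-10}"
  while [ "$tries" -gt 0 ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

wait_for_ollama && echo "Ollama daemon is up" || echo "daemon not reachable yet"
```

Once it reports the daemon is up, `ollama pull llama3.1` is safe to run.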
Open Claude (the desktop app or the web), click + New session, switch to the Code tab, then paste the prompt below as your first message. Claude handles the install, the download, and the verification for you; there are no terminal commands to type yourself.
Install Ollama on this machine and pull the llama3.1 model so I can use it with Phantomline. 1. Detect my OS (Windows / macOS / Linux). 2. Install Ollama using the official installer (winget on Windows, the install.sh script on macOS/Linux, or download the .dmg/.exe if scripts aren't available). 3. Wait for the Ollama daemon to start (it runs at http://localhost:11434). 4. Run `ollama pull llama3.1` to download the default Phantomline model (~4.7 GB). 5. Verify by running `ollama list` and confirming llama3.1 appears. 6. Tell me when it's done so I can reload http://localhost:5000/app and see it marked READY.
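Step 1 of that prompt, OS detection, reduces to a `uname` check on macOS/Linux (Windows shells show up as MINGW/MSYS under Git Bash). A rough sketch of what such a check looks like; `detect_os` is a hypothetical helper name:

```shell
# Rough OS detection, as in step 1 of the prompt above.
# detect_os is a hypothetical helper, not part of Ollama.
detect_os() {
  case "$(uname -s 2>/dev/null)" in
    Linux*)               echo "linux" ;;
    Darwin*)              echo "macos" ;;
    MINGW*|MSYS*|CYGWIN*) echo "windows" ;;
    *)                    echo "unknown" ;;
  esac
}
detect_os
```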
Go to ollama.com/download and grab the build for your OS (Windows .exe, macOS .dmg, or Linux script).
Double-click the .exe / .dmg, or pipe the Linux script into sh. Default options are fine. After install, Ollama runs as a background service. You'll see a small llama icon in your system tray.
curl -fsSL https://ollama.com/install.sh | sh
Open a terminal/PowerShell after install and run the command below. Llama 3.1 is ~4.7 GB so this takes 5-15 min depending on your connection.
ollama pull llama3.1
pulling manifest
pulling 6a0746a1ec1a... 100% ▕████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕████████▏  12 KB
verifying sha256 digest
writing manifest
success
Run this in a terminal. It should list llama3.1 (size ~4.7 GB).
ollama list
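If you'd rather script this verification than eyeball it, the check boils down to matching the model name in the first column of `ollama list`. A sketch, where `check_model` is a hypothetical helper and the canned sample line (placeholder ID) stands in for real output:

```shell
# check_model: exit 0 iff the named model appears in `ollama list` output.
# (check_model is a hypothetical helper, not an Ollama command.)
check_model() {
  awk -v m="$1" 'NR > 1 && index($1, m) == 1 { found = 1 } END { exit !found }'
}

# Real usage: ollama list | check_model llama3.1
# Demo against a canned `ollama list` line (placeholder ID):
printf 'NAME\nllama3.1:latest  000000000000  4.7 GB  now\n' \
  | check_model llama3.1 && echo "llama3.1 READY"
```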