AI Is Trash Unless It Runs Locally
Jun 07, 2025
Running AI models locally doesn't have to be complicated. With Ollama, it's surprisingly simple and powerful.
Ollama is a lightweight tool that lets you run open source large language models directly on your machine. Whether you're experimenting, building apps, or exploring AI, it supports models like DeepSeek R1, Qwen 3, Llama 3.3, Qwen 2.5 VL, Gemma 3, and many others.
Why Run Models Locally?
Local deployment means your prompts and data never leave your machine, you avoid per-token API costs, and everything keeps working offline. Ollama makes this easier than ever.
Installation Guide
Windows
- Download the installer from ollama.com/download
- Run the .exe file and follow the prompts.
- Open Command Prompt or PowerShell and run:
ollama run llama3
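Once the model finishes downloading, ollama run drops you into an interactive chat session (type /bye to exit). You can also pass a one-shot prompt directly on the command line:
ollama run llama3 "Why is the sky blue?"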
macOS
- Download the macOS app from ollama.com/download and drag it into your Applications folder, or install the CLI with Homebrew:
brew install ollama
- Then start a model:
ollama run llama3
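The same CLI also manages your local model library. For example, pulling one of the models mentioned earlier, listing what's installed, and removing it again looks like this (the gemma3 tag here matches the Ollama model library at the time of writing):
ollama pull gemma3
ollama list
ollama rm gemma3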
Linux
- Install using the command:
curl -fsSL https://ollama.com/install.sh | sh
- Run a model:
ollama run llama3
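On most distributions the install script also registers Ollama as a systemd service, so you can check that the server is up and follow its logs with:
systemctl status ollama
journalctl -u ollama -f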
Docker
- Make sure Docker is installed and running.
- Pull and run the Ollama image:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- Then run a model inside the container:
docker exec -it ollama ollama run llama3
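Whichever install route you choose, Ollama exposes the same REST API on port 11434 (which is why the Docker command maps that port), so any HTTP client can talk to it. A quick smoke test with curl, assuming the llama3 model pulled above:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'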
Final Thoughts
Ollama is a beautiful tool for anyone looking to explore or integrate local AI capabilities. It's free, open, and supports a growing list of models. Give it a try—you might be surprised how easy and fast it is to get started.