How to Install Ollama AI Models Locally on Mac, Windows, and Linux (With External Drive Setup)

Ollama is a powerful tool for running large language models (LLMs) locally on your machine. Whether you’re a developer, researcher, or hobbyist, you can run models like Llama 3, Mistral, and others directly on your computer. However, these models are large, often several gigabytes each, and can quickly consume significant storage space.

This guide will walk you through:

  • Installing Ollama on macOS, Windows, and Linux
  • Managing storage limitations using symlinks and external hard drives

🚀 What is Ollama?

Ollama is a command-line tool that makes it easy to run and interact with open-source large language models locally. It packages a model’s weights, configuration, and prompt template into a single bundle and offers a simple interface (ollama run model_name) to get started with minimal setup.
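
For a feel of the day-to-day workflow, here is what a typical first session looks like (llama3 is just an example; any model from the Ollama library works the same way):

    # Download a model without running it
    ollama pull llama3

    # Start an interactive chat session (downloads the model first if needed)
    ollama run llama3

    # Or pass a one-shot prompt instead of opening a session
    ollama run llama3 "Explain what a symlink is in one sentence."

    # See everything you have downloaded
    ollama list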

🖥️ How to Install Ollama on Different Operating Systems

🔹 macOS Installation

  1. Install via Homebrew (Recommended): brew install ollama
  2. Run a model: ollama run llama3
  3. Ollama will download the model on first run, then drop you into an interactive chat prompt.

Tip: Use ollama list to see downloaded models, and ollama pull <model> to get a specific one.
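
One caveat with the Homebrew route: the background server isn’t always started for you. A first session may look something like this (the brew services line is only needed if ollama run reports it can’t connect to the server):

    # Install the CLI and server via Homebrew
    brew install ollama

    # If "ollama run" can't reach the local server, start it as a service
    # (running "ollama serve" in a separate terminal also works)
    brew services start ollama

    # Download and chat with Llama 3
    ollama run llama3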


🔹 Windows Installation

  1. Download the installer from the official website (ollama.com).
  2. Run the .exe file and follow the setup instructions.
  3. Open Command Prompt or Windows Terminal and run: ollama run llama3
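
After the installer finishes, you can sanity-check the setup from Command Prompt (output will vary by version; the desktop installer normally starts Ollama in the background for you):

    :: Confirm the CLI is on your PATH
    ollama --version

    :: Download and chat with a model
    ollama run llama3

    :: List downloaded models
    ollama list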

🔹 Linux Installation

  1. Run the official install script: curl -fsSL https://ollama.com/install.sh | sh
  2. Run Ollama: ollama run llama3

🔐 Note: You may need to use sudo for permissions during installation.
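
On a typical systemd distro the whole flow looks like this (the systemctl check assumes the script set up the service, which it does on most distributions):

    # Run the official install script
    curl -fsSL https://ollama.com/install.sh | sh

    # Verify the background service is up (assumes systemd)
    systemctl status ollama

    # Download and run a model
    ollama run llama3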

💾 Save Disk Space: Use External Drives with Symlinks

Ollama stores downloaded model files in:

  • macOS: ~/.ollama
  • Linux: ~/.ollama (or /usr/share/ollama/.ollama when the install script sets Ollama up as a systemd service)
  • Windows: C:\Users\<username>\.ollama

Individual models can run from a few to tens of gigabytes. If your internal disk is small (say, a 256 GB SSD), offloading models to an external drive can free up critical space.
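
To see how much space your models are actually taking, check the models directory (paths assume the default locations listed above):

    # macOS, or Linux when Ollama runs under your own user
    du -sh ~/.ollama/models

    # Linux when the install script set up the systemd service
    sudo du -sh /usr/share/ollama/.ollama/models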

🧩 What is a Symlink?

A symbolic link (symlink) is a special file that points to another location on disk. Anything that opens the symlink is transparently redirected to the target, so Ollama keeps reading and writing ~/.ollama as if nothing had moved.
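
You can see the effect in ten seconds with throwaway files:

    # Create a file, then a symlink pointing at it
    echo "hello" > /tmp/original.txt
    ln -s /tmp/original.txt /tmp/shortcut.txt

    # Reading through the symlink transparently reads the target
    cat /tmp/shortcut.txt      # prints: hello
    ls -l /tmp/shortcut.txt    # shows: /tmp/shortcut.txt -> /tmp/original.txt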


🔄 How to Move .ollama to External Drive Using Symlink

📦 Step-by-Step: macOS / Linux

  1. Close Ollama if it’s running: pkill ollama
  2. Move .ollama folder to your external drive: mv ~/.ollama /Volumes/ExternalDrive/ollama
  3. Create a symlink from the original path to the new one: ln -s /Volumes/ExternalDrive/ollama ~/.ollama
  4. Verify it’s working: ollama list
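
Putting steps 1–4 together into one pasteable sequence (/Volumes/ExternalDrive is a macOS-style mount point; on Linux it is usually something like /media/<user>/ExternalDrive or /mnt/external):

    # 1. Stop any running Ollama process
    pkill ollama

    # 2. Move the entire .ollama directory to the external drive
    mv ~/.ollama /Volumes/ExternalDrive/ollama

    # 3. Leave a symlink behind at the original path
    ln -s /Volumes/ExternalDrive/ollama ~/.ollama

    # 4. If your models still appear, the move worked
    ollama list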

🪟 Step-by-Step: Windows

  1. Move the .ollama folder to your external drive, for example E:\ollama.
  2. Open Command Prompt as Administrator.
  3. Create a symlink using mklink: mklink /D C:\Users\<your_username>\.ollama E:\ollama
  4. Run Ollama again and confirm the model list: ollama list
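
Here is the same procedure as commands in an elevated Command Prompt (E: and the robocopy move are examples; dragging the folder in Explorer works just as well for step 1, and the process name may differ slightly between versions):

    :: Stop Ollama first (quit it from the system tray, or end the process)
    taskkill /IM ollama.exe /F

    :: Move the folder to the external drive (robocopy /MOVE deletes the source
    :: after copying, which works across drives where a plain "move" may not)
    robocopy "C:\Users\<your_username>\.ollama" "E:\ollama" /E /MOVE

    :: Create the directory symlink at the original path
    mklink /D "C:\Users\<your_username>\.ollama" "E:\ollama"

    :: Confirm the models are still visible
    ollama list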

📝 Tip: Make sure your external drive is always connected before launching Ollama.
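
If you forget, the symlink will be dangling and Ollama may fail to start or show no models. A minimal guard sketch for macOS/Linux (start-ollama.sh is a hypothetical name; it just refuses to launch until the drive is mounted):

    #!/bin/sh
    # Refuse to start Ollama if the symlinked models directory is unreachable
    if [ -d "$HOME/.ollama/models" ]; then
        ollama serve
    else
        echo "External drive not mounted. Connect it and try again." >&2
        exit 1
    fi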


🛠 Troubleshooting Symlink Issues

  • Permission Denied: Run commands with sudo (macOS/Linux) or Admin rights (Windows).
  • Drive Not Detected: Ensure the external drive is properly mounted.
  • Model Not Found: Confirm .ollama was fully moved (not copied) and that the symlink points at the new location.
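
Most of these can be diagnosed in one pass from the terminal (macOS/Linux shown; the mount point is the example path from earlier):

    # Is ~/.ollama actually a symlink, and where does it point?
    ls -ld ~/.ollama
    readlink ~/.ollama

    # Does the target exist? If not, the drive probably isn't mounted
    ls /Volumes/ExternalDrive/ollama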

📚 Conclusion

Installing and running Ollama locally is straightforward, and with symlinks, even machines with limited storage can handle large models efficiently. This approach lets you tap into the power of open-source AI while keeping your system lean.

If you’re dealing with multiple models or working on a laptop with limited space, using an external drive with symbolic links is a game-changer.
