The quickest guide to hosting a local LLM

Visit https://ollama.com/ and download Ollama for your operating system; macOS, Linux, and Windows are all currently supported.
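
On macOS and Windows this is a standard installer. On Linux, the site documents a one-line install script instead (worth double-checking on the site, since it may change):

curl -fsSL https://ollama.com/install.sh | sh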

Once installed, open a terminal (PowerShell on Windows) and run the model of your choosing from the list available at ollama.com/library.
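
If you'd rather download a model's weights ahead of time without starting a chat, you can pull it first and then check what's stored locally (both are standard Ollama CLI commands):

ollama pull llama3
ollama list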

To run and chat with Llama 3:

ollama run llama3
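
Behind the scenes, Ollama also serves a local HTTP API (on port 11434 by default), so scripts and other applications can talk to the model. A minimal sketch with curl, assuming the default port and that llama3 has already been pulled:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?"
}'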

It’s as easy as that! Have fun running large language models on your own gear.