The quickest guide to hosting a local LLM

Visit ollama.com and download Ollama for your operating system. macOS, Linux, and Windows are currently supported.

Once installed, open a terminal (PowerShell on Windows) and run a model of your choosing from the Ollama model library.

To run and chat with Llama 3:

ollama run llama3
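Beyond the interactive CLI, Ollama also serves a local HTTP API (by default on port 11434), so you can script against the same model. A minimal Python sketch, assuming the default endpoint and that `llama3` has already been pulled; the helper names here are illustrative, not part of Ollama itself:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-chat) generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, the full reply arrives in one JSON object.
        return json.loads(resp.read())["response"]
```

With the server running, `ask("llama3", "Why is the sky blue?")` returns the model's answer as a plain string.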

It’s as easy as that! Have fun running large language models on your own hardware.