
How to host your own ChatGPT on Linux

Have you ever wanted to run ChatGPT on your Linux system? Now, you can with the Ollama tool. This program enables you to download and interact with dozens of open-source large language models, similar to how you’d engage with OpenAI’s ChatGPT. Here’s how you can get it up and running on your system.

How to set up Ollama on Linux

Setting up Ollama on your Linux system enables you to download a wide variety of large language models using simple commands, eliminating the need for manual processes.

To begin the setup, open a terminal window. You can do this on your Linux system by pressing the Ctrl + Alt + T keyboard shortcut. Alternatively, search for “terminal” in the app menu to open it.

Once the terminal app is open, install the latest version of Ollama using the script below. This script will configure your system for Ollama use.

Note: It’s important to review the code in the script before running it to understand its actions on your computer. The script’s full source is available on GitHub.

curl https://ollama.ai/install.sh | sh

After running the above command, follow any on-screen prompts to complete the installation. Once the script finishes, Ollama is installed on your system.
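As a quick sanity check, you can confirm that the installer actually put the ollama binary on your PATH before moving on:

```shell
# Sanity check: was the ollama binary installed somewhere on PATH?
if command -v ollama >/dev/null 2>&1; then
    echo "ollama is installed at: $(command -v ollama)"
else
    echo "ollama was not found on PATH - try re-running the install script"
fi
```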


Running large language models locally takes a lot of hardware power. To get the best experience when running LLMs via Ollama, you should have an Nvidia GPU or a powerful multi-core Intel/AMD CPU.

If your computer hardware isn’t that powerful, it’s OK. You can still run LLMs with Ollama, but be prepared for them to run significantly slower.
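To gauge what your machine can handle before downloading a model, a few standard Linux commands report core count, total memory, and whether an Nvidia GPU is present (on some distributions, lspci comes from the pciutils package):

```shell
# How many CPU cores are available?
nproc

# How much total memory? (prints the second field of the "Mem:" row)
free -h | awk '/^Mem:/ {print $2}'

# Is an Nvidia GPU visible on the PCI bus?
lspci 2>/dev/null | grep -i nvidia || echo "No Nvidia GPU detected"
```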

How to install a large language model

There are many large language models available for download through Ollama. To see the available models, visit the “library” page on the official Ollama website.

When you reach the “library” page, explore the different models listed there. Some notable LLM models include:

  • Llama2 (Meta’s large language model)
  • Orca2 (Microsoft’s large language model, built on top of Llama2)
  • Falcon (A large language model built by the Technology Innovation Institute)
  • OpenChat (A family of open-source models trained on a wide variety of data)

Once you’ve selected a large language model (LLM) to use with Ollama on Linux, use the pull command to download it to your system. For instance, to download the Orca2 model, run the following commands in a terminal.

First, open a new terminal window (or a terminal tab) and initiate Ollama, if it’s not already running in the background on your system.

ollama serve

You can then pull the LLM model with:

ollama pull orca2

After pulling the model to your system, you can run it directly with Ollama. To do this, use the ollama run command. Remember, the chat tool included with Ollama is quite basic.

ollama run orca2

When you want to close the model, press Ctrl + D on the keyboard. This ends the chat session.
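Beyond the interactive prompt, the Ollama server also exposes a local HTTP API, by default on port 11434. As a sketch, you can build a JSON request body and send it to the /api/generate endpoint with curl while ollama serve is running; the "orca2" model name and the prompt text below are just examples:

```shell
# Build the JSON body for Ollama's /api/generate endpoint.
# "model" must be a model you have already pulled; "stream": false
# asks for a single response instead of streamed chunks.
body='{"model": "orca2", "prompt": "Why is the sky blue?", "stream": false}'
echo "$body"

# With the server running (ollama serve), send it to the local API:
# curl http://localhost:11434/api/generate -d "$body"
```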

How to install Oterm to use Ollama

There are numerous tools on Linux for interacting with large language models through Ollama. However, many of these tools require some familiarity with Docker or NPM/NodeJS to get started.

Oterm, on the other hand, is not one of those complex tools. It’s a terminal-based UI app that only requires Python to operate, so you can set it up quickly inside a Python virtual environment. Here’s how to install it on your system.

First, open a terminal window and install Python and the virtual environment tooling for your distribution using the appropriate commands below.


Ubuntu/Debian

sudo apt update
sudo apt install python3 python3-venv

Arch Linux

sudo pacman -S python


Fedora

sudo dnf install python3 python3-virtualenv


OpenSUSE

sudo zypper install python3 python3-virtualenv

Once you’ve set up the required Python packages, you can create a Python environment.

python3 -m venv python-env

With the environment created, activate it using source.

source python-env/bin/activate

Once you’ve activated your environment, you can easily install Oterm with:

pip install oterm

When the Oterm application is installed, you can launch it with:

oterm
Once Oterm is open, make sure you have ollama serve running in a separate terminal window (or terminal tab). Then, using the mouse, select an LLM model and click “Create.”

The Oterm Ollama start page.

Clicking the “Create” button opens a new chat window within Oterm. This window allows you to chat with your AI-powered, self-hosted ChatGPT bot. Enjoy your interaction!

Ollama running the Openchat large language model, reciting a poem.
