
Use Facebook’s codellama AI on Linux

ChatGPT gets talked about often, and it's easy to see why, as it is a very useful tool. However, it isn't the only game in town. There are many other large language models out there, including one Meta (Facebook) developed for programming, called "codellama."

In this guide, we’ll go over how you can spin up Facebook’s AI coding tool “codellama” to write programs on Linux.


Before we begin

In this guide, we'll cover large language models, such as codellama. Large language models require a lot of GPU and CPU power. If you do not have a powerful computer, ollama will still run the model, though generation will be extremely slow.

For the best results in using codellama (as well as other large language models), use an Nvidia GPU, and a multi-core Intel or AMD CPU.

How to install ollama on Linux

Setting up open-source Large Language Models such as Meta’s “codellama” model on Linux can be extremely complicated, especially if you are new to LLMs. Thankfully, the “ollama” tool exists. Think of it like a “package manager” for large language models.

To get started with “ollama” on Linux, you must open up a Linux terminal. To open up a Linux terminal program on your Linux desktop, press Ctrl + Alt + T on the keyboard. Alternatively, search for “terminal” and launch it via the app menu.

Once the terminal app is open and ready to use, you need to run the installation script for the ollama tool. This script modifies your system, so if you are worried about running it, review the code before executing it.

After you’ve looked over everything, you can install the ollama tool using their official installation command.

curl https://ollama.ai/install.sh | sh

Follow the on-screen prompts from the installer. Once installation finishes, confirm it worked by running the command ollama in the terminal. If the command isn't found, re-run the script.
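If you'd like to script the check, something along these lines works (a minimal sketch; the exact output of the version flag may differ between ollama releases):

```shell
# Check whether the ollama binary is on the PATH
if command -v ollama >/dev/null 2>&1; then
    echo "ollama installed: $(ollama --version 2>/dev/null)"
else
    echo "ollama not found - re-run the install script"
fi
```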

How to download codellama on Linux

Meta's (formerly Facebook) codellama is a derivative of its ChatGPT-like large language model llama2, fine-tuned specifically for programming. It can generate programming code from user prompts. To download "codellama," you first need to start the ollama server.

In a new terminal tab, run ollama serve. This starts the local ollama server, which handles downloading and running large language models such as codellama.

ollama serve

With the server running, you need to pull the codellama LLM from the internet. You can do this with the ollama pull command, which downloads Meta's codellama model and places it on your computer.

ollama pull codellama

The download will take some time to complete. When the process is finished, you’ll have the codellama LLM. From here, you can initialize it with:

ollama run codellama

However, the chat interface built into ollama is bare-bones. It doesn't format code snippets well, and it makes it hard to keep track of previous chats.
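If you'd rather drive the model from a script than the interactive prompt, ollama also exposes a local REST API on port 11434 while ollama serve is running. A minimal sketch (the model name and prompt here are just examples; the actual network call only succeeds if the server is up):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default port


def build_request(model, prompt):
    """Build the JSON payload for ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model, prompt):
    """Send a prompt to the local ollama server and return the reply text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    try:
        print(generate("codellama", "Write a hello world one-liner in Python."))
    except OSError:
        print("ollama server not reachable - is 'ollama serve' running?")
```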

How to set up Oterm on Linux

There are quite a few front-ends for ollama on Linux and other operating systems. However, Oterm, a terminal-based client, is among the most approachable and easiest to set up. It has a decent text UI, and you won't get frustrated setting everything up if you're a beginner.

To start the process, you'll need Python with pip and the venv module installed. To set this up, open a terminal window and follow the instructions for your distribution below.

Ubuntu/Debian

sudo apt update
sudo apt install python3 python3-venv

Arch Linux

sudo pacman -S python

Fedora

sudo dnf install python3 python3-virtualenv

OpenSUSE

sudo zypper install python3 python3-virtualenv

Once you’ve installed the required tools, you can create your Python virtual environment and activate it like so:

python3 -m venv myenv

source myenv/bin/activate

If you want to make activating your Python environment faster, you can add the following bash alias (this assumes the myenv folder is in your home directory):

echo "alias activate_myenv='source ~/myenv/bin/activate'" >> ~/.bashrc

source ~/.bashrc

Then, activating your Python environment can be done with:

activate_myenv

When you've activated your environment, you can use pip to install Oterm, the terminal-based ollama client.

pip install oterm

To run Oterm, simply run the oterm command in a terminal. Note that Oterm will not work unless ollama serve is running in a separate terminal window.

oterm

How to use codellama

Oterm showcasing the available LLMs to choose from in the app.

Using codellama is a lot like using ChatGPT. To start, open Oterm and select the codellama model with the mouse, then click the "Create" button to start a new chat. Once the chat is open, it is ready to use for programming. For example, if you need a quick update script, ask codellama something like: "Develop a program in Python that updates my system automatically."

Codellama being asked the initial prompt for the Python script.

When you ask it this question, it'll break down how to create the program and provide a code example. To get the best out of codellama, be creative with your prompts!
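For reference, a hedged sketch of the kind of script codellama might produce for that prompt (apt is assumed as the package manager, and the function names here are my own illustration, not codellama's actual output):

```python
import subprocess


def build_update_commands(manager="apt"):
    """Return the commands an update script would run (apt assumed)."""
    if manager == "apt":
        return [
            ["sudo", "apt", "update"],        # refresh package lists
            ["sudo", "apt", "upgrade", "-y"], # apply available upgrades
        ]
    raise ValueError(f"unsupported package manager: {manager}")


def run_updates(manager="apt", dry_run=True):
    """Run (or, in dry-run mode, just print) the update commands."""
    for cmd in build_update_commands(manager):
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Dry run by default so nothing is changed without intent
    run_updates(dry_run=True)
```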

Codellama being asked a followup question and generating more for the Python program.
