Installing Ollama on Ubuntu with a Graphical User Interface and API Support

Last update: 11-13-2024

Ollama is an open-source tool for running large language models locally, offering powerful natural language processing capabilities. In this tutorial, we'll walk through the steps to install Ollama on Ubuntu, set up the Open-WebUI interface, which provides a user-friendly way to interact with the models, and see how to use the models through API access.

Let's get started:

1. Update System Packages

First, we'll need to update our system's package lists by running:

sudo apt update

This ensures we have the latest software packages available on our system.

2. Install Curl

Next, we'll install the curl utility, which we'll use to download the Ollama installation script:

sudo apt install curl

3. Install Ollama

Now, we can download and run the Ollama installation script:

curl -fsSL https://ollama.com/install.sh | sh

(Detailed instructions are available on the Ollama download page.)
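To confirm the installation succeeded, we can check the installed version and make sure the Ollama service is running:

ollama --version
systemctl status ollama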

4. Browse Available Models

Once Ollama is installed, we can use it to download a model. To see the available models, we can visit the Ollama models page.

Click on any model to see the command to install and run it. As an example, let's install the Llama 3.2 model:

ollama run llama3.2

The model is quite large and will take some time to download. Once the download finishes, the model starts immediately and we can type a prompt and receive a response back - all right from our terminal.
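For example, a short session might look like this (the exact reply will vary):

>>> Why did the chicken cross the road?
To get to the other side! This classic joke plays on the expectation of a
clever answer, when the punchline is simply the literal one.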

5. Exit the Model

To stop interacting with the model and exit Ollama's interactive prompt, we can simply type /bye.
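The model stays on disk after we exit, so we can see what's installed and start a new session at any time:

ollama list
ollama run llama3.2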

6. Install Docker

Next, we'll need to install Docker on our system:

sudo apt install docker.io

Docker is a containerization platform that allows us to run the Open-WebUI interface, which provides a more user-friendly way to interact with Ollama models.
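Before moving on, we can verify that Docker was installed and that its daemon is running:

docker --version
sudo systemctl status docker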

7. Configure Docker Permissions

We'll also need to add our user to the Docker group:

sudo adduser $USER docker
newgrp docker

This ensures our user account has the necessary permissions to run Docker commands.
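To check that the permissions took effect, we can try a Docker command without sudo:

docker ps

If this prints a (possibly empty) list of containers instead of a permission error, we're all set. Note that a full logout and login may be needed for the group change to apply to new terminals.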

8. Run Open-WebUI

Now, we can run the Open-WebUI Docker container:

docker run -d --net=host -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

This command will download the Open-WebUI Docker image, which is a large download and will take a while. Once it's done, a container will be created immediately and Open-WebUI will be available.
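We can confirm the container is up and follow its startup logs with:

docker ps
docker logs -f open-webui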

9. Access the Interface

We can access the Open-WebUI interface by visiting localhost:8080 in our web browser.

10. Create Local Account

We'll need to create a local account to use the Open-WebUI interface. Feel free to use any username and email, but be sure to write down the email and password you use.

Creating a local account allows us to save our preferences and interaction history within the Open-WebUI interface.

11. Use the Model from Open-WebUI

Select llama3.2 from the model drop-down at the top of the chat screen, and you can enjoy your local installation of Llama 3.2 with the convenience offered by the Open-WebUI graphical interface. It's at your service, with no limits or subscription requirements.

12. Test the API

To test the Ollama API, we can use an HTTP client like Postman or the REST Client extension in Visual Studio Code. Using the API directly allows us to integrate Ollama's capabilities into our own applications or scripts.

In Visual Studio Code, install the REST Client extension. Then create a new file called requests.http and paste the following:

POST http://localhost:11434/api/generate
Content-Type: application/json

{
  "model": "llama3.2",
  "prompt": "Why did the chicken cross the road?"
}

Click the "Send Request" link that appears above the request to send it. This will send a request to the Ollama API, using the Llama 3.2 model, and return a JSON response. By default the response is streamed as a series of JSON objects, each containing one token of the generated text.

(Screenshot: Generate API response)
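If you prefer the terminal, the same request can be sent with the curl utility we installed earlier:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why did the chicken cross the road?"
}'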

13. Chat-Style Request

We can also use the API with a more chatbot-friendly format:

POST http://localhost:11434/api/chat
Content-Type: application/json

{
  "model": "llama3.2",
  "messages": [{
    "role": "user",
    "content": "Why did the chicken cross the road?"
  }]
}

This format is better suited for requests that contain multiple messages, possibly with different roles (such as system, user, and assistant).

(Screenshot: Chat API response)
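As an example of a multi-message conversation, a follow-up request can include the earlier exchange so the model has the full context:

POST http://localhost:11434/api/chat
Content-Type: application/json

{
  "model": "llama3.2",
  "messages": [
    { "role": "user", "content": "Why did the chicken cross the road?" },
    { "role": "assistant", "content": "To get to the other side!" },
    { "role": "user", "content": "Can you explain the joke?" }
  ]
}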

Conclusion

And that's it! We've now successfully installed and set up Ollama and the Open-WebUI interface, downloaded the Llama 3.2 model, and we're ready to start exploring the capabilities of these powerful tools.
