# How to Run Llamabot with Ollama

## Overview
In this guide, you'll learn how to run a chatbot using `llamabot` and Ollama. We'll cover how to install Ollama, start its server, and finally run the chatbot within a Python session.
## Installation & Setup

### Install Ollama
- **macOS users:** Download the installer from Ollama's website.
- **Linux & WSL2 users:** Run `curl https://ollama.ai/install.sh | sh` in your terminal.
- **Windows users:** Support is coming soon.
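Once installed, you can confirm that the `ollama` CLI is available before proceeding. A minimal check using only the Python standard library (assuming the binary is named `ollama` and is on your `PATH`):

```python
import shutil
import subprocess

def ollama_installed() -> bool:
    """Return True if the `ollama` CLI is found on PATH."""
    return shutil.which("ollama") is not None

if ollama_installed():
    # Print the installed version string reported by the CLI
    print(subprocess.run(["ollama", "--version"],
                         capture_output=True, text=True).stdout)
else:
    print("ollama not found -- install it first")
```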
For more detailed instructions, refer to Ollama's official site.
### Running the Ollama Server

- Open your terminal and start the Ollama server with your chosen model:

```shell
ollama run <model_name>
```

Example:

```shell
ollama run vicuna
```
For a list of available models, visit Ollama's Model Library.
Note: Ensure you have adequate RAM for the model you are running.
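Before moving on, it can help to verify that the server is actually listening. By default the Ollama server serves on `http://localhost:11434` (an assumption; adjust if you have changed `OLLAMA_HOST`). A small stdlib-only check:

```python
import urllib.request
import urllib.error

def ollama_server_up(url: str = "http://localhost:11434") -> bool:
    """Return True if an HTTP GET to the Ollama server URL succeeds."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Prints True once `ollama run <model_name>` is serving, False otherwise
print(ollama_server_up())
```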
## Running Llamabot in Python

- Open a Python session and import the `SimpleBot` class from the `llamabot` library.
```python
from llamabot import SimpleBot  # you can also use QueryBot or ChatBot

# Create a bot with a system prompt, backed by the Ollama-served model
bot = SimpleBot("You are a conversation expert", model_name="vicuna:7b-16k")
```
Note: `vicuna:7b-16k` uses a tag from the vicuna model page.
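Ollama model names follow a `name:tag` convention, where the tag selects a specific variant of the model (for example a parameter count or context length). A quick sketch of how such a string splits:

```python
model_name = "vicuna:7b-16k"

# Split the model identifier into its base name and variant tag
name, tag = model_name.split(":")
print(name)  # vicuna
print(tag)   # 7b-16k
```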
And there you have it! You're now ready to run your own chatbot with Ollama and Llamabot.