Run AI models on your local machine

Can I run some lightweight AI models on my local machine?

How can you do it?
Ollama comes to your rescue.

Navigate to https://ollama.com/

Download the installer for your OS and install it. Nothing fancy.

Once you've installed it, you can use your terminal to verify the installation.

Run: ollama -v

Now you're all set to run an AI model on your local machine.

Choose any AI model from the list below based on your preference:
https://ollama.com/search

Pay attention to the size of the model: pulling a large one can slow down your PC drastically.

Let's choose Gemma3, for example:
https://ollama.com/library/gemma3

I chose a very basic one with the smallest size:
https://ollama.com/library/gemma3:1b

So now, to run the Gemma3 AI model with Ollama, in your terminal:
Run: ollama run gemma3:1b
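If you'd rather not wait for the download to kick in mid-command, you can pull the model first and then run it. `ollama run` also accepts a one-shot prompt as an argument, which prints a single reply and exits instead of opening a chat session (the prompt text below is just an example):

```shell
# Pull the model explicitly first (optional; `run` pulls it automatically)
ollama pull gemma3:1b

# One-shot prompt: prints the reply and exits
ollama run gemma3:1b "Explain what Ollama is in one sentence"
```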


It's downloading... 💪

Once the download finishes, Ollama starts an interactive chat session with the model.

Ask something and see how it replies.
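While Ollama is running, it also exposes a local REST API (on port 11434 by default), which is handy once you want to call the model from code instead of the terminal. A minimal sketch, assuming the gemma3:1b model from above is already pulled:

```shell
# Ask the local Ollama server for a completion (requires Ollama to be running)
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With "stream": false, the server returns one JSON object containing the full reply; leave it out and you get a stream of partial responses instead.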


That's it for today. In the next post, let's see how we can use Spring AI.
