What is this Ollama we Heard About and why Open Source AI Models are Revolutionary? Gemma 2 (Google), Llama 3 (Meta), Qwen 2 (Alibaba), Phi 3 (Microsoft) and more…

https://aliozkanozdurmus.medium.com/what-is-this-ollama-we-heard-about-and-why-open-source-ai-models-are-revolutionary-5bc2efdf2994

In recent years, significant advancements in artificial intelligence (AI) technologies have led to profound changes both in the business world and in our daily lives. Ollama, an open-source tool for running large language models on your own hardware, stands out particularly for how accessible it makes these models. In this article, we will explore what Ollama is, how it works, and why open-source AI models are revolutionary.

What is Ollama?

Ollama is a powerful and flexible open-source tool for running large language models locally. It packages open models such as Meta's Llama 3 and Google's Gemma 2 together with everything needed to serve them on your own machine, making natural language processing (NLP) and other AI applications available without cloud services. Ollama is a versatile platform that can be customized to meet users' needs and used across a wide range of AI tasks. For more information, you can visit the [Ollama GitHub page](https://github.com/ollama/ollama/blob/main/docs/faq.md).
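
As a quick taste, once Ollama is installed (installation is covered later in this article), pulling and querying a model takes two commands. The model tag llama3 here is just one example from the Ollama library:

# Download the model weights from the Ollama library
ollama pull llama3

# Ask a one-shot question; omit the prompt for an interactive chat
ollama run llama3 "Summarize the benefits of open-source AI in two sentences."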

Key Features of Ollama

Advanced Algorithms

Ollama builds on the latest advancements in deep learning and machine learning. The models it serves are trained on large datasets and have the ability to recognize complex patterns. For example, much like the GPT-3 model, the open models run through Ollama perform well in tasks such as text generation, translation, and summarization (Brown et al., 2020).

Training with Large Datasets

The models available through Ollama are trained on large and diverse datasets, which makes them more general and flexible. Especially in natural language processing tasks, large training datasets help a model produce more meaningful and accurate results (Devlin et al., 2019).

Open-Source Structure

The open-source nature of Ollama offers great advantages to developers and researchers. Both Ollama's source code and the weights of the open models it serves are accessible to everyone, allowing users to customize and improve them according to their needs.

Types of AI

Artificial intelligence comes in various types and functions. These types are optimized for different applications.

Natural Language Processing (NLP)

NLP is a branch of AI that enables machines to understand, process, and generate human language. Models served through Ollama can be used in tasks such as text generation, translation, and sentiment analysis. For example, the GPT-3-like models that Ollama runs make it ideal for customer service chatbots or automated content creation systems.
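
To make this concrete, once a local Ollama server is running (see the installation section below), a chatbot-style request is a single HTTP call to its REST API. This is a minimal sketch; the model name is an example and 11434 is Ollama's default port:

# Send a non-streaming generation request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "A customer writes that their order arrived damaged. Draft a polite reply.",
  "stream": false
}'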

Image and Speech Recognition

This type of AI is used to recognize and process visual and auditory data. It is widely used in sectors such as healthcare, security, and automotive. For instance, deep learning algorithms can be used in image recognition tasks for medical image analysis or autonomous driving systems (LeCun, Bengio, and Hinton, 2015).

Data Analytics and Prediction

This type of AI is used to extract meaningful patterns from large datasets and predict future events. It has applications in trend analysis in financial markets or sales forecasting in the retail sector. For example, machine learning algorithms can analyze customer behaviors to optimize marketing strategies (Goodfellow, Bengio, and Courville, 2016).

Prominent Open-Source Models

Open-source AI models, many of which can be run through Ollama, are making a significant impact in the industry. Here are some important open-source AI models:

Gemma 2 (Google)

Google Gemma 2 is a powerful open-source model offered in two sizes (9B and 27B). It performs well in text generation as well as a range of other AI tasks.

Llama 3 (Meta)

Meta Llama 3 is one of the most powerful open-source language models, available in 8B and 70B parameter sizes. This model can be used in various AI tasks and performs exceptionally well.

Qwen 2 (Alibaba)

Qwen 2 is a series of large language models ranging from 0.5B to 72B parameters. Developed by Alibaba, these models offer a wide range of applications.

DeepSeek-Coder v2

DeepSeek-Coder v2 is an open-source code language model with performance comparable to GPT-4 Turbo. It performs particularly well in coding tasks.

Phi 3 (Microsoft)

Phi-3 is a family of lightweight yet powerful language models offered in sizes from 3.8B (mini) to 14B (medium) parameters. Developed by Microsoft, these models perform well in language understanding and reasoning tasks.

Aya 23 (Cohere)

Aya 23 is a state-of-the-art multilingual model family supporting 23 languages. It can be used for text generation as well as other multilingual AI tasks.

Mistral and Mixtral

Developed by Mistral AI, Mistral is a strong dense 7B model, while the Mixtral variants are notable for their mixture-of-experts (MoE) architecture. Available in 8x7B and 8x22B configurations, the Mixtral models cover a wide range of applications.

CodeGemma

CodeGemma is a collection of powerful, lightweight models that can perform various coding tasks such as code completion, code generation, natural language understanding, and mathematical reasoning.
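
Most of the models above are published in the Ollama model library and can be pulled by name and size tag. The tags below are examples and may change over time, so verify them at https://ollama.com/library:

# Pull specific model sizes by tag (example tags; check the library for current names)
ollama pull gemma2:9b
ollama pull llama3:8b
ollama pull phi3:14b
ollama pull mixtral:8x7b

# List everything downloaded locally
ollama list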

Fine-Tuning in AI

Fine-tuning in AI involves retraining a pre-trained model on a specific task or dataset to improve its performance. This process helps the model perform better in specific applications while maintaining its general capabilities.

Fine-Tuning Process

The fine-tuning process typically involves the following steps:

1. Pre-training: The model is initially trained on a large and diverse dataset.

2. Fine-tuning: The model is then retrained on a smaller, task-specific dataset to enhance its performance in that particular application (a sketch of how such a model can then be served locally follows below).
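
Ollama does not perform the training step itself, but once a fine-tuned model has been exported in GGUF format, it can be packaged and served locally with a Modelfile. The file name, system prompt, and parameter below are hypothetical placeholders, a minimal sketch rather than a complete recipe:

# Modelfile: wrap a fine-tuned GGUF export (hypothetical file name)
FROM ./my-finetuned-model.gguf
PARAMETER temperature 0.7
SYSTEM """You are a support assistant for ACME products."""

Then build and run the packaged model:

ollama create my-assistant -f Modelfile
ollama run my-assistant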

Importance of Open-Source in AI

Democratization and Accessibility

Open-source AI democratizes access to technology. This allows not only large companies but also small and medium-sized enterprises, academic researchers, and even individual developers to access AI technologies. This access fosters innovation and competition.

Rapid Innovation and Development

Open-source projects benefit from the contributions of a broad community, which accelerates the pace of innovation. These projects receive input from many different perspectives, leading to faster and more effective solutions. For example, the Linux operating system has rapidly evolved and reached a wide user base globally due to its open-source nature (Raymond, 1999).

Transparency and Security

Open-source projects are more transparent because their code is accessible to everyone. This transparency enhances the security of the software, as many people can review the code and identify potential vulnerabilities. Moreover, transparency ensures that the software operates as expected and fosters trust among users.

Big Question: Can We Try These AI Models at Home?

Yes, it is indeed possible to try these AI models at home. Ollama, Pinokio, and Open WebUI are popular AI tools that can be easily installed and used.
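
Open WebUI, for instance, adds a browser-based chat interface on top of a local Ollama server. Assuming Docker is installed, its documented quick-start is roughly the following one-liner (check the Open WebUI repository for the current command):

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main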

Installing Ollama

Ollama is a powerful and flexible tool for running AI models locally. Follow these steps to install Ollama at home:

  1. Requirements:

  • A supported operating system (macOS, Linux, or Windows)

  • A GPU with CUDA support (recommended; CPU-only also works, just more slowly)

The simplest option is to download the installer from the official website: https://ollama.com/

Alternatively, here is the manual installation guide for Linux:

Install

Install Ollama by running this one-liner:

curl -fsSL https://ollama.com/install.sh | sh

Download the ollama binary

Ollama is distributed as a self-contained binary. Download it to a directory in your PATH:

sudo curl -L https://ollama.com/download/ollama-linux-amd64 -o /usr/bin/ollama
sudo chmod +x /usr/bin/ollama

Adding Ollama as a startup service (recommended)

Create a user for Ollama:

sudo useradd -r -s /bin/false -m -d /usr/share/ollama ollama

Create a service file in /etc/systemd/system/ollama.service:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3

[Install]
WantedBy=default.target

Then reload systemd and enable the service:

sudo systemctl daemon-reload
sudo systemctl enable ollama
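
To start the service immediately and confirm it is responding, the following should work on any systemd-based distribution:

# Start the service now and verify it
sudo systemctl start ollama
systemctl status ollama

# The server answers on its default port when healthy
curl http://localhost:11434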

Installing Pinokio

Pinokio is an open-source platform for installing and running AI applications locally. To install Pinokio, follow these steps:

  1. Requirements:

  • Node.js

  • NPM (Node Package Manager)

  • Git

  2. Download: get the installer for your operating system from the official Pinokio website and run it.

Before choosing which models to run, here are the facts about local AI service requirements:

GPU and Memory Requirements Based on Model Size

The hardware requirements for running AI models can vary significantly based on the size of the models. Below are some general guidelines on GPU and memory requirements based on the parameter size of the models.

Small Models (up to 1B parameters)

  • GPU: 8 GB VRAM (e.g., NVIDIA GTX 1080, RTX 2070)

  • System Memory: 16 GB RAM

  • Use Case: Fine for running smaller models and performing basic tasks like text generation, sentiment analysis, and small-scale data processing.

Medium Models (1B to 10B parameters)

  • GPU: 16–24 GB VRAM (e.g., NVIDIA RTX 3080, RTX 3090)

  • System Memory: 32 GB RAM

  • Use Case: Suitable for more complex tasks such as larger language models, moderate-scale data analysis, and more intensive NLP tasks.

Large Models (10B to 50B parameters)

  • GPU: 24–48 GB VRAM (e.g., NVIDIA A100, Tesla V100)

  • System Memory: 64 GB RAM or more

  • Use Case: Ideal for running large language models, advanced NLP applications, and extensive machine learning tasks that require substantial computational power.

Extra Large Models (50B parameters and above)

  • GPU: 48 GB VRAM and above, multi-GPU setups (e.g., NVIDIA A100, Tesla V100)

  • System Memory: 128 GB RAM or more

  • Use Case: Required for state-of-the-art models and highly complex tasks. These setups are typically used in research and development environments or large-scale production systems.
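
Note that these figures are conservative: Ollama serves 4-bit quantized builds by default, which cuts memory needs to roughly a quarter of the full-precision weights (an 8B model fits in about 5 GB of VRAM). Before picking a model size, you can check what your GPU offers; on NVIDIA hardware, for example:

# Report GPU model plus total and free video memory (NVIDIA only)
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv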

Thanks for reading.

In our next article, we will discuss how to perform your own fine-tuning and how to create your own AI models. Stay tuned!

References

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171–4186).

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.

Raymond, E. S. (1999). The cathedral and the bazaar. Knowledge, Technology & Policy, 12(3), 23–49.
