All posts by Abdulla Ansari

OpenRAG: An Open Source GenAI Application to Supercharge Data Queries with Large Language Models


Introduction

In the era of artificial intelligence, businesses and developers are increasingly leveraging Large Language Models (LLMs) to streamline data analysis and customer interactions. OpenRAG, an open-source Generative AI (GenAI) application, empowers users by combining the flexibility of LLMs with efficient data querying capabilities across various vector databases. Whether you are working with PDFs, querying large datasets, or seeking insights from stored data, OpenRAG makes it seamless to interact with your data using natural language queries.

Key Features of OpenRAG

  1. Support for all open-source LLM models: OpenRAG is designed to integrate with a variety of open-source LLMs, giving users the freedom to choose the model that best fits their use case. The platform's extensibility allows for future expansion, so users can adopt the latest advancements in AI without restrictions.
  2. Multiple open-source vector database integrations: OpenRAG comes pre-configured to support popular open-source vector databases such as Chroma, FAISS, and Qdrant. These databases provide high-performance vector search and retrieval, giving precise results when querying data.
  3. PDF upload and data querying: One standout feature of OpenRAG is the ability to upload PDF files and convert them into structured data collections. This makes the application highly useful for professionals dealing with large volumes of PDF-based information. Once a PDF is uploaded, users can query its contents with an LLM of their choice, extracting insights quickly and efficiently.
  4. Persistent collection names for reusability: OpenRAG assigns a unique collection name to each uploaded PDF, allowing users to return and query the data without re-uploading the same files. This saves time and makes data management more seamless.
  5. Consistency in vector database usage: OpenRAG ties each data collection to a specific vector database. Users cannot switch the database once it is selected for a collection, ensuring stable and accurate data retrieval every time.
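The consistency rule in point 5 can be pictured as a small registry that binds each collection to the vector database it was created with and rejects any later mismatch. This is only an illustrative sketch; the class and method names are hypothetical, not OpenRAG's actual code:

```python
class CollectionRegistry:
    """Illustrative sketch: bind each collection to one vector database."""

    def __init__(self):
        self._bindings = {}  # collection_name -> vector_db_name

    def bind(self, collection_name: str, vector_db_name: str) -> None:
        # First use binds the collection; later calls must match.
        existing = self._bindings.setdefault(collection_name, vector_db_name)
        if existing != vector_db_name:
            raise ValueError(
                f"collection {collection_name!r} is bound to {existing!r}, "
                f"not {vector_db_name!r}"
            )

registry = CollectionRegistry()
registry.bind("my_report_pdf", "qdrant")  # first use binds the collection
registry.bind("my_report_pdf", "qdrant")  # same database: allowed
```

Attempting `registry.bind("my_report_pdf", "chroma")` after the lines above would raise a `ValueError`, mirroring how OpenRAG keeps a collection tied to its original vector database.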

Getting Started with OpenRAG

Before diving into the world of AI-driven data querying, make sure to meet the following prerequisites for a smooth installation:

Prerequisites

Python Version: Ensure you have Python 3.9 or greater installed.
Qdrant Docker Image: OpenRAG uses Qdrant for vector storage, so the Qdrant container must be running with port 6333 accessible on localhost.
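A quick way to verify the Qdrant prerequisite is a plain TCP probe of port 6333. This is a generic connectivity check written for this post, not part of OpenRAG itself:

```python
import socket

def qdrant_is_reachable(host: str = "localhost", port: int = 6333) -> bool:
    """Return True if a TCP connection to the Qdrant port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if not qdrant_is_reachable():
        print("Qdrant is not reachable on localhost:6333 - start the container first.")
```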

Installation

1. Clone the Repo:

git clone <repository-url>

2. Create a Virtual Environment:

python3 -m venv openrag-env

source openrag-env/bin/activate

3. Install Dependencies:

pip install -r requirements.txt

4. Download Spacy Language Model:

python3 -m spacy download en_core_web_sm

5. Run the Application:

uvicorn main:app --reload

Dockerization for Easy Deployment

For developers who prefer using Docker for deployment, OpenRAG can be containerized:

  • Build the Docker Image:

docker build -t openrag-app .

  • Run the Container:

docker run -d -p 8000:8000 openrag-app

Once the app is running, access it via http://localhost:8000 in your browser.

Usage: Interact with OpenRAG via API

OpenRAG’s API-first architecture allows it to be integrated into various frontend applications. Here’s an example of how to upload a PDF and query its contents through an API:

Upload a PDF

curl -X POST "http://localhost:8000/upload" \
  -H "accept: application/json" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@your_file.pdf" \
  -F "model_name=GPT-3.5" \
  -F "vector_db_name=qdrant"

Start a Chat Session

After uploading a PDF, you can initiate a chat-based query:

curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "collection_name": "your_collection_name",
    "query": "your_query",
    "model_name": "GPT-3.5",
    "vector_db_name": "qdrant",
    "device": "cpu"
  }'
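If you prefer calling the API from Python instead of curl, the same /chat request can be built with the standard library alone. The endpoint and field names mirror the curl example above; the `build_chat_request` helper is our own, not an OpenRAG function:

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # adjust to your deployment

def build_chat_request(collection_name: str, query: str,
                       model_name: str = "GPT-3.5",
                       vector_db_name: str = "qdrant",
                       device: str = "cpu") -> urllib.request.Request:
    """Build a POST request matching OpenRAG's /chat payload."""
    payload = {
        "collection_name": collection_name,
        "query": query,
        "model_name": model_name,
        "vector_db_name": vector_db_name,
        "device": device,
    }
    return urllib.request.Request(
        f"{API_URL}/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("your_collection_name", "your_query")
    # Requires the OpenRAG app to be running on API_URL.
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))
```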

Scalability with OpenRAG

One of OpenRAG’s greatest strengths is its scalability. While it can be run on a local machine using tools like uvicorn, it’s production-ready and can be deployed using cloud providers, Docker, or Kubernetes. In production environments, OpenRAG supports scaling through tools like Gunicorn, providing robust performance for high-traffic use cases.

Common Errors and Solutions

During development, users may encounter the following common error:

TypeError: Descriptors cannot be created directly.

To resolve this, downgrade the protobuf package to version 3.20.x or lower, or set the environment variable

PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
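The environment variable must be in place before protobuf is first imported. One way to apply the workaround from inside your Python entry point, as a sketch rather than OpenRAG's own code:

```python
import os

# Force the pure-Python protobuf backend. Slower than the C++ backend,
# but avoids the "Descriptors cannot be created directly" error with
# newer protobuf releases. Must run before any protobuf import.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```

Placing these lines at the very top of `main.py` (or exporting the variable in your shell before launching uvicorn) has the same effect.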

Conclusion

OpenRAG stands out as a flexible, open-source solution for users looking to leverage the power of LLMs and vector databases for data querying and insights. Whether you’re a developer, researcher, or enterprise user, OpenRAG provides the tools to work with your data in a highly efficient and intuitive manner.

For detailed API documentation and more examples, visit OpenRAG’s API Documentation.

Transform your ideas into intelligent solutions with Mindfire’s AI and ML development services, designed to turn data into meaningful insights and fuel innovation.

Contributing to OpenRAG

We welcome contributions from the community! For details on how to contribute, submit issues, or request features, check out the CONTRIBUTING.md.

GitHub repository: Open Rag Repo
