go-light-rag Example Implementation
This example demonstrates how to use the go-light-rag library to build a document retrieval system. The implementation showcases the library's core functionality in a straightforward manner, using the default handler, which closely mirrors the behavior of the official Python LightRAG implementation.
Prerequisites
Before running this example, you'll need:
- A graph database (Neo4J or MemGraph)
- An OpenAI API key
- A text document for processing
Setup Instructions
1. Clone & Navigate
After cloning the repository, change to the example directory:
cd example/default
2. Set Up Graph Database
Choose one of these options:
- Create a free account on Neo4J
- Self-host a compatible graph database like MemGraph
Make note of your graph database URI, username, and password.
3. Get OpenAI API Key
Create an account on OpenAI and generate an API key.
4. Prepare Your Document
Create a text file named book.txt containing the document you want to analyze. The default entity types in this example are configured for A Christmas Carol by Charles Dickens.
5. Configure the Application
Copy the example configuration file and update it with your credentials:
cp config.example.yaml config.yaml
Edit config.yaml to include your:
- Neo4J connection details
- OpenAI API key and model
- Preferred logging level
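The exact keys in config.yaml are defined by the example itself, so check config.example.yaml for the real schema. As a rough illustration only, the sketch below shows how such a file might be loaded in Go; the struct fields, YAML key names, and the use of gopkg.in/yaml.v3 are assumptions, not the example's actual code.

```go
package main

import (
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors the three groups of settings listed above. The field and
// YAML key names are hypothetical -- see config.example.yaml for the real ones.
type Config struct {
	Neo4J struct {
		URI      string `yaml:"uri"`
		User     string `yaml:"user"`
		Password string `yaml:"password"`
	} `yaml:"neo4j"`
	OpenAI struct {
		APIKey string `yaml:"api_key"`
		Model  string `yaml:"model"`
	} `yaml:"openai"`
	LogLevel string `yaml:"log_level"`
}

// loadConfig reads and parses the YAML file at path.
func loadConfig(path string) (Config, error) {
	var cfg Config
	data, err := os.ReadFile(path)
	if err != nil {
		return cfg, err
	}
	return cfg, yaml.Unmarshal(data, &cfg)
}

func main() {
	cfg, err := loadConfig("config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("connecting to %s as %s", cfg.Neo4J.URI, cfg.Neo4J.User)
}
```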
Running the Example
Execute the application:
go run main.go
First run: The system will process your document and build the RAG database (approximately 5 minutes depending on document size).
Subsequent runs: As long as the document hasn't changed and the database files (vec.db and kv.db) exist, the system will skip processing and directly open the query interface.
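That skip behavior amounts to checking whether the local database files already exist before doing any processing. The following is a minimal sketch of that idea, assuming only the file names mentioned above; the real example may also verify that book.txt itself has not changed.

```go
package main

import (
	"fmt"
	"os"
)

// needsProcessing reports whether the local databases are missing, in which
// case the document has to be (re)processed before querying.
func needsProcessing() bool {
	for _, path := range []string{"vec.db", "kv.db"} {
		if _, err := os.Stat(path); err != nil {
			return true // a database file is missing or unreadable
		}
	}
	return false
}

func main() {
	if needsProcessing() {
		fmt.Println("building RAG database...")
		// document insertion would happen here
	}
	fmt.Println("opening query interface...")
}
```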
How It Works
The system:
- Embeds document text using OpenAI
- Stores vectors in ChromeM and metadata in BoltDB
- Creates a knowledge graph in Neo4J
- Provides an interactive query interface
- Retrieves relevant information for each query
- Uses a prompt template taken directly from the official Python implementation to ensure consistent output quality
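The sketch below mirrors that flow using hypothetical stand-in types. It is not go-light-rag's actual API (consult the library's documentation for the real handler, storage, and LLM types); it only shows how the pieces listed above fit together around an interactive query loop.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// Pipeline is a hypothetical stand-in for the components listed above: an
// OpenAI embedder, a ChromeM vector store, a BoltDB key-value store, and a
// Neo4J knowledge graph. It is NOT go-light-rag's real interface.
type Pipeline interface {
	// InsertDocument chunks and embeds the text, stores vectors and metadata,
	// and extracts entities and relationships into the knowledge graph.
	InsertDocument(text string) error
	// Query retrieves relevant context and asks the LLM for an answer.
	Query(question string) (string, error)
}

// runQueryLoop is the interactive query interface: read a question from
// stdin, print the retrieved answer, repeat.
func runQueryLoop(p Pipeline) {
	scanner := bufio.NewScanner(os.Stdin)
	fmt.Print("query> ")
	for scanner.Scan() {
		answer, err := p.Query(scanner.Text())
		if err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
		} else {
			fmt.Println(answer)
		}
		fmt.Print("query> ")
	}
}

func main() {
	// In the real example, a concrete pipeline is assembled from the default
	// handler plus the backends configured in config.yaml; the wiring is
	// omitted here because this is only a sketch.
	var p Pipeline
	if p == nil {
		fmt.Println("sketch only: supply a concrete Pipeline implementation")
		return
	}
	runQueryLoop(p)
}
```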
Demo

Note: The demo video is played at 2x speed and uses the gpt-4o-mini model.