feat: integrate nano-graphrag (#433)

* add nano graph-rag

* ignore entities for relevant context reference

* refactor and add local model as default nano-graphrag

* feat: add kotaemon llm & embedding integration with nanographrag

* fix: add env var for nano GraphRAG

---------

Co-authored-by: Tadashi <tadashi@cinnamon.is>
cin-klein
2024-10-30 15:32:30 +07:00
committed by GitHub
parent 19b386b51e
commit 66e565649e
7 changed files with 470 additions and 18 deletions

@@ -170,7 +170,22 @@ documents and developers who want to build their own RAG pipeline.
### Setup GraphRAG
> [!NOTE]
> Currently GraphRAG feature only works with OpenAI or Ollama API.
> Official MS GraphRAG indexing only works with OpenAI or Ollama API.
> We recommend that most users use the NanoGraphRAG implementation for the most straightforward integration with Kotaemon.
<details>
<summary>Setup Nano GRAPHRAG</summary>
- Install nano-GraphRAG: `pip install nano-graphrag`
- Launch Kotaemon with the `USE_NANO_GRAPHRAG=true` environment variable set (see the command sketch below).
- Set your default LLM & Embedding models in the Resources settings; NanoGraphRAG will pick them up automatically.
</details>
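
The steps above, condensed into a shell sketch. This assumes Kotaemon is launched via `app.py`; substitute whatever launch command you normally use.

```bash
# install the nano-graphrag package
pip install nano-graphrag

# launch Kotaemon with the NanoGraphRAG pipeline enabled
# (app.py as the entry point is an assumption -- adjust to your setup)
USE_NANO_GRAPHRAG=true python app.py
```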
<details>
<summary>Setup MS GRAPHRAG</summary>
- **Non-Docker Installation**: If you are not using Docker, install GraphRAG with the following command:
@@ -181,6 +196,8 @@ documents and developers who want to build their own RAG pipeline.
- **Setting Up API KEY**: To use the GraphRAG retriever feature, ensure you set the `GRAPHRAG_API_KEY` environment variable. You can do this directly in your environment or by adding it to a `.env` file.
- **Using Local Models and Custom Settings**: If you want to use GraphRAG with local models (like `Ollama`) or customize the default LLM and other configurations, set the `USE_CUSTOMIZED_GRAPHRAG_SETTING` environment variable to true, then adjust your settings in the `settings.yaml.example` file (see the example `.env` snippet below).
</details>
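
For reference, a sketch of the corresponding `.env` entries (values are placeholders; `USE_CUSTOMIZED_GRAPHRAG_SETTING` is only needed when using local models or customized settings):

```bash
# .env -- illustrative values only
GRAPHRAG_API_KEY=<your-openai-api-key>

# optional: enable local models (e.g. Ollama) or custom LLM settings,
# then adjust settings.yaml.example as described above
USE_CUSTOMIZED_GRAPHRAG_SETTING=true
```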
### Setup Local Models (for local/private RAG)
See [Local model setup](docs/local_model.md).