Allow users to add LLM within the UI (#6)

* Rename AzureChatOpenAI to LCAzureChatOpenAI
* Provide vanilla ChatOpenAI and AzureChatOpenAI
* Remove the "highest accuracy" / "lowest cost" criteria

These criteria are unnecessary: the users, not the pipeline creators, should
choose which LLM to use. Moreover, requiring creators to supply this
information is cumbersome and degrades the user experience.

* Remove the LLM selection in simple reasoning pipeline
* Provide a dedicated stream method to generate the output (see the sketch below)
* Return a placeholder message to the chat if the generated text is empty
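
For context, a minimal sketch of how the reworked LLM classes and the new
stream method might be used. The class names (ChatOpenAI, AzureChatOpenAI,
LCAzureChatOpenAI) come from this commit; the constructor parameters and the
chunk interface are illustrative assumptions, not taken from the diff:

```python
# Hypothetical usage sketch, not the committed implementation.
# LCAzureChatOpenAI is the former (LangChain-based) AzureChatOpenAI;
# ChatOpenAI and AzureChatOpenAI are the new vanilla OpenAI clients.
from kotaemon.llms import ChatOpenAI

# Parameter names here are assumptions for illustration only.
llm = ChatOpenAI(api_key="sk-...", model="gpt-3.5-turbo")

# Dedicated stream method: consume the output incrementally instead of
# waiting for the full completion (chunk interface assumed).
pieces = []
for chunk in llm.stream("Summarize this document."):
    pieces.append(chunk.text)

text = "".join(pieces)
# Fall back to a placeholder message when the generated text is empty.
print(text if text.strip() else "(empty response)")
```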
Duc Nguyen (john) authored 2024-04-06 11:53:17 +07:00, committed by GitHub
parent e187e23dd1, commit a203fc0f7c
35 changed files with 1339 additions and 169 deletions


@@ -22,7 +22,7 @@ The syntax of a component is as follows:
 ```python
 from kotaemon.base import BaseComponent
-from kotaemon.llms import AzureChatOpenAI
+from kotaemon.llms import LCAzureChatOpenAI
 from kotaemon.parsers import RegexExtractor
@@ -32,7 +32,7 @@ class FancyPipeline(BaseComponent):
     param3: float

     node1: BaseComponent  # this is a node because of the BaseComponent type annotation
-    node2: AzureChatOpenAI  # this is also a node because AzureChatOpenAI subclasses BaseComponent
+    node2: LCAzureChatOpenAI  # this is also a node because LCAzureChatOpenAI subclasses BaseComponent
     node3: RegexExtractor  # this is also a node because RegexExtractor subclasses BaseComponent

     def run(self, some_text: str):
@@ -45,7 +45,7 @@ class FancyPipeline(BaseComponent):
 Then this component can be used as follows:
 ```python
-llm = AzureChatOpenAI(endpoint="some-endpoint")
+llm = LCAzureChatOpenAI(endpoint="some-endpoint")
 extractor = RegexExtractor(pattern=["yes", "Yes"])
 component = FancyPipeline(