Since prompts are only used within LLMs, it is reasonable to keep them within the LLM module. This makes the module easier to discover and keeps the code base simpler.
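For illustration, after this change a prompt component is imported from under the LLM module. The exact paths and the class name below are assumptions about the repo layout, not confirmed API:

```python
# Hypothetical paths -- the exact layout depends on the repo.
# Before the move, prompts lived in their own top-level module:
# from kotaemon.prompts import BasePromptComponent
# After the move, they live under the llms module:
from kotaemon.llms.prompts import BasePromptComponent
```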
Changes:
* Move prompt components into llms
* Bump version 0.3.1
* Make pip install dependencies in eager mode
---------
Co-authored-by: ian <ian@cinnamon.is>
This change removes the following methods from `BaseComponent`:
- `run_raw`
- `run_batch_raw`
- `run_document`
- `run_batch_document`
- `is_document`
- `is_batch`
Each component is expected to support multiple input types and a single output type. Since we want components to work out-of-the-box in both standardized and customized use cases, supporting multiple input types is expected. At the same time, to reduce the complexity of understanding how to use a component, we restrict each component to a single output type.
To accommodate these changes, we also refactor some components to remove their `run_raw`, `run_batch_raw`... methods, and to decide on the common output interface for those components.
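A minimal sketch of the resulting pattern, using a toy component rather than actual kotaemon code:

```python
class WordCounter:
    """Toy component: multiple input types in, a single output type out."""

    def run(self, text: str | list[str]) -> list[int]:
        # Accept both a single string and a batch; normalize to one code path.
        if isinstance(text, str):
            text = [text]
        # Single output type: always a list of word counts.
        return [len(t.split()) for t in text]


counter = WordCounter()
assert counter.run("hello world") == [2]
assert counter.run(["a b c", "d"]) == [3, 1]
```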
Tests are updated accordingly.
Commit changes:
* Add kwargs to vector store's query (see the sketch after this list)
* Simplify the BaseComponent
* Update tests
* Remove support for Python 3.8 and 3.9
* Bump version 0.3.0
* Fix GitHub PR caching still using the old environment after the version bump
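To illustrate the kwargs change, here is a hedged sketch of a vector store query that accepts pass-through keyword arguments; the class and signature are assumptions, not the actual kotaemon API:

```python
class InMemoryVectorStore:
    """Hypothetical store; the point is the **kwargs on query."""

    def __init__(self) -> None:
        self._docs: list[tuple[list[float], str]] = []

    def add(self, embedding: list[float], text: str) -> None:
        self._docs.append((embedding, text))

    def query(self, embedding: list[float], top_k: int = 1, **kwargs) -> list[str]:
        # Extra keyword arguments (e.g. metadata filters) are accepted without
        # error; a real backend would forward them to its search call.
        scored = sorted(
            self._docs,
            key=lambda doc: sum((a - b) ** 2 for a, b in zip(doc[0], embedding)),
        )
        return [text for _, text in scored[:top_k]]
```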
---------
Co-authored-by: ian <ian@cinnamon.is>
From pipeline > config > UI. Provide an example project for promptui.
- Pipeline to config: `kotaemon.contribs.promptui.config.export_pipeline_to_config` (a usage sketch follows this list). The config follows the schema specified in this document: https://cinnamon-ai.atlassian.net/wiki/spaces/ATM/pages/2748711193/Technical+Detail. Note: this implementation excludes the logs, which will be handled in AUR-408.
- Config to UI: `kotaemon.contribs.promptui.build_from_yaml`
- Example project is located at `examples/promptui/`
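A hedged sketch of the pipeline > config > UI flow. The two entry points are named in this PR; their exact signatures, the `MyPipeline` class, and the `launch()` call are assumptions:

```python
from kotaemon.contribs.promptui import build_from_yaml
from kotaemon.contribs.promptui.config import export_pipeline_to_config

from my_project.pipeline import MyPipeline  # hypothetical pipeline defined elsewhere

export_pipeline_to_config(MyPipeline, "promptui.yaml")  # pipeline -> config
app = build_from_yaml("promptui.yaml")                  # config -> UI
app.launch()  # assuming a Gradio-style app object
```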
- Use cases related to LLM call: https://cinnamon-ai.atlassian.net/browse/AUR-388?focusedCommentId=34873
- Sample usages: `test_llms_chat_models.py` and `test_llms_completion_models.py`:
```python
from kotaemon.llms.chats.openai import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_base="https://test.openai.azure.com/",
    openai_api_key="some-key",
    openai_api_version="2023-03-15-preview",
    deployment_name="gpt35turbo",
    temperature=0,
    request_timeout=60,
)
output = model("hello world")
```
For the LLM-call component, I decided to wrap Langchain's LLM models and Chat models, and set the interface as follows:
- Completion LLM component:
```python
class CompletionLLM:
    def run_raw(self, text: str) -> LLMInterface:
        # Run text completion: str in -> LLMInterface out
        ...

    def run_batch_raw(self, text: list[str]) -> list[LLMInterface]:
        # Run text completion in batch: list[str] in -> list[LLMInterface] out
        ...

    # run_document and run_batch_document just reuse run_raw and run_batch_raw,
    # due to unclear use cases
```
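As a rough illustration of the wrapping, here is a sketch of a completion component built on Langchain's `AzureOpenAI`; the exact shape of Langchain's `LLMResult` and its `token_usage` keys are assumptions and may differ across versions:

```python
from langchain.llms import AzureOpenAI  # assumed available in this Langchain version


class AzureOpenAICompletion(CompletionLLM):
    def __init__(self, **params):
        self._llm = AzureOpenAI(**params)

    def run_raw(self, text: str) -> LLMInterface:
        result = self._llm.generate([text])  # Langchain returns an LLMResult
        usage = (result.llm_output or {}).get("token_usage", {})
        return LLMInterface(
            text=[gen.text for gen in result.generations[0]],
            completion_tokens=usage.get("completion_tokens", -1),
            total_tokens=usage.get("total_tokens", -1),
            prompt_tokens=usage.get("prompt_tokens", -1),
        )

    def run_batch_raw(self, text: list[str]) -> list[LLMInterface]:
        return [self.run_raw(t) for t in text]
```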
- Chat LLM component:
```python
class ChatLLM:
    def run_raw(self, text: str) -> LLMInterface:
        # Run chat completion (no chat history): str in -> LLMInterface out
        ...

    def run_batch_raw(self, text: list[str]) -> list[LLMInterface]:
        # Run chat completion in batch mode (no chat history):
        # list[str] in -> list[LLMInterface] out
        ...

    def run_document(self, text: list[BaseMessage]) -> LLMInterface:
        # Run chat completion (with chat history):
        # list[langchain's BaseMessage] in -> LLMInterface out
        ...

    def run_batch_document(self, text: list[list[BaseMessage]]) -> list[LLMInterface]:
        # Run chat completion in batch mode (with chat history):
        # list[list[langchain's BaseMessage]] in -> list[LLMInterface] out
        ...
```
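A usage sketch for the history-aware path, assuming `model` is a `ChatLLM` instance such as the `AzureChatOpenAI` shown earlier, and that Langchain's message classes are importable from `langchain.schema` in this version:

```python
from langchain.schema import HumanMessage, SystemMessage

history = [
    SystemMessage(content="You are a terse assistant."),
    HumanMessage(content="What is the capital of France?"),
]
output = model.run_document(history)  # chat completion with chat history
print(output.text[0])
```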
- The `LLMInterface` is as follows:
```python
from dataclasses import dataclass, field


@dataclass
class LLMInterface:
    text: list[str]
    completion_tokens: int = -1
    total_tokens: int = -1
    prompt_tokens: int = -1
    logits: list[list[float]] = field(default_factory=list)
```
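For example, the earlier `model("hello world")` call would yield something shaped like this (all values illustrative):

```python
output = LLMInterface(
    text=["Hello! How can I help you today?"],
    completion_tokens=9,
    total_tokens=12,
    prompt_tokens=3,
)
print(output.text[0])  # the first (and usually only) completion
print(output.logits)   # empty unless the backend exposes logits
```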