* Rename AzureChatOpenAI to LCAzureChatOpenAI
* Provide vanilla ChatOpenAI and AzureChatOpenAI
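Presumably the `LC` prefix marks the LangChain-based wrapper, while the vanilla classes talk to the OpenAI SDK directly. A minimal usage sketch, assuming the import path and constructor parameters:

```python
# Hypothetical usage; the module path and parameter names are assumptions.
from kotaemon.llms import ChatOpenAI, AzureChatOpenAI  # assumed import path

# Vanilla OpenAI endpoint
llm = ChatOpenAI(api_key="sk-...", model="gpt-3.5-turbo")

# Azure-hosted deployment
azure_llm = AzureChatOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com/",
    azure_deployment="<deployment-name>",
    api_version="2023-05-15",
)
```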
* Remove the highest-accuracy and lowest-cost criteria
These criteria are unnecessary: the users, not the pipeline creators, should
choose which LLM to use. Furthermore, it is cumbersome to input this
information, which really degrades the user experience.
* Remove the LLM selection from the simple reasoning pipeline
* Provide a dedicated stream method to generate the output
* Return a placeholder message to the chat if the generated text is empty
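A sketch of these two behaviors; the method and attribute names here are hypothetical, not the pipeline's actual interface:

```python
from typing import Iterator

class Pipeline:
    def stream(self, prompt: str) -> Iterator[str]:
        """Yield chunks of the answer as they arrive, instead of
        blocking until the full text is generated."""
        for chunk in self.llm.stream(prompt):  # assumed streaming LLM call
            yield str(chunk)

    def to_chat(self, text: str) -> str:
        """Never push an empty message to the chat UI."""
        if not text.strip():
            return "(The model returned an empty answer.)"  # placeholder text
        return text
```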
* Add AdobeReader as a multimodal loader
* Allow FullQAPipeline to reason on figures
* fix: move the Adobe import to avoid ImportError, and notify users whenever they run the AdobeReader (see the sketch below)
Co-authored-by: cin-albert <albert@cinnamon.is>
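The fix follows the common deferred-import pattern: import the optional dependency inside the method so the package itself stays importable. A sketch, with a hypothetical helper module and message:

```python
class AdobeReader:
    def load_data(self, path: str):
        try:
            # Import inside the method, not at module top level, so the
            # package stays importable when the Adobe SDK is not installed.
            from .adobe_helpers import request_adobe_service  # hypothetical helper
        except ImportError as exc:
            raise ImportError(
                "AdobeReader requires the Adobe PDF Services SDK; "
                "install it before using this loader."
            ) from exc
        # Notify users on every run, since the loader calls an external API.
        print("Note: AdobeReader sends the document to Adobe PDF Services.")
        return request_adobe_service(path)
```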
* Serve the local model in a different process from the app (see the sketch below)
Co-authored-by: albert <albert@cinnamon.is>
Co-authored-by: trducng <trungduc1992@gmail.com>
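A minimal sketch of the process separation, with a plain HTTP server standing in for the real inference endpoint:

```python
import multiprocessing
from http.server import BaseHTTPRequestHandler, HTTPServer

class _ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Placeholder: a real server would run local-model inference here.
        prompt = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"completion for: " + prompt)

def serve_model(port: int = 31415) -> None:
    HTTPServer(("127.0.0.1", port), _ModelHandler).serve_forever()

if __name__ == "__main__":
    # The model lives in its own process; the app only talks to it over
    # HTTP, so loading or crashing the model cannot block the UI process.
    multiprocessing.Process(target=serve_model, daemon=True).start()
```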
* Create user management page
* Remove the old user-creation UI
* Add username validation and admin user auto-creation
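An illustrative sketch of both rules; the actual validation policy and admin defaults may differ:

```python
import re

# Assumed policy: 3-32 characters, letters, digits, underscore or hyphen.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,32}$")

def validate_username(username: str) -> bool:
    return bool(USERNAME_RE.match(username))

def ensure_admin_user(users: dict) -> None:
    """Auto-create a default admin account on first startup (sketch)."""
    if "admin" not in users:
        users["admin"] = {"is_admin": True, "must_change_password": True}
```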
* Provide docs on user management
* Bump version
1. Introduce the concept of a "collection_name" to the docstore and vector store (see the sketch after this list). Each collection can be viewed much like a table in a SQL database; it allows information within this data source to be organized better.
2. Move the `Index` and `Source` tables from the application scope into the index scope. For each new index created by the user, these tables are created accordingly, so they depend on the index rather than on the app.
3. Make each index responsible for the UI components in the app.
4. Construct the File UI page.
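A sketch of the collection idea from item 1; the class names and method signatures are assumptions:

```python
# Hypothetical store interfaces illustrating "collection_name".
from kotaemon.storages import InMemoryDocumentStore, ChromaVectorStore  # assumed paths

documents = []   # Document objects produced by a file loader
embeddings = []  # the corresponding vectors

docstore = InMemoryDocumentStore()
vectorstore = ChromaVectorStore(path="./vectorstore")

# Each index writes into its own collection, much like a table in a SQL
# database, so documents from different indices never mix.
docstore.add(documents, collection_name="index_1")
vectorstore.add(embeddings, collection_name="index_1")
```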