Feat/local endpoint llm (#148)

* Serve the local model in a separate process from the app.
---------

Co-authored-by: albert <albert@cinnamon.is>
Co-authored-by: trducng <trungduc1992@gmail.com>
Author: ian_Cin
Date: 2024-03-15 16:17:33 +07:00
Committed by: GitHub
Parent: 2950e6ed02
Commit: df12dec732
20 changed files with 675 additions and 79 deletions

@@ -14,6 +14,7 @@ IF %ERRORLEVEL% EQU 0 (
ECHO The current workdir has whitespace which can lead to unintended behaviour. Please modify your path and continue later.
GOTO :end
)
CALL :print_highlight "Setup Anaconda/Miniconda"
CALL :download_and_install_miniconda
:: if the function failed, exit the script
@@ -30,6 +31,10 @@ CALL :print_highlight "Install requirements"
CALL :install_dependencies
IF ERRORLEVEL 1 GOTO :end
CALL :print_highlight "Setting up a local model"
CALL :setup_local_model
IF ERRORLEVEL 1 GOTO :end
CALL :print_highlight "Launching web UI. Please wait..."
CALL :launch_ui
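
The IF ERRORLEVEL 1 guard above relies on serve_local.py signaling failure through its process exit code. A minimal, hypothetical sketch of that convention in Python, with setup() standing in for whatever work the real script does:

# Hypothetical sketch: how scripts/serve_local.py could report failure to the
# batch caller, which checks IF ERRORLEVEL 1 after CALL :setup_local_model.
import sys

def setup() -> None:
    """Placeholder for the real setup work (download model, start server)."""
    raise RuntimeError("model download failed")  # simulated failure

if __name__ == "__main__":
    try:
        setup()
    except Exception as exc:
        print(f"Local model setup failed: {exc}", file=sys.stderr)
        sys.exit(1)  # nonzero exit code -> ERRORLEVEL 1 in the batch script
    sys.exit(0)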
@@ -126,6 +131,10 @@ IF %ERRORLEVEL% == 0 (
)
GOTO :eof
:setup_local_model
python "%CD%\scripts\serve_local.py"
GOTO :eof
:launch_ui
CALL gradio "%CD%\libs\ktem\launch.py" || ( ECHO. && ECHO Will exit now... && GOTO :exit_func_with_error )
GOTO :eof
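
The commit's core idea, per the message above, is to run the model server in its own process so the app talks to it over a local endpoint. As a rough illustration only, here is a hypothetical sketch of what scripts/serve_local.py could look like, assuming a GGUF model file and llama-cpp-python's OpenAI-compatible server; the model path, the package, and the default port 8000 are all assumptions, not the commit's actual code:

# Hypothetical sketch, not the actual scripts/serve_local.py: launch an
# OpenAI-compatible server for a local GGUF model in a separate process.
import subprocess
import sys
from pathlib import Path

MODEL_PATH = Path("models") / "local-model.gguf"  # assumed location

def main() -> int:
    if not MODEL_PATH.exists():
        # Nothing to serve; exit 0 so the batch script continues to the UI.
        print(f"No local model found at {MODEL_PATH}, skipping local server.")
        return 0
    # llama-cpp-python ships an OpenAI-compatible server module; Popen keeps
    # it in its own process, separate from the app (per the commit message).
    proc = subprocess.Popen(
        [sys.executable, "-m", "llama_cpp.server", "--model", str(MODEL_PATH)]
    )
    print(f"Local model server started (pid {proc.pid}).")
    return 0

if __name__ == "__main__":
    sys.exit(main())

With a server like this running, the app can point any OpenAI-compatible client at a local base URL (for example http://localhost:8000/v1, the package's default) instead of a remote API.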