We receive many requests about the security, speed, and cost of LLMs. With the release of Llama3 and the steady improvement of PC performance, it is clear that there is real demand for running an LLM locally, so we have built a feature for integrating a local LLM.
Follow these steps to integrate with a local large model.
You can access the local LLM settings from these two locations.
For Windows users, we fully automate downloading and installing Ollama. Mac users need to download and install it themselves by following the video instructions.
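If you want to confirm the installation outside the app, here is a minimal sketch (assuming Ollama's default local endpoint, `http://localhost:11434`, and that the `requests` package is available) that checks whether the local server is running:

```python
# Check whether a local Ollama server is reachable.
# Assumes the default endpoint http://localhost:11434.
import requests

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    try:
        # The root endpoint replies "Ollama is running" when the server is up.
        return requests.get(base_url, timeout=2).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Ollama running:", ollama_is_running())
```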
In this step, select the model you need and download it; the process is the same on Mac and Windows. Once the download completes, the setup automatically proceeds to the next step.
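The app handles the download for you; purely for illustration, this sketch shows what pulling a model through Ollama's local API looks like (the model name `llama3` is just an example):

```python
# Pull a model via the local Ollama API (illustrative sketch only;
# the app performs this step automatically).
import json
import requests

def pull_model(name: str = "llama3", base_url: str = "http://localhost:11434") -> None:
    # /api/pull streams newline-delimited JSON status objects while downloading.
    with requests.post(f"{base_url}/api/pull", json={"name": name}, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:
                print(json.loads(line).get("status"))

if __name__ == "__main__":
    pull_model()
```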
Here you can run a quick test; if you get a response, all the necessary dependencies are installed. Click "Finish" to start using it.
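The test simply sends one prompt to the local model and waits for a reply. A minimal sketch of the same check, assuming the default Ollama endpoint and an example model name of `llama3`:

```python
# Send a single prompt to the local model and return its reply.
# Assumes the default Ollama endpoint and the example model "llama3".
import requests

def test_model(model: str = "llama3", base_url: str = "http://localhost:11434") -> str:
    resp = requests.post(
        f"{base_url}/api/generate",
        json={"model": model, "prompt": "Say hello in one sentence.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With "stream": False the whole completion comes back in one JSON object.
    return resp.json()["response"]

if __name__ == "__main__":
    print(test_model())
```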
Finally, chat in the chat bar: you'll see that the model has switched to llama2, and the credit consumption is zero!
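Because every chat turn is served by the model on your own machine, nothing is billed. For reference, a sketch of a single chat turn against the local model (assuming the default Ollama endpoint and the llama2 model shown above):

```python
# Send one chat message to the local model and print the assistant's reply.
# Assumes the default Ollama endpoint and the "llama2" model shown above.
import requests

def chat(prompt: str, model: str = "llama2", base_url: str = "http://localhost:11434") -> str:
    resp = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("What can you do?"))
```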