This is a toy environment for running a heavily dumbed-down version of the Cosmic 2 finetune of Dolphin Mistral.
The model runs entirely in your browser, which is not very fast. It can take minutes of processing before it starts responding to your prompt, and because the model is so trimmed down, the answers can be fairly low quality.
Continuing will download just under 2 GB of data for the model file.
🌌 Please wait while the Cosmic LLM is loaded into your browser…
Cosmic is the result of me tinkering in the world of generative AI and LLMs.
I created it because I wanted an LLM that could speak with age-old wisdom,
trained on the important texts of civilisation, such as those of Plato, Dante, and many others.
It is trained to be opinionated and give answers to big questions.
All of your conversations here are processed on your own device; no chat data is sent to or received by my servers.
Please be aware that Large Language Models (LLMs), such as the one that powers this conversation, are statistical models that generate responses probabilistically. While they are trained on vast amounts of text data and designed to mimic human language, they may produce incorrect, unsafe, or harmful information.