These are just my own favorites; there are others, too. But I don't really like using public versions... only sometimes.
Gemini (Google)
ChatGPT (OpenAI)

These are the apps I've already tested before.
LM Studio
llama.cpp
ollama

These I haven't tested myself; I've only heard about them. Maybe they're good?
Jan
vllm
msty

I really like running LLMs on my local machine, or somewhere else not accessible by others. Even if I'm pretty sure E.T./A.I. phones home... xD~
Open LLM Leaderboard

Here are my favorite models (currently)
...
sorted by date/...randomly!
...
BTW, I prefer pure text models over multi-modal ones; I really believe text/code is the most important form of intelligent data! ... At least for my own purposes.
GLM-4.5
DeepSeek-V3.1-Base
DeepSeek-R1-0528
Qwen3-Coder-480B-A35B-Instruct
Qwen3-235B-A22B-Thinking-2507
MiniMax-M1-80k
MiniMax-Text-01
Kimi-K2-Base
Devstral-Small-2507
Mistral-Large-Instruct-2411
medgemma-27b-text-it
Mixtral-8x22B-v0.1
DeepSeek-Coder-V2-Instruct-0724
Llama-4-Maverick-17B-128E

This is currently the only abliterated model I'm using:
Meta-Llama-3-70B-Instruct-abliterated-v3.5

The only problem: it seems a bit too complicated for me, and there are not that many abliterated, current models available there.
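In case "abliterated" is unfamiliar: such models have had their refusal behavior removed by estimating a "refusal direction" in activation space (roughly, the difference of mean activations between prompts the model refuses and prompts it answers) and projecting that direction out. Here's a toy numpy sketch of just that projection step, on purely synthetic vectors; this is my own illustration, not code from the posts linked further down, and real implementations operate on actual transformer activations.

```python
import numpy as np

# Toy illustration of the "abliteration" idea on synthetic data:
# 1) estimate a "refusal direction" as the difference of mean activations
#    over refused vs. answered prompts,
# 2) project that direction out of every activation vector.
rng = np.random.default_rng(0)
d_model = 64

# Synthetic stand-ins for hidden states collected from a model.
refused_acts = rng.normal(size=(32, d_model)) + 3.0 * np.eye(d_model)[0]
answered_acts = rng.normal(size=(32, d_model))

# Difference-of-means gives the candidate refusal direction (unit norm).
refusal_dir = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(h: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each row of h along `direction`."""
    return h - np.outer(h @ direction, direction)

cleaned = ablate(refused_acts, refusal_dir)
# After ablation the activations have (numerically) zero component
# along the refusal direction.
print(np.abs(cleaned @ refusal_dir).max())
```

In a real model you'd hook the residual stream, collect activations for the two prompt sets, and either ablate at inference time or bake the projection into the weights; that's the "global" part, as opposed to prompt tricks.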
But since the technique is interesting, I also posted about it here; jfyi.
It's about uncensoring LLMs ... the "easy" way (in a "global" form, not, as usual, by changing the prompt(s)). Find out more here:
Uncensor any LLM with abliteration #1
Uncensor any LLM with abliteration #2
Refusal in LLMs is mediated by a single direction #1
Refusal in LLMs is mediated by a single direction #2
Demo of bypassing refusal

Created by myself, to make it easier for me to handle the models, etc.:
convert-hf-to-gguf.sh
hfdownloader.sh
hfget.sh

I collected the interesting links (for myself) while surfing the web.
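The scripts themselves aren't shown here, but the basic flow of a download-and-convert helper like hfget.sh can be sketched. Everything below (repo id, paths, the converter script location) is a placeholder/assumption of mine, and the sketch only assembles the commands instead of running them:

```python
import shlex
from pathlib import Path

def plan_hfget(repo_id: str, out_root: str = "models") -> list[str]:
    """Assemble (but don't run) the commands a hfget.sh-style helper
    might execute: download a snapshot from the Hugging Face Hub,
    then convert the checkpoint to GGUF with llama.cpp's converter."""
    out_dir = Path(out_root) / repo_id.split("/")[-1]
    return [
        # 1) download the model snapshot
        f"huggingface-cli download {shlex.quote(repo_id)} --local-dir {out_dir}",
        # 2) convert to GGUF, usable by llama.cpp / LM Studio / ollama
        f"python convert_hf_to_gguf.py {out_dir} --outfile {out_dir}.gguf",
    ]

for cmd in plan_hfget("org/model"):  # "org/model" is a placeholder repo id
    print(cmd)
```

A real helper would also want resumable downloads and a quantization step, but that's the gist.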
They aren't sorted by priority or anything like that ...
they're just sorted by date!
I don't know how well this scales in practice, but the idea itself seems interesting!
See (with its own GitHub repository).
The official link is above, in the list of links. I don't know all these types of neural networks, but it looks like an interesting overview.
This is my very own artificial intelligence.
It's based entirely on pure bytes, nothing with tokens or the like (which current LLMs are built on).
The input is processed abstractly, and the last output byte is then injected again as a second, parallel input
(a feedback loop).
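The feedback wiring can be made concrete with a tiny Python sketch. The step function below is just a placeholder (XOR), not the actual network; only the byte-in/byte-out plumbing, with the last output fed back as a second, parallel input, is the point.

```python
# Toy sketch of the byte-level feedback idea: the system consumes raw
# bytes (no tokenizer), and its previous output byte is fed back in as
# a second, parallel input on the next step.

def step(input_byte: int, feedback_byte: int) -> int:
    """Placeholder 'model': combine the two parallel byte inputs."""
    return (input_byte ^ feedback_byte) & 0xFF  # any byte-to-byte map works

def run(data: bytes) -> bytes:
    out = bytearray()
    feedback = 0  # no previous output before the first step
    for b in data:
        y = step(b, feedback)
        out.append(y)
        feedback = y  # feed the last output back as the second input
    return bytes(out)

print(run(b"ABC").hex())  # prints 410340
```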
And here are some screenshots of the process itself and of two helper utilities I've created especially for this purpose; also my dump.js.