Intelligence

Large Language Models

These are just my personal favorites; there are others, too. I don't really like using the public (hosted) versions, though, and only do so occasionally.

ChatGPT (OpenAI)
Gemini (Google)

Inference

llama.cpp
ollama
LM Studio
Jan
vLLM
msty

Models

Open LLM Leaderboard

These are my current favorite models.
FYI: sorted by date (newest first).

Qwen3-235B-A22B-Instruct-2507
medgemma-27b-text-it
Kimi-K2-Base
MiniMax-M1-80k
Magistral-Small-2506
Devstral-Small-2505
DeepSeek-R1-0528
DeepSeek-V3-Base
Mistral-Large-Instruct-2407
Mixtral-8x22B-v0.1
Hermes 3
Llama-3.3-70B-Instruct
MiniMax-Text-01

Abliterated Models

Abliteration is about uncensoring LLMs the "easy" way: instead of the usual approach of changing the prompt(s), it removes the refusal behavior "globally" by editing the model's weights. Find out more here:

Uncensor any LLM with abliteration #1
Uncensor any LLM with abliteration #2
Refusal in LLMs is mediated by a single direction #1
Refusal in LLMs is mediated by a single direction #2
Demo of bypassing refusal
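The core idea from the linked posts can be sketched in a few lines of NumPy. This is only my own toy illustration of the math, not the code from those articles: estimate the "refusal direction" as the difference of mean activations between harmful and harmless prompts, then project that direction out of a weight matrix so the layer can no longer write along it. All arrays here are random stand-ins for real model activations.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate the 'refusal direction' as the normalized difference of
    mean activations on harmful vs. harmless prompts."""
    diff = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)

def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Orthogonalize a weight matrix against the direction, so its output
    has no component along it: W' = (I - r r^T) W."""
    return weight - np.outer(direction, direction @ weight)

# Toy demo: random 'activations' with hidden size 8, where the harmful
# set is shifted along the first axis to create a fake refusal direction.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(16, 8)) + np.eye(8)[0]
harmless = rng.normal(size=(16, 8))

r = refusal_direction(harmful, harmless)
W = rng.normal(size=(8, 8))
W_abl = ablate(W, r)

# After ablation, the matrix can no longer write along r:
print(np.allclose(r @ W_abl, 0.0))  # True
```

In a real model this projection is applied to the matrices that write into the residual stream (e.g. attention output and MLP down-projections) across many layers; the one-matrix version above only shows the linear algebra.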

Tools

I created these myself to make handling all the models easier.

convert-hf-to-gguf.sh
hfdownloader.sh
hfget.sh

I collected these interesting links (for myself) while surfing the web.
They aren't ranked by priority or anything like that; they're simply sorted by date!

Large Language Model Course
Neural Network Zoo
"What are the risks from Artificial Intelligence?"
"Machine learning in a few clicks"
AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs
Genesis
LLM Visualization
Spreadsheets are all you need
NodeJS library for Llama Stack
The GPT-3 Architecture, on a Napkin
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Microsoft KBLaM: Knowledge Base augmented Language Model
MICrONS Explorer: A virtual observatory of the cortex
CL4R1T4S (System Prompt Transparency for all)
Echo Chamber: A Context-Poisoning Jailbreak That Bypasses LLM Guardrails
How Can AI ID a Cat? An Illustrated Guide.
Bias in der künstlichen Intelligenz

Continuous Thought Machines

I don't know how well this scales in practice, but the idea itself seems interesting!
See (with its own GitHub repository).

`Norbert`

This is my very own artificial intelligence.

It operates entirely on raw bytes, with no tokens or anything like that (which current LLMs are based on).
The input is processed abstractly, and the last output byte is then injected again as a second, parallel input byte (a feedback loop).
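Norbert's actual internals aren't shown here, so as a purely hypothetical sketch, here is what that byte-in/byte-out feedback loop could look like: each step receives the current input byte plus the previous output byte as a second, parallel input. The `step` transform (a simple XOR) is entirely made up for illustration.

```python
def step(inp: int, feedback: int) -> int:
    """Hypothetical per-byte transform; the real 'Norbert' logic is
    not public. Combines the input byte with the fed-back output byte."""
    return (inp ^ feedback) & 0xFF

def run(data: bytes) -> bytes:
    """Process a byte stream; each output byte is injected back as the
    second, parallel input of the next step (the feedback loop)."""
    out = bytearray()
    feedback = 0  # no previous output before the first byte
    for b in data:
        y = step(b, feedback)
        out.append(y)
        feedback = y  # feed the last output back in
    return bytes(out)

print(run(b"AB").hex())  # 4103: 0x41^0x00=0x41, then 0x42^0x41=0x03
```

The point of the sketch is only the wiring: there is no tokenizer anywhere, just bytes flowing through a transform with its own output looped back in.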

Screenshots

Here are some screenshots of the process itself and of two helper utilities I created especially for this purpose, plus my dump.js.

Example screenshot
Example `learn` screenshot
Debugging the intelligent output
Bit testing