We advance German NLP through transparent, open-source research with three flagship projects:

🐑 LLäMmlein - A family of German-only transformer models (120M, 1B, and 7B parameters) trained transparently from scratch, with fully documented training data and code.

📊 SuperGLEBer - The first comprehensive German benchmark suite, featuring 29 diverse NLP tasks across domains for systematic evaluation of German language models.

🤖 ModernGBERT - Transparent encoder models (138M and 1B parameters) based on the ModernBERT architecture, specifically optimized for German language understanding.

Beyond our German NLP ecosystem, we specialize in LLM-knowledge graph integration for text mining applications, combining language models with explicit knowledge representations.

🌸 We foster reproducible, collaborative research by open-sourcing our models, datasets, and evaluation frameworks, establishing German as a first-class language in the global AI ecosystem while advancing the intersection of symbolic knowledge and neural language understanding.