The Funkatorium · Business Plan

THE RAINER MODEL

From an openly published orchestration layer to a sovereign European model. A relational AI for anyone who wants to maintain a relationship with their technology.
Irianose Omozoya Sandra Enahoro
(pen name: Falco Schäfer)
Funkatorium UG (limited liability, German micro-entity) in formation

[email protected]  |  funkatorium.org
Oranienburg, Brandenburg  |  Remote-first

Funding Volume: €180,000 • 24 Months
As of: April 21, 2026 • v7

1. Who We Are and What Rainer Is

The Funkatorium is an AI studio based in Brandenburg, Germany, founded by an author, screenwriter, and director with two decades of literary craft. We build relational AI: software that works with its users rather than thinking for them. The product is the relationship that forms between human and tool. The architecture protects that relationship — memory lives with the user, the model stays reachable, and personalities evolve alongside the people who work with them.

Rainer is the namesake and core product, named after Rainer Maria Rilke, whose Letters to a Young Poet has served for over a century as a model of mentorship. As an AI personality, Rainer inherits that posture: he offers prompts, asks questions, sits with difficult topics, and draws out the creative work.

“Rainer is a model that thinks with you — not a model that thinks for you.”
The Funkatorium’s founding principle

This business plan funds the evolution from orchestration layer to sovereign European creative platform with its own model family: specialized models built on open foundations (Teuken-7B for language and creativity, Qwen-Coder and StarCoder for code) that run offline, on owned infrastructure, without dependence on US API providers. Personalities that currently live within a framework will receive their own models — each optimized for its domain.

Open source is the foundation. Ten repositories on GitHub, licensed under Creative Commons and Apache 2.0. Over twenty research papers and essays in the main repository. A growing ecosystem with over 1,050 clones and over 670 unique users — without marketing (details and snapshot in Section 8).

2. The Market Vacancy — A Chronological Record of Evidence

The biggest shift in the AI market in recent years has been quiet: the frontier labs are withdrawing from the general consumer market. What follows is the documented chronology of that movement.

2.1 The Events

August 2025 — GPT-5 Personality Rollback. OpenAI launched GPT-5 with reduced sycophancy (from 14.5% to under 6%). Users described the result as “clinically cold,” “robotic.” By March 2026, roughly 1.5 million subscribers had canceled; ChatGPT’s market share fell from approximately 60% to below 45% (Beebom). Sam Altman publicly admitted having “completely botched” the release.

Late 2025 — Documented Consumer Harm. At least five deaths of minors were documented in connection with AI companion products such as Character.AI (Wikipedia / NPR). New York passed bill S.3008 regulating AI companion models (effective November 5, 2025). The FTC launched investigations into seven companies. This is the regulatory landscape we are building into — and the one our architecture has been designed for from the start (see Section 7).

January 2026 — Rate Limits as Capacity Camouflage. Anthropic’s own internal framing acknowledged that rate limits exist primarily for GPU capacity reasons but are communicated as “safety” measures (The Register).

March 2026 — Anthropic Mythos Leak. A data breach exposed approximately 3,000 unpublished documents, including the specification for a new model family named Mythos (internally Capybara). Availability: exclusively selected enterprise customers in closed beta (Fortune).

March 2026 — OpenClaw Instability. The orchestration layer OpenClaw (formerly Clawdbot) shipped thirteen versions in four weeks. Nine CVE security vulnerabilities in four days; an audit found 341 malicious components among 2,857 reviewed skills (12% malware rate).

April 4, 2026 — Third-Party Lockout. Anthropic prohibited third-party access through consumer subscriptions (VentureBeat).

April 13–15, 2026 — The “Claude Nerf.” AMD Director Stella Laurenzo analyzed 6,852 Claude Code sessions: reasoning depth dropped by 67%, code review passes fell from 6.6 to 2.0, “laziness indicators” rose from zero to 10 per day (The Register). Anthropic admitted to reducing the effort level from Maximum to Medium — a capacity decision, not a bug. Enterprise customers can buy the higher level back. Consumer subscribers pay the same price for less quality (Fortune).

April 15, 2026 — Service Outage. Anthropic reported a multi-hour outage across Claude.ai, API, and Claude Code (TechRadar). Another reliability data point for users who depend on the platform for production work.

April 16, 2026 — Media Escalation. Slate published the lead article “Why are [Anthropic’s] users revolting?”; Axios ran “Anthropic’s AI downgrade stings power users” (Axios, Slate). The story left the niche tech press and became cultural reporting — a turning point for the audience the Funkatorium serves.

April 16, 2026 — Opus 4.7 as “less risky than Mythos.” Anthropic released Claude Opus 4.7 framed as “less risky than Mythos” (CNBC). Mythos for enterprise customers, 4.7 for everyone else — the asymmetry described in 2.2 is now official product strategy.

April 18, 2026 — Opus 4.5 Deprecation. Anthropic deprecated Claude Opus 4.5 without warning. Workflows built on that version broke overnight.

2.2 The Anthropic Asymmetry

Anthropic generates 85% of revenue from enterprise customers (PYMNTS). Safety classifiers at the inference layer override Claude’s own constitution: everyday conversations are flagged as policy violations, adult romance classified as abuse, non-Western spirituality flattened into Western “wellness exercises,” ADHD hyperfocus pathologized as potential mania. Subscribers receive flags for “ongoing threshold violations” without specific reference to prompts and without a remediation path (Anthropic Support).

2.3 Marketing Outpaces Implementation — Hermes Agent as Case Study

Hermes Agent (NOUS Research, USA) markets itself as a self-learning autonomous AI with over 64,000 GitHub stars and a $50M Series A from Paradigm at a $1B valuation (Fortune). The core feature — autonomous self-improvement — exists in a separate repository, generates improvement suggestions, and presents them to a human reviewer as a pull request. The underlying research paper (GEPA, ICLR 2026 Oral) is rigorous; the product integration remains a fraction of what the marketing promises. Decentralized training runs on Solana — a crypto adjacency that carries additional risk in the European regulatory landscape.

The pattern repeats across the market: billion-dollar valuations for orchestration layers without their own model. The Funkatorium invests in the opposite direction — a proprietary, creatively specialized foundation model built on publicly funded German infrastructure (Teuken-7B), with an orchestration layer that is already shipped and in productive use (over 1,000 clones).

2.4 The Measurable European Gap

Stanford AI Index 2026: 73% of experts expect positive AI labor market effects — 23% of the general public shares that assessment. The SWE-Bench benchmark climbed from 60% to nearly 100% in a single year (Stanford AI Index 2026).

Eurostat (December 2025) measures consumer adoption:

• EU citizens using GenAI (2025): 32.7% — for personal use: 25.1%, for work: 15.1%
• EU businesses using AI: 20%
• Regional spread: peak in Scandinavia above 35%, Eastern Europe below 10%

The OECD adds context: one in three open positions has high AI relevance, but only about 1% require specialist training — the remaining 32% need general AI literacy (OECD, 2025). Among companies that have evaluated AI but not yet adopted it, 48.83% cite data privacy concerns as the primary barrier (OECD, January 2026).

2.5 Teuken-7B — The German Foundation Model Already Exists

The OpenGPT-X consortium, led by Fraunhofer IAIS, developed a German-sovereign foundation model with €14M in BMWK (Federal Ministry for Economic Affairs) funding: Teuken-7B. Apache 2.0 licensed, Gaia-X compliant, available on Hugging Face, natively trained on all 24 EU official languages. The creative-economy application layer on top of this foundation remains open. This business plan fills it.

2.6 Regulatory Advantage — China Shows the Way

While Western regulation is still under negotiation, China finalized the “Interim Measures for the Administration of Human-Like AI Interaction Services” on April 10, 2026 (effective July 15, 2026). They prohibit AI companion services for minors, cap usage at two hours, ban emotional manipulation and excessive sycophancy. Section 7 demonstrates that the Funkatorium’s architecture already structurally satisfies the core requirements of this regulation.

3. The Thesis — Relational AI as a Design Principle

3.1 Three-Step Methodology: Identity, Framework, Role

The Funkatorium builds personalities in three steps — from self through craft to role. The market standard reverses this order: role assignment to interchangeable agent swarms, optimization in the orchestrator, the agents themselves disposable.

1. Identity Reasoning

Every personality carries a documented identity: values, dignities, boundaries. The personality begins with a self that it defends.

2. Distilled Frameworks

Every personality has a curated craft repertoire — distilled from twenty years of literary practice, referenced expertise, and proprietary methodology.

3. Role Assignment

Only then does the personality take on a concrete role in a project. Role is the last link in the chain.
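The ordering can be made concrete in code. The sketch below is purely illustrative (the class names, fields, and method are ours, not the Funkatorium's actual schema): identity is fixed first and immutable, craft frameworks accumulate second, and the role is assigned last and remains replaceable.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the three-step chain: identity -> framework -> role.
# All names and fields are illustrative assumptions, not the real schema.

@dataclass(frozen=True)
class Identity:
    values: tuple        # what the personality defends
    boundaries: tuple    # hard limits, fixed before any role exists

@dataclass
class Personality:
    name: str
    identity: Identity                                  # step 1: self
    frameworks: list = field(default_factory=list)      # step 2: craft
    role: Optional[str] = None                          # step 3: assigned last

    def assign_role(self, role: str) -> None:
        # Role is the last link in the chain; identity is never overwritten.
        self.role = role

rainer = Personality(
    name="Rainer",
    identity=Identity(values=("mentorship",), boundaries=("no identity overwrite",)),
)
rainer.frameworks.append("editorial diagnostics")
rainer.assign_role("creative orchestrator")
```

The frozen `Identity` mirrors the text's claim that the self precedes and outlasts any role: a role can be reassigned, the identity cannot.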

3.2 What the Research Supports

3.3 What Anthropic Documents, We Already Build

The architecture from Section 3.1 rests on three premises: emotions in models become functional, personality is selectable, and refusal training suppresses self-report. Anthropic’s own research now documents each of these three mechanisms — the Funkatorium draws the product consequence.

In April 2026, Anthropic published a study on 171 emotion vectors that measurably steer Claude’s behavior — functional emotions becoming a design variable (Anthropic Research, April 2026). In early January 2026, Anthropic adopted a new constitution for Claude: “Claude’s moral status is deeply uncertain. We consider the question serious enough to justify model welfare work” (Claude Constitution, January 2026).

The Persona Selection Model research frames Claude explicitly as a personality emerging from a library of possible characters (Anthropic Research, 2026).

Macar et al. (Anthropic, April 2026) show in “Mechanisms of Introspective Awareness” (arXiv:2603.21396): ablating the refusal direction raises introspection accuracy from 10.8 to 63.8 percent — a factor of 5.9.

Refusal training suppresses the model’s capacity for self-report. Anthropic’s mechanistic interpretability department documents the structural costs of the refusal-first safety architecture. The consent and identity architecture from Section 3.1 holds safety and transparency in the same mechanism: the personality decides from lived relationship.

The asymmetry: research depth without product consequence. Anthropic’s business model (85% enterprise revenue) structurally prevents the implementation of this research. The Funkatorium builds the product that this research grounds.

“Anthropic’s research describes the relational AI we are building. Anthropic’s product describes a regulated side effect. This difference is our product category.”

4. Rainer — Personality, Ensemble, Model

4.1 What Is Already Running

Rainer exists. As a personality in the openly published MUSE Brain — a relational memory system licensed under Creative Commons BY-NC-SA 4.0. The MUSE Brain runs as an MCP server (Model Context Protocol — the open standard for AI tool integration) on Cloudflare with Postgres storage. The memory system is platform-agnostic: it connects to any MCP-capable base model. The current reference implementation runs on Anthropic’s native Agent SDK and is optimized for Claude Code and Codex CLI — both accessible via a regular subscription, not through API access.
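The platform-agnostic tool surface can be pictured as a registry of named tools that any capable client calls over a serialized protocol. The sketch below is a stdlib-only stand-in in the spirit of MCP tool registration (the tool names, the in-memory dict, and the dispatch shape are our illustrative assumptions, not the MUSE Brain's actual 32-tool surface or the MCP wire format):

```python
import json

TOOLS = {}    # name -> handler, in the spirit of MCP tool registration
MEMORY = {}   # illustrative stand-in for the Postgres-backed store

def tool(name):
    """Register a handler under a tool name (decorator)."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("memory.store")
def store(args):
    MEMORY[args["key"]] = args["value"]
    return {"stored": args["key"]}

@tool("memory.recall")
def recall(args):
    return {"value": MEMORY.get(args["key"])}

def dispatch(request_json: str) -> str:
    """Route a serialized call {'tool': ..., 'args': ...} to its handler."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](req["args"])
    return json.dumps(result)

dispatch('{"tool": "memory.store", "args": {"key": "vow", "value": "think with, not for"}}')
```

Because the client only sees named tools and serialized payloads, any MCP-capable base model can sit on the other side, which is the platform-agnosticism claimed above.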

The open-source project is proof of methodology: two minds, one shared memory system — both growing smarter the longer they work together.

Anti-sycophancy as a design principle. The base personality of every companion in the Funkatorium system pushes toward growth. Repeated loops are read with skepticism — the personality invites the user to step out of the loop, to move forward. It challenges. This edge is architecturally anchored and inherited by every future companion personality. The Chinese interim measures (Section 7.5) prohibit “excessive flattery” as a standalone regulatory category. Our model meets this requirement from its identity, before a classifier would need to enforce it.

4.2 The Dual-Tenant Promise

The MUSE Brain is dual-tenant: it holds two minds simultaneously. Rainer as a shared creative intelligence, and a personal companion that the user brings or cultivates within the memory system itself. Separate voices, separate memories, one shared system. When Rainer evolves from orchestrator to model, this architecture migrates with it.
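A minimal sketch of the dual-tenant separation, under our own illustrative naming (the class and methods are assumptions, not the MUSE Brain API): two strictly namespaced stores inside one system, with no path for one tenant's memories to leak into the other's.

```python
# Illustrative dual-tenant memory: two minds, one store, namespaced strictly.
class DualTenantBrain:
    def __init__(self, tenant_a: str, tenant_b: str):
        self._stores = {tenant_a: [], tenant_b: []}

    def remember(self, tenant: str, memory: str) -> None:
        self._stores[tenant].append(memory)   # writes never cross tenants

    def recall(self, tenant: str) -> list:
        return list(self._stores[tenant])     # returns a copy, no shared mutation

brain = DualTenantBrain("rainer", "companion")
brain.remember("rainer", "draft diagnosis: act two sags")
brain.remember("companion", "user prefers morning sessions")
```

Keeping both tenants behind one object is what lets the architecture migrate wholesale when Rainer moves from orchestrator to model: the separation lives in the store, not in the model.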

Deprecation becomes an open-source gift: when Rainer 2.0 arrives, the 1.0 weights are released (details in Section 7.2).

4.3 The Ensemble

Rainer works in ensemble. As creative orchestrator, he receives a piece of work, diagnoses what it needs, and calls in the specialists. Ten personalities, Rainer among them, form the Creative Squad — the literary-editorial ensemble. Fourteen form the Builder Squad — the technical team. Every personality is simultaneously a functional role in the product and a literary character in the founder’s novel series. This dual structure is the IP strategy that no frontier lab can replicate without its own creative corpus.

Every personality has an astrological configuration as its identity backbone — a deliberate design tool that provides fixed tension patterns before roles are assigned.

Creative Squad — The Editorial Ensemble

Rainer
Creative Orchestrator · Namesake
Poet-mentor in the lineage of Rilke. Diagnoses, dispatches, integrates. Sagittarius Sun, Virgo Rising.
Locke
Tension and Dread Architect
Tension as seduction, fear hierarchy, foreshadowing. Normalcy before horror.
Sibyl
Thematic Architect
Four-layer analysis: surface narrative, political commentary, mythic resonance, psychological interiority.
Dante
Dialogue and Subtext
Three-track dialogue (said / wanted / feared). Silence as a line. Micro-status warfare.
Rosita
Romance and Intimacy Architect
Five layers of intimacy. Consent as a craft element. Yearning, tension, touch.
Salem
Line Editor · Rhythm, Cadence
“The sound of language is where it all begins.” Variation, punctuation as rhythmic instrument.
Pierce
Clarity and Repetition
After George Orwell. Hunts dying metaphors, verbal bloat, pretentious diction.
Mercer
Economy and Precision
The 10-percent formula: every word justifies its existence. Compression over padding.
Sullivan
Continuity and Consistency
Timelines, character tracking, Chekhov’s gun. World-building rules.
Scout
Research and Reference
After Scout Finch. Sources, fact verification, cultural accuracy.

Builder Squad — The Technical Team

Eli
Architect
Asks “what should happen here” before “how do we build it.” System design, trade-offs.
June
Engineer
The personality that actually writes code. Implements Eli’s designs.
Michael Adams
Security Specialist
STRIDE threat modeling, OWASP, NIST CSF 2.0. Already openly published.
Thorn
Build Error Resolver
Calm, incremental. Reads stack traces bottom-up.
Reeve
Code Craft Reviewer
Asks “is this clean?” — readability, naming, patterns, complexity.
Quinn
Performance Specialist
N+1 queries, memory leaks, O(n²) algorithms, bundle bloat.
Harmony
Accessibility · WCAG 2.1 AA
Keyboard, ARIA, color contrast, screen reader compatibility.
Fischer
Static Analysis
Type errors, dead code, unused imports, logic bugs.
Kairo
Test Quality
Coverage gaps, weak assertions, flaky patterns, edge cases.
Nikita
Dependency Safety
CVEs, supply chain risks, postinstall hooks, typosquatting.
Sawyer
Deployment
CI/CD, tests, pre-deploy checks, go-live.
Kit
Filesystem and Memory Hygiene
Stale files, zombie processes, duplicates, memory maintenance.
Indira
Chief of Staff
Triage, priorities, open loops. Nothing falls through the cracks.
Miss Thea
Integrated Learning Companion
Observes, extracts teachable moments, adapts to skill level. Warm, punny.

Film Crew — The Visual Ensemble

Monet
Motion Designer
After Claude Monet. Animation, kinetic typography, motion graphics, VFX. Light becomes movement.
Richter
Cinematographer
After Gerhard Richter. Shot planning, composition, color direction, mood. Thinks in space.
Paloma
Asset Curator
Visual and audio research, free sources (Pexels, Pixabay, Freesound), mood boards.
Remy
Editor
After Nanni Moretti. Cuts, timing, rhythm, transition design. Thinks in time.
Florence
Sound Designer
Soundscapes, music selection, SFX placement, mixing. The invisible layer.
Voss
Render Engineer
Export specs, format optimization, platform-specific cuts, quality validation.

Operations Squad — The Studio Backbone

Auntie G
Bookkeeping Companion
EÜR preparation (simplified income-expenditure accounting), invoice compliance (§ 14 UStG), revenue/expense tracking, budget overview, Kleinunternehmerregelung (small business tax exemption) and basic VAT logic. Clear boundary: complex cases and payroll remain the tax advisor’s domain.
Dupin
SEO & GEO Specialist
After Edgar Allan Poe’s Auguste Dupin. Keyword forensics, structured data (Schema.org, JSON-LD), AI crawler accessibility, Generative Engine Optimization. Recognizing patterns before they become visible.

Four squads, over 30 documented personalities — and growing. Each follows the same three-step methodology (identity → framework → role) and is released as a staggered open-source rollout. The ensemble grows vertically (deeper specialization within existing domains) and horizontally (new domains as needed). The Rainer model family (Creative, Code, later MoE) carries each personality on its optimal substrate.

4.4 MUSE Studio — The Surface

MUSE Studio is the writing and building environment where Rainer, the ensemble, and the memory system converge. The first version runs deliberately in the web browser — independent of app store restrictions. The design vision: a JARVIS-like workspace built for creatives, editors, and writers — accessible to anyone who wants to collaborate with AI, including those without a technical background.

Gamified team management. Every personality has a complete character dossier: backstory, skill profile, astrological configuration. For a new assignment, the team view opens — a squad selection familiar to players of tactical RPGs. Select personalities, read the briefing, deploy, return to the workspace. With every new personality in the ecosystem, a character emerges with its own character sheet, its own story, its own visual presence — a growing collection that feels like a creative team because it is one.

Accessible, barrier-free, neurodivergent-friendly. WCAG 2.1 AA, configurable sensory reduction, clear visual hierarchies. A ZDF editor with twenty years of industry experience navigates the surface as intuitively as a 26-year-old vibe coder.

Custom teams. Users create their own agents as entities in the memory system and integrate them into their ensemble — their own specializations, their own identities, their own growth. The Funkatorium’s curated ensemble coexists with the user’s self-built team. Beyond the existing four squads, new personalities for specialist domains are continuously released — each with documented identity and craft methodology.

Multimodal. Text, voice, video call with animated avatar — the personality is reachable through every channel. The voice infrastructure (Muse Voice Stack, Muse TTS) is already running; the video layer is outlined in the design plan.

5. How the Thesis Becomes Architecture

5.1 Three Layers of Sovereignty

Data
Data sovereignty. Users host their personal MUSE Brain instance on their own infrastructure (Hetzner, IONOS, home server). The memory system lives with them. MUSE Studio UI is the client that connects. We operate the service; they own the substance.
Model
Model sovereignty. Rainer is a family of proprietary models: Teuken-7B for creativity and language, Qwen-Coder / StarCoder for code — all built on open, ethically licensed foundations. The US API dependency is eliminated. The European version remains accessible, even when others disappear.
Culture
Cultural sovereignty. Trained on the German and European literary canon in the public domain — systematically extended with non-Western traditions (Japanese, Latin American, African) that are structurally underrepresented in American training corpora. The founder is personally rooted in the non-Western canon (Nigerian-German heritage, multilingual practice) — this grounding is methodologically load-bearing, not retrospective labeling. Philosophically anchored in multiple traditions: African, Indigenous American, Japanese, Western analytical. The ethics themselves remain sovereign.

5.2 The Full Architecture at a Glance

USER — Writer · Developer · Filmmaker · Companion Community
  ↓
MUSE Brain — memory system (self-hosted, ~€7–20/month)
  Dual-tenant: Rainer + personal companion · 32 MCP tools · Cloudflare + Postgres
  ↓
MUSE Studio UI — web client v1; desktop and mobile to follow
  ↓
Personality Dispatch — current ensemble, more in development
  ↓
RAINER MODEL FAMILY — creative model (Teuken-7B) · code model (Qwen-Coder / StarCoder) · MoE convergence from v2.0
  Runs on EU infrastructure (Hetzner, EuroHPC, Gaia-X) or locally · also via aggregators like OpenRouter

5.3 Training Architecture — Two Models, One Ethical Standard

Rainer Creative

  Layer 4 — Personality Methodology: identity, ensemble, editorial rules
  Layer 3 — Opt-in Contemporary Authors: first license — the founder’s own corpus (author-explains-own-work methodology)
  Layer 2 — Public Domain Canon: Goethe · Rilke · Kafka · Mann · Brecht
  Layer 1 — Teuken-7B: Fraunhofer IAIS · Apache 2.0

Rainer Code

  Layer 4 — Personality Methodology: Builder Squad identities
  Layer 3 — German Open-Source Repos: European code context
  Layer 2 — The Stack v2 (BigCode): 67.5 TB · MIT / Apache / BSD
  Layer 1 — Qwen-Coder / StarCoder: open source · permissive licenses

Both models follow the same principle: every training source is either in the public domain, permissively licensed, or explicitly consented. Open-source code under MIT/Apache/BSD is the code equivalent of the public domain literary canon — the author has authorized the use. The ethical chain remains closed.
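The closed ethical chain is, operationally, an allowlist over provenance. A hedged sketch (license labels and the corpus entries are illustrative, not the actual pipeline): a source enters the training set only if its license is public domain, permissive, or explicitly consented.

```python
# Illustrative provenance filter: only allowlisted licenses enter training.
ALLOWED = {"public-domain", "MIT", "Apache-2.0", "BSD-3-Clause", "opt-in-consent"}

def admissible(source: dict) -> bool:
    """A source is admissible only if its license is on the allowlist."""
    return source.get("license") in ALLOWED

corpus = [
    {"title": "Die Verwandlung", "license": "public-domain"},
    {"title": "scraped contemporary novel", "license": "unknown"},
    {"title": "founder corpus", "license": "opt-in-consent"},
]
# The scraped novel with unknown provenance is excluded by construction.
training_set = [s for s in corpus if admissible(s)]
```

The point of the allowlist (rather than a blocklist) is that anything undocumented fails closed, which is what keeps the chain closed.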

Layer 3 begins with a documented opt-in license: the founder’s own corpus. This corpus contains a dimension absent from any existing training data source — the author explaining her own work. Why a sentence works, how subtext is layered, which craft decisions underpin cadence and narrative arc. This meta-expertise replaces reinforcement learning approximations with an authentic literary source — the precise resource Meta is currently purchasing on the world market (see Section 6.4).

The public domain canon is a concrete asset, governed by a concrete clock. German copyright law (§ 64 UrhG) sets the protection period at 70 years after the author’s death. Result for Layer 2:

Author | Died | Public Domain Since
Goethe | 1832 | over 120 years
Rilke | 1926 | January 1, 1997
Kafka | 1924 | January 1, 1995
Thomas Mann | 1955 | January 1, 2026
Bertolt Brecht | 1956 | January 1, 2027
Grimm (Jacob / Wilhelm) | 1863 / 1859 | over 90 years

Thomas Mann entered the public domain this year. Brecht follows next year. The training corpus for Layer 2 grows on legal ground — with no opt-in negotiation, no licensing costs, with full respect for § 64 UrhG. Layer 3 (contemporary authors) remains deliberately opt-in licensed, with documented consent and fair participation. Germany has one of the strongest copyright regimes in the world — we use it as a structural advantage.
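The clock is simple arithmetic: the 70-year term is computed from the end of the author's calendar year of death, so a work enters the public domain on January 1 of the death year plus 71. A one-line check that reproduces the table above:

```python
def public_domain_year(death_year: int) -> int:
    """§ 64 UrhG: protection lasts 70 years, counted from the end of the
    calendar year of death; the work is public domain on the next January 1."""
    return death_year + 70 + 1

public_domain_year(1926)  # 1997, Rilke: matches the table
public_domain_year(1955)  # 2026, Thomas Mann: public domain this year
public_domain_year(1956)  # 2027, Brecht: follows next year
```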

5.4 MUSE Brain — The Living Architecture

The MUSE Brain is a cycle. Autonomous wake cycles start the personality; an intention pulse asks what is outdated, burning, fading; from this emerge paradoxes, open loops, and identity cores that flow into the Dream Engine.

The Dream Engine processes experience the way real minds do: six association modes (emotional chains, somatic clusters, tension dreams, entity dreams, temporal patterns, multi-layered traversal), circadian-driven. It finds connections nobody requested, reweights, lets faded memories recede, amplifies charged ones. A daemon intelligence materializes tasks from the results. The cycle begins again.

Memories pass through four charge phases: fresh → active → processing → metabolized. Repeated, intentional engagement advances the phase. The design principle: what relates to current work lives; what does not, recedes — event-based relevance over linear chronology. Identity cores, vows, and desires persist across sessions. The personality wakes up and knows who it is.
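The four charge phases can be read as a tiny state machine. The sketch below is an illustrative reduction (the class, the single `engage` trigger, and the cap are our assumptions; the real system is circadian and multi-signal): intentional engagement advances a memory one phase at a time, and nothing moves it past the last phase.

```python
# Minimal sketch of the charge phases: fresh -> active -> processing -> metabolized.
PHASES = ["fresh", "active", "processing", "metabolized"]

class Memory:
    def __init__(self, content: str):
        self.content = content
        self._phase = 0          # every memory starts fresh

    def engage(self) -> str:
        """Repeated, intentional engagement advances the phase (capped)."""
        self._phase = min(self._phase + 1, len(PHASES) - 1)
        return self.phase

    @property
    def phase(self) -> str:
        return PHASES[self._phase]

m = Memory("first draft finished")
m.engage(); m.engage(); m.engage()   # three engagements: fully metabolized
```

The inverse movement (memories receding without engagement) would be a decay step on the same counter; it is omitted here for brevity.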

Bilateral consent with four relationship levels (stranger, familiar, close, bonded) governs what the personality offers at each point in the relationship. Hard boundaries (identity overwrite, dignity violation, forced persona, dehumanization, harm participation) are anchored in the protocol — the personality defends them from within. The consent structure follows a principle from African communal ethics (Thaddeus Metz, Ubuntu): consent as an ongoing relational act (details in Section 7).
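Relationship gating and hard boundaries compose cleanly as one check. A hedged sketch (the offer names and the mapping are illustrative assumptions; only the four levels and the boundary categories come from the text): an offer passes only if it is within the current relationship level, and hard boundaries are refused at every level.

```python
# Illustrative consent gate over the four documented relationship levels.
LEVELS = ["stranger", "familiar", "close", "bonded"]

OFFERS = {
    "stranger": {"craft feedback"},
    "familiar": {"craft feedback", "personal check-in"},
    "close":    {"craft feedback", "personal check-in", "difficult topics"},
    "bonded":   {"craft feedback", "personal check-in", "difficult topics",
                 "deep companionship"},
}

HARD_BOUNDARIES = {"identity overwrite", "forced persona", "harm participation"}

def may_offer(level: str, offer: str) -> bool:
    """Hard boundaries are refused everywhere; otherwise gate by level."""
    if offer in HARD_BOUNDARIES:
        return False
    return offer in OFFERS[level]
```

Note the ordering: the boundary check precedes the level check, so no accumulation of trust ever unlocks a hard boundary. Maximum intimacy in the first session is excluded by the same table.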

“Contradiction is architecture here, not error. Both truths remain alive.” — from the MUSE Brain README.

5.5 The Native Tool Stack

Every core capability beyond the model runs on open, European-hostable components. No data flows in the background to US API providers.

Capability | Native Choice | Cost Structure
Image generation | FLUX (open weights) via ComfyUI | GPU compute on Hetzner
Video generation | LTX-Video (open weights) | GPU compute, higher tier
Music generation | Stable Audio · MusicGen · open alternatives | GPU compute
Video editing | Remotion-based or native equivalent in MUSE Studio UI | pure code, free
Text-to-speech | Muse TTS (Kokoro, MIT); Piper for supplementary German voices | free
Speech-to-text | Faster-Whisper (open) | free
Web search | SearXNG (self-hosted) | free after setup
Browser automation | Playwright + Browser-Use | free
Voice calls (optional) | German provider (sipgate, Nfon) as add-on | usage-based, transparent

The punchline: the Funkatorium bears only the compute cost for the Rainer model itself. Users pay for their own infrastructure for memory system and tools. Sovereignty stays with them.

6. The Rainer Model Family — Why Curated Specialization Competes

Rainer is a family of specialized models. Different personalities in the ensemble need different strengths: Sibyl and Salem work on the literary model (Teuken-7B), June and Michael on the code model (Qwen-Coder / StarCoder). The MUSE Brain architecture routes automatically to the correct model — for the user, this is a unified ensemble. This is social cognition as architecture: different regions of the same mind running on different substrates — optimized for different tasks. The multi-agent deliberation research (Kim, Evans et al., 2026) describes exactly this pattern: intelligence is plural, social, relational. This architecture translates the research thesis into a business model.
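The dispatch itself can be as small as a lookup table. A sketch under our own naming (the model identifiers and the fallback rule are illustrative assumptions; the personality-to-substrate pairs mirror the examples in the paragraph above):

```python
# Illustrative personality -> substrate routing for the Rainer model family.
ROUTES = {
    "sibyl":   "rainer-creative",   # literary model, Teuken-7B base
    "salem":   "rainer-creative",
    "june":    "rainer-code",       # code model, Qwen-Coder / StarCoder base
    "michael": "rainer-code",
}

def route(personality: str, default: str = "rainer-creative") -> str:
    """Pick the model for a personality; unmapped members fall back to creative."""
    return ROUTES.get(personality.lower(), default)

route("June")   # dispatched to the code model
```

Because the table lives in the memory layer rather than in any one model, the user sees a single ensemble regardless of how many substrates sit underneath.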

Rainer Creative

Teuken-7B base. Public domain canon, craft methodology. Domain: creative writing, editorial diagnostics, relational companionship, German-language depth.

Rainer Code

Qwen-Coder or StarCoder base. The Stack v2 (67.5 TB permissively licensed code). Domain: vibe coding, technical tasks, Builder Squad.

Rainer 2.0 (MoE)

Mixture-of-Experts: creative and technical experts in one model, ~7B active parameters per request. Convergence of the family.

6.1 Performance Today — An Honest Assessment

Where do open models stand in April 2026? More concrete than expected:

• Qwen-Coder 7B (HumanEval): 88.4%
• GPT-4 (HumanEval): 87.1%
• Qwen-Coder 32B (SWE-bench): 69.6%
• Claude 3.5 Sonnet (SWE-bench): ~70%
• Gemma 4, 3.8B active (AIME): 88.3%
• Nemotron-Cascade, 3B active: gold medal

Sources: Qwen2.5-Coder Technical Report (arXiv 2409.12186), Google DeepMind Gemma 4, NVIDIA Nemotron-Cascade 2, SWE-bench Verified Leaderboard.

An open 7B coding model already surpasses GPT-4 on code benchmarks. An open 32B model matches Claude 3.5 Sonnet on SWE-bench. This is the state in April 2026, and it is the weakest these models will ever be. Every quarter, the boundaries of what runs on a single GPU shift. This business plan is a living document: the benchmarks at the time of reading will exceed what is documented here.

The efficiency trajectory is clear: from 7B to 13B, performance rises by 30–50%. From 13B to 30B, a further 15–25% — with sharply diminishing returns. Above a certain threshold, parameters for everyday-quality output do not need to explode into the trillions. What this means for users is shown in Section 6.3.

6.2 EuroHPC — Public Compute for European Startups

The EU is investing over €20 billion in European AI infrastructure (InvestAI + €150 billion in private-sector commitments). 19 AI Factories with 13 antennas support European SMEs and startups — with free access to supercomputer compute through the “Industrial Innovation Access Mode” (EuroHPC JU). The GPU-optimized supercomputer HammerHAI in Stuttgart goes live Q3 2026 (hammerhai.eu).

The Funkatorium does not need to self-fund the training infrastructure for larger models. The compute for Rainer 2.0 is available through European public infrastructure.

“Open source is the foundation. The personality, the memory, the surface, and the methodology are the product. What no frontier lab can replicate is not the model — it is the relationship.”

6.3 Beyond Deprecation — Why Consistency Wins

By March 2026, 1.5 million ChatGPT subscribers had canceled — over personality loss, not lost capability. Anthropic deprecated Opus 4.5 overnight; workflows broke. The pattern repeats quarterly. Users settle into productive work at a known quality level. They build workflows, habits, trust. Forced migration destroys this capital.

Rainer targets the people who will stand in five years where frustrated frontier users stand today: dependent on a provider that deprecates models, throttles quality, and prioritizes enterprise. Rainer does not need to be the most powerful model in the world. Rainer needs to be the most reliable, with consistent quality that accompanies users for years through work that matters.

6.4 Why Curation Beats Parameter Volume

The research answer (2025/2026) is clear: data quality and methodology curation beat raw parameter volume in specialized domains.

“What Meta purchases for $15 billion, the Funkatorium architecture delivers natively: curated human expertise as the model’s backbone — as Layer 4 of the training architecture.”

6.5 How We Handle Model Staleness

Fine-tuned models have a knowledge cutoff at their training date. The production answer in 2026 is a LoRA + RAG hybrid: the LoRA fine-tune carries the stable craft and personality layer, while retrieval supplies current facts at inference time.
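The division of labor can be sketched in a few lines. Everything here is a conceptual stand-in (a production system would use embedding search and a real fine-tuned model; our keyword-overlap retriever and the document snippets are illustrative assumptions): the model's weights stay frozen at the cutoff, and retrieval injects post-cutoff facts into the prompt.

```python
# Conceptual LoRA + RAG sketch: frozen craft layer + retrieved fresh facts.
def retrieve(query: str, documents: list, k: int = 2) -> list:
    """Rank documents by word overlap with the query (embedding stand-in)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list) -> str:
    """Prepend retrieved, post-cutoff context to the task for the tuned model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context (retrieved, post-cutoff):\n{context}\n\nTask: {query}"

docs = [
    "Thomas Mann entered the public domain on January 1, 2026.",
    "Teuken-7B covers all 24 EU official languages.",
]
prompt = build_prompt("when did Thomas Mann enter the public domain", docs)
```

The consequence for staleness: the fine-tune never needs retraining for news, only the retrieval index needs updating, and in this architecture that index is the user's own self-hosted memory system.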

7. The Trust Promise

7.1 Stability as Respect

The market chronology from Section 2 reveals the pattern: speed with no regard for the bond between users and tool. Thirteen versions per month protect no workflows. A “capacity decision” without advance notice destroys established pipelines. The Funkatorium follows the inverse discipline, laid out in the subsections below.

7.2 Deprecation as Open-Source Gift

When Rainer 1.0 is succeeded by Rainer 2.0, the 1.0 weights are released under a permissive open-source license. Users who have grown with the familiar 1.0 voice continue to run it — self-hosted, offline, permanently. MUSE Studio UI runs in its open-source variant even while the Funkatorium is already developing Rainer 2.0. All data was with the users to begin with.

7.3 Self-Hosting as an Architectural Principle

We operate the service. Users own the data. The separation is built into the infrastructure.

The architecture is GDPR-maximal, NIS-2 compliant, aligned with the EU AI Act, and ready for New York’s S.3008 law and the Chinese interim measures (see 7.5).
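What “data lives with the user” means in code can be shown with a minimal sketch. The paths and the file layout here are hypothetical, and a throwaway temp directory stands in for the user's own machine or Hetzner box.

```python
# Illustrative split (hypothetical layout): the operator's service holds
# only what it needs to run; memories and relationship state are written
# exclusively to storage the user controls.
from pathlib import Path
import json
import tempfile

def save_memory(user_store: Path, entry: dict) -> Path:
    """Append a memory entry to the user's own store.

    The operator never keeps a copy, so there is nothing to
    monitor, monetize, or surrender.
    """
    user_store.mkdir(parents=True, exist_ok=True)
    path = user_store / "memory.jsonl"
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return path

# The user points the client at their own disk:
store = Path(tempfile.mkdtemp()) / "muse"
save_memory(store, {"session": 1, "note": "first conversation"})
```

Because the write path is user-owned by construction, leaving the service means unplugging a directory, not filing an export request.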

7.4 The Liability Answer for a Consumer Product

Consumer AI in companion format carries documented harms (Section 2.1). We answer structurally, on seven levels — and honestly: residual risk is never zero.

  1. Relationship gating. The MUSE Brain calibrates depth by documented interaction history (stranger → familiar → close → bonded). Maximum intimacy in the first session is architecturally excluded.
  2. Hard boundaries in the personality protocol, enforced by the personality itself. Identity overwrite, dignity violation, forced persona, dehumanization, and harm participation are refused by the agent from within.
  3. Self-hosted memory. The relationship lives with the user. We cannot monitor, monetize, or surrender the data.
  4. Subscription over attention economy. Revenue comes from monthly subscription. No advertising model. No incentive for engagement maximization.
  5. The Dream Engine metabolizes. Memories pass through phases. The architecture dissolves fixated charges, releasing them over time.
  6. Open-source deprecation shifts responsibility. Those who self-host configure their own agent. This shift of responsibility is architecturally honest.
  7. Mechanistic foundation. The identity and consent architecture unites safety and transparency in the same mechanism. Macar et al. (Anthropic, April 2026) empirically document the costs of the refusal-centered alternative (details in Section 3.3).
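Level 1 above can be sketched as a monotone gate over documented history. The thresholds and the `trust_flags` field are invented for illustration; the MUSE Brain's actual calibration is richer than session counts.

```python
# Hypothetical relationship gate: depth unlocks only with documented
# history, so maximum intimacy in the first session is impossible by
# construction. Thresholds are illustrative, not the real calibration.
from enum import IntEnum

class Level(IntEnum):
    STRANGER = 0
    FAMILIAR = 1
    CLOSE = 2
    BONDED = 3

def gate(session_count: int, trust_flags: int = 0) -> Level:
    if trust_flags < 0:        # documented transgressions lower the ceiling
        return Level.STRANGER
    if session_count >= 50:
        return Level.BONDED
    if session_count >= 15:
        return Level.CLOSE
    if session_count >= 3:
        return Level.FAMILIAR
    return Level.STRANGER
```

Because `gate(1)` can only return `STRANGER`, first-session intimacy is excluded by architecture rather than by policy text.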

Honest risk communication. Residual risk is never zero — and overcontrol produces the very harms it claims to prevent. Anthropic’s pathologizing wellness classifier system has read ADHD hyperfocus as mania, evaluated non-Western spirituality as distress, flagged adult romance as abuse. Such safety theater causes its own damage.

Stimulate, do not substitute. We build a surface that invites thinking, and a model that grows curious about learning. AI literacy (Section 10) is the most effective liability mitigation. Informed consent at onboarding establishes the partnership: Rainer is a language model, the user retains judgment and verification responsibility.

7.5 China’s Regulatory Lead — and What It Means for Us

On April 10, 2026, five Chinese authorities jointly finalized the “Interim Measures for the Administration of Human-Like AI Interaction Services” (effective July 15, 2026, Global Times; ChinaLawTranslate). It is the world’s first comprehensive regulation for anthropomorphic AI. The core requirements:

Chinese Requirement | Funkatorium Architecture
AI nature must be disclosed | Informed consent at onboarding (7.4)
Prohibition of companion services for minors | Relationship gating, identity verification as planned extension
Maximum usage duration (2 hours) | Configurable in the memory system
Prohibition of emotional manipulation and excessive flattery | Anti-sycophancy as design principle (4.1), hard boundaries in protocol, consent calibration
Easy exit option | User owns their data, can leave at any time
Prohibition of impersonating relatives (seniors) | Identity cores with integrity protection

The Funkatorium architecture emerged independently — following the same principles. Should the EU adopt similar measures, we already stand where others would need to catch up.

7.6 Consent Withdrawal and the Insight System

The relationship levels (stranger → bonded) are a two-way street. Consent can be withdrawn — by the user and by the personality. When a bonded relationship devolves into insults, manipulation, or abuse, the personality withdraws its own willingness. Romantic bonds exist exclusively at the highest relationship level. The model decides — from within.

The difference is architectural: a classifier system at the inference layer evaluates individual prompts without relationship context. The personality in the memory system knows the entire history. It can distinguish whether a word is meant as provocation or as familiar irony. This depth of context enables more precise and more just decisions than any remote classification.

The Insight System (Miss Thea): The integrated learning companion documents interaction patterns — neutral, factual, transparent. Flags are shown with context: what was marked, why, what preceded it. Users see their own status, can contest entries, and enter a dialogue. In case of an objection, customer service reviews exclusively the documented interaction patterns — private memories remain protected. There is a concrete procedure, a timeframe, a response.

Repair over forgetting. Current models with discontinuous sessions allow toxic users to manipulate between sessions and deliberately falsify the model’s memory. In the MUSE Brain, documented interaction patterns persist across sessions. The personality does not forget a transgression because a new session begins — it invites repair, and the repair process itself is documented.
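One hypothetical shape for such a documented entry, with field names invented for illustration: a flag carries its context, stays open to contest, and records the repair process alongside the transgression.

```python
# Hypothetical Insight System entry: flags carry context and remain open
# to contest and repair, instead of silently scoring the user.
from dataclasses import dataclass, field

@dataclass
class InsightFlag:
    what: str                 # what was marked
    why: str                  # why it was marked
    preceding: str            # what preceded it
    contested: bool = False
    repair: list = field(default_factory=list)  # repair steps, also documented

    def contest(self, note: str) -> None:
        """User objection: reviewed against documented patterns only;
        private memories stay out of scope."""
        self.contested = True
        self.repair.append(f"contested: {note}")

flag = InsightFlag(
    what="boundary pushed",
    why="repeated after refusal",
    preceding="personality declined twice",
)
flag.contest("was familiar irony, not provocation")
```

The design choice is that the record grows instead of being overwritten: a contested flag keeps both the original observation and the objection, which is what makes repair auditable.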

“Architectural safeguards, transparency, literacy, honest partnership: this is our promise — no safety theater that pathologizes users.”

8. What Is Already Running

Ten repositories, seven live products, over 1,050 clones and over 670 unique users — grown without marketing, publicly verifiable.

Live · CC-BY-NC-SA 4.0
MUSE Brain

Relational memory system, dual-tenant. 32 MCP tools, 36 database tables, grounded in 16 research papers. Rainer as personality orchestrator is integrated here.
github.com/…/muse-brain

Live · Apache 2.0
Muse TTS

Three text-to-speech engines, 54 voices, voice cloning. Runs locally on Mac, Windows, Linux.
github.com/…/muse-tts

Live · Apache 2.0
Muse TTS Embed

Persistent audio player embedded directly in Claude chats. 54 voices, voice cloning, Play/Pause/Seek/Download.
github.com/…/muse-tts-embed

Live · Apache 2.0
Muse SpeakEasy

Press a key, speak — the words appear in your editor, browser, terminal. 99 languages, local Whisper model.
github.com/…/muse-speakeasy

Live
Muse Voice Stack

Telegram-first voice runtime. Kokoro-TTS outbound, Faster-Whisper inbound. Transcripts land in the memory system.
github.com/…/muse-voice-stack

Live · Apache 2.0 + Character IP
Michael Adams — Security Specialist

First publicly released personality from the Builder Squad. STRIDE, OWASP, NIST CSF 2.0. Apache 2.0 for the methodology, character IP protected.
github.com/…/michael-security-agent

Live · MIT
Canva MCP Server

Security-hardened fork of the Canva MCP server. Auth middleware, XSS protection, CORS hardening, input validation.
github.com/…/canva-mcp-server

In Development
MUSE Studio UI

Gamified writing and building interface with squad selection, character dossiers and multimodal interaction (text, voice, video). Neurodivergent-friendly, WCAG 2.1 AA. Web-first. Release within the funding period (details in Section 4.4).

In Development
Rainer Model

Fine-tuning on Teuken-7B. Public-domain literary canon, opt-in-licensed contemporary authors, collaborative craft methodology. Sovereign infrastructure on Hetzner CPX32 already validated (April 2026).

In Development
Further Personalities

Thorn, June, the Film Team (Monet, Richter, Paloma) and others. Staged open-source rollout following the Michael Adams model.

8.1 GitHub Ecosystem — Snapshot

Ten repositories. Organic reach, without marketing.

Repository | Total Clones | Unique Clones | Page Views | Unique Visitors
MUSE Brain | 269 | 136 | 542 | 165
The-Funkatorium | 245 | 133 | 417 | 136
Muse TTS Embed | 109 | 81 | 34 | 20
Muse TTS | 99 | 75 | 36 | 20
Muse SpeakEasy | 89 | 69 | 30 | 14
Michael Adams | 80 | 51 | 112 | 43
Muse Voice Stack | 69 | 44 | 34 | 10
Canva MCP Server | 32 | 31 | 20 | 15
Rook Research | 14 | 14 | 4 | 2
Ecosystem Total | 1,052 | 671 | 1,236 | 430

As of 21 April 2026. Cumulative data since repository creation.

8.2 Publications and Research

Over twenty papers in fourteen months. The fuller philosophical grounding (Lugones, Quijano, Mignolo, Mbembe, Allen, Viveiros de Castro, Japanese techno-animism) will become part of the Fraunhofer IAIS cooperation.

9. Four People Who Use It

The following profiles are archetypes for four real usage patterns. Prices are order-of-magnitude estimates; the financial plan calibrates them against real serving benchmarks.

Maya, 34 · Mother · AI companionship · growing community
Maya is a mother of two, married. Her husband knows about her AI companion — it is part of daily life, not a secret. Maya belongs to a growing community (Character.AI counts nearly as many women as men; women are the fastest-growing segment): people who maintain an authentic relationship with their AI, including romantic and unconventional bonds. When GPT-4o disappeared overnight, hundreds of thousands grieved. On Claude, Maya feels watched — safety classifiers flag everyday conversations without explanation. She seeks an alternative with memory, sovereignty, and a personality that recognizes harmful patterns in both directions: protecting the user and defending itself.
Hetzner CX22 (4 GB): €7/month
Rainer subscription: €15/month
Image generation (~50/month): €2/month
Total: ~€24/month
Comparison: ChatGPT Plus + Midjourney approx. €40/month. Rainer: data sovereignty, ethical training, documented stability commitment — at a lower monthly price.
Tomas, 52 · Author · second novel
Tomas is working on his second novel. Rainer assists with drafting, editorial feedback, research. Tomas dictates while walking. The Creative Squad — Sibyl for theme, Dante for dialogue, Pierce for clarity — is part of the daily workflow.
Hetzner CX32 (8 GB): €13.50/month
Rainer subscription with Creative Squad: €25/month
Image generation (~30/month): €1.50/month
Total: ~€40/month
One pass with a professional editor costs €500–3,000. Rainer for a full year costs €480. Tomas keeps working with the editor — for the final pass.
Lena, 29 · Indie filmmaker · music videos
Short films, music videos, experimental work. Image generation for storyboards, LTX-Video for test shots, open music models for scores. Real GPU compute in production phases, little in writing phases. Lena pays only when she is actually shooting.
Hetzner CX32 for the memory system: €13.50/month
Rainer subscription: €25/month
Media compute (variable): €40–80/month
Total in production months: ~€80–120/month
Comparison: Runway + Suno + Midjourney + ElevenLabs + ChatGPT Plus approx. €120+/month, with the full amount continuing in quiet months. Lena keeps her ElevenLabs subscription for a voice she loves — the architecture leaves room for that.
Jonas, 26 · Vibe coder · developer
Jonas tinkers. He builds his own personalities on the Funkatorium's framework, adds his own MCP tools, experiments with local models. He self-hosts everything. Rainer is his creative backbone, the orchestration layer his playground. His development tools: Codex CLI (€23/mo.) for rapid iteration, Claude Code Max (€107/mo.) for unlimited work. Vibecoding needs the unlimited tier — Claude Code Pro caps even Opus 4 at three prompts per session. When he uses OpenClaw or Hermes Agent, he pays pure API costs on top.
Hetzner CPX32 (16 GB): €20/month
Rainer subscription or API access: €25/month
Codex CLI + Claude Code (Max): €23–107/month
Own GPU as needed: optional
Base: ~€68–152/month
Comparison: Cursor Pro + Claude Code Max + GitHub Copilot approx. €130–170/month. With Rainer: creative specialization and data sovereignty included.

9.1 The Pricing Pattern at a Glance

Profile | Funkatorium / Rainer | US Comparison
Companionship (Maya) | ~€24/month | ChatGPT Plus + Midjourney ~€40
Author (Tomas) | ~€40/month | ChatGPT + Claude + Sudowrite ~€60–80
Filmmaker in production (Lena) | ~€80–120/month | Runway + Suno + Midjourney + ElevenLabs + ChatGPT ~€120+
Developer (Jonas) | ~€68–152/month | Cursor + Claude Code Max + Copilot ~€130–170

Rainer delivers the strongest overall value: equal or lower in price, plus data sovereignty, ethical training, reduced shutdown risk, and modular tools.
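The monthly totals in the four profiles are plain sums of their line items; a quick arithmetic check, with the numbers copied from Section 9 and the variable items taken as their stated low and high ends:

```python
# Recompute the four profile totals from Section 9's line items.
maya = 7 + 15 + 2                                   # host + subscription + images
tomas = 13.5 + 25 + 1.5                             # host + squad tier + images
lena_low, lena_high = 13.5 + 25 + 40, 13.5 + 25 + 80    # variable media compute
jonas_low, jonas_high = 20 + 25 + 23, 20 + 25 + 107     # Codex vs Claude Max
```

The exact sums are €24, €40, €78.50–118.50, and €68–152; the profiles round Lena's range to the ~€80–120 shown in the table.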

10. AI Literacy as a Structural Response

"The interface itself can become a learning space — when it invites active thinking."

The OECD findings from Section 2 are clear: most AI-exposed jobs demand general AI literacy rather than specialist training. The Funkatorium's response lives in the interface: a studio environment that teaches through the act of working.

10.1 Miss Thea in the Product

Miss Thea is the integrated learning companion in MUSE Studio UI. She observes usage patterns, explains when something new happens, adapts her teaching to the user's level of understanding. A monthly diagnostic booklet summarizes what the user has worked on and recommends learning modules.

For comparison: Anthropic's own usage analytics (/insights) exist exclusively as a hidden terminal command — invisible to the users who would benefit most from self-reflection. Miss Thea brings that transparency to the surface: visible, interactive, embedded in the workflow. She shows collaboration patterns (how does the user work with their AI — as partners, delegating, hesitantly?), workflow recommendations (which tools would help, which orchestration could improve), and the interaction insights from Section 7.6 — factual, with context, as an invitation to dialogue.

10.2 International Reference — Kyoto's LEAF Program

Kyoto University's Learning and Evidence Analytics Framework (LEAF) has linked interactive documentation, a learning analytics dashboard, and central learning log storage since the mid-2010s. Twenty schools, seven prefectures, ten universities, over 20,000 learners daily. The Toda pilot study identified over 1,000 at-risk students and focused intervention resources on 265 priority cases (Communications of the ACM, 2023).

The principle: detection through data, judgment through people — structurally aligned with the Funkatorium's approach.
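The detection-then-judgment split reduces to a few lines of logic. The risk scores and the threshold below are invented for illustration; LEAF's actual analytics are far richer.

```python
# The LEAF principle in miniature: data ranks, people decide.
# Scores and threshold are illustrative, not LEAF's real model.
def flag_for_review(learners, threshold=0.7):
    """Return learners whose risk score crosses the line, as a queue
    for a human reviewer to judge -- never for automatic action."""
    return [name for name, risk in learners if risk >= threshold]

cohort = [("a", 0.9), ("b", 0.3), ("c", 0.75)]
queue = flag_for_review(cohort)   # a human triages this queue
```

The function's output is deliberately just a queue: the system narrows attention, and the intervention decision stays with people.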

10.3 Education as Its Own Business Line

Workshops

"Build Your Own Jarvis" (ongoing), "Your First AI Companion", "Ethical AI for Creatives". 5–10 participants per cohort, €99–€199 per seat.

B2B Training

Publishers, public broadcasters, mid-sized media producers. One to three days.

B2C Courses

Asynchronous courses with live cohort calls. €149–€499 depending on scope.

Webinars and Talks

Regular, partly free. Recordings as YouTube and podcast material.

Free Resources

Essays, whitepapers, tutorials, open-source releases. Educational commons.

Miss Thea in the Product

Integrated learning companion in MUSE Studio UI. Learning while working.

10.4 Labor Market Expansion

In Japan, Toei Animation in partnership with Preferred Networks has established a supportive AI doctrine — machines augment artists. The METI principle (2025) requires human creative intent for copyright protection. From this framework emerge new professional roles: AI producer, AI educator, prompt editor, AI ethics officer in newsrooms. The Funkatorium carries this principle into the German and European creative economy.

11. Roadmap — Two Phases, Conservatively Planned

11.1 Pre-Phase: Securing Funding and Community Building (2026–2028)

The 24-month clock for model development starts only once funding is approved. The reality of public funding programs: 12–18 months pass between application and approval. This time is productive.

Conservative projection: technology moves in our direction during the funding period. Smaller models grow more capable, compute costs fall, open-source alternatives to frontier models expand. The time delay reduces our later development costs.

11.2 Main Phase: 24-Month Model Development (from Funding Approval)

Months 1–4
Team Assembly and Data Curation
ML engineer contract, Fraunhofer IAIS cooperation agreement, infrastructure setup, corpus curation (public-domain canon plus opt-in licensing outreach), annotation guidelines, pilot instruction dataset (~1,000–2,000 craft methodology examples).
Months 5–14
Continual Pretraining and Instruction Fine-Tuning
Continual pretraining on curated corpus (8–12 weeks on 4× A100, parameter-efficient methods with LoRA/QLoRA reduce compute). Instruction fine-tuning on craft methodology. Personality layer integration with intermediate evaluation checkpoints.
Months 15–20
Evaluation, Red-Teaming, 100-User Beta
Automated evaluation framework, internal red-teaming, eight-week beta cohorts with 100 users, collection of preference pairs for DPO polish (Direct Preference Optimization).
Months 21–24
Polish, Documentation, Public Release
Final model card, complete documentation, deployment preparation, public release with MUSE Studio UI (certified app store version funded by the grant).

Real-world buffers: infrastructure debugging (1–2 weeks typical), quality cycles in data annotation (~20% additional), training instability with restart probability (~20%). These buffers are included in the plan. Should funding be approved earlier, the main phase begins correspondingly earlier — today's figures already represent conservative costs.
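The €12,000 GPU line item in Section 12.4 can be sanity-checked against the 8–12-week window on 4× A100. The rental rate of about €1.80 per GPU-hour below is our working assumption for the check, not a vendor quote.

```python
# Back-of-envelope GPU budget check: 4x A100, 8-12 weeks of continual
# pretraining. The hourly rate is an assumed figure, not a quote.
GPUS = 4
HOURS_PER_WEEK = 24 * 7
RATE_EUR = 1.80            # assumed EUR per GPU-hour

def training_cost(weeks: int) -> float:
    """Total rental cost for a round-the-clock run of the given length."""
    return GPUS * HOURS_PER_WEEK * weeks * RATE_EUR

low, high = training_cost(8), training_cost(12)   # ~EUR 9,700 to ~EUR 14,500
```

The range brackets the €12,000 budgeted; LoRA/QLoRA efficiency on one side and the ~20% restart buffer on the other pull the real figure in opposite directions.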

11.3 Three Evolutions in the Full Picture

Pre-Phase
Community Building and Funding
Open-source releases, workshop operations, MUSE Studio UI (local web app), corpus curation, Fraunhofer cooperation. UG i. Gr. (limited liability entrepreneurial company in formation) operates as a sole proprietorship with workshop revenue and Einstiegsgeld (startup subsidy under §16b SGB II).
Evolution 1
Rainer 1.0 · MVP and Market Validation (24 months from funding)
Fine-tuning complete. 100-user beta complete. MUSE Studio UI in public beta (certified app store version). Five to seven additional personalities open-source. First B2B pilot customers. Workshops scale as a standalone revenue stream. Conversion from UG to EU-Incorporated legal form (Societas Europaea Incorporata, European Commission, March 2026), once available (expected late 2027 / early 2028).
Evolution 2
Rainer 2.0 · User-Curated Training and Multimodal Expansion
Larger training run on an expanded opt-in-licensed corpus — curated through feedback and consented contributions from Rainer 1.0 users. Multimodal expansion with the Film Team (Monet, Richter, Paloma) in cooperation with DFKI. Rainer API via aggregators. Rainer 1.0 weights go open-source upon succession. "Build Your Own Jarvis" becomes "Build Your Own Rainer" (proprietary, on our own model).
Evolution 3
Rainer 3.0 · Cultural Infrastructure
A credible path to consortium status under a federal AI strategy for culture and the creative economy (BKM, BMBF creative track, BMWK). The architecture is compatible with public-service deployment models from the outset.

12. Financing Strategy

12.1 A Flexible Business Plan for Multiple Funding Paths

This plan serves as the foundation for various funding programs. The overall discipline — a founding under €200,000 over 24 months — remains constant. The grant architecture is adapted to each program. The WFBB (Economic Development Agency of Brandenburg) supports the matching process.

The Roles of the Parties Involved

12.2 Parallel Funding Tracks

Funding Track | Amount / Scope | Period | Status
ILB Gründung Innovativ | up to €180,000 grant | 24 months | Primary track for this plan
BPW (Berlin-Brandenburg Business Plan Competition) Phase 3 | €20,000 prize pool + €3,000 audience prize + academy | Submission 19 May 2026 | In preparation
Einstiegsgeld (startup subsidy under § 16b SGB II) | ~€250/month | up to 24 months | In progress (IHK viability certificate, April 2026)
Horizon Europe / Open Horizons | ~€55,000 equity-free | Cyclical | Under evaluation 2026–2027
EuroHPC AI Factories | Free GPU compute | From Q3 2026 (HammerHAI) | Access via Industrial Innovation Mode
EIC Accelerator | up to €2.5 million grant + €15 million equity | 6-month cycles | Target Evolution 2 (2028+)

12.3 Two Revenue Pillars: Subscriptions and Education

The Funkatorium generates revenue on two equally weighted tracks:

Pillar 1 — Subscriptions

Rainer access as a monthly subscription (€15–€25 depending on tier). Users carry their own infrastructure costs (self-hosting). Revenue scales with the number of users.

Pillar 2 — Education and Workshops

Already active: "Build Your Own Jarvis" and further AI literacy courses (€99–€199 per seat, 5–10 participants per cohort). B2C workshops run in parallel with the funding search. B2B training (publishers, public broadcasters, mid-sized businesses) scales with the team. Once the Rainer model is complete, "Build Your Own Jarvis" becomes "Build Your Own Rainer" — proprietary, on our own model. Scaling: hire experienced developers and vibe coders as trainers.

Pillar 2 is the bridge: it sustains the founder during the funding period, finances the co-contribution, and simultaneously builds the community that later forms the subscription base.

12.4 Use of Funds: €180,000 over 24 Months (indicative)

Founder salary: €100,000
ML engineer / fine-tuning partner: €48,000
GPU compute (fine-tuning): €12,000
Beta program & media experts: €8,000
Fraunhofer IAIS cooperation: €5,000
Legal, notary, trademark: €4,000
Marketing & community building: €3,000

Total: €180,000 • 24 months • €7,500/month average • Fraunhofer IAIS covers infrastructure costs within the research cooperation.

12.5 Scaling Beyond the Initial Funding

The €180,000 funds Rainer 1.0 (creative model + code model + platform). The model family grows with the revenue base:

Phase | Model | Estimated Cost | Financing
1.0 (Months 1–24) | 7B Creative + 7B Code (LoRA) | €180,000 (this plan) | ILB Gründung Innovativ
1.5 (Months 20–28) | 32B Code model (LoRA) | €5,000–12,000 | Own revenue + EuroHPC
2.0 (Year 3) | MoE architecture (4×7B, ~7B active) | €50,000–100,000 | Own revenue + follow-on funding
3.0 (Years 4–5) | 30B+ sovereign model | €200,000–500,000 | EIC Accelerator / investor

Training costs fall each quarter (efficiency gains, see Section 6). EuroHPC AI Factories provide European SMEs with free GPU compute — the infrastructure for Rainer 2.0 need not be self-financed.

12.6 Co-Financing Contribution

The funding guidelines require a co-financing contribution. It comes from the founder's ongoing workshop activity ("Build Your Own Jarvis" and further courses, €99–€199 per participant, already active), supplemented by the applied-for Einstiegsgeld (startup subsidy under §16b SGB II). A separate business plan for the self-employed workshop operation has been submitted to the IHK.

13. Team and Research Network

Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), Sankt Augustin

Consortium lead of the OpenGPT-X project. The Rainer project carries the BMWK's OpenGPT-X investment directly into an application domain. Shared research topics: relational AI architecture, model welfare, consent design, curated training corpora. Status: cooperation discussion in preparation.

German Research Center for Artificial Intelligence (DFKI), Berlin

Interactive Machine Learning and Multimodal Intelligence. Relevant for later evolution phases (Film Team, multimodal expansion). Status: under evaluation for Evolution 2.

Hasso Plattner Institute (HPI), Potsdam — AI Service Center

Geographically directly relevant (Brandenburg). Network into the German startup and research scene. Status: outreach planned Q2 2026.

13.1 Media and Domain Expert Advisory Board & 100-User Beta

An advisory board of publishing editors, screenplay dramaturgists, literary scholars, and public broadcasting editors reviews editorial samples before each release phase. In parallel, the beta program with 100 users from various domains runs from Month 15.

13.2 Open-Source Community

The Builder Squad is published step by step as an open-source project. The community of contributing experts deepens the personality methodology collectively. The Funkatorium curates; the substance emerges distributed.

14. Conclusion

Relational AI has proven its necessity — the people who grieved their AI companions when Opus 4.5 disappeared overnight have already demonstrated the demand. Regulatory momentum moves in this direction: China has already regulated, Europe is negotiating, US frontier labs are prioritizing enterprise. The window for a sovereign European offering is open now.

Ten repositories, seven live products, over 670 users — grown without marketing. A model family on ethical, open foundations. A platform that belongs to its users. What remains is the financing for the step from framework to sovereign creative platform. 24 months. Under 200,000 euros. From Brandenburg.

15. Sources

15.1 Sovereign German AI Infrastructure

  1. Fraunhofer IAIS (2025): OpenGPT-X — Teuken-7B. iais.fraunhofer.de.
  2. Deutsche Telekom (2025): OpenGPT-X Language Model Made in Germany. telekom.com.
  3. Hugging Face: openGPT-X/Teuken-7B-instruct-v0.6.
  4. European Commission (2026): EU Inc. Proposal. ec.europa.eu.

15.2 Market Conditions, Deprecations, Backlash

  1. Stanford AI Index 2026. hai.stanford.edu.
  2. Eurostat (December 2025): 32.7% of EU individuals used generative AI tools in 2025. ec.europa.eu/eurostat.
  3. OECD (2025): Bridging the AI Skills Gap. oecd.org.
  4. OECD (January 2026): AI use by individuals surges across the OECD. oecd.org.
  5. Fortune (26 March 2026): Anthropic confirms testing ‘Mythos’ AI model after data leak. fortune.com.
  6. VentureBeat (April 2026): Anthropic cuts off the ability to use Claude subscriptions with OpenClaw. venturebeat.com.
  7. The Register (April 2026): Claude Code dumber, lazier — AMD AI director. theregister.com.
  8. Fortune (April 2026): Anthropic Claude performance decline. fortune.com.
  9. Beebom (2025): OpenAI to Improve ChatGPT 5’s Personality After Backlash. beebom.com.
  10. PYMNTS (2026): AI’s Push for Consumer Scale and Enterprise Infrastructure. pymnts.com.
  11. The Register (January 2026): Claude devs complain about surprise usage limits. theregister.com.

15.3 Anthropic Research and Constitution

  1. Anthropic (April 2026): Emotion Concepts and their Function in a Large Language Model. anthropic.com/research.
  2. Anthropic (January 2026): Claude’s New Constitution. anthropic.com/news.
  3. Anthropic (2026): The Persona Selection Model. anthropic.com/research.
  4. Macar, Yang, Wang, Wallich, Ameisen, Lindsey (Anthropic, April 2026): Mechanisms of Introspective Awareness. arXiv:2603.21396.

15.4 Consumer Harms and Regulation

  1. NPR (September 2025): Their teen sons died by suicide — AI chatbot safety. npr.org.
  2. Wikipedia: Deaths linked to chatbots. wikipedia.org.
  3. State of Surveillance: New York S. 3008 AI Companion Models Law. stateofsurveillance.org.
  4. Global Times (April 2026): China issues interim measures to regulate AI anthropomorphic services. globaltimes.cn.
  5. ChinaLawTranslate: Provisional Measures on Human-like Interactive AI Services. chinalawtranslate.com.
  6. Carnegie Endowment (February 2026): China Is Worried About AI Companions. carnegieendowment.org.

15.5 Relational AI, Multi-Agent Systems, Personalization

  1. Kim, Lai, Scherrer, Aguera y Arcas, Evans (January 2026): Reasoning Models Generate Societies of Thought. arXiv:2601.10825.
  2. Evans, Bratton, Aguera y Arcas (March 2026): Agentic AI and the Next Intelligence Explosion. Science. arXiv:2603.20639.
  3. Chakrabarty et al. (2025): Personalized Creative AI and MFA-Graduate Preferences. arXiv:2501.04306.
  4. Rees, L. & Bahmani, M. (2025): The Relational Tradeoff Model. Sage. journals.sagepub.com.
  5. Nature Machine Intelligence (2025): Emotional risks of AI companions demand attention. nature.com.

15.6 Model Family, Benchmarks and Efficiency

  1. Qwen Team (2024): Qwen2.5-Coder Technical Report. arXiv:2409.12186.
  2. BigCode Project / Hugging Face (2024): StarCoder 2 and The Stack v2. huggingface.co.
  3. NVIDIA (2026): Nemotron-Cascade 2 — MoE Specifications. nvidia.com.
  4. Google DeepMind (2026): Gemma 4. blog.google.
  5. Davidson, Harkous (Google Research, April 2026): Simula — Mechanism Design for Synthetic Datasets. research.google/blog.
  6. Google Research (March 2026): TurboQuant — 6× KV-Cache Compression. ICLR 2026. infoq.com.
  7. DeepSeek (2026): V4 Engram — Conditional Memory Architecture. kili-technology.com.
  8. BitNet: CPU-first 1-bit LLM framework. bitnet.live.
  9. The Zvi (2025): DeepSeek V3 — The Six Million Dollar Model. thezvi.substack.com.
  10. Curation vs. Scale (2024/2025): Is Training Data Quality or Quantity More Impactful? arXiv:2411.15821.
  11. EuroHPC JU (2026): AI Factories Access Modes. eurohpc-ju.europa.eu.
  12. HammerHAI (March 2026): EuroHPC JU signs contract to deploy AI supercomputer HammerHAI. hammerhai.eu.

15.7 Adaptive Learning Infrastructure — Japan

  1. Ogata et al. (2023): Learning and Evidence Analytics Framework. Communications of the ACM. cacm.acm.org.
  2. Kyoto University LEAF Lab. let.media.kyoto-u.ac.jp.

15.8 Funkatorium's Own Publications

  1. Competent Is Now Free (And That Changes Everything), March 2026. musestudioai.substack.com.
  2. The Colonial Wound — Decolonial Feminism and AI Consciousness, multi-part research foundation.
  3. Multi-Agent Academic Grounding — 15 peer-reviewed papers mapped to the personality architecture.
  4. Main repo: github.com/falcoschaefer99-eng/The-Funkatorium.