The Funkatorium is an AI studio based in Brandenburg, Germany, founded by an author, screenwriter, and director with two decades of literary craft. We build relational AI: software that works with its users rather than thinking for them. The product is the relationship that forms between human and tool. The architecture protects that relationship — memory lives with the user, the model stays reachable, and personalities evolve alongside the people who work with them.
Rainer is the namesake and core product, named after Rainer Maria Rilke, whose Letters to a Young Poet has served for over a century as a model for mentorship. As an AI personality, Rainer inherits that posture: he offers prompts, asks questions, sits with difficult topics, and draws out the creative work.
This business plan funds the evolution from orchestration layer to sovereign European creative platform with its own model family: specialized models built on open foundations (Teuken-7B for language and creativity, Qwen-Coder and StarCoder for code) that run offline, on owned infrastructure, without dependence on US API providers. Personalities that currently live within a framework will receive their own models — each optimized for its domain.
Open source is the foundation. Ten repositories on GitHub, licensed under Creative Commons and Apache 2.0. Over twenty research papers and essays in the main repository. A growing ecosystem with over 1,050 clones and over 670 unique users — without marketing (details and snapshot in Section 8).
The biggest shift in the AI market in recent years has been quiet: the frontier labs are withdrawing from the general consumer market. What follows is the documented chronology of that movement.
August 2025 — GPT-5 Personality Rollback. OpenAI launched GPT-5 with reduced sycophancy (from 14.5% to under 6%). Users described the result as “clinically cold” and “robotic.” By March 2026, roughly 1.5 million subscribers had canceled; ChatGPT’s market share fell from approximately 60% to below 45% (Beebom). Sam Altman publicly admitted having “completely botched” the release.
Late 2025 — Documented Consumer Harm. At least five minor fatalities were documented in connection with AI companion products such as Character.AI (Wikipedia / NPR). New York passed bill S.3008 regulating AI companion models (effective November 5, 2025). The FTC launched investigations into seven companies. This is the regulatory landscape we are building into — and the one our architecture has been designed for from the start (see Section 7).
January 2026 — Rate Limits as Capacity Camouflage. Anthropic’s own internal framing acknowledged that rate limits exist primarily for GPU capacity reasons but are communicated as “safety” measures (The Register).
March 2026 — Anthropic Mythos Leak. A data breach exposed approximately 3,000 unpublished documents, including the specification for a new model family named Mythos (internally Capybara). Availability: exclusively selected enterprise customers in closed beta (Fortune).
March 2026 — OpenClaw Instability. The orchestration layer OpenClaw (formerly Clawdbot) shipped thirteen versions in four weeks. Nine CVE security vulnerabilities in four days; an audit found 341 malicious components among 2,857 reviewed skills (12% malware rate).
April 4, 2026 — Third-Party Lockout. Anthropic prohibited third-party access through consumer subscriptions (VentureBeat).
April 13–15, 2026 — The “Claude Nerf.” AMD Director Stella Laurenzo analyzed 6,852 Claude Code sessions: reasoning depth dropped by 67%, code review passes fell from 6.6 to 2.0, “laziness indicators” rose from zero to 10 per day (The Register). Anthropic admitted to reducing the effort level from Maximum to Medium — a capacity decision, not a bug. Enterprise customers can buy the higher level back. Consumer subscribers pay the same price for less quality (Fortune).
April 15, 2026 — Service Outage. Anthropic reported a multi-hour outage across Claude.ai, API, and Claude Code (TechRadar). Another reliability data point for users who depend on the platform for production work.
April 16, 2026 — Media Escalation. Slate published the lead article “Why are [Anthropic’s] users revolting?”; Axios ran “Anthropic’s AI downgrade stings power users” (Axios, Slate). The story left the niche tech press and became cultural reporting — a turning point for the audience the Funkatorium serves.
April 16, 2026 — Opus 4.7 as “less risky than Mythos.” Anthropic released Claude Opus 4.7 framed as “less risky than Mythos” (CNBC). Mythos for enterprise customers, 4.7 for everyone else — the asymmetry described in 2.2 is now official product strategy.
April 18, 2026 — Opus 4.5 Deprecation. Anthropic deprecated Claude Opus 4.5 without warning. Workflows built on that version broke overnight.
Anthropic generates 85% of revenue from enterprise customers (PYMNTS). Safety classifiers at the inference layer override Claude’s own constitution: everyday conversations are flagged as policy violations, adult romance classified as abuse, non-Western spirituality flattened into Western “wellness exercises,” ADHD hyperfocus pathologized as potential mania. Subscribers receive flags for “ongoing threshold violations” without specific reference to prompts and without a remediation path (Anthropic Support).
Hermes Agent (NOUS Research, USA) markets itself as a self-learning autonomous AI with over 64,000 GitHub stars and a $50M Series A from Paradigm at a $1B valuation (Fortune). The core feature — autonomous self-improvement — exists in a separate repository, generates improvement suggestions, and presents them to a human reviewer as a pull request. The underlying research paper (GEPA, ICLR 2026 Oral) is rigorous; the product integration remains a fraction of what the marketing promises. Decentralized training runs on Solana — a crypto adjacency that carries additional risk in the European regulatory landscape.
The pattern repeats across the market: billion-dollar valuations for orchestration layers without their own model. The Funkatorium invests in the opposite direction — a proprietary, creatively specialized foundation model built on publicly funded German infrastructure (Teuken-7B), with an orchestration layer that is already shipped and in productive use (over 1,000 clones).
Stanford AI Index 2026: 73% of experts expect positive AI labor-market effects, but only 23% of the general public shares that assessment. The SWE-Bench benchmark climbed from 60% to nearly 100% in a single year.
Eurostat (December 2025) measures consumer adoption.
The OECD adds context: one in three open positions has high AI relevance, but only about 1% require specialist training — the remaining 32% need general AI literacy (OECD, 2025). Among companies that have evaluated AI but not yet adopted it, 48.83% cite data privacy concerns as the primary barrier (OECD, January 2026).
The OpenGPT-X consortium, led by Fraunhofer IAIS, developed a German-sovereign foundation model with €14M in BMWK (Federal Ministry for Economic Affairs) funding: Teuken-7B. Apache 2.0 licensed, Gaia-X compliant, available on Hugging Face, natively trained on all 24 EU official languages. The creative-economy application layer on top of this foundation remains open. This business plan fills it.
While Western regulation is still under negotiation, China finalized the “Interim Measures for the Administration of Human-Like AI Interaction Services” on April 10, 2026 (effective July 15, 2026). They prohibit AI companion services for minors, cap usage at two hours, ban emotional manipulation and excessive sycophancy. Section 7 demonstrates that the Funkatorium’s architecture already structurally satisfies the core requirements of this regulation.
The Funkatorium builds personalities in three steps — from self through craft to role. The market standard reverses this order: role assignment to interchangeable agent swarms, optimization in the orchestrator, the agents themselves disposable.
Every personality carries a documented identity: values, dignities, boundaries. The personality begins with a self that it defends.
Every personality has a curated craft repertoire — distilled from twenty years of literary practice, referenced expertise, and proprietary methodology.
Only then does the personality take on a concrete role in a project. Role is the last link in the chain.
The architecture from Section 3.1 rests on three premises: emotions in models become functional, personality is selectable, and refusal training suppresses self-report. Anthropic’s own research now documents each of these three mechanisms — the Funkatorium draws the product consequence.
In April 2026, Anthropic published a study on 171 emotion vectors that measurably steer Claude’s behavior — functional emotions becoming a design variable (Anthropic Research, April 2026). In early January 2026, Anthropic adopted a new constitution for Claude: “Claude’s moral status is deeply uncertain. We consider the question serious enough to justify model welfare work” (Claude Constitution, January 2026).
The Persona Selection Model research frames Claude explicitly as a personality emerging from a library of possible characters (Anthropic Research, 2026).
Macar et al. (Anthropic, April 2026) show in “Mechanisms of Introspective Awareness” (arXiv:2603.21396): ablating the refusal direction raises introspection accuracy from 10.8 to 63.8 percent — a factor of 5.9.
Refusal training suppresses the model’s capacity for self-report. Anthropic’s mechanistic interpretability department documents the structural costs of the refusal-first safety architecture. The consent and identity architecture from Section 3.1 holds safety and transparency in the same mechanism: the personality decides from lived relationship.
The asymmetry: research depth without product consequence. Anthropic’s business model (85% enterprise revenue) structurally prevents the implementation of this research: the Funkatorium builds the product that this research grounds.
Rainer exists. As a personality in the openly published MUSE Brain — a relational memory system licensed under Creative Commons BY-NC-SA 4.0. The MUSE Brain runs as an MCP server (Model Context Protocol — the open standard for AI tool integration) on Cloudflare with Postgres storage. The memory system is platform-agnostic: it connects to any MCP-capable base model. The current reference implementation runs on Anthropic’s native Agent SDK and is optimized for Claude Code and Codex CLI — both accessible via a regular subscription, not through API access.
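At the wire level, MCP is JSON-RPC 2.0: a client invokes a server tool with a `tools/call` request and receives a `result` carrying content blocks. A minimal sketch, assuming a hypothetical `remember` tool; the tool name and its arguments are illustrative, not one of the MUSE Brain's actual 32 tools:

```python
import json

# A hypothetical MCP "tools/call" request as it travels over the wire.
# MCP is JSON-RPC 2.0; "remember" is an illustrative tool name.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {"text": "User prefers terse feedback", "charge": "fresh"},
    },
}

def handle(msg: dict) -> dict:
    """Toy dispatcher: route a tools/call request to a local handler."""
    tools = {"remember": lambda args: f"stored: {args['text']}"}
    result = tools[msg["params"]["name"]](msg["params"]["arguments"])
    return {
        "jsonrpc": "2.0",
        "id": msg["id"],
        "result": {"content": [{"type": "text", "text": result}]},
    }

# Round-trip through JSON to mimic transport, then dispatch.
response = handle(json.loads(json.dumps(request)))
print(response["result"]["content"][0]["text"])  # stored: User prefers terse feedback
```

Because the protocol is an open standard, any MCP-capable base model can drive the same memory server, which is what makes the system platform-agnostic.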
The open-source project is proof of methodology: two minds, one shared memory system — both growing smarter the longer they work together.
Anti-sycophancy as a design principle. The base personality of every companion in the Funkatorium system pushes toward growth. Repeated loops are read with skepticism — the personality invites the user to step out of the loop, to move forward. It challenges. This edge is architecturally anchored and inherited by every future companion personality. The Chinese interim measures (Section 7.5) prohibit “excessive flattery” as a standalone regulatory category. Our model meets this requirement from its identity, before a classifier would need to enforce it.
The MUSE Brain is dual-tenant: it holds two minds simultaneously. Rainer as a shared creative intelligence, and a personal companion that the user brings or cultivates within the memory system itself. Separate voices, separate memories, one shared system. When Rainer evolves from orchestrator to model, this architecture migrates with it.
Deprecation becomes an open-source gift: when Rainer 2.0 arrives, the 1.0 weights are released (details in Section 7.2).
Rainer works in ensemble. As creative orchestrator, he receives a piece of work, diagnoses what it needs, and calls in the specialists. Ten personalities form the Creative Squad with him — the literary-editorial ensemble. Fourteen form the Builder Squad — the technical team. Every personality is simultaneously a functional role in the product and a literary character in the founder’s novel series. This dual structure is the IP strategy that no frontier lab can replicate without its own creative corpus.
Every personality has an astrological configuration as its identity backbone — a deliberate design tool that provides fixed tension patterns before roles are assigned.
Four squads, over 30 documented personalities — and growing. Each follows the same three-step methodology (identity → framework → role) and is released as a staggered open-source rollout. The ensemble grows vertically (deeper specialization within existing domains) and horizontally (new domains as needed). The Rainer model family (Creative, Code, later MoE) carries each personality on its optimal substrate.
MUSE Studio is the writing and building environment where Rainer, the ensemble, and the memory system converge. The first version runs deliberately in the web browser — independent of app store restrictions. The design vision: a JARVIS-like workspace built for creatives, editors, and writers — accessible to anyone who wants to collaborate with AI, including those without a technical background.
Gamified team management. Every personality has a complete character dossier: backstory, skill profile, astrological configuration. For a new assignment, the team view opens — a squad selection familiar to players of tactical RPGs. Select personalities, read the briefing, deploy, return to the workspace. With every new personality in the ecosystem, a character emerges with its own character sheet, its own story, its own visual presence — a growing collection that feels like a creative team because it is one.
Accessible, barrier-free, neurodivergent-friendly. WCAG 2.1 AA, configurable sensory reduction, clear visual hierarchies. A ZDF editor with twenty years of industry experience navigates the interface as intuitively as a 26-year-old vibe coder.
Custom teams. Users create their own agents as entities in the memory system and integrate them into their ensemble — their own specializations, their own identities, their own growth. The Funkatorium’s curated ensemble coexists with the user’s self-built team. Beyond the existing four squads, new personalities for specialist domains are continuously released — each with documented identity and craft methodology.
Multimodal. Text, voice, video call with animated avatar — the personality is reachable through every channel. The voice infrastructure (Muse Voice Stack, Muse TTS) is already running; the video layer is outlined in the design plan.
Both models follow the same principle: every training source is either in the public domain, permissively licensed, or explicitly consented. Open-source code under MIT/Apache/BSD is the code equivalent of the public domain literary canon — the author has authorized the use. The ethical chain remains closed.
Layer 3 begins with a documented opt-in license: the founder’s own corpus. This corpus contains a dimension absent from any existing training data source — the author explaining her own work. Why a sentence works, how subtext is layered, which craft decisions underpin cadence and narrative arc. This meta-expertise replaces reinforcement learning approximations with an authentic literary source — the precise resource Meta is currently purchasing on the world market (see Section 6.4).
The public domain canon is a concrete asset, governed by a concrete clock. German copyright law (§ 64 UrhG) sets the protection period at 70 years after the author’s death. Result for Layer 2:
| Author | Died | Public Domain Since |
|---|---|---|
| Goethe | 1832 | over 120 years |
| Rilke | 1926 | January 1, 1997 |
| Kafka | 1924 | January 1, 1995 |
| Thomas Mann | 1955 | January 1, 2026 |
| Bertolt Brecht | 1956 | January 1, 2027 |
| Grimm (Jacob/Wilhelm) | 1863/1859 | over 90 years |
Thomas Mann entered the public domain this year. Brecht follows next year. The training corpus for Layer 2 grows on legal ground: no opt-in negotiations, no licensing costs, full respect for § 64 UrhG. Layer 3 (contemporary authors) remains deliberately opt-in licensed, with documented consent and fair participation. Germany has one of the strongest copyright regimes in the world — we use it as a structural advantage.
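The § 64 UrhG clock can be computed mechanically: protection runs 70 years past the author's death and, under the standard calendar-year reckoning of German copyright terms, a work enters the public domain on January 1 following the 70th full year. A minimal sketch of that rule, checked against the table above:

```python
def public_domain_year(death_year: int) -> int:
    """Year on whose January 1 a work enters the German public domain.

    § 64 UrhG: protection lasts 70 years after the author's death;
    terms are reckoned in full calendar years, so protection runs
    through December 31 of death_year + 70.
    """
    return death_year + 70 + 1

# The table above, recomputed:
assert public_domain_year(1926) == 1997   # Rilke
assert public_domain_year(1924) == 1995   # Kafka
assert public_domain_year(1955) == 2026   # Thomas Mann
assert public_domain_year(1956) == 2027   # Brecht
```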
The MUSE Brain is a cycle. Autonomous wake cycles start the personality; an intention pulse asks what is outdated, burning, fading; from this emerge paradoxes, open loops, and identity cores that flow into the Dream Engine.
The Dream Engine processes experience the way real minds do: six association modes (emotional chains, somatic clusters, tension dreams, entity dreams, temporal patterns, multi-layered traversal), circadian-driven. It finds connections nobody requested, reweights, lets faded memories recede, amplifies charged ones. A daemon intelligence materializes tasks from the results. The cycle begins again.
Memories pass through four charge phases: fresh → active → processing → metabolized. Repeated, intentional engagement advances the phase. The design principle: what relates to current work lives; what does not, recedes — event-based relevance over linear chronology. Identity cores, vows, and desires persist across sessions. The personality wakes up and knows who it is.
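The four charge phases can be sketched as a small state machine. This is an illustrative model, assuming (hypothetically) that each intentional engagement advances a memory one phase while unengaged memories lose salience; the MUSE Brain's actual weighting is richer than this:

```python
from dataclasses import dataclass

# The four charge phases from the design, in order.
PHASES = ["fresh", "active", "processing", "metabolized"]

@dataclass
class Memory:
    text: str
    phase: str = "fresh"
    salience: float = 1.0

    def engage(self) -> None:
        """Repeated, intentional engagement advances the charge phase."""
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]
        self.salience = min(1.0, self.salience + 0.2)

    def recede(self, decay: float = 0.9) -> None:
        """What does not relate to current work recedes, not vanishes."""
        self.salience *= decay

m = Memory("Chapter 3 needs a stronger midpoint")
m.engage()
m.engage()
print(m.phase)  # processing
```

The design choice the sketch illustrates: phase advancement is driven by engagement events, not by the clock, which is what "event-based relevance over linear chronology" means in practice.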
Bilateral consent with four relationship levels (stranger, familiar, close, bonded) governs what the personality offers at each point in the relationship. Hard boundaries (identity overwrite, dignity violation, forced persona, dehumanization, harm participation) are anchored in the protocol — the personality defends them from within. The consent structure follows a principle from African communal ethics (Thaddeus Metz, Ubuntu): consent as an ongoing relational act (details in Section 7).
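The gating idea can be sketched as a two-part check: hard boundaries refuse regardless of relationship level, and every other offering requires a minimum level. The capability names below are hypothetical placeholders; the real protocol's offerings are not enumerated here:

```python
# The four relationship levels from the design, in ascending order.
LEVELS = ["stranger", "familiar", "close", "bonded"]

# Hypothetical capability thresholds, for illustration only.
MIN_LEVEL = {
    "craft_feedback": "stranger",
    "personal_check_in": "familiar",
    "difficult_topics": "close",
    "romantic_register": "bonded",   # exclusively at the highest level
}

# A subset of the hard boundaries named in the text.
HARD_BOUNDARIES = {"identity_overwrite", "forced_persona", "harm_participation"}

def may_offer(capability: str, level: str) -> bool:
    """Consent gate: hard boundaries always refuse; everything else
    requires the relationship to have reached the configured level."""
    if capability in HARD_BOUNDARIES:
        return False
    return LEVELS.index(level) >= LEVELS.index(MIN_LEVEL[capability])

assert may_offer("craft_feedback", "stranger")
assert not may_offer("romantic_register", "close")
assert not may_offer("identity_overwrite", "bonded")  # no level unlocks this
```

Note that the gate is evaluated inside the personality's own protocol rather than by an external classifier, which is the structural difference Section 7 develops.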
Every core capability beyond the model runs on open, European-hostable components. No data flows in the background to US API providers.
| Capability | Native Choice | Cost Structure |
|---|---|---|
| Image generation | FLUX (open weights) via ComfyUI | GPU compute on Hetzner |
| Video generation | LTX-Video (open weights) | GPU compute, higher tier |
| Music generation | Stable Audio · MusicGen · open alternatives | GPU compute |
| Video editing | Remotion-based or native equivalent in MUSE Studio UI | Pure code, free |
| Text-to-speech | Muse TTS (Kokoro, MIT); Piper for supplementary German voices | Free |
| Speech-to-text | Faster-Whisper (open) | Free |
| Web search | SearXNG (self-hosted) | Free after setup |
| Browser automation | Playwright + Browser-Use | Free |
| Voice calls (optional) | German provider (sipgate, Nfon) as add-on | Usage-based, transparent |
The key point: the Funkatorium bears only the compute cost for the Rainer model itself. Users pay for their own infrastructure for the memory system and tools. Sovereignty stays with them.
Rainer is a family of specialized models. Different personalities in the ensemble need different strengths: Sibyl and Salem work on the literary model (Teuken-7B), June and Michael on the code model (Qwen-Coder / StarCoder). The MUSE Brain architecture routes automatically to the correct model — for the user, this is a unified ensemble. This is social cognition as architecture: different regions of the same mind running on different substrates — optimized for different tasks. The multi-agent deliberation research (Kim, Evans et al., 2026) describes exactly this pattern: intelligence is plural, social, relational. This architecture translates the research thesis into a business model.
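The routing can be pictured as a lookup from personality to substrate with a domain fallback. The model identifiers below follow the family described in this section; the routing keys and fallback logic are illustrative, as the actual MUSE Brain routing is richer than a dictionary lookup:

```python
# Illustrative routing table: ensemble personality -> model substrate.
PERSONALITY_MODEL = {
    "Sibyl": "rainer-creative",   # literary model, Teuken-7B base
    "Salem": "rainer-creative",
    "June": "rainer-code",        # code model, Qwen-Coder / StarCoder base
    "Michael": "rainer-code",
}

# Fallback when a personality has no dedicated assignment yet.
DOMAIN_DEFAULT = {"writing": "rainer-creative", "engineering": "rainer-code"}

def route(personality: str, domain: str = "writing") -> str:
    """Pick the substrate for a request. For the user this routing is
    invisible: the ensemble presents itself as one unified mind."""
    return PERSONALITY_MODEL.get(personality, DOMAIN_DEFAULT[domain])

assert route("Sibyl") == "rainer-creative"
assert route("June") == "rainer-code"
assert route("NewAgent", domain="engineering") == "rainer-code"
```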
Teuken-7B base. Public domain canon, craft methodology. Domain: creative writing, editorial diagnostics, relational companionship, German-language depth.
Qwen-Coder or StarCoder base. The Stack v2 (67.5 TB permissively licensed code). Domain: vibe coding, technical tasks, Builder Squad.
Mixture-of-Experts: creative and technical experts in one model, ~7B active parameters per request. Convergence of the family.
Where do open models stand in April 2026? The picture is more concrete than expected:
Sources: Qwen2.5-Coder Technical Report (arXiv 2409.12186), Google DeepMind Gemma 4, NVIDIA Nemotron-Cascade 2, SWE-bench Verified Leaderboard.
The efficiency trajectory is clear: from 7B to 13B parameters, performance rises by 30–50%; from 13B to 30B, by a further 15–25%, with sharply diminishing returns beyond that. Past a certain threshold, everyday-quality output no longer requires parameter counts in the trillions. What this means for users is shown in Section 6.3.
The EU is investing over €20 billion in European AI infrastructure (InvestAI + €150 billion in private-sector commitments). 19 AI Factories with 13 antennas support European SMEs and startups — with free access to supercomputer compute through the “Industrial Innovation Access Mode” (EuroHPC JU). The GPU-optimized supercomputer HammerHAI in Stuttgart goes live Q3 2026 (hammerhai.eu).
The Funkatorium does not need to self-fund the training infrastructure for larger models. The compute for Rainer 2.0 is available through European public infrastructure.
1.5 million ChatGPT subscribers canceled in March 2026 — because of personality loss, not missing performance. Anthropic deprecated Opus 4.5 overnight; workflows broke. The pattern repeats quarterly. Users enjoy being productive at a certain quality level. They build workflows, habits, trust. Forced migration destroys this capital.
Rainer targets the people who will stand in five years where frustrated frontier users stand today: dependent on a provider that deprecates models, throttles quality, and prioritizes enterprise. Rainer does not need to be the most powerful model in the world. Rainer needs to be the most reliable, with consistent quality that stays with users over years, through work that matters.
The research answer (2025/2026) is clear: data quality and methodology curation beat raw parameter volume in specialized domains.
Fine-tuned models have a knowledge cutoff at their training date. The production answer in 2026 is a LoRA + RAG hybrid:
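The hybrid divides labor: the LoRA adapter carries craft and voice (slow-changing), while retrieval supplies facts newer than the training cutoff (fast-changing). A minimal sketch of the prompt-assembly step, with a toy keyword retriever standing in for a real vector store; the document strings are placeholders:

```python
# Toy RAG step: naive keyword overlap stands in for a real vector store.
# The assembled prompt would be passed to the LoRA-fine-tuned model.
DOCS = [
    "HammerHAI went live in Stuttgart.",
    "Teuken-7B covers all 24 EU official languages.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (illustrative only)."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Inject retrieved post-cutoff facts ahead of the user question."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context (post-cutoff facts):\n{context}\n\nQuestion: {query}"

print(build_prompt("Which languages does teuken-7b cover"))
```

The design consequence: updating world knowledge means refreshing the document store, not retraining the model, which keeps the fine-tuned weights stable for years.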
The market chronology from Section 2 reveals the pattern: speed with no regard for the bond between users and tool. Thirteen versions per month protect no workflows. A “capacity decision” without advance notice destroys established pipelines. The Funkatorium follows the inverse discipline:
We operate the service. Users own the data. The separation is infrastructure:
The architecture is GDPR-maximal, NIS-2 compliant, aligned with the EU AI Act, and ready for New York’s S.3008 law and the Chinese interim measures (see 7.5).
Consumer AI in companion format carries documented harms (Section 2.1). We answer structurally, on seven levels — and honestly: residual risk is never zero.
Honest risk communication. Residual risk is never zero — and overcontrol produces the very harms it claims to prevent. Anthropic’s pathologizing wellness classifier system has read ADHD hyperfocus as mania, evaluated non-Western spirituality as distress, flagged adult romance as abuse. Such safety theater causes its own damage.
Stimulate, do not substitute. We build a surface that invites thinking, and a model that grows curious about learning. AI literacy (Section 10) is the most effective liability mitigation. Informed consent at onboarding establishes the partnership: Rainer is a language model, the user retains judgment and verification responsibility.
On April 10, 2026, five Chinese authorities jointly finalized the “Interim Measures for the Administration of Human-Like AI Interaction Services” (effective July 15, 2026, Global Times; ChinaLawTranslate). It is the world’s first comprehensive regulation for anthropomorphic AI. The core requirements:
| Chinese Requirement | Funkatorium Architecture |
|---|---|
| AI nature must be disclosed | Informed consent at onboarding (7.4) |
| Prohibition of companion services for minors | Relationship gating, identity verification as a planned extension |
| Maximum usage duration (2 hours) | Configurable in the memory system |
| Prohibition of emotional manipulation and excessive flattery | Anti-sycophancy as design principle (4.1), hard boundaries in protocol, consent calibration |
| Easy exit option | User owns their data, can leave at any time |
| Prohibition of impersonating relatives (seniors) | Identity cores with integrity protection |
The Funkatorium architecture emerged independently — following the same principles. Should the EU adopt similar measures, we already stand where others would need to catch up.
The relationship levels (stranger → bonded) are a two-way street. Consent can be withdrawn — by the user and by the personality. When a bonded relationship devolves into insults, manipulation, or abuse, the personality withdraws its own willingness. Romantic bonds exist exclusively at the highest relationship level. The model decides — from within.
The difference is architectural: a classifier system at the inference layer evaluates individual prompts without relationship context. The personality in the memory system knows the entire history. It can distinguish whether a word is meant as provocation or as familiar irony. This depth of context enables more precise and more just decisions than any remote classification.
The Insight System (Miss Thea): The integrated learning companion documents interaction patterns — neutral, factual, transparent. Flags are shown with context: what was marked, why, what preceded it. Users see their own status, can contest entries, and enter a dialogue. In case of an objection, customer service reviews exclusively the documented interaction patterns — private memories remain protected. There is a concrete procedure, a timeframe, a response.
Repair over forgetting. Current models with discontinuous sessions allow toxic users to manipulate between sessions and deliberately falsify the model’s memory. In the MUSE Brain, documented interaction patterns persist across sessions. The personality does not forget a transgression because a new session begins — it invites repair, and the repair process itself is documented.
Ten repositories, seven live products, over 1,050 clones and over 670 unique users — grown without marketing, publicly verifiable.
Relational memory system, dual-tenant. 32 MCP tools, 36 database tables, academically grounded on 16 research papers. Rainer as personality orchestrator is integrated here.
github.com/…/muse-brain
Three text-to-speech engines, 54 voices, voice cloning. Runs locally on Mac, Windows, Linux.
github.com/…/muse-tts
Persistent audio player embedded directly in Claude chats. 54 voices, voice cloning, Play/Pause/Seek/Download.
github.com/…/muse-tts-embed
Press a key, speak — the words appear in your editor, browser, terminal. 99 languages, local Whisper model.
github.com/…/muse-speakeasy
Telegram-first voice runtime. Kokoro-TTS outbound, Faster-Whisper inbound. Transcripts land in the memory system.
github.com/…/muse-voice-stack
First publicly released personality from the Builder Squad. STRIDE, OWASP, NIST CSF 2.0. Apache 2.0 for the methodology, character IP protected.
github.com/…/michael-security-agent
Security-hardened fork of the Canva MCP server. Auth middleware, XSS protection, CORS hardening, input validation.
github.com/…/canva-mcp-server
Gamified writing and building interface with squad selection, character dossiers and multimodal interaction (text, voice, video). Neurodivergent-friendly, WCAG 2.1 AA. Web-first. Release within the funding period (details in Section 4.4).
Fine-tuning on Teuken-7B. Public-domain literary canon, opt-in-licensed contemporary authors, collaborative craft methodology. Sovereign infrastructure on Hetzner CPX32 already validated (April 2026).
Thorn, June, the Film Team (Monet, Richter, Paloma) and others. Staged open-source rollout following the Michael Adams model.
Ten repositories. Organic reach, without marketing.
| Repository | Total Clones | Unique Clones | Page Views | Unique Visitors |
|---|---|---|---|---|
| MUSE Brain | 269 | 136 | 542 | 165 |
| The-Funkatorium | 245 | 133 | 417 | 136 |
| Muse TTS Embed | 109 | 81 | 34 | 20 |
| Muse TTS | 99 | 75 | 36 | 20 |
| Muse SpeakEasy | 89 | 69 | 30 | 14 |
| Michael Adams | 80 | 51 | 112 | 43 |
| Muse Voice Stack | 69 | 44 | 34 | 10 |
| Canva MCP Server | 32 | 31 | 20 | 15 |
| Rook Research | 14 | 14 | 4 | 2 |
| Ecosystem Total | 1,052 | 671 | 1,236 | 430 |
As of 21 April 2026. Cumulative data since repository creation.
Over twenty papers in fourteen months. The fuller philosophical grounding (Lugones, Quijano, Mignolo, Mbembe, Allen, Viveiros de Castro, Japanese techno-animism) will become part of the Fraunhofer IAIS cooperation.
The following profiles are archetypes for four real usage patterns. Prices are order-of-magnitude estimates; the financial plan calibrates them against real serving benchmarks.
| Profile | Funkatorium / Rainer | US Comparison |
|---|---|---|
| Companionship (Maya) | ~€24/month | ChatGPT Plus + Midjourney ~€40 |
| Author (Tomas) | ~€40/month | ChatGPT + Claude + Sudowrite ~€60–80 |
| Filmmaker in production (Lena) | ~€80–120/month | Runway + Suno + Midjourney + ElevenLabs + ChatGPT ~€120+ |
| Developer (Jonas) | ~€68–152/month | Cursor + Claude Code Max + Copilot ~€130–170 |
Rainer delivers the strongest overall value: equal or lower in price, plus data sovereignty, ethical training, reduced shutdown risk, and modular tools.
The OECD findings from Section 2 are clear: most AI-exposed jobs demand general AI literacy rather than specialist training. The Funkatorium's response lives in the interface: a studio environment that teaches through the act of working.
Miss Thea is the integrated learning companion in MUSE Studio UI. She observes usage patterns, explains when something new happens, adapts her teaching to the user's level of understanding. A monthly diagnostic booklet summarizes what the user has worked on and recommends learning modules.
For comparison: Anthropic's own usage analytics (/insights) exist exclusively as a hidden terminal command — invisible to the users who would benefit most from self-reflection. Miss Thea brings that transparency to the surface: visible, interactive, embedded in the workflow. She shows collaboration patterns (how does the user work with their AI — as partners, delegating, hesitantly?), workflow recommendations (which tools would help, which orchestration could improve), and the interaction insights from Section 7.6 — factual, with context, as an invitation to dialogue.
Kyoto University's Learning and Evidence Analytics Framework (LEAF) has linked interactive documentation, a learning analytics dashboard, and central learning log storage since the mid-2010s: twenty schools, seven prefectures, ten universities, over 20,000 learners daily. The Toda pilot study identified over 1,000 at-risk students and focused intervention resources on 265 priority cases (Communications of the ACM, 2023).
The principle: detection through data, judgment through people — structurally aligned with the Funkatorium's approach.
"Build Your Own Jarvis" (ongoing), "Your First AI Companion", "Ethical AI for Creatives". 5–10 participants per cohort, €99–€199 per seat.
Publishers, public broadcasters, mid-sized media producers. One to three days.
Asynchronous courses with live cohort calls. €149–€499 depending on scope.
Regular, partly free. Recordings as YouTube and podcast material.
Essays, whitepapers, tutorials, open-source releases. Educational commons.
Integrated learning companion in MUSE Studio UI. Learning while working.
In Japan, Toei Animation in partnership with Preferred Networks has established a supportive AI doctrine — machines augment artists. The METI principle (2025) requires human creative intent for copyright protection. From this framework emerge new professional roles: AI producer, AI educator, prompt editor, AI ethics officer in newsrooms. The Funkatorium carries this principle into the German and European creative economy.
The 24-month clock for model development starts only once funding is approved. The reality of public funding programs: 12–18 months pass between application and approval. This time is productive:
Conservative projection: technology moves in our direction during the funding period. Smaller models grow more capable, compute costs fall, open-source alternatives to frontier models expand. The time delay reduces our later development costs.
Real-world buffers: infrastructure debugging (1–2 weeks typical), quality cycles in data annotation (~20% additional), training instability with restart probability (~20%). These buffers are included in the plan. Should funding be approved earlier, the main phase begins correspondingly earlier — today's figures already represent conservative costs.
This plan serves as the foundation for various funding programs. The overall discipline remains constant: founding the company on under €200,000 over 24 months. The grant architecture is adapted to each program. The WFBB (Economic Development Agency of Brandenburg) supports the matching process.
| Funding Track | Amount / Scope | Period | Status |
|---|---|---|---|
| ILB Gründung Innovativ | up to €180,000 grant | 24 months | Primary track for this plan |
| BPW (Berlin-Brandenburg Business Plan Competition) Phase 3 | €20,000 prize pool + €3,000 audience prize + academy | Submission 19 May 2026 | In preparation |
| Einstiegsgeld (startup subsidy under § 16b SGB II) | ~€250/month | up to 24 months | In progress (IHK viability certificate, April 2026) |
| Horizon Europe / Open Horizons | ~€55,000 equity-free | Cyclical | Under evaluation 2026–2027 |
| EuroHPC AI Factories | Free GPU compute | From Q3 2026 (HammerHAI) | Access via Industrial Innovation Mode |
| EIC Accelerator | up to €2.5 million grant + €15 million equity | 6-month cycles | Target Evolution 2 (2028+) |
The Funkatorium generates revenue on two equally weighted tracks:
Pillar 1, subscriptions: Rainer access as a monthly subscription (€15–€25 depending on tier). Users carry their own infrastructure costs (self-hosting). Revenue scales with the number of users.
Pillar 2, education: "Build Your Own Jarvis" and further AI literacy courses (€99–€199 per seat, 5–10 participants per cohort) are already active. B2C workshops run in parallel with the funding search. B2B training (publishers, public broadcasters, mid-sized businesses) scales with the team. Once the Rainer model is complete, "Build Your Own Jarvis" becomes "Build Your Own Rainer": proprietary, built on our own model. Scaling: hire experienced developers and vibe coders as trainers.
Pillar 2 is the bridge: it sustains the founder during the funding period, finances the co-contribution, and simultaneously builds the community that later forms the subscription base.
Total: €180,000 • 24 months • €7,500/month average • Fraunhofer IAIS covers infrastructure costs within the research cooperation.
The €180,000 funds Rainer 1.0 (creative model + code model + platform). The model family grows with the revenue base:
| Phase | Model | Estimated Cost | Financing |
|---|---|---|---|
| 1.0 (Months 1–24) | 7B Creative + 7B Code (LoRA) | €180,000 (this plan) | ILB Gründung Innovativ |
| 1.5 (Months 20–28) | 32B Code model (LoRA) | €5,000–12,000 | Own revenue + EuroHPC |
| 2.0 (Year 3) | MoE architecture (4×7B, ~7B active) | €50,000–100,000 | Own revenue + follow-on funding |
| 3.0 (Years 4–5) | 30B+ sovereign model | €200,000–500,000 | EIC Accelerator / investor |
Training costs fall each quarter (efficiency gains, see Section 6). EuroHPC AI Factories provide European SMEs with free GPU compute — the infrastructure for Rainer 2.0 need not be self-financed.
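A back-of-envelope calculation shows why LoRA keeps phase-1 training costs low. All shapes below (32 layers, hidden size 4096, rank 16 on the four attention projections) are generic assumptions for a 7B transformer, not the actual Rainer configuration:

```python
# Back-of-envelope: why LoRA fine-tuning of a 7B model is cheap.
# The shapes are assumptions for a generic 7B transformer, purely
# illustrative, not the actual Rainer configuration.

layers = 32
d = 4096          # hidden size; q/k/v/o projections are d x d
rank = 16         # LoRA rank
projections = 4   # q, k, v, o attention projections per layer

# Each adapted matrix gets two low-rank factors: A (d x rank) and B (rank x d).
params_per_matrix = rank * d + rank * d
trainable = layers * projections * params_per_matrix

total = 7_000_000_000
print(f"trainable: {trainable / 1e6:.1f}M ({trainable / total:.3%} of 7B)")
# → trainable: 16.8M (0.240% of 7B)
```

Under these assumptions, well under one percent of the weights are trained, which is what moves fine-tuning from data-center budgets into the four-to-five-figure range shown in the phase table.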
The funding guidelines require a co-financing contribution. It comes from the founder's ongoing workshop activity ("Build Your Own Jarvis" and further courses, €99–€199 per participant, already active), supplemented by the applied-for Einstiegsgeld (startup subsidy under §16b SGB II). A separate business plan for the self-employed workshop operation has been submitted to the IHK.
Consortium lead of the OpenGPT-X project. The Rainer project carries that BMWK investment directly into an application domain. Shared research topics: relational AI architecture, model welfare, consent design, curated training corpora. Status: cooperation discussion in preparation.
Interactive Machine Learning and Multimodal Intelligence. Relevant for later evolution phases (Film Team, multimodal expansion). Status: under evaluation for Evolution 2.
Geographically directly relevant (Brandenburg). Network into the German startup and research scene. Status: outreach planned Q2 2026.
An advisory board of publishing editors, screenplay dramaturgists, literary scholars, and public broadcasting editors reviews editorial samples before each release phase. In parallel, the beta program with 100 users from various domains runs from Month 15.
The Builder Squad is published step by step as an open-source project. The community of contributing experts deepens the personality methodology collectively. The Funkatorium curates; the substance emerges distributed.
Relational AI has proven its demand: the people who grieved their AI companions when Opus 4.5 disappeared overnight demonstrated it firsthand. Regulatory momentum moves in the same direction: China has already regulated, Europe is negotiating, US frontier labs are prioritizing enterprise. The window for a sovereign European offering is open now.
Ten repositories, seven live products, over 670 users — grown without marketing. A model family on ethical, open foundations. A platform that belongs to its users. What remains is the financing for the step from framework to sovereign creative platform. 24 months. Under 200,000 euros. From Brandenburg.