Game Model Development Studio — Building the future of NPC interaction

NPCs that think.
Games that remember.

Gamemodes is building an AI-native NPC engine that generates psychologically authentic, dynamic dialogue — running on consumer hardware with 1-bit quantized models.

Explore Projects ↓ Get in Touch
4
Active Projects
~26K
Training Examples
~1GB
VRAM at Runtime
266+
Tests Passing
The Vision

Game NPCs deserve better than canned lines.

For decades, NPCs have been limited to pre-written dialogue trees. Gamemodes changes that — an engine that generates contextually aware, psychologically grounded dialogue in real time, at a cost and compute footprint that works on any gaming PC.

🧠

Psychological Depth

NPCs are driven by structured psychological profiles — not random personality quirks. Every response reflects the character's internal state, emotional history, and relationship context.

Runs on Consumer Hardware

Our engine uses 1-bit quantized models (~1GB VRAM) to run entirely on local hardware — no cloud dependency, no latency, no per-call API costs.

🔧

Game-Agnostic Engine

The core engine is designed for general applicability. We've proven it across three distinct game worlds — the architecture adapts to any game genre or setting.

The Engine

How it works.

The Gamemodes engine sits between the game and an LLM, translating game state into psychologically rich prompts and routing responses back as natural dialogue.

/* Core Data Flow */
Game Engine → States + Profile + Context → Prompt Builder → LLM → NPC Dialogue

/* What the engine tracks per NPC: */
7 Physiological States (stress, hunger, thirst, ...)
3 Psychological Modes (protective, reactive, vulnerable)
20 Personality Archetypes (base traits, speech patterns)
7 Relationship Levels (hostile → bonded)
5 Awareness Tiers (emotional regulation capacity)
8 Game Genres (noir, fantasy, sci-fi, ...)
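As a sketch of that flow, here is how per-NPC state might be flattened into a prompt. Field names and the prompt wording are illustrative, not the engine's actual API:

```python
from dataclasses import dataclass

@dataclass
class NPCState:
    # A subset of the tracked dimensions; names are illustrative
    stress: int = 0           # physiological state, 0-100
    mode: str = "protective"  # psychological mode
    archetype: str = "stoic"  # personality archetype
    affinity: int = 0         # relationship level, hostile (-3) to bonded (+3)
    genre: str = "noir"       # game genre

def build_prompt(state: NPCState, player_line: str) -> str:
    """Flatten game state into a single prompt string for the LLM."""
    return (
        f"You are a {state.archetype} NPC in a {state.genre} world. "
        f"Current mode: {state.mode}. Stress: {state.stress}/100. "
        f"Affinity with player: {state.affinity}. "
        f"Player says: {player_line!r}. Respond in character."
    )

prompt = build_prompt(NPCState(stress=72, affinity=-1), "How are you?")
```

The real prompt builder layers in far more context (quest state, faction standing, conversation history), but the shape is the same: structured state in, natural dialogue out.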
🔀

Provider Pattern Architecture

Model-agnostic design. Swap in any OpenAI-compatible LLM at runtime — from 1-bit models to frontier models — without changing a line of game code.
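A minimal sketch of the provider pattern, using only the standard library. The endpoint path follows the OpenAI Chat Completions convention that llama.cpp's server also exposes; the class and model names here are assumptions, not the shipped code:

```python
import json
import urllib.request

class OpenAICompatibleProvider:
    """Any server exposing /v1/chat/completions can back this provider."""

    def __init__(self, base_url: str, model: str):
        self.base_url = base_url.rstrip("/")
        self.model = model

    def complete(self, prompt: str) -> str:
        payload = json.dumps({
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        req = urllib.request.Request(
            f"{self.base_url}/v1/chat/completions",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"]

# Swapping backends is a constructor argument, not a game-code change:
local = OpenAICompatibleProvider("http://localhost:8080", "bonsai-8b")
```

Pointing the same provider at a cloud endpoint or a different local model changes two strings and nothing else.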

🛡️

Graceful Degradation

If the LLM is unavailable, the engine falls back to curated dialogue organized by psychological mode. The game never breaks.
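The fallback logic can be sketched in a few lines. The curated lines and mode keys below are illustrative stand-ins for the engine's actual dialogue pools:

```python
import random

CURATED_FALLBACKS = {
    # Illustrative lines, keyed by psychological mode
    "protective": ["Stay behind me.", "Not now. Keep moving."],
    "reactive": ["What do you want?!"],
    "vulnerable": ["I... I'm not sure I can do this."],
}

def generate_dialogue(llm_call, mode: str) -> str:
    """Try the LLM first; on any failure, fall back to curated lines."""
    try:
        return llm_call()
    except Exception:
        lines = CURATED_FALLBACKS.get(mode, ["..."])
        return random.choice(lines)

def broken_llm():
    raise ConnectionError("dialogue server unreachable")

line = generate_dialogue(broken_llm, "reactive")
```

Because the fallback pool is organized by psychological mode, even canned lines stay roughly in character.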

📊

Training Data Pipeline

Integrated pipeline for generating, validating, and filtering fine-tuning data — producing thousands of high-quality Alpaca-format examples.

Projects

Four projects. One engine.

Gamemodes operates across four interconnected projects — each proving a different dimension of the engine's capability.

🌃 Testbed Game

Shadow City

Shadow City is a noir RPG designed to showcase the Gamemodes NPC engine. Set in a rain-soaked dystopian metropolis where six factions wage silent war, it demonstrates the full capability of the engine — quest systems, faction progression, mortality, and dynamic NPC-NPC interactions — all powered by psychologically grounded dialogue generation.

  • Six factions with distinct victory quests and progression
  • Dynamic NPC-NPC and player-NPC conversations every tick
  • Full quest tree system with pool-based job assignment
  • Medical, mortality, and addiction simulation systems
  • 266 tests passing across 5 rebuild phases
faction "Police" {
  theme: "Internal corruption vs. duty",
  victory_quest: "Operation: Takedown",
  mortality_rate: 0.004,
  ranks: ["Recruit", ..., "Leader"]
}

// NPC response driven by psychological state
npc.generate_dialogue({
  stress: 72,
  active_part: "firefighter",
  affinity: -1
}) // → contextually generated response
⚔️ Skyrim Integration — In Development

Skyrim IFS Dialogue Engine

In development: a Skyrim Special Edition mod that replaces canned voice lines with real-time, psychologically grounded dialogue. Players type messages to NPCs via a console command; the engine generates responses through a C++ SKSE plugin communicating with a local Python dialogue server.

  • 18 NPCs across all major Skyrim factions with rich profiles
  • Dual-model routing: complex requests to Bonsai 8B, simple to Llama 1B
  • SKSE plugin with NPC resolver, subtitle display, Papyrus integration
  • ~5,000 Alpaca-format training examples for fine-tuning
  • Graceful fallback to curated dialogue when LLM unavailable
// SKSE Plugin → Python Server → LLM
Player: "talk lydia 'How are you?'"

// 1. C++ plugin resolves NPC
npc = NPCResolver.ResolveByName("lydia")

// 2. HTTP POST to localhost:8080
HttpClient.SendDialogue({
  npc_id: "lydia",
  stress: 35,
  active_part: "manager"
})

// 3. Response displayed as subtitle
"I am sworn to carry your burdens,
 but tonight... I wonder who carries mine."
☢️ Fallout 4 Integration — In Development

Fallout 4 NPC Dialogue Engine

In development: a Fallout 4 mod using the same engine architecture — F4SE plugin captures player input, routes through a Python dialogue server, and generates wasteland-appropriate dialogue. Nine NPCs across five factions, each with full psychological profiles.

  • 9 NPCs across Brotherhood, Institute, Railroad, Raiders, Settlers
  • 10,682 Alpaca-format training examples generated
  • Multi-backend LLM client with automatic failover
  • Faction-specific psychological templates
  • Training data covers 10 scenario categories
// Factions with IFS profiles
brotherhood { vibe: "disciplined, hierarchical" }
institute { vibe: "detached, clinical" }
railroad { vibe: "devoted, paranoid" }
raiders { vibe: "chaotic, impulsive" }
settlers { vibe: "resilient, weary" }

// Training data: 10,682 examples
npc_dialogue_ifs: 2,065
faction_voice: 1,501
affinity_modulated: 1,500
moral_dilemma: 800
survival_scenarios: 808
... and 5 more categories
⚡ Core Engine

Gamemodes — The NPC Engine

The core fine-tuning and engine project. Trains lightweight LLMs to generate psychologically authentic NPC dialogue, with a complete data pipeline from generation through deployment. Designed for general applicability across any game genre.

  • Structured training data across 9 categories, 2,100+ target examples
  • LLM judge quality filtering across 5 scoring dimensions
  • Alpaca format conversion with augmentation and multi-template support
  • Fine-tuning pipeline: QLoRA on meta-llama-3.2-1b-instruct
  • 8 game genres supported: noir, fantasy, sci-fi, post-apocalyptic, and more
// Training pipeline
generate → validate → merge → filter → fine-tune

// Quality scoring (5 dimensions)
part_authenticity: 1-5
state_visibility: 1-5
archetype_voice: 1-5
genre_consistency: 1-5
affinity_accuracy: 1-5

// Auto-accept if avg ≥ 4.5/5
// Flag for review if 3.0-4.5
// Reject if avg < 3.0
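The triage rule above is simple enough to show directly. This is a sketch of the thresholding step only; the judge itself is a separate LLM call:

```python
def triage(scores: dict[str, float]) -> str:
    """Route an example by its average judge score (1-5 scale per dimension)."""
    avg = sum(scores.values()) / len(scores)
    if avg >= 4.5:
        return "accept"   # auto-accept into the training set
    if avg >= 3.0:
        return "review"   # flag for human review
    return "reject"

example_scores = {
    "part_authenticity": 5,
    "state_visibility": 4,
    "archetype_voice": 5,
    "genre_consistency": 4,
    "affinity_accuracy": 5,
}
```

An example averaging 4.6 across the five dimensions, as above, clears the auto-accept bar.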
Training Pipeline

From simulation to fine-tuned model.

Gamemodes operates a complete pipeline that transforms game simulation data into production-ready dialogue models — validated, quality-filtered, and optimized for edge deployment.

1

Simulation & Generation

Game simulations produce thousands of NPC interactions. A larger LLM generates training examples across categories — dialogue, faction voice, affinity scenarios, moral dilemmas, and more.

2

Validation & Deduplication

Every example is validated for schema compliance, field completeness, and distribution balance. Duplicates are removed automatically.
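A minimal version of the validate-and-dedupe pass, assuming the Alpaca three-field schema (field handling here is a sketch, not the pipeline's exact rules):

```python
import hashlib

REQUIRED_FIELDS = ("instruction", "input", "output")  # Alpaca schema

def validate_and_dedupe(examples):
    """Keep schema-complete examples; drop exact duplicates by content hash."""
    seen, kept = set(), []
    for ex in examples:
        if any(f not in ex for f in REQUIRED_FIELDS):
            continue  # missing field
        if not ex["instruction"] or not ex["output"]:
            continue  # empty where content is required ("input" may be empty)
        key = hashlib.sha256(
            "\x1f".join(ex[f] for f in REQUIRED_FIELDS).encode()
        ).hexdigest()
        if key in seen:
            continue  # exact duplicate
        seen.add(key)
        kept.append(ex)
    return kept

raw = [
    {"instruction": "a", "input": "", "output": "x"},
    {"instruction": "a", "input": "", "output": "x"},  # duplicate
    {"instruction": "b", "output": "y"},               # missing "input"
    {"instruction": "", "input": "", "output": "z"},   # empty instruction
]
clean = validate_and_dedupe(raw)
```

Hashing the joined fields makes duplicate detection O(1) per example regardless of dataset size.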

3

LLM Judge Quality Filtering

An independent LLM scores each example across five dimensions: psychological authenticity, state visibility, voice consistency, genre accuracy, and relationship modeling. Only high-quality data advances.

4

Format Conversion & Augmentation

Clean data is converted to Alpaca format (instruction/input/output) with multiple instruction templates and controlled augmentation to increase effective dataset diversity.
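The conversion step might look like the following. The template wordings and field layout are illustrative; the pipeline's actual templates differ:

```python
TEMPLATES = [
    # Multiple phrasings of the same instruction increase dataset diversity
    "Respond as {npc} would, given their current psychological state.",
    "Generate in-character dialogue for {npc} based on the state below.",
]

def to_alpaca(raw: dict, template_idx: int = 0) -> dict:
    """Convert one simulation record into Alpaca instruction/input/output."""
    return {
        "instruction": TEMPLATES[template_idx].format(npc=raw["npc"]),
        "input": (
            f"stress={raw['stress']}, mode={raw['mode']}, "
            f"affinity={raw['affinity']}"
        ),
        "output": raw["dialogue"],
    }

ex = to_alpaca({
    "npc": "Lydia", "stress": 35, "mode": "protective",
    "affinity": 2, "dialogue": "I am sworn to carry your burdens.",
})
```

Emitting the same record under several templates is the "controlled augmentation" step: the model sees one behavior phrased multiple ways.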

5

Fine-Tuning (QLoRA)

QLoRA fine-tuning on meta-llama-3.2-1b-instruct — proven settings with rank=16, adamw_8bit. Exportable to GGUF q4_k_m for local inference via llama.cpp.

6

Deployment

Models are served via llama.cpp's OpenAI-compatible API. The engine routes requests between models based on complexity — Bonsai 8B for nuanced scenes, Llama 1B for simple exchanges.
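The complexity routing can be sketched as a small heuristic. The signals and thresholds below are illustrative assumptions, not the engine's actual routing rules:

```python
def route_model(request: dict) -> str:
    """Pick a model: nuanced scenes go to the larger 1-bit model (sketch)."""
    complex_signals = (
        request.get("stress", 0) > 60,                 # high emotional load
        abs(request.get("affinity", 0)) >= 2,          # strong relationship
        len(request.get("player_line", "")) > 80,      # long player input
        request.get("scene") in {"moral_dilemma", "confession"},
    )
    return "bonsai-8b" if any(complex_signals) else "llama-3.2-1b"
```

Routing on cheap-to-compute game state means the expensive model only runs when the scene actually demands nuance.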

Model Strategy

1-bit models. Full-size intelligence.

Our roadmap centers on 1-bit quantized models — leveraging Microsoft's BitNet research and the Bonsai model family to deliver frontier-quality NPC dialogue at a fraction of the compute cost.

Model              | Parameters | Quantization | VRAM    | Role                                     | Status
Llama 3.2 1B       | 1.2B       | Q4_K_M       | ~0.5 GB | Simple exchanges, fallback               | Active
Bonsai 8B          | 8.2B       | Q1_0 (1-bit) | ~1.0 GB | Complex dialogue, nuanced scenes         | Active
Bonsai 4B          | 4B         | 1-bit        | ~0.7 GB | Mid-tier routing target                  | Planned
BitNet (Microsoft) | Variable   | 1.58-bit     | TBD     | Next-gen baseline for future development | Research
🎯

Game-Specific Fine-Tuning

Each game produces its own fine-tuned adapter — trained on that game's specific dialogue patterns, lore vocabulary, and character archetypes. One base model, many game-specific variants.

🌐

General Applicability

The engine architecture is game-agnostic. Proven across noir RPG (Shadow City), fantasy (Skyrim), and post-apocalyptic (Fallout 4) — any genre can plug in.

📉

Cost Curve Advantage

1-bit models deliver 8B+ parameter quality at ~1GB VRAM. This means NPC dialogue generation runs on the player's existing GPU — no cloud costs, no latency, full privacy.

Get Involved

Built in the open. Grow with us.

Gamemodes welcomes contributors across every discipline — from game modders to ML researchers.

🎮

Game Modders

Help expand NPC profiles, write training scenarios, or build integrations for new games. Familiarity with Skyrim/Fallout 4 modding is a plus.

🐍

Python Developers

Work on the dialogue server, training pipeline, quality filtering, or simulation engine. Flask, data processing, and API design experience welcome.

⚙️

C++ Developers

Contribute to the SKSE/F4SE plugin layer — NPC resolution, HTTP clients, subtitle display, or Papyrus integration. CommonLibSSE-NG experience helpful.

🤖

ML / AI Researchers

Help optimize fine-tuning strategies, explore 1-bit quantization approaches, improve quality scoring, or design new training data categories.

✍️

Writers & Designers

Craft training scenarios, define psychological profiles, design archetypes, or help shape the narrative systems. Understanding of character psychology valued.

🧪

QA & Testing

Run simulations, validate training data quality, test dialogue outputs across game scenarios, or help build automated evaluation frameworks.

Let's build the future of NPCs.

Whether you want to contribute, invest, or just learn more — we'd love to hear from you.