Initial commit: Character Details MCP Server

This commit is contained in:
Aodhan Collins
2026-03-06 19:23:05 +00:00
commit f330fe8eb3
13 changed files with 1028 additions and 0 deletions

10
.dockerignore Normal file

@@ -0,0 +1,10 @@
.venv/
__pycache__/
*.pyc
*.pyo
.git/
.gitignore
*.md
*.jsonl
Dockerfile
.dockerignore

16
Dockerfile Normal file

@@ -0,0 +1,16 @@
FROM python:3.12-slim
WORKDIR /app
# Copy project definition and source
COPY pyproject.toml ./
COPY src/ src/
# All dependencies have pre-built wheels — no compiler needed
RUN pip install --no-cache-dir -e .
# Cache lives here — mount a named volume to persist across container runs
VOLUME ["/root/.local/share/character_details"]
# MCP servers communicate over stdio; no port needed
CMD ["character-details"]

88
README.md Normal file

@@ -0,0 +1,88 @@
# Character Details MCP
An MCP server that helps LLMs gather rich, structured information about fictional characters for storytelling and image generation.
It fetches from **Fandom wikis** and **Wikipedia**, then caches results locally (24-hour TTL) for fast follow-up responses.
## Tools
| Tool | Description |
|---|---|
| `get_character` | Fetch full character details (cached or live) |
| `refresh_character` | Force re-fetch from external sources |
| `list_characters` | List all locally cached characters |
| `remove_character` | Delete a character from the local cache |
| `generate_image_prompt` | Build a tag list for image generation tools |
| `generate_story_context` | Build a structured reference doc for roleplay/writing |
## Data Sources
- **Fandom wikis** — franchise-specific, much richer character data
- **Wikipedia** — supplements missing sections
Supported franchise → wiki mappings are defined in `fetcher.py` (`FRANCHISE_WIKIS`). Adding a new franchise is a one-liner.
## Setup
### Requirements
- Python 3.11+
- [uv](https://docs.astral.sh/uv/) (recommended) or pip
### Install
```bash
cd character_details
uv pip install -e .
```
### Run (stdio transport)
```bash
uv run character-details
```
### Add to Claude Desktop
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "character-details": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/absolute/path/to/character_details",
        "character-details"
      ]
    }
  }
}
```
## Cache
Character data is cached at `~/.local/share/character_details/cache/` as JSON files.
Each entry expires after 24 hours. Use `refresh_character` to force an update.
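The staleness rule is a plain timestamp comparison; a minimal sketch of the check, mirroring the logic in `cache.py`:

```python
from datetime import datetime, timedelta

CACHE_TTL_HOURS = 24

def is_stale(cached_at: datetime) -> bool:
    """True if a cache entry is older than the TTL and should be re-fetched."""
    return datetime.now() - cached_at > timedelta(hours=CACHE_TTL_HOURS)
```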
## Project Structure
```
character_details/
  src/character_details/
    __init__.py
    models.py     # Pydantic data model (CharacterData)
    cache.py      # Read/write/list local JSON cache
    fetcher.py    # Fandom + Wikipedia fetching & section parsing
    server.py     # FastMCP server and tool definitions
  pyproject.toml
  README.md
```
## Test Characters
- Aerith Gainsborough — Final Fantasy VII
- Princess Peach — Super Mario
- Sucy Manbavaran — Little Witch Academia

236
USER_GUIDE.md Normal file

@@ -0,0 +1,236 @@
# Character Details MCP — User Guide
An MCP server that fetches structured information about fictional characters from Fandom wikis and Wikipedia, making it easy for LLMs to accurately portray or generate content for characters from games, anime, and other media.
---
## Quick Start (Docker — recommended)
### 1. Build the image
```bash
cd character_details
docker build -t character-details-mcp .
```
### 2. Add to Claude Desktop
Edit `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "character-details": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "character-details-cache:/root/.local/share/character_details",
        "character-details-mcp"
      ]
    }
  }
}
```
Restart Claude Desktop. The server will appear in the tools list.
**Important flags:**
- `-i` — keeps stdin open (required for stdio MCP transport)
- `--rm` — removes the container after the session ends
- `-v character-details-cache:...` — named Docker volume so the cache persists across sessions
---
## Quick Start (Local Python)
### Requirements
- Python 3.11+
- [uv](https://docs.astral.sh/uv/)
### Install
```bash
cd character_details
uv venv && uv pip install -e .
```
### Add to Claude Desktop
```json
{
  "mcpServers": {
    "character-details": {
      "command": "uv",
      "args": [
        "run",
        "--directory", "/absolute/path/to/character_details",
        "character-details"
      ]
    }
  }
}
```
---
## Tool Reference
### `get_character`
Fetches complete character data. Checks the local cache first (24-hour TTL), fetches live if stale.
**Parameters:**
| Name | Type | Description |
|---|---|---|
| `name` | string | Character name, e.g. `"Aerith Gainsborough"` |
| `franchise` | string | Franchise name, e.g. `"Final Fantasy VII"` |
**Returns:** Formatted markdown with Description, Appearance, Personality, Background, Abilities, Relationships, and source URLs.
**Example prompt:**
> Use get_character to look up Aerith Gainsborough from Final Fantasy VII, then write a short scene where she tends her flowers.
---
### `refresh_character`
Same as `get_character` but bypasses the cache and re-fetches from source.
**Use when:** You suspect the cached data is incomplete, or you want the latest wiki content.
---
### `list_characters`
Lists all characters currently in the local cache with their franchise and cache timestamp.
---
### `remove_character`
Deletes a specific character from the cache.
**Parameters:** `name`, `franchise`
---
### `generate_image_prompt`
Builds a comma-separated tag list for image generation tools (Stable Diffusion, Midjourney, DALL-E, etc.).
**Parameters:**
| Name | Type | Description |
|---|---|---|
| `name` | string | Character name |
| `franchise` | string | Franchise name |
| `style` | string | Art style hint (optional), e.g. `"anime"`, `"oil painting"`, `"pixel art"` |
| `scene` | string | Scene description (optional), e.g. `"in a flower field"`, `"battle pose"` |
| `extra_tags` | string | Comma-separated additional tags (optional) |
**Example prompt:**
> Use generate_image_prompt for Princess Peach (Super Mario) in an anime style, standing in her castle throne room.
---
### `generate_story_context`
Produces a structured reference document for roleplay or creative writing. Includes overview, personality, background, appearance, abilities, and relationships.
**Parameters:**
| Name | Type | Description |
|---|---|---|
| `name` | string | Character name |
| `franchise` | string | Franchise name |
| `scenario` | string | Current scenario to include (optional) |
| `include_abilities` | boolean | Whether to include abilities section (default: true) |
**Example prompt:**
> Use generate_story_context for Sucy Manbavaran from Little Witch Academia. The scenario is: she's been asked to teach a potions class after the regular teacher falls ill.
---
## Supported Franchises
The following franchises have a dedicated Fandom wiki mapped for richer data:
| Franchise | Wiki |
|---|---|
| Final Fantasy VII / FF7 / FFVII | finalfantasy.fandom.com |
| Final Fantasy | finalfantasy.fandom.com |
| Super Mario / Mario | mario.fandom.com |
| Little Witch Academia / LWA | little-witch-academia.fandom.com |
Characters from other franchises will still work using Wikipedia as the data source. The data will be less detailed but usable.
---
## Adding a New Franchise
Open [src/character_details/fetcher.py](src/character_details/fetcher.py) and add an entry to `FRANCHISE_WIKIS`:
```python
FRANCHISE_WIKIS: dict[str, str] = {
    # existing entries ...
    "my hero academia": "myheroacademia",  # maps to myheroacademia.fandom.com
    "mha": "myheroacademia",
}
```
The value is the Fandom community subdomain (the part before `.fandom.com`). You can find it by visiting the character's wiki and noting the URL.
After editing, rebuild the Docker image:
```bash
docker build -t character-details-mcp .
```
---
## Cache
- Location: `~/.local/share/character_details/cache/` (host) or the `character-details-cache` Docker volume
- Format: one JSON file per character
- TTL: 24 hours — expired entries are re-fetched automatically
- Manual clear: use `remove_character` tool, or delete the JSON files / volume directly
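For a full manual wipe, deleting every cached JSON file is enough; a sketch that assumes the layout described above (one JSON file per character):

```python
from pathlib import Path

def clear_cache(cache_dir: Path) -> int:
    """Delete all cached character files; returns the number removed."""
    removed = 0
    for path in cache_dir.glob("*.json"):
        path.unlink()
        removed += 1
    return removed
```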
---
## Using with the Claude API / SDK
If you're building an application with the Anthropic SDK rather than Claude Desktop, start the server in stdio mode and connect it as an MCP server:
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the Dockerized server over stdio
server_params = StdioServerParameters(
    command="docker",
    args=[
        "run", "--rm", "-i",
        "-v", "character-details-cache:/root/.local/share/character_details",
        "character-details-mcp",
    ],
)

async def list_server_tools() -> list[str]:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            return [t.name for t in tools.tools]
```
See the [MCP Python SDK docs](https://github.com/modelcontextprotocol/python-sdk) for full integration examples.
---
## Troubleshooting
**No data returned / all fields empty**
- Check internet connectivity from inside the container: `docker run --rm character-details-mcp python -c "import httpx; print(httpx.get('https://en.wikipedia.org', follow_redirects=True).status_code)"`
- Some franchises don't have a Fandom wiki mapped — add one (see above)
**Stale or incorrect data**
- Use `refresh_character` to bypass the cache
- Or delete the cached file from the volume
**Claude Desktop doesn't show the server**
- Verify the JSON config is valid (no trailing commas)
- Confirm the image name matches exactly: `character-details-mcp`
- Check Claude Desktop logs: `~/Library/Logs/Claude/`

21
pyproject.toml Normal file

@@ -0,0 +1,21 @@
[project]
name = "character-details"
version = "0.1.0"
description = "MCP server for fictional character details - storytelling and image generation"
requires-python = ">=3.11"
dependencies = [
"mcp[cli]>=1.0.0",
"httpx>=0.27.0",
"pydantic>=2.0.0",
"beautifulsoup4>=4.12.0",
]
[project.scripts]
character-details = "character_details.server:main"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/character_details"]

58
src/character_details/cache.py Normal file

@@ -0,0 +1,58 @@
import json
from pathlib import Path
from datetime import datetime, timedelta

from .models import CharacterData

CACHE_DIR = Path.home() / ".local" / "share" / "character_details" / "cache"
CACHE_TTL_HOURS = 24


def _cache_path(name: str, franchise: str) -> Path:
    key = f"{name.lower().replace(' ', '_')}_{franchise.lower().replace(' ', '_')}"
    return CACHE_DIR / f"{key}.json"


def get_cached(name: str, franchise: str) -> CharacterData | None:
    path = _cache_path(name, franchise)
    if not path.exists():
        return None
    try:
        data = json.loads(path.read_text())
        character = CharacterData.model_validate(data)
        if datetime.now() - character.cached_at > timedelta(hours=CACHE_TTL_HOURS):
            return None
        return character
    except Exception:
        return None


def save_cache(character: CharacterData) -> None:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = _cache_path(character.name, character.franchise)
    path.write_text(character.model_dump_json(indent=2))


def list_cached() -> list[dict]:
    if not CACHE_DIR.exists():
        return []
    result = []
    for path in sorted(CACHE_DIR.glob("*.json")):
        try:
            data = json.loads(path.read_text())
            result.append({
                "name": data["name"],
                "franchise": data["franchise"],
                "cached_at": data["cached_at"],
            })
        except Exception:
            pass
    return result


def delete_cached(name: str, franchise: str) -> bool:
    path = _cache_path(name, franchise)
    if path.exists():
        path.unlink()
        return True
    return False

343
src/character_details/fetcher.py Normal file

@@ -0,0 +1,343 @@
"""
Fetches fictional character data from Fandom wikis and Wikipedia.

Strategy:
1. Try the franchise-specific Fandom wiki via its MediaWiki API (richer data)
2. Fall back to / supplement with Wikipedia
3. Parse sections into structured CharacterData fields
"""
import re
from datetime import datetime

import httpx
from bs4 import BeautifulSoup

from .models import CharacterData

# Wikipedia requires a descriptive user-agent with contact info
HEADERS = {
    "User-Agent": (
        "character-details-mcp/1.0 "
        "(https://github.com/example/character-details; contact@example.com)"
    )
}

# Map franchise keywords -> Fandom community subdomain
FRANCHISE_WIKIS: dict[str, str] = {
    "final fantasy vii": "finalfantasy",
    "final fantasy 7": "finalfantasy",
    "ff7": "finalfantasy",
    "ffvii": "finalfantasy",
    "final fantasy": "finalfantasy",
    "super mario": "mario",
    "mario": "mario",
    "little witch academia": "little-witch-academia",
    "lwa": "little-witch-academia",
}

# Section title keywords -> model field
APPEARANCE_KW = {"appearance", "design", "outfit", "clothing", "physical"}
PERSONALITY_KW = {"personality", "traits", "behavior", "attitude", "nature"}
BACKGROUND_KW = {"background", "history", "biography", "story", "backstory", "past"}
ABILITIES_KW = {"abilities", "powers", "skills", "magic", "combat", "techniques", "weapons"}
RELATIONSHIPS_KW = {"relationships", "family", "friends", "allies", "enemies", "romance"}
QUOTES_KW = {"quotes", "sayings", "dialogue"}


def _strip_html(html: str) -> str:
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)


def _classify_section(title: str) -> str | None:
    # Split into words for accurate matching (avoids "characteristics" matching "character")
    words = set(re.split(r"\W+", title.lower()))
    if words & APPEARANCE_KW:
        return "appearance"
    if words & PERSONALITY_KW:
        return "personality"
    if words & BACKGROUND_KW:
        return "background"
    if words & ABILITIES_KW:
        return "abilities"
    if words & RELATIONSHIPS_KW:
        return "relationships"
    if words & QUOTES_KW:
        return "quotes"
    return None


def _find_wiki(franchise: str) -> str | None:
    return FRANCHISE_WIKIS.get(franchise.lower().strip())
async def fetch_character(name: str, franchise: str) -> CharacterData:
    """Fetch character from Fandom and/or Wikipedia, return structured data."""
    sections: dict[str, str] = {}
    sources: list[str] = []
    async with httpx.AsyncClient(timeout=15.0, follow_redirects=True, headers=HEADERS) as client:
        # 1. Try Fandom wiki (MediaWiki API) — rich character-specific sections
        wiki = _find_wiki(franchise)
        if wiki:
            fandom = await _fetch_fandom(client, name, wiki)
            if fandom:
                sections.update(fandom["sections"])
                sources.append(fandom["url"])
        # 2. Wikipedia — always prefer its description (cleaner, no infobox cruft)
        #    and supplement any sections Fandom didn't provide
        wiki_data = await _fetch_wikipedia(client, name, franchise)
        if wiki_data:
            for k, v in wiki_data["sections"].items():
                # Description: Wikipedia always wins (Fandom lead is infobox-polluted)
                if k == "description" or k not in sections:
                    sections[k] = v
            sources.append(wiki_data["url"])
    return _build_character(name, franchise, sections, sources)
# ---------------------------------------------------------------------------
# Fandom (MediaWiki API — avoids Cloudflare-blocked /api/v1/)
# ---------------------------------------------------------------------------

async def _fetch_fandom(client: httpx.AsyncClient, name: str, wiki: str) -> dict | None:
    base = f"https://{wiki}.fandom.com"
    api = f"{base}/api.php"
    # Search for the article
    try:
        resp = await client.get(
            api,
            params={"action": "query", "list": "search", "srsearch": name,
                    "srlimit": 5, "format": "json"},
        )
    except httpx.HTTPError:
        return None
    if resp.status_code != 200:
        return None
    results = resp.json().get("query", {}).get("search", [])
    if not results:
        return None
    # Pick best match (exact name preferred)
    article = _best_search_match(name, results)
    pageid = article["pageid"]
    page_title = article["title"]
    # Get section list
    try:
        resp = await client.get(
            api,
            params={"action": "parse", "pageid": pageid, "prop": "sections", "format": "json"},
        )
    except httpx.HTTPError:
        return None
    if resp.status_code != 200:
        return None
    all_sections = resp.json().get("parse", {}).get("sections", [])
    # Always fetch lead (section 0) as description
    sections: dict[str, str] = {}
    lead_text = await _fetch_fandom_section(client, api, pageid, 0)
    if lead_text:
        sections["description"] = lead_text[:1500]
    # Fetch only the sections that match our fields of interest
    for sec in all_sections:
        title = _strip_html(sec.get("line", ""))
        field = _classify_section(title)
        if field is None:
            continue
        index = sec.get("index", "")
        if not index:
            continue
        text = await _fetch_fandom_section(client, api, pageid, index)
        if text:
            key = title.lower()
            sections[key] = text[:3000]
    return {
        "sections": sections,
        "url": f"{base}/wiki/{page_title.replace(' ', '_')}",
    }
async def _fetch_fandom_section(
    client: httpx.AsyncClient, api: str, pageid: int, section: int | str
) -> str | None:
    try:
        resp = await client.get(
            api,
            params={"action": "parse", "pageid": pageid, "section": section,
                    "prop": "text", "format": "json"},
        )
    except httpx.HTTPError:
        return None
    if resp.status_code != 200:
        return None
    html = resp.json().get("parse", {}).get("text", {}).get("*", "")
    if not html:
        return None
    # Strip noisy elements: infoboxes, tables, ToC, headings, references, Fandom widgets
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.select(
        "table, aside, h1, h2, h3, h4, h5, h6, "
        ".navbox, .toc, #toc, .reference, sup, "
        ".mw-editsection, .portable-infobox, .infobox, "
        ".error, .cite-error, .mw-references-wrap, "
        # Fandom "Quick Answers" / "AI Answers" widgets
        ".trfc161, section[class^='trfc'], "
        ".fandom-community-question-answer, .qa-placeholder"
    ):
        tag.decompose()
    text = soup.get_text(separator=" ", strip=True)
    # Collapse excess whitespace
    text = re.sub(r"\s{2,}", " ", text).strip()
    # Strip stray date stamps Fandom sometimes injects at the top (e.g. "February 1, 2011 ")
    text = re.sub(r"^\w+ \d{1,2},? \d{4}\s+", "", text).strip()
    return text if len(text) > 20 else None


def _best_search_match(name: str, results: list[dict]) -> dict:
    name_lower = name.lower()
    for item in results:
        if item["title"].lower() == name_lower:
            return item
    return results[0]
# ---------------------------------------------------------------------------
# Wikipedia
# ---------------------------------------------------------------------------

async def _fetch_wikipedia(client: httpx.AsyncClient, name: str, franchise: str) -> dict | None:
    search_query = f"{name} {franchise} character"
    try:
        resp = await client.get(
            "https://en.wikipedia.org/w/api.php",
            params={"action": "query", "list": "search", "srsearch": search_query,
                    "srlimit": 3, "format": "json"},
        )
    except httpx.HTTPError:
        return None
    if resp.status_code != 200:
        return None
    results = resp.json().get("query", {}).get("search", [])
    if not results:
        return None
    # Pick the result whose title best overlaps with the character name
    name_words = set(name.lower().split())
    best = max(results, key=lambda r: len(name_words & set(r["title"].lower().split())))
    title = best["title"]
    title_words = set(title.lower().split())
    article_is_about_character = bool(name_words & title_words)
    sections: dict[str, str] = {}
    # Extracts API — clean plain-text intro, no infobox cruft.
    # Only use as description if the Wikipedia article is actually about the character
    # (not about the franchise, which happens when no dedicated character article exists)
    if article_is_about_character:
        try:
            resp = await client.get(
                "https://en.wikipedia.org/w/api.php",
                params={"action": "query", "titles": title, "prop": "extracts",
                        "exintro": True, "explaintext": True, "format": "json"},
            )
            if resp.status_code == 200:
                pages = resp.json().get("query", {}).get("pages", {})
                extract = next(iter(pages.values()), {}).get("extract", "").strip()
                if extract:
                    sections["description"] = re.sub(r"\s{2,}", " ", extract)[:1500]
        except httpx.HTTPError:
            pass
    return {
        "sections": sections,
        "url": f"https://en.wikipedia.org/wiki/{title.replace(' ', '_')}",
    }
# ---------------------------------------------------------------------------
# Build CharacterData from raw sections
# ---------------------------------------------------------------------------

def _build_character(
    name: str,
    franchise: str,
    sections: dict[str, str],
    sources: list[str],
) -> CharacterData:
    appearance_parts: list[str] = []
    personality_parts: list[str] = []
    background_parts: list[str] = []
    abilities: list[str] = []
    relationships: list[str] = []
    quotes: list[str] = []
    extra: dict[str, str] = {}
    description = sections.get("description", "")
    for title, text in sections.items():
        if title == "description":
            continue
        field = _classify_section(title)
        if field == "appearance":
            appearance_parts.append(text)
        elif field == "personality":
            personality_parts.append(text)
        elif field == "background":
            background_parts.append(text)
        elif field == "abilities":
            abilities.extend(_extract_list_items(text))
        elif field == "relationships":
            relationships.extend(_extract_list_items(text))
        elif field == "quotes":
            quotes.extend(_extract_list_items(text))
        else:
            extra[title] = text[:500]
    return CharacterData(
        name=name,
        franchise=franchise,
        description=description.strip(),
        appearance="\n\n".join(appearance_parts).strip(),
        personality="\n\n".join(personality_parts).strip(),
        background="\n\n".join(background_parts).strip(),
        abilities=abilities[:20],
        relationships=relationships[:15],
        notable_quotes=quotes[:10],
        extra_sections=dict(list(extra.items())[:8]),
        sources=sources,
        cached_at=datetime.now(),
    )


def _extract_list_items(text: str) -> list[str]:
    """Extract bullet items, or split prose into sentences if no bullet structure."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Check if content is bullet-structured
    bullet_lines = [l.lstrip("-•*").strip() for l in lines if re.match(r"^[-•*]", l)]
    if len(bullet_lines) >= 2:
        return [l for l in bullet_lines if len(l) > 5]
    # Otherwise return short, sentence-split chunks
    sentences = re.split(r"(?<=[.!?])\s+", " ".join(lines))
    items = [s.strip() for s in sentences if len(s.strip()) > 10]
    return items[:15]  # cap to avoid bloat

18
src/character_details/models.py Normal file

@@ -0,0 +1,18 @@
from pydantic import BaseModel, Field
from datetime import datetime


class CharacterData(BaseModel):
    name: str
    franchise: str
    description: str = ""
    appearance: str = ""
    personality: str = ""
    background: str = ""
    abilities: list[str] = Field(default_factory=list)
    relationships: list[str] = Field(default_factory=list)
    notable_quotes: list[str] = Field(default_factory=list)
    # Extra sections keyed by section title (lowercase)
    extra_sections: dict[str, str] = Field(default_factory=dict)
    sources: list[str] = Field(default_factory=list)
    cached_at: datetime = Field(default_factory=datetime.now)

238
src/character_details/server.py Normal file

@@ -0,0 +1,238 @@
from mcp.server.fastmcp import FastMCP

from .cache import delete_cached, get_cached, list_cached, save_cache
from .fetcher import fetch_character
from .models import CharacterData

mcp = FastMCP(
    "character-details",
    description=(
        "Gather detailed information on fictional characters for storytelling "
        "and image generation. Fetches from Fandom wikis and Wikipedia, "
        "caches results locally for 24 hours."
    ),
)

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

async def _get_or_fetch(name: str, franchise: str) -> CharacterData:
    cached = get_cached(name, franchise)
    if cached:
        return cached
    character = await fetch_character(name, franchise)
    save_cache(character)
    return character
def _format_character(c: CharacterData) -> str:
    parts = [f"# {c.name} ({c.franchise})"]
    if c.description:
        parts.append(f"\n## Description\n{c.description}")
    if c.appearance:
        parts.append(f"\n## Appearance\n{c.appearance}")
    if c.personality:
        parts.append(f"\n## Personality\n{c.personality}")
    if c.background:
        parts.append(f"\n## Background\n{c.background}")
    if c.abilities:
        bullet = "\n".join(f"- {a}" for a in c.abilities)
        parts.append(f"\n## Abilities\n{bullet}")
    if c.relationships:
        bullet = "\n".join(f"- {r}" for r in c.relationships)
        parts.append(f"\n## Relationships\n{bullet}")
    if c.notable_quotes:
        bullet = "\n".join(f'- "{q}"' for q in c.notable_quotes)
        parts.append(f"\n## Notable Quotes\n{bullet}")
    if c.extra_sections:
        extras = "\n\n".join(
            f"### {title.title()}\n{text}"
            for title, text in c.extra_sections.items()
        )
        parts.append(f"\n## Additional Info\n{extras}")
    if c.sources:
        src = "\n".join(f"- {s}" for s in c.sources)
        parts.append(f"\n## Sources\n{src}")
    return "\n".join(parts)
# ---------------------------------------------------------------------------
# Tools
# ---------------------------------------------------------------------------

@mcp.tool()
async def get_character(name: str, franchise: str) -> str:
    """
    Get detailed information about a fictional character.

    Returns structured data including appearance, personality, background,
    abilities, relationships, and source URLs. Uses a 24-hour local cache;
    call refresh_character to force an update.

    Args:
        name: Character name (e.g. "Aerith Gainsborough")
        franchise: Franchise or series (e.g. "Final Fantasy VII")
    """
    character = await _get_or_fetch(name, franchise)
    return _format_character(character)


@mcp.tool()
async def refresh_character(name: str, franchise: str) -> str:
    """
    Force re-fetch character data from external sources, bypassing the local cache.

    Args:
        name: Character name
        franchise: Franchise or series
    """
    character = await fetch_character(name, franchise)
    save_cache(character)
    return _format_character(character)


@mcp.tool()
def list_characters() -> str:
    """List all locally cached characters."""
    cached = list_cached()
    if not cached:
        return "No characters cached yet. Use get_character to fetch one."
    lines = ["Cached characters:\n"]
    for c in cached:
        lines.append(f"- **{c['name']}** ({c['franchise']}) — cached at {c['cached_at']}")
    return "\n".join(lines)


@mcp.tool()
async def remove_character(name: str, franchise: str) -> str:
    """
    Remove a character from the local cache.

    Args:
        name: Character name
        franchise: Franchise or series
    """
    removed = delete_cached(name, franchise)
    if removed:
        return f"Removed {name} ({franchise}) from cache."
    return f"{name} ({franchise}) was not in the cache."
@mcp.tool()
async def generate_image_prompt(
    name: str,
    franchise: str,
    style: str = "",
    scene: str = "",
    extra_tags: str = "",
) -> str:
    """
    Generate an image generation prompt for a character.

    Pulls appearance data and assembles a comma-separated tag list suitable
    for tools like Stable Diffusion, Midjourney, or DALL-E.

    Args:
        name: Character name
        franchise: Franchise or series
        style: Art style hint (e.g. "anime", "oil painting", "pixel art")
        scene: Scene description (e.g. "in a forest", "battle pose")
        extra_tags: Additional comma-separated tags to append
    """
    character = await _get_or_fetch(name, franchise)
    tags: list[str] = [character.name, character.franchise]
    # Use first 400 chars of appearance as descriptors
    if character.appearance:
        tags.append(character.appearance[:400])
    elif character.description:
        tags.append(character.description[:300])
    if style:
        tags.append(f"{style} style")
    if scene:
        tags.append(scene)
    if extra_tags:
        tags.extend(t.strip() for t in extra_tags.split(",") if t.strip())
    tags.extend(["detailed", "high quality"])
    return ", ".join(t.strip() for t in tags if t.strip())


@mcp.tool()
async def generate_story_context(
    name: str,
    franchise: str,
    scenario: str = "",
    include_abilities: bool = True,
) -> str:
    """
    Generate a story/roleplay context snippet for a character.

    Produces a structured reference document an LLM can use to portray
    or write about the character authentically.

    Args:
        name: Character name
        franchise: Franchise or series
        scenario: Optional scenario or setting to include
        include_abilities: Whether to include abilities in the context
    """
    character = await _get_or_fetch(name, franchise)
    parts = [f"# {character.name} — Story Reference", f"**Franchise:** {character.franchise}"]
    if character.description:
        parts += ["\n## Overview", character.description[:600]]
    if character.personality:
        parts += ["\n## Personality", character.personality[:500]]
    if character.background:
        parts += ["\n## Background", character.background[:500]]
    if character.appearance:
        parts += ["\n## Physical Appearance", character.appearance[:400]]
    if include_abilities and character.abilities:
        bullet = "\n".join(f"- {a}" for a in character.abilities[:10])
        parts += ["\n## Abilities & Skills", bullet]
    if character.relationships:
        bullet = "\n".join(f"- {r}" for r in character.relationships[:8])
        parts += ["\n## Key Relationships", bullet]
    if character.notable_quotes:
        bullet = "\n".join(f'- "{q}"' for q in character.notable_quotes[:5])
        parts += ["\n## Notable Quotes", bullet]
    if scenario:
        parts += ["\n## Current Scenario", scenario]
    return "\n".join(parts)
# ---------------------------------------------------------------------------
# Entrypoint
# ---------------------------------------------------------------------------

def main() -> None:
    mcp.run()


if __name__ == "__main__":
    main()