MUD on Urbit

MUD Moderation & Governance

Research Doc 21

How admin teams actually handle disputes, bans, appeals, burnout, and day-to-day operations — drawn from real MUD case studies, academic research, and practitioner accounts.


1. Moderation in Practice — Case Studies

1.1 Aardwolf: Lean Team, Heavy Automation

Aardwolf runs one of the longest-lived MUDs (since 1996) with a remarkably small staff. Lasher, the founder and lead coder, maintains a team of roughly 10 active Immortals — deliberately kept small to preserve alignment on vision and values.

The Helper/Advisor system. Below the Immortal tier sit four volunteer roles: Helpers, Testers, Builders, and Advisors. Helpers and Advisors staff a dedicated “Newbie” channel and assist players through their first 200 levels. This creates a buffer layer — most player issues get resolved by experienced players before they ever reach an admin.

Lasher’s philosophy:

  • Minimal staff, maximum automation. Systems handle routine enforcement; admins step in only for edge cases.
  • Zero tolerance for admin abuse. “Any kind of admin abuse is absolutely not tolerated.” Immortals are monitored as closely as players.
  • Recruit from the playerbase. Staff are promoted from within, never hired externally. Self-selection filters for people who genuinely want to improve the game.
  • Separate roles. Great quest coders aren’t necessarily great at handling player disputes — the two skill sets are treated as distinct.
  • Spirit over letter. Aardwolf enforces “the spirit of the rules, not just the letter of the law.” Persistent violators are treated as having opted out of the community.

Rules highlights: 12 core rules covering language (PG-13 in public), no multiplaying, no personal attacks, mandatory bug reporting, no AFK gains, and a logging policy that warns players all commands may be recorded when investigations are active.

Sources: Aardwolf Immortals, Interview with Lasher, Aardwolf Rules


1.2 Achaea / Iron Realms: Professional Staff, Divine Characters, Graduated Penalties

Achaea (1997–present) is one of the few MUDs with paid, professional staff. Iron Realms Entertainment pioneered the microtransaction model that funds full-time development and administration.

The Divine system. Gods are in-character admin characters who serve as patrons of player-run cities. They range from nurturing mentors to demanding tyrants — but their in-character behavior is distinct from their out-of-character administrative role. Separate admin characters (Meletus, Anytus, Lycon, Lathis) handle OOC issues like rule violations and help requests.

The Issue system. Players file formal complaints via the ISSUE command. Key design decisions:

  • Issue once, then wait. No spamming or re-filing.
  • Both sides respond. The accused gets one chance to reply.
  • Admin investigates independently. Decisions are based on the issue, the reply, and the admin’s own research. No hearings or interviews.
  • Escalation to Senior Admin. Disputed decisions can be raised with the Senior Administrator (Lathis) via email.
  • Filing extinguishes IC revenge. If you issue someone, you’ve chosen the OOC path — you can’t also pursue in-character retaliation for the same incident.

Graduated penalty system (“shrubbing”). Achaea uses a structured escalation framework with a 6-month rolling window. If a repeat offense occurs within 6 months of a previous penalty, punishment escalates to the next tier. Examples:

  Offense                          1st                   2nd                    3rd                4th+
  Multiplaying (no interaction)    Warning               3-day disfavour        3-day shrub        7+ day shrub
  Harassment (after IGNORE)        Warning               30-day shrub           180-day shrub      365-day to permanent
  Hate speech                      30-day shrub          90-day shrub           180+ day shrub     Permanent
  Character sharing                Permanent shrub       -                      -                  -
  Seconds abuse                    Warning + deletion    30-day shrub (both)    365-day shrub      Permanent

“Shrubbing” is Achaea’s term for temporary character removal — the character is transformed into a shrub, unable to act. It’s a visible, in-world punishment that carries social weight.
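The rolling-window escalation above reduces to a small lookup: if any penalty for the same offense falls within the window, punishment advances to the next tier; otherwise it resets to the first. A minimal sketch, assuming illustrative tier tables and function names (not Achaea's actual implementation):

```python
from datetime import datetime, timedelta

# Illustrative tier tables, loosely modeled on the published matrix.
TIERS = {
    "harassment": ["warning", "30-day shrub", "180-day shrub", "permanent"],
    "hate_speech": ["30-day shrub", "90-day shrub", "180-day shrub", "permanent"],
}
WINDOW = timedelta(days=182)  # ~6-month rolling window

def next_penalty(offense, history, now):
    """history: list of (timestamp, tier_index) pairs for this player/offense.
    Offenses older than WINDOW are ignored, so the tier resets over time."""
    recent = [tier for ts, tier in history if now - ts <= WINDOW]
    tier = min(max(recent) + 1, len(TIERS[offense]) - 1) if recent else 0
    return TIERS[offense][tier]
```

The key property is that the window looks at the *previous penalty's* recency, so a clean six months always returns a player to the lightest tier.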

Sources: Achaea Issue Filing, Administrative Penalties, Achaea Wiki


1.3 Sindome: Strict RP Enforcement

Sindome (1997–present) is a cyberpunk MOO with some of the strictest roleplay enforcement in the MUD world. Every action is expected to be in-character, and the admin actively polices the IC/OOC boundary.

The Void. Sindome’s primary moderation tool. When a player’s behavior needs investigation or is deemed unacceptable, they’re removed from the game world and placed in a featureless space called “the void.” Duration depends on severity — from a 30-minute cool-down for arguing with admin via xhelp, to indefinite holds for serious violations.

Metagaming enforcement. Sindome treats metagaming as a cardinal sin. Two categories:

  • Active metagaming: Using OOC knowledge to benefit your character. Using Discord to warn friends about in-game threats. Altering behavior toward a character based on your OOC relationship with their player.
  • Passive metagaming: Discussing IC events through OOC channels. Sharing game mechanics information outside the game. Even accidental disclosure of who plays which character can result in suspension.

Conflict of interest rules. If players learn each other’s character identities through private contact, they must report this to staff immediately. Those players cannot then engage in close RP together — no shared housing, employment, gift-giving, or extended plot involvement. This restriction persists until both characters permanently die.

Identity protection. Revealing who plays any character — even accidentally — results in suspension. Doing it purposefully may result in a permanent ban.

Sexual content rules. Players must be 18+. Rape RP is categorically forbidden in every form — perpetrating it, depicting it, or even discussing it — regardless of consent. Players may decline unwanted sexual RP using local OOC commands; continuing after refusal triggers admin intervention and potential banning.

Progressive enforcement:

  • BGBB (forum) access suspension
  • xhelp access suspension
  • OOC channel suspension
  • Full game suspension (by staff vote)
  • Indefinite ban (appealable after 1 year via email)

Reporting obligations. Players engaging in private OOC contact are responsible for reporting rule violations they witness. Failure to report is itself a disciplinary offense.

Sources: Sindome Rules of Conduct, Sindome on MudStats


1.4 Discworld MUD: Player-Elected Governance

Discworld MUD (1991–present) runs one of the most sophisticated player governance experiments in MUD history, drawing inspiration from the board game Nomic.

Player councils. Four councils cover distinct geographic areas of the game world: Ankh-Morpork, Djelibeybi, Bes Pelargic, and one other. Each council consists of elected magistrates — Ankh-Morpork has seven, Djelibeybi has five.

Election process:

  • Magistrates serve 20 real-world week terms
  • Elections have a 14-day nomination phase followed by a 14-day voting phase
  • Only citizens can vote (citizenship is free, requires application)

Magistrate powers:

  • Create new laws and amend existing ones (subject to citizen ratification)
  • Hear cases filed by any player alleging rule violations
  • Impose punishments: fines, blacklisting, banishment from the city, negative titles
  • All punishments are tied to specific cases with transparent reasoning
  • Decisions are appealable

Citizenship system. Players can hold citizenship in multiple city-states simultaneously. Citizens agree to follow local rules and gain benefits like property ownership. Only player characters can be defendants — NPCs are outside the system.

Key design insight: The system separates game-level administration (handled by traditional admin) from community governance (handled by elected players). This gives players genuine agency over social norms while keeping technical and balance decisions with the development team.

Sources: Discworld Player Council, Discworld Citizenship, Discworld Wikipedia


1.5 LambdaMOO: The Democratic Experiment

LambdaMOO is the most studied case in virtual world governance, largely because of Julian Dibbell’s landmark 1993 article “A Rape in Cyberspace.”

The incident. In March 1993, a user named Mr. Bungle exploited a “voodoo doll” subprogram to force other players’ avatars into violently sexual acts without consent. The attack continued for hours and caused genuine emotional distress — one victim reported “post-traumatic tears.” The incident forced the community to confront fundamental questions: Can virtual actions cause real harm? Who has the authority to punish?

The governance crisis. LambdaMOO’s creator, Pavel Curtis (known in-game as Archwizard Haakon), had previously issued a “New Direction” document declaring that wizards would serve only as technicians, not social decision-makers. The community was left to invent self-governance from scratch.

Community response:

  1. A community meeting lasting ~2 hours 45 minutes produced no consensus on punishment
  2. A programmer named JoeFeedback independently deleted Mr. Bungle’s character (“toaded” him) before any formal process concluded
  3. Mr. Bungle’s player (an NYU student, encouraged by dormmates) simply created a new character

The petition/ballot system. In summer 1993, LambdaMOO implemented formal democratic mechanisms:

  • Any player with 30+ days of membership could create a petition
  • Petitions needed 10+ signatures to proceed
  • Wizards vetted petitions for appropriateness, clarity, feasibility, system integrity, and legal compliance
  • Vetted petitions needed 5% of average ballot votes to reach open voting
  • Ballots remained open 2 weeks and required a 2-to-1 majority
  • The system forced signers to scroll through entire documents before signing (preventing uninformed votes)

The arbitration system:

  • Volunteer arbitrators with 4+ months of community experience
  • All dispute parties must agree on an arbitrator
  • Decisions posted to case-specific mailing lists
  • Punishments included quota modifications, object deletion, power reduction, temporary bans, community service, or toading

What happened next. The democratic experiment produced “a small arsenal of mechanisms for dealing with violence” — @boot commands, mediation systems, and community norms. But it also revealed the limits of pure democracy in virtual spaces: the process was slow, participation was uneven, and technical power remained concentrated with the wizards regardless of what the ballots said.

Academic legacy. The incident influenced Lawrence Lessig’s work on cyberlaw. Sociologist David Trend called Dibbell’s article “one of the most frequently cited essays about cloaked identity in cyberspace.” Jennifer Mnookin’s 1996 paper “Virtual(ly) Law” analyzed LambdaMOO’s governance evolution in depth.

Sources: Wikipedia: A Rape in Cyberspace, Dibbell’s Original Article, Stanford: LambdaMOO Governance, Mnookin: Virtual(ly) Law


1.6 When Moderation Goes Wrong — Cautionary Tales

The founder’s trap. Most MUDs begin with a small, tight-knit team where “be fair” is the only needed policy. This breaks down as the game scales. New staff lack the original cohesion, competing visions emerge, and without codified policies, trust collapses between players and staff. The LabMUD article on staff policy documents this pattern: a MUD called Atonement collapsed when staff with conflicting visions (Wild West, cyberpunk, prison ship, lunar colony) tried to coexist without clear governance.

Admin alt abuse. One of the most common MUD scandals involves admin using secret alternate characters to gain unfair advantages — winning PvP fights, manipulating economies, or favoring friends. Sindome’s community has discussed the tension around admin alts at length: players perceive potential abuse and extrapolate, breeding mistrust even when no abuse occurs. The mere possibility of abuse corrodes community trust.

The death spiral. When admin burn out or leave without succession plans, MUDs enter a predictable decline: active development stops, bugs go unfixed, rule enforcement becomes sporadic, engaged players leave, and the game becomes “a place for old admins to visit and chat and admire their handiwork.” Most MUDs that die follow this pattern — not a dramatic collapse but a slow fade.

Overcorrection. Some MUDs respond to moderation challenges by becoming increasingly authoritarian — more rules, harsher punishments, less player autonomy. This drives away the creative, engaged players who build community, leaving only those who don’t push boundaries. Richard Bartle has written about this in “Why Governments Aren’t Gods and Gods Aren’t Governments” — overly rigid authority structures lead to player disengagement and community fragmentation.

Sources: LabMUD Staff Policy, Death of a MUD, Bartle on Governance


2. Ban and Appeals Processes

2.1 How Bans Are Typically Issued

The pattern across well-run MUDs follows a consistent sequence:

  1. Report received. A player files a complaint (Achaea’s ISSUE, Sindome’s xhelp, Aardwolf’s admin channel).
  2. Evidence gathering. Admin review logs, interview involved parties, check command histories. Aardwolf’s logging policy explicitly warns that all commands may be recorded during investigations.
  3. Internal discussion. In multi-admin MUDs, bans are often decided by staff vote (Sindome explicitly requires this for full game suspensions).
  4. Decision and notification. The player receives the decision, usually with reasoning. Achaea’s system automatically provides the penalty rationale.
  5. Graduated response. Nearly all mature MUDs use escalating penalties rather than jumping to bans. Warning -> mute -> temporary suspension -> extended suspension -> permanent ban.

2.2 Types of Bans

Character bans (shrubbing/voiding). The character is disabled but the account remains. Achaea’s “shrubbing” transforms the character into an in-world object. Sindome’s “void” removes the character from the game world. These are time-limited and serve as both punishment and cooling-off period.

Account bans. The entire account is suspended. All characters are inaccessible. This prevents the player from simply switching to an alt to continue playing.

IP bans. Block connections from specific IP addresses. The bluntest instrument — effective against casual ban evaders but easily circumvented with VPNs, proxies, or different networks. Most experienced MUD admins treat IP bans as a speed bump, not a wall.

Site bans. Block entire IP ranges (e.g., all connections from a specific university or ISP). A nuclear option that can collaterally ban innocent players. Used sparingly against severe, persistent ban evaders.

2.3 Ban Evasion Detection

MUD ban evasion detection has historically been simpler than web platform detection, but also harder in some ways:

  • IP tracking. The basic approach. Log IP addresses at login, compare against banned IPs. Easily defeated by VPNs.
  • Pattern matching. Experienced admin recognize behavioral signatures — writing style, play patterns, favorite commands, social connections. Hard to automate but effective.
  • Character similarity. New characters with suspiciously similar names, backstories, or knowledge to banned characters raise flags. Sindome explicitly prohibits creating similarly-named characters after a character death.
  • Connection fingerprinting. Client type, terminal settings, connection timing. Less reliable than web fingerprinting (100+ browser signals) but still useful as corroborating evidence.
  • Social graph analysis. Tracking who a suspected evader interacts with. Banned players returning on new accounts often gravitate toward the same social circles.
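Individually, each of these signals is weak; in combination they become useful. A hypothetical sketch of combining them into a rough suspicion score (the weights and field names are invented for illustration, not drawn from any real MUD):

```python
from difflib import SequenceMatcher

def evasion_score(candidate, banned):
    """Combine weak evasion signals into a rough score in [0, 1].
    `candidate` and `banned` are dicts with 'ip', 'name', 'contacts' keys.
    Weights are illustrative placeholders, not tuned values."""
    score = 0.0
    if candidate["ip"] == banned["ip"]:
        score += 0.4                                  # same login IP
    name_sim = SequenceMatcher(None, candidate["name"].lower(),
                               banned["name"].lower()).ratio()
    score += 0.3 * name_sim                           # similar character name
    if banned["contacts"]:
        shared = set(candidate["contacts"]) & set(banned["contacts"])
        score += 0.3 * len(shared) / len(banned["contacts"])  # same circle
    return round(score, 2)
```

A score like this should flag accounts for human review, never auto-ban: every signal here has innocent explanations (shared households, common name fragments, overlapping friend groups).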

2.4 Formal Appeals

Sindome: Appeals permitted after 1 year via email to support. Senior staff review and respond. No timeline guarantees.

Achaea: Disputed decisions can be raised with the Senior Administrator via email. No formal appeals process for shrubbing — the graduated system is considered the process itself.

General best practices from MUD experience:

  • Cool-down period. Require a waiting period before appeal (Sindome’s 1-year minimum is on the extreme end; many MUDs use 30–90 days).
  • Different reviewer. The person who issued the ban shouldn’t be the sole reviewer of the appeal.
  • Written appeals only. Prevents emotional confrontations and creates a record.
  • Criteria for reinstatement. Does the player demonstrate understanding of what they did wrong? Have they shown genuine reflection? Do they acknowledge the rules they violated?
  • Conditional reinstatement. Some MUDs allow return with conditions — probationary period, restricted access, automatic escalation for any future violations.

2.5 Documentation and Record-Keeping

Admin notes. Reddit’s “mod note” system mirrors what MUDs have done for decades — attaching notes to player accounts that persist across admin sessions. Critical for continuity when different admins handle different incidents involving the same player.
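The mechanism is simple to build; the value is entirely in the discipline of using it. A minimal persistent note store, as a sketch (class and field names are hypothetical):

```python
import json
import time
from pathlib import Path

class AdminNotes:
    """Minimal per-player note store persisted to disk, so notes
    survive admin sessions and any staffer sees prior context."""
    def __init__(self, path):
        self.path = Path(path)
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else {}

    def add(self, player, author, text):
        entry = {"ts": time.time(), "by": author, "text": text}
        self.notes.setdefault(player, []).append(entry)
        self.path.write_text(json.dumps(self.notes))  # write-through persistence

    def history(self, player):
        return self.notes.get(player, [])
```

A real deployment would also record which admin *read* the notes, tying into the two-way accountability point below.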

Log retention. Well-run MUDs maintain logs of all administrative actions, not just player actions. This creates accountability in both directions.

The second-chance question. Research on ban appeals (including a study published in the Journal of Computer-Mediated Communication) found that appeals are most effective when they require the banned player to demonstrate genuine reflection and awareness of community norms, rather than simply apologizing. The asymmetry problem is real: users can submit low-effort appeals, while admin spend significant time investigating each one.

Sources: Sindome Rules, Achaea Penalties, Ban Evasion Detection, AppealMod Research


3. Admin Team Management

3.1 Burnout — Why It Happens, How to Prevent It

Admin burnout is the leading cause of MUD death. Research on volunteer content moderators (published in New Media & Society, 2024) identifies the core drivers:

Why it happens:

  • Compassion fatigue. Constantly dealing with angry, upset, or manipulative players drains emotional reserves.
  • Effort asymmetry. Players generate problems in seconds; admin spend hours investigating and resolving them.
  • Under-appreciation. Volunteers invest significant time with little recognition. Players notice bad moderation instantly but rarely acknowledge good moderation.
  • Scope creep. Admin roles expand beyond original expectations. A builder who agreed to create areas finds themselves also handling disputes, running events, fixing bugs, and mentoring new staff.
  • Toxic exposure. Repeated exposure to harassment, abuse, and manipulative behavior takes a psychological toll.
  • Decision fatigue. Every ban, every rule interpretation, every dispute requires judgment. The cumulative weight is exhausting.

How to prevent it:

  • Role separation. Aardwolf’s model: builders build, moderators moderate. Don’t ask one person to do everything.
  • Automation. Let systems handle routine enforcement (spam detection, rate limiting, profanity filtering). Reserve human judgment for edge cases.
  • Rotation. Don’t let the same admin handle all disputes. Share the emotional load.
  • Clear boundaries. Define what admin are expected to do, how much time they’re expected to invest, and when they can say “not today.”
  • Recognition. Acknowledge admin work publicly. Players should know who keeps the lights on.
  • Support structures. Admin need a private space to vent, debrief, and support each other. Moderation is emotional labor.

3.2 Succession Planning

The bus factor. If one person holds all the keys — the server access, the codebase knowledge, the domain registration, the player trust — the MUD dies when they leave. Barry Hollander’s “Death of a MUD” describes this pattern: the game persists as a ghost town where old admins occasionally visit and reminisce.

Practical measures:

  • Shared access. At least 2–3 people should have server access, domain control, and codebase access.
  • Documentation. Write down how things work. Not just the code, but the procedures — how to handle a ban appeal, how to restart the server, how to resolve common disputes.
  • Mentorship. Senior admin should actively train replacements, not hoard knowledge.
  • Graceful transitions. When an admin wants to leave, give them an exit path that doesn’t involve abandoning the game overnight.

3.3 Recruitment and Vetting

Lasher’s approach (recruit from the playerbase) is the standard across MUDs, for good reason:

  • Known quantities. You’ve seen how they interact with other players, handle conflict, and contribute to the community.
  • Investment. They already care about the game. External hires may lack emotional investment.
  • Cultural fit. They understand the norms and history of the community.

Vetting considerations:

  • Maturity and emotional stability matter more than technical skill. You can teach someone to use admin tools; you can’t teach them to handle a screaming player at 2 AM.
  • Look for people who resolve conflicts, not people who start them.
  • Trial periods are essential. Grant limited powers first and observe how they’re used.
  • References from other players carry weight.

3.4 Admin Codes of Conduct

Core principles found across well-run MUDs:

  • No admin alt advantage. Admin characters should not benefit from admin knowledge.
  • No favoritism. Friends and enemies get the same treatment under the rules.
  • No retaliation. Players who complain about admin should not face consequences for complaining.
  • Transparency in process, privacy in details. Players should understand how moderation works without seeing every case’s specifics.
  • Recusal. Admin should step back from cases involving people they have personal relationships with.

3.5 Internal Disagreements

The LabMUD analysis identifies the pattern: admin disagree on vision, nobody has documented what the vision is, and the argument becomes personal. Prevention:

  • Written vision document. What is this game? What is it trying to be? What is it not?
  • Decision-making framework. Who has final say on what? When is consensus required vs. when can one person decide?
  • Private disagreement, public unity. Admin can argue behind closed doors, but present a unified front to players.
  • Escalation path. When two admin disagree and can’t resolve it, who breaks the tie?

3.6 Who Watches the Watchers?

This is the fundamental unsolved problem in MUD governance. Approaches:

  • Monitoring admin activity. Lasher: “Imms are monitored as much as players think the players are monitored.” Log admin commands. Review admin actions.
  • Multiple admin review. No single admin should have unchecked power. Bans should be reviewed by peers.
  • Player feedback channels. Anonymous or semi-anonymous ways for players to report admin misconduct.
  • Term limits. Some MUDs rotate admin roles to prevent entrenchment. This trades institutional knowledge for accountability.
  • External oversight. Achaea’s model — a paid Senior Administrator who reviews disputed decisions — provides a backstop. But this requires resources most MUDs lack.

Sources: Volunteer Moderator Burnout, LabMUD Staff Policy, Death of a MUD


4. Automated Moderation Systems

4.1 Spam Detection and Rate Limiting

MUDs have used rate limiting since the early days — the concept is simple but critical:

  • Command throttling. Limit how many commands a player can execute per second. Prevents macro abuse and channel flooding.
  • Channel cooldowns. Minimum time between messages on public channels. Aardwolf’s Rule 9 (“No Command Spam”) is enforced partly through automated systems.
  • Repeat detection. Flag or block identical messages sent in rapid succession.
  • Progressive penalties. First offense: automatic mute for N minutes. Repeated offenses: longer mutes, then admin notification.
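The standard implementation of command throttling is a token bucket: a sustained rate with a bounded burst allowance. A self-contained sketch (the rate and burst values are illustrative):

```python
import time

class TokenBucket:
    """Classic token-bucket throttle: `rate` commands/second sustained,
    with bursts of up to `burst` commands. The injectable clock makes
    the logic testable without real delays."""
    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.burst, self.clock = rate, burst, clock
        self.tokens, self.last = burst, clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Denied commands can be silently dropped, queued, or counted toward the progressive-penalty ladder described above.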

4.2 Profanity Filters

MUDs handle profanity filtering with varying philosophication:

  • Replacement filters. Swap offensive words with symbols (***) or humorous alternatives. Simple but easily circumvented through creative spelling.
  • Channel-specific filters. Aardwolf maintains PG-13 standards on public channels but allows more latitude in private communication and adult-flagged channels.
  • Cultural sensitivity. Filters must account for multiple languages, cultural context, and evolving slang. A word offensive in one culture may be innocent in another.
  • Custom blocklists. Players can maintain personal ignore lists and word filters, giving them agency over their own experience.
  • The Scunthorpe problem. Overly aggressive filters catch innocent words containing offensive substrings. Good filters use word boundaries and context awareness.
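The word-boundary fix for the Scunthorpe problem is a one-line regex change. A minimal sketch, using a tiny hypothetical blocklist (real lists are maintained per community):

```python
import re

# Hypothetical two-word blocklist for illustration only.
BLOCKLIST = ["cunt", "scum"]
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b",
    re.IGNORECASE,
)

def filter_line(text):
    """Mask blocked words only at word boundaries, so place names like
    'Scunthorpe' (which merely contain a bad substring) pass untouched."""
    return PATTERN.sub(lambda m: "*" * len(m.group()), text)
```

Boundary matching defeats the false-positive problem but not creative misspellings; catching those requires normalization (leet-speak mapping, repeated-letter collapsing) layered on top.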

4.3 Bot Detection — Alter Aeon’s “Bot Thwacker”

Alter Aeon takes a unique approach: bots are technically permitted, but the game fights back.

The Bot Thwacker is a fully automated system that hunts for bot-like behavior. Key characteristics:

  • Invisible operation. Runs independently from admin, notifying them only when bots are found or penalized.
  • Secret algorithm. The exact detection method is undisclosed and periodically updated in a cat-and-mouse game with bot creators.
  • Penalties are in-game, not bans. Detected bots lose experience instead of gaining it, and their spell/skill proficiency degrades. This is elegant — it makes botting counterproductive without requiring admin intervention.
  • Graduated severity. Simple bots (spam-casting a spell overnight) are caught quickly. Sophisticated bots with varied actions, multiple locations, and diverse targets evade longer — but “your bot will never be successful forever; no matter how good it is, Dentin will always catch up.”

Why this matters for design: Alter Aeon’s approach acknowledges that the line between “bot” and “scripted client with triggers” is blurry, especially for accessibility users who may rely on automation. Rather than banning all automation, they make exploitative automation self-defeating.
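Alter Aeon's actual algorithm is secret, but the general idea can be illustrated: a repetitive command stream has low Shannon entropy, while human play is varied. This is purely a hypothetical sketch of one such signal, not their method:

```python
import math
from collections import Counter

def command_entropy(commands):
    """Shannon entropy (bits) of a command stream. Near zero means
    highly repetitive (bot-like); higher means varied (human-like)."""
    counts = Counter(commands)
    n = len(commands)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_botlike(commands, threshold=0.5, min_sample=100):
    """Flag only with a large sample; thresholds would be tuned and
    kept secret in a real deployment."""
    return len(commands) >= min_sample and command_entropy(commands) < threshold
```

A production detector would combine many such signals (timing regularity, location diversity, target diversity) precisely because sophisticated bots learn to randomize any single one.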

4.4 Automated Logging and Alert Systems

Command logging. Most MUDs log all commands server-side, at least temporarily. Aardwolf’s policy explicitly states: “Except for commands involving your password, anything has the potential to be logged.”

Trigger words and escalation. Modern chat moderation systems use pattern matching (keywords, regex, account metadata) to flag content for human review. Discord’s AutoModerator is a well-known example. MUD equivalents typically:

  • Flag uses of slurs or hate speech for immediate admin review
  • Alert admin when players mention specific sensitive topics
  • Auto-mute or auto-void players who trigger specific patterns
  • Route flagged content to on-duty admin via a staff channel

Priority systems. Not all flags are equal. High-severity flags (credible threats, hate speech, CSAM-adjacent content) should route to admin immediately. Low-severity flags (mild language violations, borderline spam) can queue for batch review.
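Severity-based routing maps naturally onto a priority queue. A sketch, with an invented severity table (real categories and weights vary per game):

```python
import heapq

# Illustrative severity map: 0 is most urgent.
SEVERITY = {"threat": 0, "hate_speech": 0, "harassment": 1,
            "spam": 2, "language": 3}

class FlagQueue:
    """High-severity flags surface first for on-duty admin;
    low-severity flags wait their turn for batch review."""
    def __init__(self):
        self.heap, self.n = [], 0

    def flag(self, category, detail):
        self.n += 1  # tie-breaker preserves arrival order within a severity
        heapq.heappush(self.heap, (SEVERITY[category], self.n, category, detail))

    def next(self):
        _, _, category, detail = heapq.heappop(self.heap)
        return category, detail
```

In practice the top severity tier would bypass the queue entirely and page on-duty staff directly.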

4.5 Chat Moderation Bots

Some MUDs deploy in-game bot characters that:

  • Monitor public channels for rule violations
  • Automatically enforce rate limits and language standards
  • Provide warnings before escalating to admin
  • Maintain statistics on channel activity and violation frequency

The key principle: automated systems should handle clear-cut violations (spam, slurs, rate abuse) and flag ambiguous cases for human judgment. Over-automation leads to false positives and player frustration; under-automation overwhelms admin.

Sources: Alter Aeon Botting, Content Moderation Best Practices, Aardwolf Rules


5. Player Rights and Community Standards

5.1 Real Rules Documents

The best MUD rules share common structural elements:

Aardwolf (12 rules, concise):

  • PG-13 language in public; no multiplaying; no personal attacks; no kill-stealing; mandatory bug reporting; no powerleveling; no AFK gains; logging notice; no command spam; no rule evasion; no advertising; raiding rules.
  • Philosophy: “the spirit of the rules, not just the letter of the law.”

Sindome (extensive, heavily detailed):

  • Separate sections for IC conduct, OOC conduct, metagaming, sexual content, conflict of interest, identity protection, and reporting obligations.
  • Notable: Players have an obligation to report violations they witness during private OOC contact.

Achaea (graduated penalty matrix):

  • Published penalty progressions for each offense category.
  • Notable: The 6-month rolling window for escalation. The distinction between “warning” and “active penalty” tiers.

5.2 Writing Fair, Enforceable Rules

Principles drawn from decades of MUD governance:

Be specific enough to enforce, general enough to cover edge cases. “No harassment” is too vague. “No continued unwanted contact after being asked to stop” is enforceable. But you also need a catch-all: “The administration reserves the right to address behavior that violates the spirit of these rules.”

Explain the why. Players follow rules they understand. “No metagaming” doesn’t help if players don’t know what metagaming is. Sindome’s rules include detailed definitions and examples.

Define consequences upfront. Achaea’s published penalty matrix is a model. Players know exactly what will happen before they act. This removes the perception of arbitrary enforcement.

Rules should be findable. Accessible in-game (help rules), on the website, and in new player orientation. If players have to search for the rules, they won’t read them.

Update rules, not traditions. When community norms evolve, update the written rules to match. Unwritten rules breed inconsistency.

5.3 Transparency vs. Privacy in Moderation

A fundamental tension:

Arguments for transparency:

  • Players can verify that rules are applied consistently
  • Public accountability discourages admin abuse
  • Community can learn from others’ mistakes
  • Reduces conspiracy theories about secret admin agendas

Arguments for privacy:

  • Protecting the accused from mob justice
  • Allowing admin to investigate without tipping off suspects
  • Preventing players from gaming the system by learning exactly what gets caught
  • Respecting the dignity of people who make mistakes

Best practice: Transparent process, private details. Publish how moderation works, what the rules are, and what the consequences are. Keep individual case details confidential. Publish aggregate statistics (e.g., “X bans issued this month”) without identifying individuals.

5.4 Player Feedback Channels

  • Dedicated feedback commands or forums. Aardwolf welcomes ideas and feedback from all players.
  • Anonymous reporting. Let players report issues without fear of retaliation.
  • Regular community surveys. Ask players what’s working and what isn’t.
  • Town halls. Periodic community meetings where admin answer questions and discuss upcoming changes.

5.5 Community Input on Rules Changes

Discworld’s model: Player-elected magistrates can create and amend laws, subject to citizen ratification. This gives the community genuine legislative power over social norms.

LambdaMOO’s model: Petition and ballot system with signature thresholds, wizard vetting, and supermajority requirements. Technically democratic but practically limited by wizard veto power.

Pragmatic middle ground: Admin propose changes, community discusses for a defined period, admin make the final decision informed by community input. Not democratic, but responsive. Most successful MUDs use this model implicitly.

5.6 The Balance Between Freedom and Safety

Richard Bartle’s work identifies the core tension: too much freedom leads to griefing and harassment that drives away new players; too much control stifles the emergent social dynamics that make virtual worlds compelling.

His framework (elaborated in Designing Virtual Worlds, Chapter 24: “Rights”) argues that designers have ethical obligations to players, including providing a positive game experience while respecting player agency. The designer’s role is to know what will provide players with a positive experience — and sometimes that means protecting them from each other.

The practical balance: default to freedom, intervene for safety. Let players resolve their own conflicts when possible. Step in when power imbalances make self-resolution impossible (experienced player harassing a newbie, organized groups targeting individuals, sexual harassment).

Sources: Bartle - Designing Virtual Worlds, Bartle on Ethics, Achaea Penalties


6. Urbit-Specific Moderation Considerations

Building a MUD on Urbit fundamentally changes the moderation landscape. Some problems get easier; some get harder; some transform entirely.

6.1 @p Identity Makes Bans Meaningful

On traditional MUDs, bans are a cat-and-mouse game. Players create new accounts from new IPs and come back. The ban is an inconvenience, not a consequence.

On Urbit, your @p is a scarce, owned, persistent identity. Getting your planet banned from a MUD means:

  • Financial cost. Planets cost money. Creating a throwaway to evade a ban isn’t free.
  • Reputation cost. The banned @p’s reputation is damaged across the entire network, not just in one game. If a player has spent years building reputation through %groups, %pals, and other Urbit apps, a ban carries social consequences far beyond the MUD.
  • No anonymous fallback. Unlike the web, there’s no easy way to create a disposable Urbit identity. Comets exist but carry no reputation weight and can be blocked categorically.

Implications for design: Bans can be shorter and less punitive because they’re more meaningful. A 24-hour ban on Urbit carries more weight than a permanent ban on a free-to-play MUD, because the player can’t just make a new account.

6.2 Decentralized Moderation

Traditional MUDs have a central server with absolute authority. The admins control everything — they can read any message, modify any character, ban any player. This is simple but creates a single point of failure and a single point of abuse.

On Urbit, the question becomes: who moderates when there’s no central server?

Possible models:

Host authority. The ship hosting the MUD has admin powers, similar to how %groups hosts control their groups. Simple, familiar, but centralizes power in one entity.

Federated moderation. Multiple ships share hosting responsibilities and moderation authority. Decisions require consensus or majority vote. More resilient but slower to act.

Reputation-gated access. Instead of explicit moderation, gate access based on reputation scores derived from network behavior. Ships with poor reputations can’t join. This is the Urbit-native approach — let the identity layer do the moderation.

Community governance. Discworld-style elected councils, but backed by cryptographic identity. Elected moderators serve terms, with powers enforced by the MUD’s Gall agent.
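The federated model above hinges on one mechanism: no single ship can act alone. A minimal sketch of the majority-vote rule, in Python — the ship names, the `ModerationAction` type, and the approval flow are all hypothetical, not an Urbit API:

```python
# Federated moderation sketch: an action takes effect only once a
# majority of the hosting ships have approved it.
from dataclasses import dataclass, field


@dataclass
class ModerationAction:
    target: str                      # @p of the ship being acted on
    kind: str                        # e.g. "mute", "kick", "ban"
    approvals: set = field(default_factory=set)


class FederatedModerators:
    def __init__(self, hosts):
        self.hosts = set(hosts)      # ships sharing moderation authority

    def approve(self, action: ModerationAction, host: str) -> bool:
        """Record one host's approval; True once a strict majority agrees."""
        if host not in self.hosts:
            raise ValueError(f"{host} is not a hosting ship")
        action.approvals.add(host)
        return len(action.approvals) > len(self.hosts) // 2


mods = FederatedModerators(["~zod", "~nec", "~bud"])
ban = ModerationAction(target="~sampel-palnet", kind="ban")
mods.approve(ban, "~zod")            # 1 of 3: no effect yet
mods.approve(ban, "~nec")            # 2 of 3: majority reached
```

The trade-off the text notes is visible here: resilience costs latency, since nothing happens until a second (or nth) moderator responds.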

6.3 Reputation Systems Tied to Cryptographic Identity

Urbit’s pseudonymous reputation system creates unique possibilities:

  • Every action affects reputation. Unlike throwaway accounts, your Urbit ID accumulates history over time. Bad behavior follows you.
  • Reputation is cross-application. A player known as a troll in %groups or %furum carries that reputation into the MUD. And MUD behavior affects their reputation elsewhere.
  • Reputation is verifiable. Because it’s tied to a cryptographic identity, reputation claims can be verified without revealing real-world identity.
  • Sybil resistance. The cost of Urbit IDs makes it impractical to create armies of fake accounts to manipulate reputation systems.

Design consideration: A MUD could query a player’s network-wide reputation before granting access, offer different trust levels based on reputation, or weight votes in community governance by reputation score. The galaxy/star/planet hierarchy already provides a rough reputation proxy — galaxies and stars represent larger investments and are treated with more trust by default.
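The design consideration above — rank as a baseline, reputation on top, trust tiers and weighted votes derived from both — can be sketched as follows. The score source, tier thresholds, and vote weights are illustrative assumptions; Urbit has no standard reputation API today:

```python
# Reputation-gated access sketch: combine a ship's rank with a
# (hypothetical) app-supplied reputation score in [0, 1] to produce
# a trust tier and a governance vote weight.

RANK_BASELINE = {"galaxy": 3, "star": 2, "planet": 1, "moon": 0, "comet": 0}


def trust_tier(rank: str, reputation: float) -> str:
    """Rank provides the baseline; earned reputation raises the tier."""
    if rank == "comet":
        return "blocked"             # comets carry no reputation weight
    score = RANK_BASELINE.get(rank, 0) + reputation
    if score >= 2.5:
        return "trusted"
    if score >= 1.0:
        return "member"
    return "probation"


def vote_weight(rank: str, reputation: float) -> int:
    """Members get one vote; trusted ships count double; others none."""
    return {"trusted": 2, "member": 1}.get(trust_tier(rank, reputation), 0)
```

Note how the hierarchy acts as the rough proxy the text describes: a star starts above the "member" threshold before earning any in-game reputation, while a moon must earn its way in.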

6.4 Ship-Level Blocking and Filtering

Urbit’s architecture provides built-in moderation primitives:

  • Ship-level blocking. Any ship can refuse communication with any other ship. This is absolute and unilateral — no appeal needed, no admin intervention required.
  • Content filtering. Ships can filter incoming content based on source, content type, or any other criteria. Players can maintain personal blocklists that persist across sessions and applications.
  • Rank-based filtering. %groups already supports banning by Urbit rank (galaxy, star, planet, moon, comet). A MUD could block all comets by default, requiring players to use full planets — immediately raising the cost of ban evasion.
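Rank-based filtering is cheap to implement because a ship's rank is visible in the shape of its @p: galaxies are one 3-letter syllable (~zod), stars one 6-letter word (~marzod), planets two words, moons four, comets eight. A minimal sketch of a default-deny policy for comets (phonemic-syllable validation is omitted; a real implementation would parse the @p properly):

```python
# Rank-based filtering sketch: infer rank from @p shape, then apply
# a blocklist of ranks. Comets are blocked by default, raising the
# cost of ban evasion to at least the price of a planet.

def ship_rank(patp: str) -> str:
    # Comets use a double hyphen between halves, so drop empty segments.
    words = [w for w in patp.lstrip("~").split("-") if w]
    if len(words) == 1:
        return {3: "galaxy", 6: "star"}.get(len(words[0]), "invalid")
    return {2: "planet", 4: "moon", 8: "comet"}.get(len(words), "invalid")


BLOCKED_RANKS = {"comet", "invalid"}     # default-deny for comets


def may_connect(patp: str) -> bool:
    return ship_rank(patp) not in BLOCKED_RANKS
```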

6.5 Community Governance on Urbit

%groups provides a template:

  • Host controls. The group host sets rules, invites/removes members, and assigns roles.
  • Role-based permissions. Different roles can have different capabilities (post, moderate, admin).
  • Ban mechanics. Ships can be kicked or banned from a group, and entire ranks can be banned. Admins can reverse a ban to re-invite a ship.

A MUD could extend this model:

  • In-game council elections. Using Urbit’s identity system for verifiable, Sybil-resistant voting.
  • Transparent moderation logs. Published to a %group channel so all players can see what actions were taken and why.
  • Cross-MUD reputation. If multiple MUDs exist on Urbit, they could share ban lists or reputation scores, making bad actors unwelcome across the ecosystem.
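A transparent moderation log and a shared ban list both reduce to the same data structure. A minimal sketch — the field names and the JSON export format are assumptions, and publishing to a %groups channel is left abstract:

```python
# Moderation log sketch: append-only entries that could be published
# to a %groups channel, plus a JSON ban-list export that another
# Urbit MUD could import for cross-MUD enforcement.
from dataclasses import dataclass, asdict
import json
import time


@dataclass(frozen=True)
class ModLogEntry:
    moderator: str    # @p of the acting moderator
    target: str       # @p acted upon
    action: str       # "warn" | "mute" | "ban"
    reason: str
    timestamp: float


log: list[ModLogEntry] = []


def record(moderator: str, target: str, action: str, reason: str) -> ModLogEntry:
    """Append an entry; in practice this is where it would be published."""
    entry = ModLogEntry(moderator, target, action, reason, time.time())
    log.append(entry)
    return entry


def shared_ban_list() -> str:
    """Export current bans as JSON that another MUD could import."""
    return json.dumps([asdict(e) for e in log if e.action == "ban"])
```

Because every entry names the acting moderator's @p, the log itself is an accountability mechanism: admin actions are as attributable as player actions.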

6.6 The Advantage of Persistent Identity

The fundamental insight: Urbit solves the accountability problem that has plagued virtual worlds since 1978.

Every cautionary tale in MUD moderation history — LambdaMOO’s Mr. Bungle, admin alt abuse, ban evasion, throwaway account trolling — stems from the same root cause: identity is cheap and disposable. When creating a new identity costs nothing, accountability is impossible to maintain.

Urbit inverts this. Identity is expensive, persistent, and carries reputation. This doesn’t eliminate the need for moderation — humans will always find ways to conflict — but it changes the dynamics fundamentally:

  • Deterrence works. When bad behavior has lasting consequences tied to a valuable identity, most people moderate themselves.
  • Bans stick. Ban evasion becomes expensive rather than trivial.
  • Trust is earnable. New players can build trust over time through consistent behavior, and that trust is verifiable.
  • Decentralization is feasible. Democratic governance models like Discworld’s councils or LambdaMOO’s ballot system become more practical when votes are tied to verified, Sybil-resistant identities.

The open question: how to handle the transition from traditional MUD culture (where anonymity is expected) to Urbit’s pseudonymous identity model. Players accustomed to consequence-free alting may resist. The design challenge is making persistent identity feel like a feature, not a constraint.

Sources: Urbit Pseudonymous Reputation, Urbit Address Space, Urbit Groups API, Urbit Beliefs and Principles


Key Takeaways for Urbit MUD Design

  1. Start with clear, published rules. Aardwolf’s 12-rule model is a good template. Keep it concise, explain the why, and publish consequences.

  2. Use graduated penalties. Achaea’s escalating system with a rolling window is the gold standard. Adapt it for Urbit’s identity model (shorter penalties carry more weight when identity is persistent).

  3. Automate routine enforcement. Spam detection, rate limiting, basic profanity filtering. Reserve human judgment for ambiguous cases. Alter Aeon’s Bot Thwacker shows how elegant automation can be.

  4. Separate roles. Builders, moderators, and technical admins have different skills. Don’t make one person do everything.

  5. Plan for succession from day one. Document everything. Share access. Mentor replacements. The MUD should outlive any individual admin’s involvement.

  6. Leverage Urbit’s identity layer. @p identity, ship-level blocking, rank-based filtering, and cross-app reputation are moderation tools no traditional MUD has ever had. Use them.

  7. Consider player governance. Discworld’s elected councils + Urbit’s Sybil-resistant identity = a governance system that could actually work at scale. Start small (advisory councils) and expand based on community maturity.

  8. Default to freedom, intervene for safety. Let players handle their own conflicts when they can. Step in when power imbalances make self-resolution impossible.
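Takeaway 2's rolling-window escalation can be sketched concretely. The window length and tier durations below are illustrative placeholders, not Achaea's actual values: each new offense counts only the prior offenses still inside the window, so a clean stretch resets a player to the bottom tier.

```python
# Graduated penalties sketch: escalate by the number of offenses
# inside a rolling window; offenses older than the window expire.

WINDOW = 90 * 24 * 3600                      # 90-day window, in seconds
TIERS = [3600, 24 * 3600, 7 * 24 * 3600]     # 1 hour, 1 day, 1 week


def penalty(history: list[float], now: float) -> tuple[int, list[float]]:
    """Return (ban duration in seconds, updated offense history)."""
    recent = [t for t in history if now - t <= WINDOW]   # expire old offenses
    recent.append(now)
    tier = min(len(recent) - 1, len(TIERS) - 1)          # cap at the top tier
    return TIERS[tier], recent
```

On Urbit, where a ban actually sticks to a valuable @p, the same structure works with shorter tiers than a free-to-play MUD would need.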