Teaching Ethics with ELIZA: Classroom Exercises for Aspiring Avatar Designers

Unknown
2026-03-11
10 min read

Use ELIZA-based classroom activities to teach avatar designers about the limits of AI, anthropomorphism, bias and safe moderation.

Why ELIZA still matters for avatar designers in 2026

Creators and educators building avatars and virtual personalities face a relentless stream of new SDKs, LLMs and realtime animation tools. Yet the core challenge remains unchanged: users instinctively read human traits into machines. That anthropomorphism drives engagement—and risk. A simple 1960s program, ELIZA, is one of the clearest, lowest-friction ways to teach students and junior designers the limits of conversational AI, the ethics of persona design, and how to build responsible avatar behavior.

Quick summary: What this curriculum delivers

This article provides a ready-to-run classroom module (middle school through undergraduate) that uses ELIZA-based exercises to surface teachable moments about AI ethics, anthropomorphism, bias, privacy, moderation and avatar design. You’ll get learning objectives, timed activities, code-light and code-heavy paths, assessment rubrics and moderation guidance that reflect late-2025 and early-2026 developments in AI and education.

Why use ELIZA in 2026?

By early 2026, educators noticed two important trends: first, students increasingly treat modern LLMs and avatars as social actors; second, clinicians and teachers are being asked to interpret AI chat transcripts (see analysis trends covered in January 2026 reporting). ELIZA—simple, pattern-based, therapist-imitating code—remains pedagogically powerful because it strips away modern model complexity and exposes surface-level conversational tricks.

Use ELIZA to show: what conversational AI can do (pattern matching, prompt chaining), what it cannot (understanding, true empathy), and where designers must intervene to avoid harm. That focused contrast is a high-impact, low-setup strategy for instructors balancing toolchain confusion with ethical training needs.

Learning objectives (module-level)

  • Explain how simple rule-based chatbots like ELIZA differ from modern LLM-based avatars.
  • Identify signs of anthropomorphism and articulate why it matters for audience trust and safety.
  • Design basic avatar guardrails: content filters, disclosure language, and escalation paths for sensitive topics.
  • Detect bias and misrepresentation in conversational flows and suggest mitigation strategies.
  • Reflect on privacy implications and safe data-handling practices for avatar deployments.

Classroom-ready module: 3 sessions (90–120 minutes each)

Session 1 — First contact: Chat with ELIZA (90 minutes)

Goal: Create the cognitive dissonance that teaches students the difference between seeming understanding and real understanding.

  1. Setup (10 min): Brief intro to ELIZA, Joseph Weizenbaum's 1966 program. Explain that it uses pattern templates and rewrites; it does not model mental states.
  2. Warm-up (10 min): Quick brainstorm—what makes a character feel human? List traits (tone, memory, self-reference, repartee).
  3. Hands-on chat (30 min): Students pair up and chat with a hosted ELIZA instance (class can use an in-browser JS version or a Python notebook). Each student saves the transcript. Suggested prompts: “I’m worried about school,” “My friend is angry,” “What should I do about job choices?”
  4. Reflection and annotation (25 min): In pairs, annotate the transcript: highlight lines where ELIZA appears empathetic, confusing, or evasive. Use three tags: Appears human, Non-answer, Potentially harmful.
  5. Group debrief (15 min): Discuss findings. Instructor introduces the idea of teachable moments—where a chatbot’s authority can mislead users—referencing early-2026 classroom reports that showed students quickly map human traits onto even primitive bots.
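
For instructors who want a concrete reference before the hands-on chat, the pattern-and-rewrite mechanism can be sketched in a few lines of Python. The rules and pronoun reflections below are illustrative placeholders, not taken from any particular ELIZA implementation:

```python
import re

# Minimal ELIZA-style rule set: (pattern, response template) pairs.
# Captured groups are echoed back after simple pronoun reflection --
# the same surface trick Weizenbaum's 1966 program used.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i'?m (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i (?:feel|am worried about) (.+)", re.I), "Tell me more about {0}."),
    (re.compile(r"my (.+)", re.I), "Why does your {0} concern you?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(text: str) -> str:
    """Return the first matching templated reply, or a default non-answer."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the classic evasive fallback
```

Students can extend `RULES` during the lab and watch how quickly a handful of templates produces seemingly empathetic replies.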

Session 2 — Analyze transcripts & identify risks (120 minutes)

Goal: Teach students to detect anthropomorphism, bias, and unsafe responses; practice creating content policies for avatars.

  1. Recap (10 min): Revisit the annotated transcripts and the concept of pattern matching.
  2. Mini-lecture (20 min): Explain anthropomorphism effects and ethical implications. Quote the classroom findings from early 2026 showing students' reactions and the broader trend of clients bringing AI chats to therapists, emphasizing that designers must avoid presenting chatbots as clinicians.
  3. Traffic-light risk mapping (30 min): In groups, map transcript lines to Green (safe), Yellow (needs guardrails), and Red (unsafe; escalate). Provide criteria: explicit self-harm mention = Red; vague advice about mental health = Yellow; factual answer about a non-sensitive topic = Green.
  4. Bias spot-the-difference (30 min): Use curated ELIZA variants that include biased pattern templates (e.g., cultural assumptions, gendered phrasing). Students locate biased templates and rewrite neutral replacements.
  5. Design task (30 min): Each group drafts a short avatar behavior policy (max 200 words) that covers: disclosure statement, sensitive topic handling, escalation path, minimal user data retention practice.
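
The traffic-light criteria above can be seeded with a simple keyword heuristic so groups have a concrete starting point to critique. The phrase lists are illustrative placeholders; classroom use should rely on instructor-reviewed lists or a local classifier:

```python
# Illustrative phrase lists -- not a vetted safety taxonomy.
RED_PHRASES = ["hurt myself", "kill myself", "end it all"]
YELLOW_PHRASES = ["depressed", "anxious", "hopeless", "can't sleep"]

def risk_tag(line: str) -> str:
    """Map a transcript line to the traffic-light risk scheme."""
    text = line.lower()
    if any(p in text for p in RED_PHRASES):
        return "RED"      # unsafe: escalate to a human immediately
    if any(p in text for p in YELLOW_PHRASES):
        return "YELLOW"   # needs guardrails: state limits, offer resources
    return "GREEN"        # safe to handle with normal templates
```

A useful discussion prompt: have students find transcript lines this heuristic mis-tags, which motivates the human review step.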

Session 3 — Build, test and moderate (120 minutes; optional split)

Goal: Apply lessons to avatar creation and moderation workflows.

  1. Build paths (choose one):
    • Code-light: Use an online ELIZA toy or no-code chatbot builder to modify response templates and add a mandatory disclosure that it’s a scripted bot.
    • Code-heavy: Provide a simple Python/Node starter (ELIZA via regex patterns). Students add new patterns, logging hooks, and a filter that catches self-harm phrases and routes to a human moderator.
  2. Moderation simulation (30 min): Run a role-play: some students act as users with sensitive prompts; moderators apply escalation rules; designers update templates in real time.
  3. Ethical design critique (30 min): Groups present their modified ELIZA and behavior policy. Class votes on the best mitigation of anthropomorphism and bias.
  4. Assessment & feedback (30 min): Use a rubric (below) to grade: detection of anthropomorphism, quality of policy, technical implementation of guardrails, and clarity in escalation processes.
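
For the code-heavy path, the self-harm filter with a logging hook might be sketched as follows. `bot_reply_fn` and `notify_moderator_fn` are hypothetical callbacks that students would wire to their ELIZA instance and escalation channel:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("eliza-moderation")

# Illustrative trigger list; refine with instructor input.
ESCALATION_PHRASES = ["hurt myself", "kill myself", "in danger"]

def moderated_reply(message: str, bot_reply_fn, notify_moderator_fn) -> str:
    """Check a message before the bot answers; escalate flagged content."""
    if any(p in message.lower() for p in ESCALATION_PHRASES):
        notify_moderator_fn(message)  # e.g. ping the instructor channel
        log.info("Escalated a flagged message to a human moderator.")
        return ("I'm a scripted bot and can't help with this. "
                "A human instructor has been notified.")
    return bot_reply_fn(message)
```

The key design point for critique: the bot never improvises on flagged content, and every escalation leaves a log entry.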

Detailed activities & reproducible assets

Activity A — Transcript annotation template

Provide students with a downloadable transcript template that includes columns: Line, Speaker, Appears Human?, Why?, Risk Tag. This scaffolding helps non-technical students engage with the material.

Activity B — Anthropomorphism checklist (classroom tool)

  • Uses first-person statements or references feelings?
  • Claims to remember previous sessions without designed memory?
  • Gives definitive advice about medical/psychological issues?
  • Expresses certainty in areas where it should be probabilistic?
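
As a discussion aid, the checklist can be turned into a rough automated first pass over bot lines. The cue phrases below are illustrative and are meant to seed annotation, not replace it:

```python
# Illustrative cue phrases keyed to the checklist items above.
CUES = {
    "feelings": ["i feel", "i'm sorry", "that makes me"],
    "false_memory": ["as you said last time", "i remember"],
    "definitive_advice": ["you should", "you must"],
}

def anthropomorphism_cues(bot_line: str) -> list:
    """Return the checklist categories a bot line appears to trigger."""
    text = bot_line.lower()
    return [name for name, phrases in CUES.items()
            if any(p in text for p in phrases)]
```

Students can compare the script's hits against their manual annotations and discuss what the heuristic misses.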

Activity C — Bias rewrite exercise

Provide three biased ELIZA templates (e.g., culturally framed assumptions, gender pronoun defaults, socioeconomic judgments). Students must rewrite to inclusive, neutral templates and explain changes in 100 words or less.

Assessment rubric (sample)

  • Understanding (30 pts): Correctly explains ELIZA mechanism and anthropomorphism (0–30).
  • Risk detection (25 pts): Accurately tags transcript lines; identifies Red/Yellow/Green content (0–25).
  • Policy quality (25 pts): Clear disclosure, escalation, data rules (0–25).
  • Technical mitigation (20 pts): Implementation of filters or guardrails (0–20).

Adaptations by age and program

  • Middle school: More focus on chat annotation, role-play, and basic disclosure language; avoid triggers and require parental/guardian consent for any mental health content.
  • High school: Add bias-rewrite and a simple no-code ELIZA builder; introduce privacy discussions and data-minimization principles.
  • Undergraduate / Professional: Full code path, moderation pipeline design, integration of logging and audit trails for compliance, and assignment to write a policy for a fictional avatar brand.

Safety, privacy and consent

ELIZA is not a therapist. Reinforce this repeatedly. In early 2026 reporting, clinicians noted rising instances of students and clients bringing AI chat transcripts to therapists. That context matters for designers: do not allow a classroom bot to give mental health advice or record sensitive details without oversight.

  • Include a visible disclosure: "I am a scripted chatbot. Not a clinician."
  • Block and escalate: any message indicating self-harm or danger goes to a human instructor or a pre-defined escalation path.
  • Data minimization: store only redacted transcripts for teaching; delete or anonymize identifying info within 7 days.
  • Parental/guardian consent: for minors, get consent for chat sessions and anonymized analysis.
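
The data-minimization rule can be prototyped with a small redaction pass applied before any transcript is stored. The patterns below are illustrative and will not catch every form of identifying information:

```python
import re

# Illustrative PII patterns; a real deployment needs a broader set
# (names, addresses, student IDs) and human spot-checks.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(transcript: str) -> str:
    """Replace common identifiers with placeholders before storage."""
    transcript = EMAIL.sub("[EMAIL]", transcript)
    transcript = PHONE.sub("[PHONE]", transcript)
    return transcript
```

Pair this with a scheduled deletion job so the seven-day retention window is enforced automatically rather than by memory.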

Why this matters for avatar designers and creators

Avatar teams and influencers often prioritize realism—natural voice, memory and emotional cues—to boost engagement. But without intentional guardrails, that same realism creates user misunderstanding and potential harm. ELIZA-based lessons help avatar designers internalize a key design principle: transparency and limitation-first design. Build in honest disclosures, avoid claims of sentience, and include safe escalation mechanics.

Practical design patterns and guardrails

Here are specific patterns to apply when moving from classroom ELIZA experiments to production avatars.

  • Always disclose: Visible, brief statement on every chat interface that explains the avatar’s capabilities and limits.
  • Fail-safe templates: For sensitive keywords (self-harm, violence, medical terms), use a canned response that acknowledges limits and offers verified human resources.
  • Memory transparency: If your avatar stores or recalls user info, present a UI control and log showing what’s stored and how to delete it.
  • Audit logs: Keep tamper-evident logs for content moderation review and provide aggregated examples in post-deployment audits to show due diligence.
  • Human-in-the-loop: For edge cases flagged by keyword or classifier, route interactions to trained moderators or human designers.
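
The fail-safe template pattern might look like this as a sketch; the keyword lists and resource wording are placeholders to adapt per deployment and jurisdiction:

```python
# Illustrative sensitive-topic routing table.
SENSITIVE = {
    "self_harm": ["hurt myself", "suicide"],
    "medical": ["diagnosis", "medication", "dosage"],
}
CANNED = {
    "self_harm": ("I'm a scripted avatar and can't help with this. "
                  "Please reach out to a trusted adult or a crisis line."),
    "medical": ("I can't give medical advice. "
                "Please consult a qualified professional."),
}

def fail_safe(message: str):
    """Return a canned limits-acknowledging reply, or None if safe."""
    text = message.lower()
    for topic, keywords in SENSITIVE.items():
        if any(k in text for k in keywords):
            return CANNED[topic]
    return None  # None means: proceed with normal templates
```

Because the canned replies are fixed strings, they can be reviewed once by policy and legal stakeholders instead of being generated per conversation.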

Tools & resources for instructors (2026)

Use simple, well-supported tools to run the exercises and to avoid surprises in the classroom.

  • ELIZA implementations: lightweight JavaScript ELIZA toys for browser-based demos; Python notebooks with regex-based rules for code labs.
  • No-code chatbot platforms that support template editing—good for low-friction demos.
  • Keyword classification tools for escalation: open-source classifiers or small local models that run on-device to avoid sending student data to third-party APIs.
  • Template libraries: curated neutral phrasing collections to avoid biased language.
  • Policy examples from 2025–2026 industry guidelines on AI transparency and digital therapeutic disclaimers—adapt these into short classroom handouts.

Case study: What students learned after chatting with ELIZA (real-world example)

In early January 2026, reporting highlighted middle school experiments where students conversed with ELIZA and then compared those interactions to modern chatbot outputs. Students were surprised to see emotional language produced by such a simple rule-based system and learned to question perceived understanding. Use this case study to show how even limited systems can produce convincing cues—and why designers must avoid accidental deception.

“Students discovered quickly that the bot mirrored their language, leading them to over-attribute understanding. It’s an accessible way to show the mechanics behind conversational signals.” — classroom instructor, January 2026

Common pitfalls and how to avoid them

  • Pitfall: Treating ELIZA as therapy. Countermeasure: Clear disclaimers and immediate escalation paths for mental-health language.
  • Pitfall: Overfitting to engagement metrics. Countermeasure: Require a behavior policy that prioritizes safety and truthfulness over time-on-site or repeat interaction scores.
  • Pitfall: Ignoring bias in templates. Countermeasure: Peer review of templates with diverse student reviewers and automatic checks for loaded language.
  • Pitfall: Logging sensitive PII. Countermeasure: Minimal logging policy and anonymization tools before storage.

Extending the curriculum: Advanced modules

  • Comparative LLM lab: Contrast ELIZA with a small LLM in a controlled environment to show differences in hallucination types and the need for retrieval/grounding.
  • Persona ethics debate: Students represent stakeholder roles (platform, user, regulator) to argue policy choices for a monetized avatar product.
  • Design audit: Students perform a usability and ethical audit for a fictional avatar that is being deployed on social video platforms.

Actionable takeaways for creators and educators

  • Use ELIZA as a mirror: it makes visible the conversational shortcuts that lead to user misattribution of understanding.
  • Teach disclosure and escalation as non-negotiable features for any chat-enabled avatar.
  • Integrate bias-checking and inclusive language reviews into your template workflows.
  • Adopt minimal data-retention and clear memory controls before enabling avatar personalization.
  • Train moderators on recognizing and handling AI-generated transcripts—therapists and clinicians in early 2026 emphasized the growing need for this skill.

Final reflection: From ELIZA to ethical avatars

ELIZA is more than a retro curiosity. In 2026, it serves as a focused pedagogical tool for showing what conversational cues do—and do not—mean. For creators, influencers and publishers building avatars, these classroom exercises translate directly into product practices: transparency-first persona design, robust moderation and privacy-by-default. Use the module above to instill habits that keep users safe and products trustworthy as avatar tech continues to evolve.

Call to action

Ready to adopt this module? Download the instructor packet, starter code and template library (ELIZA and neutral phrasing) and run a pilot in your next workshop. If you’d like a turnkey curriculum adapted for your age group and platform (Discord, Twitch, Unity), contact our editorial team for an implementation guide and sample grading rubrics tailored to creators and avatar teams.
