As AI-driven chatbots enter college courses, the potential to offer students 24/7 support is game-changing. However, there’s a critical caveat: when we customize chatbots by uploading documents, we don’t just add knowledge — we introduce biases. The documents we choose influence chatbot responses, subtly shaping how students interact with course material and, ultimately, how they think. So, how can we ensure that AI chatbots promote critical thinking rather than merely serving to reinforce our own viewpoints?


How Course Chatbots Differ from Administrative Chatbots

Chatbot teaching assistants have been around for some time in education, but low-cost access to large language models (LLMs) and accessible tools now make it easy for instructors to create customized course chatbots. Unlike chatbots used in administrative settings that rely on a defined “ground truth” (e.g., policy), educational chatbots often cover nuanced and debated topics. While instructors typically bring specific theories or perspectives to the table, a chatbot trained with tailored content can either reinforce a single view or introduce a range of academic perspectives.

With tools like ChatGPT, Claude, Gemini, or Copilot, instructors can upload documents that ground and steer chatbot responses (typically via retrieval over the uploaded files rather than true fine-tuning). This customization allows a chatbot to give nuanced answers aligned with course-specific materials. But unlike administrative chatbots that reference well-defined facts, course chatbots demand greater ethical care because academic content is often contested.

Curating Content for Classroom Chatbots

Having a 24/7 teaching assistant can be a powerful resource, and today’s tools make it easy to upload course documents and adapt LLMs to specific curricula. Options like OpenAI’s GPT Assistant, IBL’s AI Mentor, and Druid’s Conversational AI allow instructors to shape the knowledge base of course-specific chatbots. However, curating documents goes beyond technical ease — the content chosen affects not only what students learn but also how they think.

The documents you select will significantly shape, though not dictate, chatbot responses. Combined with the LLM’s base model, chatbot instructions, and the conversation context, the curated content influences chatbot output — for better or worse — depending on your instructional goals.
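The interplay described above can be sketched in miniature. The function below is an illustrative assembly of chatbot instructions, curated document excerpts, and conversation history into a single prompt; the structure, field names, and sample sources are assumptions for illustration, not any vendor's actual pipeline.

```python
# Illustrative sketch: how instructions, curated materials, and conversation
# context jointly shape what the model sees (and therefore what it says).

def build_prompt(instructions, curated_excerpts, conversation, question):
    """Assemble the pieces that jointly influence a chatbot's answer."""
    context = "\n\n".join(
        f"[Source: {title}]\n{text}" for title, text in curated_excerpts
    )
    history = "\n".join(f"{role}: {msg}" for role, msg in conversation)
    return (
        f"{instructions}\n\n"
        f"Course materials:\n{context}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"Student: {question}"
    )

# Hypothetical course content: two sources representing different theories.
prompt = build_prompt(
    instructions="Present multiple scholarly perspectives; cite sources.",
    curated_excerpts=[("Smith 2021", "Theory A explains the phenomenon."),
                      ("Patel 2023", "Theory B challenges that account.")],
    conversation=[("Student", "What explains this phenomenon?"),
                  ("Assistant", "Several frameworks apply here.")],
    question="Which theory is correct?",
)
print(prompt)
```

Notice that if the curated excerpts had included only "Smith 2021," the prompt, and hence the answer, would tilt toward Theory A regardless of the instructions.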

Curating for Critical Thinking vs. Reinforcing Bias

A key educational principle is teaching students “how to think, not what to think.” However, some educators may, even inadvertently, lean toward dictating specific viewpoints when curating content. It’s critical to recognize the potential for biases that could influence students’ engagement with the material.

Here are some common biases to be mindful of when curating chatbot content:

  1. Personal Research Bias: Prioritizing topics aligned with the instructor’s research interests and underrepresenting other areas.
  2. Temporal Bias: Emphasizing either well-established theories or recent research, with the potential to overlook complementary or contrasting studies.
  3. Subdiscipline Bias: Over-representing core topics while neglecting interdisciplinary areas.
  4. Theoretical Framework Bias: Leaning heavily on preferred theories, while minimizing alternative frameworks.
  5. Methodological Bias: Preferring specific research methods and neglecting diverse methodological approaches.
  6. Cultural Bias: Reflecting dominant cultural or ideological perspectives.
  7. Ethical Stance Bias: Presenting only one ethical position on controversial issues.
  8. Journal/Publisher Bias: Limiting sources to certain publications or publishers.
  9. Technological Bias: Prioritizing studies aligned with certain technologies or tools.
  10. Institutional Bias: Favoring research from prestigious institutions, potentially disregarding valuable contributions from lesser-known organizations.
  11. Disciplinary Conservatism Bias: Focusing on accepted paradigms and avoiding emerging, controversial research.
  12. Experimental Bias: Preferring studies with positive results over those with null findings.
  13. Open Access Bias: Over-relying on open-access resources.
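
Several of these biases can be surfaced with a simple self-audit of the reading list before upload. The sketch below is a minimal example, assuming you tag each document with metadata labels of your own choosing (the fields `region`, `method`, and `framework` here are hypothetical); it flags any field where one value dominates the collection.

```python
from collections import Counter

def audit_bias(documents, field, threshold=0.6):
    """Flag `field` if a single value exceeds `threshold` of all documents."""
    counts = Counter(doc[field] for doc in documents)
    value, n = counts.most_common(1)[0]
    share = n / len(documents)
    return (field, value, round(share, 2)) if share > threshold else None

# Hypothetical curated reading list with self-assigned metadata tags.
reading_list = [
    {"region": "US", "method": "quantitative", "framework": "Theory A"},
    {"region": "US", "method": "quantitative", "framework": "Theory B"},
    {"region": "US", "method": "qualitative",  "framework": "Theory A"},
    {"region": "EU", "method": "quantitative", "framework": "Theory A"},
]

flags = [f for f in (audit_bias(reading_list, fld)
                     for fld in ("region", "method", "framework")) if f]
print(flags)  # dominant values worth a second look before uploading
```

A flag is not proof of bias, only a cue to ask whether the imbalance is deliberate and defensible.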

While this list isn’t exhaustive, it highlights the complexities of curating content for educational chatbots. It’s important to recognize that adding data shifts — not erases — inherent biases in the LLM’s responses. Few academic disciplines offer a single, undisputed “truth.”

Tips for Ethical and Thoughtful Chatbot Curation

Here are some practical tips to help you create an ethically balanced course chatbot:

  1. Recognize Biases: Be aware that LLMs carry biases from their training data, and additional biases will arise from your content selections. Know that intentional and unintentional biases impact the chatbot’s responses.
  2. Curate Diverse Perspectives: Choose reputable sources that represent a wide range of views. Include materials from international journals or sources with different ideological positions.
  3. Encourage Exploration: Instruct the chatbot to offer multiple perspectives on complex questions, or consider using a multi-agent system for further diversity.
  4. Frame Responses Thoughtfully: Design responses to highlight diverse viewpoints, for example, by structuring answers to show multiple sides of an issue.
  5. Document Your Choices: Keep a clear record of why you selected specific materials. Transparency allows for self-reflection and supports questions from students or colleagues.
  6. Share Your Curation Choices: Make curated documents available to students, providing insights on why these materials were selected and, when applicable, noting any resources you considered but excluded.
  7. Collect Student Feedback: Invite students to report if they feel a viewpoint is underrepresented. Regular feedback can help maintain a balanced approach.
  8. Update Regularly: Re-evaluate your curated materials each semester to stay current and diverse. Incorporate new, peer-reviewed studies to align with ongoing discussions in your field.
  9. Mind Institutional Policies: Ensure the chatbot aligns with academic freedom and institutional policies, especially in publicly funded institutions where viewpoint neutrality may be expected.
  10. Compare Outputs: Regularly assess the supplemented chatbot’s responses compared to the original LLM to identify new biases or shifts in response quality.
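
Tip 10 can start very simply. The sketch below assumes you have saved a baseline answer and the supplemented chatbot's answer to the same question as plain strings (the two responses here are invented placeholders); it computes a rough similarity score so large shifts can be queued for manual review.

```python
import difflib

def response_shift(base_answer, curated_answer):
    """Rough 0-1 similarity between two answers; low values signal a shift."""
    return difflib.SequenceMatcher(None, base_answer, curated_answer).ratio()

# Placeholder responses; in practice, collect these from the base model
# and from your customized chatbot for the same student question.
base = "Several frameworks explain this, and the evidence remains mixed."
curated = "Theory A, as Smith (2021) argues, is the accepted explanation."

score = response_shift(base, curated)
print(f"similarity: {score:.2f}")
```

A low score is not inherently bad — the curation may be working as intended — but a sharp narrowing from "several frameworks" to one named theory is exactly the kind of shift worth inspecting.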

This approach helps prevent a chatbot from merely reflecting a single perspective, instead guiding students toward a broader understanding of the material.

Ethical Obligations

As educators, our ethical obligations extend to ensuring transparency about curated materials and explaining our selection choices. If some documents represent what you consider “ground truth” (e.g., on climate change), it’s still crucial to include alternative views and equip students to evaluate the chatbot’s outputs critically.

Conclusion

Customizing chatbots for educational use is powerful but requires deliberate consideration of potential biases. By curating diverse perspectives, being transparent in choices, and refining chatbot content, instructors can foster critical thinking and more meaningful student engagement.

AI-Driven Chatbots in Education

AI-powered chatbots are interactive tools that can help educational institutions streamline communication and improve the learning experience. They can be used for a variety of purposes, including: 

  • Student support: Chatbots can answer student questions about courses, assignments, and deadlines. They can also help students with disabilities and make the educational system more accessible.
  • Student accessibility: Chatbots bridge a significant gap for students who are more comfortable on a keyboard than in a lecture hall.
  • Administrative tasks: Chatbots can help with tasks like admissions, curriculum updates, and data retrieval. They can also automate routine tasks like grading and attendance tracking.
  • Personalized learning: Chatbots can analyze student data to provide personalized academic recommendations. They can also offer immediate feedback and fine-tune learning strategies based on a student’s pace.
  • Collaborative learning: Chatbots can facilitate group discussions and project-based learning. They can also be programmed to ask questions and enter into dialogue with learners.
  • Reducing cognitive burden: Chatbots can decode complex information and provide clear explanations, making learning less overwhelming.

Some examples of AI chatbots in education include:

  • Khanmigo: An AI tutoring bot that helps students work through math problems and other subjects 
  • Mongoose Harmony: A chatbot and virtual assistant designed for higher education

While AI chatbots can be a strategic move for educational institutions, it’s important to balance innovation with the privacy and security of student data. 
