
The classroom: An untapped testbed for human-centric AI

When it comes to testing AI in the real world, many instinctively look to boardrooms or innovation labs. But it turns out the real proving ground is schools. Classrooms sit at the crossroads of unpredictable human behaviour: diverse needs, varied learning styles, and emotions still in development. This makes them one of the best places to see whether AI works in everyday life or is merely impressive in theory.

In Southeast Asia (SEA), edutech adoption is rising alongside the spread of Generative AI (GenAI), signalling a shift in how teaching and learning are approached. Deloitte finds that SEA ranks second out of nine for GenAI usage, with 9 out of 10 students having tried it.

As SEA works to build strong AI ecosystems, responsible edutech is poised to become a foundation for long-term digital growth. The World Economic Forum finds that technology skills, including AI, are expected to see rapid growth in demand – but at the same time, human skills, such as creative thinking, resilience, flexibility and agility, will remain critical.

Therefore, AI must be taught with thoughtful framing from the start, so students develop the right mix of digital skills and ethical awareness to engage with the technology confidently and safely, while their views on how to use technology, behave online, and judge information are still taking shape.

Where GenAI in education can meet educators

Efforts to incorporate AI use within the classroom are encouraging, with studies showing that thoughtfully integrated, vetted platforms can improve learning outcomes and meaningfully support children’s cognitive development. This reinforces the importance of getting the foundation right and positioning technology as an enhancement to students’ day-to-day efforts, rather than a replacement for critical thinking. Meeting educators where they are is essential to unlocking this potential.

Edutech companies must ask whether their innovation is truly designed to help children learn better, more responsibly, and with greater agency. For instance, platforms may generate polished student work or assess assignments without a teacher’s input. Understandably, educators remain wary of tools that oversell the merits of efficiency and reinforce passive automation rather than active guidance.


Responsible workflow design is a winning differentiator over flashy features. This includes:

  • Multi-layer safety: Monitorable chat logs, built-in detection of inappropriate content that quickly alerts educators so they can intervene, and safeguards against bias
  • Pedagogical alignment: Tools must support “productive struggle,” enabling collaboration with AI, not outsourcing cognition to it
  • Zero ambiguity in data use: Strict prohibitions on training models with student inputs
  • Customisation: Toggles across grade levels, subjects, and accessibility features for students with different learning needs

Building digital citizenship into the learning experience

Edutech’s role in digital citizenship adds another layer of responsibility: shaping how an entire generation learns to use AI ethically. Responsible behaviour should be embedded directly into the user experience — for example, reminders to fact-check claims made by AI, explanations of how certain answers are generated, and warnings against sharing sensitive data. Features that make learning accessible to students with different needs also contribute to healthier AI habits.

Transparency around the technology’s limitations is equally important. These include unreliable plagiarism detectors and inaccessible features, both of which can entrench bias or exclude learners.

How schools can put a human-first approach to AI into practice

Responsible AI deployment in classrooms often starts with choosing tools that enhance teacher-student interaction rather than distance it. Some schools, including Stamford American International School, are approaching AI as an intentional enhancement to learning. This means using it to support and scaffold learning transparently and through safe exploration, while keeping human judgment at its core.

Examples of this in practice could include:

  • Scaffold-first AI use: Tools that guide students through inquiry and problem-solving instead of delivering answers
  • Safety-by-design systems: Transparent chat logs, content flagging, and teacher-intervention checkpoints
  • Embedded AI literacy: Short primers before tool use, plus ongoing reminders to cite AI and avoid sensitive data
  • Co-creation models: Students produce original work, then use AI for enhancement, for example, to visualise portfolios or create artwork for storybooks

These principles provide a blueprint for edutech founders, emphasising that AI should support pedagogy and enhance creativity while preserving the irreplaceable role of the teacher.


Such practices also help students learn to use AI in a responsible, controlled manner. By learning to question outputs, cite AI use, and understand tool limitations within a safe and supervised environment, students develop the foundation for healthy AI habits that will shape how they use the technology well beyond the classroom.

What’s in a successful school pilot?

For startups, the real test of their readiness lies in how well they navigate school environments. Start by engaging schools in sync with their planning cycles – a partnership is more likely to be successful when edutech vendors’ outreach coincides with curriculum planning, so that it can be meaningfully integrated from the start.

Offer modular packages. Schools respond best to providers that allow flexibility: offerings they can tailor to their specific contexts, such as products that fit the region’s learning styles, cultures, and accessibility needs.

Moving into the evaluation stage, prioritise whole-community feedback. Gather opinions from everyone who uses the tool — teachers, students, and parents. Pilots are far more likely to proceed when data practices are kept clear and transparent.

If classrooms are the proving ground for human-centric AI, then edutech companies have an opportunity and an obligation to design with intention. Schools prioritise tools that uphold learning, amplify human judgement, and help students build the digital fluency they will need long after graduation. The future will belong to the products that understand the classroom — not as a market to enter, but as a community to serve.

Building an AI-ready generation without losing what makes us human

The promise of AI in education rests on how well it can strengthen, rather than substitute, the human elements of learning. That means designing tools that support thinking and creativity without taking over the social and interpersonal skills that no technology can replicate. Measures like device-free time, group tasks, and supervised collaboration remain essential, ensuring students continue to fail safely and build empathy, communication, and teamwork even as AI becomes more embedded in the classroom.

If SEA is looking to cultivate a generation ready for an AI-enabled future, the path forward lies in pairing technological progress with an unwavering commitment to people.



Image credit: Canva

