by Martha Scharping
November 11, 2024
The collapse of the AI chatbot Ed underscores the risks of integrating AI into education and the need for safeguards.
Artificial intelligence (AI) continues to make headlines in education, but not always for the right reasons. The recent collapse of the AI chatbot Ed, designed for the Los Angeles Unified School District (LAUSD), highlights both the potential and the dangers of integrating AI into schools. Launched in March 2024 with high expectations, Ed was meant to be a revolutionary "educational friend" for students, offering learning resources, monitoring attendance, and even sensing student emotions to deliver a personalized learning experience. However, by July 2024, AllHere (Boston, MA), the start-up behind Ed, faced financial turmoil and ultimately ceased operations, leaving the district grappling with cybersecurity concerns and unfulfilled promises.
The breakdown of Ed shines a light on the broader risks associated with adopting AI technologies in educational settings. Although AI holds vast potential, it also brings heightened cybersecurity risks as schools rely more on digital platforms to manage sensitive student data. As district tech leaders have pointed out, AI tools are a growing source of vulnerability, opening the door to cyberattacks, data breaches, and misuse of AI capabilities. In fact, according to the Consortium for School Networking's annual State of EdTech District Leadership report, released April 30, 2024, more than 63% of district technology leaders expressed deep concern about AI enabling new forms of cyberattacks. These concerns are compounded when start-ups like AllHere collapse, leaving behind critical infrastructure that lacks proper oversight and protection.
For LAUSD, the collapse of Ed has led to a call for greater transparency and more robust cybersecurity measures. The district has been left scrambling to reassure families and educators that sensitive data remains secure, even as investigations continue into possible breaches. The risks are not isolated to Los Angeles either; this incident serves as a cautionary tale for districts nationwide, emphasizing the need for careful planning, adequate resources, and the right expertise when implementing AI.
The fall of Ed also coincides with an ongoing debate among educators and stakeholders regarding the role of AI in classrooms. While many teachers are beginning to embrace AI tools, there are still significant reservations about their potential impact. An October 2023 Forbes survey of educators revealed that 60% are already using AI to support teaching and learning, but many are concerned about its broader implications, including the possibility of increased academic dishonesty, reduced human interaction, and threats to data privacy.
Teachers are not the only stakeholders with concerns. Parents and educational leaders have also expressed worries about the shift toward AI. In LAUSD's case, some parents openly opposed the idea of investing in AI technologies like Ed, arguing that the district should instead focus on more tangible educational needs, such as smaller class sizes and expanded arts and sports programs. Despite the growing presence of AI in schools, these criticisms highlight an underlying discomfort with the role technology plays in shaping education and the potential for AI to divert resources away from other priorities.
At the same time, there is a recognition that AI can offer meaningful benefits. AI-powered educational games, adaptive learning platforms, and automated feedback systems are increasingly being used to enhance learning experiences, providing personalized instruction at scale. Moreover, AI is seen as a potential way to address pressing challenges in education, from post-pandemic academic recovery to student mental health needs. However, the success of AI tools in education depends on how well they are implemented and whether they align with the core values of educators, parents, and students.
The failure of Ed should serve as a wake-up call for educational leaders, policymakers, and AI developers. It is not an isolated case: several AI-driven educational chatbots and platforms similar to AllHere's have faced challenges or failed outright. Here are a few examples and the reasons these products did not succeed:
• Knewton: Knewton was a well-known adaptive learning platform that used AI to provide personalized education experiences. Initially heralded as revolutionary, it failed to live up to expectations. Educators found it difficult to integrate into existing systems, and the adaptive algorithms were not as effective as anticipated. Knewton was eventually sold to Wiley in 2019, and its technology was incorporated into Wiley's offerings.
• Pearson's AI Tutor: Pearson, a major player in the educational space, attempted to create its own AI tutor solutions. However, it struggled with adoption because of the complexity of its setup, its lack of engagement compared to human tutoring, and skepticism from educators about relying too heavily on AI for teaching.
• IBM Watson Education: IBM introduced Watson into education with high hopes, positioning it as a tool for personalized learning and teacher support. Despite the strong technology behind it, the project failed to gain widespread traction: the product was seen as too complex, too expensive, and not easily scalable in everyday classroom settings. IBM has since scaled back its efforts in education.
• AltSchool: AltSchool combined AI-driven personalized learning with a platform that allowed teachers to create customized learning plans for students. It garnered significant attention and funding but ultimately struggled to scale its model effectively. The focus on data-driven learning and technology over traditional pedagogy didn't resonate with many educators, leading to a pivot away from the original model.
• AI-Powered Virtual Learning Assistants in General: Several other AI-powered learning assistants, particularly during the surge in edtech around 2020, failed because they couldn’t address the core pedagogical needs of students. They were often seen as tools that lacked depth, were not aligned with curriculum standards, or provided responses that weren’t nuanced enough for students’ individual learning needs.
The common reasons for these failures include over-promising capabilities, lack of proper integration with traditional education systems, the high cost of implementation, and educators' preference for more proven, teacher-driven methods over experimental AI-driven tools.
While AI has the power to transform education, its implementation must be approached cautiously and responsibly. Here are some key safeguards to consider when integrating AI into schools:
• Invest in Strong Cybersecurity Protocols: Schools must prioritize cybersecurity to protect sensitive student data. As AI tools become more prevalent, they introduce new vectors for cyberattacks, making robust, up-to-date cybersecurity measures non-negotiable. Districts should work closely with cybersecurity experts to audit and safeguard their systems before implementing AI solutions.
• Transparent Communication and Collaboration: Schools must foster open dialogue with teachers, parents, and students to understand their concerns and needs regarding AI. Transparency about how AI systems work, how data is protected, and the ethical implications of AI usage is crucial to building trust and ensuring AI adoption serves the school community effectively.
• Provide Comprehensive AI Training for Educators: Lack of training is a significant barrier to integrating AI into classrooms. Teachers need to be empowered with the skills to use AI tools responsibly, not only for instruction but also to guide students in ethical AI usage. In the October 2023 Forbes survey, more than 60% of educators called for comprehensive training on AI ethics and usage.
• Start Small and Scale Gradually: Rather than rushing to implement large-scale AI projects, schools should adopt a phased approach, testing AI tools in controlled environments before full deployment. This ensures that potential risks can be identified and addressed early, minimizing disruptions to the broader educational system.
AI offers significant opportunities in the education market, but its successful integration depends on how well we safeguard against its risks. By prioritizing cybersecurity, ethical considerations, and the needs of the school community, we can create an educational future where AI serves as a tool for growth, not a source of disruption.
Where to Learn More
For more information on AI in education and other key trends, read Simba Information’s bi-monthly newsletter Education Market Advisor. Subscribe to our blog using the blue button on the bottom right to easily access more articles like this in the future.
About the blogger: Martha Scharping is the Education Analyst and Writer for Simba Information, the leading authority on strategic intelligence for EdTech companies and other producers of instructional materials for K-12 and higher education.