Key points:
- Experts are going behind the curtain to get a true feel for what AI is all about
Misconception: AI will encourage students to cheat.
Truth: Educators need to reconsider how they assess student work.
By Carl Hooker
One of the biggest misconceptions about AI in education is that it will encourage students to cheat and raise academic integrity concerns. Did students cheat before AI was around? Yes. Could students use generative AI tools like ChatGPT to cheat and cut corners on an assignment? Absolutely. However, there are two major problems with this line of thinking.
The first is an equity concern. Educators find it socially acceptable for a student to hire a tutor to help write a college admission essay. We also accept that, many times, a parent helps build their 4th-grader’s science fair project. In neither instance do we consider it cheating. Yet if a student uses generative AI to help edit a college admission essay or brainstorm a science fair idea, there’s a belief that it is dishonest. Treating human-assisted help as fair but computer-assisted help as unfair creates an equity gap, because paid tutors and hands-on parents are available only to some students, while AI tools are within reach of far more.
The second problem is the belief that AI itself will encourage students to cheat. This is akin to saying that a vape will encourage students to smoke. Taking the vape away still doesn’t address the underlying behavior. The same is true with AI.
Rather than focus on students using technology to cheat, educators should reflect on what they are assessing. Are they truly measuring student learning, or is it a compliance-based assignment or worksheet? Is the “process” being evaluated with the same or greater care than the final “product”? By focusing evaluations on the process of learning instead of the product, educators can not only deter AI-assisted cheating but also better evaluate a student’s understanding of a particular topic.
Misconception: AI will eliminate jobs.
Truth: It will create more jobs, with different requirements.
By David McCool
The misconception is simple: AI will eliminate jobs. In reality, it will create more jobs, with different requirements, than it eliminates. These new jobs will disproportionately require durable skills like critical thinking and collaboration, making it more important than ever for people to learn these skills and, where possible, demonstrate them to employers by earning microcredentials.
As we continue into an AI-driven world, students, employees, and job seekers need to stay agile and competitive in the marketplace, so upskilling is essential. Microcredentialing your durable skills demonstrates your abilities for emerging roles, such as sentiment analysts and content creators, and for other AI-related jobs that require durable skills and cannot be automated. In many industries, AI will simply change the nature of available jobs. For the most part, those transformed jobs will be more engaging than the menial tasks they’re replacing. Manufacturing workers, for example, may be freed from the production line, where they used to watch for defective products all day, to instead spend their time improving processes using insights gained from AI systems.
AI’s role in education has changed, and not everyone understands what it can do. During the pilot of our durable skills course, SkillBuild by Muzzy Lane, we discovered that learners were unaware that AI was guiding them to improve based on their input. They were appreciative when they found out that, in our microcredential courses, AI helps learners by providing the extra assets and feedback they need to perfect their durable skills, pushing them to the top of today’s changing job market.
Misconception: AI is a static tool.
Truth: AI is constantly evolving.
By Wilson Tsu
What I see people getting wrong the most about AI is thinking that what it is now is what it will be in the future. Take OpenAI, for example. Going from ChatGPT, which was released in November 2022, to GPT-4 in March 2023 was a huge leap in capability. When ChatGPT came out and educators really started digging into it, they may have thought, “It’s not going to pass my class like a human would, so I don’t have to worry about this.” Only a few months later, they saw that GPT-4 could pass their class. Now OpenAI has announced that people will be able to create their own GPTs. We don’t know the extent to which that will change things, but it’s a huge step.
My point is that you can’t think in static terms when it comes to AI. It’s changing so fast, and there’s so much investment, so many resources, and so many smart people working on it, that as soon as you think you know what’s going on, it will drastically change. No one can really keep up, except for a small handful of people who work deeply in the field. To me, the biggest truth about AI right now is that as soon as we think we have a grasp on it, it’s going to be different.