Can Artificial Intelligence replace teachers?
Recently, I came across an article about the applications of artificial intelligence (AI) in teacher training programs. The authors were confident that machines could simulate human intelligence in activities involving cognition, reasoning, planning, voice recognition, problem solving, and perception: the same sophisticated capabilities that power the stock market and have made life easier. The article emphasizes using AI for personalized curricula, identifying gaps in understanding, making tailored suggestions, and resolving difficulties by answering questions. Integration projects of this scale would require huge amounts of data to be fed to the algorithm, or would depend on machine learning to gather it.
“Teaching is a profession that creates other professions,” as an anonymous saying goes. First, policy makers should not view teachers as algorithms that operate on a set of instructions. A teacher continually molds themselves into new skins as experience enriches their mind. A teacher’s job is non-repetitive, unstructured, and loosely defined, exactly the opposite of what an algorithm is good at. A + B doesn’t always equal C for a teacher, but for an AI it does. AI research is advancing at a pioneering speed, but scientists are not yet able to explain how electrical signals in the brain translate into thoughts or feelings, and they cannot yet write “emotions” into an algorithm. “Empathy” can probably never be automated by a machine. An AI will therefore respond only to a limited set of situations, even while offering the student countless learning options. Here its advocates miss a crucial point: students’ ability to learn also depends on the teacher’s gestures, the vividness of the concepts conjured in imaginative minds, the connection made through eye contact, and the ability to recognize the learning difficulty a student is experiencing. Integrating all of this into an AI is a difficult task. AI has also not yet entered the realm of humor and understanding sarcasm. Humans are gregarious by nature, and children are even more so.
Humans are afflicted with decision-making flaws and biases, as psychologists Daniel Kahneman, Amos Tversky, Richard Thaler, and others have pointed out. If such biases are embedded in the code behind an AI, imagine the cascading effect, especially if it is a self-learning algorithm and the machines are connected across borders. The software itself would be afflicted by a confirmation bias that snowballs, with devastating consequences. If such an AI is used in teacher training programs, these biases will pass from mind to mind for generations, and if the problem goes unnoticed, the consequences are unimaginable. The quality of teachers trained this way would depend entirely on the humans who wrote the code. Such a loophole cannot be justified.
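The snowball effect described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any real AI system: a model whose output carries a small systematic bias is retrained on that output each round, so the error compounds multiplicatively instead of averaging out. The function name and the 5% bias figure are assumptions chosen purely for illustration.

```python
def retrain_on_own_output(score, bias=0.05, rounds=10):
    """Illustrative feedback loop: each round, the model's biased output
    is fed back in as training data, so the bias compounds."""
    history = [score]
    for _ in range(rounds):
        score = score * (1 + bias)  # biased output becomes the next input
        history.append(score)
    return history

trajectory = retrain_on_own_output(score=1.0, bias=0.05, rounds=10)
print(f"initial: {trajectory[0]:.2f}, after 10 rounds: {trajectory[-1]:.2f}")
```

A 5% per-round bias does not stay at 5%: after ten rounds of self-training the deviation has grown by a factor of roughly 1.63, and it keeps accelerating, which is the snowballing the paragraph warns about.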