When I entered 9th grade, there was this teacher. She sat down with us and basically told us that we needed to learn how to learn. Dropping this on a bunch of 9th graders is a bold move. Not only did we not understand what she was telling us, we laughed it off and went on with our lives.
Later in life, I understood what she had been trying to teach us. She was giving us a peek into academia, where people ask questions and look for answers. And all the while, they learn how to do things: editing genomes, programming computers, imaging a black hole. We learn from doing, not from reading a book someone wrote 100 years ago. Reading is just information intake. It’s not learning.
Now to the core: learning with the help of AI is not really learning. We don’t repeat a task a thousand times and master a skill by asking AI. We gather information and data and think we have learned something. It’s the same as throwing a question at Google, only with more detail in the answer.
But Sebastian, humans have always used tools to think better: books, calculators, maps. Socrates hated writing. Teachers feared calculators. AI may just be the next tool, not the end of learning. And it isn’t the end of learning. But maybe it’s the end of thinking independently? When we lean on a data corpus to enrich our learning, it might be too biased. AI can help us skip the slow parts of learning. But skipping the slow parts might mean we also skip the thinking parts. What about the inner struggle, the sparks of doubt, the discomfort that often leads to deeper understanding? It’s a huge trade-off, I feel.
AI is still a tool. We have to learn how to use it properly; it’s a skill we need to master. OpenAI, Anthropic, and the like need to adapt to this faster if they want to stay connected to the very people they hope to earn money from.