It may be easy to think that AI is rapidly becoming superintelligent, gaining all
the good and evil powers attributed to it in popular culture. This, of course, is not
the case. Current AI systems are severely limited, and there are technical,
social, scientific, and conceptual limits to what they can do. Perhaps surprisingly,
well-established research on human learning provides important tools and concepts that
help us understand the state-of-the-art and future of AI. Many current AI systems use
rather simplified models of learning and biological intelligence, and learning theories thus
help us gain a better understanding of the capabilities of current AI systems.
There will be great economic incentives to use AI to address problems that are currently
perceived as important by educational decision- and policy-makers. This creates policy
challenges. For educational technology vendors, it is easy to sell products that solve
existing problems, but very difficult to sell products that require changes in
institutions, organizations, and current practices. To avoid hard-wiring the past, it would
be important to put AI in the context of the future of learning. Policy may be needed to
orient development in AI towards socially useful directions that address the challenges,
opportunities, and needs of the future. As AI scales up, it can effectively routinize
old institutional structures and practices that may not be relevant for the
future. Future-oriented work, therefore, is needed to understand the potential impact of
AI technologies. How this potential is realized depends on how we understand learning,
teaching and education in the emerging knowledge society and how we implement this
understanding in practice. Future-oriented policy experimentation, as suggested
by the Digital Education Action Plan, may, therefore, be an effective way to