Why AI Literacy Is a Learning Challenge, Not an IT One
AI literacy in enterprise L&D: What tulips can teach us about technology hype
“Tulip mania” was a 17th-century phenomenon in the Netherlands, often cited as one of the earliest speculative bubbles in history. Contract prices for tulip bulbs rose rapidly during a period of economic optimism before the market eventually collapsed. In hindsight, the issue was not the tulips themselves, but the widening gap between enthusiasm and actual understanding of value and use.
Over time, similar patterns have appeared around antique collectables, internet domains, cryptocurrencies, and more recently, artificial intelligence. While the contexts differ, the underlying dynamic remains the same: rapid adoption outpacing clarity around purpose, value, and responsible use.
In a previous article, we highlighted the importance of understanding AI before integrating it into complex enterprise environments. A key takeaway was the need to build AI literacy as a foundation of the business, rather than assuming that technology alone will deliver sustainable results. This remains especially relevant in the context of corporate learning and development (L&D).
The reference to tulip mania is not just a historical comparison, but a useful lens through which we can re-examine the current AI discourse in enterprise L&D. The conversation is no longer about whether AI will be introduced into learning environments, but about whether organisations clearly understand why, how, and to what effect they intend to use it. Addressing this question is a critical first step towards meaningful AI literacy.
A measured step forward: Is there a role for AI in L&D?
To benefit from AI while avoiding short-lived or misaligned initiatives, organisations should resist the temptation to adopt AI simply because it is available. As with any strategic investment, the first question should be whether a genuine need exists.
This consideration applies to enterprise L&D just as it does to any other function. Exploring what AI can realistically support, what alternatives already exist, and what outcomes are expected helps organisations make informed decisions. These reflections also form an essential part of AI literacy, enabling stakeholders to move beyond superficial enthusiasm towards practical understanding.
At the same time, many employees have already found their own answers by adopting personal AI tools to support day-to-day work. While this reflects initiative and curiosity, it also introduces challenges. Uncoordinated AI usage can lead to data protection risks, unreliable outputs, and an inflated sense of confidence in AI-generated results.
From an L&D perspective, the most significant risk may be the gradual development of false competence. Addressing this issue solely through technical controls or IT governance is unlikely to be sufficient, as the underlying challenge is not access to AI, but the ability to use it responsibly and effectively.
“Consider it done”: Where AI initiatives often fall short
When organisations decide to introduce AI, responsibility is frequently assigned to IT departments. This is a logical step, as AI solutions are software-based and raise valid questions around infrastructure, security, and system integration. However, while IT can enable AI, it cannot establish AI literacy across the workforce.
Many AI initiatives falter not because the technology fails, but because they are treated as tool deployments rather than capability transformations. Training is often delivered as a one-off event, focused on functionality rather than application. Once completed, the topic is considered resolved.
As a result, employees often understand what an AI tool does, but not how it should influence their decision-making, judgement, or collaboration. This gap is visible across all organisational levels, including management and executive leadership, and contributes to uncertainty around the real value of AI in L&D.

Several recurring patterns can be observed in unsuccessful AI adoption efforts:
- One-off training sessions that raise awareness but do not build lasting capability
- Tool-focused learning that emphasises interfaces over decision-making and context
- Limited guidance on appropriate usage, leaving employees unsure when AI support is beneficial or risky
The outcome is familiar: AI is technically available but inconsistently applied, misunderstood, or quietly avoided. In other cases, it is widely used without transparency, governance, or learning feedback.
AI literacy, however, cannot be deployed like software. It must be developed over time. From a learning perspective, this requires a different approach:
- Continuous learning to keep pace with evolving AI capabilities
- Role-specific content aligned with real responsibilities and decision contexts
- Reflection and assessment to ensure understanding goes beyond basic usage
- Learning embedded in daily work, enabling immediate application
From unstructured use to responsible enablement
At this stage, the discussion around AI in enterprise L&D should move beyond whether AI will influence everyday work. It already does, though often informally and without clear structure. The more relevant question is whether organisations choose to acknowledge this reality and take responsibility for shaping it.
When AI usage remains unstructured, learning happens by coincidence rather than by design. While trial and error can support individual learning, it is not a model organisations would intentionally endorse at scale. Skills develop unevenly, risks remain hidden, and confidence may grow faster than actual competence.
This is where many AI initiatives lose momentum. Tools are introduced, access is granted, and the topic is informally closed. Yet availability does not guarantee understanding, and access alone does not create accountability. Without a learning framework, organisations lack visibility into how AI is used and what impact it truly has.
AI literacy therefore cannot be addressed purely through technical measures. IT plays a critical role in enabling and securing AI solutions, but it cannot teach judgement, contextual awareness, or responsible decision-making.
Enterprise L&D operates precisely in this space. It connects learning to roles, responsibilities, and real work scenarios. Treating AI literacy as a learning discipline rather than a rollout task changes the outcome significantly.
The lesson of tulip mania was not that tulips lacked value, but that value collapses when adoption accelerates beyond understanding. The same risk applies to AI. Organisations that invest in literacy before acceleration are better positioned to realise meaningful benefits. Those that do not may find themselves equipped with powerful tools, yet still uncertain how to use them effectively.