Josh Kostreva
Artificial Intelligence · November 6, 2025 · 9 min read

The Uncomfortable Truth About AI in Learning and Development

Every vendor in the learning space is selling AI. Most of it is smoke and mirrors. Here's what AI can genuinely transform in L&D, what it can't, and what the industry is getting dangerously wrong.

AI · Learning & Development · EdTech · Machine Learning · Training Technology
Josh Kostreva, Training & Technology Leader

I sat through seven vendor demos last month. Every single one led with AI. "AI-powered learning paths." "AI-generated content." "AI-driven analytics." "AI personalization at scale."

In six of the seven demos, the "AI" was a recommendation engine — the same collaborative filtering technology that Netflix has used for fifteen years, wrapped in a new marketing skin. One of them was literally sorting courses by popularity and calling it "AI curation."
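The gap between the two is easy to show in code. Here is a hypothetical sketch (all names and data invented for illustration): the popularity sort one demo shipped as "AI curation," next to a minimal item-based collaborative filter of the kind Netflix popularized, which at least models overlap in learner behavior.

```python
def popularity_curation(courses):
    """What the demo called "AI curation": sort by enrollment count. No model, no learning."""
    return sorted(courses, key=lambda c: c["enrollments"], reverse=True)

def recommend(learner_history, completions_by_course, top_n=3):
    """Minimal item-based collaborative filtering: score untaken courses by
    Jaccard overlap between their completer sets and those of courses the
    learner has already finished."""
    scores = {}
    for taken in learner_history:
        takers = completions_by_course.get(taken, set())
        for course, completers in completions_by_course.items():
            if course in learner_history:
                continue
            overlap = len(takers & completers)
            union = len(takers | completers) or 1  # avoid division by zero
            scores[course] = scores.get(course, 0.0) + overlap / union
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Neither function is "AI" in any meaningful sense, but only the second one adapts to behavior at all. That is the floor, not the ceiling, of what a vendor demo should clear before using the word.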

The learning and development industry has an AI honesty problem. And it is costing organizations real money, because the hype is so loud that it drowns out the genuine, transformative applications of AI in education.

What AI Actually Does Well in Learning

Let me start with what is real, because the real applications are genuinely powerful — more powerful, in fact, than the hype suggests, once you strip away the marketing language and look at what is actually happening.

Intelligent content creation. This is the application with the most immediate, tangible impact. AI can generate first drafts of training scripts, assessment questions, job aids, and scenario dialogues at a speed and volume that human content teams cannot match. A skilled instructional designer working with AI tools can produce in a day what used to take a week.

But — and this is critical — the quality of AI-generated educational content without human oversight ranges from mediocre to dangerously wrong. AI does not understand pedagogy. It does not know which concepts learners struggle with. It cannot distinguish between information that is technically accurate and information that is practically useful. It generates plausible content, which is not the same as effective content.

The right model is AI as a drafting tool and humans as editors, curators, and quality gatekeepers. The organizations getting value from AI in content creation are the ones that use it to accelerate their existing content teams, not to replace them.

Transcription and search. AI-powered speech-to-text has reached a level of accuracy that makes every video and audio recording in your library fully searchable. A learner can type a question and find the exact moment in a 45-minute video where that topic is discussed. This is not glamorous, but it is genuinely transformative — it turns linear media into a searchable knowledge base.

Combined with AI summarization, this means that a library of recorded webinars, product demos, and training sessions — content that most organizations have but few organize — becomes a structured, searchable educational resource without any manual effort.
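The mechanics are simple once transcription has done the hard part. A sketch, assuming a typical speech-to-text output shape of timestamped segments (the data shape and function name here are invented, not any specific vendor's API):

```python
def search_transcript(segments, query):
    """Return (start_seconds, text) for every segment mentioning the query.
    Each segment is assumed to look like {"start": seconds, "text": str}."""
    q = query.lower()
    return [(s["start"], s["text"]) for s in segments if q in s["text"].lower()]

segments = [
    {"start": 0.0,    "text": "Welcome to the onboarding webinar."},
    {"start": 812.4,  "text": "Now let's configure single sign-on."},
    {"start": 1530.0, "text": "Single sign-on troubleshooting tips."},
]
hits = search_transcript(segments, "single sign-on")
# Each hit points to the exact second in the recording where the topic appears.
```

Real systems add semantic matching on top of this literal lookup, but the payoff is the same: the timestamp is what turns a 45-minute recording into an answer.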

Adaptive assessment. AI can analyze a learner's responses in real time and adjust the difficulty and focus of subsequent questions. Instead of a fixed 20-question quiz, the assessment narrows in on the learner's specific knowledge gaps, testing more deeply on areas of weakness and skipping areas of demonstrated competence. This produces more accurate assessments in less time.
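A minimal sketch of the idea (a simple staircase rule, not a production psychometric model such as item response theory): step difficulty up after a correct answer, down after an incorrect one, so questioning concentrates at the edge of the learner's competence.

```python
def next_difficulty(current, was_correct, lo=1, hi=5):
    """Staircase rule: harder after a correct answer, easier after a miss,
    clamped to the available difficulty range."""
    step = 1 if was_correct else -1
    return max(lo, min(hi, current + step))

def run_assessment(answers, start=3):
    """answers: iterable of booleans (correct/incorrect).
    Returns the difficulty level at which each question was served."""
    d, trace = start, []
    for ok in answers:
        trace.append(d)
        d = next_difficulty(d, ok)
    return trace

trace = run_assessment([True, True, False, True])  # → [3, 4, 5, 4]
```

Even this toy version shows why adaptive assessment is shorter: a learner who keeps answering correctly climbs past the easy questions instead of sitting through all twenty.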

In-product contextual assistance. This is where AI intersects most powerfully with customer education. An AI that understands your product — its features, its configuration options, its common workflows — can provide contextual help to users in real time. Not generic chatbot responses, but guidance that is aware of what the user is currently doing, what they have configured, and what they are likely trying to accomplish.

This application is genuinely new. Previous generations of in-product help were keyword-matching systems that surfaced static articles. AI-powered contextual assistance can understand natural language questions, reason about the user's situation, and provide guidance that feels like talking to a knowledgeable colleague.
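The difference is visible in what gets assembled before the model is ever called. A hypothetical sketch (the context fields and function are illustrative, not any product's actual API): keyword help matches the question against articles; contextual assistance packages what the user is doing into the request itself.

```python
def build_assist_prompt(question, user_context):
    """Assemble an in-product assistance request that carries the user's
    live context, not just their question. All field names are illustrative."""
    return (
        "You are an in-product assistant for our application.\n"
        f"Current screen: {user_context['screen']}\n"
        f"Recent actions: {', '.join(user_context['recent_actions'])}\n"
        f"Relevant configuration: {user_context['config']}\n"
        f"User question: {question}\n"
    )

prompt = build_assist_prompt(
    "Why can't my team see this report?",
    {
        "screen": "Report Sharing Settings",
        "recent_actions": ["created report", "opened sharing dialog"],
        "config": "workspace visibility = private",
    },
)
```

The model's answer can then address the actual situation ("your workspace is set to private") rather than returning a generic article about sharing.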

What AI Cannot Do — and What the Industry Pretends It Can

AI cannot design learning experiences. It can generate content, but it cannot determine what content should be created in the first place. It cannot identify the performance gaps that training should address. It cannot decide whether a problem is best solved through training, process improvement, or tool redesign. It cannot navigate the organizational politics that determine whether a training program succeeds or fails.

These are judgment calls that require understanding of the business context, the learner population, the organizational culture, and the desired outcomes. AI has access to none of this. When vendors claim that AI can "design personalized learning paths," what they usually mean is that AI can sequence content based on metadata tags. That is sorting, not designing.

AI cannot replace human connection in learning. Mentorship, coaching, facilitation, and collaborative learning are fundamentally human activities. The value comes from the relationship — the trust, the shared experience, the ability to read emotional cues and adjust accordingly. AI can supplement these interactions (by preparing a mentor with data about the learner's progress, for example), but it cannot replicate them.

The organizations that are cutting human facilitators and replacing them with AI chatbots are going to discover, painfully, that information delivery and learning facilitation are not the same thing.

AI cannot guarantee accuracy. This is the most dangerous gap. AI language models generate plausible text, not verified text. In a corporate training context, where the content may affect how people do their jobs, inaccurate training content is not just unhelpful — it is a liability risk.

A training module on compliance procedures that contains a subtle AI hallucination could lead to actual compliance violations. A technical training course that confidently describes a configuration process incorrectly could cause system outages. The confidence of AI-generated text makes these errors harder to catch, not easier — because the content reads as authoritative even when it is wrong.

Every piece of AI-generated training content requires human review by someone with subject matter expertise. Organizations that skip this step to save time are accepting a risk they may not fully appreciate.

The Personalization Promise

"AI-powered personalization" is the most oversold concept in learning technology. Let me explain why.

True personalization means adapting the learning experience to the individual — their prior knowledge, their learning style, their goals, their pace, their context. This is what a great human tutor does: observe, assess, adjust, and respond.

What most AI "personalization" actually delivers is segmentation. The system places learners into categories based on their role, their assessment scores, or their usage patterns, and then serves pre-built content paths to each segment. This is useful — it is better than one-size-fits-all — but it is not personalization. It is automated routing.
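Under the hood, segment-level "personalization" usually amounts to something like this sketch (segment names and rules invented for illustration): a handful of conditions routing each learner to one of a few pre-built paths.

```python
def assign_path(learner):
    """Rule-based routing dressed up as personalization: every learner
    lands in one of three pre-built paths. Nothing here adapts to the
    individual beyond the fields these rules inspect."""
    if learner["role"] == "engineer" and learner["score"] < 60:
        return "eng-fundamentals-path"
    if learner["role"] == "engineer":
        return "eng-advanced-path"
    return "general-onboarding-path"
```

Useful, yes. But two engineers with scores of 12 and 59 get an identical experience, which is exactly the distinction between routing and personalization.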

The gap between segmentation and personalization matters because organizations make investment decisions based on the personalization promise. They buy platforms expecting individual-level adaptation and receive segment-level routing. The result works well enough that no one complains, but it does not deliver the transformative learning outcomes that the sales pitch implied.

True AI personalization in learning is coming. The technology to deliver genuinely individualized learning experiences — where the AI understands not just what the learner knows, but how they think, what motivates them, and what pedagogical approach will be most effective for them specifically — is advancing rapidly. But it is not here yet for most practical implementations, and pretending otherwise sets unrealistic expectations.

What to Do Now

If you are a learning leader navigating the AI landscape, here is my honest advice:

Invest in AI for content production. The ROI is immediate and measurable. AI drafting tools, video transcript generation, and automated translation will make your content team dramatically more productive. Just maintain rigorous human review.

Invest in AI for search and discovery. Making your existing content library searchable and surfaceable through AI is high-value, low-risk, and often underappreciated. Most organizations have more valuable educational content than they realize — it is just buried in recordings, documents, and archives that nobody can find.

Be skeptical of AI personalization claims. Ask vendors to demonstrate exactly how their AI adapts to individual learners. If the answer involves metadata tags and content sequencing, you are buying sophisticated automation, not personalization. That is fine — automation has value — but price it accordingly.

Do not cut your human expertise. AI makes skilled instructional designers, facilitators, and subject matter experts more productive. It does not make them unnecessary. The organizations that will build the best learning programs in the AI era are the ones that use AI to amplify human expertise rather than replace it.

The genuine AI revolution in learning is happening. It is just not the revolution the vendors are selling. It is quieter, more specific, and more powerful than the marketing suggests — if you know where to look.
