ICYMI: From Learning to Capability: How Human + AI Systems Actually Change Performance
Blog
February 25, 2026
Klil Nevo
6 min read


Challenge Accepted series

February 24th 2026 | Hosted by The Learning Table & Juno Journey
Moderator: Malvika Jethmalani. Panelists: Shveta Malhan, Rachel Cossar, and Michelle Corday.


Most organizations have never had more learning content, more platforms, and more “AI-enabled” training options. And yet, leaders still feel the same frustration: performance isn’t changing fast enough, and business outcomes aren’t materializing.

Yesterday’s panel tackled the uncomfortable truth behind that frustration: learning is not a capability. Capability is what shows up when the pressure is on — inside real workflows, with real constraints, and real incentives.

This conversation wasn’t about “which tools to buy.” It was about the operating model: how AI and humans work together to actually shift behavior, improve execution, and make capability measurable.

Below are the core discussion threads and takeaways.


The real breakdown: why AI learning doesn’t translate to performance

When organizations say, “We’re investing heavily in AI-powered learning,” the panelists described three predictable failure points:

1) No measurable business outcome → no signal of success

Michelle Corday framed it bluntly: many programs start with vague intentions (“be a better leader”) instead of measurable outcomes (e.g., reduce time-to-hire by 5 days, increase revenue by 10%, improve first-pass quality). Without that clarity, AI may personalize learning — but you still can’t prove impact.

Key takeaway: If you can’t name the measurable outcome, you can’t build the capability (or justify the investment).

2) Learning is treated as an event, not embedded in work

Shveta Malhan pointed to what she called the “imagination gap”: teaching AI concepts without helping people integrate AI into their daily workflows creates theoretical knowledge — not usable capability.

Capability changes when organizations are brave enough to redesign how work gets done, not just add new content on top.

Key takeaway: Transfer of learning happens in applied work, not in content consumption.

3) AI doesn’t magically solve the historic “behavior change” problem

Rachel Cossar reminded everyone: the gap between training and behavior change has existed for decades. AI makes it easier to scale practice and feedback — but it doesn’t remove the need for program design, reinforcement, and accountability.

Key takeaway: AI is an amplifier — not a shortcut.


The missing ingredient: practice (and why most learning stops too early)

Rachel shared one of the strongest metaphors of the session, rooted in her background as a professional ballet dancer:

For one hour of performance, dancers spend thousands of hours rehearsing.

In most organizations, the learning journey ends at “exposure”:

  • Watch the content
  • Pass the quiz
  • Return to work
  • Hope for behavior change

But capability requires a space to practice, iterate, make mistakes, and build confidence — before people have to perform live.

Practical translation for leaders:
If the skill matters (feedback, performance reviews, leadership conversations, customer communication), you need structured practice before the moment of truth.


Psychological safety: why people practice with AI before humans

One of the most actionable insights came from Rachel’s example with a university public-speaking studio:

Students were given access to both an AI practice environment and human mentoring. They requested a specific sequence:

  1. Practice with AI first
  2. Then bring in a human mentor once confidence is higher

Why? People fear judgment. AI can offer “objective-feeling” feedback and repetition without the social risk.

Leadership implication:
AI can become a low-friction rehearsal space that lowers shame, increases reps, and makes coaching conversations more productive.


Capability as infrastructure: skills data + talent marketplace as an operating system

Shveta offered a powerful reframing: capability is not a “learning problem” — it’s a visibility + deployment problem.

If you can’t see capability (who has which skills, at what level, what adjacent skills exist, where risk is forming), you can’t deploy it intelligently. Staffing becomes guesswork and proximity-based (“who knows who”).

She argued for thinking about:

  • skills as dynamic infrastructure (not a static dashboard)
  • talent marketplaces as a system for matching skills to real work (not just mobility)
  • reinforcement and feedback happening in the flow of work, not months later

Case example (very concrete)

Shveta described a professional services organization where:

  • Employees were leaving because they couldn’t see “career point B”
  • Business leaders needed 6 weeks to assemble teams for client pitches

They built an internal talent marketplace where AI parsed skills from LinkedIn/resumes, created profiles employees could edit, and supported skill validation through manager feedback prompts.

Outcome: talent discovery moved from 6 weeks → 3–4 hours.
And HR gained visibility into skills in demand vs. fading skills — enabling smarter, demand-driven development.

Takeaway: capability compounds when you can deploy the right skills into real work faster.


The underestimated “lift”: AI isn’t a tech challenge, it’s a human challenge

Michelle’s change-management message landed hard:

  • AI success depends less on tools and more on process redesign + people adoption
  • Organizations often “layer tools on top of old processes,” losing the expected efficiency gains
  • Adoption improves when you involve the people doing the work from the start — so change feels like it’s happening with them, not to them
  • AI requires cultural permission: experimentation, learning, and failure must be normalized — and reinforced often (not in a one-time email)

Key takeaway: AI transformation is cross-functional by nature; treating it as “HR’s tool” or “IT’s project” guarantees disappointment.


Lightning round: what leaders should do in the next 90 days

The panel converged on a simple, practical playbook:

Rachel: Design the program before deploying the tool

Spend more time than feels comfortable:

  • defining goals
  • matching activities to those goals
  • timing the launch close to real performance moments (e.g., performance reviews)
  • creating a tight window for participation (not open-ended access)

Intentional design drives usage and measurable progress.

Shveta: Start at the workflow level, then redesign end-to-end

Map one workflow and identify:

  • where AI can augment (add capacity/intelligence)
  • where AI can remove headaches (low-judgment, low-value tasks)

Then redesign the whole workflow as if you had “intelligence on tap.”

Start with one workflow. Win there. Expand.

Michelle: Pick one C-suite goal and tie incentives to outcomes

  • Choose one measurable business goal AI can impact
  • Create a cross-functional team
  • Define outcomes
  • Make those outcomes part of performance goals (and ideally compensation incentives)

Nothing accelerates capability like real incentives.


Final synthesis: the panel’s north star

If you take one message from the entire session, it’s this:

Capability is a system.
It’s not content. It’s not a platform. It’s the combination of:

  • measurable outcomes
  • workflows designed for execution
  • practice + feedback loops
  • skills visibility + deployment
  • incentives + leadership clarity
  • psychological safety to experiment and improve

AI can help dramatically, but only when it’s used to reinforce capability inside real work, not decorate learning libraries.