
ICYMI: Change Management in the Era of AI
Challenge Accepted series
January 28, 2026 | Hosted by The Learning Table & Juno Journey
Led by Shlomit Gruman-Navot
A lot of organizations are treating AI like a classic adoption project:
roll out a tool, train people, measure usage, hope for productivity... Shlomit’s point was sharper: If you frame AI as “adoption,” you’ll get shallow usage. If you frame AI as “work redesign,” you’ll get transformation.
Because AI doesn’t just add a new tool to the stack. It touches how work gets done, how decisions get made, and how operating models evolve.
Why AI change feels different (and harder)
Shlomit put language to what many of us feel:
AI is touching the actual work (tasks, workflows, decision flow). Which means it impacts operating models, ways of working, decision-making, accountability, AND leadership judgment.
So the uncomfortable truth is: you can’t “implement AI” without understanding the work.
And understanding the work means breaking it down:
- What tasks exist today
- What should stay human-led
- What should be augmented
- What can be automated end-to-end
Framework #1: The Iceberg Problem
AI transformation has a surface layer… and a below-the-surface layer.
Above the surface (what everyone sees)
- copilots
- chatbots
- coding tools
- “shiny” AI features
Below the surface (where impact actually happens)
- decision-making prep (“back and forth with AI before a decision”)
- coordination of work between teams
- analytics + synthesis
- admin + transactional work being reshaped
- how judgment gets built (or outsourced)
And here’s the paradox she flagged (especially for People / L&D): if we delegate the very tasks that build judgment… what happens to leadership development in the long run?
That’s not a reason to avoid AI. It’s a reason to be intentional about what stays human-led.
Framework #2: Above the Line / At the Line / Below the Line
This was one of the most practical takeaways, because it turns “AI strategy” into a sorting exercise.
1) Above the line = Humans lead
Where we need judgment, ethics, tradeoffs, accountability, ownership, and “I’ve seen this before” muscle memory.
Shlomit flagged a real risk here (especially for younger talent): beautiful output that looks right but has no substance.
2) At the line = Humans in the loop
AI helps you draft, synthesize, research, and iterate, but you’re still steering. Think memos, presentations, analysis, planning docs, and structured thinking.
3) Below the line = Automate
End-to-end tasks AI can do better (with the right context + governance):
- repetitive admin work
- coordination workflows
- “Someone has to answer this again” type questions
- routing / summarization / documentation
Framework #3: “Make AI Boring” (HubSpot case)
This was the crowd favorite because it’s so… unsexy.
And that’s why it works.
Shlomit’s summary of the HubSpot approach:
Step 1: Relief
Use AI to reduce effort and friction.
Step 2: Normalize
Turn it into a habit (the 3Rs):
Reminders → Repetition → Rituals
Step 3: Accountability
Only once it’s a habit can you raise expectations and ownership.
The punchline: AI doesn’t scale through hype. It scales through routines.
Framework #4: Leadership / Lab / Line (LLL)
If you want AI change to stick, you need three layers operating together:
Leadership
It starts from the top. Not in theory, in behavior.
One example Shlomit gave: a CEO openly saying
“this memo was written with ChatGPT—it's not perfect, we’ll iterate.”
That single move does a lot:
- normalizes experimentation
- reduces shame
- signals permission to learn in public
Lab
Champions across functions who test what works, share patterns, and scale locally.
Because people follow people. Not playbooks.
Line
Everyone. Because work redesign happens in day-to-day reality.
And (important point):
do change with people, not to people—because the people closest to the work know the pain points best.
The mindset layer: why change breaks
Resistance isn’t about “people are difficult.”
It’s often about safety and ambiguity:
- Who is accountable if AI gets it wrong?
- Is it safe to experiment?
- What happens when I make mistakes?
One concept Shlomit brought in:
“Pilot mindset” (BetterUp)
People who embrace AI tend to have two things: optimism (the work can get better) and agency (“I’m still in control of how I redesign my work”).
And those are buildable.
Non-tech competencies that matter most
When asked what enables AI transformation beyond the tools, Shlomit went straight to:
- curiosity (always learning)
- trial + error muscle (and reflection)
- psychological safety (permission to try)
- resilience (to stay in it when it’s messy)
Her line that stuck with me:
Mindset is even more important than experience…
except in roles where judgment is the product.
Q&A highlights (what the room actually needed)
“How do I position this as an enterprise transformation to senior leaders?”
Shlomit’s advice:
- Bring data about why AI transformations fail when treated as “tool adoption.”
- Bring cases where it works (and why).
- Start small: one function, one process. Build internal proof, then scale.
“How do we redesign HR’s work specifically?”
Two dimensions:
- HR transforming HR (what work can be automated, augmented, human-led—bias + accountability, especially in hiring)
- HR leading enterprise redesign (capacity freed up to drive broader change)
“What framework should I name when leadership asks for a ‘model’?”
Her answer (very pragmatic):
- Use what works; execution matters more than the label.
- If you need named structures:
  - AI Fluency model (for role-level expectations)
  - LLL (for how change scales)
  - Iceberg + Above/At/Below the line (for work redesign)
The closing takeaway
If you remember one sentence from this session, make it this:
AI change management is no longer about adoption.
It’s about redesigning work, decision-making, and operating models—continuously.
And that’s exactly why HR, L&D, and People leaders should be at the front of it.