Wade Foster’s Post

We've run 13 private AI Leaders Labs. The question that comes up in every single one: "How do we get our people building safely?" I spoke at MIT last week about this same tension. AI adoption is outpacing the governance frameworks designed to manage it. Every company seems to be feeling this and asking these questions. So for the first time ever, we're opening the Leaders Lab to the public. Next Thurs, April 23rd.

Join:
- Brandon Sammut, Zapier's CPO & Chief AI Transformation Officer
- Iain Roberts, Airbnb's CPO & Global Head of Employee Experience
- Hannah Calhoon, Indeed's VP of AI Innovation
- Ankur Bhatt, Rippling's VP of AI

We'll help you go from mandate to results, safely. 90 minutes, totally free. Reserve a seat here before it fills up: https://lnkd.in/gKcYahq7


What stands out to me is the focus on “building safely” as a shared question across organizations 💙 It feels like the most meaningful progress happens when governance and practice evolve together, not in parallel silos!


I signed up and look forward to hearing from these leaders. This aligns with what we are seeing as well! From a Make It Toolkit perspective, "build fast but safely" is a tension between motivation, ability, and risk.

Most teams over-index on:
- Make it Intriguing (AI excitement)
- Make it Empowering (anyone can build)

But under-invest in:
- Make it Easy → clear guardrails, not just policies
- Make it Safe (Aversive) → reduce fear/compliance risk
- Make it Obvious → what "good" looks like
- Make it Goal-Oriented → what progress to make

So teams get stuck between the pull of progress and the anxiety of consequences. Winners won't just add governance; they'll design experiences where the right behavior is the easiest, safest, and most rewarding path.


AI adoption outpacing governance isn't a tech problem — it's an incentive problem. Speed gets rewarded. Caution doesn't. Until something breaks. That's why "building safely" stays a question and never becomes a system. Curious what frameworks are actually sticking vs. the ones that sound good in a lab but die in execution?


Feels like every org is now trying to solve speed vs. safety at the same time, and both sides are pulling in different directions.

True, Wade Foster. Most orgs are still figuring out safe AI adoption while usage is already exploding 👍


We’re past “should we use AI?” Now it’s “how do we not break things while using it?”


Love this! An industry-wide framework for AI deployment helps companies at the forefront of AI stay ready in such a fast-changing landscape. We don't know where AI will be in 5 years, but we CAN create a framework for how we want to approach it!


I really like how you frame the tension between speed and safety in AI adoption 💙 In edtech we’re seeing a similar shift, where enabling experimentation without losing guardrails is becoming the real challenge!


I registered and shared with my team!

This looks really great. Building safely with AI is an important aspect of innovation. Probably one of the most important.


