Great Expectations? Which of these two images better portrays the software delivery life cycle in your company? Dickens’ masterpiece was written long before hyperscalers were a thing, yet the dystopian parody accurately represents how most CTOs deliver technology enablement to the business: a dark world of backlog items, delays, and overworked engineers.

a16z’s thesis is that the next AI unicorns won’t be the ones making AI tools, but the ones delivering AI-enabled services. That opportunity is not just for startups. It is also for CTOs.

Of the CTOs I speak with about AI strategy, every single one says they have a strategy. For most, “strategy” means the team’s use of AI co-pilots, tooling, and things that deliver more lines of code. The conversations here are mostly about barriers: the exhausted political capital in the company, the difficulty of changing some of the most entrenched (and expensive) vendor agreements in the company, and the resistance to change from their own teams.

A select few are moving to AI-enabled services. There is an energy among these leaders, the ones who recognize the rising tide. There is an insatiable hunger in these companies to tear up precedent and redefine the relationship between technology and business.

In the book, Joe Gargery, the humble blacksmith, gets the best line: “Life is made of ever so many partings welded together.” Something to consider for tech leaders navigating painful transitions.
Daniel Haudenschild’s Post
More Relevant Posts
Remember when AI was supposed to automate everything by the 1990s? It didn’t happen. And the reason might surprise you.

In the 1980s, companies poured billions into AI automation—especially expert systems and LISP machines—promising intelligent decision-making at scale. But most failed spectacularly. Why? They overpromised what the tech could actually deliver. These systems lacked robust reasoning architectures. They could follow rules, but couldn’t adapt, learn, or handle ambiguity—the very things real-world problems demand.

Sound familiar? Today’s founders are repeating the same pattern: hyping AI tools that automate surface tasks while ignoring foundational architecture. The result? Short-term wins, long-term collapse.

Here’s what history teaches us:
• Build for reasoning, not just rules — automation without understanding fails under complexity
• Validate assumptions early — pilot in constrained environments before scaling
• Invest in modular design — so your system can evolve as understanding deepens
• Measure real outcomes, not just speed — efficiency ≠ intelligence
• Stay humble about capabilities — overconfidence kills innovation

The 1980s AI winter wasn’t just about funding cuts. It was a reckoning for those who confused hype with capability. Founders today: don’t let your startup become a cautionary tale. Learn from the past. Design for depth, not just demos.

#AI #Founders #Automation #TechHistory #StartupLessons #ArtificialIntelligence #Innovation
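The brittleness of those 1980s expert systems is easy to see in miniature. Here is a toy rule-based “diagnoser” in Python; the rules and labels are entirely invented for illustration and model no real system. It follows its rules perfectly, and it fails completely the moment an input falls outside them.

```python
# Toy 1980s-style "expert system": a fixed rule table with no learning
# and no handling of ambiguity. All rules and labels are illustrative.

RULES = {
    ("cough", "fever"): "flu",
    ("fever", "rash"): "measles",
}

def diagnose(symptoms):
    """Return a diagnosis only when the symptom set exactly matches a rule."""
    key = tuple(sorted(symptoms))
    # The system can follow its rules...
    if key in RULES:
        return RULES[key]
    # ...but anything outside them (a partial match, one extra symptom,
    # any ambiguity) produces no answer at all. It cannot adapt or learn.
    return "unknown"

print(diagnose(["fever", "cough"]))          # exact rule match: "flu"
print(diagnose(["fever", "cough", "ache"]))  # one extra symptom: "unknown"
```

One extra input token and the system collapses to “unknown”: rule-following without a reasoning architecture fails exactly where real-world problems begin.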
-
Last week, a founder came to us with a very familiar problem: caching issues, slow load times, and mounting technical debt for the feature they were building. The culprit? Their team had been heavily supplementing development with AI, treating it as a catch-all solution because they were down a headcount.

We diagnosed the technical debt quickly. What took longer to surface was the talent debt underneath it. When your team doesn’t build, they don’t learn. When they don’t learn, they can’t troubleshoot, iterate, or scale what they’ve shipped.

AI can accelerate your team. But it can’t replace the understanding that comes from doing the work. The goal isn’t to prompt your way to a product. It’s to build the right stack and the right team to sustain it.

#TechLeadership #AIAdoption #EngineeringLeadership #TechFounders #StartupLife #HumanPlusAI
-
Three AI business cases got blocked by CTOs this month. All three had strong business ROI. Here’s what they missed: they optimized for one deployment.

The CTO asked: “What happens when we scale this across 5 teams? 10 use cases?” When the answer was “we’ll rebuild a lot of this again,” the conversation stalled.

CTOs don’t think in projects. They think in systems. If every AI initiative starts from scratch (new pipelines, new context, new integrations), you’re not scaling AI. You’re scaling effort.

The proposals that are getting approved right now show a different curve:
Project one builds the foundation.
Project two reuses context, pipelines, and learnings.
Project three compounds speed, not complexity.

A declining effort curve, not just a cost curve. That only happens when:
• context is retained, not rediscovered
• integrations are reusable, not rewritten
• learnings compound, not reset

At Kaara, we’re seeing up to 85% faster ramp-up on second implementations because teams are not starting from zero again. That’s not just efficiency. That’s how engineering velocity scales without burning out teams or bloating architecture.

If your AI plan requires rebuilding every time, it’s not a platform. It’s a series of experiments. Agree or disagree?

#EnterpriseAI #AIStrategy #ProductionAI #CompoundingBuild
-
Most AI teams are not building a product. They are rebuilding the same product over and over.

Every use case treated as a custom project. Every deployment starting from scratch. Every integration rebuilt from the ground up because nothing from the last initiative carried forward. This is not innovation. It is repetition with a bigger budget.

The organizations that have crossed from pilot to production are not smarter or better funded. They made one different decision: they stopped treating every AI initiative as a one-off project and started building a platform that makes the next deployment faster than the first.

That is what the Composable Stack actually means. Not speed. Not scale. Repeatability.

How many times has your team rebuilt the same integration from scratch?

#EnterpriseAI #Geddin #AppliedAI #ComposableStack
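One way to picture that repeatability in code is a small integration registry; every name below is hypothetical and this is only a sketch of the idea, not any particular platform. Project one pays the cost of building a connector once; later projects look it up instead of rebuilding it, and the registry refuses duplicate rebuilds by design.

```python
# Minimal sketch of a "composable" integration registry: connectors are
# built once, then reused by every subsequent initiative. Names are
# illustrative, not a real product's API.

from typing import Callable, Dict

class IntegrationRegistry:
    def __init__(self) -> None:
        self._connectors: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, connector: Callable[[str], str]) -> None:
        # Refuse silent rebuilds: a second team must reuse, not rewrite.
        if name in self._connectors:
            raise ValueError(f"{name!r} already exists; reuse it instead of rebuilding")
        self._connectors[name] = connector

    def get(self, name: str) -> Callable[[str], str]:
        return self._connectors[name]

registry = IntegrationRegistry()

# Project one builds the foundation...
registry.register("crm", lambda query: f"CRM results for {query!r}")

# ...project two reuses it instead of starting from scratch.
crm = registry.get("crm")
print(crm("open tickets"))
```

The design choice doing the work here is the `ValueError` on re-registration: it turns “did we already build this?” from a tribal-knowledge question into an enforced invariant.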
-
Stop treating AI like a one-off project and start treating it like a platform. The goal of a Composable Stack is simple: Repeatability. If you are rebuilding the same integration for the third time, you are moving backward.
Founder & CEO, Geddin | Applied AI Growth Architect • Enterprise Demand Generation | Former NextEra Energy | Turning Intelligence into Revenue
-
The AI development boom has made building software faster than ever—but speed alone doesn’t guarantee success.

Rocket has launched Rocket 1.0, introducing what it calls the world’s first Vibe Solutioning platform—designed to help teams answer two critical questions often ignored in the AI development cycle:
• What should we build?
• What happens after we launch?

By combining strategic decision-making, product development, and competitive intelligence into one connected platform, Rocket 1.0 enables businesses to move from a business question to a product launch—and continuously track market changes—all in the same workflow.

As AI-driven development evolves, the focus is shifting from just building faster to building smarter with real market insight. How is your team deciding what product to build next in the AI era?

Read the full announcement here: https://lnkd.in/dQVceQpv

#ArtificialIntelligence #AIDevelopment #VibeCoding #ProductDevelopment #NoCode #LowCode #StartupTechnology #TechInnovation #CompetitiveIntelligence #DigitalTransformation
-
In most cases, when organizations talk about GenAI ROI, the focus is on model selection, accuracy, or speed, while the real cost drivers tend to sit quietly in the background. This piece looks at one of those areas, tokenization, and why it often becomes a material cost factor long before leaders realize it, especially as GenAI use cases move from experimentation into sustained production environments.

At Bridgeforce, we see this pattern show up frequently. The technology works, the use case makes sense, but the underlying cost mechanics were never fully modeled or pressure-tested, which leads to confusion when the expected ROI does not materialize the way it was originally pitched.

If you are evaluating or already scaling GenAI initiatives, this is worth a read: https://lnkd.in/eU269Gdy

#GenAI #AIgovernance #AIstrategy #OperationalReality #Bridgeforce
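To see why tokenization turns into a material cost line at production volume, a back-of-the-envelope model is enough. The characters-divided-by-4 rule below is only a common rough heuristic for English text under GPT-style tokenizers, and every price and volume in this sketch is a made-up placeholder, not any vendor’s actual rate.

```python
# Rough token-cost model for a sustained production workload.
# chars/4 is a crude English-text heuristic; prices are placeholders.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def monthly_cost(prompt: str, completion_tokens: int, calls_per_day: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Dollars per 30-day month for one use case at steady volume."""
    in_tok = estimate_tokens(prompt) * calls_per_day * 30
    out_tok = completion_tokens * calls_per_day * 30
    return in_tok / 1000 * price_in_per_1k + out_tok / 1000 * price_out_per_1k

# A 2,000-character prompt (~500 tokens) that looked free in a demo:
prompt = "x" * 2000
cost = monthly_cost(prompt, completion_tokens=300, calls_per_day=10_000,
                    price_in_per_1k=0.01, price_out_per_1k=0.03)
print(f"~${cost:,.0f}/month")  # → ~$4,200/month at these placeholder rates
```

The same prompt that costs fractions of a cent in a pilot quietly compounds into thousands of dollars a month at 10,000 calls a day, which is exactly the gap between the pitched ROI and the invoiced reality.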
-
🗞 Tech News Alert 💥💥

The A.I. frenzy is back. But this wave feels less like experimentation—and more like a land grab. Capital is pouring into models, chips, and “AI-first” product narratives again, yet the real competition is shifting downstream. The winners won’t just have better demos; they’ll control distribution, data rights, and the workflows where AI becomes habitual rather than impressive.

For founders, the strategic question is changing from “What model should we use?” to “What defensible system are we building around it?” That means proprietary data pipelines, tight integration into a specific job-to-be-done, and a clear path to measurable ROI. If your product can be swapped by a prompt and an API key, it’s not a moat—it’s a feature.

For operators and investors, the risk is mistaking momentum for durability. The next correction won’t punish “AI” as a category; it will punish companies that can’t translate inference costs, reliability, and governance into sustainable unit economics and trust.

What’s your current litmus test for separating AI theater from durable advantage?

#AI #Strategy #Startups #ProductManagement #VentureCapital #EnterpriseSoftware #TechTrends

🔗 Original news: https://www.nytimes.com

FOLLOW FOR MORE UPDATES!
-
From PoC to Production: Scaling AI the Right Way

In the last 18 months, I’ve had countless conversations with engineering leaders facing the same challenge. They’ve built impressive LLM-powered prototypes. Internal demos work flawlessly. Stakeholders are excited. But the moment the question shifts to:

👉 “Can this handle 50,000 real users?”

Silence.

The reality? A Proof of Concept (PoC) proves the math. Production proves the engineering. Today, the industry is filled with brilliant AI PoCs that never made it to production. Not because the models failed—but because the systems around them weren’t ready.

Scaling AI isn’t just about better prompts or bigger models. It’s about systems thinking and operational excellence. Here’s what truly matters:
🔹 Reliability over novelty – Can your system handle failures gracefully?
🔹 Scalability by design – Architecture must grow with demand, not break under it
🔹 Observability – You can’t scale what you can’t measure
🔹 Cost control – LLM usage at scale is an engineering problem, not just a budget line
🔹 Security & governance – Especially when dealing with real user data

If you treat AI like a science experiment, it stays in the lab. If you treat it like a production system, it creates real impact. At Icanio, we’re focused on bridging that gap—turning promising AI ideas into scalable, reliable systems that actually serve users at scale.

💡 The future of AI isn’t just about what models can do. It’s about what systems can sustain.

Read the full blog here: https://icanio.com

#AI #LLM #Engineering #Scalability #MLOps #SystemDesign #TechLeadership #ArtificialIntelligence #Startups #Innovation
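The “handle failures gracefully” point is the kind of plumbing that separates a demo from a production system. Below is a minimal retry-with-exponential-backoff sketch around a simulated flaky model endpoint; `call_model`, its failure pattern, and the delay values are all invented for illustration, and in a real system the wrapper would go around your actual client.

```python
# Reliability sketch: retry a transient failure with exponential backoff
# instead of letting one upstream timeout take down the request.

import time

class TransientError(Exception):
    """Stand-in for a retryable upstream failure (timeout, 429, etc.)."""

def with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # retries exhausted: fail loudly, not silently
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Simulated model endpoint that fails twice, then succeeds.
calls = {"n": 0}
def call_model():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("upstream timeout")
    return "ok"

print(with_retries(call_model))  # "ok", after two retried failures
```

In a real deployment the same wrapper is also where observability and cost control attach: count the retries you emit and you have both a reliability metric and an early-warning signal for spend.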
-
"Free cinder blocks don't build high-rises." 🏗️

That's the analogy someone used to describe where we are with AI today. It came up at a networking dinner I attended last week, where a room of founders, VCs, and senior technology leaders was talking about AI and where it's all heading.

Claude, GPT, and Gemini (the "cinder blocks") have never been more accessible, more powerful, or cheaper to use. The barrier to building software, SaaS, and tools has been significantly reduced by AI. So the question every enterprise team is now asking is a fair one: "Should we build software in-house?"

It's a harder question than it first appears, because building a working demo and building a production system are two fundamentally different things. A few questions worth thinking through before deciding:
✔️ Who is responsible for safety checks and edge cases?
✔️ Who maintains the system when something breaks at 2 am?
✔️ Who owns it, technically and commercially, when the team changes?

These are the questions that separate a v1 prototype from something that runs reliably at enterprise scale.

AI is more accessible than ever. That's genuinely exciting. But accessible is just the starting point. What comes after, such as reliability, safety, and long-term ownership, is where the real work lives, and where the real value gets created.

So, building in-house with AI is accessible, but who in your organisation is accountable when it's running your business at 2 am and something breaks?

#AI #EnterpriseAI #Innovation #B2B #SaaS