New piece is out — "The Value of Correlations: Validation and Anomaly Detection" (https://lnkd.in/g_rSeTCq). The first part (https://lnkd.in/gRWG2cEV) of the three-part series explored how orthogonal context gives AI the independent dimensions it needs to build a specific interpretation of a business, not a generic one. This piece explores the other side: how correlated metrics give AI the internal consistency checks to validate that interpretation and surface anomalies. Together, they point to the same truth: the quality of AI analysis depends on the type of context it has access to. The last part of the series, to be published in two weeks, will discuss how to build a system that improves the quality of analysis.
Chandra Narayanan’s Post
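The consistency-check idea can be sketched in a few lines: fit the usual linear relationship between two correlated metrics, then flag points whose residuals break it. This is a toy illustration, not code from the article, and the metric names (sessions, signups) are invented for the example.

```python
import numpy as np

def correlation_anomalies(x, y, z_thresh=3.0):
    """Flag points where two normally correlated metrics break their
    usual linear relationship (large residual from a least-squares fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)       # the "usual" relationship
    residuals = y - (slope * x + intercept)      # deviation from it
    z = (residuals - residuals.mean()) / residuals.std()
    return np.where(np.abs(z) > z_thresh)[0]

# Sessions and signups normally move together; a spike in one metric
# alone is exactly the kind of inconsistency a correlation check catches.
rng = np.random.default_rng(0)
sessions = rng.normal(1000, 50, 60)
signups = 0.1 * sessions + rng.normal(0, 2, 60)
signups[30] += 40                                # signups spike while sessions don't
print(correlation_anomalies(sessions, signups))  # index 30 should be flagged
```

Either metric alone looks plausible here; only the broken correlation between them reveals the anomaly, which is the point of the piece.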
More Relevant Posts
🏆 Top Cited Paper in Issue 3 (2024) of MAKE: "Evaluation Metrics for Generative Models: An Empirical Study" 👥 Authors: Eyal Betzalel, Coby Penso, and Ethan Fetaya. How reliable are the metrics we use to evaluate generative AI models? 🤖📊 This highly cited paper takes a critical look at widely used metrics such as FID and Inception Score, showing that while they correlate with probabilistic measures, they can produce inconsistent rankings when comparing similar models. It introduces a novel evaluation protocol based on synthetic datasets with known likelihoods, offering a more robust and interpretable framework for assessing generative models. 🔍 📖 Read more: https://lnkd.in/gsEEpTFt #GenerativeModels #PerformanceEvaluation #SyntheticData
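The paper's core idea, evaluating on synthetic data whose true distribution is known, can be illustrated with a toy sketch. For 1-D Gaussians the Fréchet distance that FID computes has a closed form, so the sample estimate can be checked against ground truth. This is a deliberate simplification (real FID uses Inception features with full covariance matrices), not the authors' code.

```python
import numpy as np

def fid_gaussian_1d(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two 1-D Gaussians. In one dimension the
    general FID formula reduces to (mu1-mu2)^2 + (sigma1-sigma2)^2."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# With synthetic data whose true distribution is known, the estimated
# score can be compared against ground truth -- the paper's key idea.
rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 100_000)   # "real" samples
fake = rng.normal(0.5, 1.2, 100_000)   # "generated" samples
est = fid_gaussian_1d(real.mean(), real.std(), fake.mean(), fake.std())
true = fid_gaussian_1d(0.0, 1.0, 0.5, 1.2)  # 0.25 + 0.04 = 0.29
print(f"estimated {est:.3f} vs true {true:.3f}")
```

When the true likelihood is known, you can see exactly how much of a metric's value is signal versus estimation noise, which is what makes the synthetic protocol interpretable.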
“Ambient businesses” are coming. This means agents must be able to a) keep long-term context across sessions via persistent memory, b) RELIABLY retrieve (remember) the right context at the right moment, aka “situational awareness,” and c) adapt their behavior (agency) based on experience. You can’t learn what you can’t remember, and you can’t remember what you can’t retrieve. Solving this unlocks ambient AI, proactive AI, and ultimately “ambient businesses.” https://lnkd.in/g_m6SUwX
"Ambient businesses" are coming, and most people are underestimating what it takes to build them. They're thinking about agents as "better chatbots" when the reality is completely different. The three things you listed (persistent memory, situational awareness, adaptive agency) aren't just nice-to-haves. They're the prerequisite infrastructure that most AI stacks are completely missing.

Here's why this matters: right now, everyone is building agents that are essentially stateless APIs with a fancy wrapper. They can hold a conversation within a session, but the moment that session ends? Gone. No memory, no context, no learning. That's not ambient; that's just a fancier way of doing the same thing we've been doing for decades.

The agents that win in the "ambient business" era need to be fundamentally different:

They need memory that persists across sessions: not just storing vectors, but understanding what was useful before, what the user actually cares about, what worked and what didn't.

They need retrieval that works at decision time, not just query time: situational awareness is the hard part. Knowing what the user needs right now based on time, context, patterns, and history. Most RAG systems are built for "find what's relevant," not "find what's right for this moment."

They need agency that adapts from outcomes: not just following prompts, but learning from what happened. Did the user correct the agent? Did the action succeed or fail? That's experience. That's learning. That's what turns a stateless API into something that gets better over time.

And the piece everyone overlooks: trust and UX. For these agents to run businesses with "almost no daily human input," users need to actually trust them. That means signals, not answers; transparency about decisions; human-in-the-loop for high-stakes actions; and calibration so the agent knows when to act versus when to wait.

This is the infrastructure that doesn't exist yet.
And that's the opportunity we're building toward. www.midbrain.ai
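As a deliberately toy sketch of the three ingredients, here is a minimal Python memory that persists to disk, retrieves by a situational score (keyword relevance weighted by recency), and reweights items from outcome feedback. Everything here (the class name, the scoring heuristic) is invented for illustration; a production system would use embeddings and a real store.

```python
import json, math, time
from pathlib import Path

class AgentMemory:
    """Toy sketch: persistence (JSON on disk), situational retrieval
    (relevance x recency x learned weight), and adaptation (outcome
    feedback changes what gets retrieved next time)."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.items = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text):
        self.items.append({"text": text, "t": time.time(), "score": 1.0})
        self.path.write_text(json.dumps(self.items))    # survives the session

    def recall(self, query, k=1, half_life=3600.0):
        q = set(query.lower().split())
        def rank(m):
            overlap = len(q & set(m["text"].lower().split()))      # relevance
            recency = math.exp(-(time.time() - m["t"]) / half_life)
            return overlap * recency * m["score"]                  # situational score
        return sorted(self.items, key=rank, reverse=True)[:k]

    def feedback(self, item, helpful):
        item["score"] *= 1.5 if helpful else 0.5        # learn from outcomes
        self.path.write_text(json.dumps(self.items))
```

Usage would look like `mem = AgentMemory(); mem.remember("customer prefers email follow-ups"); mem.recall("how should I follow up?")`. The point of the sketch is the separation of concerns: storage, moment-appropriate retrieval, and outcome-driven reweighting are three distinct problems, exactly as the post argues.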
There has never been more hype around AI agents in academia. Everyone is talking about them, but it remains unclear what these agents are truly capable of or what constitutes best practice. To address this, I'm organizing a Research with AI Agents Workshop Series, inviting colleagues and friends who have been actively exploring AI agents to share how they use them in their work and research. The first talk will feature my friend Dr. Yanbo Zhang, whose talk is titled "Vibe Research: Framework, Trust, and Collaboration." See abstract and bio below. Time: Sat, March 21, 10am-11am ET. Location: please email me for the Zoom link. More information can be found at: https://lnkd.in/gi2jFWi3
Virtual Vanguard Recap: Jim Lecinski https://ift.tt/P76VhCb Our industry is in a critical phase of AI transformation that’s less about tools and more about intent, sequencing, and discipline. There’s not yet a tried-and-true playbook for AI applications […] via Adweek Feed https://www.adweek.com March 30, 2026 at 07:07PM https://ift.tt/FeurNE3
If you can’t clearly define what success looks like… You probably shouldn’t be investing in AI yet. That might sound blunt—but it’s what we’re seeing over and over in contact centers right now. Teams are jumping into multi-agent AI expecting transformation… Without answering basic questions: 👉 What should the customer experience actually look like? 👉 What metrics should materially improve—and by how much? 👉 What should your agents stop doing vs. start doing? No clear answers = no clear structure. And without structure… AI doesn’t fix your problems — it exposes them. Longer handle times. Inconsistent experiences. Expensive tools that underdeliver. The companies actually winning with AI aren’t ahead on tech. They’re ahead on clarity. They start with the end in mind… define success in concrete terms… then build the structure to support it. Everyone else is just hoping AI figures it out for them. It won’t. 💁♂️ Read the blog: https://lnkd.in/e9w9FUb9 — If you’re already investing in AI and not seeing results, that’s usually the gap. Happy to have a direct conversation about it.
Is your contact center truly ready for AI, or are you just hoping it is? Many organizations want the benefits of multi‑agent AI, but few have the structured foundations needed to actually get there. Structure matters more than we often admit. When things are unstructured, AI struggles. When they’re semi‑structured, it can help, but only to a point. Build a solid, organized foundation, and suddenly multi‑agent AI becomes a powerful, predictable partner in your contact center. If you want AI that works with your contact center (not against it), this one’s worth a read. 💁♂️ Read the blog: https://lnkd.in/e9w9FUb9
The organizations that win the next three years won't be the ones that used the most AI tools. They'll be the ones that connected the intelligence. There's a gap forming between enterprise teams running on AI tools and those running on AI operating models. The first group is faster. The second group is learning, cycle over cycle, every program compounding on the last. The next jump in AI maturity is not adding another tool. It's connecting the intelligence you already have into a coherent operating model. We wrote about this in depth. Take a look: 🔗 https://lnkd.in/ek83mSRR
Yamify is partnering with alx_africa to host an exclusive OpenClaw workshop. This session is designed for anyone looking to move from AI curiosity to practical implementation. Participants will build and deploy an AI agent that can automate real workflows. The focus is simple. Practical application over theory. As AI continues to reshape how work gets done, the ability to design and use AI agents is becoming a core skill. April 17, 2026 5:00 PM Register here: https://alx.ng/openclaw
How do we ensure AI remains accurate when the data it sees at "test time" looks different from its training set due to an underlying latent distribution shift (also called dataset or domain shift)? 🤔 My recent Medium article explores Test-Time Adaptation, a crucial step toward deploying AI in high-stakes environments. The article builds from intuition up to a deep mathematical study of one of the first test-time adaptation papers: Tent: Fully Test-Time Adaptation by Entropy Minimization (Wang et al., ICLR 2021). Feedback is encouraged, and I'd appreciate you taking a look. Thanks. https://lnkd.in/dPsb6-xz
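The core of Tent can be sketched in plain numpy: freeze the classifier and take gradient steps on the entropy of its softmax predictions, updating only a per-feature affine transform (the analogue of the batch-norm scale and shift that Tent adapts). This is a simplified illustration of the paper's idea under those assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def tent_adapt(x, W, steps=50, lr=0.1):
    """Tent-style test-time adaptation: keep the linear classifier W
    frozen and minimize mean prediction entropy over an affine transform
    (gamma, beta) of the features, the analogue of the BN affine
    parameters Tent updates. Uses the analytic entropy gradient
    dH/dz_j = p_j * (-H - log p_j)."""
    n, d = x.shape
    gamma, beta = np.ones(d), np.zeros(d)
    for _ in range(steps):
        z = (gamma * x + beta) @ W                  # logits on the test batch
        p = softmax(z)
        H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)
        dz = p * (-H - np.log(p + 1e-12))           # dH/dz per sample
        g = (dz @ W.T) / n                          # back through the frozen layer
        gamma -= lr * (g * x).sum(axis=0)           # update scale
        beta -= lr * g.sum(axis=0)                  # update shift
    return gamma, beta
```

On a shifted test batch, the entropy of the adapted predictions should drop relative to the frozen model; in the paper, this unsupervised objective correlates with accuracy gains under shift. Note the key property the sketch preserves: no test labels are ever used.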