Customer Experience

Explore top LinkedIn content from expert professionals.

  • View profile for Aakriti Bansal

    Marketing Consultant | Helping Brands Grow Strategically | Author, Gita on the Go (5K+ Happy Readers) | Ex-L’Oréal, Noise | IMT Ghaziabad

    72,187 followers

Ever noticed how dhabas on the Delhi highway have more cars outside than people inside? You slow down, see the crowd, and think, “This one must be good.”

That’s social proof: a psychological shortcut. When we’re unsure what’s best, we assume the choice of many must be right.

Some dhaba owners rent cars to park outside their restaurants. They know you’ll equate a full parking lot with good food. That’s not a coincidence. That’s marketing psychology. Because humans don’t just follow logic. We follow signals. A crowded place signals trust; an empty one signals risk.

You’ll see the same principle everywhere: “Bestseller” tags on e-commerce sites. “100k downloads” on apps. “Trusted by 10,000+ customers” on websites. It’s all modern-day parked cars.

So if you’re building a brand, don’t just tell people you’re good. Let them see that others already believe it. Because in marketing, perception often drives action, and a few parked cars can create a lot of traffic.

  • View profile for Brij kishore Pandey

    AI Architect & Engineer | AI Strategist

    718,933 followers

    Most Retrieval-Augmented Generation (RAG) pipelines today stop at a single task: retrieve, generate, and respond. That model works, but it’s not intelligent. It doesn’t adapt, retain memory, or coordinate reasoning across multiple tools. That’s where Agentic AI RAG changes the game.

    A Smarter Architecture for Adaptive Reasoning

    In a traditional RAG setup, the LLM acts as a passive generator. In an Agentic RAG system, it becomes an active problem-solver, supported by a network of specialized components that collaborate like an intelligent team. Here’s how it works:

    Agent Orchestrator: the decision-maker that interprets user intent and routes requests to the right tools or agents. It’s the core logic layer that turns a static flow into an adaptive system.

    Context Manager: maintains awareness across turns, retaining relevant context and passing it to the LLM. This eliminates “context resets” and improves answer consistency over time.

    Memory Layer: divided into short-term (session-based) and long-term (persistent or vector-based) memory, it allows the system to learn from experience. Every interaction strengthens the model’s knowledge base.

    Knowledge Layer: the foundation. It combines similarity search, embeddings, and multi-granular document segmentation (sentence, paragraph, recursive) for precision retrieval.

    Tool Layer: includes the Search Tool, Vector Store Tool, and Code Interpreter Tool, each acting as a functional agent that executes specialized tasks and returns structured outputs.

    Feedback Loop: every user response feeds insights back into the vector store, creating a continuous learning and improvement cycle.

    Why It Matters

    Agentic RAG transforms an LLM from a passive responder into a cognitive engine capable of reasoning, memory, and self-optimization. This shift isn’t just technical; it’s strategic. It defines how AI systems will evolve inside organizations: from one-off assistants to adaptive agents that understand context, learn continuously, and execute with autonomy.
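    To make the division of labor among these layers concrete, here is a minimal, dependency-free sketch of an orchestrator, context manager, memory layer, and tool layer. All class and function names are my own illustrative choices (no real agent framework is assumed), and the tools are stubs standing in for real search, vector-store, and code-execution backends.

    ```python
    class ContextManager:
        """Retains conversation turns so later queries see earlier context."""
        def __init__(self):
            self.turns = []

        def add(self, role, text):
            self.turns.append((role, text))

        def window(self, n=5):
            return self.turns[-n:]

    class MemoryLayer:
        """Short-term = session list; long-term = a dict standing in for a
        persistent vector store."""
        def __init__(self):
            self.short_term = []
            self.long_term = {}

        def remember(self, key, value):
            self.short_term.append((key, value))
            self.long_term[key] = value

    # Tool layer: each stub returns a structured-looking string in place of
    # a real search API, vector store, or code interpreter.
    def search_tool(query):
        return f"search results for: {query}"

    def vector_store_tool(query):
        return f"retrieved chunks for: {query}"

    def code_interpreter_tool(query):
        return f"executed code for: {query}"

    class AgentOrchestrator:
        """Interprets intent (here: crude keyword rules, where a real system
        would use an LLM) and routes the request to the right tool."""
        def __init__(self, context, memory):
            self.context = context
            self.memory = memory
            self.tools = {
                "search": search_tool,
                "retrieve": vector_store_tool,
                "code": code_interpreter_tool,
            }

        def route(self, query):
            q = query.lower()
            for intent, tool in self.tools.items():
                if intent in q:
                    return intent, tool
            return "retrieve", vector_store_tool  # default: plain RAG retrieval

        def handle(self, query):
            self.context.add("user", query)
            intent, tool = self.route(query)
            result = tool(query)
            self.context.add("assistant", result)
            self.memory.remember(query, result)  # feedback loop into memory
            return intent, result
    ```

    The point of the sketch is the shape, not the routing rule: swapping the keyword match for an LLM call turns the same structure into the adaptive system the post describes.
    
    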

  • View profile for Lauren Stiebing

    Founder & CEO at LS International | Helping FMCG Companies Hire Elite CEOs, CCOs and CMOs | Executive Search | HeadHunter | Recruitment Specialist | C-Suite Recruitment

    57,853 followers

    Over the last year, nearly every FMCG executive I’ve spoken to, whether sitting in Chicago, Paris, or São Paulo, has echoed the same challenge: “We need to get closer to the consumer, faster.”

    Global brand, local nuance: the future of FMCG growth depends on how well your leadership understands the street, not just the spreadsheet. It’s no longer enough to run a global playbook and hope for local resonance.

    Why? Because the center of gravity in FMCG has shifted. 84% of FMCG companies are now increasing local decision autonomy in key growth markets. (Bain FMCG Operating Model Report, 2023)

    → That means your CMO can’t be the only one with a finger on the pulse.
    → Your regional GM can’t just execute HQ strategy.
    → And your global leaders can’t lead with assumptions; they need cultural fluency and operational humility.

    In other words: local-for-local is not just a supply chain shift. It’s a leadership shift.

    The most successful candidates weren’t those who had rotated through five global hubs. They were the ones who could…

    → Read the cultural nuances of consumer behavior in that specific region
    → Navigate the regulatory quirks that could derail a product launch
    → Influence global teams while building trust with local retailers
    → Speak the language, literally and commercially

    They understood the street, not just the spreadsheet. And they had the rare ability to connect what’s happening on the ground with what needs to shift at the center. These are the leaders FMCG needs now.

    → Strategists who don’t just adapt to the market; they anticipate it.
    → Operators who don’t wait for HQ; they build and test in-market.
    → Connectors who know when to push back and when to align.

    Because in today’s world, speed and relevance win. And that doesn’t come from waiting for global sign-off. It comes from empowering the right local leaders.

    Here’s where I see many companies trip up: they treat “local” as junior. As operational. As reactive.

    The truth? Your next competitive edge may be a GM in Manila, a Marketing Director in Lagos, or a Commercial Lead in Warsaw who’s trusted enough to build strategy from the ground up.

    That’s what global FMCG companies are starting to understand, and what we’re helping them solve for in every executive search we run. Not just global leaders who can work across regions… but local leaders who can lead across functions, cultures, and expectations while driving growth with urgency and empathy.

    This is the new face of global FMCG. Not centralized, but coordinated. Not rigid, but responsive. Not top-down, but built from the middle out.

    #ExecutiveSearch #FMCGLeadership #GlobalGrowth #ConsumerGoods #TalentStrategy #LeadershipHiring

  • View profile for Nick Mehta

    Board Member: Gainsight, F5 (NASDAQ: FFIV), Pubmatic (NASDAQ: PUBM), Larridin

    105,352 followers

    “We hired you 3 months ago? Why has our churn not dropped yet?”

    That’s a real quote that a CCO I know recently heard from their CEO. Too often, I witness the following 4-act play:

    Act 1: “We have a big churn problem”
    Act 2: “Let’s hire a Chief Customer Officer”
    Act 3: “Why is churn still high?”
    Act 4: “We didn’t really need a Chief Customer Officer anyway”

    Putting aside the title, the issue is that Chief Customer Officers OWN operations and INFLUENCE the rest of the company. If I had to list the levers for reducing churn across the companies I’ve experienced, they’d go in descending order:

    * Product-market fit
    * Balance of desire for growth with aligning to the Ideal Customer Profile
    * Product stickiness
    * Competitive dynamics
    * Pricing
    * Product functionality and quality
    * Post-sales operations

    “Wait - did Nick say that post-sales operations don’t matter?” Of course not. All I’m saying is that rethinking onboarding, hiring #CustomerSuccess Managers, streamlining support, etc. can only get you so far. Putting numbers on it…

    - If your Gross Revenue Retention (GRR) is < 80%, I’ve found that strong Chief Customer Officers can reduce churn by 3-5 points, since there is a lot of low-hanging fruit.
    - If your GRR is between 80 and 90%, it’s probably closer to a 1-2 point reduction potential.
    - If your GRR is above 90%, a 1-point churn drop is massive.

    What about the rest? The biggest churn drops come from things like the below, which CCOs can identify and then partner with colleagues to implement:

    * “Customers that use feature [X] have 10 points lower churn” => Product: Make feature X easier to deploy
    * “Clients that buy from us that use [integrated system Y] churn at a high rate” => Marketing: Avoid outbound efforts to the [Y] audience
    * “Our pricing model is causing churn because it becomes unaffordable at high volumes” => Product Marketing: Rethink the high end of the pricing curve

    The CCO role isn’t just about being a detective and solving churn on your own. It’s also about being a searchlight, shining visibility onto how the rest of the company can reduce churn.
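    The GRR bands above are a rule of thumb, but they are easy to state as code. This is a sketch of my own framing of those numbers (the function name and band edges are illustrative, not anyone’s published model); it returns the rough churn-reduction range, in points, that a strong CCO might deliver.

    ```python
    def cco_reduction_potential(grr_pct):
        """Rough churn-reduction potential (in points) by current GRR band,
        per the rule of thumb in the post above."""
        if grr_pct < 80:
            return (3, 5)   # lots of low-hanging fruit
        if grr_pct <= 90:
            return (1, 2)   # moderate room to improve
        return (0, 1)       # above 90%, even a 1-point drop is massive
    ```
    
    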

  • View profile for Panagiotis Kriaris

    FinTech | Payments | Banking | Innovation | Leadership

    158,191 followers

    The digital bank is an outdated concept, fast being replaced by the intelligent bank. The only question is how soon banks can manage the transition. Let’s take a look. I have broken down the main elements that make up the transition to the intelligent bank:

    1. From transactional to predictive banking: digital banking enabled 24/7 self-service, but intelligent banking takes it further by predicting customer needs. AI-driven models analyse real-time data to offer personalised financial insights, proactive credit offerings, and automated investment recommendations.

    2. AI-powered risk & fraud management: traditional risk assessment relied heavily on historical data. Intelligent banks use AI and machine learning to detect fraud in real time, identify suspicious patterns, and prevent threats before they occur.

    3. Hyper-personalisation: instead of generic offers, intelligent banks use AI to tailor financial products to individual customers (mass personalisation).

    4. Seamless omni-channel experience: customers no longer interact with banks through a single channel. Intelligent banking ensures that a user can start a transaction on a mobile app, continue it via a chatbot, and complete it with a human advisor, all while maintaining a seamless, connected experience.

    5. Autonomous banking operations: intelligent banks optimise back-office processes using cloud and AI automation, reducing human errors and significantly improving efficiency. Functions such as loan approvals, compliance checks, and reconciliation are increasingly self-regulated by AI-driven workflows.

    Banks are in a race against time. They not only need to move from digital to intelligent but also to do it fast. In doing so, technology is the biggest dependency. One of the most interesting approaches I have seen on how to best support banks in this transition is Huawei's 4-Zero model, which is based on 4 main pillars:

    1. Zero Downtime → Instant Readiness: AI-powered predictive maintenance and cloud resilience ensure 24/7 availability, allowing banks to deploy and scale AI solutions without service disruptions.

    2. Zero Wait → Faster Customer Experiences: AI-driven real-time processing eliminates delays in transactions, approvals, and customer interactions, making banking services ultra-responsive.

    3. Zero Touch → Reduced Operational Burden: end-to-end automation using AI and machine learning removes manual intervention in processes like KYC, loan approvals, and compliance, freeing up resources for AI innovation.

    4. Zero Trust → Seamless AI Integration: AI-driven security frameworks continuously validate access, ensuring trust and compliance while enabling banks to integrate AI-powered services without increasing risk.

    The era of intelligent banking isn’t a distant future - it’s happening now. Banks will not be able to transform in months, but getting a head start can make a difference.

    Opinions and graphics: Panagiotis Kriaris

    #HuaweiMWC #RAAS #IntelligentFinance

  • View profile for Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,459,084 followers

    The Voice Stack is improving rapidly. Systems that interact with users via speaking and listening will drive many new applications. Over the past year, I’ve been working closely with DeepLearning.AI, AI Fund, and several collaborators on voice-based applications, and I will share best practices I’ve learned in this and future posts.

    Foundation models that are trained to directly input, and often also directly generate, audio have contributed to this growth, but they are only part of the story. OpenAI’s Realtime API makes it easy for developers to write prompts to develop systems that deliver voice-in, voice-out experiences. This is great for building quick-and-dirty prototypes, and it also works well for low-stakes conversations where making an occasional mistake is okay. I encourage you to try it!

    However, compared to text-based generation, it is still hard to control the output of voice-in, voice-out models. In contrast to directly generating audio, when we use an LLM to generate text, we have many tools for building guardrails, and we can double-check the output before showing it to users. We can also use sophisticated agentic reasoning workflows to compute high-quality outputs. Before a customer-service agent shows a user the message, “Sure, I’m happy to issue a refund,” we can make sure that (i) issuing the refund is consistent with our business policy and (ii) we will call the API to issue the refund (and not just promise a refund without issuing it). In contrast, the tools to prevent a voice-in, voice-out model from making such mistakes are much less mature. In my experience, the reasoning capability of voice models also seems inferior to that of text-based models, and they give less sophisticated answers. (Perhaps this is because voice responses have to be more brief, leaving less room for chain-of-thought reasoning to get to a more thoughtful answer.)

    When building applications where I need more control over the output, I use agentic workflows to reason at length about the user’s input. In voice applications, this means I end up using a pipeline that includes speech-to-text (STT) to transcribe the user’s words, then processes the text using one or more LLM calls, and finally returns an audio response to the user via text-to-speech (TTS). Because the reasoning is done in text, this allows for more accurate responses. However, this process introduces latency, and users of voice applications are very sensitive to latency.

    When DeepLearning.AI worked with RealAvatar (an AI Fund portfolio company led by Jeff Daniel) to build an avatar of me, we found that getting TTS to generate a voice that sounded like me was not very hard, but getting it to respond to questions using words similar to those I would choose was. Even after much tuning, it remains a work in progress. You can play with it at https://lnkd.in/gcZ66yGM

    [At length limit. Full text, including latency reduction technique: https://lnkd.in/gjzjiVwx ]
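    The STT → text LLM → TTS pipeline described above can be sketched as three composable stages, with the guardrail check done in text before anything is voiced. All three stage functions here are stubs of my own invention: a real system would call an STT service, an LLM API, and a TTS engine, none of which is assumed here.

    ```python
    def speech_to_text(audio):
        # Stub: pretend the audio has already been transcribed.
        # A real system would call an STT service here.
        return audio["transcript"]

    def guarded_llm_reply(text, policy_allows_refund):
        # Reason in text, where the output can be checked against business
        # policy before it is ever spoken (the guardrail the post describes).
        if "refund" in text.lower() and not policy_allows_refund:
            return ("I'm not able to issue that refund directly, "
                    "but I can escalate this for review.")
        return f"Here is my answer to: {text}"

    def text_to_speech(text):
        # Stub: a real TTS call would return audio; we tag the text instead.
        return {"audio_for": text}

    def voice_pipeline(audio, policy_allows_refund=False):
        transcript = speech_to_text(audio)
        reply = guarded_llm_reply(transcript, policy_allows_refund)
        return text_to_speech(reply)
    ```

    The trade-off the post names shows up directly in this structure: each stage adds latency, which is why voice-native models remain attractive despite being harder to control.
    
    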

  • View profile for Shewali Tiwari

    marketer under metamorphosis: creative. content-led. writer.

    22,980 followers

    So, here’s a quick story about how I managed to take our app ratings at Airtel from a 3.2 to a solid 4.3 in just 30 days.

    I was on a call with our account executive at MoEngage, where we were discussing the RFM model. If you’re not familiar, RFM stands for Recency, Frequency, Monetary value. It’s basically a way to understand customer behavior based on how recently users have been active, how often they use the app, and whether they’ve made any purchases.

    After the call, I started thinking: how can we use this data beyond just targeting users for offers or notifications? And then it clicked: we could use this to improve our app ratings.

    Here’s what I did next: instead of showing the app rating prompt to everyone (which was clearly not working), I decided to get more specific. I created a segment of users who were really engaged: people who were listening to music for at least 20-30 minutes a day and opening the app 5-6 times daily. These were our power users, the ones who were already loving the app.

    But I didn’t stop there. I made sure the rating prompt would only pop up after an “aha moment,” like after they listened to five songs or changed their hello tune. I wanted to catch them at a high point, when they were already feeling good about their experience. Plus, we capped the prompt to show up at most once a week, so we weren’t bombarding them.

    And guess what? It worked! By focusing on the users who were most likely to give us positive feedback, we managed to take our ratings from 3.2 to 4.3 in just a month. It was all about understanding who to ask, when to ask, and how to make that moment feel seamless.
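    The gating logic in the story above (engaged segment + “aha moment” + weekly cap) can be sketched as two small predicates. The thresholds and field names here are illustrative guesses based on the post, not Airtel’s or MoEngage’s actual configuration.

    ```python
    from datetime import date, timedelta

    def is_power_user(user, today):
        """Engaged segment: ~20+ min of daily listening, 5+ daily opens,
        and recent activity."""
        return (
            user["minutes_per_day"] >= 20
            and user["opens_per_day"] >= 5
            and (today - user["last_active"]) <= timedelta(days=1)
        )

    def should_show_rating_prompt(user, today):
        """Only prompt power users, right after an 'aha moment', and at
        most once per week."""
        had_aha = user["songs_this_session"] >= 5 or user["changed_hello_tune"]
        last = user.get("last_prompted")
        cooled_down = last is None or (today - last) >= timedelta(days=7)
        return is_power_user(user, today) and had_aha and cooled_down
    ```

    The same three-part structure (who to ask, when to ask, how often) transfers to most in-app prompts, not just ratings.
    
    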

  • View profile for Vitaly Friedman

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    225,112 followers

    🧑🏼 How To Design Better Personas In UX (https://lnkd.in/eGPXmPNZ), a step-by-step guide to reduce decoration and add meaningful data to make personas more helpful and effective. Neatly put together by Slava Shestopalov.

    ✅ We need to know who users are and what they need to do.
    ✅ We can use both personas and Jobs-to-Be-Done for that.
    🤔 They serve different purposes and focus on different things.
    ✅ Jobs-to-Be-Done focuses on user needs and outcomes.
    ✅ Personas focus on users, their behavior and mental model.
    ✅ Useful personas emerge from profound user research.
    ✅ They help visualize users, their goals and motivation.
    🚫 Don’t focus on demographics, to avoid stereotypes.
    ✅ Include the way of thinking, background, “a day in the life”.
    ✅ Always add at least one persona with a disability.
    ✅ Add a story, pain points and how they use your product.
    ✅ List users’ habits/products they use daily, often and rarely.
    ✅ Finally, add needs, wants and fears mentioned by users.
    ✅ Then, prioritize key points for each role in your team.

    We often speak about personas being an outdated tool, successfully replaced by Jobs-to-Be-Done. Yet in practice they are often compatible. Both move the focus to user needs, yet they shed light on users from different perspectives. Knowing how users think, behave and feel is as important as what they do.

    As Page Laubheimer noted, personas help remove a box-checking mentality. They tell a story of the customer: what their environment is, what their habits are, the tools they use daily, and they give product teams a way to think about users in a much more approachable and tangible way.

    Ultimately, use what works for you and for your team: just make sure that the user details aren’t invented, but rooted in actual research with actual customers.

    Useful resources:

    Personas vs. Jobs-to-Be-Done, by Page Laubheimer: https://lnkd.in/eHA2Ft4J
    A Guide To Building Personas For UX, by Maze: https://lnkd.in/ehCzACZW
    Personas for UX, Product, and Design Teams, by UserInterviews: https://lnkd.in/eeE3pVUK
    A Simple Guide To Personas, by Rikke Friis Dam and Yu Siang Teo: https://lnkd.in/eRA52v5m
    Five-Steps Framework for Building Better Personas, by Nikki Anderson, MA: https://lnkd.in/eGWpqkdz
    Fixing User Personas, by Jordan Bowman: https://lnkd.in/eDPCr63Q
    Personas Make Users Memorable, by Aurora Harley: https://lnkd.in/eh-PYMxc
    A Closer Look At Personas (A Series), by Mo Goltz: https://lnkd.in/eGqbr9wy https://lnkd.in/eBDsSsaR

    #ux #design #research

  • View profile for Steve Bartel

    Founder & CEO of Gem ($150M Accel, Greylock, ICONIQ, Sapphire, Meritech, YC) | Author of startuphiring101.com

    33,730 followers

    We analyzed 4 million recruiting emails sent through Gem. Most get opened. But only 22.6% get replies. Half those replies are "thanks, but no thanks." We dug into what actually works. Here are 8 factors that drive REAL responses:

    1. Strategic timing beats everything else
    - 8am gets 68% open rates. 4pm hits 67.3%. 10am lands at 67%
    - Most recruiters blast at 9am when inboxes are flooded
    - Avoiding peak times alone can boost your opens by 7-10%

    2. Weekend outreach is criminally underused
    - Saturday/Sunday emails get ≥66% open rates consistently
    - Why? Empty inboxes. Zero competition. Candidates actually have time
    - Yet few recruiters send on weekends. Their loss is your gain

    3. Keep messages between 101-150 words
    - Shorter feels spammy. Longer gets skimmed
    - You need about 10 sentences to nail the essentials
    - Every word beyond 150 drops performance

    4. Generic templates kill response rates
    - Generic templates: 22% reply rate
    - Personalized outreach: 47% higher response rate
    - Even adding name + company to subject lines boosts opens by 5%

    5. Subject lines need 3-9 words
    - Include company name + job title for highest opens
    - "Senior Engineer Role at [Company]" beats clever wordplay
    - 11+ words can work if genuinely intriguing, but why risk it?

    6. The 4-stage sequence is optimal
    - One-off emails are dead. Send exactly 4 follow-up messages
    - You'll see 68% higher "interested" rates with proper sequencing
    - After stage 4, engagement completely flatlines. Stop there

    7. Get the hiring manager involved
    - Having the hiring manager send ONE follow-up boosts reply rates by 50%+
    - Yet most recruiters don't use this tactic

    8. Role-specific timing is a cheat code
    - Role-specific timing (tech vs non-tech) matters
    - Technical roles: 3 of 4 best send times are weekends
    - Engineers check email differently than salespeople. Adjust accordingly

    TAKEAWAY: These aren't opinions. This is what 4 million emails tell us. Most recruiting teams are stuck in 2019 playbooks, wondering why their reply rates won't budge. Meanwhile, recruiters who implement these 8 factors see dramatically better results.

    The data is right there. The patterns are clear. The only question is: will you actually change how you operate? Or will you keep sending the same tired emails at 9am on Tuesday? Your call.

  • View profile for Lenny Rachitsky

    Deeply researched no-nonsense product, growth, and career advice

    357,907 followers

    My biggest takeaways from Jason Cohen:

    1. “Too expensive” is never the real reason customers cancel. They already saw your pricing and decided to buy, so something else changed. When customers cite price, dig deeper; the actual reason might be changing needs, integration issues, or feature gaps.

    2. Ask “What made you cancel?” instead of “Why did you cancel?” Jason tested both phrasings and saw response quality double with the “what made you” version. It directs attention to the product or situation, while “why” invites one-word deflections like “budget.”

    3. Most companies undercharge because they just guessed at pricing and never revisited it. One founder selling to enterprise charged $300 per year, and Jason advised switching to $300 per month. Signups stayed exactly the same. When you 12x your price and conversion doesn’t budge, you’re not even close to finding the right number yet.

    4. Pricing selects your market more than it signals value. When your product costs too little, larger companies assume it can’t be serious: not mature enough, no governance policies, inadequate support. Raise prices and you don’t necessarily lose customers; you enter a different market segment where your price signals credibility. The founder who went from $300 per year to $300 per month and saw no change in signups? He just shifted who was buying.

    5. Your churn rate sets a ceiling on your business that most founders underestimate. The math is simple: divide your monthly new customers by your monthly cancellation rate, and that’s the maximum number of customers you will ever have. This is why logo churn is the first metric to examine when growth stalls.

    6. Onboarding is the highest-leverage lever for reducing churn. Small improvements in the first 30 days compound into retention gains over the customer lifetime. When you don’t know where to start on retention, start with onboarding.

    7. Positioning can allow you to charge an order of magnitude more without changing your product. The same exact product can command higher prices depending on how you frame it. “Cut your ad costs in half” caps what customers will pay; they’ll only give you a fraction of the savings you drive for them. “Double your leads,” by contrast, aligns with what executives actually want and opens a much higher budget. CEOs reward growth; they merely acknowledge cost savings.

    8. Sometimes the right answer is accepting that not growing is OK. If you’ve optimized churn, pricing, NRR, and channels, maybe growth has natural limits. Bootstrapped founders reach a point where healthy annual dividends make further scaling optional. The question “Do you need to grow?” deserves honest examination, not because stasis is fine but because growth at all costs can destroy what made the company good.
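    The ceiling arithmetic in point 5 is worth seeing worked out. If you add N new customers each month and lose a fraction c of your base every month, the base stops growing where N = c × base, i.e. at N / c customers. A one-line sketch (function name is mine):

    ```python
    def customer_ceiling(new_per_month, monthly_churn_rate):
        """Steady-state customer count where monthly additions equal
        monthly cancellations: N / c."""
        return new_per_month / monthly_churn_rate
    ```

    For example, 100 new customers a month at 2% monthly logo churn caps the business at 5,000 customers, no matter how long you keep going.
    
    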
