Trust in closed-source systems is declining


Summary

Trust in closed-source systems, technologies whose inner workings are kept secret and unavailable for public inspection, is steadily declining even as people rely on these tools more. The shift is driven by concerns about hidden flaws, biased training data, and a lack of accountability, especially as closed-source AI and cloud services take on larger roles in critical decisions.

  • Prioritize transparency: Whenever possible, choose open systems or demand clear explanations about how closed-source technologies work and make decisions.
  • Encourage user feedback: Invite users to share concerns and experiences, so issues can be identified and addressed more quickly, preventing silent dependency on questionable systems.
  • Promote independent review: Support third-party audits or collaborative oversight for closed systems, ensuring trust is built through external validation rather than blind reliance.
  • Stuart Winter-Tear

    Author of UNHYPED | AI as Capital Discipline | Advisor on what to fund, test, scale, or stop

    53,457 followers

    We’ve entered a strange, paradoxical post-hype realism stage for AI. People are trusting AI less, but using it more, and validating it less. These were the most important signals I took from the KPMG 2025 AI Trust Landscape report, especially in advanced economies.

    To me, this isn’t just statistical noise. It’s a deep signal of cultural and cognitive dissonance. AI is taking root in daily workflows, decisions, and mental models, not because people trust it, but despite the fact that they increasingly don’t. In other words, AI is becoming infrastructural even as people detach from belief in its reliability.

    That’s not how we usually adopt technology. Most tech earns trust slowly, through consistent benefit. AI seems to be reversing that logic: it’s being adopted faster than it’s being understood, validated, or believed. And if we’re building business processes, decisions, and institutions on top of systems people don’t trust and don’t verify, that’s not a maturity curve; that’s a fragility spiral.

    It’s more like we’re using AI compulsively, ritualistically, even cynically, like a slot machine that mostly pays out. It’s no longer about accuracy; it’s about the perceived cost of not using it. This is creating a strange new class of dependency: people are outsourcing critical thinking to systems they increasingly don’t trust, not because they’re confident in the outputs, but because they’re afraid of falling behind if they don’t.

    Even stranger is the fact that so few are pulling back. There’s no mass rejection, no principled slowdown, just a quiet, expanding gap between capability and comprehension, between usage and belief, between the system’s reach and our ability to reason about it. In the past, trust was a gateway to adoption; now adoption precedes trust, and sometimes replaces it. That’s not just a shift, it’s a reversal of how we filter technology into society.

    And maybe the real issue is this: we’re not just building software tools, we’re building epistemic infrastructure, systems that will guide decisions in law, education, hiring, medicine, and policy. If people treat those systems as untrustworthy black boxes, but still defer to them, then we’re injecting ambient doubt into the very foundations of how society makes choices. If that’s the case, this isn’t hype anymore; it’s learned helplessness in a probabilistic wrapper.

    What’s eroding isn’t just trust in AI. It’s the very idea that we should know how a system works before we let it decide things for us. I remain an AI optimist, but we must not confuse momentum with maturity.

    Thursday musings...

  • Omer Goldberg

    Founder and CEO @ Chaos Labs | We're Hiring!

    12,117 followers

    “There will be more AI Agents than people in the world.” – Mark Zuckerberg

    As AI grows, autonomous agents powered by LLMs (large language models) take on critical tasks without human oversight. While these systems hold incredible potential, they also face significant risks: manipulation through biased data, unreliable information retrieval, and prompt engineering, all of which can result in misleading outputs.

    At Chaos Labs, we’ve identified a critical risk: AI agents being unknowingly trained on manipulated, low-integrity data. The result? A dangerous erosion of trust in AI systems. In our latest essay, I dive deep with Reah Miyara, Product Lead, Model Evaluations at OpenAI. https://lnkd.in/eB9mPQWW

    Key insights from our essay:
    • The Compiler Paradox: Trust in foundational systems can be easily compromised. "No matter how thoroughly the source code is inspected, trust is an illusion if the compilation process is compromised."
    • LLM Poisoning: LLMs are susceptible to “poisoning” through biased training data, unreliable document retrieval, and prompt injection. Once biases are embedded, they taint every output.
    • RAG (Retrieval-Augmented Generation): While designed to make LLMs more accurate, RAG can amplify false information if external sources are compromised.
    • Conflicting Data: LLMs don't verify facts; they generate answers based on probabilities, often leading to inconsistent or inaccurate results.
    • Attack Vectors: LLMs can be attacked through biased data, unreliable retrieval, and prompt engineering, allowing adversaries to manipulate outputs without altering the model.

    The path forward: Trust in LLMs must go beyond surface-level outputs and address the quality of training data, retrieval sources, and user interactions. At Chaos Labs, we’re actively working on solutions to improve the reliability of AI systems. Our vision for the future is simple: with GenAI data exploding, verified truth and user confidence will be an application’s competitive edge. To get there, we’re developing solutions like AI Councils, a collaborative network of frontier models (e.g., ChatGPT, Claude, LLaMA) working together to counter single-model bias and enhance reliability. If these challenges excite you, we want to hear from you.
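    A minimal sketch of the cross-checking idea behind such a council: several independent models answer the same prompt, and an answer is accepted only if a quorum agrees. Everything here (the stand-in model callables, the string normalisation, the 2/3 quorum) is an illustrative assumption, not Chaos Labs' actual design.

```python
from collections import Counter
from typing import Callable

# Illustrative stand-ins for real model APIs; a production council
# would call each provider's endpoint instead of these lambdas.
MODELS: dict[str, Callable[[str], str]] = {
    "model-a": lambda prompt: "Paris",
    "model-b": lambda prompt: "Paris",
    "model-c": lambda prompt: "Lyon",  # an outlier (or poisoned) model
}

def council_answer(prompt: str, quorum: float = 2 / 3) -> str | None:
    """Ask every model the same question and accept an answer only if
    a quorum agrees, so a single biased model cannot dictate output."""
    answers = [ask(prompt).strip().lower() for ask in MODELS.values()]
    best, votes = Counter(answers).most_common(1)[0]
    return best if votes / len(answers) >= quorum else None

print(council_answer("What is the capital of France?"))  # -> "paris"
```

    Exact string matching is the crudest possible agreement test; a real system would compare normalised or semantically-matched answers, but the quorum logic is the point.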

  • Mark Boost

    CEO @ Civo - the hybrid / multi cloud provider that’s built for more. Compute, Kubernetes, S3 Storage, GPUs and AI.

    17,741 followers

    It's no surprise that trust in cloud providers is wavering, especially after Broadcom's $69 billion acquisition of VMware back in November 2023. Aggressive restructuring, the transition to subscription-only licensing, divested divisions, and significant layoffs have disrupted long-standing business models, highlighting a broader industry problem: providers prioritising profits over customer relationships and stability.

    Ofcom has even referred the UK cloud services market for a competition investigation, focusing on anti-competitive practices like high data egress fees and opaque licensing, which could lead to major reforms.

    Our latest research at Civo revealed that nearly 70% of VMware customers anticipate price hikes, which is why we advocate for open-source cloud solutions, transparent pricing, and long-term cost commitments: values that much of the industry has traded away for profit-centric practices. Businesses should adopt cloud strategies that emphasise open standards, data sovereignty, and multi-cloud capabilities to avoid vendor lock-in and ensure operational continuity.

    The cloud computing sector must decide whether to continue on its current path or embrace openness and transparency to maintain customer trust and support sustainable growth. Read more here: https://buff.ly/3yau8mB
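    One concrete expression of the open-standards advice: many providers expose S3-compatible object storage, so code written against the S3 API can move between clouds by changing only the endpoint. A sketch using boto3; the endpoint URLs below are placeholders, not any particular provider's addresses.

```python
import boto3

def s3_client(endpoint_url: str, access_key: str, secret_key: str):
    """Build a client against any S3-compatible endpoint, keeping
    storage code portable across providers instead of locked to one."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # the only provider-specific part
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

# The same upload code runs against either provider:
# client = s3_client("https://objects.cloud-a.example.com", key, secret)
# client = s3_client("https://s3.cloud-b.example.com", key, secret)
# client.upload_file("backup.tar.gz", "my-bucket", "backup.tar.gz")
```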

  • Graham Cooke

    CEO & Founder @bravaxyz. Defining Intelligent Capital Markets | AI policy engines + stablecoin rails for automated, transparent credit | Author | Ex-Google | Exited Founder | NED

    15,207 followers

    OpenAI's closed-source approach puts them on the wrong side of history. Here's why the future of AI must be open and transparent:

    The battle between open and closed systems defines tech history. In each case, closed systems dominated early but open systems won in the end:
    • CompuServe vs the Internet
    • Windows vs Linux
    • iOS vs Android

    Why? Because open systems harness collective intelligence. Thousands of developers spot bugs faster than any single company. Security issues get fixed quickly. Innovation accelerates. The cost of development plummets.

    But with AI, the stakes are higher than ever. Imagine AI making life-or-death decisions:
    • Medical diagnoses
    • Self-driving cars
    • Financial systems
    • Military applications

    Would you trust a system you can't inspect?

    This is why OpenAI's transformation is concerning. In 2015, they launched with a clear mission: create AI that benefits humanity through open-source development. By 2019, everything changed:
    • Shifted to a "capped-profit" model
    • Took $1B from Microsoft
    • Went closed source

    This betrayal of principles led to Musk's lawsuit in 2024. But this isn't about personal drama. It's about the future of humanity's relationship with AI.

    Open source creates trust through transparency. When code is visible, you can verify what it does. With closed systems, you're forced to trust black boxes.

    The pattern throughout tech history is clear:
    • Open systems start slower
    • But they win in the end
    • They harness humanity's collective intelligence
    • They build trust through transparency

    We're at a pivotal moment in AI development. Will it be controlled by a few companies? Or will it be open, transparent, and community-driven? History suggests the winner is clear. The question is: which side of history will you be on?

    ---

    I'm Graham. Former Google employee who built $2B+ revenue products. Author of "Web3: The End of Business as Usual." Currently building bravaxyz to make blockchain technology accessible to billions. Follow for more insights on AI, blockchain, and the future of technology.

  • Nitesh Rastogi, MBA, PMP

    Strategic Leader in Software Engineering🔹Driving Digital Transformation and Team Development through Visionary Innovation 🔹 AI Enthusiast

    8,704 followers

    The AI Paradox in Software Development: More Usage, Less Confidence

    The latest developer survey, featured by Ars Technica, is a wake-up call for technology leaders: trust in AI coding tools continues to fall, even as usage reaches all-time highs. Here's what the numbers show, and what they mean for our engineering teams.

    🎯 Rising Usage, Eroding Trust
    ▪ 84% of developers now use or plan to use AI coding tools, up from 76% last year; over half (51%) use them daily.
    ▪ Yet trust in AI accuracy is at just 33%, and "high trust" is a mere 3%. Distrust is now at 46%, up sharply from 31% last year.
    ▪ For experienced engineers, "high trust" plummets to just 2.6%.
    ▪ Overall approval of AI is down from 72% in 2024 to only 60% this year.

    🎯 Real-World Impact
    ▪ 66% of developers say AI solutions are "almost right, but not quite," leading to time-consuming bugs and more debugging, not less.
    ▪ 45% report debugging AI-generated code takes more time than expected.
    ▪ 29% believe AI tools struggle notably with complex tasks.
    ▪ When the stakes are high, 75% would still turn to people for help rather than trust AI alone.
    ▪ 62% say ethical or security concerns require human judgment, not machine output.

    🎯 What's Next for Leaders?
    ▪ Embed best practices: human-in-the-loop review remains essential for mission-critical code and projects (a sketch of such a gate follows this post).
    ▪ Foster open dialogue: encourage teams to surface frustrations, especially about "almost-right" AI outputs, and share solutions collaboratively.
    ▪ Invest in developer education: prioritize learning not just new AI-powered tools, but strategies for verification, auditing, and responsible adoption.

    The paradox is clear: AI is becoming more central in our workflows, not less. As adoption accelerates, let's make sure trust and human expertise stay at the core of our development culture.

    Source/Credit: https://lnkd.in/grUheu9i

    #AI #DigitalTransformation #GenerativeAI #GenAI #Innovation #ArtificialIntelligence #ML #ThoughtLeadership #NiteshRastogiInsights
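    A minimal sketch of the human-in-the-loop gate described above. The test-runner command, directory layout, and review queue are illustrative assumptions, not anything prescribed by the survey: the point is that tests alone never approve critical code.

```python
import subprocess

def gate_ai_patch(patch_dir: str, critical: bool) -> str:
    """Run the project's test suite against an AI-generated patch and
    decide what happens next. Passing tests can auto-approve only
    non-critical changes; critical ones still go to a human reviewer."""
    result = subprocess.run(
        ["pytest", "-q"],  # assumed test runner for this project
        cwd=patch_dir,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return "reject: tests failed, send back to the author"
    if critical:
        return "escalate: tests passed, queue for human review"
    return "approve: tests passed on a non-critical path"

# Usage: gate_ai_patch("/repo/worktree-ai-patch", critical=True)
```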

  • Adileh Mountain

    I help CFOs, COOs, and VPs of Ops at mid-market construction companies ($50M–$500M) build operations that keep up with their growth, including AI where it actually counts | $9.5B+ Projects Delivered | Ex-Deloitte

    2,249 followers

    95% of ERP veterans just admitted their industry has a trust problem.

    My post last week about how your ERP system integrator's revenue model fundamentally conflicts with your implementation goals went viral. I expected pushback from the SI community, but got overwhelming validation instead.

    95% of the responses agreed with my premise:
    → 70% total agreement
    → 25% agreed with important caveats
    → 5% disagreed or were skeptical

    Most of the comments came from ERP veterans with decades of implementation scars. The most revealing insight: even the SI defenders admitted the problem exists. The comments kept circling back to:
    • "We need a partnership mentality, not adversarial contracts"
    • "Both sides need to share accountability"
    • "Clients often sabotage their own projects"

    But here's what I took from it: we're describing a fundamental breakdown in human trust. When people feel vulnerable, they protect themselves first. When systems reward self-interest over shared success, integrity becomes optional. We've created an environment where everyone's incentivized to point fingers instead of link arms.

    The result is that projects that should transform businesses turn into expensive battlegrounds, while everyone's more focused on survival than success. And the saddest part is that the end users, the people who actually have to live with the system daily, are the ones who suffer most.

    This isn't some new problem. It's completely systemic. If you're feeling this right now, you have two paths forward:
    1. Build trust like you mean it. ✅ Make integrity non-negotiable for both teams. ✅ Create shared goals where everyone wins when the project succeeds.
    2. Bring in an independent advocate whose only job is protecting your outcome: someone who understands both SI tactics and client blind spots.

    Your SI isn't evil. Your internal team isn't incompetent. But the fundamental incentives are broken, and wishful thinking about "true partnerships" won't fix them.

    Been through an ERP implementation yourself? Share your perspective in the comments. Need an independent advocate for your ERP implementation? That's exactly what I do for clients. Send me a DM about the challenges you're facing.

    #ERP #SystemsIntegrator #Trust #Partnership

  • Sebastian Mueller

    Follow Me for Venture Building & Business Building | Leading With Strategic Foresight | Business Transformation | Modern Growth Strategy

    26,862 followers

    AI doesn’t stumble on technology. It stumbles on trust.

    Most companies still deploy AI like old IT systems: top-down, pre-baked, “here’s your new workflow.” And then they wonder why adoption stalls. The numbers say it all: trust in company-provided gen-AI fell 31% in two months. Trust in autonomous tools fell 89%. That’s not resistance; that’s feedback.

    You can’t mandate trust. You have to earn it, and track it. If you can measure sentiment, friction, and confidence, then Trust Health becomes a KPI. Treat it like latency or uptime: if the trust baseline drops, you stop the rollout. Simple. (A sketch of such a gate follows this post.)

    And once trust is a KPI, the approach shifts:
    - Co-create workflows with the people who actually do the work.
    - Ship in small loops to reveal friction early.
    - Make “no trust → no scale” a rule, not a slogan.

    The companies winning with AI aren’t the ones with the flashiest models. They’re the ones that understand one thing: technology is cheap. Trust is the moat.

    What’s the one trust metric you’d track before scaling any AI tool in your organisation? https://lnkd.in/eRShuVSs

    #AI #Transformation #Business #Strategy
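    What "treat Trust Health like latency or uptime" could look like in practice, sketched under stated assumptions: the pulse-survey scale, the 70-point baseline, and the three-round window are all illustrative, not from the post.

```python
from statistics import mean

TRUST_BASELINE = 70.0  # assumed minimum acceptable Trust Health score
WINDOW = 3             # assumed number of survey rounds to average

def trust_health(scores: list[float]) -> float:
    """Average the most recent pulse-survey rounds (0-100 scale)
    into a single Trust Health KPI value."""
    return mean(scores[-WINDOW:])

def rollout_gate(scores: list[float]) -> str:
    """Apply the 'no trust, no scale' rule: pause the rollout
    whenever the trust KPI dips below the agreed baseline."""
    kpi = trust_health(scores)
    if kpi < TRUST_BASELINE:
        return f"pause rollout: Trust Health {kpi:.1f} < {TRUST_BASELINE}"
    return f"continue rollout: Trust Health {kpi:.1f}"

print(rollout_gate([78, 74, 66, 61]))  # -> pause rollout: 67.0 < 70.0
```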

  • Dion Wiggins

    CTO at Omniscien Technologies | Board Member | Strategic Advisor | Consultant | Author

    12,899 followers

    Trust Betrayed. Again.

    Anthropic, the company that branded itself as “privacy-first” and “safety-driven,” just torched its own moat. Starting now, Claude will train on your chat transcripts and coding sessions unless you manually opt out by September 28. Five years of storage replaces the old 30-day deletion rule. Free, Pro, Max, Claude Code: no exceptions. This is not an update. It is a betrayal.

    → Hypocrisy laid bare: The self-proclaimed “responsible” AI company now runs the same playbook as the rest: harvest first, ask forgiveness later.
    → Compliance nightmare: Sensitive conversations, contracts, legal docs, and code can now sit on Anthropic’s servers for half a decade. Opt-out ≠ consent.
    → Structural exposure: For governments and enterprises that bought Claude for its privacy promises, the foundation just cracked.
    → Pattern confirmed: In the end, every closed-model company caves to the same growth imperative: extract more data, hold it longer, and lock users in.

    The last fig leaf of “privacy-first AI” has fallen. The message is simple: sovereignty and control cannot be outsourced. The question for every policymaker, CIO, and enterprise is now clear: how many more times will you let “responsible AI” vendors betray your trust before you build systems you truly control?

    https://lnkd.in/gm2J-T6h

  • Dmitrii Kharlamov

    Hacker. C, C++, Ruby, Rails, Go, Python, Rust

    3,161 followers

    Reliability used to be a matter of professional pride. Now it's a rounding error on a product roadmap.

    We all noticed GitHub was getting slow and unreliable. The official status page makes you feel like you're imagining it, until you look at the history. One incident per quarter, eight years ago. Up to four incidents a month, now. I checked.

    GitHub is just the most visible example, the one that's easy to measure because enough people use it that the degradation is impossible to deny. But the pattern holds across the stack: the CI runners, the package registries, the cloud DNS resolvers, the auth providers. The things developers treat as given have quietly become the things developers route around. GitHub is useful here precisely because it keeps incident records. It gives us numbers. Most of the rot doesn't.

    But the full picture is worse than the cadence alone suggests. GitHub has removed uptime figures from their official pages. Some people managed to reconstruct the old status-page format and feed it GitHub's still-published incident reports. Looking at that data, the degradation is stark: uptime figures that would have sent shockwaves through the industry just a few years ago, numbers that would be embarrassing for a student project, are now the normal baseline for one of the most critical pieces of infrastructure in software development.

    Not long ago, reliability like this wouldn't just put mid-level managers on the spot. The board would be sacking executives, announcing a code freeze, and commissioning a full audit. Instead, in the vibe-coded fog we currently inhabit, it registers as nothing. Record profits, mass layoffs, and breathless proclamations that "90% of code will be written by AI agents."

    The data center cancellation wave, driven largely by community opposition over electricity costs and water use, but also by players like Microsoft quietly walking away from gigawatts' worth of leases, is being read by many as an early signal that the AI investment cycle is cooling. Maybe. But even if the AI reckoning arrives on schedule, it won't retroactively restore the uptime that eroded. It won't flip single-to-triple-nine reliability back to the four and five nines that were considered aspirational targets for serious production infrastructure before the "ship it and monetize it" era took hold.

    The industry's values flipped. Deep technical understanding lost ground to the appearance of innovation. Shiny displaced solid. And the consequences are now showing up in the status pages of tools that millions of developers depend on every day.
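    For scale, the "nines" mentioned above translate directly into downtime budgets per year. A quick calculation (standard availability arithmetic, not figures from the post) shows how far apart three and five nines really are:

```python
# Allowed downtime per year for each availability level ("nines").
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): "
          f"{downtime_min:.1f} min/year ({downtime_min / 60:.2f} h)")

# 3 nines: ~525.6 min/year (~8.76 h of downtime)
# 4 nines: ~52.6 min/year  (~0.88 h)
# 5 nines: ~5.3 min/year   (~0.09 h)
```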
