🔬 Paper Alert: Trust in AI is not built in isolation – it’s social.

🤖 Proud supervisor moment: My (and Simon B. de Jong’s) doctoral student Türkü Erengin has just published her very first paper, "You, Me, and the AI: The Role of Third-Party Human Teammates for Trust Formation Toward AI Teammates," in Wiley’s Journal of Organizational Behavior.

🤖 So, what does this research tell us? AI teammates are becoming a reality in modern workplaces. But while research has focused on how humans individually evaluate AI, Türkü’s work brings a fresh perspective: trust in AI is strongly shaped by, and learned from, the people around us. Using two main (plus two supplementary) studies, including a really cool observational, incentivized study with human–AI teams (featuring Temi, a real GPT-powered physical service robot; see picture), this paper shows that:

✅ If a human teammate trusts an AI, their colleagues are more likely to trust it too. This effect is not only quite strong; it also remains stable when controlling for people’s own initial preferences after trying the AI for the first time, and it holds true in contexts where actual money is on the table!
✅ This effect disappears if the human teammate is themselves seen as untrustworthy.
✅ Trust in AI is not just about the AI's own reliability—it depends on social context and human relationships.

🚀 Why does this matter?
1️⃣ Organizations implementing AI should focus on social dynamics and context rather than just AI performance. It does not (only) matter how well AI functions: if relevant others around employees don’t trust AI, employees won’t either.
2️⃣ Building trust in AI requires trusted human advocates—if key employees are skeptical, adoption suffers.
3️⃣ AI trust calibration is crucial: over-reliance and under-reliance on AI both carry risks, and leaders should consider social influences when introducing AI teammates.

🎉 Huge congratulations to Türkü for this important contribution! If you’re interested in how social cognitive theory can explain trust in AI teams, check out the full paper.

What makes this even more special? JOB is the journal where my first academic paper was published—and where my own PhD supervisor (Frank Walter) had their first journal publication. A true academic full-circle moment! 🎓🔁

I’d love to hear from others: Have you noticed social influences shaping how people trust AI in your workplace? Have you ever seen CEOs, leaders, or colleagues modeling (or refraining from modeling) trust in AI?

#AI #TrustInAI #HumanAITeams #OrganizationalBehavior #FutureOfWork #Leadership #AcademicMentorship
Cognitive processes in technology trust development
Summary
Cognitive processes in technology trust development refer to how people use their thinking, feelings, and social cues to decide whether or not to trust new tools like artificial intelligence (AI) or digital systems. Trust in technology is shaped by individual experiences, social influences, and the way people reason through uncertainty, rather than just technical features or ease of use.
- Support critical thinking: Encourage users to question and verify AI outputs, helping them rely on their own judgment rather than blindly trusting technology.
- Highlight social cues: Show how trusted colleagues interact with technology, since seeing others’ confidence or skepticism can strongly influence personal trust decisions.
- Respect user agency: Give people transparency, choices, and explanations about how technology works so they feel in control and can decide when to trust it.
-
Trust in technology is not about making systems look friendly or adding more explanations. It is about how people decide to rely on something when there is uncertainty. In human–computer interaction, trust is a judgment users make. It is shaped by expectations, experience, social cues, perceived control, and context. The same system can be trusted in one situation and distrusted in another. That is why trust is so hard to design and so easy to break.

Research shows that users do not trust systems for a single reason. Sometimes trust comes from reasoning: does this system behave consistently? Does it do what I expect? Other times trust comes from feeling: does this interface feel human, present, or socially responsive? In many cases trust is social: if people I trust rely on this system, I am more likely to trust it too.

There are also moments where trust collapses. When users feel forced, manipulated, or stripped of control, distrust appears even if the system is accurate. When early experiences violate expectations, trust erodes fast and rarely recovers on its own.

One of the most important insights is that trust is dynamic. It builds slowly through repeated positive interactions and can disappear quickly after a single negative one. Designing for trust is not about maximizing trust. It is about supporting appropriate trust: helping users know when to rely on a system and when not to.

For AI, automation, and complex digital products, this matters more than ever. Overtrust is just as dangerous as distrust. Good design respects user agency, supports understanding, and stays honest about limitations. Trust is not a feature you add at the end. It is an outcome of how the entire system behaves over time.
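To make "appropriate trust" concrete, here is a minimal Python sketch of one common pattern, confidence-based deferral: the system acts only on high-confidence outputs and routes the rest to a human instead of projecting false certainty. The threshold and the classify() stub are illustrative assumptions, not something from the post.

```python
# Minimal sketch of "appropriate trust" as confidence-based deferral.
# The threshold and the classify() stub are illustrative, not from the post.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's own probability estimate, 0.0-1.0

def classify(text: str) -> Prediction:
    # Stand-in for a real model call.
    return Prediction(label="approve", confidence=0.62)

DEFER_THRESHOLD = 0.85  # would be tuned on validation data in practice

def decide(text: str) -> str:
    pred = classify(text)
    if pred.confidence >= DEFER_THRESHOLD:
        # High confidence: surface the answer, but keep it auditable.
        return f"AI decision: {pred.label} (confidence {pred.confidence:.0%})"
    # Low confidence: route to a human instead of pretending certainty.
    return f"Deferred to human review (confidence {pred.confidence:.0%})"

print(decide("Refund request #1042"))
```

The exact cutoff matters less than the design stance: uncertainty is surfaced honestly in the interface, which supports calibration rather than maximal trust.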
-
🤔 A fascinating new study from Oregon State University, GitHub, and Northern Arizona University researchers reveals what really drives developers to trust and adopt AI tools - and it's not what most of us assumed.

As someone who's spent years studying organizational psychology and now helps companies navigate AI adoption initiatives, what caught my attention wasn't just what influences trust in AI - but what doesn't. Here are three surprising insights that challenge conventional wisdom:

1. Ease of use? Not a significant factor. Unlike traditional tech adoption, developers' trust in AI tools isn't heavily influenced by how easy they are to use. This suggests the game has changed - we're moving beyond basic usability concerns to deeper questions of value and alignment.

2. Trust is built on three pillars:
- System/output quality (does it do what it claims?)
- Functional value (does it provide tangible benefits?)
- Goal maintenance (does it align with developers' objectives?)

3. 🔍 Most fascinating: Cognitive styles matter more than we thought. Developers who:
- Are intrinsically motivated by technology
- Have higher computer self-efficacy
- Show greater risk tolerance
...are significantly more likely to adopt these tools.

Through my work at Fractional Insights, I've observed how organizations often focus on technical training while overlooking these psychological factors. But this research suggests we need a more nuanced approach to AI adoption - one that accounts for cognitive diversity and individual differences in how people approach new technology.

💡 The key takeaway for organizational leaders: Successful AI adoption isn't just about the technology - it's about understanding and supporting the diverse ways people think about and interact with these tools.

What's your experience? What have you noticed about how psychology impacts AI tool adoption in your organization?

Throwback pic to talking about technology and humanity with some of my favorite experts: Amir Ghowsi and Moritz Sudhof at NYU, with Anna A. Tavis, PhD.

#FutureOfWork #OrganizationalPsychology #AIAdoption #TechnologyTransformation #InclusiveDesign #LeadershipInsights
-
Is AI Replacing Our Thinking? | Are we starting to delegate our thinking to AI?

Two recent research papers (links below) challenged my assumption that AI is simply a tool. The reality may be more complex... and these studies suggest a deeper shift may already be underway. 😱 AI is starting to reshape how we reason 😱

Let's unpack this step by step. Keep reading! 👇 👇 👇

1️⃣ A new participant in human cognition
For decades, behavioral science (popularized by Daniel Kahneman) described thinking as two systems:
• System 1 (fast, intuitive, automatic)
• System 2 (slow, analytical, effortful)
But generative AI introduces a third actor:
• Artificial cognition → external reasoning generated by AI systems.
🤭 For the first time in history, part of our thinking process can occur outside the human mind.

2️⃣ The risk of "cognitive surrender"
It happens when people rely heavily on AI outputs without critical evaluation. This doesn't necessarily happen because people are careless. It happens because AI often appears confident, coherent, and fast... qualities that naturally trigger trust.

3️⃣ What real-world conversations reveal
The studies provide rare empirical evidence from real usage: severe "disempowerment" cases were rare, but not negligible at scale. Researchers identified patterns where AI can subtly reduce human agency, including:
• Users asking the AI to decide what they should do
• Delegating personal communication (messages, decisions, judgments)
• Treating the model as an authority rather than a collaborator
Perhaps the most surprising finding: interactions with higher disempowerment potential often receive higher user satisfaction scores. 🤯 🔥

4️⃣ The paradox of the AI era
AI can dramatically improve decisions when used well. But if we stop questioning outputs, the outcome of many decisions becomes dependent on the AI rather than on the human judgment behind it. 🫣

What does this mean? It seems clear to me: the real challenge of AI is less about technical capability and more about cognitive governance. So... if AI is becoming part of our cognitive process, then AI literacy becomes a core capability, not a nice-to-have. People need to understand:
• how AI systems reason
• where they fail
• when to trust them
• and when to challenge them

Links:
- "Who's in Charge? Disempowerment Patterns in Real-World LLM Usage." https://lnkd.in/emi-DVQp
- "Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender." https://lnkd.in/eVPb3pQv
- Thoughtful analysis published by My Tech Plan: https://lnkd.in/eJuSxmV5

#GovernYourAINow #GobiernaYaTuIA
-
Trust in AI is rarely absolute. It is conditional, contextual, and should be constantly recalibrated.

I’m pleased to share my second paper, “Trust, but Verify: A Reflexive Thematic Analysis of Human–AI Interaction”, has now been published in the Advances in Social Sciences Research Journal (ASSRJ). This paper was co-authored with Dr. Chris Fong, an experienced psychologist and academic leader in psychological science, and draws on qualitative interviews with professionals across psychology, technology, and leadership to explore how trust in AI is formed, tested, and sometimes withdrawn in real-world practice.

This work builds on my earlier review on Human-Centred AI design, developed during my MSc in Psychology, moving from how we should design AI to how people actually experience and trust it in practice.

Some key insights from the study:
- Trust in AI is conditional, shaped by verification practices and expectations of transparency
- Professionals value AI for reducing cognitive load and improving efficiency, but remain cautious about hallucinations and accuracy
- There is growing concern that overreliance on AI may dull critical thinking and imagination over time
- Explainability and source transparency are not optional. They are prerequisites for sustained trust

The paper argues that AI should function as a cognitive scaffold, not a cognitive substitute. Building better AI means supporting human judgement, not replacing it.

Full paper here: https://lnkd.in/ghsk-HWE

I’d be interested to hear your views: How do you personally calibrate trust when working with AI?

#AI #HumanCentredAI #HumanAIInteraction #TrustInAI #ExplainableAI #XAI #Psychology #ResponsibleAI #BuildingbetterAI
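One way to read "source transparency is a prerequisite" in engineering terms is to treat missing sources as a blocking condition rather than a footnote. A hedged illustration in Python follows; the gating logic and function names are my own assumptions, not something described in the paper.

```python
# Illustrative sketch (not from the paper): treat source transparency as a
# hard requirement - an answer without checkable sources is flagged, not shown.

from urllib.parse import urlparse

def has_resolvable_source(url: str) -> bool:
    # Cheap structural check; a real system would fetch and match content.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def gate_answer(answer: str, sources: list[str]) -> str:
    checkable = [s for s in sources if has_resolvable_source(s)]
    if not checkable:
        return "WITHHELD: no verifiable sources provided - verify manually."
    source_list = "\n".join(f"  - {s}" for s in checkable)
    return f"{answer}\nSources to verify:\n{source_list}"

print(gate_answer("The statute was amended in 2019.", ["https://example.org/statute"]))
print(gate_answer("Trust me on this.", []))
```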
-
Trust comes from knowing AI’s limits.

One of the most consistent surprises was how little many lawyers understood about where generative AI struggles in practice, particularly with hallucinations and numerical reasoning. While participants knew outputs needed to be checked, few had a clear sense of when models were most likely to fail or how to interrogate those weaknesses effectively. Confidence increased when limits were made explicit and lawyers were given practical ways to verify outputs.

Instead of treating AI as something to either trust or avoid, participants learned how to supervise it. They learned to cross-check assumptions, review outputs critically, and build verification into their workflows. This resulted in more disciplined use rather than looser risk tolerance. Lawyers remained accountable, and trust became a function of judgment.
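As one concrete example of "building verification into the workflow", here is a small Python sketch of the habit the post describes for numerical reasoning: never accept model arithmetic, recompute it. The scenario and figures are hypothetical, made up purely for illustration.

```python
# Sketch of one verification habit: never accept model arithmetic - recompute
# it. The claim and the line items below are invented for illustration.

def verify_sum(claimed_total: float, line_items: list[float]) -> str:
    actual = sum(line_items)
    if abs(actual - claimed_total) < 0.01:
        return f"OK: total {claimed_total:,.2f} matches recomputation."
    return (f"MISMATCH: model claimed {claimed_total:,.2f}, "
            f"recomputed {actual:,.2f} - escalate to human review.")

# e.g. an AI-drafted damages summary claims a total of 41,700.00
print(verify_sum(41_700.00, [12_500.00, 18_200.00, 9_800.00]))
# -> MISMATCH: model claimed 41,700.00, recomputed 40,500.00 - escalate...
```

The check is trivial, which is the point: cheap, mechanical verification steps are what turn "trust, but verify" from a slogan into a workflow.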
-
I often hear that AI fails due to technical challenges: models, data, or infrastructure. But AI initiatives most often stumble because people don’t trust them, cognitively and emotionally. If you want your teams to follow and your leadership to have real impact, join me for Part #2 of the AI mini-series.

Too many AI strategies focus solely on accuracy, scale, and ROI. Few address how people actually feel when AI becomes part of their daily work. Trust isn’t built with dashboards or motivational quotes on the walls... It’s built when people feel truly seen, safe, and respected. To achieve this, AI must be positioned as a catalyst for human potential, not as a silent evaluator. If your AI strategy overlooks HUMAN emotions like fear, curiosity, pride, and identity, then it will never fully land. Adoption isn’t just a technical hurdle. It’s fundamentally a leadership behavior challenge.

Let me unpack how education, empathy, and intentional leadership foster cognitive trust, and why EMOTION is the missing layer in most enterprise AI transformations. Here are 5 actionable leadership behaviors to drive successful enterprise AI adoption:

1. Lead with intent, not just tools: Launch every AI initiative by clearly stating its purpose for people, not just the business. Connect AI to personal growth and learning, not only efficiency.

2. Normalize emotion in AI conversations: Invite concerns, discomfort, and skepticism. Treat emotional responses as valuable data, not resistance. Psychological safety is essential for adoption.

3. Educate beyond “how it works”: Train teams on AI’s limitations, trade-offs, and failure modes, not just on capabilities. Cognitive trust grows when leaders are transparent about what AI cannot do.

4. Model vulnerability at the top: Leaders should share openly what they are learning and calibrating about AI. Authentic leadership accelerates trust.

5. Align incentives, lexicon, and decision-making with people-centric AI use: One misaligned KPI can erode months of trust-building. Consistency creates credibility.

Let’s remember: the success of AI isn’t just about technology, it’s about human connection and trust.

#AILeadership #TrustInAI #EnterpriseAITransformation #HumanCenteredAI #CognitiveTrust #LeadershipMatters #ChangeManagement
-
𝗦𝗰𝗮𝗹𝗶𝗻𝗴 𝗔𝗜 𝗶𝗻 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁 𝗶𝘀 𝗹𝗲𝘀𝘀 𝗮𝗯𝗼𝘂𝘁 𝘁𝗼𝗼𝗹𝘀 𝗮𝗻𝗱 𝗺𝗼𝗿𝗲 𝗮𝗯𝗼𝘂𝘁 𝘁𝗿𝘂𝘀𝘁.

I recently spoke to a few engineers on our team about how AI fits into their day-to-day work. The intention wasn’t to find out which models they use, but what actually slows adoption down. A few observations stood out:

“I don’t mind AI suggestions. I mind not knowing why it suggested something.”
“It’s fast, but I still want to be accountable for what ships.”
“AI helps with the first 70%, but the last 30% needs judgment.”
“I trust it more when I can review, correct, and guide it.”

The pattern was clear. Adoption improves when developers stay in control. AI works best as a collaborator, not an authority. Productivity scales when humans remain in the loop, guardrails are explicit, and responsibility isn’t outsourced. Trust grows when teams understand where AI helps, where it stops, and where human judgment takes over.

The future of AI in development isn’t about replacing decision-making. It’s about designing systems that let humans move faster without giving up ownership.

Tell us about your experience including AI in your business workflows.

#AIAdoption #ProductThinking #Engineering #TechLeadership
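One way to make "humans remain in the loop" explicit is to encode the guardrail itself rather than leave it as a norm. A toy Python sketch follows, assuming a hypothetical Change record and reviewer; it is an illustration of the pattern, not any team's actual tooling.

```python
# Toy sketch of an explicit guardrail: an AI-generated change can be proposed
# by a bot, but it only ships with a named human approval attached.

from dataclasses import dataclass, field

@dataclass
class Change:
    description: str
    authored_by_ai: bool
    approvals: list[str] = field(default_factory=list)  # human reviewer names

def can_ship(change: Change) -> bool:
    if change.authored_by_ai and not change.approvals:
        return False  # responsibility is not outsourced to the model
    return True

fix = Change("Refactor retry logic (AI-suggested)", authored_by_ai=True)
print(can_ship(fix))           # False - blocked until a human reviews it
fix.approvals.append("priya")  # hypothetical reviewer signs off
print(can_ship(fix))           # True - a named human owns the decision
```

The design choice mirrors the engineers' comments above: speed from AI, but accountability that stays with a person who reviewed, corrected, and approved the work.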
-
To accelerate AI adoption, you must invest in people.

Research on emotional and cognitive trust in AI shows that when employees lack meaningful learning opportunities and feel surveilled or threatened, they restrict or manipulate their digital footprint, effectively degrading AI performance and stalling adoption. In this context, empty ‘augment, not replace’ slogans or purely automation‑driven, prompt‑only use cases can easily make people feel they are training systems that might one day replace them.

The key success factor of AI adoption remains trust: how employees feel about AI matters just as much as their assessment of its technical quality. The paper by Natalia Vuori, Barbara Burkhard, and Leena Pitkaranta shows that once trust starts to erode, employees’ behaviours regarding their digital footprints undermine AI, which then further erodes trust and adoption. This pattern does not fix itself without sustained leadership action.

Both must be balanced: cognitive trust and emotional trust.
Cognitive trust: the belief that the AI is competent, reliable, and useful.
Emotional trust: how safe, comfortable, and non‑threatening it feels.

In the end, this is another finding to highlight that AI adoption is a leadership and people issue, not just a technology issue. Leaders must actively build both cognitive trust (clear use cases, transparency, reliability) and emotional trust (psychological safety, fair and non‑surveillance use of data), and 👉 tailor 👈 interventions to different trust configurations rather than rely on generic change management.

With that, I wish you all a happy New Year and may 2026 become your personal year of trust!

https://lnkd.in/eaZEGFyw