How AI Will Shape Software Security

Explore top LinkedIn content from expert professionals.

Summary

Artificial intelligence is transforming software security by introducing new risks and defenses, impacting both how systems are protected and how threats are detected. AI can make decisions, adapt to changing environments, and act autonomously, which changes traditional approaches to safeguarding digital information.

  • Understand AI-specific risks: Focus on new threats such as prompt injection and model manipulation, which require tailored strategies beyond those used for regular software.
  • Monitor AI systems continuously: Regularly check for unexpected behaviors and vulnerabilities since AI models can evolve and create unpredictable outcomes.
  • Strengthen security with automation: Use AI tools to scan for flaws and automate threat detection, making it easier to stay ahead of sophisticated cyber attackers.
Summarized by AI based on LinkedIn member posts
Image Image Image
  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    52,639 followers

    Why AI Is The New Cybersecurity Battleground - Forbes AI has evolved from a tool to an autonomous decision-maker, reshaping the landscape of cybersecurity and demanding innovative defense strategies. Artificial intelligence has quickly grown from a capability to an architecture. As models evolve from backend add-ons to the central engine of modern applications, security leaders are facing a new kind of battlefield. The objective not simply about protecting data or infrastructure—it’s about securing the intelligence itself. In this new approach, AI models don’t just inform decisions—they are decision-makers. They interpret, respond, and sometimes act autonomously. That shift demands a fundamental rethink of how we define risk, build trust, and defend digital systems. From Logic to Learning: The Architecture Has Changed Historically, enterprise software was built in layers: infrastructure, data, logic, and presentation. Now, there’s a new layer in the stack—the model layer. It’s dynamic, probabilistic, and increasingly integral to how applications function. Jeetu Patel, president and chief product officer at Cisco, described this transformation to me in a recent conversation: “We are trying to build extremely predictable enterprise applications on a layer of the stack which is inherently unpredictable.” That unpredictability is not a flaw—it’s a feature of large language models and generative AI. But it complicates traditional security assumptions. Models don’t always produce the same output from the same input. Their behavior can shift with new data, fine-tuning, or environmental cues. And that volatility makes them harder to defend. AI Is the New Attack Surface As AI becomes more central to application workflows, it also becomes a more attractive target. Attackers are already exploiting vulnerabilities through prompt injection, jailbreaks, and system prompt extraction. And with models being trained, shared, and fine-tuned at record speed, security controls struggle to keep up. Runtime Guardrails and Machine-Speed Validation Given the speed and sophistication of modern threats, legacy QA methods aren’t enough. Patel emphasized that red teaming must evolve into something automated and algorithmic. Security needs to shift from periodic assessments to continuous behavioral validation. Agentic AI: When Models Act on Their Own The risk doesn’t stop at outputs. With the rise of agentic AI—where models autonomously complete tasks, call APIs, and interact with other agents—the complexity multiplies. Security must now account for autonomous systems that make decisions, communicate, and execute code without human intervention. #cybersecurity #AI #AgenticAI #dynamic #riskmanagment

  • View profile for AJ Yawn

    GRC Engineering | Advisor | Author | Founder of GRC Engineering Club on Patreon | Veteran | LinkedIn Learning Instructor | SANS Instructor | Mental Health Advocate | Anchored Ambition

    51,397 followers

    I spent more time digging into the new NIST Cybersecurity Profile for AI... The document frames AI cybersecurity around three distinct focus areas. Not just securing AI systems. But understanding how AI changes cybersecurity as a whole. The first focus area is securing AI systems themselves. This includes protecting and understanding training data implications, safeguarding model artifacts, securing inference APIs, and preventing things like model theft, prompt injection, or adversarial manipulation. The second focus area is using AI to strengthen cybersecurity operations. Security teams are already experimenting with AI for threat detection, GRC, anomaly analysis, and automating investigation workflows. The third focus area is defending against attackers who are using AI. That last point is where things start to change the security landscape. AI can accelerate vulnerability discovery, generate convincing phishing campaigns, and automate reconnaissance in ways that were previously very manual. In other words, AI is now influencing both sides of the cybersecurity equation. Organizations have to secure the AI systems they deploy while also preparing for attackers who are increasingly augmented by AI themselves. That dual pressure is why AI security is quickly becoming part of mainstream cybersecurity strategy. It is not a niche governance topic anymore. It is becoming part of how modern security programs operate. #AI #GRCEngineering

  • View profile for Tristan Ingold

    AI Governance at Meta

    5,765 followers

    Is your team still treating AI systems exactly like regular software when it comes to security? 🤔 I've been digging into NIST's draft Cyber AI Profile (IR 8596), which I think is essential reading for any GRC professional. The comment period closed last Friday, and this guidance confirms something many of us have felt for a while: AI challenges some of the core assumptions behind our traditional security frameworks. Unlike typical software which behaves predictably AI models are probabilistic and keep evolving. That means we face a new class of risks that require us to rethink our approach. A few takeaways for those of us in GRC: 💡 1️⃣ Static Checklists Don't Cut It: Because AI behavior is less predictable, relying solely on fixed checklists risks missing important threats. The guidance encourages adopting risk models designed specifically for AI's unique uncertainties. 2️⃣ New Threats Require New Defenses: Attacks like prompt injection, data poisoning, and model extraction aren't simply variations of traditional threats like malware or SQL injection. These AI-specific risks call for tailored mitigation strategies. 3️⃣ Seeing Beyond Vendor Reports: A SOC 2 report isn't enough anymore. To truly understand AI security, you have to trace data lineage, model origins, and base models. That means gaining much deeper insight into the AI supply chain. 4️⃣ Keep an Eye on AI Models Continuously: The draft stresses ongoing monitoring to catch things like model drift, unexpected behavior, and adversarial manipulation as soon as they happen. For those guiding AI risk and compliance programs, this is a strong nudge to update your frameworks. It also reinforces my conviction that the future belongs to practitioners fluent in both AI's technical landscape and sound governance principles. Although the comment period has closed, I encourage you to review the draft. Understanding this guidance now will help you prepare for the compliance landscape that's taking shape. If you're wrestling with how to handle AI's probabilistic risks, I'd be glad to swap notes on what I'm learning. 🤝 Find the draft here --> https://lnkd.in/gzxHSsQb #AIGovernance #GRC #Cybersecurity #AIrisk #NIST #RiskManagement

  • View profile for Ricky Ray Butler
    Ricky Ray Butler Ricky Ray Butler is an Influencer

    Passionate about AI, RevTech, and Entertainment.

    14,236 followers

    Criminals, Spies, and AI: A New Front in Cyber Warfare The use of AI in cybersecurity is rapidly changing the landscape, creating a new "arms race" between hackers and cybersecurity professionals. Here's a look at how different groups are leveraging this technology. AI and Malicious Actors Bad actors are increasingly incorporating AI into their cyberattacks. For example, Russian hackers have been caught using large language models (LLMs) to create malicious code for phishing campaigns, enabling them to automate the search for sensitive files on a victim's computer. Similarly, cybersecurity firm CrowdStrike has noted a growing trend of advanced adversaries, including Chinese, Russian, and Iranian state-sponsored groups, using AI to their advantage. The technology is making skilled hackers more efficient and effective, particularly in areas like social engineering and creating convincing phishing emails. AI in Cyber Defense The cybersecurity industry is also using AI to combat these threats. Google's security team, for instance, has used its Gemini LLM to hunt for software vulnerabilities. This process has already led to the discovery of at least 20 overlooked bugs in commonly used software, allowing companies to fix them before they can be exploited by criminals. While AI isn't yet finding entirely new types of vulnerabilities, it is significantly speeding up the process of discovering and patching known types of flaws. As Google's VP of Security Engineering, Heather Adkins, said, "It’s the beginning of the beginning." The use of AI in both offensive and defensive cybersecurity is still in its early stages, but it is clear that the technology is making a tangible impact, creating a faster, more complex, and more dynamic environment for everyone involved.

  • View profile for Ismail Orhan, CISSO, CTFI, CCII

    CISO @ASEE | Cybersecurity Leader of the Year 2025 🏆 | HBR Contributor | Published Author | Thought Leader | International Keynote Speaker

    21,901 followers

    Anthropic 's Claude new code security capability didn’t introduce a better scanner — it introduced a new layer in AppSec. For years, we scaled detection: more tools, more alerts, more triage. But security never scaled at the same speed as software. What changed now is simple but structural — security reasoning moved into the developer workflow. AI doesn’t just find patterns, it explains risk, understands intent, and proposes secure alternatives. That shift compresses the distance between detection and remediation, which is where most AppSec friction has always lived. This doesn’t replace the AppSec stack, but it forces consolidation. Lightweight SAST, standalone review workflows, and parts of manual code assessment will increasingly become capabilities rather than products. The value moves upward — toward orchestration, governance, runtime validation, and decision quality. In other words, security is moving from tools to intelligence. From a CISO perspective, this is an operating model change, not a tooling trend. Teams that embed AI as a control layer will scale expertise without scaling headcount at the same rate. Teams that treat it as a developer feature will see incremental gains but miss the structural advantage. Within the next two years, most mature engineering organizations will run an AI reasoning layer inside their SDLC — formally or organically. The real risk is not adopting early. The real risk is adoption without design. AI-native code security doesn’t eliminate AppSec. It reveals which parts were process — and which parts were expertise. #AI #CyberSecurity #AppSec #DevSecOps #CISO #AIsecurity #Claude #SoftwareSecurity

  • View profile for Ulf Larsson

    SEB Group Security CTO

    2,011 followers

    AI is increasingly moving into the control plane of our digital platforms, and that shift has profound implications for cybersecurity. Much of today’s AI discussion focuses on productivity and automation. Important topics, but not the most consequential from a security perspective. What matters more is where AI is being embedded. Increasingly, it is becoming part of the control layers we depend on, including identity, access, analytics, decision support, and security tooling itself. Cybersecurity has traditionally focused on protecting data: where it resides, who can access it, and how it is encrypted. These concerns remain essential, but they are no longer sufficient. AI systems do more than process information. They infer, prioritise, adapt, and influence behaviour. As AI becomes embedded in security-relevant platforms, the core question shifts from where data is stored to who controls system behaviour. From a security perspective, control equals trust. As AI capabilities advance, some long-standing assumptions about static trust need to be re-examined. Systems are updated frequently, operate across platforms and jurisdictions, and increasingly act autonomously. In this environment, trust cannot be implicit. It must be continuously established, verified, and monitored. Protecting customer data therefore means protecting the whole system. Data flows through identities, platforms, APIs, and AI-driven components. When AI influences these flows, security requires transparency, accountability for automated decisions, the ability to intervene, and resilience when dependencies change or fail. At SEB, we approach AI with both ambition and discipline. Our focus is on strong control, continuous verification, and resilience by design. AI does not reduce our responsibility for cybersecurity. It increases it. The real question is not whether AI will change cybersecurity. It already has. The question is whether we are prepared for what that change truly means.

  • View profile for Chris Konrad

    Vice President, Global Cyber | Business Roundtable | Forbes Tech Council | Speaker | Leader | Trusted Executive Advisor

    18,796 followers

    Anthropic’s release of #ClaudeCode Security is being framed as progress in AI-driven defense. It is progress. But it also exposes something more complicated. We are entering a period where AI is reviewing code written by humans and, increasingly, by other AI systems. That changes the development lifecycle in ways most organizations have not absorbed. A model can trace logic, infer intent, and surface flaws that static scanners miss. It can also misunderstand architectural context, misinterpret business logic, or confidently recommend a flawed remediation. The danger is not the model making mistakes. The danger is teams assuming the model does not. Security review has always required judgment. Architecture decisions, compensating controls, operational realities, regulatory constraints—those do not live cleanly inside a code file. They live in conversations, tradeoffs, and institutional memory. AI can analyze artifacts. It does not carry accountability. There is also a broader dynamic unfolding. As models get better at identifying vulnerabilities, adversaries gain similar capabilities. The defensive advantage is temporary and unevenly distributed. Organizations that integrate AI review without governance, auditability, and human ownership are increasing velocity without increasing control. AI code security tools will improve software quality. They will also widen the gap between disciplined programs and everyone else. The question is not whether to adopt them. It is whether you are prepared to own the outcomes they accelerate. https://lnkd.in/guzuva3J #SecureAllTogether

  • As we step into 2025, I wanted to start the year sharing something that’s reshaping cybersecurity in real-time—and will only become more critical as we move forward: countering AI with AI. Attackers are evolving, using AI to reshape malicious scripts into forms that evade detection. These aren’t entirely new threats; they’re familiar tactics, reworked with precision to outsmart defenses. The challenge lies in how subtle these changes can be. AI allows attackers to modify scripts in ways that appear deceptively benign—renaming variables, inserting dead code, or altering structures—while preserving malicious intent. It’s a strategic evolution with significant implications for every industry navigating today’s digital landscape. Yet, the same AI that attackers use to refine their strategies can be their own undoing. By harnessing AI’s capabilities, defenders can anticipate these shifts and ensure the balance tips in their favor. It’s a case of innovation meeting innovation. In this context, adversarial machine learning offers a promising solution. It 𝗿𝗲𝘁𝗿𝗮𝗶𝗻𝘀 𝗱𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹𝘀 𝘄𝗶𝘁𝗵 𝗔𝗜-𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝘀𝗮𝗺𝗽𝗹𝗲𝘀, enabling them to recognize obfuscation tricks used in AI-rewritten malicious scripts. By harnessing adversarial machine learning at Palo Alto Networks, we have significantly improved our Advanced URL Filtering. And the proof is in the pudding—a 10% boost in real world detection rate! We have essentially turned the attackers’ own tools against them, ensuring we stay ahead in an ever-shifting landscape. Tackling challenges like these reminds me why I love this field—it’s fast-paced, deeply complex, and constantly evolving. If you’re as intrigued as I am about how AI is reshaping cybersecurity, I highly recommend Unit 42’s recent article on this fascinating challenge: https://lnkd.in/g-Eg2usB 𝗛𝗲𝗿𝗲’𝘀 𝘁𝗼 𝗸𝗶𝗰𝗸𝗶𝗻𝗴 𝗼𝗳𝗳 𝟮𝟬𝟮𝟱 𝘄𝗶𝘁𝗵 𝗯𝗼𝗹𝗱 𝗶𝗱𝗲𝗮𝘀, 𝗿𝗲𝗹𝗲𝗻𝘁𝗹𝗲𝘀𝘀 𝗶𝗻𝗻𝗼𝘃𝗮𝘁𝗶𝗼𝗻, 𝗮𝗻𝗱 𝘁𝗵𝗲 𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗮𝘁𝗶𝗼𝗻 𝘁𝗼 𝗸𝗲𝗲𝗽 𝗹𝗲𝗮𝗱𝗶𝗻𝗴 𝘁𝗵𝗲 𝗰𝗵𝗮𝗿𝗴𝗲. #HappyNewYear #CounterAIWithAI #AI #Cybersecurity Image Credit: Palo Alto Networks Unit 42

Explore categories