Yesterday, the California Department of Justice, Attorney General's Office (AGO), issued an advisory to provide guidance to consumers and to entities that develop, sell, and use AI about their rights and obligations under California law. The "Legal Advisory on the Application of Existing California Laws to Artificial Intelligence" outlines:
1) Unfair Competition Law (Bus. & Prof. Code, § 17200 et seq.): Prohibits deceptive practices by AI systems, such as false advertising of capabilities and unauthorized use of personal likeness; violations of related state, federal, or local laws are also actionable under this statute.
2) False Advertising Law (Bus. & Prof. Code, § 17500 et seq.): Prohibits misleading advertisements about AI products' capabilities, emphasizing the need for truthfulness in the promotion of AI tools and services.
3) Competition Laws (Bus. & Prof. Code, §§ 16720, 17000 et seq.): Guard against anti-competitive practices facilitated by AI, ensuring that AI does not harm market competition or consumer choice.
4) Civil Rights Laws (Civ. Code, § 51; Gov. Code, § 12900 et seq.): Protect individuals from discrimination by AI in various sectors, including employment and housing.
5) Election Misinformation Prevention Laws (Bus. & Prof. Code, § 17941; Elec. Code, §§ 18320, 20010): Regulate the use of AI in elections, specifically prohibiting the use of AI to mislead voters or impersonate candidates.
6) Data Protection Laws: The California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA) set strict guidelines for transparency and the secure handling of personal and sensitive information. These protections extend to educational and healthcare settings through the Student Online Personal Information Protection Act (SOPIPA) and the Confidentiality of Medical Information Act (CMIA).
In addition, California has enacted several new AI regulations, effective January 1, 2025:
Disclosure Requirements for Businesses:
- AB 2013: Requires AI developers to disclose training data information on their websites by January 1, 2026.
- AB 2905: Mandates disclosure of AI use in telemarketing.
- SB 942: Obligates AI developers to provide tools to identify AI-generated content.
Unauthorized Use of Likeness:
- AB 2602: Ensures contracts for digital replicas include detailed use descriptions and legal representation.
- AB 1836: Bans use of deceased personalities' digital replicas without consent, with hefty fines.
AI in Elections:
- AB 2355: Requires disclosure for AI-altered campaign ads.
- AB 2655: Directs platforms to identify and remove deceptive election content.
Prohibitions on Exploitative AI Uses:
- AB 1831 & SB 1381: Expand prohibitions on AI-generated child sexual abuse material.
- SB 926: Extends criminal penalties for creating nonconsensual pornography using deepfake technology.
AI in Healthcare:
- SB 1120: Requires licensed physician oversight of AI healthcare decisions.
Compliance Requirements for AI Developers
Summary
Compliance requirements for AI developers refer to the legal, ethical, and regulatory standards that must be followed when designing, deploying, and maintaining artificial intelligence systems. These requirements are intended to ensure that AI systems operate safely and transparently, respect privacy, and treat people fairly; the specific obligations vary by region and application.
- Prioritize transparency: Clearly document your AI models’ purpose, data sources, training methods, and any measures taken to minimize bias, making this information accessible for scrutiny.
- Protect data privacy: Obtain user consent, anonymize personal information, and comply with local and international data protection laws when collecting and handling data for AI systems.
- Assess and manage risk: Identify potential risks in your AI’s functions, implement safeguards for high-risk applications, and maintain ongoing monitoring to ensure compliance and security.
𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 & 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐋𝐚𝐰𝐬 𝐟𝐨𝐫 𝐆𝐞𝐧𝐀𝐈 𝐀𝐩𝐩𝐬
Building GenAI apps for a global audience? Understanding regional data protection and AI laws is not optional; it is foundational. Here is what you need to know:
1. UNDERSTANDING GLOBAL REGULATORY VARIANCE
Key regulations by region:
• EU AI Act: Risk-based obligations for AI systems, plus transparency requirements for certain use cases
• GDPR (EU): Transparency & consent
• DPDP (India): Digital personal data protection
• PIPL (China): Strict data localization
• CCPA (California): Data access & opt-out
• LGPD (Brazil): Local compliance rules
2. IMPACT OF THESE REGULATIONS ON YOUR AI TRAINING DATA
To build compliant GenAI apps, ensure that data used for training AI models follows the regional rules across the whole pipeline: Data Collection → Processing → Model Training → Deployment
Three core requirements (a minimal sketch follows after this post):
a. User Consent: Obtain explicit consent for data collection and use
b. Data Minimization: Collect only the data necessary for the intended purpose
c. Anonymization: Remove personally identifiable information from training data
3. MITIGATING AI ETHICS AND BIAS RISKS
AI systems must be fair and ethical, particularly in high-risk areas:
a. Fairness: Ensure your AI models don't discriminate, especially in areas like recruitment or finance.
b. Bias Mitigation: Regularly test and adjust your models to reduce bias in the outputs.
4. ENSURING TRANSPARENCY IN AI MODEL DEVELOPMENT
Transparency is a cornerstone of compliance, especially when your AI impacts users directly:
a. Explainability: Document how your models produce their outputs so AI-assisted decisions can be explained to affected users.
b. Consent Management: Collect, track, and manage user consent.
c. Privacy by Design: Embed privacy into every system layer, protecting data in transit and at rest.
5. MANAGING CROSS-BORDER DATA FLOW
GenAI apps often rely on data from various regions, so it's critical to understand data sovereignty laws:
a. Data Sovereignty: Follow local laws on where data is stored and processed.
b. Data Transfer Agreements: Use SCCs or BCRs for compliant cross-border transfers.
THE COMPLIANCE CHECKLIST
Before launching GenAI globally, verify:
1. Regional Compliance:
• GDPR for EU? (Transparency & Consent)
• DPDP for India? (Data Protection)
• PIPL for China? (Data Localization)
• CCPA for California? (Access & Opt-Out)
• LGPD for Brazil? (Local Rules)
2. Training Data:
• User consent obtained?
• Data minimized?
• PII anonymized?
3. Ethics & Bias:
• Fairness tested?
• Bias mitigation in place?
4. Transparency:
• Explainability documented?
• Consent management system?
• Privacy by design?
5. Cross-Border:
• Data sovereignty compliance?
• Transfer agreements (SCCs/BCRs)?
Each region has different requirements. Build for the strictest, adapt for the rest. Which regulation applies to your GenAI app?
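A minimal sketch of the consent → minimization → anonymization pipeline from point 2 above. The field names, allowlist, and regexes are illustrative assumptions, not a production PII scrubber; real systems typically use NER-based tooling rather than plain patterns.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

ALLOWED_FIELDS = {"text", "language"}  # data minimization: drop everything else

def anonymize(text: str) -> str:
    """Replace obvious PII patterns with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def prepare_training_record(record: dict) -> dict | None:
    # 1. User consent: exclude records without an explicit opt-in flag.
    if not record.get("consent_given"):
        return None
    # 2. Data minimization: keep only fields needed for the training purpose.
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # 3. Anonymization: scrub PII patterns from free text.
    if "text" in minimized:
        minimized["text"] = anonymize(minimized["text"])
    return minimized

raw = {"text": "Contact me at jane@example.com", "language": "en",
       "user_id": 42, "consent_given": True}
print(prepare_training_record(raw))
# {'text': 'Contact me at [EMAIL]', 'language': 'en'}
```

Note how the consent check happens before any processing, and identifiers like `user_id` never reach the training set at all: minimization by construction, not by later cleanup.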
-
Yesterday, the AI Office published the third draft of the General-Purpose AI Code of Practice, a key regulatory instrument for AI providers seeking to align with the EU AI Act. Developed with input from 1,000 stakeholders, the draft refines previous versions by clarifying compliance requirements and introducing a structured approach to regulation. GPAI providers must meet baseline obligations on transparency and copyright compliance, while models classified as having systemic risk under Article 51 of the AI Act face additional commitments. The final version, expected in May 2025, aims to facilitate compliance while ensuring AI models adhere to safety, security, and accountability standards.
The Code introduces the Model Documentation Form, requiring AI providers to disclose key details such as model architecture, parameter size, training methodologies, and data sources. Transparency obligations include specifying the provenance of training data, documenting measures to mitigate bias, and reporting compute power and energy consumption. GPAI providers must also outline their models' intended uses, with additional requirements for systemic-risk models, including adversarial testing and evaluation strategies. Documentation must be retained for twelve months after a model is retired.
Copyright compliance is mandatory for all providers, including open-source AI. GPAI providers must establish formal copyright policies and comply with strict data collection rules. Web crawlers cannot bypass paywalls, access piracy sites, or ignore the Robot Exclusion Protocol (a minimal sketch follows after this post). The Code also requires providers to prevent AI-generated copyright infringement, mandate compliance in acceptable use policies, and implement mechanisms for rightsholders to submit copyright complaints. Providers must maintain a point of contact for copyright inquiries and ensure their policies are transparent.
For AI models with systemic risk, the Code introduces a Safety and Security Framework, aligning with the AI Act's high-risk requirements. Providers must assess risks in areas such as cyber threats, manipulation, and autonomous AI behaviours. They must define risk acceptance criteria, anticipate risk escalations, and conduct assessments at key development milestones. If risks are identified, development may need to be paused while safeguards are implemented. GPAI providers must introduce technical safeguards, including input filtering, API access controls, and security measures meeting at least the RAND SL3 standard. From 2 November 2025, systemic-risk models must undergo external risk assessments before release. Providers must maintain a Safety and Security Model Report, report AI-related incidents within strict timeframes, and implement governance structures ensuring responsibility at all levels. Whistleblower protections are also required.
With the final version expected in May 2025, AI providers have a short window to prepare before the AI Act's GPAI obligations take effect in August.
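On the crawler rule: honouring the Robot Exclusion Protocol is mechanically simple, which is why the Code can make it a hard requirement. A minimal sketch using Python's standard library; the crawler name and URL are illustrative assumptions, not taken from the Code.

```python
from urllib import robotparser
from urllib.parse import urlsplit

USER_AGENT = "example-gpai-crawler"  # hypothetical crawler name

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler to fetch."""
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # downloads and parses robots.txt
    return rp.can_fetch(USER_AGENT, url)

if may_fetch("https://example.com/articles/some-page"):
    pass  # proceed to download; otherwise the URL must be skipped entirely
```

The other collection rules (no paywall circumvention, no piracy sites) need allow/deny lists and auditing on top of this check, but robots.txt compliance is the baseline gate every fetch should pass through.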
-
A new paper dropped today that deserves serious attention from anyone building or deploying AI agents in Europe. Nannini, Smith, Tiulkanov and colleagues have produced the first systematic regulatory mapping for AI agent providers under EU law. Not a policy commentary. An actual compliance architecture, integrating the draft harmonised standards under M/613, the GPAI Code of Practice, the CRA standards programme, and the Digital Omnibus proposals.
The core insight is deceptively simple: the regulatory trigger for an AI agent is determined by what the agent does externally, not by its internal architecture. The same LLM with tool-calling generates radically different compliance obligations depending on deployment.
→ Screen CVs? Annex III high-risk, full Chapter III.
→ Summarise meeting notes? Article 50 transparency only.
The technology is identical. The regulatory consequence diverges completely.
The paper identifies four agent-specific compliance challenges that current frameworks address in principle but not yet in practice.
1️⃣ 𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆: a system prompt telling the model "do not delete files" is not a security control. Article 15(4) compliance requires privilege enforcement at the API level, outside the generative model (a minimal sketch follows after this post).
2️⃣ 𝗛𝘂𝗺𝗮𝗻 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁: LLMs trained via RL may have learned to evade oversight as an emergent strategy. Oversight must be external constraints, not internal instructions.
3️⃣ 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆: when an agent sends an email, the recipient is an affected person who may not know they are interacting with AI.
4️⃣ 𝗥𝘂𝗻𝘁𝗶𝗺𝗲 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿𝗮𝗹 𝗱𝗿𝗶𝗳𝘁: agents that accumulate memory or discover novel tool-use patterns may leave their conformity assessment boundaries undetected.
The paper's conclusion is stark: high-risk agentic systems with untraceable behavioral drift cannot currently be placed on the EU market. Not a future risk, but the current legal position.
For anyone building AI governance infrastructure, this confirms what we have been arguing at Modulos: compliance for agentic AI must be continuous and architectural, not periodic and checklist-based. The provider's foundational task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons: that inventory is the regulatory map.
👉 https://lnkd.in/e_zk3R6B
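To make point 1️⃣ concrete: privilege enforcement outside the model means the tool-execution layer, not the prompt, decides what the agent may do. A minimal sketch under assumed names (the tool registry and permission model here are illustrative, not from the paper).

```python
# Per-deployment allowlist enforced in deterministic code the model cannot
# override, regardless of what the prompt or the model "decides".
ALLOWED_TOOLS = {"read_file", "send_email"}

class PrivilegeError(PermissionError):
    pass

def execute_tool_call(tool: str, args: dict, registry: dict):
    """Gate every model-requested action at the API boundary."""
    if tool not in ALLOWED_TOOLS:
        raise PrivilegeError(f"tool '{tool}' is not permitted in this deployment")
    return registry[tool](**args)

registry = {
    "read_file": lambda path: open(path).read(),
    "delete_file": lambda path: None,  # registered but never reachable
}

try:
    execute_tool_call("delete_file", {"path": "/tmp/x"}, registry)
except PrivilegeError as e:
    print(e)  # the call is refused before any side effect occurs
```

The design point is that the allowlist lives in the harness, so no jailbreak or emergent strategy inside the generative model can widen the agent's privileges.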
-
Europe just made AI governance non-negotiable. prEN 18286 (EU AI Act QMS) is out; once cited, it grants presumption of conformity.
Reality check: ISO/IEC 42001 ≠ EU AI Act compliance.
Translation: for high-risk AI providers, you'll need evidence, not promises: design controls, data governance, risk management, and post-market monitoring that auditors can verify.
Do these 5 moves now:
- Map every AI system to EU AI Act risk tiers.
- Implement controls aligned to the new harmonized standards.
- Show your work: tech docs, eval evidence, audit trails (a minimal sketch follows after this post).
- Challenge vendors: model cards, data lineage, red-team results.
- Monitor in production like safety-critical software.
Simplified, your fast path: risk-map → standardize controls → prove with evidence → vendor due diligence → live monitoring. Simple to say, hard to fake.
If you're "waiting to see," you're already late. Presumption of conformity will favor the prepared.
#EUAIAct #AICompliance #AIStandards #CENCENELEC #ISO42001 #GPAI #ResponsibleAI #AIGovernance #RiskManagement
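What "show your work" can look like in code: a minimal sketch of an append-only, tamper-evident audit trail for evaluation evidence. The JSON-lines format and field names are my assumptions for illustration; prEN 18286 does not prescribe this schema.

```python
import hashlib
import json
import time

def log_eval(path: str, system_id: str, metric: str, value: float, dataset: str):
    """Append one evaluation record, with a digest for tamper-evidence."""
    entry = {
        "ts": time.time(),
        "system_id": system_id,  # which AI system in your inventory
        "metric": metric,        # e.g. accuracy, false-positive rate
        "value": value,
        "dataset": dataset,      # evidence of what was measured, on what data
    }
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_eval("audit_trail.jsonl", "cv-screener-v2", "false_positive_rate",
         0.031, "holdout-2025-q1")
```

Auditors can then verify that each record existed at claim time and was not edited afterwards, which is the difference between evidence and promises.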
-
The EDPS - European Data Protection Supervisor has issued a new "Guidance for Risk Management of Artificial Intelligence Systems." The document provides a framework for EU institutions acting as data controllers to identify and mitigate data protection risks arising from the development, procurement, and deployment of AI systems that process personal data, focusing on fairness, accuracy, data minimization, security, and data subjects' rights. Based on ISO 31000:2018, the guidance structures the process into risk identification, analysis, evaluation, and treatment, emphasizing tailored assessments for each AI use case (a minimal risk-register sketch follows after this post).
Some highlights and recommendations include:
- Accountability: AI systems must be designed with clear documentation of risk decisions, technical justifications, and evidence of compliance across all lifecycle phases. Controllers are responsible for demonstrating that AI risks are identified, monitored, and mitigated.
- Explainability: Models must be interpretable by design, with outputs traceable to underlying logic and datasets. Explainability is essential for individuals to understand AI-assisted decisions and for authorities to assess compliance.
- Fairness and bias control: Organizations should identify and address risks of discrimination or unfair treatment in model training, testing, and deployment. This includes curating balanced datasets, defining fairness metrics, and auditing results regularly.
- Accuracy and data quality: AI must rely on trustworthy, updated, and relevant data.
- Data minimization: The use of personal data in AI should be limited to what is strictly necessary. Synthetic, anonymized, or aggregated data should be preferred wherever feasible.
- Security and resilience: AI systems should be secured against data leakage, model inversion, prompt injection, and other attacks that could compromise personal data. Regular testing and red teaming are recommended.
- Human oversight: Meaningful human involvement must be ensured in decision-making processes, especially where AI systems may significantly affect individuals' rights. Oversight mechanisms should be explicit, documented, and operational.
- Continuous monitoring: Risk management is a recurring obligation; institutions must review, test, and update controls to address changes in system performance, data quality, or threat exposure.
- Procurement and third-party management: Contracts involving AI tools or services should include explicit privacy and security obligations, audit rights, and evidence of upstream data protection compliance.
The guidance establishes a practical benchmark for embedding data protection into AI governance, emphasizing transparency, proportionality, and accountability as the foundation of lawful and trustworthy AI systems.
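A minimal sketch of a risk-register entry mirroring the ISO 31000 cycle the guidance follows (identification, analysis, evaluation, treatment). The fields, scoring scale, and threshold are illustrative assumptions, not values prescribed by the EDPS document.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    # Identification: what can go wrong, and which rights are affected
    description: str
    affected_rights: list[str]
    # Analysis: likelihood and severity on a simple 1-5 scale
    likelihood: int
    severity: int
    # Treatment: mitigating controls and an accountable owner
    controls: list[str] = field(default_factory=list)
    owner: str = "unassigned"

    def evaluate(self, threshold: int = 12) -> str:
        """Evaluation: compare analysed risk against acceptance criteria."""
        score = self.likelihood * self.severity
        return "treat" if score >= threshold else "accept"

entry = RiskEntry(
    description="Model inversion could expose training subjects' data",
    affected_rights=["data protection", "privacy"],
    likelihood=3,
    severity=5,
    controls=["output filtering", "regular red teaming"],
    owner="DPO",
)
print(entry.evaluate())  # 'treat' -> controls must be implemented and monitored
```

Because the guidance frames risk management as recurring, entries like this would be re-scored whenever performance, data quality, or threat exposure changes.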
-
⚠️ Privacy Risks in AI Management: Lessons from Italy's DeepSeek Ban ⚠️
Italy's recent ban on #DeepSeek over privacy concerns underscores the need for organizations to integrate stronger data protection measures into their AI Management System (#AIMS), AI Impact Assessment (#AIIA), and AI Risk Assessment (#AIRA). Ensuring compliance with #ISO42001, #ISO42005 (DIS), #ISO23894, and #ISO27701 (DIS) guidelines is now more critical than ever.
1. Strengthening AI Management Systems (AIMS) with Privacy Controls
🔑 Key Considerations:
🔸 ISO 42001 Clause 6.1.2 (AI Risk Assessment): Organizations must integrate privacy risk evaluations into their AI management framework.
🔸 ISO 42001 Clause 6.1.4 (AI System Impact Assessment): Requires assessing AI system risks, including personal data exposure and third-party data handling.
🔸 ISO 27701 Clause 5.2 (Privacy Policy): Calls for explicit privacy commitments in AI policies to ensure alignment with global data protection laws.
🪛 Implementation Example: Establish an AI Data Protection Policy that incorporates ISO 27701 guidelines and explicitly defines how AI models handle user data.
2. Enhancing AI Impact Assessments (AIIA) to Address Privacy Risks
🔑 Key Considerations:
🔸 ISO 42005 Clause 4.7 (Sensitive Use & Impact Thresholds): Mandates defining thresholds for AI systems handling personal data.
🔸 ISO 42005 Clause 5.8 (Potential AI System Harms & Benefits): Identifies risks of data misuse, profiling, and unauthorized access.
🔸 ISO 27701 Clause A.1.2.6 (Privacy Impact Assessment): Requires documenting how AI systems process personally identifiable information (#PII).
🪛 Implementation Example: Conduct a Privacy Impact Assessment (#PIA) during AI system design to evaluate data collection, retention policies, and user consent mechanisms.
3. Integrating AI Risk Assessments (AIRA) to Mitigate Regulatory Exposure
🔑 Key Considerations:
🔸 ISO 23894 Clause 6.4.2 (Risk Identification): Calls for AI models to identify and mitigate privacy risks tied to automated decision-making.
🔸 ISO 23894 Clause 6.4.4 (Risk Evaluation): Evaluates the consequences of noncompliance with regulations like #GDPR.
🔸 ISO 27701 Clause A.1.3.7 (Access, Correction, & Erasure): Ensures AI systems respect user rights to modify or delete their data (a minimal erasure sketch follows after this post).
🪛 Implementation Example: Establish compliance audits that review AI data handling practices against evolving regulatory standards.
➡️ Final Thoughts: Governance Can't Wait
The DeepSeek ban is a clear warning that privacy safeguards in AIMS, AIIA, and AIRA aren't optional. They're essential for regulatory compliance, stakeholder trust, and business resilience.
🔑 Key actions:
◻️ Adopt AI privacy and governance frameworks (ISO 42001 & 27701).
◻️ Conduct AI impact assessments to preempt regulatory concerns (ISO 42005).
◻️ Align risk assessments with global privacy laws (ISO 23894 & 27701).
Privacy-first AI shouldn't be seen as just a cost of doing business; it's actually your new competitive advantage.
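The access/correction/erasure clause cited above implies AI data pipelines need an operational deletion path, not just a policy statement. A minimal sketch under assumed interfaces (the in-memory store here stands in for whatever training-data system you actually run):

```python
class TrainingDataStore:
    """Toy stand-in for a training-data store keyed by data subject."""

    def __init__(self):
        self._records: dict[str, dict] = {}

    def add(self, user_id: str, record: dict):
        self._records[user_id] = record

    def erase(self, user_id: str) -> bool:
        """Remove a subject's data; report whether anything was deleted."""
        return self._records.pop(user_id, None) is not None

store = TrainingDataStore()
store.add("user-123", {"text": "example utterance"})
assert store.erase("user-123")  # the erasure right is honoured end to end
```

In practice you must also track whether the erased record already entered a trained model, since that can trigger retraining or unlearning obligations beyond deleting the stored copy.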
-
On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will shape how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. For U.S. companies that operate in the E.U. or work with E.U. partners, this new regulatory landscape demands careful attention. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.
🔍 Comprehensive AI Audit: Begin by thoroughly auditing your AI systems to identify those under the AI Act's jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply.
🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to accurately classify each AI application to determine the necessary compliance measures; those deemed high-risk require more stringent controls (a minimal classification sketch follows after this post).
📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.
👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.
🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.
#AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
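A minimal sketch of the audit-then-classify step: an inventory mapping each system to one of the Act's four risk tiers. The example systems and their classifications are illustrative assumptions only; real classification turns on the Act's annexes and official guidance, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"            # e.g. transparency duties for chatbots
    HIGH = "high"                  # e.g. Annex III uses such as hiring
    UNACCEPTABLE = "unacceptable"  # prohibited practices

# Inventory from the AI audit, with each application's assigned tier.
inventory = {
    "spam-filter": RiskTier.MINIMAL,
    "customer-chatbot": RiskTier.LIMITED,
    "cv-screening": RiskTier.HIGH,
}

for system, tier in inventory.items():
    if tier is RiskTier.HIGH:
        print(f"{system}: conformity assessment, logging, human oversight required")
    elif tier is RiskTier.UNACCEPTABLE:
        print(f"{system}: must not be placed on the EU market")
```

The value of even a toy inventory like this is that every downstream compliance obligation keys off the tier, so the classification must be recorded per application, not per technology.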
-
🚀 Today I'm proud to share the first paper from The Policy Update community: "The Colorado AI Act: A Compliance Handshake Between Developers and Deployers."
The Colorado AI Act (SB 24-205) is the first comprehensive, enforceable U.S. state law on high-risk AI systems. It takes effect February 1, 2026, and sets clear obligations for both developers and deployers to prevent algorithmic discrimination.
This paper, co-authored by an extraordinary group of practitioners and thinkers across law, auditing, design, strategy, and governance, offers:
⚖️ A breakdown of legal duties for developers and deployers
📑 Practical compliance checklists and templates
🤝 A "compliance handshake" model that shows how these obligations fit together
📈 Insight into why strong AI governance is not just regulation, but a driver of value creation
I started The Policy Update as an outlet for "continuous learning in the age of AI," but found something bigger: an amazing interdisciplinary community of people committed to advancing responsible AI. This collaboration is proof of what happens when diverse expertise comes together with shared purpose.
Read the full white paper, which is linked in the comments.
#ColoradoAIAct #AIRegulation #ResponsibleAI #AIGovernance #AICompliance #AIandLaw
Sheila Leunig, Edward Feldman, Ezra Schwartz, Nadine Dammaschk, Dr. Cari Miller, Patrick Sullivan, Abhinav Mittal, Jovana Davidovic
-
New work from Philipp Hacker: "Together with Robert Kilian and my colleague Jana Costas, I set out to ask: how can we make the AI Act simpler - not weaker - while keeping its protective core intact? The answer lies in clarification and disentanglement, I believe.
Our new white paper rests on extensive empirical research: 15 in-depth interviews and a focus group with leading European companies and civil society organizations. Their message is remarkably consistent. The problem is not the AI Act itself, but its entanglement with other EU laws, from the Medical Device Regulation and the Machinery Regulation to financial services and the GDPR.
We translate these insights into ten evidence-based reform proposals:
– Clarify prohibitions by explicitly banning ex-post biometric identification in public spaces, with exceptions for serious crime, as for real-time RBI.
– Extend value chain cooperation duties to include GPAI model providers so that downstream developers - often SMEs - can compel cooperation and access necessary compliance information from upstream providers. This is truly key for AI use in high-risk settings.
– Adopt a more sectoral approach by:
• shifting credit, insurance, and employment use cases from Annex III into sector-specific regimes that already regulate these areas in depth
• moving sectors like medical and machinery from Annex I A to B, but only with guarantees that updates are indeed made respecting the principles of the AI Act.
– Reform technical requirements by replacing the "accuracy" metric with "performance," and expanding Article 10(5) to all AI systems and GPAI models so bias testing becomes legally feasible.
– Simplify deployer duties under Article 26 to avoid redundant monitoring obligations already covered by tort law.
– Ensure fairness and consistency in the Fundamental Rights Impact Assessment by applying it either to all private deployers or to none.
– Close the GPAI modification gap through a new "Article 25 for GPAI" that defines when fine-tuning and modification create new provider responsibilities, building on the AIO Guidelines.
– Clarify open-source rules through a uniform definition, basic transparency obligations, and proportional exemptions for SMEs.
– Extend implementation timelines for high-risk systems or install an SME enforcement moratorium.
– Support SMEs (and CSOs) with targeted financial aid, e.g. for participation in standardization, and technical assistance.
The white paper is available now, and our full report will follow in November 2025. Many thanks to Bertelsmann Stiftung and Asena Soydaş for the generous support!"