
xAI Corp. et al
v. Li, No. 3:2025cv07292
The legal case of xAI versus the State of California under that State’s AI Training Data Transparency Law (AB2013) continues a recent run of AI companies seemingly voting against themselves with respect to the adoption of the AI sector’s own products.
Where do these cases, in conjunction with the EU AI Act and other emerging AI laws, leave us in terms of managing IT/cyber and regulatory compliance risks?
Obfuscation, plausible deniability, and trade secret protection as strategies for corporate AI risk & compliance teams may now have been replaced by mandated human-in-the-loop oversight, defined guardrails, & high-level disclosures.
The riposte to such a statement may be that xAI’s challenge to California’s AI Training Data Transparency Law (AB2013) only applies to that specific State. However, when reviewed within the broader context of Anthropic’s battle against the US Federal Government over how its products are used, there are broader implications to be taken into consideration when addressing AI implementation.
Although the EU AI Act, as a regulation, applies directly to every EU member state (unlike directives, which require transposition into national law), each EU country must still: designate its national authority, define fines/legal consequences, & create regulatory sandboxes. By Q1 2026 (at the time of this post), only 50% had completed these tasks, not due to a lack of effort, motivation, or resources, but simply due to the complexity attached to AI across all of its domains: technological, legal, operational, moral/ethical, etc.
By contrast, in the US in 2025, 1,208 AI-related bills were introduced across all 50 states (145 enacted into law, per the NCSL), thereby highlighting current concerns as to AI’s potential for severe harm.
The takeaways from these events appear to be that high-risk AI systems, as defined by each country, fall under requirements to disclose, per the xAI case & the EU AI Act:
- General sources & characteristics of training data
- Relationships between datasets & the AI system’s intended purposes
- Dataset size, by range/estimate
- Inclusion of copyrighted/licensed material within the data
- Inclusion of personal/aggregated consumer information
- Use of synthetic data
- Prohibition of use for certain purposes
- AI literacy for all employees within AI-use work functions
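The disclosure items above can be treated as a structured, auditable record rather than free-form prose. A minimal sketch in Python follows; the field names and example values are illustrative assumptions, not a statutory schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class TrainingDataDisclosure:
    """Illustrative AB2013/EU AI Act-style disclosure record.
    Field names are assumptions, not a statutory schema."""
    data_sources: list[str]            # general sources & characteristics
    intended_purposes: list[str]       # dataset-to-intended-purpose relationships
    dataset_size_estimate: str         # size by range/estimate
    contains_copyrighted_material: bool
    contains_personal_information: bool
    uses_synthetic_data: bool
    prohibited_purposes: list[str] = field(default_factory=list)

# Hypothetical example entry for one model's training corpus
disclosure = TrainingDataDisclosure(
    data_sources=["public web crawl", "licensed news archive"],
    intended_purposes=["general-purpose text generation"],
    dataset_size_estimate="1-10 TB",
    contains_copyrighted_material=True,
    contains_personal_information=False,
    uses_synthetic_data=True,
)
print(asdict(disclosure))  # serialisable form for audit evidence
```

Keeping the record as data, rather than narrative, makes it straightforward to version, diff, and hand to auditors alongside the SBOM.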
From the software, embedded device, & OT security/compliance perspective, this layers new complexity on top of existing complexity, even with just 3 basic questions being posed:
- How do you define dependencies & relationships within the SBOM, where AI is integrated at multiple points, architecturally & operationally?
- How do you develop accurate environment metadata?
- How are MCP (Model Context Protocol) connections mapped to vulnerabilities?
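One way to approach the first of these questions is to treat each AI model as a first-class SBOM component with its own dependency edges, so that datasets and runtimes fall inside audit scope. The sketch below is loosely modelled on the CycloneDX components/dependencies layout (CycloneDX 1.5+ does define a `machine-learning-model` component type, but the names, refs, and simplified structure here are assumptions for illustration):

```python
# Illustrative, simplified SBOM fragment: an application embedding an AI
# model, which in turn depends on a third-party dataset and a runtime.
# Component names and bom-refs are hypothetical.
sbom = {
    "components": [
        {"bom-ref": "app-1", "type": "application", "name": "edge-controller"},
        {"bom-ref": "model-1", "type": "machine-learning-model", "name": "anomaly-detector"},
        {"bom-ref": "data-1", "type": "data", "name": "vendor-telemetry-dataset"},
        {"bom-ref": "lib-1", "type": "library", "name": "inference-runtime"},
    ],
    "dependencies": [
        {"ref": "app-1", "dependsOn": ["model-1"]},
        {"ref": "model-1", "dependsOn": ["data-1", "lib-1"]},
    ],
}

def transitive_deps(sbom: dict, ref: str) -> set[str]:
    """Walk the dependency graph so the audit scope includes the model's
    datasets and runtimes, not just conventional libraries."""
    edges = {d["ref"]: d.get("dependsOn", []) for d in sbom["dependencies"]}
    seen, stack = set(), [ref]
    while stack:
        for child in edges.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

print(sorted(transitive_deps(sbom, "app-1")))  # → ['data-1', 'lib-1', 'model-1']
```

The point of the walk is that a vulnerability or disclosure obligation attached to the dataset or runtime surfaces whenever the application is assessed, rather than staying hidden behind the model boundary.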
Without the ability to comprehend the risks created by the various types of AI models, organisations are unable to provide Regulators with auditable evidence of their intent to comply with whichever laws apply to the entity.
Without commencing with the fundamentals of AI audits, SBOM analysis, 3rd-party AI model reviews, and the definition of appropriate controls and guardrails, non-compliance risks become an inevitability, rather than a possibility.
For more information on how Quantar can assist your organisation in managing AI security and compliance risks, contact us: