EU AI Act Compliance and AI Transparency
Intradiegetic uses artificial intelligence (AI) tools in a limited and carefully governed way to support our communications, advisory and content services. Our aim is to align with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) and its risk‑based, role‑based approach to AI governance.
How we use AI
- We act primarily as a deployer of third‑party general‑purpose AI (GPAI) tools (for example, large language models used for drafting, summarisation and ideation). We do not currently develop or place our own standalone AI models on the market.
- AI output is treated as a starting point only: all client‑facing deliverables are reviewed, edited and approved by a human before they are shared.
- Where we use AI in our internal operations (research assistance, drafting, organisation and planning), this is to augment—not replace—human judgment.
If our role changes (for example, if we begin to offer AI‑enabled products or high‑risk AI systems), we will update this page and our internal controls accordingly.
Your interactions with AI
In some cases, you may interact with AI‑assisted interfaces or receive content that has been partially generated or transformed by AI. In line with the transparency principles of the EU AI Act, we will:
- Inform you when you are directly interacting with an AI system, unless this is already obvious from the context.
- Clearly mark AI‑generated or significantly AI‑modified content where required, using human‑readable notices and, where technically feasible, machine‑readable markers.
- Explain in plain language the capabilities and limitations of any AI features we provide to you, especially where they may influence your decisions.
You always remain free to request that your project, deliverable or engagement be handled without any AI tools beyond basic infrastructure (for example, spam filtering or standard productivity software).
Data protection, risk and human oversight
AI use at Intradiegetic is designed to complement our existing data protection, confidentiality and professional standards.
- We avoid feeding special categories of personal data or confidential client information into third‑party AI tools unless this is contractually permitted, technically safeguarded and strictly necessary for the task.
- We select AI providers that commit to compliance with the EU AI Act, including applicable transparency, documentation and copyright‑related obligations for general‑purpose AI.
- We maintain human oversight over AI‑supported work products and do not delegate final decisions or professional judgments to AI systems.
Where our work touches higher‑risk use‑cases under the EU AI Act (for example, certain employment, credit, or eligibility‑related decisions taken by our clients), our role is advisory. Clients remain responsible for classifying, governing and documenting their own AI systems in line with the AI Act.
Support for SMEs and evolving compliance
As an SME, Intradiegetic follows the guidance and simplified measures provided for small businesses under the EU AI Act, while recognising that obligations flow from our role and the risk level of each AI use‑case, not from company size alone. We are committed to progressively enhancing our AI governance, documentation and training as further regulatory guidance, codes of practice and sector standards emerge.
This page is intended to provide high‑level, non‑exhaustive information about how we approach AI and EU AI Act alignment. It does not constitute legal advice, and it does not replace the specific terms of any contract we enter into with you.
Contact
If you have questions about how we use AI or how the EU AI Act may affect your engagement with Intradiegetic, or if you wish to exercise rights available to you under applicable law, you can contact us at:
