SPECIALIST CAPABILITIES
Three disciplines. One team.
EU AI Act readiness. AI red teaming. AI security and governance. Folded into every CBRX engagement — and bookable on their own.
EU AI ACT READINESS
Know what's high-risk. Know what's missing. Know what to do.
A 14-day, evidence-first assessment. Identifies your EU AI Act exposure and ships a board-readable gap register — with named owners and clear evidence requirements.
Format
14 business days. Fixed scope. Senior-led readout on day 14.
30-minute intro.
What it covers
Risk classification
Each AI use case mapped to its likely risk class under the EU AI Act. No hand-waving — the obligations follow from the class.
Obligations map
Which articles apply to you. What evidence each one demands. What you already have, what's missing.
Gap register + remediation blueprint
Board-readable register across governance, documentation, oversight, logging, and security controls. Each gap costed and owned.
AI RED TEAMING
We attack your LLMs, agents and RAG before someone else does.
Offensive testing for LLM apps, agents, RAG systems and AI supply chain. We behave like an attacker — so you see how your AI actually fails.
What you get
- Red team report with attack paths, impact, and likelihood.
- Reproducible attack scenarios.
- Prioritised fixes for prompts, access control, logging, and guardrails.
- Secure architecture recommendations.
Best for
- Companies with live LLM apps or pre-production pilots.
- CISOs demonstrating AI security due diligence.
- AI and product teams shipping fast without blind risk.
30-minute intro.
What we test
Prompt injection and jailbreaking
Override instructions, leak secrets, bypass policies.
Data exfiltration and privacy failures
Pulling sensitive or proprietary data through model queries.
Agentic systems and tool usage
Manipulating agents into harmful actions through their own tools.
RAG systems and knowledge bases
Retrieval poisoning. Document manipulation. Index abuse.
AI supply chain weaknesses
Third-party models, plugins, gateways, APIs and integrations.
AI SECURITY & GOVERNANCE
Fractional AI security lead. Governance. Incident response.
Ongoing partner for the teams that need a specialist layer. CBRX acts as your fractional AI security and compliance lead — or augments the security, engineering and compliance teams already in place.
Best for
- Organisations planning multiple AI initiatives in the next 12–24 months.
- CISOs, CTOs and Heads of AI who need a specialist partner for governance and security.
- Companies deploying AI in rights-impacting or regulated workflows — HR, finance, healthcare, identity, fraud.
- SaaS vendors selling AI features into enterprise customers who demand proof of controls.
30-minute intro.
Three workstreams
Governance and compliance
- AI policies, decision processes, role ownership.
- AI system inventory and risk classification.
- EU AI Act, GDPR, NIS2 and DORA alignment.
- Model and data lifecycle governance — approval, monitoring, retirement.
Secure AI and custom systems
- Architecture for LLM apps, agents and RAG systems.
- Threat modelling for AI workflows and integrations.
- Guardrails, logging, monitoring, abuse detection.
- Vendor selection for model gateways, vector DBs and platforms.
AI incident response
- Playbooks for prompt injection, leakage and abuse.
- Integration with SOC and IR workflows.
- Incident investigations.
Engagement models
Project
Fixed outcome. Stand up governance and controls for the first three AI systems. Build the EU AI Act evidence structure. Time-boxed.
Retainer
Monthly governance reviews. Change approvals for new AI use cases. Security reviews for AI releases. Continuous evidence improvement.
Partner
Co-delivery alongside MSSPs, SIs and internal teams. CBRX provides the specialist layer.
ONE NEXT STEP
20 minutes. No deck.
Pick the discipline. We'll tell you what's worth doing and what isn't, on the call.
Or email sales@cbrx.ai
