Drift monitoring, model inventories, access reviews, incident processes and cost optimisation — aligned to the AU AI Safety Standard and ISO/IEC 42001. Light-touch governance that operators will actually follow.
Dynamis Advisory — Governance provides decision-grade counsel on AI governance, risk and optimisation. Model inventories, lineage, drift monitoring (WhyLabs, Evidently, MLflow), access policy, incident response and cost optimisation. Aligned to the Australian Government Voluntary AI Safety Standard, ISO/IEC 42001, ISO/IEC 23894 and the NIST AI Risk Management Framework. Documentation prepared so external ISO certifying bodies (BSI, JAS-ANZ-accredited) can audit cleanly.
Workstreams
The work that keeps a model in production through audits, incidents, vendor changes and the inevitable cost spike. We write governance that engineers will read, not paperwork that lawyers file away.
Model inventory & lineage
A written register of every model in production, every dataset behind it, every owner, and every external dependency. The starting point for any defensible governance posture — and the first thing a regulator asks for.
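As an illustrative sketch only (the names and fields here are hypothetical, not a client schema), a minimal inventory record can be as small as one structured entry per model, serialised wherever your audit trail lives:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One row in the model inventory: what runs, on what data, owned by whom."""
    model_id: str                                            # unique identifier
    version: str                                             # deployed version or git tag
    owner: str                                               # accountable team or person
    datasets: list[str] = field(default_factory=list)        # upstream training/eval data
    external_deps: list[str] = field(default_factory=list)   # vendor APIs, hosted models

# The register itself is just a collection of records; a YAML file
# in version control is often enough to start.
register = [
    ModelRecord("churn-scorer", "v3.2", "data-science@acme.example",
                datasets=["crm_events_2024"],
                external_deps=["openai:gpt-4o"]),
]
```

The point is less the data structure than the discipline: every production model has exactly one entry, and the entry changes in the same commit as the deployment.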
Drift & performance monitoring
Statistical drift detectors, evaluation cadence, golden-set benchmarks and the alarm thresholds that wake an engineer up. The work that turns "the model is fine" from a feeling into a metric with a chart.
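One way to make that concrete (a minimal sketch, not the WhyLabs or Evidently implementations an engagement would actually use) is a Population Stability Index check comparing a reference sample against live traffic, with the usual rule of thumb of alarming above 0.25:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 alarm."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(1 for e in edges if x > e)] += 1
        # Floor at a tiny value so the log term stays defined for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

ref = [i / 100 for i in range(100)]   # reference scores from validation
live = [x + 0.5 for x in ref]         # live scores, shifted upward
# psi(ref, ref) is near zero; psi(ref, live) clears the 0.25 alarm line.
```

Wire the threshold into the same pager that carries infrastructure alerts, and "the model is fine" becomes a metric someone is accountable for.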
Access, policy & incident process
Who can deploy, who can retrain, who sees what data, and what happens when something goes sideways. Aligned to the AU AI Safety Standard and ISO/IEC 42001 — light enough that operators will actually follow it.
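As a sketch of the shape this takes (roles and actions here are hypothetical; production access control belongs in your IAM platform, not application code), a deny-by-default policy table can be this small:

```python
# Hypothetical role -> allowed-actions mapping.
POLICY = {
    "ml-engineer": {"deploy", "retrain"},
    "analyst": {"read_metrics"},
    "incident-commander": {"deploy", "rollback", "read_metrics"},
}

def can(role: str, action: str) -> bool:
    """True if the role is allowed the action; unknown roles get nothing."""
    return action in POLICY.get(role, set())
```

The governance document then answers the remaining question: who holds each role, who grants it, and who reviews the grants each quarter.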
Cost & FinOps optimisation
Token budgets, GPU utilisation, model right-sizing, caching strategy, and the contract terms that stop a runaway bill in month seven. Governance and finance are the same conversation here.
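A hedged sketch of the token-budget idea, with hypothetical limits: a monthly ceiling that alerts at a soft threshold and blocks at the hard one, so the runaway bill never reaches month seven:

```python
class TokenBudget:
    """Monthly token ceiling with a soft alert before the hard cut-off."""

    def __init__(self, monthly_limit: int, alert_at: float = 0.8):
        self.monthly_limit = monthly_limit
        self.alert_at = alert_at   # fraction of the limit that triggers a warning
        self.used = 0

    def record(self, tokens: int) -> str:
        """Record usage and return the budget state: ok, alert or block."""
        self.used += tokens
        if self.used >= self.monthly_limit:
            return "block"   # refuse further calls; page the owner
        if self.used >= self.alert_at * self.monthly_limit:
            return "alert"   # warn finance and engineering before the bill lands
        return "ok"

budget = TokenBudget(monthly_limit=1_000)
```

The same three states map cleanly onto contract terms: the alert threshold is the renegotiation trigger, the block is the spend cap the vendor agrees to honour.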
Standards we align to
We write to operators, not to certificate auditors. But we write so the certificate auditors can read it, too.
AU AI Safety Standard
The Australian Government's Voluntary AI Safety Standard, which sets out risk-based guardrails for deploying AI.
ISO/IEC 42001
The international management-system standard for AI — the AI equivalent of ISO/IEC 27001.
ISO/IEC 23894
AI risk management guidance, applied to model lifecycle and deployment.
NIST AI RMF
Where US-aligned vendors and partners need a common reference, the NIST AI Risk Management Framework.
Engagement shape (Briefing, Review, Fractional) lives on the Advisory overview.
Common questions
Here are the questions we hear most often. Can't find what you're looking for? Start a conversation below and we'll answer directly.
Start a conversation
Bring us the situation. We’ll pair you with a solution architect and write back — no hand-offs across divisions, no sales cadence.