Codeview Digital
AI & Governance · 15 min read

AI Readiness Assessment for Canadian Government Departments

TL;DR

Before deploying AI in a Canadian federal department or Crown corporation, you need a structured readiness assessment covering governance, data quality, infrastructure, talent, and compliance with the Directive on Automated Decision-Making. The Algorithmic Impact Assessment is mandatory - but true readiness goes beyond compliance to evaluate whether your organisation can adopt, operate, and govern AI systems responsibly.

Why AI Readiness Matters Now

The Government of Canada has made AI adoption a strategic priority. The GC AI Strategy 2025-2027 explicitly calls on departments to identify and implement AI solutions that improve service delivery, reduce costs, and enhance decision-making. The political direction is clear - AI is coming to government operations whether departments are ready or not.

But readiness is the critical question. Rushing into AI without assessing your organisation's ability to adopt, operate, and govern these systems is how you end up with expensive pilots that never scale, compliance gaps that create audit findings, and - worst case - automated decisions that harm citizens.

Canada ranks well on the Oxford Insights Government AI Readiness Index, but strong policy has not translated into consistent execution. The policy framework is there. The investment is increasing. What is missing in most departments is the practical groundwork - clean data, clear governance, skilled people, and mature processes - that makes AI work in production, not just in proof of concept.

The departments that move first on structured AI readiness assessments will be the ones that deploy AI successfully. The departments that skip this step will be the ones explaining to the Auditor General why their AI initiative failed to deliver value.

The Mandatory Framework

The Treasury Board Secretariat's Directive on Automated Decision-Making is the starting point for any AI initiative in the federal government. It is not optional. If your system uses automation to make or support decisions affecting individuals, you must comply.

Directive on Automated Decision-Making

The Directive requires departments to complete an Algorithmic Impact Assessment (AIA) before deploying automated decision-making systems. It establishes four impact levels - from Level I (little to no impact) to Level IV (very high impact) - with escalating requirements for transparency, quality assurance, human oversight, and governance at each level.

Algorithmic Impact Assessment (AIA)

The AIA is a questionnaire-based tool that helps departments evaluate the risks associated with an automated decision-making system. It covers project scope, data inputs, decision types, impact on individuals, and risk mitigation measures. The output is an impact level that determines what governance and transparency requirements apply.

Important: the AIA is a compliance tool, not a readiness tool. Completing the AIA tells you what governance requirements apply to a specific system. It does not tell you whether your organisation is ready to build, deploy, and operate AI systems effectively. That requires a broader readiness assessment.
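
To make the pattern concrete, here is a minimal sketch of how a questionnaire-based tool like the AIA turns answers into an impact level. The scoring bands below are an illustrative assumption; the authoritative questions, weights, and thresholds are those in the TBS tool itself:

```python
# Illustrative sketch of the AIA pattern: a questionnaire produces a raw
# score, and score bands map to one of four impact levels. The real
# questions, weights, and thresholds live in the TBS tool, not here.
LEVEL_NAMES = {
    1: "Level I - little to no impact",
    2: "Level II - moderate impact",
    3: "Level III - high impact",
    4: "Level IV - very high impact",
}

def impact_level(risk_score: float, max_score: float) -> int:
    """Map a raw questionnaire score to an impact level.

    The four equal 25% bands are an illustrative assumption,
    not the official thresholds.
    """
    if not 0 <= risk_score <= max_score:
        raise ValueError("risk_score must be between 0 and max_score")
    pct = risk_score / max_score
    if pct <= 0.25:
        return 1
    if pct <= 0.50:
        return 2
    if pct <= 0.75:
        return 3
    return 4
```

The impact level then drives everything downstream: which transparency notices you publish, how much human oversight the decision flow must include, and what quality assurance applies.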

AIDA (Artificial Intelligence and Data Act)

At the federal legislative level, the Artificial Intelligence and Data Act (AIDA) - introduced as part of Bill C-27 - would establish additional requirements for high-impact AI systems if enacted. The final form of AIDA is still uncertain, but departments should be tracking its development because it, or successor legislation, would likely create new obligations for government AI systems, particularly around transparency, fairness, and accountability.

Beyond Compliance: What Readiness Actually Means

Compliance with the Directive on ADM is the floor, not the ceiling. A department can be fully compliant and still not ready to deploy AI effectively. True AI readiness means your organisation has the data, infrastructure, talent, processes, and governance to adopt AI systems that deliver sustained value.

Think of it this way: the Directive tells you the rules of the game. Readiness determines whether you can actually play. A department with poor data quality, no data governance framework, limited ML engineering talent, and no experience operating production AI systems is technically allowed to deploy AI - but the probability of success is low and the risk of harm is high.

The readiness assessment answers a simple question: given where we are today, what do we need to put in place before we can deploy AI responsibly and effectively?

Assessment Dimensions

A comprehensive AI readiness assessment evaluates your organisation across eight dimensions. Each dimension matters - weakness in any one area can derail an AI initiative.

- Governance and leadership - Does your organisation have an AI strategy, executive sponsorship, and clear accountability for AI initiatives? Is there a governance body that can make decisions about AI use cases, risk tolerance, and resource allocation?
- Data quality and management - Is your data accurate, complete, timely, and accessible? Do you have data governance practices in place? Can you identify and access the data sets that AI systems will need? Data quality is the single most common gap we see in government AI readiness.
- Infrastructure and technology - Do you have the compute, storage, and platform capabilities needed to develop, train, and deploy AI models? This includes development environments, MLOps tooling, and production hosting that meets security requirements.
- Talent and skills - Do you have people who can develop, evaluate, and operate AI systems? This is not just data scientists - you also need ML engineers, data engineers, domain experts who can validate outputs, and operations staff who can monitor production systems.
- Ethics and responsible AI - Beyond the Directive on ADM, does your organisation have principles and practices for evaluating bias, fairness, transparency, and accountability in AI systems? Can you explain how your AI systems make decisions?
- Process maturity - Do your existing IT processes (change management, incident management, release management) support AI system operations? AI systems have different operational characteristics than traditional applications - they drift, they need retraining, they can fail in subtle ways.
- Change management readiness - Is your organisation prepared for the cultural shift that AI adoption brings? Will staff trust AI-assisted decisions? Do unions have concerns? Is there a communication plan?
- Vendor and procurement readiness - Can you procure AI tools, platforms, and services through existing government procurement vehicles? Do you understand the market of AI solution providers qualified to work with government?
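
As a concrete sketch, the eight dimensions above can be scored against a simple maturity model. The 1-5 scale, the readiness threshold of 3, and the weakest-link overall score are illustrative assumptions, not a standard instrument:

```python
# Hypothetical maturity scoring for the eight dimensions listed above.
# The 1 (ad hoc) to 5 (optimized) scale and threshold are assumptions.
DIMENSIONS = [
    "Governance and leadership",
    "Data quality and management",
    "Infrastructure and technology",
    "Talent and skills",
    "Ethics and responsible AI",
    "Process maturity",
    "Change management readiness",
    "Vendor and procurement readiness",
]

def assess(scores: dict[str, int], threshold: int = 3) -> dict:
    """Return overall maturity and the dimensions scoring below threshold."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"Unscored dimensions: {sorted(missing)}")
    gaps = [d for d in DIMENSIONS if scores[d] < threshold]
    return {
        # Weakest link: weakness in any one area can derail an AI initiative.
        "overall": min(scores.values()),
        "gaps": gaps,
        "ready": not gaps,
    }
```

The weakest-link choice for the overall score is deliberate: averaging would let a strong data science score mask an immature governance function, which is exactly the failure mode a readiness assessment exists to catch.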

Common Gaps We See

Across dozens of readiness assessments with government departments and Crown corporations, these are the most common gaps.

Data quality is the universal problem

Almost every department overestimates its data readiness. The data exists, but it is scattered across systems, inconsistently formatted, poorly documented, and often incomplete. Before you can do anything meaningful with AI, you need to invest in data quality, data cataloguing, and data governance. This is not glamorous work, but it is essential.

Governance frameworks exist on paper but not in practice

Many departments have written AI principles and governance frameworks. Fewer have operationalized them. A governance framework that does not have clear decision-making authority, a defined process for reviewing AI use cases, and regular oversight of deployed systems is just a document. It needs to be a living practice.

Talent gaps are deeper than expected

The government struggles to recruit and retain AI talent at competitive salaries. But the talent gap is not just about data scientists. Departments also lack ML engineers who can productionize models, data engineers who can build reliable data pipelines, and IT operations staff who understand how to monitor and maintain AI systems in production.

IT operations maturity is a prerequisite nobody talks about

Here is the gap that most AI readiness consultants miss: you cannot operate AI systems reliably if your underlying IT operations are immature. If your incident management process is ad hoc, your change management is inconsistent, and your monitoring is fragmented, adding AI to the mix will create more problems than it solves. IT operations maturity is a foundational prerequisite for AI adoption.

How to Structure the Engagement

A structured AI readiness assessment for a government department typically follows this pattern.

  1. Scoping - Define which parts of the organisation are in scope, what AI use cases are being considered, and what the timeline looks like. This takes 1-2 weeks.
  2. Discovery - Interviews with stakeholders across IT, data, policy, and business areas. Review of existing documentation, data inventories, governance frameworks, and technology platforms. This takes 2-3 weeks.
  3. Assessment - Score each of the eight dimensions against a maturity model. Identify critical gaps that would block AI adoption. This takes 1-2 weeks.
  4. Roadmap - Build a prioritized plan to address gaps, with clear milestones and resource requirements. Quick wins first, foundational work in parallel, advanced capabilities after the foundation is solid. This takes 1-2 weeks.
  5. Debrief and alignment - Present findings to executive sponsors and key stakeholders. Align on priorities and next steps. This takes 1 week.

Total elapsed time: 6-10 weeks, depending on the size and complexity of the organisation.
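
The phase durations above can be sanity-checked in a few lines, assuming the phases run sequentially (phase names and week ranges are taken directly from the list above):

```python
# Phase week ranges from the engagement structure above, run sequentially.
PHASES = {
    "Scoping": (1, 2),
    "Discovery": (2, 3),
    "Assessment": (1, 2),
    "Roadmap": (1, 2),
    "Debrief and alignment": (1, 1),
}

low = sum(lo for lo, _ in PHASES.values())
high = sum(hi for _, hi in PHASES.values())
print(f"Total elapsed time: {low}-{high} weeks")  # → 6-10 weeks
```

In practice some discovery and assessment work can overlap, which is why well-run engagements for smaller departments land near the bottom of the range.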

What to Look For When Choosing an AI Readiness Consultant

The AI readiness consulting market is crowded and noisy. Here is how to separate the practitioners from the presenters.

- Government experience - not just 'public sector' but actual experience with Canadian federal departments or Crown corporations. They should understand the Directive on ADM, the AIA, and government procurement and security requirements.
- Breadth across all eight dimensions - many AI consultants are strong on data science but weak on governance, operations, and change management. Your readiness assessment needs to cover the full picture, not just the technology.
- IT operations depth - this is the differentiator. Most AI readiness consultants do not understand ITSM, ITOM, or how production AI systems need to be operated. The ones who do will give you a much more realistic readiness assessment.
- Practical, not theoretical - ask for examples of readiness assessments they have completed and what happened next. Did the department actually deploy AI? What worked? What did they miss?
- Honest about timelines - any consultant who tells you a government department can go from zero to production AI in three months is not being truthful. Readiness takes time. A good consultant will give you a realistic timeline, not an optimistic one.
- Tool-agnostic - they should not be pushing a specific AI platform or vendor. The readiness assessment should inform your technology choices, not the other way around.
- Senior delivery - AI readiness assessment requires experienced practitioners who can have substantive conversations with executives, data leaders, and IT teams. This is not work for junior analysts.

Frequently Asked Questions

Is the Algorithmic Impact Assessment mandatory for all government AI systems?

The Directive on Automated Decision-Making applies to systems that make or support administrative decisions about individuals. Not all AI systems fall under this scope - internal analytics tools, for example, may not trigger the requirement. However, the safe practice is to complete an AIA for any system that uses AI or automation in a decision-making context. It is better to assess and determine it is low-impact than to skip the assessment and face an audit finding later.

How long does an AI readiness assessment take for a government department?

A focused assessment covering all eight dimensions typically takes 6-10 weeks from kickoff to final report. Larger organisations or those with complex IT environments may need 10-12 weeks. This includes stakeholder interviews, document review, gap analysis, and roadmap development. The assessment itself is the fastest part - acting on the findings takes much longer.

Do we need a separate AI governance consultant and an AI technology consultant?

Not necessarily, but your consultant needs to cover both. The biggest risk is hiring a technology-focused consultant who treats governance as a checkbox, or a policy-focused consultant who does not understand the technical realities of deploying AI. The ideal consultant has depth in both areas - they can assess your data pipelines and your governance frameworks with equal rigour. Boutique firms that combine IT operations and AI readiness expertise often provide better integrated assessments than firms that specialise in only one dimension.

What if our data quality is poor - should we still do a readiness assessment?

Absolutely. Identifying data quality as a gap is one of the most valuable outcomes of a readiness assessment. The assessment quantifies the problem - how poor is the data, which data sets are affected, and what remediation is needed - so you can build a realistic plan to improve it. Skipping the assessment because you already know your data is messy just means you never get a structured plan to fix it.

Can we do an AI readiness assessment in-house instead of hiring a consultant?

You can, if you have people with the right mix of AI knowledge, government context, and organisational assessment experience. The challenge is objectivity - internal teams tend to overestimate readiness in areas they own and underestimate gaps they are not aware of. An external consultant brings fresh eyes, cross-departmental benchmarks, and the ability to deliver uncomfortable findings without internal political consequences. A hybrid approach - internal team leading the process with external validation - can work well.

How does AI readiness relate to IT operations maturity?

They are directly connected. AI systems in production need the same operational support as any other IT system - incident management, change management, monitoring, and release management - plus additional capabilities like model performance monitoring, data drift detection, and retraining pipelines. If your IT operations maturity is low, your ability to operate AI systems reliably will be limited. A good AI readiness assessment evaluates IT operations maturity as a foundational prerequisite.
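
As one concrete example of the extra capability AI operations demand, data drift between a training baseline and live inputs can be flagged with a simple statistic such as the Population Stability Index. This is an illustrative sketch - production monitoring would use a hardened library with thresholds tuned to the system - and the 0.25 alert level is a common rule of thumb, not a standard:

```python
# Minimal Population Stability Index (PSI) sketch for data drift detection.
# Rule of thumb (an industry convention, not a Directive requirement):
# < 0.1 stable, 0.1-0.25 worth watching, > 0.25 likely drift.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the baseline's range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index for x
        # Floor each fraction to avoid log(0) on empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    return sum((a - e) * math.log(a / e)
               for e, a in zip(fractions(expected), fractions(actual)))
```

A check like this would run on every scoring batch, with alerts routed through the same incident management process as any other production issue - which is precisely why IT operations maturity is a prerequisite.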

About the Author

Corey Derouin is the founder and principal consultant at Codeview Digital. With extensive experience in federal government IT operations, ServiceNow platform delivery, and digital transformation, Corey brings a practitioner's perspective to every engagement - not a slide deck, but hands-on delivery from someone who has done the work inside government.

Learn more about our team

Ready to talk?

We don't do high-pressure sales. Just a straightforward conversation about your challenges and whether we can help.

Start a Conversation