Why AI Readiness Matters Now
The Government of Canada has made AI adoption a strategic priority. The GC AI Strategy 2025-2027 explicitly calls on departments to identify and implement AI solutions that improve service delivery, reduce costs, and enhance decision-making. The political direction is clear - AI is coming to government operations whether departments are ready or not.
But readiness is the critical question. Rushing into AI without assessing your organisation's ability to adopt, operate, and govern these systems is how you end up with expensive pilots that never scale, compliance gaps that create audit findings, and - worst case - automated decisions that harm citizens.
Canada scores well on the Oxford Insights Government AI Readiness Index, but a strong national ranking has not translated into consistent execution across departments. The policy framework is there. The investment is increasing. What is missing in most departments is the practical groundwork - clean data, clear governance, skilled people, and mature processes - that makes AI work in production, not just in proof of concept.
The departments that move first on structured AI readiness assessments will be the ones that deploy AI successfully. The departments that skip this step will be the ones explaining to the Auditor General why their AI initiative failed to deliver value.
The Mandatory Framework
The Treasury Board Secretariat's Directive on Automated Decision-Making is the starting point for any AI initiative in the federal government. It is not optional. If your system uses automation to make or support decisions affecting individuals, you must comply.
Directive on Automated Decision-Making
The Directive requires departments to complete an Algorithmic Impact Assessment (AIA) before deploying automated decision-making systems. It establishes four impact levels - from Level I (little to no impact) to Level IV (very high impact) - with escalating requirements for transparency, quality assurance, human oversight, and governance at each level.
Algorithmic Impact Assessment (AIA)
The AIA is a questionnaire-based tool that helps departments evaluate the risks associated with an automated decision-making system. It covers project scope, data inputs, decision types, impact on individuals, and risk mitigation measures. The output is an impact level that determines what governance and transparency requirements apply.
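As a rough illustration of how the escalating structure works, the impact levels can be thought of as a cumulative lookup: each level inherits the obligations of the levels below it and adds more. The requirement names in this sketch are paraphrased placeholders, not the Directive's official wording, and the real AIA is a TBS questionnaire, not a lookup table.

```python
# Illustrative only: requirement names paraphrase the Directive's escalating
# structure and are NOT the official wording from the TBS Directive on
# Automated Decision-Making.
BASE_REQUIREMENTS = [
    "plain-language notice that the decision is automated",
]
ADDED_BY_LEVEL = {
    1: [],
    2: ["documented quality assurance and testing"],
    3: ["peer review of the system", "human intervention points"],
    4: ["approval at a senior governance level", "ongoing human oversight"],
}

def requirements_for(level: int) -> list[str]:
    """Cumulative governance requirements for AIA impact levels I-IV."""
    if level not in ADDED_BY_LEVEL:
        raise ValueError(f"impact level must be 1-4, got {level}")
    reqs = list(BASE_REQUIREMENTS)
    for lvl in range(1, level + 1):
        reqs.extend(ADDED_BY_LEVEL[lvl])
    return reqs
```

The cumulative design mirrors the Directive's intent: a Level III system must satisfy everything a Level II system does, plus more.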
Important: the AIA is a compliance tool, not a readiness tool. Completing the AIA tells you what governance requirements apply to a specific system. It does not tell you whether your organisation is ready to build, deploy, and operate AI systems effectively. That requires a broader readiness assessment.
AIDA (Artificial Intelligence and Data Act)
At the federal legislative level, the Artificial Intelligence and Data Act (AIDA) - part of Bill C-27 - will establish additional requirements for high-impact AI systems. While the final form of AIDA is still evolving, departments should be tracking its development because it will likely create new obligations for government AI systems, particularly around transparency, fairness, and accountability.
Beyond Compliance: What Readiness Actually Means
Compliance with the Directive on ADM is the floor, not the ceiling. A department can be fully compliant and still not ready to deploy AI effectively. True AI readiness means your organisation has the data, infrastructure, talent, processes, and governance to adopt AI systems that deliver sustained value.
Think of it this way: the Directive tells you the rules of the game. Readiness determines whether you can actually play. A department with poor data quality, no data governance framework, limited ML engineering talent, and no experience operating production AI systems is technically allowed to deploy AI - but the probability of success is low and the risk of harm is high.
The readiness assessment answers a simple question: given where we are today, what do we need to put in place before we can deploy AI responsibly and effectively?
Assessment Dimensions
A comprehensive AI readiness assessment evaluates your organisation across eight dimensions. Each dimension matters - weakness in any one area can derail an AI initiative.
Common Gaps We See
Across dozens of readiness assessments with government departments and Crown corporations, these are the most common gaps.
Data quality is the universal problem
Almost every department overestimates its data readiness. The data exists, but it is scattered across systems, inconsistently formatted, poorly documented, and often incomplete. Before you can do anything meaningful with AI, you need to invest in data quality, data cataloguing, and data governance. This is not glamorous work, but it is essential.
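One concrete starting point for that unglamorous work is a simple completeness profile: measuring, per field, what fraction of records actually hold a usable value. This is a minimal sketch, and the field names in the usage example are hypothetical.

```python
# Minimal data-completeness profile: for each field, the share of records
# that hold a non-empty value. A real profiling pass would also check
# format consistency and cross-system duplication.
def completeness(records: list[dict], fields: list[str]) -> dict[str, float]:
    """Fraction of records (0.0-1.0) with a usable value per field."""
    if not records:
        raise ValueError("no records to profile")
    total = len(records)
    return {
        field: sum(1 for r in records if r.get(field) not in (None, ""))
        / total
        for field in fields
    }
```

Run against even a small extract, a profile like this turns "our data is messy" into a measurable baseline that a remediation plan can target.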
Governance frameworks exist on paper but not in practice
Many departments have written AI principles and governance frameworks. Fewer have operationalised them. A governance framework that does not have clear decision-making authority, a defined process for reviewing AI use cases, and regular oversight of deployed systems is just a document. It needs to be a living practice.
Talent gaps are deeper than expected
The government struggles to recruit and retain AI talent at competitive salaries. But the talent gap is not just about data scientists. Departments also lack ML engineers who can productionise models, data engineers who can build reliable data pipelines, and IT operations staff who understand how to monitor and maintain AI systems in production.
IT operations maturity is a prerequisite nobody talks about
Here is the gap that most AI readiness consultants miss: you cannot operate AI systems reliably if your underlying IT operations are immature. If your incident management process is ad hoc, your change management is inconsistent, and your monitoring is fragmented, adding AI to the mix will create more problems than it solves. IT operations maturity is a foundational prerequisite for AI adoption.
How to Structure the Engagement
A structured AI readiness assessment for a government department typically follows this pattern.
- Scoping - Define which parts of the organisation are in scope, what AI use cases are being considered, and what the timeline looks like. This takes 1-2 weeks.
- Discovery - Interviews with stakeholders across IT, data, policy, and business areas. Review of existing documentation, data inventories, governance frameworks, and technology platforms. This takes 2-3 weeks.
- Assessment - Score each of the eight dimensions against a maturity model. Identify critical gaps that would block AI adoption. This takes 1-2 weeks.
- Roadmap - Build a prioritized plan to address gaps, with clear milestones and resource requirements. Quick wins first, foundational work in parallel, advanced capabilities after the foundation is solid. This takes 1-2 weeks.
- Debrief and alignment - Present findings to executive sponsors and key stakeholders. Align on priorities and next steps. This takes 1 week.
Total elapsed time: 6-10 weeks, depending on the size and complexity of the organisation.
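The assessment phase above can be sketched as a scoring exercise: rate each dimension on a maturity scale and flag anything below a threshold as a gap. The eight dimension names and the 1-5 scale here are illustrative placeholders, not a published maturity model.

```python
# Hypothetical maturity scoring: dimension names and the 1-5 scale are
# illustrative, not a standard framework.
from statistics import mean

DIMENSIONS = [
    "data", "infrastructure", "talent", "governance",
    "processes", "security", "it_operations", "culture",
]

def assess(scores: dict[str, int], threshold: int = 3) -> tuple[float, list[str]]:
    """Overall maturity (mean of 1-5 scores) and dimensions below threshold."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing scores for: {sorted(missing)}")
    gaps = sorted(d for d in DIMENSIONS if scores[d] < threshold)
    return round(mean(scores[d] for d in DIMENSIONS), 2), gaps
```

The point of the exercise is not the overall number but the gap list: it becomes the input to the roadmap phase, where each gap gets a remediation milestone.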
What to Look For When Choosing an AI Readiness Consultant
The AI readiness consulting market is crowded and noisy. Here is how to separate the practitioners from the presenters.
Frequently Asked Questions
Is the Algorithmic Impact Assessment mandatory for all government AI systems?
The Directive on Automated Decision-Making applies to systems that make or support administrative decisions about individuals. Not all AI systems fall under this scope - internal analytics tools, for example, may not trigger the requirement. However, the safe practice is to complete an AIA for any system that uses AI or automation in a decision-making context. It is better to assess and determine it is low-impact than to skip the assessment and face an audit finding later.
How long does an AI readiness assessment take for a government department?
A focused assessment covering all eight dimensions typically takes 6-10 weeks from kickoff to final report. Larger organisations or those with complex IT environments may need 10-12 weeks. This includes stakeholder interviews, document review, gap analysis, and roadmap development. The assessment itself is the fastest part - acting on the findings takes much longer.
Do we need a separate AI governance consultant and an AI technology consultant?
Not necessarily, but your consultant needs to cover both. The biggest risk is hiring a technology-focused consultant who treats governance as a checkbox, or a policy-focused consultant who does not understand the technical realities of deploying AI. The ideal consultant has depth in both areas - they can assess your data pipelines and your governance frameworks with equal rigour. Boutique firms that combine IT operations and AI readiness expertise often provide better integrated assessments than firms that specialise in only one dimension.
What if our data quality is poor - should we still do a readiness assessment?
Absolutely. Identifying data quality as a gap is one of the most valuable outcomes of a readiness assessment. The assessment quantifies the problem - how poor is the data, which data sets are affected, and what remediation is needed - so you can build a realistic plan to improve it. Skipping the assessment because you already know your data is messy just means you never get a structured plan to fix it.
Can we do an AI readiness assessment in-house instead of hiring a consultant?
You can, if you have people with the right mix of AI knowledge, government context, and organisational assessment experience. The challenge is objectivity - internal teams tend to overestimate readiness in areas they own and underestimate gaps they are not aware of. An external consultant brings fresh eyes, cross-departmental benchmarks, and the ability to deliver uncomfortable findings without internal political consequences. A hybrid approach - internal team leading the process with external validation - can work well.
How does AI readiness relate to IT operations maturity?
They are directly connected. AI systems in production need the same operational support as any other IT system - incident management, change management, monitoring, and release management - plus additional capabilities like model performance monitoring, data drift detection, and retraining pipelines. If your IT operations maturity is low, your ability to operate AI systems reliably will be limited. A good AI readiness assessment evaluates IT operations maturity as a foundational prerequisite.
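Of the AI-specific capabilities listed above, data drift detection is the most mechanical to illustrate. One common approach is the Population Stability Index (PSI), which compares the distribution of a feature at training time against what the system sees in production. This is a sketch under simple assumptions (numeric feature, equal-width bins from the baseline range); the conventional 0.2 alert threshold is a rule of thumb, not a standard.

```python
# Sketch of data-drift detection via the Population Stability Index (PSI).
# PSI near 0 means the current data matches the baseline; values above
# roughly 0.2 are commonly treated as significant drift.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline sample and a current sample of one feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        n = len(values)
        return [max(c / n, 1e-4) for c in counts]  # floor avoids log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In an operations context, a metric like this runs on a schedule against each model input, and a breach of the threshold raises an incident through the same incident management process used for any other production system.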
Related Services
About the Author
Corey Derouin is the founder and principal consultant at Codeview Digital. With extensive experience in federal government IT operations, ServiceNow platform delivery, and digital transformation, Corey brings a practitioner's perspective to every engagement - not a slide deck, but hands-on delivery from someone who has done the work inside government.
Learn more about our team