Observability · 4 min read

Why Government IT Monitoring Is Broken (And What to Do About It)

The Problem

Walk into any federal department's IT operations centre and you will find the same thing: a wall of dashboards nobody looks at, an inbox full of alerts nobody reads, and a team that finds out about outages from end users calling the service desk.

It is not that these organizations lack monitoring. They have too much of it. The typical federal department runs somewhere between 8 and 15 monitoring tools, each bought at a different time, by a different team, for a different reason. Infrastructure bought Nagios. The app team bought Dynatrace. Security bought Splunk. The cloud team spun up CloudWatch. Someone got a Datadog trial and never cancelled it.

The result is not observability. It is noise. Thousands of alerts per day, most of them meaningless, burying the signals that actually matter in a flood of threshold breaches and heartbeat checks that nobody has tuned since 2019.

Why It Persists

This situation persists for three reasons that have nothing to do with technology.

First, procurement created the mess. Each tool was acquired through a separate procurement, often with a different vendor, a different contract vehicle, and a different renewal cycle. Nobody planned the monitoring architecture. It grew organically, one RFP at a time.

Second, nobody owns the whole picture. The infrastructure team monitors infrastructure. The applications team monitors applications. The security team monitors security. But nobody monitors the service that users actually care about. There is no single team responsible for answering the question: "Is this business service working right now?"

Third, alert fatigue has been normalized. When everything alerts all the time, the rational response is to stop paying attention. Teams build filters, mute channels, and develop a sixth sense for which alerts are "real" and which are noise. The problem is that sixth sense is wrong often enough to guarantee missed incidents. And new team members do not have it at all.

The Path Out

The path out is not buying another tool. It is building an observability practice, which is a fundamentally different thing.

An observability practice starts with a question: "What services do we run, and how do we know they are working?" Not "what infrastructure do we have" but "what does a citizen or employee experience, and can we measure that?" This is the difference between monitoring and observability.

Start with your top 5 business services. For each one, define what "working" means in terms users would recognize. Can they log in? Can they submit a form? Does the response come back in under 3 seconds? These are your service-level indicators, and they are probably not being measured right now, even with 12 monitoring tools in play.
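
To make this concrete, here is a minimal sketch of what measuring those indicators could look like: a synthetic probe that exercises the same path a user would. Everything specific in it is an assumption. The base URL, endpoints, probe credentials, and even the 3-second threshold are placeholders to swap for your own service's.

```python
import time

import requests

# A minimal synthetic probe. The base URL, endpoints, probe credentials,
# and the 3-second threshold are all placeholders; substitute the real
# paths and a dedicated probe account for your own service.
BASE_URL = "https://service.example.gc.ca"
LATENCY_THRESHOLD_S = 3.0

def check_service() -> dict:
    """Exercise the same path a user would: log in, submit a form,
    and time the response. Returns one pass/fail flag per indicator."""
    session = requests.Session()
    results = {}

    # SLI 1: can they log in?
    resp = session.post(
        f"{BASE_URL}/login",
        data={"username": "synthetic-probe", "password": "(probe secret)"},
    )
    results["login_ok"] = resp.ok

    # SLI 2: can they submit a form?
    start = time.monotonic()
    resp = session.post(
        f"{BASE_URL}/forms/submit",
        json={"field": "synthetic test value"},
    )
    elapsed = time.monotonic() - start
    results["submit_ok"] = resp.ok

    # SLI 3: did the response come back in under 3 seconds?
    results["latency_ok"] = elapsed < LATENCY_THRESHOLD_S

    return results

if __name__ == "__main__":
    print(check_service())
```

Run a probe like this on a schedule, from outside your own network, and you have a service-level indicator that no amount of infrastructure-level monitoring will give you.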

Then map the alert chain. When one of those indicators degrades, who needs to know? What is the escalation path? What runbook applies? If you cannot answer these questions for your top 5 services, you have a bigger problem than tool sprawl.
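
One way to force that exercise is to write the chain down as data and audit it for gaps. The sketch below is illustrative only: the service name, teams, timings, and runbook URLs are invented, and the structure is just one plausible shape for the map.

```python
# An illustrative alert-chain map for one service. The service name,
# teams, timings, and runbook URLs are invented; a missing field is a
# gap to fix, not a detail to paper over.
ALERT_CHAIN = {
    "benefits-portal": {
        "login_ok": {
            "notify": "identity-team-oncall",
            "escalate_after_min": 15,
            "escalate_to": "service-owner",
            "runbook": "https://wiki.example/runbooks/portal-login",
        },
        "latency_ok": {
            "notify": "platform-team-oncall",
            "escalate_after_min": 30,
            "escalate_to": "service-owner",
            "runbook": None,  # no runbook yet: a visible gap
        },
    },
}

def audit_alert_chain(chain: dict) -> list[str]:
    """Report every indicator that lacks an owner or a runbook."""
    gaps = []
    for service, indicators in chain.items():
        for indicator, entry in indicators.items():
            if not entry.get("notify"):
                gaps.append(f"{service}/{indicator}: nobody is notified")
            if not entry.get("runbook"):
                gaps.append(f"{service}/{indicator}: no runbook")
    return gaps

if __name__ == "__main__":
    for gap in audit_alert_chain(ALERT_CHAIN):
        print(gap)
```

The audit output is the point: every indicator with no owner or no runbook becomes a known gap rather than a surprise during an incident.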

The tool consolidation conversation comes after that, not before. Once you know what you need to observe and why, the tool question becomes straightforward. Most organizations find they need two or three platforms, not twelve. But the consolidation only sticks when it is driven by a clear service model, not just a desire to cut licence costs.

For a detailed walkthrough of how to build this from scratch, the full observability strategy guide covers the framework, maturity model, and implementation phases. And if you want to see how the major platforms compare for government use cases specifically, the observability tools comparison guide breaks that down by data residency, compliance, and AIOps readiness.

One Question to Ask Yourself

Here is one question worth asking in your next operations meeting: "If our most important citizen-facing service went down right now, how would we find out?"

If the honest answer is "someone would call the service desk," then you have work to do. Not more tools. Not more dashboards. A fundamentally different approach to understanding whether your services are working.

That is what observability strategy is really about.

If this sounds familiar, the full observability strategy guide is at /guides/observability-strategy-government-2026.


About the Author

Corey Derouin is the founder and principal consultant at Codeview Digital. With extensive experience in federal government IT operations, ServiceNow platform delivery, and digital transformation, Corey brings a practitioner's perspective to every engagement.

