For modern engineering teams, Datadog is an indispensable tool for observability. However, this comprehensive visibility can come at a steep price. As an organization scales, the Datadog bill can become as complex and surprising as the cloud bill itself. Effective Datadog cost optimization requires a proactive strategy to control data volume and align monitoring spend with business value. Here are five targeted strategies to get your Datadog bill under control.
1. Audit and Reduce Custom Metrics
Custom metrics are often the single largest driver of a high Datadog bill. Datadog prices custom metrics by the number of unique metric-and-tag-value combinations, each of which counts as a distinct billable time series. This means that high-cardinality metrics—those with many unique tag values (like a user_id tag)—multiply the series count and can become extremely expensive, since every new tag compounds the count of every other tag.
Actionable Steps:
Perform a Cardinality Audit: Use Datadog's Metric Summary page to identify your highest-cardinality custom metrics. Question whether every tag is providing critical value.
Eliminate Unused Metrics: Regularly audit your metrics to identify and remove those that are not being used in any dashboards or monitors.
Pre-aggregate Metrics: Consider aggregating metrics at the source (e.g., using an OpenTelemetry collector) before sending them to Datadog. Send averages and percentiles instead of every raw data point.
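To see why cardinality matters so much, a quick back-of-envelope calculation helps: the number of billable time series is (at worst) the product of each tag's cardinality. The sketch below is illustrative only—the tag names and cardinalities are hypothetical, not taken from any real account—but it shows how a single user_id tag can turn a modest metric into millions of series.

```python
# Illustrative sketch: estimate billable time series for a custom metric.
# Each unique combination of tag values is a distinct time series, so the
# upper bound is the product of per-tag cardinalities (assuming tags vary
# independently). All names and numbers below are hypothetical.
from math import prod

def estimated_series(tag_cardinalities: dict[str, int]) -> int:
    """Upper bound on unique metric-tag combinations (time series)."""
    return prod(tag_cardinalities.values()) if tag_cardinalities else 1

# A request-latency metric tagged with service, endpoint, and status code:
modest = estimated_series({"service": 10, "endpoint": 50, "status": 5})

# The same metric with a user_id tag added explodes the series count:
exploded = estimated_series(
    {"service": 10, "endpoint": 50, "status": 5, "user_id": 100_000}
)
```

Running the numbers: 10 × 50 × 5 is 2,500 series, but adding a 100,000-value user_id tag multiplies that to 250 million. This is why a cardinality audit should question the business value of each tag, not just each metric.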
2. Control Log Ingestion and Indexing
Logs are another major cost center, with pricing based on the volume of data ingested and indexed. The default behavior of many applications is to log verbosely.
Actionable Steps:
Use Exclusion Filters: The most effective way to reduce log costs is to not send unnecessary logs. Configure the Datadog Agent with exclusion filters to drop verbose, low-value logs (like DEBUG or INFO logs from production) at the source.
Leverage Logging without Limits™: For logs you must retain for compliance but don't need to search in real-time, use Datadog's archiving feature. You can send 100% of your logs to cheap storage like Amazon S3 and "rehydrate" them for analysis if needed, avoiding expensive indexing costs.
Implement Log Sampling: For high-volume, repetitive logs, consider sampling. Ingesting only 10% or 20% of these logs can often provide enough data for trend analysis while cutting costs.
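One way to apply sampling before logs ever leave the application is a filter in the logging pipeline itself. The sketch below uses only the Python standard library and is a generic illustration, not a Datadog API: it keeps every WARNING-and-above record but forwards only a configurable fraction of lower-severity ones.

```python
# Illustrative sketch (stdlib only, not a Datadog API): sample low-severity
# logs at the source so high-volume DEBUG/INFO noise never gets shipped.
import logging
import random

class SamplingFilter(logging.Filter):
    """Keep all WARNING+ records; keep only `sample_rate` of the rest."""

    def __init__(self, sample_rate: float = 0.1) -> None:
        super().__init__()
        self.sample_rate = sample_rate

    def filter(self, record: logging.LogRecord) -> bool:
        # Never drop warnings or errors -- sampling applies below WARNING.
        if record.levelno >= logging.WARNING:
            return True
        return random.random() < self.sample_rate

# Usage: attach to the handler that ships logs, e.g.
#   handler.addFilter(SamplingFilter(sample_rate=0.1))
```

A 10% rate as shown typically still surfaces trends in repetitive logs; the always-keep rule for warnings and errors ensures sampling never hides an incident signal.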
3. Right-Size Your APM and Tracing
Application Performance Monitoring (APM) is powerful, but tracing every single request in every service can be overkill.
Actionable Steps:
Be Selective with Tracing: You can often achieve the necessary visibility by fully tracing only your most critical, user-facing services.
Use Ingestion Controls and Sampling: For less critical downstream services, you can use Datadog's ingestion controls to sample traces, providing a representative view of performance without retaining 100% of the data.
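The key property of head-based trace sampling is that the keep-or-drop decision is a deterministic function of the trace ID, so every span in a trace gets the same decision. The sketch below illustrates the idea with a Knuth-style multiplicative hash; it is a conceptual illustration, not Datadog's implementation—in practice you would configure sampling through Datadog's ingestion controls or your tracing library's sample-rate settings.

```python
# Illustrative sketch of deterministic head-based trace sampling.
# Not a Datadog API -- real sampling is configured via ingestion controls
# or tracer settings. The hash constant here is illustrative.

MAX_ID = 2**64  # trace IDs assumed to be 64-bit unsigned integers

def keep_trace(trace_id: int, sample_rate: float) -> bool:
    """Map the trace ID into [0, 1) and keep it below the sample rate.

    Deterministic: the same trace ID always yields the same decision,
    so all spans of one trace are kept or dropped together.
    """
    # Multiplicative hash spreads sequential IDs evenly across the range.
    hashed = (trace_id * 1_111_111_111_111_111_111) % MAX_ID
    return hashed / MAX_ID < sample_rate
```

Because the decision is per-trace rather than per-span, a 20% rate still yields complete, representative traces for the requests it keeps, which is what makes sampled APM data useful for latency analysis.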
4. Consolidate Dashboards and Monitors
While dashboards and monitors are not billed directly, a cluttered environment of stale assets creates operational "noise" that makes it harder for engineers to find what's important. Regularly deprecate and delete unused assets to streamline your observability platform.
5. Align Monitoring Spend with Business Value
The most important strategic step is to treat your Datadog spend with the same rigor as your cloud spend. This means adopting a FinOps mindset for observability.
Allocate Datadog Costs: Use a FinOps platform that can ingest Datadog billing data and, using tags, allocate those costs back to the specific teams, services, or features generating them.
Create Accountability: When a team can see that their new feature added $2,000 to the monthly Datadog bill, they are empowered and incentivized to be more mindful of observability costs.
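The mechanics of allocation are straightforward once usage carries team tags. The sketch below is hypothetical—the line items, tag names, and dollar figures are invented for illustration, not drawn from Datadog's billing format—but it shows the core pattern: roll costs up by an owning-team tag and surface anything untagged as an explicit "unallocated" bucket to chase down.

```python
# Hypothetical sketch: allocate observability spend to teams via tags.
# Line-item shape, tag names, and figures are illustrative, not Datadog's
# actual billing export format.
from collections import defaultdict

def allocate_costs(line_items: list[dict]) -> dict[str, float]:
    """Sum cost per owning team; untagged spend goes to 'unallocated'."""
    totals: dict[str, float] = defaultdict(float)
    for item in line_items:
        team = item.get("tags", {}).get("team", "unallocated")
        totals[team] += item["cost_usd"]
    return dict(totals)

usage = [
    {"product": "custom_metrics", "cost_usd": 1200.0, "tags": {"team": "payments"}},
    {"product": "log_indexing",   "cost_usd": 800.0,  "tags": {"team": "payments"}},
    {"product": "apm_hosts",      "cost_usd": 500.0,  "tags": {}},
]
```

The "unallocated" bucket is the actionable output: driving it toward zero through tagging hygiene is what makes the per-team numbers trustworthy enough to build accountability on.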
Conclusion
By implementing a combination of tactical optimizations—auditing metrics, controlling logs, and right-sizing APM—and a strategic focus on cost allocation and accountability, engineering teams can ensure their observability spend is both efficient and directly tied to business value. This transforms the Datadog bill from a source of financial stress into a manageable and strategic investment.
All in One Place
Atler Pilot decodes your cloud spend story by bringing monitoring, automation, and intelligent insights together for faster and better cloud operations.

