Mastering Log Cost Control: A Practical Guide to Using Adaptive Logs Drop Rules
Overview
Managing log volumes is a constant challenge for platform and observability teams. In every organization, there are log lines that add little to no value—health check pings that repeat every few seconds, forgotten DEBUG statements that should have been removed, or verbose INFO logs from rarely used services. These noisy entries inflate your logging bill and clutter your observability data, making it harder to spot real issues.
The new drop rules feature in Grafana Cloud’s Adaptive Logs (currently in public preview) gives you a straightforward way to eliminate or sample low-value log lines before they ever reach long-term storage. By combining your own custom logic with the intelligent optimization recommendations already present in Adaptive Metrics and Adaptive Traces, you can reduce noise and cut costs immediately.
Drop rules work alongside two other mechanisms—exemptions and patterns—to give you complete control over log ingestion. This guide walks you through everything you need to know: what drop rules are, how to set them up, and how to avoid common pitfalls.
Prerequisites
Before you start creating drop rules, make sure you have the following:
- A Grafana Cloud account with access to the Adaptive Logs feature (public preview). You must have the correct permissions to manage log pipelines.
- Familiarity with log labels—you’ll use label selectors to target specific log streams (e.g., `service=my-api,namespace=production`).
- Understanding of log levels—drop rules can filter by detected log levels like `DEBUG`, `INFO`, `WARN`, and `ERROR`.
- A clear list of noisy log producers—identify which services, containers, or environments generate the most junk logs.
Step-by-Step Instructions
Step 1: Understand the Log Ingestion Pipeline
When a log line reaches Grafana Cloud, it goes through three stages in this order:
- Exemptions – Any log matching an exemption rule passes through untouched. No sampling or dropping is applied.
- Drop rules – Evaluated in priority order. The first rule that matches applies its drop rate (0% to 100%).
- Patterns – Optimization recommendations (e.g., for repetitive logs) can be applied to logs that weren’t exempted or dropped.
Drop rules sit in the middle of this pipeline, giving you a powerful lever to intercept known noise before it reaches your storage budget.
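The three-stage order can be made concrete with a short Python sketch. This is purely illustrative—the names (`LogLine`, `process`, the rule dictionaries) are assumptions for this example, not Grafana Cloud’s actual API; in practice the pipeline runs inside Grafana Cloud and you only configure the rules.

```python
from dataclasses import dataclass

# Illustrative model of a log line; field names are assumptions, not the
# Grafana Cloud schema.
@dataclass
class LogLine:
    labels: dict
    level: str
    content: str

def process(line, exemptions, drop_rules, patterns):
    """Show the stage order: exemptions, then drop rules, then patterns."""
    # Stage 1: any matching exemption passes the line through untouched.
    if any(exempt(line) for exempt in exemptions):
        return "kept (exempt)"
    # Stage 2: the FIRST matching drop rule applies its drop rate.
    for rule in drop_rules:
        if rule["match"](line):
            return "dropped" if rule["drop_rate"] >= 1.0 else "sampled"
    # Stage 3: pattern recommendations apply to whatever remains.
    return "pattern stage"
```

Note that an exemption short-circuits everything after it, which is why (as discussed under Common Mistakes) a drop rule targeting an exempted stream has no effect.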
Step 2: Create a New Drop Rule
Navigate to the Adaptive Logs section in your Grafana Cloud instance and select Drop Rules. Click Create Drop Rule. You’ll define three things:
- Rule Name – A descriptive name like “Drop DEBUG logs from legacy services”.
- Match Criteria – Use a combination of:
  - Log labels (e.g., `service=legacy-batch-processor`)
  - Log level (e.g., `DEBUG`)
  - Line content (a text pattern, e.g., `health check`)
- Action – Either Drop 100% (completely discard matching logs) or apply a drop percentage (e.g., 90% to sample only 10% of matching logs).
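As a mental model, a drop rule is just these three parts plus a matcher that ANDs the criteria together. The sketch below uses hypothetical field names (`name`, `match`, `drop_percent`)—the real rules are defined in the Grafana Cloud UI, not as Python dictionaries.

```python
# Minimal sketch of a drop rule's three parts; field names are assumptions,
# not the actual Grafana Cloud schema.
rule = {
    "name": "Drop DEBUG logs from legacy services",
    "match": {
        "labels": {"service": "legacy-batch-processor"},
        "level": "DEBUG",
        "content": "health check",
    },
    "drop_percent": 100,
}

def rule_matches(rule, labels, level, line):
    """All configured criteria must match (logical AND)."""
    m = rule["match"]
    return (
        all(labels.get(k) == v for k, v in m["labels"].items())
        and level == m["level"]
        and m["content"] in line
    )
```

Because the criteria are ANDed, adding more criteria makes a rule narrower, never broader.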
Step 3: Set Up a Simple Drop Rule for DEBUG Logs
Many teams have services that still emit DEBUG logs in production. Create a rule that targets all logs with level DEBUG and drops them entirely. In the match criteria, select log_level = DEBUG. Set the action to Drop 100%. This single rule can cut your log volume significantly.
Step 4: Sample Chatty Repetitive Logs
Sometimes you don’t want to drop logs entirely—just keep a representative sample. For example, a microservice logs a status every 5 seconds. Create a rule with a label selector like service=health-checker, and add a content filter for the text status OK. Set the drop rate to 90%. This keeps 10% of the logs for occasional verification while saving 90% of the cost.
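Sampling at a 90% drop rate is probabilistic: each matching line is kept with probability 0.1. A one-function Python sketch (again an illustration, not Grafana’s implementation):

```python
import random

def should_keep(drop_rate, rng):
    """Keep a line with probability (1 - drop_rate).

    rng is a callable returning a float in [0, 1), e.g. random.Random(...).random.
    """
    return rng() >= drop_rate
```

Over a large volume, roughly 10% of matching lines survive a 0.9 drop rate—enough to verify the service is still emitting its status, at a tenth of the cost.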
Step 5: Target a Specific Noisy Producer
A new service might suddenly start emitting high volumes of low-value logs. Combine multiple criteria: specify a label selector service=noisy-service, choose log level INFO, and add a content pattern like heartbeat. Set the drop rate to 100% to eliminate those lines entirely without affecting other logs from that service.
Step 6: Prioritize Your Rules
Drop rules are evaluated in order. Place more specific rules higher than broad ones. For instance, if you have a rule that drops all DEBUG logs but also a rule that samples DEBUG logs from a critical service, order the specific rule first so it gets evaluated before the broad drop.
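First-match semantics are easy to get wrong, so it helps to see them in code. In this sketch (hypothetical names, including the `payments` service), the specific sampling rule is listed first so it wins for the critical service, while everything else falls through to the broad rule:

```python
# First-match evaluation: the first rule in the list wins, so specific
# rules must precede broad ones. All names here are illustrative.
def first_match(rules, labels, level):
    for rule in rules:
        if rule["match"](labels, level):
            return rule["name"]
    return None

rules = [
    # Specific: sample DEBUG logs from a critical service (listed first).
    {"name": "sample-critical-debug",
     "match": lambda labels, level: level == "DEBUG"
                                    and labels.get("service") == "payments"},
    # Broad: drop all remaining DEBUG logs (listed last).
    {"name": "drop-all-debug",
     "match": lambda labels, level: level == "DEBUG"},
]
```

If the two rules were swapped, `drop-all-debug` would match first for every DEBUG line and the sampling rule would never be reached.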
Step 7: Combine Drop Rules with Exemptions and Patterns
Remember the full pipeline:
- Use exemptions to protect critical logs you never want to drop (e.g., all `ERROR` logs from any service).
- Use drop rules for known noise.
- Let pattern recommendations handle repetitive logs you haven’t explicitly targeted.
This complete system gives you fine-grained control without micromanaging every log line.
Common Mistakes
Mistake 1: Forgetting the Order of Evaluation
If you place a broad drop rule (e.g., drop all INFO logs) before a specific rule that samples INFO logs from a critical service, the specific rule will never be reached. Always order rules from most specific to most general.
Mistake 2: Overusing 100% Drop Rules
While tempting, dropping 100% of matching logs can blind you to trends. Consider using a high drop percentage (like 90% or 95%) instead of 100% for logs that might occasionally contain useful signals. You can always adjust later.
Mistake 3: Neglecting to Test Rules
Before applying a rule to production, test it in a staging environment or use the preview functionality in Grafana Cloud. Verify that the rule matches the intended logs and doesn’t accidentally drop critical data.
Mistake 4: Not Reviewing Log Level Detection
Grafana Cloud detects log levels automatically, but custom log formats might not be parsed correctly. Check that the detected log levels are accurate for your services; otherwise, your drop rules might not match as expected.
Mistake 5: Ignoring Exemptions
Exemptions are processed before drop rules. If you have an exemption for a log stream, any drop rule targeting that stream will have no effect. Plan your exemptions and drop rules together to avoid conflicts.
Summary
Adaptive Logs drop rules provide a simple yet powerful way to eliminate or sample noisy log lines before they are stored in Grafana Cloud. By combining label selectors, log levels, and content filters, you can target known noise with precision. The rules work in conjunction with exemptions and pattern recommendations to form a complete log cost management system. Careful ordering, testing, and balancing 100% drops with sampling ensures you reduce waste without losing critical observability.