Redefining Detection Engineering: Part II
An Engineering Discipline
In Part I, we defined detection engineering as an engineering discipline, not an analyst task. In Part II, we’ll bring that definition down to ground level and walk through what the work actually looks like day-to-day: the responsibilities, deliverables, and expectations of a detection engineer.
Responsibilities of a Detection Engineer
The Job Postings
The elephant in the room.
Read any random detection engineering job description and you’ll see the same recycled bullets:
“Design, implement, and tune detection rules across SIEM platforms”
“Develop and refine alert logic and reduce false positives”
“Ensure alignment and quality of detection”
This isn’t wrong, but it isn’t right either. These descriptions reduce the role to janitorial work: clean up detections you didn’t write, tune rules forever, live and die in a SIEM. If that’s all you’re doing, you aren’t engineering anything. You are the world’s most overpaid mop.
These job postings describe outputs, not the systems that need to be designed and managed to produce them.
The Reality
As I stated in Part I, detection engineering is an engineering discipline. That means the work mirrors software development far more than traditional SOC work. There are responsibilities, deliverables, and ownership far beyond “writing rules.” It is an engineering cycle with several phases: research, simulation, development, testing, validation, automation, deployment, and maintenance. Sound familiar at all? (cough) SDLC (cough)
Your job isn’t just writing rules. Your job is to design logic, analytics, and systems that produce reliable detections at scale.
So, what does that look like in practice?
Core Responsibilities of a Detection Engineer
Everything below this point is what the job actually entails: the elements that make the role an engineering discipline.
Detection Research & Development
This isn’t regurgitating IOCs into queries and filling up your SIEM. This is research → translation → development & validation.
Researching
You’ll study threat reporting, malware families, and red team techniques (red team != penetration testers). We’re talking about real attack chains and adversary tradecraft, not indicators of compromise plastered in open-source reporting.
Translation
You’ll turn that raw intelligence into repeatable behaviors. Essentially, you’re identifying patterns that an attacker can’t easily change. I could get into a whole tangent about the Pyramid of Pain right now, but we’re saving that for Part III.
Lab Work
You’ll simulate the attacks you’ve researched in controlled lab environments, preferably mimicking your production systems as closely as possible, to generate a source of truth for how those attacks manifest in your telemetry. This will also help you identify log sources you may not have initially considered.
Development
You’ll use your research, translations, and simulated attack logging to define actionable detection logic in your organization’s formats (e.g., SPL, KQL, Sigma). Ideally, this logic should be portable, reusable, testable, measurable, and version-controlled (we’ll get into these specifics in a later part).
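To make that concrete, here’s a minimal sketch of what portable, testable logic can look like: a hypothetical Sigma rule parsed and rendered to SPL with the open-source pySigma library. The rule content, field names, and backend choice are illustrative assumptions, not a production detection.

```python
# A minimal sketch, assuming the open-source pySigma packages
# (pip install pysigma pysigma-backend-splunk). The rule below is
# illustrative, not a production-ready detection.
from sigma.rule import SigmaRule
from sigma.backends.splunk import SplunkBackend

RULE_YAML = r"""
title: Suspicious LSASS Dump via comsvcs.dll
status: experimental
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\rundll32.exe'
        CommandLine|contains|all:
            - 'comsvcs.dll'
            - 'MiniDump'
    condition: selection
level: high
"""

rule = SigmaRule.from_yaml(RULE_YAML)         # parse + structural validation
print(SplunkBackend().convert_rule(rule)[0])  # render the same logic as SPL
```

Because the rule is plain YAML under version control, the same file can be linted in CI, converted per backend, and reviewed line-by-line in a PR.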
Detection Validation & Testing
An untested detection is not a detection. It is a guess, and as you know, a guess isn’t engineering.
Test Cases
You’ll develop unit tests with log samples you either pull from open-source repositories or craft by hand. You want both known-good and known-bad examples, including edge cases, to measure the reliability of your detection logic.
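Here’s a hedged sketch of what that can look like with pytest-style tests. The detect() function is a hypothetical stand-in for real rule logic; in practice you’d replay serialized log samples through your SIEM’s query engine or a converter instead.

```python
# A sketch of detection unit testing (pytest-style). detect() is a
# hypothetical stand-in for real rule logic.
def detect(event: dict) -> bool:
    """Illustrative logic: rundll32 dumping LSASS via comsvcs.dll."""
    cmd = event.get("CommandLine", "").lower()
    return (event.get("Image", "").lower().endswith("\\rundll32.exe")
            and "comsvcs.dll" in cmd
            and "minidump" in cmd)

# Known-bad: a hand-crafted sample the rule MUST catch.
KNOWN_BAD = {
    "Image": r"C:\Windows\System32\rundll32.exe",
    "CommandLine": r"rundll32.exe comsvcs.dll, MiniDump 624 C:\tmp\l.dmp full",
}

# Known-good: benign rundll32 usage the rule MUST ignore (edge case).
KNOWN_GOOD = {
    "Image": r"C:\Windows\System32\rundll32.exe",
    "CommandLine": r"rundll32.exe shell32.dll,Control_RunDLL",
}

def test_fires_on_known_bad():
    assert detect(KNOWN_BAD)

def test_stays_quiet_on_known_good():
    assert not detect(KNOWN_GOOD)
```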
Adversary Emulation
Ideally, you’ll have the ability to run controlled emulations on a production “testing” system. This will allow you to validate that the detection logic fires correctly against real, live log sources. If you can’t, hopefully you have a red team or penetration testing team that can perform this for you.
Detection Lifecycle Management
Detections are living code.
Version Control
All of your detections, unit tests, metadata, response documentation, and context belong in Git. This is nearly non-negotiable. Your SIEM’s built-in version control doesn’t count here; it lacks branching, PR reviews, and CI/CD concepts. PR reviews solve many of the problems detection engineering teams face: ensuring quality, catching errors before they reach production, and creating a knowledge-transfer system for every engineer on the team.
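The exact repository shape varies by team, but a hypothetical layout might look like this:

```
detections/
├── rules/
│   └── windows/
│       └── lsass_dump_comsvcs.yml      # the detection logic itself
├── tests/
│   └── lsass_dump_comsvcs/
│       ├── known_bad.json              # samples the rule must catch
│       └── known_good.json             # samples it must ignore
├── docs/
│   └── lsass_dump_comsvcs.md           # triage context & playbook
└── .github/workflows/ci.yml            # lint, test, deploy on merge
```

Every file in that tree rides through the same branches, PR reviews, and CI gates as the rule logic itself.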
Maintenance & Retirement
Attackers evolve. Log sources evolve. Priorities evolve. Some detections stop adding value. Either maintain them or remove them.
Detection-as-Code (DaC)
Detection engineering does not scale without codification and automation.
A mature detection engineering program looks like an assembly line that produces banger detections:
Research inputs
Logic development
Automated testing
Automated deployment
Measurement & feedback
CI/CD Pipelines
Pipelines prevent manual or inconsistent deployment of detections and enforce quality at scale. We’re talking about automated testing, linting, validation checks, documentation packaging, and deployment to your production environments.
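As a flavor of what one pipeline stage might run, here’s a minimal sketch of a pre-merge lint step in Python; the rules/ path and required fields are assumed conventions, not a standard.

```python
# Sketch of a pre-merge CI stage: lint every rule file before it can
# merge. The rules/ path and required fields are assumed conventions.
import sys
from pathlib import Path

import yaml  # pip install pyyaml

REQUIRED_FIELDS = {"title", "logsource", "detection", "level"}

def validate(path: Path) -> list[str]:
    try:
        rule = yaml.safe_load(path.read_text())
    except yaml.YAMLError as exc:
        return [f"{path}: invalid YAML ({exc})"]
    missing = REQUIRED_FIELDS - set(rule or {})
    return [f"{path}: missing fields {sorted(missing)}"] if missing else []

def main() -> int:
    errors = [e for p in Path("rules").rglob("*.yml") for e in validate(p)]
    print("\n".join(errors) or "all rules passed lint")
    return 1 if errors else 0  # non-zero exit fails the pipeline

if __name__ == "__main__":
    sys.exit(main())
```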
We’ll discuss this in depth in a later part.
Risk Assessment & Architecture
If a detection isn’t created within the context of your organization, who is it intended for? A detection is useless if it doesn’t meaningfully reduce risk for your organization.
Business Context
You need to understand what assets are critical and the processes or workflows your organization uses. This also means you have to collaborate with other business units. I know, I know, it can feel gross to talk to non-nerds, but it must be done. You can’t engineer detections in a vacuum. Detections must be based on actual risk, not whatever you saw in today’s headline on HackerNews or The DFIR Report.
Risk-Based Coverage
Again, I want to stress that we need to cover the actual risk to your organization. This means realistic threats, hopefully based upon threat modeling for your environment. If you don’t have a Threat Intelligence team, congratulations, you’ll likely be doing that threat modeling yourself. I’m sorry.
Detection Architecture
Your program must be designed to be sustainable over the long term. Hence, the strong preference for Git. Storing a large number of detections in a SIEM directly is not feasible in the long run. This also means you need to develop scalable pipelines for your team.
Data & Log Source Onboarding
Data issues are detection issues. You cannot engineer effective detections without reliable logging. You also don’t want to rely on wildcards or regex to extract every value you need - get it parsed.
Log Validation
You need to work with your SIEM or platform teams to validate how onboarding is being completed. Confirm the field mappings, parsing, enrichments, timestamps, and volume.
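A hedged sketch of that kind of spot check is below; the field names, timestamp format (ISO 8601 with offset), and skew threshold are all illustrative assumptions about your pipeline.

```python
# A hedged spot check for freshly onboarded events. Field names,
# timestamp format, and the skew threshold are illustrative.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"timestamp", "host", "Image", "CommandLine"}
MAX_SKEW = timedelta(minutes=5)

def check_event(event: dict) -> list[str]:
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - set(event))]
    if "timestamp" in event:
        ts = datetime.fromisoformat(event["timestamp"])
        skew = abs(datetime.now(timezone.utc) - ts)
        if skew > MAX_SKEW:
            problems.append(f"timestamp skew {skew} exceeds {MAX_SKEW}")
    return problems

# A sample event missing process fields and carrying a stale timestamp:
print(check_event({"timestamp": "2024-05-01T12:00:00+00:00", "host": "ws-042"}))
```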
Normalization & Enrichment
You must ensure the logs match your expected schemas. For example, process creation in one log source should match another log source’s process creation mappings.
I cannot stress the importance of enrichment enough. Do yourself a favor and at least get updated GeoIP and ASN information.
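As a sketch, here’s what that enrichment might look like using MaxMind’s geoip2 package and locally staged GeoLite2 databases; the file paths and the src_ip field name are placeholders.

```python
# A minimal enrichment sketch, assuming MaxMind's geoip2 package
# (pip install geoip2) and locally staged GeoLite2 databases.
import geoip2.database

# Open the readers once; per-event re-opening would be wasteful.
CITY_DB = geoip2.database.Reader("GeoLite2-City.mmdb")
ASN_DB = geoip2.database.Reader("GeoLite2-ASN.mmdb")

def enrich(event: dict) -> dict:
    """Attach country and ASN context to an event's source IP."""
    ip = event.get("src_ip")
    if ip:  # lookups raise AddressNotFoundError for private/unknown IPs
        city = CITY_DB.city(ip)
        asn = ASN_DB.asn(ip)
        event.update(
            src_country=city.country.iso_code,
            src_asn=asn.autonomous_system_number,
            src_as_org=asn.autonomous_system_organization,
        )
    return event

print(enrich({"src_ip": "8.8.8.8"}))
```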
Collaboration & Enablement
Detection engineering is the hinge point for nearly every security function. It is not an extension of any other team, and it is absolutely not a dumping ground for work other teams should be doing.
Security Operations Center (SOC)
This is your primary customer. You have to develop detections that they can actually triage. Give them investigation context and playbooks. Don’t let them drown.
Threat Intelligence
You’ll work alongside the intelligence analysts to develop and formalize threat models, ingest their intelligence reporting for analysis, and collaborate to assess coverage needs.
Threat Hunters
Their work feeds directly into yours. You’ll turn their findings into detections and provide them with hunt packages and reusable queries to support their process.
Incident Responders
You should be a part of post-mortems or incident reviews. This is where you’ll not only identify any gaps in your detection catalog but also understand any detection failures that contributed to the incident.
Security Leadership
Leadership likes metrics; this is universal. You need to maintain metrics on the health, performance, and coverage of your detections.
Metrics & Measurements
A detection without measurement is an assumption... and you know what they say about assumptions.
Metrics
You’ll likely track quite a few metrics; the most notable typically include true/false positive rates, precision/recall, alert volume, and health metrics. We’ll discuss this in much more depth in a later part. Think detection efficiency, not detection count.
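As a toy example of efficiency over count, a per-rule precision figure can be derived straight from triage dispositions; the disposition labels here are assumptions about how your case data is categorized.

```python
# A toy sketch of per-rule precision from triage dispositions. The
# labels are assumptions about how your case data is categorized.
from collections import Counter

def precision(dispositions: list[str]) -> float:
    """Fraction of fired alerts that were true positives."""
    counts = Counter(dispositions)
    tp, fp = counts["true_positive"], counts["false_positive"]
    return tp / (tp + fp) if (tp + fp) else 0.0

# e.g., pulled from your case management system for one rule.
# Recall is harder: you only learn about misses through emulation
# results or, painfully, through incidents.
history = ["true_positive", "false_positive", "true_positive", "true_positive"]
print(f"precision: {precision(history):.2f}")  # -> 0.75
```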
Automated Measurement
Most of your metrics can be automated via dashboards and CI/CD pipelines. Don’t get carried away with 1,000 different metrics, though. Please.
What The Responsibilities Aren’t
To be clear, detection engineering is not just:
Throwing IOCs from an intel report into queries as alerts
Investigative work (e.g., SOC work, threat hunting, etc.)
Tuning pre-canned detections from third parties
Counting how many rules you wrote as if it proves something
If you’re doing nothing but the above, you aren’t a detection engineer. You’re just turning a wrench.
Closing Note
After all that, there’s a critical point I want you to understand: I haven’t mentioned volume a single time. This work is about precision and reliability. The work you do as a detection engineer determines whether your SOC is drowning in noise or focused on actual threats.
Detection engineers don’t prove their worth by the volume of detections they create. They prove it by trust. If analysts trust your alerts, you’ve won. If they don’t, you’ve failed - no matter how many you’ve written.
What’s Next in Part III?
The Language of Detection Engineering