
Rethinking Incident Response – From PICERL to DAIR (Expanded Edition)


If you guys remember, I wrote a post a while back about the DAIR model, and honestly, the response was wild. I got so many messages and follow-ups asking for a deeper dive into the topic. So here I am, trying to break it down better, one phase at a time.



Let’s get into it.

-------------------------------------------------------------------------------------------------------------


🚨 First, let’s be clear: Incident Response is not linear

There’s no magical 6-step recipe to handle incidents. Real-world incidents are messy. You’ve got multiple events happening in parallel, your alerting system is going off, someone’s panicking, and maybe your containment plan just half-worked. Things overlap — and a linear approach just doesn't hold up anymore.


For example:

You detect something sketchy — cool, that’s your detection phase, right?

But what if during containment or eradication, you uncover a whole new set of TTPs or realize that another part of the system was compromised before detection even kicked in?

Now you’re circling back.

This is where DAIR — the Dynamic Approach to Incident Response — steps in. It’s not about replacing PICERL. It’s about shifting our mindset to a flexible, outcome-driven process.



⚙️ DAIR: Dynamic. Not Chaotic.

DAIR breaks down into waypoints rather than locked steps. Think of it like GPS rerouting in real time — you still want to reach your destination (containment and recovery), but you may need to take a few turns based on roadblocks you hit along the way.


Here’s the core flow:

  1. Detect

  2. Analyze

  3. Improve

  4. Respond


But in real life, it's more of a loop than a line.


-------------------------------------------------------------------------------------------------------------

🧠 Preparation: The underrated superhero of IR

Before you even start thinking about detection, you’ve gotta "know thy organization."

Not just buzzwords — I’m talking about:
  • What actually matters to the business?

  • What’s your visibility like — do you have endpoint telemetry? Firewall logs? Sysmon? Or are you just hoping EDR will save the day?

  • Who’s reviewing the logs, and how often?

  • What’s your backup recovery process? Ever tested it? (Spoiler: most haven’t.)


Also, can your IR team actually respond?

Do they get tabletop exercises, actual practice runs, or are they just hoping Google + gut feeling = success?


Too many IR teams get stuck during a real event because they weren’t prepared for their own tools, their own network, or their own people. The DAIR model, just like PICERL, starts with preparation because without it, everything else falls apart fast.


-------------------------------------------------------------------------------------------------------------

🔍 Detect: It’s not about noise — it’s about meaning

Detection is where most IR teams spend their lives. But let’s break it down:

  • IOA (Indicator of Attack) = something an attacker does to get what they want (e.g. lateral movement, bypass attempts, privilege misuse)

  • EOI (Event of Interest) = “Hey, does this event actually matter to us based on our risk and context?”


You can get a million IOAs, but if they don’t align with what your organization cares about, it might not even qualify as an EOI.

That’s a big difference.

Not every alert needs panic mode.
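
To make the IOA vs. EOI idea concrete, here's a rough Python sketch of that filtering step. The alert fields, the "crown jewel" list, and the technique names are all made up for illustration; swap in whatever your SIEM or EDR actually emits and whatever your business actually cares about.

```python
# Minimal sketch: promoting raw IOAs to Events of Interest (EOIs).
# Everything here (field names, asset list, techniques) is illustrative, not a real product schema.

CROWN_JEWELS = {"dc01", "payroll-db", "vpn-gw"}          # assets the business actually cares about
HIGH_RISK_TECHNIQUES = {"lateral_movement", "credential_dumping", "persistence"}

def is_event_of_interest(alert: dict) -> bool:
    """Return True if a raw IOA matters in *our* context."""
    touches_critical_asset = alert.get("host") in CROWN_JEWELS
    high_risk_behaviour = alert.get("technique") in HIGH_RISK_TECHNIQUES
    # Context beats volume: a medium alert on a crown jewel can outrank a "critical" alert on a lab VM.
    return touches_critical_asset or (high_risk_behaviour and alert.get("severity", "low") != "low")

alerts = [
    {"host": "lab-vm-17", "technique": "port_scan", "severity": "high"},
    {"host": "payroll-db", "technique": "credential_dumping", "severity": "medium"},
]

eois = [a for a in alerts if is_event_of_interest(a)]
print(f"{len(alerts)} IOAs in, {len(eois)} EOI(s) out: {eois}")
```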

Now, detection can be passive (you get alerted from your SIEM, a user reports weird behavior, etc.), or active — a.k.a. threat hunting. Active detection is where you're out there looking for trouble.


And once you detect something, that’s when the clock starts ticking.

Also, don’t underestimate human signals:

A sysadmin noticing weird CPU spikes, or your helpdesk getting calls about failed logins — these can be golden early indicators. Blend your tech intel (logs, EDR, network alerts) with your people intel, and your detection game gets way stronger.



-------------------------------------------------------------------------------------------------------------


🔍 Verifying & Triage — Don’t Just Jump the Gun

Alright, so just because you spotted something weird doesn’t mean it’s go-time. This is where verify and triage come into play, and trust me, this step saves you from chasing ghosts or overreacting to a false alarm.

So what are we really doing here?

We're asking basic but critical questions:

  • “Is this actually a real incident?”

  • “Is it something that impacts our environment or our business?”

  • “Is it worth pulling in the full IR team, or can we handle it quietly?”


Let’s be honest: not everything weird is worth a full-blown IR war room. Sometimes it’s just noise. Sometimes it’s someone forgetting their password and triggering alerts with failed logins from five devices at once.

So we verify: Is this event something that should even be on our radar?

And then we triage: Based on what we know so far, who do we need in the room?

For example, if it’s someone threatening a coworker over email, maybe that’s an HR + legal thing, not just IR.

If it's ransomware in your backups?

That’s game-on.
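
If it helps to see that routing written down, here's a tiny, purely illustrative sketch; the categories and teams are placeholders for your own escalation matrix, not an official mapping.

```python
# Illustrative triage routing: which humans get pulled in for which kind of event.
# Categories and teams are placeholders; align them with your own org chart and escalation policy.

ROUTING = {
    "employee_threat_email": ["HR", "Legal", "IR (advisory only)"],
    "ransomware_in_backups": ["Full IR team", "Leadership", "Legal", "Backup/Infra"],
    "password_lockout_noise": ["Helpdesk"],
}

def who_gets_paged(category: str) -> list[str]:
    # Default to a small verification pass, not a full war room.
    return ROUTING.get(category, ["IR on-call (verify first)"])

for incident in ("employee_threat_email", "ransomware_in_backups", "unknown_beaconing"):
    print(incident, "->", who_gets_paged(incident))
```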


Also — don’t skip getting management input here. You don’t want to be guessing whether something is “business-critical” or not. Always align technical actions with business priorities. That’s how you earn trust.


-------------------------------------------------------------------------------------------------------------

📏 Scoping: How Bad Is It, Really?

Once we know it’s real, the next question is: How deep does this thing go?

Scoping is about figuring out the blast radius.
  • Which systems are affected?

  • Which users?

  • Which parts of the network?


Maybe you found a malicious registry key with an encoded blob in one system — cool, but how many other systems have that same key?

Maybe an attacker used a known IP — where else did that IP touch your infrastructure?


To do this right, you’re gonna pull from a ton of data sources:

EDR, SIEM, logs, threat intel, PowerShell outputs, config files, even registry snapshots if needed. Sometimes the scoping phase alone is a full-on detective mission.
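
To make that concrete, here's a rough Python sketch of the "where else did that IP touch us?" hunt. The IP, paths, and log layout are placeholders; in most shops you'd run the equivalent query in your SIEM, but the idea is the same.

```python
# Minimal scoping sketch: hunting one known attacker IP across local log files.
# The IP is from a documentation range and the directories are placeholders.

from pathlib import Path

ATTACKER_IP = "203.0.113.45"
LOG_DIRS = [Path("/var/log/firewall"), Path("/var/log/proxy")]

hits: dict[str, int] = {}
for log_dir in LOG_DIRS:
    for log_file in log_dir.glob("*.log"):
        try:
            for line in log_file.read_text(errors="ignore").splitlines():
                if ATTACKER_IP in line:
                    hits[str(log_file)] = hits.get(str(log_file), 0) + 1
        except OSError:
            continue  # unreadable file: note it separately and move on

# Every file with a hit widens the blast radius, and may trigger another re-scope later.
for source, count in sorted(hits.items(), key=lambda kv: -kv[1]):
    print(f"{count:>6}  {source}")
```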


And here’s the catch: you’ll probably do this more than once.

The DAIR model treats response phases as a loop, not a one-and-done checklist. So as you uncover new info (TTPs, new users, lateral movement), you’ll loop back and rescope again — and again — until you're confident you’ve mapped the full picture.

Skipping or half-assing this step is what leads to "Oops, we missed that second C2 channel." Don’t be that IR team.

-------------------------------------------------------------------------------------------------------------

🛑 Contain — Stop the Bleed

Alright, now that we know what we’re dealing with, it’s time to freeze the attacker in place.

Containment is all about one thing: making sure the bad actor can’t keep doing damage while we plan next steps.

This isn’t just pulling the plug. It’s smart isolation:

  • Throw the host into a private VLAN

  • Block known attacker IPs

  • Sinkhole or redirect malicious domains at the DNS level

  • Lock compromised user accounts

  • Cut C2 channels with some smart ACL magic
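
For the network-side blocking in that list, here's a minimal sketch of cutting known C2 IPs at a Linux host firewall with iptables. The IPs are placeholders, it assumes root on a Linux box, and in most environments you'd push this at the perimeter or through your EDR instead; treat it as the idea, not the tool.

```python
# Hedged containment sketch: drop traffic to/from known C2 IPs on a Linux host.
# Placeholder IPs from documentation ranges; requires root and iptables if DRY_RUN is turned off.

import subprocess

C2_IPS = ["198.51.100.23", "203.0.113.45"]
DRY_RUN = True  # print the commands first so someone can sanity-check them

for ip in C2_IPS:
    # Block both directions: inbound from the IP and outbound beacons to it.
    for rule in (["-A", "INPUT", "-s", ip, "-j", "DROP"],
                 ["-A", "OUTPUT", "-d", ip, "-j", "DROP"]):
        cmd = ["iptables", *rule]
        if DRY_RUN:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```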


Now, containment doesn't always mean full-on disconnection. Sometimes you want to watch what they’re doing for a little longer — gather more data before cutting them off.

And speaking of data:

this is a golden time for evidence collection. If you isolate a host, grab logs, memory dumps, and network traces before the system reboots or changes state.

But don’t go overboard. If leadership doesn’t care about forensic review or court action, maybe you don’t need a full 100GB image of that file server.

Balance data collection vs. business value.

Also — everything you do during containment should support what’s coming next.

For example, if you’re revoking credentials now, you’re also laying the foundation for eradication and recovery later.

That’s why DAIR is a loop — not everything is siloed.


-----------------------------------------------------------------------------------------------------

🚨 Eradicate: Time to Clean House

Once you’ve caught wind of the attacker’s activity and put up the initial blocks (aka containment), it’s time to get serious: erase their fingerprints from your house.

Eradication isn’t just about kicking them out. It’s about reversing the damage they did, and making sure no doors are left open for them to stroll back in.


Here’s what this really looks like:
  • Wipe out malicious processes, hidden users, or rogue scheduled tasks.

  • Roll back from known-good backups — assuming you’ve got some.

  • Remove persistence mechanisms (e.g., registry tweaks, rootkits, or cron jobs).

  • Patch up the hole they crawled through — whether it was a CVE, weak creds, or a bad config.

  • If they messed with source code or tried sneaky fraud tricks (hello, bogus transactions), unwind all of that too.


This phase leans heavily on your earlier investigation work. What you learned while analyzing logs, running memory forensics, or doing packet captures — that’s what guides your clean-up operation.
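
As a small example of that, here's a hedged Python sketch that sweeps cron locations on a Linux host for the indicators your investigation surfaced. The IOC strings and paths are illustrative; on Windows you'd be checking scheduled tasks, Run keys, and services instead, and you'd verify every hit before deleting anything.

```python
# Eradication sketch: flag cron entries that match indicators from the investigation.
# IOC strings are placeholders; review hits manually so you don't destroy evidence
# or kill a legitimate job that just looks ugly.

from pathlib import Path

IOCS = ["evil.sh", "203.0.113.45", "base64 -d"]
CRON_LOCATIONS = [Path("/etc/crontab"),
                  *Path("/etc/cron.d").glob("*"),
                  *Path("/var/spool/cron/crontabs").glob("*")]

for cron_file in CRON_LOCATIONS:
    try:
        for lineno, line in enumerate(cron_file.read_text(errors="ignore").splitlines(), 1):
            if any(ioc in line for ioc in IOCS):
                print(f"SUSPECT  {cron_file}:{lineno}  {line.strip()}")
    except OSError:
        continue  # no permission or file vanished: note it and move on
```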


🧠 Tip: Containment ≠ Eradication. One stops the bleeding, the other heals the wound.


-----------------------------------------------------------------------------------------------------

🔧 Recover: Don’t Just Reboot — Reinforce

Alright, so you’ve cleaned up the mess. Now what?

Recovery isn’t about just flipping the switch and saying, “We’re good.” It’s about making sure we don’t end up in this same mess again.


This is where root cause analysis comes into play. Not just what happened — but why it happened. Ask yourself:
  • Why was this even possible?

  • Was it a bad policy, or a good one nobody followed?

  • Did our alerts fire and nobody noticed?

  • Was that admin using the same password since 2017?


Recovery also means:
  • Rebuilding or restoring systems with clean images.

  • Changing compromised credentials and revoking tokens.

  • Reissuing API keys or cloud creds.

  • Validating app integrity (especially for dev or prod environments).

  • Getting SMEs to test the system before it goes live again.


And yes — test everything. Don’t just patch and pray. Make sure everything works as expected before you bring it all back online.
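
One cheap way to back that up is an integrity check of the restored files against a known-good manifest. The manifest, paths, and hash below are placeholders; build yours from a clean image or a trusted release artifact.

```python
# Recovery sketch: compare restored files against known-good SHA-256 hashes before go-live.
# The manifest entry and restore path are placeholders for illustration only.

import hashlib
from pathlib import Path

KNOWN_GOOD = {
    # relative path -> SHA-256 from a clean build (placeholder value)
    "app/server.py": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}
RESTORE_ROOT = Path("/srv/restored")

for rel_path, expected in KNOWN_GOOD.items():
    candidate = RESTORE_ROOT / rel_path
    if not candidate.exists():
        print(f"MISSING   {rel_path}")
        continue
    actual = hashlib.sha256(candidate.read_bytes()).hexdigest()
    status = "OK" if actual == expected else "MISMATCH"  # a mismatch means investigate before go-live
    print(f"{status:9} {rel_path}")
```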

And when you do go live? Watch those systems like a hawk. Assume the attacker wants back in.


-----------------------------------------------------------------------------------------------------

🔁 Repeat: Because One Loop Is Never Enough

Let’s be real — IR isn’t a one-and-done process.

During the recovery or eradication phases, you’ll often discover more indicators, more footholds, or other places where the attacker left a mark.


And when that happens? You don’t just patch and move on. You loop back:

  • Re-scope the incident based on new findings

  • Re-contain the new areas

  • Re-analyze logs or malware samples

  • Re-do eradication if needed

  • Expand recovery if more systems are impacted


You’ll rinse and repeat this until:

  • There’s nothing left of the attacker’s presence

  • Stakeholders are confident the threat is neutralized

  • You’ve implemented proper safeguards to prevent a repeat


This isn’t just busywork. It’s how real IR works in the field. Especially when you’re dealing with APTs or any attacker with actual skills, you’re going to learn more as you go.



-----------------------------------------------------------------------------------------------------

Debrief: Let’s Talk Real Talk Post-Incident

Alright, the chaos has settled, the alerts have calmed down, and you’ve officially closed the incident. But we’re not done yet. It’s time to debrief—and no, this isn’t just about typing up a long, boring report. This is your chance to actually learn something from the whole mess.

This is where we sit down and talk:


  • What worked?

  • What completely flopped?

  • What can we do better next time?


Sometimes this is a polished PDF with a cover page and timeline. Sometimes it’s just well-documented notes in your IR platform or ticketing system. Format doesn’t matter. Value does.



So what should the Debrief look like?

  • Capture what happened (timeline, impact, decisions made).

  • Be honest. No fluff. Just facts (and where you’re not sure, make it clear it’s conjecture).

  • Record what tools and people helped vs. slowed things down.

  • Highlight wins—your team did something right in the heat, so give credit.


Then, use it!

  • Share it with key stakeholders (they might finally approve that EDR upgrade you’ve been asking for).

  • Use it to fix broken playbooks, fine-tune escalation paths, and close the feedback loop.

  • Schedule a follow-up (don’t ghost your own process). Did the changes you suggested actually happen?


Pro Tip:

Right after an incident, security is fresh in everyone’s mind. It’s the perfect moment to ask for improvements: better logging, better tooling, more training, etc.


The biggest mistake is skipping the debrief. That’s how lessons get lost.

And remember: The next incident is coming. What you learn here will determine whether you’re sprinting or stumbling when it hits.

-----------------------------------------------------------------------------------------------------

Stay flexible. Stay curious. And stay humble — because attackers love when defenders get lazy.

 
 
 
