Lesson 1 of 5 · 18 min read · Includes quiz

True Positive vs False Positive

Fast classification patterns

What You'll Learn

  • Classify security alerts into the four outcomes: True Positive, False Positive, False Negative, and True Negative
  • Recognize the six most common false positive patterns in a Wazuh SIEM environment
  • Identify indicators that strongly suggest a true positive requiring escalation
  • Apply a 5-question mental checklist to triage any alert in under 60 seconds
  • Understand how alert fatigue degrades SOC performance and how classification discipline prevents it
  • Connect classification concepts to real Wazuh rule IDs and alert data you will encounter in Lab 4.1

The Four Outcomes Every Analyst Must Know

Every alert that lands in your queue has exactly one ground truth: something either happened or it did not. Your detection rule either fired or it did not. The combination of those two facts produces four possible outcomes, and your entire job as an L1 analyst comes down to figuring out which one you are looking at.

                  Something Malicious         Nothing Malicious
                  Actually Happened           Actually Happened

Alert Fired       True Positive (TP) —        False Positive (FP) —
                  Correct detection           False alarm

No Alert Fired    False Negative (FN) —       True Negative (TN) —
                  Missed attack               Correct silence

The TP/FP/FN/TN decision matrix — every alert maps to exactly one quadrant

True Positive (TP): The alert is real. Wazuh rule 5551 fires for SSH brute force against linux-web-01, and you confirm 847 failed SSH attempts from 185.220.101.42 in a 5-minute window. The attack is happening. Escalate.

False Positive (FP): The alert fired, but nothing malicious happened. Wazuh rule 18152 triggers "Multiple Windows logon failures" on WIN-SERVER-01, but investigation reveals the service account svc-backup had an expired password that was renewed during a scheduled maintenance window. The detection logic was correct — multiple failures did occur — but the cause was operational, not adversarial.

False Negative (FN): The worst outcome. An attacker exfiltrated data from your network, and no alert fired at all. You will never see these in your queue — that is precisely the problem. FN reduction is an engineering task (better rules, better coverage, better telemetry), not an analyst triage task. But understanding FNs explains why you must take every TP seriously: each confirmed TP might be the one alert that catches what your other rules missed.

True Negative (TN): Nothing malicious happened, and no alert fired. This is the silent default state of your SIEM. Thousands of events per second flow through Wazuh, and the vast majority correctly produce no alert. You never see TNs in your queue, but they represent the system working exactly as designed.

As a triage analyst, your daily work is almost entirely about distinguishing TPs from FPs in the alerts that did fire. You cannot see FNs (by definition), and TNs never reach your queue. Master the TP vs FP distinction, and you have mastered the core of alert triage.
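The four-quadrant matrix reduces to two booleans: did something malicious happen, and did an alert fire? A minimal Python sketch (field names here are illustrative, not any SIEM's schema):

```python
def classify(malicious: bool, alert_fired: bool) -> str:
    """Map the two ground-truth facts to one of the four quadrants."""
    if malicious and alert_fired:
        return "TP"   # correct detection
    if not malicious and alert_fired:
        return "FP"   # false alarm
    if malicious and not alert_fired:
        return "FN"   # missed attack
    return "TN"       # correct silence
```

Note that triage only ever sees the two rows where the alert fired: you are deciding between the first two return values.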

Why False Positives Are More Dangerous Than You Think

A single false positive seems harmless — you investigate, find nothing, close the ticket. But FPs are cumulative poison. Here is what actually happens in a SOC that generates 500 alerts per day with a 90% FP rate:

  • 450 alerts are noise. Analysts spend 2-5 minutes each investigating and closing them. That is 15-37 hours of wasted analyst time per day.
  • Alert fatigue sets in. After closing the 200th FP of the week, analysts start skimming instead of investigating. Click, close, next. Click, close, next.
  • True positives get missed. The 50 real alerts are buried in 450 false alarms. When everything looks like noise, the real attack does not stand out. This is how breaches happen.
  • Desensitization compounds over time. New analysts start careful. Within weeks, the FP rate trains them to assume every alert is noise. They develop a "close first, ask questions never" reflex.

The 2023 IBM Cost of a Data Breach Report found that breaches took an average of 204 days to identify, and organizations with high alert volumes and poor triage processes sit toward the slow end of that range. Alert fatigue is not an inconvenience — it is a direct contributor to missed intrusions.

🚨

The FP trap: A 95% FP rate does not mean your SIEM is broken. Many enterprise SIEMs run at 90-99% FP rates out of the box. The problem is not the rate — it is the analyst response to the rate. Disciplined classification (even when most alerts are FPs) is what separates SOCs that catch breaches from SOCs that miss them.

FP Rate   Alerts/Day   Real Threats   Analyst Behavior
50%       500          250            Careful investigation, manageable workload
80%       500          100            Fatigue begins, shortcuts emerge
90%       500          50             Skimming is common, TPs occasionally missed
95%       500          25             "Close all" mentality, significant TP miss rate
99%       500          5              Near-total desensitization, breaches go undetected

The Six False Positive Patterns

After triaging thousands of alerts, you start recognizing the same FP patterns repeating. Knowing these patterns lets you classify faster — but never lets you skip investigation entirely. Even the most obvious-looking FP deserves 30 seconds of verification.

Six common false positive categories — learn to recognize them on sight

1. Scheduled Tasks and Cron Jobs

Pattern: Alerts fire at exactly the same time every day (or every hour, or every Monday at 02:00). The source is always the same host, the same user, running the same command.

Wazuh example: Rule 550 (file integrity monitoring — checksum changed) fires every night at 02:15 on linux-web-01 for /etc/crontab. Investigation shows the backup rotation script modifies crontab entries nightly.

Verification: Check if the timing is clock-precise. Check if the same alert appeared yesterday, last week, last month. If the pattern is perfectly periodic, it is almost certainly operational.
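The "clock-precise" check lends itself to a simple heuristic: if the gaps between successive alerts are nearly identical, the source is probably a scheduler. A sketch, assuming you have the alert timestamps in hand (the 2-minute tolerance is an arbitrary illustration):

```python
from datetime import datetime, timedelta

def looks_periodic(timestamps, tolerance=timedelta(minutes=2)):
    """Heuristic: alerts whose inter-arrival gaps are nearly identical
    (e.g. every night at 02:15) are likely scheduled-task FPs."""
    if len(timestamps) < 3:
        return False  # not enough history to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return all(abs(g - gaps[0]) <= tolerance for g in gaps)
```

A periodic match is a hypothesis, not a verdict: confirm what the job actually does before closing.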

2. Service Account Activity

Pattern: A service account (svc-*, SA-*, SYSTEM, www-data) triggers authentication or privilege alerts. Service accounts often have broad permissions and authenticate frequently as part of automated workflows.

Wazuh example: Rule 18152 (multiple Windows logon failures) on WIN-SERVER-01 from svc-monitoring. The monitoring agent authenticates every 30 seconds and generates a burst of failures whenever Active Directory replication lags.

Verification: Confirm the account name follows your organization's service account naming convention. Check if the activity matches the account's documented purpose. If the service account is doing something outside its scope — that is a TP.

3. Vulnerability Scanners

Pattern: A massive burst of alerts from a single internal IP, hitting dozens of hosts and ports in rapid succession. The source IP belongs to your vulnerability scanning platform (Nessus, Qualys, Rapid7).

Wazuh example: Rule 87702 (web server 400 error codes) fires 200 times in 3 minutes from 10.0.0.50. The IP belongs to the Qualys scanner running its weekly Thursday scan.

Verification: Check the source IP against your asset inventory. Confirm the scan schedule with your vulnerability management team. If the scanning IP is not in your known scanner list — that is a TP (an attacker is scanning your network).

4. Legitimate Admin Tools

Pattern: Alerts for suspicious tools or commands that are actually routine admin operations. PowerShell, PsExec, WinRM, SSH from jump hosts — all legitimate when used by authorized administrators from authorized endpoints.

Wazuh example: Rule 60103 (new process launched) fires for psexec.exe on WIN-SERVER-01. The source is the IT admin's workstation, it is 10:30 AM on a weekday, and there is an open change ticket for patching that server.

Verification: Check the source host and user. Is it an admin? Are they working from an authorized endpoint? Is there a change ticket? If PsExec runs from an unknown workstation at 3 AM with no ticket — that is a TP.

5. Network Noise

Pattern: Internet-facing sensors generate a constant stream of low-severity alerts from random external IPs. Port scans, HTTP probes, DNS lookups — the background radiation of the internet.

Wazuh example: Rule 5551 (SSH brute force) fires from 194.26.29.120 against linux-web-01 with 12 failed attempts over 10 minutes. This is typical automated scanning — low volume, no follow-up activity, the IP appears on no threat intel feeds.

Verification: Check the volume (12 attempts is noise; 8,000 attempts is a campaign). Check for follow-up activity from the same IP. Check threat intel for the source IP. If the brute force is low-volume with no further activity, it is likely automated noise.

6. Threshold Triggers

Pattern: A rule fires because a numeric threshold was crossed, but the threshold is set too aggressively for your environment. Five failed logins in 10 minutes trigger an alert, but your help desk staff routinely fat-finger passwords 3-4 times before succeeding.

Wazuh example: Rule 18152 triggers on WIN-SERVER-01 for user jsmith with 6 failed logins followed by a successful login. John Smith typed his new password wrong six times after the quarterly password rotation.

Verification: Look for the successful login that follows the failures. Check if there was a recent password change. If there are 50 failures with no success, and the source IP is external — that is a TP.
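The success-after-failures check can be sketched as a one-function heuristic (the `max_failures` cutoff of 10 is illustrative, not a standard):

```python
def threshold_fp_likely(events, max_failures=10):
    """Pattern #6 heuristic: a short run of failures ending in a success
    (fat-fingered or recently rotated password) leans FP; a long run of
    failures with no success at all leans TP."""
    failures = sum(1 for e in events if e == "failure")
    succeeded = "success" in events
    return succeeded and failures <= max_failures
```

Six failures then a success returns True (likely FP); fifty failures and no success returns False, which sends you back to full investigation.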

💡

Speed tip: When you see any of these six patterns, your brain should immediately generate a hypothesis: "This is probably a [scheduled task / service account / scanner / admin tool / noise / threshold] FP." Then spend 30-60 seconds verifying. If the evidence supports your hypothesis, classify and close. If anything does not fit the pattern — slow down, investigate fully.

True Positive Indicators — When to Sound the Alarm

FP recognition is about speed. TP recognition is about accuracy. When the following indicators appear, the probability of a genuine threat jumps dramatically. The more indicators stack up on a single alert, the higher the urgency.

Unusual Source IP or Geography

The source IP is external, from a country your organization does not do business with, or from a hosting/VPN provider known for abuse. A Wazuh authentication alert from 185.220.101.42 (a known Tor exit node) is fundamentally different from one originating from your office subnet.

Off-Hours Activity

An alert fires at 03:17 AM on a Saturday. The affected user is a marketing coordinator who has never logged in outside business hours. Off-hours activity from non-IT accounts is one of the strongest TP indicators because attackers often operate during the target's off-hours to avoid real-time detection.

Known-Bad IOCs

The alert references an IP, domain, or file hash that appears in your threat intelligence feeds. Wazuh can integrate with MISP and other feeds to enrich alerts automatically (you will learn this in Module 5). If the IOC matches a known campaign, treat it as a high-confidence TP until proven otherwise.

Technique Chaining

A single failed SSH login is noise. But a failed SSH login followed by a successful login, followed by a new user account creation, followed by a service installation, followed by outbound connections to a new external IP — that is a kill chain. When multiple alerts correlate across a short time window on the same host or user, the probability of a TP compounds with each additional stage.

First-Time Observations

An action that has never been seen before for a given user, host, or service account. The first time svc-backup runs PowerShell is suspicious. The first time linux-web-01 connects to an IP in Eastern Europe is suspicious. Baselines give you the power to spot anomalies instantly.

No single indicator is definitive. An off-hours login might be a developer deploying a hotfix. A known-bad IP might be a shared hosting provider with both malicious and legitimate tenants. TP indicators are probability multipliers, not certainties. Stack multiple indicators before escalating with high confidence.

The 5-Question Mental Checklist

When an alert appears in your queue, run through these five questions in order. They take 30-60 seconds and give you a structured framework instead of a gut feeling.

Question 1: Is this a known pattern? Does this alert match one of the six FP patterns above? Same time, same host, same account as yesterday? If yes, verify and classify as FP.

Question 2: What is the source? Internal or external IP? Known admin or unknown user? Service account or human account? Authorized endpoint or unknown device?

Question 3: When did this happen? Business hours or off-hours? Weekday or weekend? During a maintenance window or during normal operations? Does the timing align with known scheduled tasks?

Question 4: Is there correlated activity? Is this a single isolated alert, or are there other alerts from the same host, user, or IP within the same time window? Isolated alerts lean FP. Correlated clusters lean TP.

Question 5: Does the context match the action? Is a finance user running PowerShell? Is a web server making outbound FTP connections? Does the action make sense for the actor and the asset? If not, investigate deeper.

┌──────────────────────────────────────────────────────┐
│            ALERT TRIAGE MENTAL CHECKLIST             │
├──────────────────────────────────────────────────────┤
│  1. Known FP pattern?  → YES → Verify → Close        │
│                        → NO  → Continue              │
│                                                      │
│  2. Source?            → Internal admin → Lower risk │
│                        → External/unknown → Higher   │
│                                                      │
│  3. Timing?            → Business hours → Context ok │
│                        → Off-hours → Suspicious      │
│                                                      │
│  4. Correlated alerts? → Isolated → Likely FP        │
│                        → Clustered → Likely TP       │
│                                                      │
│  5. Context match?     → Action fits role → Lower    │
│                        → Action anomalous → Higher   │
├──────────────────────────────────────────────────────┤
│  LOW indicators   → Classify FP, document, close     │
│  MIXED indicators → Investigate 5 more minutes       │
│  HIGH indicators  → Escalate to L2 immediately       │
└──────────────────────────────────────────────────────┘
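One way to see how the three outcome bands fall out of the five answers is a toy scoring function. The thresholds below are teaching aids for this lesson, not an official Wazuh or SOC formula:

```python
def triage_verdict(known_fp_pattern, external_source, off_hours,
                   correlated, context_mismatch):
    """Score the five checklist questions: each risk-raising answer
    adds a point, then map the total to an outcome band."""
    risk = sum([external_source, off_hours, correlated, context_mismatch])
    if known_fp_pattern and risk == 0:
        return "FP"          # verified pattern, nothing contradicts it
    if risk >= 3:
        return "ESCALATE"    # HIGH indicators: to L2 immediately
    if risk >= 1:
        return "INVESTIGATE" # MIXED indicators: dig 5 more minutes
    return "FP"              # LOW indicators: document and close
```

Note the asymmetry: a known FP pattern only short-circuits to "FP" when no other question raises risk. Anything that does not fit the pattern forces a full pass.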

Real Wazuh Alert Walkthroughs

Let us apply the checklist to three real Wazuh alerts — the same types you will encounter in Lab 4.1.

Walkthrough 1: Rule 5551 — SSH Brute Force

{
  "rule": {
    "id": "5551",
    "level": 10,
    "description": "Multiple SSH authentication failures"
  },
  "agent": {
    "name": "linux-web-01",
    "ip": "10.0.0.10"
  },
  "data": {
    "srcip": "185.220.101.42",
    "srcport": "44821"
  },
  "timestamp": "2026-02-22T03:17:42.000Z"
}

Checklist:

  1. Known pattern? SSH brute force from external IPs is common internet noise — Pattern #5.
  2. Source? External IP 185.220.101.42. Quick check: this is a known Tor exit node.
  3. Timing? 03:17 AM — off-hours, but irrelevant for automated scanning.
  4. Correlated? Check for successful logins from this IP after the failures. Check for any other alerts on linux-web-01 in the same window.
  5. Context? SSH brute force against an internet-facing web server is expected background noise.

Verdict: If no successful login followed the failures and no other alerts correlate — FP (network noise). If a successful login followed — TP, escalate immediately (compromised credentials).

Walkthrough 2: Rule 18152 — Multiple Windows Logon Failures

{
  "rule": {
    "id": "18152",
    "level": 10,
    "description": "Multiple Windows logon failures"
  },
  "agent": {
    "name": "WIN-SERVER-01",
    "ip": "10.0.0.20"
  },
  "data": {
    "win.eventdata.targetUserName": "admin.jdoe",
    "win.eventdata.ipAddress": "10.0.0.5"
  },
  "timestamp": "2026-02-22T14:32:10.000Z"
}

Checklist:

  1. Known pattern? Multiple logon failures could be Pattern #6 (threshold). Check if a success follows.
  2. Source? Internal IP 10.0.0.5. Is this the user's workstation? An admin jump host?
  3. Timing? 14:32 — mid-afternoon, business hours. Normal.
  4. Correlated? Any other alerts on WIN-SERVER-01? Any lateral movement indicators?
  5. Context? User admin.jdoe is a domain admin. Failed logins from their workstation during business hours after a password rotation = Pattern #6.

Verdict: If the source IP is the user's assigned workstation, it is business hours, and a success follows — FP (threshold trigger after password change). If the source IP is unknown or the failures continue without success — investigate further.

Walkthrough 3: Rule 550 — FIM Checksum Changed

{
  "rule": {
    "id": "550",
    "level": 7,
    "description": "Integrity checksum changed"
  },
  "agent": {
    "name": "linux-web-01",
    "ip": "10.0.0.10"
  },
  "syscheck": {
    "path": "/etc/passwd",
    "event": "modified"
  },
  "timestamp": "2026-02-22T03:22:05.000Z"
}

Checklist:

  1. Known pattern? FIM on /etc/passwd could be Pattern #1 if it happens during user provisioning. But /etc/passwd changes are rare and significant.
  2. Source? Agent linux-web-01. Who modified the file? Check the Wazuh alert for the user context.
  3. Timing? 03:22 AM — five minutes after the SSH brute force alert on the same host.
  4. Correlated? This fires 5 minutes after rule 5551 on the same host. That is technique chaining: brute force → credential compromise → account creation.
  5. Context? /etc/passwd modification at 3 AM on a web server, 5 minutes after a brute force attack — this does not match any legitimate operational pattern.

Verdict: TP — escalate immediately. The temporal correlation with the SSH brute force strongly suggests credential compromise followed by account creation. This is a kill chain in progress.
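The correlation that drives this verdict is easy to compute directly from the alert JSON. Here are the two alerts from Walkthroughs 1 and 3, trimmed to the fields that matter (same host, timestamps minutes apart):

```python
import json
from datetime import datetime

ssh = json.loads('{"rule": {"id": "5551"}, "agent": {"name": "linux-web-01"},'
                 ' "timestamp": "2026-02-22T03:17:42.000Z"}')
fim = json.loads('{"rule": {"id": "550"}, "agent": {"name": "linux-web-01"},'
                 ' "timestamp": "2026-02-22T03:22:05.000Z"}')

fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
gap = (datetime.strptime(fim["timestamp"], fmt)
       - datetime.strptime(ssh["timestamp"], fmt))

same_host = ssh["agent"]["name"] == fim["agent"]["name"]
print(same_host, gap.total_seconds())  # True 263.0 (under five minutes)
```

Same host, 263 seconds apart: exactly the clustered signal that Question 4 of the checklist is built to catch.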

🚨

Walkthrough 3 is the critical lesson. Rule 550 by itself is a level 7 alert that many analysts would classify as routine. But the 5-minute correlation with rule 5551 on the same host transforms it from "probably a FP" to "confirmed attack in progress." Context and correlation are everything. This is exactly the kind of analysis you will practice in Lab 4.1.

Building Classification Speed

Speed matters in triage. Not because your manager is watching your metrics (though they might be), but because every minute you spend on a FP is a minute a TP sits unaddressed in the queue. The goal is not to rush — it is to eliminate wasted time on alerts you can classify confidently.

Classification Speed      Analyst Level                 Method
3-5 minutes per alert     New analyst                   Reading every field, checking documentation, asking teammates
60-90 seconds per alert   Experienced L1 (3-6 months)   Mental checklist, pattern recognition, keyboard shortcuts
15-30 seconds per alert   Senior L1 / L2                Instant pattern recognition, automated enrichment, tuned alert queue

You are aiming for the 60-90 second range by the end of this module. That means:

  • Know your FP patterns cold. When you see a scheduled task FP, you should recognize it in under 10 seconds.
  • Learn your environment's baselines. Which service accounts are normal? What time do backups run? What IPs are scanners?
  • Use Wazuh filters efficiently. Filter by agent, rule ID, severity, and time range. Do not scroll through the entire alert queue.
  • Document as you go. Every FP classification should include a one-line note: "FP — svc-backup password rotation, matches Pattern #2." This builds institutional knowledge and helps tuning engineers reduce FP volume.
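Those one-line FP notes pay off when you aggregate them. A sketch of the grouping idea, assuming alerts with flattened `agent` and `rule` fields (real Wazuh data nests these):

```python
from collections import Counter

def top_repeaters(alerts, n=3):
    """Count (agent, rule) pairs: repeat offenders usually map to one
    of the six FP patterns and are prime tuning candidates."""
    return Counter((a["agent"], a["rule"]) for a in alerts).most_common(n)
```

Hand the top of this list to your tuning engineers and the whole queue gets quieter.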
💡

Lab 4.1 preview: In the upcoming lab, you will face 30 pre-loaded Wazuh alerts spanning agents linux-web-01, WIN-SERVER-01, dns-server-01, and fw-edge-01. The alerts include SSH brute force (rule 5551), Windows logon failures (rule 18152), FIM changes (rule 550), web server errors (rule 87702), and process anomalies (rule 60103). Your target is 85% accuracy — which means you can misclassify at most 4 out of 30. Apply the mental checklist, recognize the FP patterns, and watch for temporal correlations between alerts on the same host.

Key Takeaways

  • Four outcomes define all alerts: TP (real threat, alert fired), FP (no threat, alert fired), FN (real threat, no alert), TN (no threat, no alert). Your triage work focuses on distinguishing TPs from FPs.
  • False positives cause alert fatigue. At 90%+ FP rates, analysts unconsciously stop investigating, and real threats get missed. Classification discipline is the antidote.
  • Six FP patterns repeat endlessly: Scheduled tasks, service accounts, vulnerability scanners, admin tools, network noise, and threshold triggers. Learn them and you can classify faster without sacrificing accuracy.
  • TP indicators stack. No single indicator confirms a TP, but unusual source IPs + off-hours timing + correlated alerts + anomalous context together build high-confidence classifications.
  • Use the 5-question checklist on every alert: Known pattern? Source? Timing? Correlated? Context match? This eliminates gut-feel decisions and creates repeatable, defensible triage.
  • Correlation is the force multiplier. A single alert is ambiguous. Two correlated alerts on the same host in the same time window tell a story. Always check for related activity.
  • Speed comes from pattern recognition, not shortcuts. Fast analysts are not skipping steps — they have internalized the checklist through thousands of repetitions. Lab 4.1 is where you start building that muscle memory.

What's Next

You now have the mental framework for classifying alerts: the four outcomes, the six FP patterns, the TP indicators, and the 5-question checklist. In Lesson 4.2: Context Is Everything, you will learn the contextual factors that transform ambiguous alerts into confident verdicts — asset value, user role, time-of-day baselines, geographic anomalies, and organizational context. Context is what lets you answer Question 5 of the checklist ("Does the context match the action?") with authority instead of guesswork.

But first — practice. In Lab 4.1: Triage Under Pressure, you will open a Wazuh dashboard loaded with 30 pre-classified alerts and race to triage them all. The lab measures your accuracy against the answer key, and your target is 85%. Apply everything from this lesson. Recognize the patterns. Use the checklist. Watch for correlations. This is the skill that defines a SOC analyst.

Knowledge Check: True Positive vs False Positive

10 questions · 70% to pass

1

A Wazuh rule fires an alert for suspicious activity, but upon investigation the analyst determines no malicious activity occurred. Which classification applies?

2

An attacker exfiltrates 2GB of customer data over DNS tunneling, but no SIEM alert fires for the activity. What is this outcome called?

3

A SOC processes 500 alerts per day with a 95% false positive rate. How many true threats does the team need to find among the noise, and what is the primary risk?

4

An analyst sees Wazuh rule 550 (integrity checksum changed) fire for /etc/crontab on linux-web-01 every night at exactly 02:15 AM. Which false positive pattern does this most likely represent?

5

In the 5-question mental checklist, what should an analyst do if the answer to Question 4 ('Is there correlated activity?') reveals three more alerts on the same host within the same 10-minute window?

6

Wazuh rule 87702 fires 200 times in 3 minutes from internal IP 10.0.0.50 targeting multiple hosts. The analyst checks the asset inventory and finds 10.0.0.50 is registered as the Qualys vulnerability scanner. What is the correct classification?

7

In Lab 4.1, you encounter a Wazuh rule 5551 (SSH brute force) alert on agent linux-web-01 from external IP 185.220.101.42 at 03:17 AM with 847 failed attempts in 5 minutes. No successful login follows. Which classification is correct?

8

In Lab 4.1, an analyst sees rule 550 (FIM checksum changed) fire on /etc/passwd on linux-web-01 at 03:22 AM — exactly 5 minutes after rule 5551 (SSH brute force) fired on the same host. Why does the temporal correlation change the classification?

9

An analyst is triaging an alert and identifies that the source is an internal service account (svc-backup), the time is 02:00 AM on a Sunday, and the action is a large file transfer to an external IP. Which mental checklist questions produce conflicting signals?

10

In Lab 4.1, you encounter alerts from four agents: linux-web-01, WIN-SERVER-01, dns-server-01, and fw-edge-01. Rule 60103 fires on WIN-SERVER-01 for psexec.exe launched from an unknown internal IP at 02:47 AM with no associated change ticket. Using the mental checklist, what is the most appropriate initial action?
