What You'll Learn
- Read and interpret the full structure of a Wazuh alert — from rule metadata to raw event data
- Extract the 8 critical fields every SOC analyst must check in every alert
- Classify alerts by severity and understand what each level demands from the analyst
- Distinguish between alerts that require investigation and alerts that are informational noise
- Build an Alert Anatomy Reference Card you can use during triage shifts
Lab Overview
| Detail | Value |
|---|---|
| Lab Profile | lab-wazuh |
| Containers | Wazuh Manager, Wazuh Indexer, Wazuh Dashboard |
| Estimated Time | 45–60 minutes |
| Difficulty | Beginner |
| Browser Access | Wazuh Dashboard (Web UI) |
| Pre-Loaded Data | 505 alerts across 10 log sources, 4 agents |
| Deliverable | Alert Anatomy Reference Card for 10 analyzed alerts |
Why Alert Anatomy Matters. Every second you spend figuring out what an alert is telling you is a second you're not spending deciding what to do about it. SOC analysts who can instantly parse an alert's structure triage 3-4x faster than those who have to hunt for fields each time. This lab builds that muscle memory.
The Scenario
You've just started your first solo shift as a Tier 1 SOC analyst. Your shift lead hands you a printed card: "For the first hour, don't action anything. Just read alerts. Understand their structure. By the end of the hour, you should be able to look at any alert and instantly know: what happened, where, when, how bad, and what to do next."
Your Wazuh environment has 505 pre-loaded alerts from a realistic 24-hour period across 4 systems. Your job is to select 10 alerts of varying severity, dissect each one field-by-field, and build a reference card you'll use for the rest of the course.
Part 1: Navigate to the Alert Queue
Step 1: Open Security Events
After starting your lab and logging into the Wazuh Dashboard (admin / SecretPassword), navigate to the alert queue:
- Click Modules in the left sidebar
- Select Security Events
- You should see a table of alerts sorted by timestamp
Set the Time Range. Click the time picker (top-right) and set it to Last 24 hours to see all pre-loaded alerts. If you see fewer than expected, expand the range.
Step 2: Understand the Alert Table Columns
The default table shows a summary view. Each row is one alert with these key columns:
| Column | What It Tells You |
|---|---|
| Time | When the alert was generated (UTC) |
| Agent | Which monitored system produced the event |
| Rule description | Human-readable summary of what triggered |
| Rule level | Severity: 0-3 (informational), 4-7 (medium), 8-11 (high), 12-15 (critical) |
| Rule ID | Unique identifier for the detection rule |
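The severity bands used throughout this lab can be captured in a small helper. This is an illustrative sketch, not part of Wazuh — the function name and tier labels are this lab's convention:

```python
def severity_tier(rule_level: int) -> str:
    """Map a Wazuh rule.level (0-15) to the triage tier used in this lab."""
    if rule_level >= 12:
        return "critical"       # immediate action, top of the triage queue
    if rule_level >= 8:
        return "high"           # suspicious, needs investigation
    if rule_level >= 4:
        return "medium"         # policy violations, contextual value
    return "informational"      # heartbeats, status, successful operations

# Example: a level-10 alert lands in the 'high' tier
print(severity_tier(10))  # high
```

You'll apply these same boundaries when filtering alerts by severity in Part 3.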
Part 2: The 8 Critical Fields
Every Wazuh alert contains dozens of fields, but SOC analysts consistently check these 8 first. Click on any alert to expand it and find each field.
| # | Field | Location in Alert | Why It Matters |
|---|---|---|---|
| 1 | rule.id | Top of expanded alert | Identifies which detection rule fired — lets you look up the rule definition |
| 2 | rule.description | Top of expanded alert | Plain-English explanation of what was detected |
| 3 | rule.level | Top of expanded alert | Severity determines your response priority |
| 4 | agent.name | Agent section | Which host is affected — determines who to contact |
| 5 | timestamp | Top of expanded alert | When it happened — critical for timeline building |
| 6 | data.srcip or data.srcuser | Data section | Who or what caused the event |
| 7 | data.dstuser or data.dstip | Data section | Who or what was targeted |
| 8 | full_log | Bottom of expanded alert | The raw log line — the ground truth that the rule matched against |
full_log Is Your Best Friend. When in doubt, always read the full_log field. Rule descriptions can be generic ("Authentication failure"), but the raw log contains the specific username, IP, port, timestamp, and error code. Analysts who skip full_log miss critical context.
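To see why the raw line matters, here is a sketch of extracting the specifics from a typical sshd failure line with a regular expression. The sample log line and the pattern are illustrative — real full_log content varies by log source:

```python
import re

# Hypothetical full_log value from an SSH authentication-failure alert
full_log = ("Oct 12 14:03:27 linux-web-01 sshd[1423]: Failed password for "
            "invalid user admin from 203.0.113.45 port 52144 ssh2")

pattern = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<srcip>\S+) port (?P<port>\d+)"
)
m = pattern.search(full_log)
if m:
    # rule.description would only say "authentication failure";
    # the raw line gives you the who, the where-from, and the port.
    print(m.group("user"), m.group("srcip"), m.group("port"))
```

In the dashboard you read these values by eye rather than with a regex, but the point stands: the specifics live in full_log, not in the rule description.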
Part 3: Analyze 10 Alerts
You'll now select and analyze 10 alerts — at least 2 from each severity tier. Use the filters in the Wazuh Dashboard to find alerts at each level.
Finding Alerts by Severity
Use the search bar to filter by severity tier:

- Critical (12-15): rule.level: >= 12
- High (8-11): rule.level: >= 8 AND rule.level: <= 11
- Medium (4-7): rule.level: >= 4 AND rule.level: <= 7
- Informational (0-3): rule.level: <= 3
Alerts 1-2: Critical Severity (Level 12-15)
Find two alerts at level 12 or above. These represent the most dangerous events in your environment.
For each alert, fill in this template:
ALERT #___
─────────────────────────────
Rule ID: [e.g., 5712]
Rule Description: [e.g., "SSHD brute force trying to get access"]
Rule Level: [e.g., 15]
Agent: [e.g., linux-web-01]
Timestamp: [exact time from the alert]
Source: [IP or username that caused it]
Target: [IP, user, or file affected]
full_log summary: [first 100 characters of the raw log]
ANALYST ASSESSMENT:
- What happened? [one sentence]
- Is this normal? [Yes/No + why]
- What would I do? [Investigate / Tune / Escalate / Close]
Level 12+ alerts require immediate action in production. In a real SOC, these go to the top of the triage queue. Anything level 15 (e.g., "Windows audit log cleared") may require waking up the on-call senior analyst.
Alerts 3-4: High Severity (Level 8-11)
Find two alerts at level 8-11. These are suspicious events that need investigation but aren't necessarily confirmed threats.
Fill in the same template for each. Pay attention to: Is the source IP internal or external? Is the target account a service account or a human?
Alerts 5-6: Medium Severity (Level 4-7)
Find two alerts at level 4-7. These are often policy violations, configuration issues, or low-confidence detections.
Fill in the same template. Ask yourself: Would this alert add value during an active incident investigation, even though it's not high-severity on its own?
Alerts 7-8: Informational (Level 0-3)
Find two informational alerts. These are typically agent heartbeats, successful operations, or system status messages.
Fill in the same template. Ask yourself: Under what circumstances would this "noise" alert suddenly become important? (Hint: absence of heartbeats = agent is down = possible tampering.)
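The hint above — silence can be the signal — can be sketched as a check for agents whose last heartbeat is older than a threshold. The agent names match this lab; the timestamps and the 10-minute threshold are made up for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical last-heartbeat timestamps per agent (UTC)
last_heartbeat = {
    "linux-web-01":  datetime(2024, 10, 12, 14, 58),
    "WIN-SERVER-01": datetime(2024, 10, 12, 14, 59),
    "dns-server-01": datetime(2024, 10, 12, 13, 10),  # silent for ~2 hours
    "fw-edge-01":    datetime(2024, 10, 12, 14, 57),
}

now = datetime(2024, 10, 12, 15, 0)
threshold = timedelta(minutes=10)

# An agent that stops sending "noise" may be down -- or tampered with
silent = [name for name, ts in last_heartbeat.items() if now - ts > threshold]
print(silent)  # ['dns-server-01']
```

Wazuh performs this kind of check for you (agents are marked disconnected in the agent list), but the logic is worth internalizing: the absence of informational alerts is itself an alertable condition.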
Alerts 9-10: Your Choice — Most Interesting Alerts
Browse the full alert queue and pick the two alerts you find most interesting or unusual. They can be any severity level.
Fill in the same template, but add one extra field:
WHY I PICKED THIS: [What caught your attention?]
Part 4: Compare Across Agents
Now that you've analyzed 10 alerts, compare the alert patterns across the 4 agents in your environment:
| Agent | Type | Expected Alert Pattern |
|---|---|---|
| linux-web-01 | Linux web server | SSH attempts, sudo events, web application alerts, FIM changes |
| WIN-SERVER-01 | Windows server | Logon events (4624/4625), service installs (7045), process creation (4688) |
| dns-server-01 | DNS server | Query logs, zone transfer attempts, unusual resolution patterns |
| fw-edge-01 | Edge firewall | Blocked connections, port scans, policy violations |
Step: Agent-Level Analysis
For each agent, search for its alerts:
agent.name: linux-web-01
Answer these questions:
- How many total alerts does this agent have?
- What is the highest severity alert?
- What rule fires most frequently? (Use the rule.id field to count)
Repeat for all 4 agents and record in a comparison table.
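The three questions above can be answered in one pass over the alerts. A minimal sketch, assuming alerts exported as dictionaries — the sample data and field keys here are hypothetical stand-ins for the dashboard fields:

```python
from collections import Counter

# Hypothetical sample of exported alerts
alerts = [
    {"agent": "linux-web-01", "rule_id": 5710, "level": 5},
    {"agent": "linux-web-01", "rule_id": 5710, "level": 5},
    {"agent": "linux-web-01", "rule_id": 5712, "level": 10},
    {"agent": "fw-edge-01",   "rule_id": 4101, "level": 4},
]

def summarize(agent_name):
    mine = [a for a in alerts if a["agent"] == agent_name]
    rule_counts = Counter(a["rule_id"] for a in mine)
    return {
        "total": len(mine),                              # how many alerts?
        "highest_level": max(a["level"] for a in mine),  # worst severity seen
        "top_rule": rule_counts.most_common(1)[0][0],    # most frequent rule.id
    }

print(summarize("linux-web-01"))
# {'total': 3, 'highest_level': 10, 'top_rule': 5710}
```

In the dashboard, the same numbers come from the alert count, the Rule level column sorted descending, and the rule.id aggregation — this sketch just makes explicit what you're computing.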
Part 5: Build Your Alert Anatomy Reference Card
Using your 10 analyzed alerts, create a one-page reference card with:
- Field Quick Reference: The 8 critical fields and where to find them
- Severity Guide: What each level means and your expected response
- Agent Baseline: Normal alert patterns for each of the 4 agents
- Red Flags: 3 patterns you noticed that would make you investigate immediately
Keep This Card. You'll use it in every subsequent lab. Production SOC analysts keep similar cheat sheets pinned to their monitors. The best analysts constantly refine their reference cards based on new patterns they discover.
Deliverable Checklist
Before completing the lab, ensure you have:
- 10 Alert Analysis Worksheets — complete templates for all 10 alerts with all 8 fields extracted
- Severity Representation — at least 2 alerts from each of the 4 severity tiers
- Agent Comparison Table — alert count, highest severity, and most frequent rule per agent
- Alert Anatomy Reference Card — one-page summary with field reference, severity guide, agent baselines, and red flags
Key Takeaways
- Every Wazuh alert has 8 critical fields that SOC analysts check first: rule.id, rule.description, rule.level, agent.name, timestamp, source, target, and full_log
- Severity levels determine triage priority: Level 12+ demands immediate action, Level 8-11 needs investigation, Level 4-7 is contextual, Level 0-3 is informational
- The full_log field contains the raw event and is the ultimate source of truth — never skip it
- Different agent types produce different alert patterns — knowing the baseline helps you spot anomalies instantly
- Building and maintaining an Alert Anatomy Reference Card makes you a faster, more consistent analyst
What's Next
In Lab 2.3 — Build a SOC Dashboard, you'll take the alert data you've been exploring and build custom visualizations. Instead of reading alerts one by one, you'll create dashboards that give you an instant picture of your environment's security posture — the same dashboards SOC managers check every morning.
Lab Challenge: Alert Anatomy
10 questions · 70% to pass
1. Open a Windows 4625 (failed logon) alert on WIN-SERVER-01. What is the rule.id for this brute force detection?
2. Find the highest severity alert in the entire queue (level 15). What event does it describe?
3. Navigate to the agent list. How many monitored agents are reporting alerts in your lab environment?
4. Expand any alert and find the full_log field. What type of information does this field contain that rule.description does NOT?
5. Filter alerts by agent.name: fw-edge-01. What type of events does the firewall agent primarily report?
6. Find an SSH authentication failure alert on linux-web-01 (rule 5551). What field reveals the attacker's IP address?
7. What is the rule.level for informational agent heartbeat/status alerts? Search for agent status events to find them.
8. Find a Windows Event ID 7045 (new service installed) alert on WIN-SERVER-01. What field contains the service name that was installed?
9. You notice that linux-web-01 has both SSH brute force alerts AND sudo escalation alerts. If you were triaging, which would you investigate first and why?
10. A level 7 alert fires for a DNS query to a suspicious domain on dns-server-01. The rule.description says 'DNS query for known malicious domain.' What is the appropriate analyst response?