Threat Modeling with Diagrams in 2026 — STRIDE, Data Flow & Security Review
Threat modeling is the practice of systematically identifying security threats before they become vulnerabilities in production. The foundation of every effective threat model is a diagram — a visual representation of how data moves through your system, where trust boundaries exist, and which components are exposed to attack. Without a diagram, threat modeling devolves into guesswork and checklists disconnected from your actual architecture.
Why Threat Modeling Needs Diagrams
Security teams who skip the diagramming step tend to produce threat models that miss entire attack surfaces. A written description of your system will always leave out implicit trust assumptions — the internal API that happens to be reachable from the DMZ, the message queue that passes unsanitized payloads between services, the database that two microservices share without an access control layer between them.
Diagrams make these blind spots visible. When you draw a line from Service A to Database B, you are forced to ask: is this connection encrypted? Who authenticates? What happens if Service A is compromised? The visual format also makes threat models accessible to non-security engineers. A developer reviewing a data flow diagram can immediately spot whether their service appears and whether the trust boundaries around it are correct, without reading a 40-page security document.
- Completeness. Diagrams force you to enumerate every component, data flow, and trust boundary — reducing the chance of overlooking an attack surface.
- Communication. A single diagram aligns security engineers, developers, and product managers on how the system actually works, not how each person imagines it.
- Traceability. Each threat can be pinned to a specific element on the diagram, making it easy to track mitigations and verify fixes.
Data Flow Diagrams (DFDs) for Security
The data flow diagram is the standard visual notation for threat modeling. Unlike architecture diagrams that focus on deployment topology, DFDs focus on how data moves — which is exactly what attackers care about. A DFD uses four element types: external entities (users, third-party APIs), processes (your services and applications), data stores (databases, file systems, caches), and data flows (the arrows connecting everything). On top of these, you draw trust boundaries — dashed lines that separate zones of different privilege levels.
Here is a simple DFD for a web application with an API gateway, expressed in Mermaid syntax so it can be version-controlled alongside your code:
```mermaid
graph LR
    subgraph "Trust Boundary: Internet"
        User["External Entity: Browser User"]
    end
    subgraph "Trust Boundary: DMZ"
        Gateway["Process: API Gateway"]
        WAF["Process: WAF"]
    end
    subgraph "Trust Boundary: Internal Network"
        AuthSvc["Process: Auth Service"]
        AppSvc["Process: App Service"]
        DB[("Data Store: PostgreSQL")]
        Cache[("Data Store: Redis Cache")]
    end
    User -->|"HTTPS Request"| WAF
    WAF -->|"Filtered Request"| Gateway
    Gateway -->|"JWT Validation"| AuthSvc
    Gateway -->|"API Call"| AppSvc
    AppSvc -->|"SQL Query"| DB
    AppSvc -->|"Session Data"| Cache
```

Each arrow in this diagram is a potential attack vector. Each trust boundary crossing is a point where authentication, encryption, or input validation must be verified. The diagram becomes the checklist — you walk through every element and every flow systematically, rather than relying on memory or intuition.
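That systematic walk-through can itself be scripted. Here is a minimal sketch in Python: the flow list mirrors the diagram above, and the trust-zone assignments are illustrative assumptions, not output of any particular tool.

```python
# Each data flow from the DFD: (source, source zone, destination, destination zone, label).
FLOWS = [
    ("Browser User", "Internet", "WAF", "DMZ", "HTTPS Request"),
    ("WAF", "DMZ", "API Gateway", "DMZ", "Filtered Request"),
    ("API Gateway", "DMZ", "Auth Service", "Internal", "JWT Validation"),
    ("API Gateway", "DMZ", "App Service", "Internal", "API Call"),
    ("App Service", "Internal", "PostgreSQL", "Internal", "SQL Query"),
    ("App Service", "Internal", "Redis Cache", "Internal", "Session Data"),
]

def boundary_crossings(flows):
    """Return the flows whose source and destination lie in different trust zones."""
    return [f for f in flows if f[1] != f[3]]

for src, _, dst, _, label in boundary_crossings(FLOWS):
    print(f"REVIEW: {src} -> {dst} ({label}): check authn, encryption, input validation")
```

For this diagram the script flags three crossings — the Internet-to-DMZ entry and the two DMZ-to-internal calls — which is exactly the review checklist the prose describes.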
| Threat Category | Description | Diagram Element | Mitigation Example |
|---|---|---|---|
| Spoofing | Pretending to be another user or system | External Entity | Authentication (OAuth 2.0, mTLS, API keys) |
| Tampering | Modifying data in transit or at rest | Data Flow / Data Store | Integrity checks (HMAC, digital signatures) |
| Repudiation | Denying an action was performed | Process / External Entity | Audit logging, non-repudiation tokens |
| Information Disclosure | Exposing data to unauthorized parties | Data Flow / Data Store | Encryption (TLS, AES-256), access controls |
| Denial of Service | Making a service unavailable | Process / Data Flow | Rate limiting, auto-scaling, circuit breakers |
| Elevation of Privilege | Gaining unauthorized access levels | Trust Boundary | Least privilege, RBAC, input validation |
The STRIDE Framework Visualized
STRIDE is the most widely used threat classification framework, developed at Microsoft in 1999 and still the default starting point for most threat models in 2026. The power of STRIDE comes from mapping each category to specific DFD element types. Spoofing targets external entities and processes — anywhere identity matters. Tampering targets data flows and data stores — anywhere data can be modified. Repudiation targets processes and external entities — anywhere actions must be attributable. Information Disclosure follows the same data-centric elements as Tampering. Denial of Service applies to processes and data flows. Elevation of Privilege focuses on trust boundaries.
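This per-element mapping is what lets tools auto-generate candidate threat lists from a diagram. A minimal sketch of that expansion, with category assignments taken from the mapping table in this article (tools such as the Microsoft Threat Modeling Tool use a somewhat richer mapping, and the element names here are illustrative):

```python
# STRIDE categories applicable to each DFD element type, per the table above.
STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Repudiation", "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure", "Denial of Service"],
    "data_store": ["Tampering", "Information Disclosure"],
}

def candidate_threats(elements):
    """Expand each diagram element into its applicable STRIDE categories."""
    return [(name, threat)
            for name, etype in elements
            for threat in STRIDE_BY_ELEMENT[etype]]

dfd = [("Browser User", "external_entity"),
       ("API Gateway", "process"),
       ("PostgreSQL", "data_store")]
for name, threat in candidate_threats(dfd):
    print(f"{name}: {threat}")
```

The output is a raw candidate list, not a finished threat model — each line still needs a human to decide whether the threat is real, mitigated, or accepted.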
The table above shows this mapping. In practice, you annotate your DFD with colored overlays or numbered callouts: red markers for identified threats, green for mitigated ones, yellow for accepted risks. This turns the diagram into a living scorecard that tracks your security posture over time. Teams that maintain these annotated diagrams tend to surface more threats than teams using spreadsheet-only approaches, because the spatial layout reveals relationships between threats that tabular formats hide.
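The scorecard idea is easy to keep in code alongside the diagram. A minimal sketch of a threat register pinned to diagram element names, with status values matching the overlay colors above (the specific threats listed are invented for illustration):

```python
from dataclasses import dataclass

# Status values matching the overlay colors described above.
IDENTIFIED, MITIGATED, ACCEPTED = "red", "green", "yellow"

@dataclass
class Threat:
    element: str   # DFD element the threat is pinned to
    category: str  # STRIDE category
    status: str    # red / green / yellow overlay

register = [
    Threat("API Gateway", "Denial of Service", IDENTIFIED),
    Threat("Auth Service", "Spoofing", MITIGATED),
    Threat("Redis Cache", "Information Disclosure", ACCEPTED),
]

def scorecard(threats):
    """Count threats by overlay status to summarize the diagram's posture."""
    summary = {}
    for t in threats:
        summary[t.status] = summary.get(t.status, 0) + 1
    return summary

print(scorecard(register))  # {'red': 1, 'green': 1, 'yellow': 1}
```

Because each entry names a diagram element, the register gives you the traceability described earlier: every threat maps back to something you can point at on the DFD.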
Attack Surface Mapping with Architecture Diagrams
While DFDs focus on data movement, attack surface mapping takes a broader view: what is exposed, and to whom? An attack surface diagram overlays exposure information on your architecture — public endpoints, open ports, third-party integrations, admin interfaces, and any component reachable from an untrusted network.
- Network exposure layers. Color-code components by exposure level — internet-facing (red), DMZ (orange), internal (yellow), isolated (green). The visual gradient immediately shows where your perimeter is thickest and thinnest.
- Entry point enumeration. Mark every API endpoint, webhook receiver, file upload handler, and OAuth callback on the diagram. These are the doors an attacker will try first.
- Dependency risk mapping. Highlight third-party services and libraries with known CVE histories. A compromised dependency inside a trust boundary is more dangerous than one outside it.
- Blast radius visualization. For each critical component, shade the area of the diagram that would be affected if it were compromised. This helps prioritize which components deserve the most hardening investment.
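Blast radius in particular is just graph reachability: everything an attacker can pivot to from a compromised component. A minimal sketch using breadth-first search over the example architecture from earlier (the edge list encodes which components can reach which, and is an assumption made for illustration):

```python
from collections import deque

# Directed edges: after compromising the key, an attacker can pivot to the values.
EDGES = {
    "WAF": ["API Gateway"],
    "API Gateway": ["Auth Service", "App Service"],
    "App Service": ["PostgreSQL", "Redis Cache"],
    "Auth Service": [],
    "PostgreSQL": [],
    "Redis Cache": [],
}

def blast_radius(start, edges):
    """All components reachable from a compromised starting component (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(blast_radius("API Gateway", EDGES)))
# ['App Service', 'Auth Service', 'PostgreSQL', 'Redis Cache']
```

Shading exactly these nodes on the diagram gives the blast-radius visualization described above, and running the function for every node ranks components by how much damage their compromise would cause.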
Attack surface diagrams are especially valuable during architecture reviews and before major releases. They give security teams a quick visual answer to the question: "What changed since the last review, and does that change increase our exposure?"
Tools for Threat Modeling Diagrams
Several tools support threat-modeling-specific diagramming, each with different strengths depending on your team's workflow:
- Microsoft Threat Modeling Tool. The original STRIDE tool. Provides DFD stencils, auto-generates threat lists from diagram elements, and exports reports. Windows-only, free, and still actively maintained. Best for teams already in the Microsoft ecosystem.
- OWASP Threat Dragon. Open-source, cross-platform, and web-based. Supports DFD and STRIDE out of the box with a clean interface for annotating threats and mitigations directly on the diagram. Stores models as JSON, making it version-control friendly.
- Draw.io with security templates. Draw.io (diagrams.net) has community-contributed threat modeling shape libraries. While it lacks automatic threat generation, its flexibility and broad format support make it popular for teams that already use it for architecture diagrams. If you have existing draw.io architecture diagrams, you can convert them to other formats using Orriguii Diagram Converter and then annotate them with threat-modeling-specific elements in your preferred tool.
- Mermaid for text-based DFDs. As shown earlier, Mermaid can express DFDs as code. This means threat model diagrams can live in the same repository as the code they describe, go through code review, and be diffed in pull requests. The trade-off is less visual polish and no built-in threat auto-generation — but for developer-driven threat modeling, the version control benefits outweigh that.
Integrating Threat Models into Development Workflow
A threat model that lives in a Confluence page and gets reviewed once a year is a compliance artifact, not a security tool. To make threat modeling genuinely effective, it must be integrated into the development workflow — triggered by changes, reviewed with code, and updated continuously.
- Threat model as code. Store your DFDs in Mermaid or PlantUML format in the repository's /docs/security/ directory. Require updates to threat model diagrams in PRs that change trust boundaries, add new external integrations, or modify authentication flows.
- Automated change detection. Use CI pipelines to flag PRs that modify security-relevant files (auth modules, API routes, infrastructure configs) but do not update the corresponding threat model diagram. This is a lightweight form of drift detection.
- Security review gates. For high-risk changes, require a security engineer to review the updated threat model diagram before the PR merges. The diagram provides a shared visual language for the review conversation.
- Sprint-level threat modeling. Instead of a quarterly big-bang review, run a 30-minute threat modeling session at the start of each sprint for any new feature that introduces a new data flow or trust boundary crossing. Use the existing DFD as the starting point and update it live during the session.
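The drift-detection check above needs very little machinery. A minimal sketch of the CI-side logic, assuming the PR's changed-file list is available (the path patterns are examples; adjust them to your repository layout):

```python
import fnmatch

# Paths whose changes should force a threat model update (illustrative patterns).
SECURITY_PATTERNS = ["src/auth/*", "api/routes/*", "infra/*.tf"]
THREAT_MODEL_DIR = "docs/security/"

def needs_threat_model_update(changed_files):
    """True if security-relevant files changed but no threat model diagram did."""
    touches_security = any(
        fnmatch.fnmatch(f, pat) for f in changed_files for pat in SECURITY_PATTERNS
    )
    touches_model = any(f.startswith(THREAT_MODEL_DIR) for f in changed_files)
    return touches_security and not touches_model

pr_files = ["src/auth/jwt.py", "README.md"]
if needs_threat_model_update(pr_files):
    print("CI warning: security-relevant change without a threat model update")
```

In practice the check would run as a CI step fed by something like `git diff --name-only`, and would post a warning rather than hard-fail the build, so that trivially safe changes are not blocked.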
The goal is to make threat modeling feel like a natural extension of architecture work, not a separate security exercise imposed from outside. When threat model diagrams are versioned, reviewed, and updated alongside the code they describe, they stay accurate — and accurate diagrams are the only kind that actually prevent security incidents.