Blog
by Alexandra Blake
10 minutes read
December 04, 2025

NotPetya Aftermath: Maersk’s Adam Banks on Response and Recovery — GartnerSEC Insights

Recommendation: Initiate a rapid cyber containment and recovery playbook within the first hour, led by the CISO and a cross-functional management team. Isolate affected networks, disable risky credentials, and bring offline backups online for restoration to a verified level of integrity, which contains the attackers and stops them from spreading.
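
As a rough illustration, those first-hour steps can be scripted so they run in a fixed order under pressure. The sketch below is a minimal Python outline, assuming hypothetical segment names, account names, and backup labels; real tooling would call firewall, identity-provider, and backup APIs.

```python
# First-hour containment sketch: isolate segments, disable risky accounts,
# and queue offline backups for a verified restore. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ContainmentPlan:
    affected_segments: list
    risky_accounts: list
    actions: list = field(default_factory=list)

    def isolate_segments(self):
        for segment in self.affected_segments:
            # In practice: call firewall/SDN APIs to cut traffic to the segment.
            self.actions.append(f"ISOLATE {segment}")

    def disable_accounts(self):
        for account in self.risky_accounts:
            # In practice: call the identity provider's admin API.
            self.actions.append(f"DISABLE {account}")

    def queue_restore(self, backup_set):
        # Offline backups come online only after integrity verification.
        self.actions.append(f"VERIFY-THEN-RESTORE {backup_set}")

plan = ContainmentPlan(
    affected_segments=["yard-vlan-12", "terminal-vlan-7"],  # hypothetical
    risky_accounts=["svc-legacy-admin", "tmp-vendor-01"],   # hypothetical
)
plan.isolate_segments()
plan.disable_accounts()
plan.queue_restore("offline-backup-2025-12-03")
print("\n".join(plan.actions))
```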

Context from Maersk’s experience: Adam Banks explains that NotPetya propagated as an outbreak that touched IT systems across ships and cargo hubs. The incident proved that a single intrusion can cascade into management decisions and disrupt cargo operations, halting container handling and delaying shipments. The disruption forced yards to switch to manual workflows, underscoring how a cyber attack can disrupt management processes and influence operational decisions at scale. Resilience starts with rapid containment and clear communication.

Security posture and recovery design: Build a multi-layer cyber defense designed to protect critical assets and preserve functionality during incidents. Automate recovery so that clean states can be restored into production-ready services, and ensure backup chains remain intact so the majority of core systems can resume quickly. The approach should focus on keeping cargo-tracking and port-operations apps operational while the intrusion is contained.

Threat intelligence, data sharing, and response: Maintain active threat-intel feeds and practice drills with CISO-led coordination. Criminals target gaps in management and the supply chain, so interrupting attack paths and maintaining clear, timely communication exposes what lies behind the incident and reduces dwell time, speeding recovery for cargo systems and customer data alike.

Operational readiness: Map every asset in the shipping workflow, from manifests to yard-crane systems, to expose where cyber risk hits cargo operations. Develop concise runbooks that can be executed with minimal manual steps and progressively more automation; track metrics such as RTO and RPO improvements to show progress in keeping services available during an incident. Regularly rehearse with vendors to ensure needs are met and that response teams stay active year-round.
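
To make RTO and RPO improvements measurable, drill results can be compared against agreed targets. A minimal sketch, assuming invented drill figures and target values:

```python
# Sketch: compare drill results against agreed RTO/RPO targets to show
# progress over time. Drill figures and targets are invented examples.
from datetime import timedelta

drills = [
    {"drill": "Q1", "rto": timedelta(hours=18), "rpo": timedelta(hours=4)},
    {"drill": "Q2", "rto": timedelta(hours=9), "rpo": timedelta(hours=2)},
    {"drill": "Q3", "rto": timedelta(hours=5), "rpo": timedelta(minutes=45)},
]

RTO_TARGET = timedelta(hours=6)   # assumed business target
RPO_TARGET = timedelta(hours=1)   # assumed business target

for d in drills:
    rto_status = "meets" if d["rto"] <= RTO_TARGET else "misses"
    rpo_status = "meets" if d["rpo"] <= RPO_TARGET else "misses"
    print(f"{d['drill']}: RTO {d['rto']} {rto_status} target, "
          f"RPO {d['rpo']} {rpo_status} target")
```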

GartnerSEC Insights: Maersk NotPetya Aftermath

Immediately adopt an asset-centric recovery sprint that isolates malware, preserves critical terminals, and ensures core systems are restored within days, while establishing governance across all layers to support ongoing mitigation.

GartnerSEC Insights reflects how Maersk’s management shifted from reaction to resilience, with asset-centric mapping that links IT and operations to port terminals. The most damaging campaigns show attackers targeting exposed layers, and Maersk’s data proved that containment within weeks can cut losses and accelerate recovery. That shift also delivers clearer accountability at the management level.

Mitigation requires clear ownership at the management level; Andy Powell leads the incident response unit, coordinating containment, patching, and data integrity checks. The plan proceeds in stages: containment, eradication, recovery, and verification over the coming weeks.

Actions include an up-to-date asset inventory, network segmentation, and rapid backups; assets are prioritized by criticality to operations: shipping management systems, terminal gateways, and payment terminals. Validate backups, test recovery plans, and run simulations weekly; the team relies on automated checks and real-time dashboards to monitor risk as systems return to production.
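
A sketch of criticality-ordered checks, assuming a hypothetical inventory and a placeholder backup-validation function:

```python
# Sketch: walk the inventory in criticality order so the most important
# systems get patched and backup-validated first. Entries are hypothetical.
assets = [
    {"name": "shipping-mgmt-db", "category": "shipping management", "criticality": 1},
    {"name": "terminal-gw-03", "category": "terminal gateway", "criticality": 2},
    {"name": "pos-terminal-77", "category": "payment terminal", "criticality": 3},
    {"name": "hr-portal", "category": "back office", "criticality": 5},
]

def backup_is_valid(asset):
    # Placeholder for a real check: checksum comparison plus a test restore.
    return True

# Lower criticality number means more important; work the list in that order.
for asset in sorted(assets, key=lambda a: a["criticality"]):
    status = "OK" if backup_is_valid(asset) else "NEEDS ATTENTION"
    print(f"{asset['name']} ({asset['category']}): backup {status}")
```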

Financial impact hinges on downtime; the NotPetya aftermath shows losses mounting over days and weeks, but effective mitigation reduces exposure quickly. A robust cyber program shortens recovery timelines for critical assets from days to far smaller windows across the same business lines, protecting margins and customer trust.

Operational governance assigns a level of authority, aligns risk appetite with allocated resources, and ensures cross-functional coordination across security, IT, and line management. Finance and risk teams monitor MTTR, asset coverage, and incident duration, while Powell drives ongoing decision-making to keep recovery on track.

Terminals across the network face a complex set of dependencies; the NotPetya aftermath shows that proactive resilience requires continuous testing, rapid mitigation, and sustained investment in people, tools, and controls to restore operations in weeks rather than months.

Key Learnings from Maersk’s NotPetya Incident

Immediately restore backups and segment networks to prevent further propagation. For Maersk, this approach helped limit the outbreak and allowed much faster recovery of running processes. Document the steps and call out any gaps in infrastructure so teams can take ownership.

Create repeatable playbooks that cover database and technical recovery steps, backup validation, and process handoffs. These playbooks help teams manage incidents and respond consistently during an outbreak.

Increase reliability by running drills that test failover across infrastructure and logistics platforms, and establish a simple call tree so the right engineer acts quickly when warnings appear. Behind the scenes, lessons learned feed changes to processes and infrastructure, while dashboards highlight obvious gaps and track burn rate and resource use.

Over time, maintain continuous improvement by documenting how a single outbreak was contained and how reliability rose through disciplined management of database locks, technical controls, and logistics coordination.

Immediate Containment Actions: Isolation, Patch Deployment, and Access Control

Immediately isolate the affected surface and devices. This halts self-propagating movement and reduces risks to targets, including databases, other company sites, and the supply chain.

Stand up a well-established command team, assign clear roles, and begin synchronized decision-making. Experts emphasize rapid triage, asset inventory, and consolidated logs to build a robust evidence trail as a baseline. Powell’s guidance adds concrete checklists for containment, including isolating high-risk workstations and curbing lateral movement.

Inventory all assets: servers, endpoints, databases, cloud services, and network devices. Tag critical assets–production databases, ERP, and supply-chain targets–and prioritize patching. Deploy patches in a controlled sequence: start with the most exposed targets, validate patches in a test environment, and keep a rollback plan ready. Schedule a maintenance window, coordinate with stakeholders, and confirm patch success with vulnerability scanning and post-patch checks. Repeat checks once patches are deployed.
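
The controlled sequence above can be expressed as a small script: validate in test first, patch the most exposed hosts first, and keep rollback ready. Host names and exposure scores below are assumptions; MS17-010 is the SMB patch for the vulnerability NotPetya-style worms exploited.

```python
# Sketch of the controlled patch sequence: validate in test, patch the
# most exposed hosts first, keep rollback ready. Hosts and exposure
# scores are assumptions.
hosts = [
    {"host": "edge-smb-01", "exposure": 9},  # internet-facing, patch first
    {"host": "erp-app-02", "exposure": 6},
    {"host": "yard-ws-15", "exposure": 3},
]

def validate_in_test(patch):
    # Placeholder: deploy to a staging clone and run smoke tests.
    return True

def apply_patch(host, patch):
    # Placeholder: real deployment goes through the patch-management tool.
    print(f"applying {patch} to {host}")
    return True

def rollback(host, patch):
    print(f"rolling back {patch} on {host}")

patch = "MS17-010"
if validate_in_test(patch):
    for h in sorted(hosts, key=lambda x: -x["exposure"]):
        if not apply_patch(h["host"], patch):
            rollback(h["host"], patch)  # the rollback plan, kept ready
```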

Enforce strict access controls: least privilege, MFA for remote access, and revocation of stale credentials. Disable insecure protocols and default credentials, segment networks to minimize lateral movement, and require re-authentication for access to critical systems. Monitor login attempts and adjust policies in real time; restrict admin rights across the environment.
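
One of these controls, revoking stale credentials, lends itself to automation. A minimal sketch, assuming a hypothetical directory export and a 30-day staleness threshold:

```python
# Sketch: flag stale credentials for revocation and enforce MFA on the
# rest. Account records and the 30-day threshold are assumptions.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)
now = datetime(2025, 12, 4)

accounts = [  # hypothetical directory export
    {"user": "vendor-tmp-02", "last_login": datetime(2025, 8, 1), "admin": False},
    {"user": "ops-charlie", "last_login": datetime(2025, 12, 3), "admin": True},
    {"user": "legacy-svc", "last_login": datetime(2025, 5, 20), "admin": True},
]

for acct in accounts:
    if now - acct["last_login"] > STALE_AFTER:
        # In production: call the identity provider to disable the account.
        print(f"disable {acct['user']} (stale since {acct['last_login'].date()})")
    elif acct["admin"]:
        # Active admin accounts still need MFA enforced for remote access.
        print(f"enforce MFA for {acct['user']}")
```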

Containment verification: monitor for unusual activity on the attack surface and databases, verify that blocking rules hold, and confirm there is no self-propagation. Use SIEM, EDR telemetry, and network analytics to trace the source of compromise. If you detect signs of renewed infection, re-isolate the affected segment and restart the patch cycle with validated images.
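
A simple containment-verification check can compare current per-segment activity with a pre-incident baseline; a spike suggests renewed self-propagation. The baselines and threshold below are invented for illustration:

```python
# Sketch of a containment check: compare per-segment connection rates
# with a pre-incident baseline; a spike suggests renewed propagation.
# Baselines, observations, and the threshold are invented values.
baseline = {"yard-vlan-12": 120, "terminal-vlan-7": 340}   # normal conns/min
observed = {"yard-vlan-12": 115, "terminal-vlan-7": 2900}  # from EDR/netflow

SPIKE_FACTOR = 3  # alert when traffic exceeds 3x baseline (assumed)

for segment, normal in baseline.items():
    current = observed.get(segment, 0)
    if current > normal * SPIKE_FACTOR:
        print(f"{segment}: possible self-propagation, re-isolate and re-image")
    else:
        print(f"{segment}: blocking rules holding ({current} conns/min)")
```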

Recovery steps follow containment: systems rebuilt from trusted images and verified backups. Reimage key servers and databases, then reintroduce them only after validation checks pass. Protect backups from compromise and maintain supply continuity through staged restoration. This approach aligns with global cybersecurity experience and supports a resilient, threat-aware environment.

Recovery Roadmap: Backups, System Rebuild, and Validation

Recommendation: implement a verified backup restore and phased system rebuild today to recover core functionality within days. Isolate affected assets and confirm backups are intact before reintroduction.

Backups: keep secure, immutable backups stored offline or in an air-gapped vault; ensure backups include data, configurations, and logs for the most critical systems; run integrity checks and verify restore success. Schedule restore drills to verify RPO and RTO targets and confirm no corruption exists in the backup lineage.
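
Integrity checks can be as simple as hashing each backup artifact against a manifest recorded at backup time. A sketch, assuming hypothetical file paths; a complete drill would follow a passing check with an actual test restore:

```python
# Sketch: hash each backup artifact and compare against the manifest
# recorded at backup time. Paths are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path):
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(manifest):
    """manifest maps artifact path -> sha256 recorded when backed up."""
    ok = True
    for name, expected in manifest.items():
        if not Path(name).exists() or sha256_of(name) != expected:
            print(f"FAIL: {name} missing or corrupted")
            ok = False
    return ok

# Example (hypothetical path and digest):
# verify_backup({"backups/erp-2025-12-03.dump": "ab12..."})
```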

System rebuild: establish well-established baselines from clean golden images; rebuild critical services with minimal attack surface; apply security patches and configuration hardening; verify network segmentation; reintroduce workloads in a controlled flow to prevent another outbreak.

Validation: create a practical test plan to determine whether functionality is restored; execute end-to-end tests of core workflows; conduct UAT with an analyst (Hagemann) from research; verify data integrity and security controls; track impact to affected users; capture knowledge for future resilience.

Resource and governance: allocate resources and monitor financial state; avoid schedule slippage; ensure secure, well-documented runbooks; keep Maersk teams informed; incorporate continuous-improvement knowledge management; measure progress with a daily burn-down of the backlog.

Milestone | Owner | Timeframe | Key Validation | Status
Backup verification and restore drill | Backup & Recovery Team | Day 1–2 | Restore success, data integrity, and log completeness | Planned
System rebuild baselines | IT Build Team | Day 2–5 | Golden image deployed, patching complete, drift mitigated | Planned
End-to-end validation | Analyst Hagemann; QA | Day 4–6 | Critical-flow validation, UAT results, security controls verified | Planned
Operational readiness & knowledge transfer | Maersk Ops & Security | Day 6–7 | Runbooks updated; knowledge base refreshed; incident playbooks tested | Planned

Leadership Perspectives: Adam Banks and Andy Powell’s Decision-Making

Recommendation: establish a cross-functional decision loop and a four‑week playbook to cut time to restore and increase resilience at terminals worldwide during an outbreak. Rely on basics, backup, and clear ownership, with a focus on fast, informed choices across global operations.

  • Adam Banks–decision-making under pressure

    • He drives speed with clarity, running four focused reviews daily to convert data into concrete actions.
    • His emphasis centers on the basics: backup plans, network segmentation, and keeping the shipping network accessible even when systems are under stress.
    • He maps single points of failure across terminals and ensures a designated owner (e.g., Charlie) oversees restores and re‑runs of affected workflows.
    • Digital dashboards surface time-to-recover and containment metrics, guiding trade-offs between speed and risk as events unfold.
  • Andy Powell–decision-making at the governance level

    • He builds a cross‑functional board to align strategy, risk, and operations, ensuring decisions move beyond silos.
    • Concerning outbreak scenarios, he runs tabletop drills and four‑track playbooks to validate control measures and escalation paths.
    • He leverages outside experts to translate risk insights into practical steps, connecting security, safety, IT, and terminal operations.
    • He creates a centralized data hub that serves as a single source of truth for what is affected, what remains online, and what must be rebuilt.

Practical actions leaders can take now:

  1. Map critical terminals and shipping routes and link them to backup capabilities that can be activated within hours.
  2. Design systems with redundancy across the global network to prevent a single outage from cascading into others.
  3. Assign a clear owner (like Charlie) for backup and restoration cycles, and publish time‑bound targets for each action.
  4. Implement quarterly outbreak drills and real‑time simulations to validate governance and cross‑functional coordination.
  5. Adopt digital dashboards that track four KPIs: time to containment, time to restore, affected systems, and continuity level at key terminals (see the sketch after this list).
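
As referenced in step 5, a dashboard over these four KPIs can start as something very small. The figures and terminal names below are placeholders, not real readings:

```python
# Sketch of the four-KPI view from step 5. All figures are placeholders
# standing in for live feeds; terminal names are illustrative.
kpis = {
    "time_to_containment_hours": 3.5,
    "time_to_restore_hours": 42.0,
    "affected_systems": 118,
    "continuity_level_pct": {"Rotterdam": 85, "Algeciras": 60, "Mumbai": 92},
}

print(f"Containment: {kpis['time_to_containment_hours']} h")
print(f"Restore:     {kpis['time_to_restore_hours']} h")
print(f"Affected:    {kpis['affected_systems']} systems")
for terminal, pct in kpis["continuity_level_pct"].items():
    flag = "" if pct >= 75 else "  <-- escalate"
    print(f"Continuity {terminal}: {pct}%{flag}")
```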

Concerning long-term resilience, Banks prioritizes quick wins that protect people, terminals, and shipping services, while Powell secures governance, risk, and vendor alignment. The combined approach reduces the risk of a wrong decision during a crisis and increases the organization’s ability to work through outages with a level of confidence that mirrors real-world operations.

Nyetya Readiness: Practical Steps to Prepare for the Next Attack

Start with a concrete asset-centric inventory of all critical assets in your shipping ecosystem, assign each a risk score, and reduce exposure by decommissioning unused services and hardening the rest. Build a well-established governance model that ties asset criticality to control owners and recovery SLAs, so teams know who acts first and how to escalate.

Adopt a threat-informed approach that accelerates detection and limits dwell time. Centralize telemetry from endpoints, network devices, and OT systems; apply correlation rules that flag anomalous file activity, lateral movement, and exfiltration patterns. Maintain a baseline of normal traffic so deviations can be spotted quickly.
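
One such correlation rule might flag hosts whose file-modification rate deviates sharply from their own baseline. A minimal sketch with invented telemetry; real samples would come from EDR or file-integrity monitoring:

```python
# Sketch of one correlation rule: flag hosts whose file-modification
# rate deviates sharply from their own baseline. Telemetry is invented.
import statistics

# files modified per minute, last 10 samples per host (hypothetical)
telemetry = {
    "ws-ops-04": [3, 2, 4, 3, 2, 3, 4, 2, 3, 250],  # sudden burst
    "ws-yard-11": [5, 6, 4, 5, 6, 5, 4, 6, 5, 6],
}

for host, samples in telemetry.items():
    base = statistics.mean(samples[:-1])
    spread = statistics.stdev(samples[:-1])
    latest = samples[-1]
    # five-sigma rule of thumb; tune the threshold per environment
    if latest > base + 5 * max(spread, 1):
        print(f"{host}: anomalous file activity ({latest}/min), investigate")
```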

Establish a fast-response line of defense: segment networks and supplier connections, isolate affected segments within minutes, and maintain immutable backups that are tested and kept offline. During incidents, operate from clearly defined runbooks so teams know where to start and recover critical shipping services first.

Develop recovery playbooks with a clear sequence to restore systems, validate data integrity, and re-open customer channels. After an incident, execute a controlled transition from containment to restoration and measure how quickly shipping resumes.
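
The playbook sequence can be encoded so the transition from containment to restoration halts if any step fails. A sketch with illustrative step names:

```python
# Sketch of the ordered recovery sequence; step names are illustrative
# and each would map to its own detailed runbook.
RECOVERY_SEQUENCE = [
    ("restore_core_systems", "rebuild from golden images and clean backups"),
    ("validate_data_integrity", "reconcile restored data against known-good checksums"),
    ("reopen_customer_channels", "bring booking and tracking portals back online"),
]

def run_step(name, description):
    # Placeholder: each step would invoke its own runbook automation.
    print(f"{name}: {description}")
    return True

for name, description in RECOVERY_SEQUENCE:
    if not run_step(name, description):
        # Controlled transition: stay in containment if a step fails.
        print(f"halt: {name} failed")
        break
```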

Empower analyst and operations teams with concise runbooks, targeted drills, and shared knowledge. Powell’s analyst notes reinforce the asset-centric approach, and many organizations learn fastest when these practices are repeated in weeks-long cycles.

Track metrics to close the loop: number of assets restored per day, time to resume critical shipping lines, and the false-positive rate. After each exercise, adjust playbooks and communicate lessons to stakeholders; this helps organizations identify likely improvements and see where assumptions were wrong.