
Isolate affected segments and restore from verified offline backups within 24 hours to regain control. Then coordinate with stakeholders to map the damage, reestablish operations, and set a clear path for rebuilding.
NotPetya struck on June 27, 2017, and Maersk’s global ship and container tracking networks logged widespread disruption. The incident was costly: Maersk estimated up to $300 million in lost revenue, and core operations were offline for about a week, with full systems restoration taking roughly two weeks. The company later released a concise report detailing the financial hit and recovery timeline.
In the rebuilding phase, Maersk restored the IT stack from a clean baseline, deployed a hardened foundation, and introduced network segmentation to limit future lateral movement. It replaced compromised systems and logged critical events to verify data integrity before bringing ship- and terminal-facing services back online, including order processing and container tracking.
The crisis demonstrated how ransomware-style attacks can cascade across interconnected networks. The response required decisive action: segment networks, accelerate patching, deploy MFA, and strengthen backups. For stakeholders, transparent reporting and regular drills became essential. What comes next for the security program is ongoing monitoring, tested playbooks, and cross-functional coordination to reduce threats and shorten containment time.
External researchers and security partners helped accelerate the recovery. A researcher who analyzed NotPetya’s spread noted that Maersk’s rapid isolation and rebuild steps limited lateral movement and shortened downtime. The collaboration with incident responders and vendors yielded improvements that now inform container-shipping IT policies and future resilience plans.
Going forward, Maersk emphasizes resilience: continued segmentation, robust backups, continuous monitoring, and frequent tabletop exercises to keep systems aligned with evolving threats. The focus is not only on restoring operations but on building trust with customers and suppliers by sharing dashboards, incident reports, and progress updates to support what comes next in cyber readiness.
Path to Resilience: Maersk’s NotPetya Recovery and Forward Strategy
Apply a fast-track recovery plan that restores systems within days and minimizes downtime: isolate affected networks, shut down compromised endpoints, and progressively bring core services back online. When NotPetya shut down Maersk’s IT environment, it took several days to reestablish shipping schedules, container operations, and critical finance functions, so recovery milestones must be prioritized and measured.
Maersk moved to a multi-region, layered system architecture that reduces single points of failure, then implemented offline backups, tested restore workflows, and air-gapped data stores. Updated threat intelligence feeds now drive the security operations center, and readiness is reviewed weekly to ensure there is no regression.
Director-led governance keeps accountability tight. The council defined roles across IT, security, and operations, and the exact number of critical systems is tracked on a live dashboard. There has been a shift toward proactive monitoring: leaders logged decisions during the incident, and the updated plan now moves faster in a crisis.
Globally, the NotPetya lessons were applied across relationships with banks, suppliers, and other partners. Attackers probed for weak links, but the team logged thousands of security events and preserved information integrity. Where necessary, access controls shut down risky pathways and response playbooks were applied immediately.
Forward strategy builds on this foundation: apply continuous improvements, update dashboards, and run regular tabletop exercises to test plan readiness. Maersk aims to maintain resilience befitting one of the largest corporate operations in the world, across every region it serves. If anything unexpected occurs, the plan can be applied without delay, and the council can coordinate with banks and regulators to protect information.
Immediate containment, incident classification, and rapid isolation of affected systems
Immediately isolate the affected segment by disconnecting it from local networks and the internet, then preserve volatile data for analysis. Attackers often move laterally via exposed shares and remote services, so cut access at the edge and disable remote management until containment is confirmed.
Classify the incident using a risk-based approach: critical if core systems or data are threatened, high if destructive payloads or ransom are plausible, medium otherwise. Record the number of affected hosts and the data types involved, and map the scope across countries and networks. Use consistent terms and share the decision with stakeholders and legal teams.
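To make this triage logic concrete, here is a minimal sketch of a risk-based classifier mirroring the critical/high/medium scheme above. The field names and thresholds are hypothetical illustrations, not any organization's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical incident record; field names are illustrative only.
@dataclass
class Incident:
    core_systems_affected: bool          # ERP, booking, or tracking systems in scope
    destructive_payload_suspected: bool  # wiper or ransom behaviour observed
    affected_host_count: int
    data_types: list[str]                # e.g. ["customer", "financial"]
    countries_in_scope: list[str]

def classify(incident: Incident) -> str:
    """Risk-based severity, following the critical/high/medium scheme above."""
    if incident.core_systems_affected:
        return "critical"
    if incident.destructive_payload_suspected:
        return "high"
    return "medium"

# Example: 40 hosts hit, no core systems yet, but a wiper is suspected -> "high"
print(classify(Incident(False, True, 40, ["operational"], ["DK", "NL"])))
```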
Perform rapid triage to identify code paths and techniques: lateral movement, credential theft, and network propagation. Correlate local telemetry with cloud logs to map the attack surface, then create a snapshot for offline analysis. In the first 24–48 hours, align teams across time zones and confirm next steps.
Containment actions: block the malicious code paths and C2 channels, revoke compromised identity credentials, rotate remaining credentials, and disable remote administration. If needed, apply an alternative containment method to keep critical services online while you quarantine affected systems. Place compromised hosts into an isolated network segment and maintain an incident tracking reference to monitor progress.
Preserve evidence and enable rapid recovery: capture memory and disk images, collect logs, and hash key files. Tag assets by status and keep a running inventory. Apply risk-based patching to prioritize updates on the most exposed systems, testing patches in a safe environment before broader deployment. Then roll out patches to cleaned machines and verify they cannot be easily re-infected.
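A minimal sketch of the file-hashing step described above, assuming a hypothetical evidence directory; it records SHA-256 digests so later tampering or re-infection can be detected:

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir: str) -> dict[str, str]:
    """Map each collected file to its digest; keep alongside memory/disk images."""
    return {
        str(p): hash_file(p)
        for p in Path(evidence_dir).rglob("*")
        if p.is_file()
    }

if __name__ == "__main__":
    # "./evidence" is a placeholder path, not a real incident store.
    manifest = build_manifest("./evidence")
    Path("evidence_manifest.json").write_text(json.dumps(manifest, indent=2))
```

Re-running the same manifest build after patching and comparing digests is one way to verify that cleaned machines have not silently changed again.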
Communication and next steps: share findings in terms of risk, impact, and confidence with global teams. Coordinate with CERTs and CSIRTs in key countries and maintain a live dashboard of network and data exposure. Early tallies are estimates and the actual count may differ; in a scenario like this, roughly one hundred assets may be affected. Keep affected partners and peer organizations informed (Merck was hit by the same NotPetya outbreak) to align remediation and business continuity.
Core system restoration: shipping schedules, ERP, and port IT recovery
Immediately implement a three-track restoration plan to restore shipping schedules, ERP, and port IT. The initial targets set RPOs and RTOs: 15 minutes for critical processing, 24 hours for shipping schedules, 48 hours for ERP data, and 72 hours for port operations. The plan prioritizes core operations while keeping cross-domain alignment, which reduces ripple effects across customers and suppliers.
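These recovery targets can be encoded so dashboards and drills measure against the same numbers. The sketch below uses the figures quoted above; the structure and function names are illustrative, not part of any real restoration tooling:

```python
from datetime import timedelta

# Recovery objectives from the three-track plan above (illustrative targets).
RTO_TARGETS = {
    "critical_processing": timedelta(minutes=15),
    "shipping_schedules":  timedelta(hours=24),
    "erp_data":            timedelta(hours=48),
    "port_operations":     timedelta(hours=72),
}

def within_target(system: str, elapsed: timedelta) -> bool:
    """True if the measured restoration time meets the stated objective."""
    return elapsed <= RTO_TARGETS[system]

# Example: ERP restored after 36 hours meets the 48-hour target.
print(within_target("erp_data", timedelta(hours=36)))  # True
```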
Track 1: Shipping schedules domain restoration. Bring up offline routing engines, reconstitute the highest-volume schedules from clean backups, and switch to alternative carriers if the primary lanes are compromised. Maintain a phased rollout: reach 60% visibility within 0–12 hours, 85% within 12–24 hours, and 95% accuracy within 24–48 hours. Phone regional operations to confirm actual loads, vessels, and port slots, and remove outdated duplicates in the new domain to prevent conflicts.
Track 2: ERP restoration. For the legacy systems, apply a domain-split restoration: separate core financials from logistics modules and recover them in parallel. Restore initial data from offline backups, then perform just-in-time reconciliation against the source of truth. Install patched middleware and implement a layered firewall to prevent further compromise. Validate processing flows on sample orders before go-live.
Track 3: Port IT recovery. Restore terminal operating system, yard management, and crane control in isolated segments. Rebuild the processing pipelines for container tracking and dock scheduling, then rejoin to a unified port IT domain. Run tests on customs interfaces and inter-terminal data exchange. This sets the stage for the largest ships to resume arrival windows.
Data integrity and governance. Cross-check ERP and shipping data with source systems, recover from snapshots, and run automated checks to prevent compromised records from propagating. This reduces error rates and preserves trust with suppliers and customers.
Supplier resilience and alternative sourcing. Maintain a list of critical suppliers and build relationships with backup providers. Create an incident-response phone tree to accelerate communications; predefined playbooks guide swift action when disruption hits. When disruptions escalate, call logistics partners to verify volumes and routes.
Security context and warfare readiness. Threats shift toward warfare-like tactics; implement continuous monitoring, rapid patching, and isolated test environments to avoid widespread damage. Prepare an alternative plan if any domain shows signs of attack, and ensure rapid failover between domains.
Next steps and continuous improvement. After stabilization, run quarterly recovery drills, document lessons, and tag legacy components with clear RTOs. Then align with suppliers and port authorities to increase resilience.
Data resilience and DR testing: backups, integrity checks, and failover readiness
Implement automated, immutable backups across three locations: on-site, offline air-gapped storage, and cloud replicas. Schedule full backups monthly and incremental backups hourly, with restoration tests every Tuesday. Document results in writing and share them openly with stakeholders across countries so there is visibility for action. NotPetya-class scenarios remain possible, and backups should be recoverable within hours, close to real time.
Validate integrity continuously: compute cryptographic hashes on backup chunks, verify them during restores, and periodically rehash live data to detect compromised blocks. For containers, preserve processing state and ensure code and metadata travel with each image, which helps trace provenance. Align the checks with business processes and risk appetite, and train staff to run integrity checks as part of routine processing.
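A minimal sketch of the chunk-level integrity check described above, assuming hypothetical chunk files and a manifest of digests recorded at backup time:

```python
import hashlib
from pathlib import Path

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_backup(chunk_dir: str, expected: dict[str, str]) -> list[str]:
    """Rehash every backup chunk and return the names that no longer match.

    `expected` maps chunk filename -> digest recorded when the backup was taken.
    """
    mismatches = []
    for name, recorded_digest in expected.items():
        current = sha256_hex((Path(chunk_dir) / name).read_bytes())
        if current != recorded_digest:
            mismatches.append(name)
    return mismatches

# Run during restore tests: an empty list means the backup chain held.
# bad = verify_backup("/backups/2024-06-01", manifest_loaded_from_catalog)
```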
Achieve failover readiness by automating service switchover to clean environments. Run end-to-end drills for critical services within hours rather than days, and record any gaps in a written runbook. Look for gaps in containment and recovery speed, and use NotPetya-style scenarios to test containment and restoration speed.
Leadership and roles: appoint a dedicated lead for DR readiness, coordinate with partner teams, and report progress to stakeholders openly. After each drill, close findings within months and track improvements in processing, containers, and code across countries. Stakeholders asked for transparency, and providing it improves alignment across teams and partners.
Security redesign: network segmentation, least privilege, and hardening of endpoints
Implement targeted network segmentation now to limit lateral movement, enforce least privilege, and harden endpoints across all devices. Start with core data zones, admin networks, and application segments, with strict firewall rules and micro-segmentation. This approach mirrors the NotPetya lessons and Maersk’s post-incident rebuild, delivering a normal operating baseline even under pressure. The aim is to reduce the blast radius, speed containment, and protect customers and partners across the global economy.
- Asset inventory and data flow mapping: catalog devices, services, and data paths; use this mapping to define segmentation boundaries and deny-by-default rules.
- Segment by function and data sensitivity: isolate customer data and critical operational systems; restrict cross-zone access with explicit approvals.
- Zero-trust and least privilege: enforce RBAC/ABAC, Just-In-Time elevations, MFA for admins, and continuous posture checks on devices and services. This takes time to configure but pays off in containment speed, and you can still aim for rapid approvals even under load.
- Endpoint hardening: apply a single baseline across Windows, Linux, and macOS; disable unused services, enforce CIS benchmarks, enable device controls, and deploy EDR with automated response.
- Identity and access management: centralized IAM, SSO, conditional access, and device posture checks; rotate credentials, protect secrets with vaults, and ensure machine-to-machine authentication.
- Network controls: micro-segmentation, internal firewalls between segments, VLANs, NAC; enforce policy at every hop and log denials for audit trails (see the sketch after this list).
- Monitoring and incident response: centralize endpoint and network telemetry, collect logs into a SIEM, and maintain runbooks with clear ownership. Numbers from pilot deployments show 40-60% faster containment in controlled tests.
- Patch management and configuration drift: establish a cadence (critical fixes within 24-48 hours; standard patches within 14 days) and automate drift remediation on endpoints and servers.
- Supply-chain and partner alignment: extend controls to customs, marine, and other partners; coordinate exercises and ensure collaboration across stakeholders globally. This approach protects the largest supplier networks and customers alike.
- Change management and culture: document decisions, maintain a line of accountability, and run regular tabletop exercises to validate strategy.
- Testing, learning, and rebuilding: run simulated breaches to validate containment, recovery, and communication with customers; the resulting insights move the program forward and support ongoing improvement. A tabletop exercise focused on patch governance and response roles is a useful starting point.
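To make the deny-by-default idea concrete, here is a minimal sketch of an inter-segment policy check. The zone names and ports are hypothetical, and in practice the policy would be enforced by firewalls and NAC rather than application code:

```python
# Minimal sketch of a deny-by-default inter-segment policy (hypothetical zones).
ALLOWED_FLOWS = {
    ("app_zone", "core_data_zone"): {1433, 5432},   # app servers -> databases
    ("admin_zone", "app_zone"):     {22, 3389},     # jump hosts -> app servers
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Deny by default: only explicitly listed zone pairs and ports pass."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

# Lateral movement from a workstation straight to the data zone is denied.
print(is_allowed("user_zone", "core_data_zone", 445))   # False
print(is_allowed("app_zone", "core_data_zone", 5432))   # True
```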
What’s at stake is a robust security redesign that locks down access, minimizes risk, and speeds recovery, protecting economic activity and customers alike. By collaborating across customs, marine, and other units, even the largest enterprises can move from a compromised state to resilient operations with a clear, documented solution.
Blockchain opportunities: immutable logs, audit trails, and tamper-resistant provenance to strengthen defenses

Implement a permissioned blockchain for immutable logs and audit trails to strengthen defenses. Build an append-only ledger that records every infrastructure change, time-stamped and cryptographically bound to the previous entry. Store payloads off-chain to keep the chain lean, while on-chain metadata and hashes provide proven provenance and added integrity.
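A minimal sketch of the hash-chaining idea, assuming a simple in-memory ledger where each entry's hash covers the previous entry's hash; in the design above, only this metadata and its digests would live on-chain while payloads stay off-chain. Class and field names are illustrative:

```python
import hashlib
import json
import time

class HashChainLog:
    """Append-only log: each entry is cryptographically bound to its predecessor."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, metadata: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "meta": metadata, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every link; any edited or removed entry breaks verification."""
        prev = "0" * 64
        for entry in self.entries:
            unhashed = {k: entry[k] for k in ("ts", "meta", "prev")}
            digest = hashlib.sha256(
                json.dumps(unhashed, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True

log = HashChainLog()
log.append({"change": "firewall rule update", "actor": "ops-admin"})
log.append({"change": "EDR policy push", "actor": "sec-eng"})
print(log.verify())  # True; editing any stored entry flips this to False
```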
Adopt a cross-team governance model with clear roles, access controls, and data-retention rules. Use hash chaining and Merkle proofs to verify the provenance of tool deployments, configuration changes, and supply-chain events. For perfcdat feeds, use verifiable digests and periodic anchoring to a public chain such as Bitcoin to provide external validation.
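As a sketch of how Merkle proofs could support this, the following computes a Merkle root over a batch of event digests; anchoring only the root externally (for example to Bitcoin, as suggested above) later lets any single event be proven part of the batch. The event values are hypothetical:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaf digests upward until a single root remains."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical batch of event digests collected during one anchoring interval.
events = [b"deploy:svc-a:v1.2", b"config:fw-rule-17", b"supply:vendor-x:sbom"]
print(merkle_root(events).hex())   # only this 32-byte root needs external anchoring
```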
In incident response, immutable provenance accelerates recovery. You can replay event sequences, confirm when an infected endpoint touched a log, and expose attackers' attempts to erase their traces. This kind of cryptographic linkage makes tampering detectable and helps investigators map the attack path.
Run a phased pilot with 3 to 5 teams focused on priority workloads within 90 days. Integrate the existing infrastructure, SIEM, and EDR so that events align across sources. Deloitte recommends independent verification and a governance board that meets monthly to review results.
Governance and privacy: define data-minimization rules and retention windows; restrict access to authorized personnel; link logs to the full inventory of assets and configurations.
Measured outcomes: MTTR for incidents, the rate of detected tampering attempts, and time saved in forensic reviews. You gain a resilient layer that helps you move beyond reactive controls, close the gap between security and operations, and make audits a normal, repeatable practice extended through cross-organizational collaboration. The next step is to scale the capability out, creating a federation of logs that strengthens defenses across the entire domain.