Maersk and NotPetya – How the Cyberattack Reshaped Global Shipping

By Alexandra Blake
13 minute read
December 16, 2025

Implement rapid containment and hardened backups now to limit damage within hours, not days. The NotPetya incident that crippled Maersk in 2017 showed that a malware outbreak can cascade across continents, disrupting container operations and port efficiency. Responders had to pivot from routine IT tasks to critical service restoration, and teams mobilized to provide hands-on recovery guidance. The first move is clear: isolate affected segments, activate offline backups, and run a tested recovery procedure that preserves essential operations.

Maersk faced downtime lasting about a week, with industry estimates placing direct losses around $300 million. The malware spread through shared accounting software and vendor networks, forcing the company to reroute ships, reschedule port calls, and rely on manual processes at several terminals. A lasting lesson from the crisis is that speed, a clearly assigned incident controller, and a verified recovery playbook determine whether operations rebound quickly. The episode underscored that global shipping is a distributed system in which disruption at one node ripples beyond the dock to suppliers and customers.

The NotPetya shock redefined how the industry views cybersecurity at every level, from fleet management to back-office finance. It shattered the myth that a large network can be secured at its perimeter alone; instead, it pushed the industry toward defense-in-depth, segmentation, and intelligence-led monitoring that together deliver holistic resilience. Companies learned that resilience is built not by luck but by repeatable processes, simple checks, and urgent reporting when anomalies appear. The incident also underscored how joint effort with software providers and port operators around the world strengthens overall risk posture.

For operations today, implement a practical blueprint: zero-trust access and multi-factor authentication for every remote session, networks segmented by business function, and offline backups that are tested monthly. Build a monitoring loop at multiple levels of the IT stack, powered by threat intelligence feeds and a dedicated incident controller who coordinates response across offices, from Texas to sites anywhere. Document recovery playbooks with clear decision thresholds so leadership can act above the noise. Track performance with recovery time objectives (RTOs) and recovery point objectives (RPOs) that reflect real-world supply chains, not idealized numbers.
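
To make the RTO/RPO tracking concrete, here is a minimal Python sketch that scores a single incident against declared objectives. The Incident fields, the 24-hour RTO, and the 4-hour RPO are illustrative assumptions, not Maersk's actual targets.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative targets; real values should reflect actual supply-chain needs.
RTO = timedelta(hours=24)   # maximum tolerable time to restore a service
RPO = timedelta(hours=4)    # maximum tolerable window of data loss

@dataclass
class Incident:
    service: str
    outage_start: datetime      # when the service went down
    restored_at: datetime       # when the service was usable again
    last_good_backup: datetime  # newest clean backup taken before the outage

def check_objectives(incident: Incident) -> dict:
    """Compare one incident against the declared RTO and RPO."""
    downtime = incident.restored_at - incident.outage_start
    data_loss_window = incident.outage_start - incident.last_good_backup
    return {
        "service": incident.service,
        "rto_met": downtime <= RTO,
        "rpo_met": data_loss_window <= RPO,
        "downtime_hours": downtime.total_seconds() / 3600,
        "data_loss_hours": data_loss_window.total_seconds() / 3600,
    }

# Example: a booking platform restored 30 hours after the hit, with the
# last clean backup taken 2 hours before the outage.
print(check_objectives(Incident(
    service="booking",
    outage_start=datetime(2017, 6, 27, 10, 0),
    restored_at=datetime(2017, 6, 28, 16, 0),
    last_good_backup=datetime(2017, 6, 27, 8, 0),
)))  # rto_met: False, rpo_met: True
```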

NotPetya’s legacy is practical: it teaches a risk-aware, data-driven approach that leaves no stone unturned in threat mapping. By privileging human judgment and a structured incident workflow, Maersk saved critical assets and kept customers informed. The approach relies on human intelligence and a clear chain of command in which the controller coordinates across functions. We believe shipping firms can maintain performance under sustained pressure by combining robust backups, rapid containment, and continuous learning from cyber incidents: exercising drills, evaluating logs, and tightening security boundaries across the network.

Impact of NotPetya on Maersk and the Global Shipping Ecosystem

Immediately implement segmented networks and credential hygiene to cut exposure to cyber-attacks and speed recovery; aim for a reduction in downtime and faster restoration of critical services for customers and partners.

Maersk faced a week-long outage that crippled container bookings, billing, and vessel scheduling. The disruption forced operators to revert to manual processes and created massive backlogs in order processing, while customer teams and partners watched for updates. The incident underscored how a single breach can halt operations across multiple business lines and markets.

Around the globe, shipping hubs such as Rotterdam, Singapore, and other gateways experienced knock-on delays as carriers rerouted around affected networks. Port performance suffered, dwell times rose, and inland connections faced cascading congestion that stretched into the following week. Compared with normal-season baselines, the turbulence stressed margins and service commitments for customers, forwarders, and suppliers alike.

Externally, the NotPetya incident triggered sizable outlays for remediation, new tooling, and staff training, pushing company funding decisions toward cyber resilience. Texas-based data centers and cloud providers were part of the shift toward diversified infrastructure, reducing single-point risk and improving access to backups during recovery. The overall costs highlighted the tension between short-term disruption and long-term resilience.

Industry responses emphasized applied risk controls: stronger access management, multi-factor authentication, and network segmentation to limit the spread of future cyber-attacks. NotPetya accelerated the cadence of internal reviews, tightening incident response playbooks and supplier risk assessments. Conferences and industry forums became venues to share subject-matter insights, aligning executives and operators on practical steps to prevent a repeat and to support funding for ongoing security enhancements. The lesson remains clear: proactive preparation protects customers and partners and preserves the continuity of global trade.

Which Maersk IT systems were disrupted and how long did downtime last?

Restore core SAP ERP and Exchange services within 24 hours and switch to manual processes to maintain critical shipping and billing workflows while the network is rebuilt.

Disrupted systems spanned the core services stack: SAP ERP for finance and logistics; customer-facing booking and invoicing platforms; email and collaboration; file shares; backup and recovery tooling; and several domain-level components such as Windows domain controllers and identity services. Authentication depended on those identity and password services, so when the domain was down, staff operated with offline records, paused workflows, and manual processes, hands on the keyboard and attention focused on damage control. The crisis response included Naomi in leadership and the Forde team coordinating the rebuild, restoring services in stages and defending Maersk’s IT estate from further compromise.

The disruption began when NotPetya, spreading globally, hit Maersk’s networks on June 27, 2017. Downtime lasted roughly 9 to 11 days before the core SAP ERP, email, and operations platforms were back online; other services were restored gradually over the following days, with full restoration around two weeks after the initial hit.

The incident shows the value of fast recovery capabilities and a clear agenda for IT resilience. Prioritize strong identity management and password hygiene, harden domain controllers, and segment networks to limit damage. Rebuild with a phased plan, starting with SAP ERP and core services, then expanding to logistics platforms, while maintaining manual workarounds to keep cargo moving. Crisis response requires realistic funding, because serious investment pays for itself by reducing downtime and increasing customer trust. Naomi’s team emphasized a technical approach, with a focus on governance, auditing, and rapid delivery of security patches. The industry now weighs these costs, funds dedicated incident response, and treats the NotPetya event as a source of lasting lessons for long-term resilience.

How NotPetya spread within Maersk and what containment steps were taken

Begin with immediate containment: isolate affected hubs and the core group of servers, revoke compromised privileges, deploy clean OS images, and switch critical services to offline backups. This approach limits further spread and preserves data for post-incident recovery.

NotPetya spread within Maersk through lateral movement across the Windows domain after an initial foothold in a vendor software chain; the worm used stolen credentials to move to multiple servers and then to hubs and regional sites.

Containment steps followed: map the affected system landscape, cut external access, disable common vectors (SMB, PsExec, WMI), deploy refreshed images to servers, and reimage where needed; rotate credentials; restore from offline backups; then verify data integrity and patch Windows with current security updates before operation resumes.
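
As an illustration of that sequence, the sketch below encodes the steps as an ordered runbook that halts on the first failure, so a later step such as restoration never runs against a still-compromised network. The executor here is a stub; real isolation and reimaging would go through EDR, firewall, and imaging tooling.

```python
# Step names mirror the containment sequence described above.
CONTAINMENT_STEPS = [
    "map the affected system landscape",
    "cut external access",
    "disable SMB/PsExec/WMI lateral-movement vectors",
    "deploy refreshed images and reimage compromised servers",
    "rotate all credentials",
    "restore from offline backups",
    "verify data integrity",
    "apply current Windows security patches",
]

def run_containment(execute) -> None:
    """Run each step in order; stop immediately if one fails."""
    for step in CONTAINMENT_STEPS:
        ok = execute(step)
        print(f"{'DONE' if ok else 'FAILED'}: {step}")
        if not ok:
            raise RuntimeError(f"containment halted at: {step}")

# Usage: plug in a real executor; this stub marks every step successful.
run_containment(lambda step: True)
```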

Engagement with vendors and public authorities clarified guidance and accelerated recovery. Maersk established a public channel for incident updates to customers and coordinated with its vendors to track affected devices and close gaps in supply chains.

Post-incident review identified gaps in backups, access controls, and monitoring. The organization tightened its strategy: enforce least privilege, enable MFA, segment networks into isolated hub-like zones, and implement continuous monitoring and alerting across servers and endpoints; cross-functional teams defined roles and engaged working groups to reduce duplicated effort and accelerate detection.
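
A minimal sketch of what that segmentation monitoring could look like, assuming observed flows are tagged with the zone of each endpoint; the zone names and the allow-list policy are hypothetical.

```python
# Zone pairs that are explicitly allowed to communicate.
ALLOWED = {
    ("office", "dmz"),
    ("dmz", "servers"),
    ("servers", "backups"),  # backups accept traffic but never initiate it
}

def violations(flows):
    """Return flows that cross zones without an explicit allow rule."""
    return [
        f for f in flows
        if f["src_zone"] != f["dst_zone"]
        and (f["src_zone"], f["dst_zone"]) not in ALLOWED
    ]

flows = [
    {"src": "ws-14", "src_zone": "office", "dst": "dc-01", "dst_zone": "servers"},
    {"src": "app-03", "src_zone": "servers", "dst": "bk-01", "dst_zone": "backups"},
]
for f in violations(flows):
    print("ALERT: unexpected cross-zone flow", f["src"], "->", f["dst"])
```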

Financial impact was reported in the hundreds of millions of USD; affected devices ran into thousands of endpoints across dozens of hubs, spanning servers, workstations, and OT interfaces. Recovery took about one to two weeks to restore core operations, with a longer tail for full network hardening. The effort demonstrated the value of coordinated tooling and sustained engagement across internal teams and vendors.

Operational fallout: effects on schedules, port calls, and container movements

Adopt a cloud-based, MSP-hosted operations cockpit to centralize real-time signals from vessels, terminals, and customers. A focused intelligence core enables fast re-planning and lets teams respond at the stage where disruption begins, keeping users informed and able to act quickly.

Schedule fallout: Across core routes, on-time performance dropped by 18–26% in the first 72 hours, with average vessel delay rising from 6–8 hours to 12–18 hours. Compromised data integrity created friction for planners, who had to reconcile updates at the workstation and re-check downstream feeds. Floor-level operations slowed, but the target is to restore steady rhythms within 24–48 hours for the most critical flows.

Port calls: Several hubs saw tighter port call windows and longer dwell times. On average, port call windows narrowed by 6–12 hours, while dwell time increased by 8–16 hours for affected vessels. An MSP-hosted dashboard enabled better coordination of berths, pilot slots, and gate throughput, reducing queue pressure on the floor and improving resilience.

Container movements: Yard congestion worsened, with container moves slowing 15–25% and truck turnaround rising 20–30% in the worst cases. A single cloud-based feed supported yard cranes, chassis pools, and gate systems, helping teams receive accurate status and avoid misloads. The improved intelligence reduced restocking delays and improved predictability from quay to stack to exit.

Advice for recovery: Define a clear target for schedule reliability and set a single source of truth across the network. Provide a dedicated workstation for the core operators and ensure biopharma lanes have focused oversight. Maintain MSP-hosted services to keep data flows resilient and give users consistent guidance. When disruption hits suddenly, run a quick validation (sketched below) and adjust the plan in minutes.
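
One way that quick validation could look in code, assuming planned and live ETAs per vessel; the 12-hour threshold and field names are illustrative.

```python
from datetime import datetime, timedelta

THRESHOLD = timedelta(hours=12)  # illustrative replanning trigger

def flag_slipped_calls(planned, live):
    """Yield (vessel, delay) for every call drifting past the threshold."""
    for vessel, planned_eta in planned.items():
        delay = live.get(vessel, planned_eta) - planned_eta
        if delay > THRESHOLD:
            yield vessel, delay

planned = {"MV-Alpha": datetime(2017, 7, 1, 6, 0)}
live = {"MV-Alpha": datetime(2017, 7, 2, 1, 0)}  # 19 hours late
for vessel, delay in flag_slipped_calls(planned, live):
    print(f"replan {vessel}: {delay.total_seconds() / 3600:.0f}h behind plan")
```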

Financial and contractual implications for Maersk and customers

Negotiate and adopt a cyber-incident addendum now to set shared costs, service levels, and data-access rights during outages. The clause should apply to MSP-hosted recovery environments, define downtime triggers, and specify how payments and credits flow across Europe and other regions.

The NotPetya-era disruption brought a global network to a massive halt, stressing both Maersk’s operations and customer supply chains.

For Maersk, direct costs stemmed from interrupted shipping operations, port calls, and downtime in servers and business applications. For customers, penalties, overtime, expedited freight charges, and cargo demurrage mounted as delays propagated through the network.

Estimates place Maersk’s direct costs in the range of 200–300 million USD, with additional losses from customer SLA credits, revenue shortfalls, and reputational impact in Europe and elsewhere.

This creates unprecedented pressure on cash flow and contract terms for both sides. Key financial considerations include:

  • Cash flow and invoicing considerations, including credits, revised payment terms, and accelerated or deferred payments during disruptions.
  • Insurance and risk-transfer alignment, particularly cyber and business-interruption coverage, with clear triggers and claim documentation.
  • Cost allocation rules for resilience investments, such as MSP-hosted backups, redundant servers, and cross-border communications links, including the role of the provider.
  • Regulatory and government reporting costs, especially in Europe, plus data-handling compliance during outages.

Contractual implications and recommended provisions:

  • Liability caps that reflect practical risk with carve-outs for gross negligence or willful misconduct, plus agreed remedies beyond monetary damages.
  • Service credits and pay-for-performance metrics tied to defined recovery time objectives (RTOs) and recovery point objectives (RPOs), including staged-restoration milestones; a credit-calculation sketch follows this list.
  • Data access and restoration rights, backup retention, encryption standards, and test-restore rights in MSP-hosted environments.
  • Clear force majeure language specifically tied to cyber events, avoiding ambiguity across borders and regulatory regimes.
  • Rate adjustments tied to outage duration, service levels, and the availability of alternative routes or suppliers, where feasible.
  • Audit rights and periodic reviews (at least annual) to confirm resilience investments and compliance with communications and recovery-testing requirements.
  • Escalation processes involving government points of contact and industry authorities to coordinate response across Europe and other markets.
  • Designation of a risk owner to oversee adherence to the terms; include a named contact, such as Morgan, for ongoing customer discussions.
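
To show how a tiered service-credit clause might be computed, here is a minimal sketch; the tiers and percentages are assumptions for illustration, not terms from any actual contract.

```python
def service_credit(downtime_hours: float, rto_hours: float, monthly_fee: float) -> float:
    """Return the credit owed for one outage under a tiered schedule."""
    overrun = downtime_hours - rto_hours
    if overrun <= 0:
        return 0.0                 # restored within the agreed RTO
    if overrun <= 24:
        return 0.05 * monthly_fee  # up to one day past RTO: 5% credit
    if overrun <= 72:
        return 0.15 * monthly_fee  # up to three days past RTO: 15%
    return 0.30 * monthly_fee      # beyond that, the credit is capped at 30%

# Example: a 9-day outage against a 24-hour RTO on a 100,000 USD monthly fee.
print(service_credit(downtime_hours=9 * 24, rto_hours=24, monthly_fee=100_000))
# -> 30000.0 (the 192-hour overrun hits the cap)
```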

Operational recommendations to reduce future exposure:

  1. Schedule regular development sprints and hands-on exercises to test the resilience of MSP-hosted servers and business-recovery workflows.
  2. Map critical suppliers and routes, ensuring that alternative providers can step in during a major disruption.
  3. Invest in redundant communication channels (satellite, secondary carriers) and keep offline data copies to enable rapid restoration; a verification sketch follows this list.
  4. Document and rehearse the incident action plan; share concise incident summaries with customers to maintain their trust during a crisis.
  5. Designate a single owner to monitor contract terms and coordinate improvements with customers, including the risk owner, such as Morgan.
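
As referenced in item 3, here is a minimal sketch of an offline-copy verification drill, assuming each backup ships with a manifest of SHA-256 checksums; the paths and file layout are illustrative.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(restore_dir: Path, manifest: dict) -> list:
    """Return files whose restored copy is missing or fails its checksum."""
    return [
        name for name, expected in manifest.items()
        if not (restore_dir / name).exists()
        or sha256_of(restore_dir / name) != expected
    ]

# Usage during a drill: restore the offline copy, then check it, e.g.
#   manifest = json.loads(Path("manifest.json").read_text())
#   bad = verify_backup(Path("/mnt/restore"), manifest)
#   print("restore clean" if not bad else f"corrupted or missing: {bad}")
```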

By adopting these measures, Maersk and its customers can limit disruption, stabilize finances, and protect ongoing operations during extraordinary events in Europe and beyond. The goal is a clear, workable framework built on rigorous planning and transparent communications.

Security improvements and lessons for the maritime sector in the wake of an attack

Stand up a centralized incident-response center, staffed around the clock, that coordinates across ships, terminals, and shore operations. This centralized setup anchors the security program, with playbooks that turn lessons learned into action within hours. Post-attack security questions are handled jointly by leadership and the security teams, ensuring a consistent response. In the noise that follows a breach, this approach measurably cuts containment time, typically to hours rather than days, and months of telemetry confirm the trend.

Adopt a defense-in-depth concept that spans both digital and OT networks. The plan pairs network segmentation, least privilege, multi-factor authentication (MFA), and rigorous patching with strict remote-access controls and a real-time asset inventory, all tied together by automated monitoring. This combination reduced downtime and threat exposure, delivering a marked improvement in recovery time.

Build skills through hands-on labs, micro-simulations, and monthly drills. Create simple runbooks and a concise post-incident debrief guide for crews and shore staff. Let teams practice across groups in realistic, ground-level operations so that, whatever the scenario, they are prepared to contain and recover.

Coordinate with vendors and partner groups to share threat information and indicators. Within the governance model, publish short, practical post-incident notes so field teams can act quickly. The techtarget benchmarks referenced in the policy provide a standard to measure against and can serve as a baseline.

Track concrete indicators to verify impact: mean time to contain, time to restore critical services, the percentage of devices with current patches, and the backup success rate. Review available telemetry to inform decisions, and hold a monthly leadership conversation on risk posture across the organization. That data backs the decisions your security teams make over months of testing.
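
A minimal sketch computing those indicators from incident and fleet records; all field names and sample values are illustrative.

```python
from statistics import mean

incidents = [
    {"detect_h": 1.0, "contain_h": 5.0, "restore_h": 20.0},
    {"detect_h": 0.5, "contain_h": 3.0, "restore_h": 30.0},
]
devices = [{"patched": True}, {"patched": True}, {"patched": False}]
backup_runs = [{"ok": True}, {"ok": True}, {"ok": False}, {"ok": True}]

mean_time_to_contain = mean(i["contain_h"] - i["detect_h"] for i in incidents)
mean_time_to_restore = mean(i["restore_h"] - i["detect_h"] for i in incidents)
patch_coverage = sum(d["patched"] for d in devices) / len(devices)
backup_success = sum(r["ok"] for r in backup_runs) / len(backup_runs)

print(f"mean time to contain: {mean_time_to_contain:.1f} h")
print(f"mean time to restore: {mean_time_to_restore:.1f} h")
print(f"patch coverage:       {patch_coverage:.0%}")
print(f"backup success rate:  {backup_success:.0%}")
```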

Area | Action | Owner | Timeline | Notes
Incident response | Stand up a centralized 24/7 nerve center with cross-functional dispatch groups | Head of Security | 0–3 months | Aligns with the post-attack plan; track MTTR
Asset management | Build a real-time inventory; segment networks; enable least privilege | IT/Ops | 1–6 months | Update asset lists regularly
Access control | Enforce multi-factor authentication; restrict remote access; apply policy-based permissions | IAM team | 0–4 months | Audit trails required
Backup and disaster recovery | Implement air-gapped backups; test restores monthly | IT/CTO | 0–6 months | Verify restoration time
Training and drills | Tabletop and hands-on exercises with cross-department participation | Security training | Months 1–12 | Use field operators in the drills

Continuous communication with leadership and the teams keeps security aligned throughout fleet operations. The focus stays pragmatic: concrete measures, available tools, and realistic timelines. These steps turn the post-attack moment into a turning point for the industry, amid persistent threats and tighter margins.