
Implement rapid containment and hardened backups now to limit damage within hours, not days. The NotPetya incident that crippled Maersk in 2017 showed that a malware outbreak can cascade across continents, disrupting container operations and port efficiency. Responders had to pivot from routine IT tasks to critical service restoration, and teams mobilized to provide hands-on recovery guidance. The first move is clear: isolate affected segments, activate offline backups, and run a tested recovery playbook that preserves essential operations.
Maersk faced downtime lasting about a week, with industry estimates placing direct losses around $300 million. The malware spread through shared accounting software and vendor networks, forcing the company to reroute ships, reschedule port calls, and rely on manual processes at several terminals. A lasting lesson from this crisis is that speed, a clearly assigned incident controller, and a verified recovery playbook determine whether operations can rebound quickly. The episode underscored that global shipping is a distributed system, where disruption in one node creates ripple effects far beyond the dock, across suppliers and customers.
The NotPetya shock redefined how the industry views cybersecurity at every level, from fleet management to back-office finance. It shattered the myth that large networks could be secured at the perimeter; instead, it pushed the industry toward defense-in-depth, segmentation, and intelligence-led monitoring as the foundation of holistic resilience. Companies now recognize that resilience is built not by luck but by repeatable processes, simple checks, and urgent reporting when anomalies appear. The incident also underscored how joint efforts with software providers and port operators around the world strengthen the overall risk posture.
For operations today, implement a practical blueprint: zero-trust access and multi-factor authentication for every remote session, networks segmented by business function, and offline backups that are tested monthly. Build a monitoring loop across multiple levels of the IT stack, powered by threat intelligence feeds and a dedicated incident controller who coordinates response across regional offices and sites worldwide. Document recovery playbooks with clear decision thresholds so leadership can act above the noise. Track performance with recovery time objectives (RTOs) and recovery point objectives (RPOs) that reflect real-world supply chains, not idealized numbers.
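As one concrete illustration of tracking RPO compliance, the minimal sketch below checks a backup catalog against per-system recovery point objectives. The catalog format, system names, and thresholds are hypothetical assumptions for illustration, not Maersk's actual tooling.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical RPO targets per business function (hours).
RPO_HOURS = {"booking": 4, "billing": 12, "erp_finance": 24}

# Hypothetical backup catalog: last completed offline backup per system.
backup_catalog = {
    "booking": datetime(2017, 6, 27, 2, 0, tzinfo=timezone.utc),
    "billing": datetime(2017, 6, 26, 20, 0, tzinfo=timezone.utc),
    "erp_finance": datetime(2017, 6, 25, 23, 0, tzinfo=timezone.utc),
}

def rpo_breaches(now: datetime) -> list[str]:
    """Return systems whose newest offline backup is older than its RPO."""
    breaches = []
    for system, last_backup in backup_catalog.items():
        age = now - last_backup
        if age > timedelta(hours=RPO_HOURS[system]):
            breaches.append(f"{system}: last backup {age} old, RPO {RPO_HOURS[system]}h")
    return breaches

if __name__ == "__main__":
    for line in rpo_breaches(datetime.now(timezone.utc)):
        print("RPO breach:", line)
```

A check like this can run on the monitoring loop described above and feed the monthly backup tests, so a stale backup surfaces before an incident forces the question.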
NotPetya’s legacy is practical: it teaches a risk-aware, data-driven approach that leaves no stone unturned in threat mapping. By privileging human judgment and a structured incident workflow, Maersk saved critical assets and kept customers informed. The approach relies on human intelligence and a clear chain of command in which the incident controller coordinates across functions. Shipping firms can maintain performance under sustained pressure by combining robust backups, rapid containment, and continuous learning from cyber incidents: running drills, reviewing logs, and tightening security boundaries across the network.
Impact of NotPetya on Maersk and the Global Shipping Ecosystem

Immediately implement segmented networks and credential hygiene to cut exposure to cyber-attacks and speed recovery; aim for a reduction in downtime and faster restoration of critical services for customers and partners.
Maersk faced a week-long outage that crippled container bookings, billing, and vessel scheduling. The disruption forced operators to revert to manual processes and created massive backlogs in order processing, while customer teams and partners waited for updates. The incident underscored how a single breach can halt operations across multiple business lines and markets.
Around the globe, shipping hubs such as Rotterdam, Singapore, and other gateways experienced knock-on delays as carriers rerouted around affected networks. Port performance suffered, dwell times rose, and inland connections faced cascading congestion that stretched into the following week. Compared with normal-season baselines, the turbulence strained margins and service commitments for customers, forwarders, and suppliers.
Externally, the NotPetya incident triggered sizable outlays for remediation, new tooling, and staff training, pushing company funding decisions toward cyber resilience. Texas-based data centers and cloud providers were part of the shift toward diversified infrastructure, reducing single-point risk and improving access to backups during recovery. The overall costs highlighted the tension between short-term disruption and long-term resilience.
Industry responses emphasized applied risk controls: stronger access management, multi-factor authentication, and network segmentation to limit the spread of future cyber-attacks. NotPetya accelerated the cadence of internal reviews, tightening incident response playbooks and supplier risk assessments. Conferences and industry forums became venues to share subject-matter insights, aligning executives and operators on practical steps to prevent a repeat and to support funding for ongoing security enhancements. The lesson remains clear: proactive preparation protects customers and preserves the continuity of global trade.
Which Maersk IT systems were disrupted and how long did downtime last?
Restore core SAP ERP and Exchange services within 24 hours and switch to manual processes to maintain critical shipping and billing workflows while the network is rebuilt.
Disrupted systems spanned the core services stack: SAP ERP for finance and logistics; customer-facing booking and invoicing platforms; email and collaboration; file shares; backups and recovery tooling; and several domain-level components such as Windows domain controllers and identity services. Authentication relied on domain identities and password verification; with the domain down, staff operated from offline records, paused workflows, and worked through manual processes with attention focused on damage control. The crisis response placed leadership in direct command of a rebuild team, restoring services in stages and defending Maersk's IT estate from further compromise.
The disruption began when NotPetya spread globally and hit Maersk's networks on June 27, 2017. Downtime lasted roughly 9 to 11 days before the core SAP ERP, email, and operations platforms were back online; other services were gradually restored in the following days, with full restoration around two weeks after the initial hit.
This incident shows the value of fast recovery capabilities and a clear agenda for IT resilience. Prioritize strong identity management and password hygiene, harden domain controllers, and segment networks to limit damage. Rebuild with a phased plan, starting with SAP ERP and core services, then expanding to logistics platforms, while maintaining manual workarounds to keep cargo moving. The crisis response requires funding and realistic budget allocations, because serious investment pays back by reducing downtime and increasing customer trust. The recovery team emphasized a technical approach, with a focus on governance, auditing, and rapid delivery of security patches. The industry now weighs the cost, funds dedicated incident response, and treats the NotPetya event as a source of lasting lessons for long-term resilience.
How NotPetya spread within Maersk and what containment steps were taken
Begin with immediate containment: isolate affected hubs and the core group of servers, revoke compromised privileges, deploy clean OS images, and switch critical services to offline backups. This approach limits further spread and preserves data for post-incident recovery.
NotPetya spread within Maersk through lateral movement across the Windows domain after an initial foothold in a vendor software chain; the worm used stolen credentials to move to multiple servers and then to hubs and regional sites.
Containment steps followed: map the affected system landscape, cut external access, disable common propagation vectors (SMB, PsExec, WMI), reimage servers from clean builds where needed, rotate credentials, restore from offline backups, then verify data integrity and patch Windows with current security updates before operations resume.
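As a hedged illustration of the "verify data integrity" step, the sketch below compares restored files against a checksum manifest kept alongside the offline backups. The manifest format and paths are assumptions made for illustration, not Maersk's actual procedure.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large restores do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(manifest: Path, restore_root: Path) -> list[str]:
    """Return relative paths whose restored contents do not match the manifest.

    The manifest is assumed to hold lines of '<sha256>  <relative/path>',
    written when the offline backup was taken.
    """
    mismatches = []
    for line in manifest.read_text().splitlines():
        expected, rel_path = line.split(maxsplit=1)
        restored = restore_root / rel_path
        if not restored.exists() or sha256_of(restored) != expected:
            mismatches.append(rel_path)
    return mismatches

if __name__ == "__main__":
    bad = verify_restore(Path("backup_manifest.sha256"), Path("/restore/erp"))
    print("Mismatched or missing files:", bad or "none")
```

Keeping the manifest with the offline backup, rather than on the network that was compromised, is the design choice that makes this check trustworthy after an outbreak.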
Engagement with vendors and public authorities clarified guidance and accelerated recovery. Maersk set up a dedicated public channel for incident updates to customers and coordinated with its vendors to track affected devices and close gaps in supply chains.
Post-incident review identified gaps in backups, access controls, and monitoring. The organization tightened its strategy: enforce least privilege, enable MFA, segment networks into hub-like groups, and implement constant monitoring and alerting across servers and endpoints; cross-functional teams defined roles and responsibilities to reduce wasted effort and accelerate detection.
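As one hedged example of the kind of constant monitoring described above, the sketch below flags accounts that authenticate to an unusually large number of hosts within a short window, a common lateral-movement signal. The log format, account names, and threshold are illustrative assumptions, not a specific product's schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative authentication events: (timestamp, account, target_host).
auth_events = [
    (datetime(2017, 6, 27, 10, 0), "svc_deploy", "dc01"),
    (datetime(2017, 6, 27, 10, 2), "svc_deploy", "fileserver02"),
    (datetime(2017, 6, 27, 10, 3), "svc_deploy", "erp-app03"),
    (datetime(2017, 6, 27, 10, 4), "svc_deploy", "hr-db01"),
    (datetime(2017, 6, 27, 10, 5), "jsmith", "mail01"),
]

WINDOW = timedelta(minutes=10)
HOST_THRESHOLD = 3  # assumed tuning value; real baselines vary per account

def lateral_movement_alerts(events):
    """Alert on accounts touching more than HOST_THRESHOLD hosts within WINDOW."""
    alerts = []
    by_account = defaultdict(list)
    for ts, account, host in sorted(events):
        by_account[account].append((ts, host))
    for account, entries in by_account.items():
        for start, _ in entries:
            hosts = {h for t, h in entries if start <= t <= start + WINDOW}
            if len(hosts) > HOST_THRESHOLD:
                alerts.append((account, sorted(hosts)))
                break
    return alerts

if __name__ == "__main__":
    for account, hosts in lateral_movement_alerts(auth_events):
        print(f"Possible lateral movement by {account}: {hosts}")
```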
Financial impact was reported in the hundreds of millions of USD; the number of affected devices ran into thousands of endpoints across dozens of hubs, including servers, workstations, and OT interfaces. The recovery took about one to two weeks to restore core operations, with a longer tail for full network hardening, and depended on close coordination between internal teams and vendors.
Operational fallout: effects on schedules, port calls, and container movements
Adopt a cloud-based, MSP-hosted operations cockpit to centralize real-time signals from vessels, terminals, and customers. A focused intelligence core enables fast re-planning and lets teams respond at the point where disruption begins, keeping users informed and able to act quickly.
Schedule fallout: Across core routes, on-time performance dropped by 18–26% in the first 72 hours, with average vessel delay rising from 6–8 hours to 12–18 hours. The compromise of data integrity created friction for planners, who had to reconcile updates at the workstation and re-check downstream feeds. Floor-level work slowed, but the target is to restore steady rhythms within 24–48 hours for the most critical flows.
Port calls: Several hubs saw tighter port call windows and longer dwell times. On average, port call windows narrowed by 6–12 hours, while dwell time increased by 8–16 hours for affected vessels. An MSP-hosted dashboard enabled better coordination of berths, pilot slots, and gate throughput, reducing queue pressure on the floor.
Container movements: Yard congestion worsened, with container moves slowing 15–25% and truck turnaround rising 20–30% in the worst cases. A single cloud-based feed supported yard cranes, chassis pools, and gate systems, helping teams receive accurate status and avoid misloads. The improved intelligence reduced restocking delays and improved predictability from quay to stack to exit.
Advice for recovery: Define a clear target for schedule reliability and set a single source of truth across the network. Provide a dedicated workstation for the core operators and ensure biopharma lanes have focused oversight. Maintain MSP-hosted services to keep data flows resilient and give users consistent guidance. When disruption hits suddenly, run a quick validation and adjust the plan in minutes.
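To make the schedule-reliability target and the single source of truth concrete, here is a hedged sketch that computes on-time performance and average dwell time from a unified port-call feed. The field names, records, and on-time tolerance are assumptions for illustration, not an actual Maersk or MSP schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PortCall:
    vessel: str
    planned_arrival: datetime
    actual_arrival: datetime
    departure: datetime

# Illustrative records pulled from a single consolidated feed.
calls = [
    PortCall("VesselA", datetime(2017, 7, 1, 6), datetime(2017, 7, 1, 14), datetime(2017, 7, 2, 2)),
    PortCall("VesselB", datetime(2017, 7, 1, 9), datetime(2017, 7, 1, 10), datetime(2017, 7, 1, 20)),
]

ON_TIME_TOLERANCE_HOURS = 4  # assumed service-level definition of "on time"

def schedule_kpis(port_calls):
    """Return (on-time percentage, average dwell hours) for a list of port calls."""
    on_time = sum(
        1 for c in port_calls
        if (c.actual_arrival - c.planned_arrival).total_seconds() / 3600 <= ON_TIME_TOLERANCE_HOURS
    )
    avg_dwell = sum(
        (c.departure - c.actual_arrival).total_seconds() / 3600 for c in port_calls
    ) / len(port_calls)
    return 100.0 * on_time / len(port_calls), avg_dwell

if __name__ == "__main__":
    otp, dwell = schedule_kpis(calls)
    print(f"On-time performance: {otp:.0f}%, average dwell: {dwell:.1f} h")
```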
Financial and contractual implications for Maersk and customers
Adopt a cyber-incident addendum now to set shared costs, service levels, and data-access rights during outages. This clause should apply to MSP-hosted recovery environments, define downtime triggers, and specify how payments and credits flow across Europe and other regions.
The NotPetya-era disruption brought a global network to a standstill, stressing both Maersk’s operations and customer supply chains.
For Maersk, direct costs stemmed from interrupted shipping operations, port calls, and downtime in servers and business applications. For customers, penalties, overtime, expedited freight charges, and cargo demurrage mounted as delays propagated through the network.
Estimates place Maersk’s direct costs in the range of 200–300 million USD, with additional losses from customer SLA credits, revenue shortfalls, and reputational impact in Europe and elsewhere.
This created unprecedented pressure on cash flow and contract terms for both sides. Key financial considerations include:
- Cash flow and invoicing considerations, including credits, revised payment terms, and accelerated or deferred payments during disruptions.
- Insurance and risk-transfer alignment, particularly cyber and business-interruption coverage, with clear triggers and claim documentation.
- Cost allocation rules for resilience investments, such as MSP-hosted backups, redundant servers, and cross-border communications links, including the role of the provider.
- Regulatory and government reporting costs, especially in Europe, plus data-handling compliance during outages.
Contractual implications and recommended provisions:
- Liability caps that reflect practical risk with carve-outs for gross negligence or willful misconduct, plus agreed remedies beyond monetary damages.
- Service credits and performance-based payments tied to defined recovery time objectives (RTO) and recovery point objectives (RPO), including staged recovery milestones (see the sketch after this list).
- Data access, restoration rights, backup retention, encryption standards, and test-restore rights in MSP-hosted environments.
- Clear force majeure language specific to cyber events, avoiding ambiguity across borders and regulatory regimes.
- Price adjustments tied to the duration of the disruption, service levels, and the availability of alternative routes or suppliers where feasible.
- Audit rights and recurring reviews (at least annually) to confirm resilience investments and compliance with communication and recovery testing requirements.
- Escalation paths involving government contact points and industry authorities to coordinate responses in Europe and other markets.
- Assignment of a risk owner to monitor compliance with the terms; include a named contact, such as Morgan, for ongoing discussions with customers.
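As a hedged illustration of the service-credit clause referenced above, the sketch below computes credits from the gap between the agreed RTO and the actual restoration time. The credit tiers, rates, and cap are hypothetical contract terms, not Maersk's actual commercial arrangements.

```python
# Hypothetical tiers: credit (as % of the monthly service fee) per hour past the agreed RTO.
CREDIT_TIERS = [
    (4, 0.5),             # up to 4 hours over RTO: 0.5% per hour
    (12, 1.0),            # hours 5-12 over RTO: 1.0% per hour
    (float("inf"), 2.0),  # beyond 12 hours over RTO: 2.0% per hour
]
CREDIT_CAP_PERCENT = 30.0  # assumed cap on total credits, mirroring a liability cap

def service_credit_percent(rto_hours: float, actual_restore_hours: float) -> float:
    """Return the credit percentage owed for restoring service later than the RTO."""
    overrun = max(0.0, actual_restore_hours - rto_hours)
    credit, prev_limit = 0.0, 0.0
    for limit, rate in CREDIT_TIERS:
        hours_in_tier = max(0.0, min(overrun, limit) - prev_limit)
        credit += hours_in_tier * rate
        prev_limit = limit
    return min(credit, CREDIT_CAP_PERCENT)

if __name__ == "__main__":
    # Example: agreed RTO of 24 h, service actually restored after 40 h.
    print(f"Credit owed: {service_credit_percent(24, 40):.1f}% of monthly fee")
```

Tiered rates of this kind reward partial recovery milestones while the cap keeps total exposure aligned with the liability provisions above.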
Operational recommendations to reduce future exposure:
- Schedule regular development sprints and tabletop exercises to stress-test MSP-hosted servers and recovery workflows.
- Map critical suppliers and routes, and ensure alternative suppliers can step in during a large-scale disruption.
- Invest in redundant communication channels (satellite, secondary carriers) and keep offline copies of data to support rapid recovery.
- Document and rehearse the incident playbook; share concise incident summaries with customers to maintain trust during a crisis.
- Assign a single accountable owner to oversee the contract terms and coordinate improvements with customers, including a named risk owner such as Morgan.
By adopting these measures, Maersk and its customers can limit disruption, stabilize finances, and protect ongoing operations during extraordinary events in Europe and beyond. The goal is a clear, actionable framework that provides reassurance through disciplined planning and transparent communication.
Security improvements and lessons for the shipping industry after attacks

Start with a centralized incident response hub that runs around the clock and coordinates vessels, terminals, and shore-based operations. This centralized setup is the core of your security program, with contingency plans that turn lessons into action within hours. Responsibility for post-attack security rests with leadership and the security teams, which ensures a consistent response. Amid the noise that follows a breach, this approach delivers a measurable reduction in containment time, typically hours rather than days, a trend confirmed by telemetry over several months.
Apply a defense-in-depth concept that spans IT and OT networks. The plan combines network segmentation, least privilege, MFA, and rigorous patching with strict remote-access controls and an up-to-date asset inventory tied to automated monitoring. This combination has reduced downtime and threat exposure, yielding a marked improvement in recovery time.
Develop skills through hands-on labs, micro-simulations, and monthly drills. Create simple runbooks and a concise post-incident communication guide for crews and shore staff. Have teams practice across groups in realistic, ground-level operations; whatever scenario arises, they are prepared to contain and recover.
Coordinate with vendor and partner groups to share threat intelligence and indicators. Publish short, practical post-incident notes within your governance model so field teams can act quickly. TechTarget benchmarks referenced in your policy provide a standard to measure against and can serve as a baseline.
Track concrete metrics to verify impact: reduction in mean time to containment, time to restore critical services, share of devices with current patches, and backup success rate. Review available telemetry to inform decisions and hold a monthly executive briefing on the organization's risk posture. This data supports the decisions your security teams make over months of trial runs.
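As a hedged sketch of how these metrics might be computed from incident and asset records, the example below derives mean time to containment, mean time to restore, patch coverage, and backup success rate. The record formats and figures are assumptions for illustration only.

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records: (detected, contained, critical services restored).
incidents = [
    (datetime(2017, 6, 27, 8), datetime(2017, 6, 27, 20), datetime(2017, 7, 5, 12)),
    (datetime(2017, 9, 14, 9), datetime(2017, 9, 14, 13), datetime(2017, 9, 15, 9)),
]

# Illustrative asset state and backup job outcomes.
devices_patched, devices_total = 1840, 2000
backup_jobs_ok, backup_jobs_total = 117, 120

def hours(delta):
    """Convert a timedelta to hours for reporting."""
    return delta.total_seconds() / 3600

mean_time_to_containment = mean(hours(contained - detected) for detected, contained, _ in incidents)
mean_time_to_restore = mean(hours(restored - detected) for detected, _, restored in incidents)
patch_coverage = 100.0 * devices_patched / devices_total
backup_success_rate = 100.0 * backup_jobs_ok / backup_jobs_total

print(f"Mean time to containment: {mean_time_to_containment:.1f} h")
print(f"Mean time to restore critical services: {mean_time_to_restore:.1f} h")
print(f"Patch coverage: {patch_coverage:.1f}% | Backup success rate: {backup_success_rate:.1f}%")
```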
| Area | Action | Owner | Timeline | Notes |
|---|---|---|---|---|
| Incident response | Establish a centralized 24/7 hub and cross-vessel teams | Security lead | 0–3 months | Aligned with the post-attack plan; track MTTR |
| Asset management | Build a live inventory; segment networks; enforce least privilege | IT/Operations | 1–6 months | Keep asset registers regularly updated |
| Access control | Enforce MFA; restrict remote access; policy-based permissions | IAM team | 0–4 months | Audit trail required |
| Backup and DR | Implement air-gapped backups; test restores monthly | IT/CTO | 0–6 months | Verify restore times |
| Training & drills | Tabletop and live exercises; participation across group boundaries | Security training | Months 1–12 | Include frontline operators in exercises |
Ongoing dialogue with leadership and crews keeps security coordinated while the fleet is in operation. The focus remains pragmatic, with concrete steps, available tools, and practical timelines. Taken together, these measures turn the moment after an attack into a turning point for the industry, amid ongoing threats and shrinking margins.