Maersk Cyber Attack – How NotPetya Crippled Global Shipping

By Alexandra Blake
11 minutes read
Blog
December 16, 2025


Action: segment core services onto hardened networks, maintain isolated offline backups, and train teams in rapid response to cut disruption when a cyber-attack hits. The goal is to reduce exposure and ensure rapid recovery.

In the NotPetya outbreak, Maersk's core IT environment was shut down to contain the spread, crippling shipping software and causing port calls to be missed or delayed. The incident disrupted operations for days and triggered recovery costs reported in the hundreds of millions. It revealed how quickly a cyber-attack can cross networks and affect global trade, stressing Maersk's defences and its teams' ability to respond. A single miss in early detection can escalate quickly.

To prevent a repeat, run hands-on workshops focused on incident tracking, tabletop exercises, and real-time detection. Build a cross-functional team that includes operations, IT, and security, so defences can adapt and decisions remain rapid. Track all indicators, from initial missed alerts to system isolation, and surface gaps in how teams respond so they can be fixed before a real incident hits.

Focus on Maersk's core operations: container management, cargo visibility, and customer communications. Establish offline backups for critical data, keep backups on separate networks, and maintain a rapid restoration playbook tested against simulated disruptions. Regular drills will shorten the time from detection to containment, limiting disruption for partners and customers.

The speed with which teams respond shapes resilience. Share lessons with suppliers and ports so they can mirror the same defences, and use transparent tracking to measure progress across the network. This approach steadies operations during cyber-attack events and strengthens Maersk's international shipping network.

NotPetya entry points: how malware spread into Maersk networks

Isolate and segment critical networks immediately to stop the spread. The initial entry point for Maersk's NotPetya incident was a subsidiary network that received a compromised software update from a trusted vendor, allowing a destructive wiper to run. This shows how a targeted supply-chain breach can present as routine activity, and how staffers' ordinary work within a trusted software channel can accelerate a cyberattack across the wider environment. Early containment would have limited the blast radius.

NotPetya then moved across networks through management tools and legitimate credentials. It exploited a standard lateral-movement path across Windows environments, using PsExec, Windows admin shares, and other remote techniques to reach devices and servers across a subsidiary network. Tracking login and process activity in the early hours would have signaled abnormal behavior and allowed quicker containment.
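The process-tracking idea above can be sketched in a few lines. This is a minimal illustration, not a production detector: the event format is a hypothetical dictionary, where real telemetry would come from an EDR or Windows event logs (e.g. Sysmon process-creation events), and the tool list covers only the utilities named in this article.

```python
# Hypothetical endpoint events: flag process names commonly abused for
# lateral movement (PsExec and WMI, as described above).
SUSPECT_PROCESSES = {"psexec.exe", "psexesvc.exe", "wmic.exe"}

def flag_lateral_movement(events):
    """Return events whose process name matches a known lateral-movement tool."""
    alerts = []
    for ev in events:
        name = ev.get("process", "").lower()
        if name in SUSPECT_PROCESSES:
            alerts.append(ev)
    return alerts

# Illustrative sample: one benign process, one PsExec launch on a terminal host
sample = [
    {"host": "TERM-01", "process": "explorer.exe"},
    {"host": "TERM-02", "process": "PsExec.exe"},
]
print(flag_lateral_movement(sample))  # only the PsExec event is flagged
```

In practice such a rule would be one signal among many, correlated with unusual login times and credential use rather than acted on alone.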

To defend now, implement concrete controls:

  • Enforce full network segmentation and restrict cross-network access.
  • Minimize and rotate administrative credentials, and require MFA for remote login.
  • Apply strict application allowlisting and patch critical Windows vulnerabilities promptly.
  • Monitor staffers’ activities with user and entity behavior analytics, and disable legacy protocols that enable lateral movement.
  • Ensure offline backups and regularly test recovery; isolate infected hosts and wipe them before reconnecting.
  • Run tabletop exercises with subsidiary teams so lessons learned are embedded in the management line.
  • Keep logs and telemetry across all networks for tracking and forensics, and build a response playbook that protects operations in case of another cyberattack.

Beyond immediate containment, strengthen resilience by aligning cybersecurity with business continuity. Maintain visibility across all subsidiaries, reduce reliance on shared credentials, and train staffers to report phishing and suspicious activity. A rapid, coordinated response would manage incidents more effectively across networks and show measurable improvements in containment time and total downtime after an incident.

Immediate operational impact: outages at terminals, shipping schedules, and IT systems

Immediately isolate the compromised network segments and switch critical operations to offline backups to prevent further spread and stabilize freight processing. Across the group, implement temporary offline workflows for terminals, inland shipping, and customer data portals, including manual reconciliation of freight movements and container bookings. This action limits malware spread and preserves data integrity while the responsible teams restore core systems and validate data in the affected subsidiary networks.

Outages at terminals across international operations crippled crane control, yard management, and vessel loading systems. In the first 24 hours, scheduling data vanished from the terminal operating and ERP systems, pushing departures and arrivals out by 12–36 hours on several routes. Some terminals experienced 1–2 day backlogs, forcing inland moves to rely on manual processes and leading to missed windows for perishable freight and time‑sensitive shipments.

The group’s IT systems suffered a broader impact: the network collapsed under malware pressure, affecting ERP, WMS, TMS, and linked data stores. Email, invoicing, and customer portals went offline in multiple subsidiaries, eroding visibility of data and the ability to share live freight status. Recovery required rebuilding domain controllers, reinstalling core software, and conducting strict data validation to avoid corrupted bookings or duplicate shipments.

The incident exposed vulnerabilities in endpoint hygiene and network segmentation. The lessons emphasize tighter segmentation, frequent backups, and tested recovery playbooks. A cross‑functional workshop, including security, operations, and commercial teams, should become a standing activity for the group to reduce risk across regions and subsidiaries while maintaining service levels.

To restore full capability quickly, execute a focused 72‑hour plan: reimage affected endpoints, restore clean data from secure backups, and reestablish the core network with enhanced monitoring. Responsible managers should assign clear ownership to each subsidiary, set up real‑time dashboards for freight and schedule status, and run a brief data‑driven drill to confirm integrity. A year‑long commitment to resilience should include updated vendor controls, refreshed access policies, and a data‑driven timeline for full recovery, with lessons learned folded back into the workshop.

Direct and indirect costs: downtime, remediation, and lost revenue

Implement rapid network segmentation and offline backups within 24 hours to minimize downtime. This prevents further spread of NotPetya and helps international cargo and services resume quickly while access to key networks and computer systems is restored, though some systems may require staged reactivation.

Downtime lasted roughly 3 to 7 days for core operations, with outages blocking access to reservation systems, tracking, and cargo movement, delaying tens of thousands of containers and reducing service revenue. Global estimates place direct losses in the hundreds of millions of USD, commonly cited around $200–300 million, with a portion tied to remediation and rebuilding damaged networks and computer infrastructure.
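The arithmetic behind such estimates is simple and worth making explicit. The sketch below is a back-of-the-envelope model only: the per-day revenue-at-risk and remediation figures are illustrative assumptions chosen to land inside the commonly cited range, not Maersk disclosures.

```python
# Illustrative downtime cost model: outage days times revenue at risk,
# plus a fixed remediation bill. All inputs are assumed figures.
def downtime_cost(days_down, revenue_at_risk_per_day, remediation):
    return days_down * revenue_at_risk_per_day + remediation

# Assumed: 5 days of core outage, $40M/day at risk, $50M remediation
total = downtime_cost(5, 40e6, 50e6)
print(f"${total / 1e6:.0f}M")  # $250M, within the cited $200-300M range
```

A real model would add indirect terms, such as penalties, churn, and insurance effects, which the next paragraphs note often exceed the direct bill.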

Remediation and recovery expenses covered forensic analysis, software restoration, server rebuilds, patching, and security upgrades, plus expanded monitoring and immutable backups. Those charges likely ran from tens to low hundreds of millions, depending on the size of the network and the extent of damage, and included overtime for staff, equipment procurement, and new licenses. Though challenging, those steps would reveal weaknesses in protocols and access controls across those networks.

Indirect costs included lost revenue from delayed shipments, penalties, and customer churn, as well as reputational damage and higher insurance premiums. Those effects can extend for months and often exceed the initial remediation bill, especially where international customers evaluate continuity for their own supply chains. Those costs are often spread across other services and future contracts.

Recommended measures to cut exposure:

  • Segment networks into secure zones, enforce least-privilege access, and maintain offline immutable backups.
  • Deploy endpoint detection and response (EDR) and centralized logging.
  • Require MFA for all administrator access, and implement application allow-listing and targeted security protocols.
  • Track cargo and service data to support rapid containment.
  • Establish an international incident playbook with clear communication for customers.

Regular drills and tracking of response times will show where recent cyber-attack patterns are trending, and can reveal gaps before a real incident.

Tracking post-incident metrics enables better budgeting and planning: measure time to containment, time to restore access, and revenue recovered. By tightening protocols and strengthening defenses, the downtime can be limited to hours rather than days, and the damage to cargo flows and services can be substantially reduced, even when those networks face another cyber-attack.
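The two time-based metrics named above can be computed directly from incident timestamps. This is a minimal sketch; the timestamp format and field names are assumptions for illustration, and a real program would pull these from an incident-management system.

```python
# Compute time-to-containment and time-to-restore (in hours) from
# three incident timestamps. Format and values are illustrative.
from datetime import datetime

def incident_metrics(detected, contained, restored):
    fmt = "%Y-%m-%d %H:%M"
    t0 = datetime.strptime(detected, fmt)
    t1 = datetime.strptime(contained, fmt)
    t2 = datetime.strptime(restored, fmt)
    return {
        "time_to_containment_h": (t1 - t0).total_seconds() / 3600,
        "time_to_restore_h": (t2 - t0).total_seconds() / 3600,
    }

# Hypothetical drill: detected mid-morning, contained in 4h, restored in 48h
m = incident_metrics("2025-06-27 10:30", "2025-06-27 14:30", "2025-06-29 10:30")
print(m)
```

Trending these numbers across drills is what turns them into a budgeting tool: a falling time-to-containment is direct evidence the playbook is working.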

Incident response and remediation steps: containment, backups, and system restoration

Contain the breach within minutes by isolating affected segments, revoking compromised credentials, and routing work through clean, offline backups to maintain continuity and limit disruption. Immediately declare an incident and assign a responsible lead who coordinates cross-functional teams across IT, security, operations, and legal. The Maersk case shows how fast containment affects resilience and demonstrates the value of a prepared playbook that staff can follow without hesitation. If external help is needed, an outside advisory engagement can supplement the internal plan so teams can act immediately and keep ships moving. This approach keeps systems functional while investigators identify the breach and preserve evidence.

Backups must be offline, immutable, and tested. Activate the latest clean copies to restore critical systems first, including ERP, order management, and financials. Maintain several restore points, with at least one offline copy covering the past year and another for peak periods. Validate data integrity with checksums and routine drills to confirm full recovery is possible even if primary storage is compromised.
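Checksum validation of backups is straightforward to automate. The sketch below assumes a manifest, captured at backup time, mapping each file path to its SHA-256 digest; the manifest format is an assumption for illustration, and dedicated backup tooling would normally manage this.

```python
# Verify backup files against a manifest of SHA-256 digests recorded
# at backup time. Returns the paths that fail verification.
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 so large backups don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backups(manifest):
    """manifest: dict of path -> expected hex digest. Returns failed paths."""
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]
```

Running this as part of a routine restore drill, rather than only after an incident, is what gives confidence that the offline copies are actually usable.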

Restoration should be staged in a clean environment: bring back non-critical systems first, apply a gold image for servers, and run a malware sweep across endpoints before reintroducing them to production. Change all credentials and enforce multi-factor authentication, then reestablish connectivity through segmented networks to avoid a single point of failure. Use a rapid, controlled rollout to ensure business continuity while monitoring for latent threats that may lurk in backups or dormant code.

Document lessons and share them with stakeholders through a seminar or post-event report. The late phase of the aftermath often reveals gaps in detection, containment, and communication; addressing those gaps strengthens resilience against future attacks, including those targeting logistics and other sectors. The lessons from this incident should inform the incident response playbook and external partners, ensuring teams stay prepared and customers see minimal disruption. Sharing results at seminars reaches the broader community and reinforces responsibility across Maersk and the wider world of shipping.

Ongoing improvement requires routine testing, technology investments, and clear responsibilities. Define who is responsible for containment, backups, and restoration, and practice with quarterly tabletop exercises. Rehearse the full continuity plan in a real-world context by inviting industry peers to a seminar, analyzing the aftermath, and refining processes based on measured outcomes. The result is a culture of resilience that reaches beyond your organization to the wider shipping and technology sectors, turning a disruption into stronger systems and faster recovery, as a year-long resilience program can demonstrate for Maersk and others in the sector.

Longer-term effects on the supply chain: customer trust, rerouting, and insurance considerations

Recommended action: implement a resilience framework within 90 days that aligns customer communication, operational rerouting, and cyber risk insurance. This plan connects those functions across the core of the organization, including teams at headquarters and regional offices.

Customer trust will recover faster when recent incidents are communicated with transparency. Establish a formal status email cadence and a public dashboard that shows ETA windows, current delays, and remediation steps. Those updates should be timely, factual, and easy to understand, reducing the volume of inbound inquiries and protecting the brand’s reputation among international customers and partners.

Rerouting and moving goods demand a disciplined approach. Build an automated, multi-modal rerouting capability that can shift shipments between sea, air, rail, and inland networks where disruptions occur. Create a cross-functional playbook that defines where to move goods, how to reallocate capacity among those suppliers, and how to quantify service level impacts in near real time. Increasingly, networks rely on diversified lanes; the goal is to keep material moving with minimal stack time between alerts and action.
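A rerouting playbook ultimately reduces to a decision rule over available lanes. The sketch below is a deliberately simplified version: the lanes, transit times, and availability flags are hypothetical, and a real system would weigh cost, capacity, and contractual service levels, not transit time alone.

```python
# Pick the fastest available alternate lane when the primary mode is
# disrupted. Lane data is illustrative: (mode, transit_days, available).
def pick_reroute(lanes, disrupted):
    """Return the fastest usable lane, or None if nothing is available."""
    candidates = [l for l in lanes if l[2] and l[0] not in disrupted]
    return min(candidates, key=lambda l: l[1]) if candidates else None

lanes = [("sea", 14, True), ("air", 2, True), ("rail", 7, False)]
print(pick_reroute(lanes, disrupted={"sea"}))  # air wins: rail is unavailable
```

The value of pre-negotiated capacity pools, mentioned below in the contingency list, is precisely that the `available` flag stays true when a disruption hits.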

Insurance considerations require proactive collaboration with underwriters to map supply-chain risk to policy terms. Ensure policies cover business interruption caused by cyber events, data integrity issues, and supplier disruptions. Recommended features include clear triggers, regional sub-limits, and the ability to claim for rerouting costs, expedited shipping, and third-party remediation. This approach helps transform cyber risk from a surprise cost into a managed budget line, potentially avoiding billion-dollar gaps in resilience funding.

In practice, balance transparency with protection of sensitive data. Use secure email communications for customers, establish a centralized incident response contact, and publish a concise incident summary after containment. Those steps support trust while avoiding overexposure of operational details that attackers could exploit.

To close gaps, integrate three concrete actions:

  • Implement an enhanced, end-to-end visibility layer that tracks shipments across alternate routes and flags deviations within minutes.
  • Develop international supplier contingencies, including pre-approved alternate carriers and pre-negotiated capacity pools to move goods rapidly when routes are disrupted.
  • Conduct regular insurance reviews with headquarters and regional teams, incorporating learnings from WannaCry and similar attacks into coverage and response playbooks.
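The visibility layer in the first action above hinges on a deviation check. This sketch shows the core of it under assumed data shapes: shipments carry planned and actual ETA offsets in hours, and anything slipping past a threshold is flagged; a real layer would stream carrier events and alert within minutes, as the bullet describes.

```python
# Flag shipments whose reported ETA slips past a threshold from plan.
# Shipment records and the 6-hour threshold are illustrative assumptions.
def flag_deviations(shipments, threshold_hours=6):
    return [
        s["id"] for s in shipments
        if s["eta_actual_h"] - s["eta_planned_h"] > threshold_hours
    ]

shipments = [
    {"id": "MSKU100", "eta_planned_h": 0, "eta_actual_h": 2},   # within tolerance
    {"id": "MSKU200", "eta_planned_h": 0, "eta_actual_h": 12},  # flagged
]
print(flag_deviations(shipments))
```

Feeding these flags into the rerouting playbook and the customer-facing dashboard closes the loop between detection, action, and communication.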