

DCSA’s API Standards – Making Data Accessible in the Global Shipping Industry

by Alexandra Blake
4 min read
Blog
February 13, 2026


Implement DCSA APIs now: prioritize the booking, events and tracking endpoints and adopt standardized payloads within 12 weeks to reduce manual touchpoints by ~40% and deliver near-real-time updates to each user.

Start with three practical steps: map existing interfaces, expose a thin API layer for legacy systems that still use EDI, and pilot satellite-based position feeds to enrich event records. This approach preserves current operations while creating a seamless path for automated notifications and ETA corrections.

Measure outcomes to prove value: track average time-to-confirm for a booking, the percentage reduction in email threads, the frequency of status updates per voyage, and API-call utilisation per customer. These metrics show how standardized APIs are transforming procurement and operations across the supply chain, and how teams work together to reduce dwell time and misrouting per vessel.

Follow clear plans: run a three-part pilot with one carrier, two shippers and a terminal; publish SLA and versioning rules; provide SDKs and sample payloads; and schedule weekly integration checkpoints. Those concrete steps produce repeatable outcomes today and make standardized data available for downstream systems, analytics and partner integrations.

DCSA API Standards: Making Data Accessible in Global Shipping – Aligning Container Vessel Arrivals and Platform Deployment for Digitisation

Implement DCSA API standards within 12 months to synchronize vessel ETA publishing, berth allocation and platform deployment, reducing manual handoffs and unlocking automatic operational updates.

Require APIs to publish ETA and status updates at 15-minute intervals and support event-driven webhooks for immediate communication between carriers, terminals and agents. Integrate AIS and satellite feeds so position and speed are captured along the route and transferred into port systems; standardize payloads with ISO 8601 timestamps and UN/LOCODEs to avoid mapping errors during transfer. Plan three major releases per year with semantic versioning and clearly published backward-compatibility windows to give users predictable upgrade cycles.
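As an illustration of such a standardized payload (the field names here are assumptions for the sketch, not the published DCSA schema), an ETA update combining an ISO 8601 UTC timestamp with a UN/LOCODE might look like:

```python
from datetime import datetime, timezone

# Hypothetical ETA update payload: an ISO 8601 UTC timestamp plus a
# UN/LOCODE, so receivers never have to guess timezones or port spellings.
eta_update = {
    "eventType": "ETA_UPDATED",
    "vesselIMO": "9321483",
    "portCode": "NLRTM",  # UN/LOCODE for Rotterdam
    "etaUTC": datetime(2026, 3, 10, 14, 0, tzinfo=timezone.utc).isoformat(),
    "sourceSystem": "carrier-planning",
}
```

Keeping timestamps timezone-aware at the point of construction avoids the most common mapping error: naive local times silently interpreted as UTC downstream.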

Optimize berth windows by combining operational feeds from terminal operating systems and carrier planning systems; pilots with Maersk and terminal partners show up to a 20% reduction in berth waiting time and higher berth utilization than manual scheduling achieves. Use automatic berth reassignment rules and notifications to reduce idle minutes between vessel movements, improving throughput more than ad hoc messaging can.

Include environmental metrics in every API payload: fuel consumption, auxiliary engine hours and satellite-derived emissions estimates. Feed those captured metrics into port and carrier management dashboards to quantify CO2 per TEU and set measurable targets for sustainable development. They allow operators to compare routing and berthing choices in terms of emissions and operating cost, enabling strategic choices that deliver better sustainability outcomes.

Establish governance that ties standard conformance to contractual SLAs, clarifies communications protocols and assigns responsibility for data quality. Promote developer toolkits, sandbox environments and open documentation to speed development and onboarding of new users. Monitor KPIs – arrival-time accuracy, automatic transfer success rate, and reduction in manual interventions – and publish quarterly reports so stakeholders can measure progress and plan further integration.

Operational Blueprint: Applying DCSA APIs to Align Vessel Arrival Data


Implement a standardized arrival-state reconciliation process using DCSA Event, Voyage and Port Call APIs with a 24-hour SLA to align vessel arrivals across carriers, ports and terminals.

Capture arrivals data from three primary sources: carrier manifests via DCSA Voyage API, terminal systems via Port Call API, and AIS feeds. Use Event API webhooks for swift updates and fall back to periodic polling at 5-minute intervals when webhooks fail. Tag each record with a SOURCE field and a capture_timestamp to preserve provenance and support audit trails.
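The fallback-polling pass can be sketched as follows (a sketch, not a reference implementation: `fetch_events` and `store` are hypothetical callables standing in for the Event API client and the local event store):

```python
import time

POLL_INTERVAL_S = 300  # 5-minute fallback polling when webhooks fail

def poll_once(fetch_events, store, last_seen=None):
    """One fallback polling pass; returns the newest event timestamp seen.

    `fetch_events(since=...)` and `store(event)` are hypothetical
    callables for the Event API and the local event store."""
    for event in fetch_events(since=last_seen):
        event.setdefault("SOURCE", "event-api-poll")  # tag provenance
        event["capture_timestamp"] = time.time()      # audit-trail support
        store(event)
        last_seen = event["eventTimestamp"]
    return last_seen
```

Returning the last processed timestamp lets the caller resume the next pass without re-ingesting events, which matters once polling alternates with webhook delivery.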

Normalize incoming payloads to a single operational model with these specific canonical fields: vessel_imo, voyage_id, eta_utc, ata_utc, berth, status_code, bunker_onboard_mt, estimated_fuel_burn_mt, customs_status, and event_origin. Convert all times to UTC, round to nearest minute, and map disparate status codes to a 7-state arrival taxonomy (planned, ETA updated, underway, arrived, alongside, berthed, departed).

Use deterministic matching rules: exact match on vessel_imo + voyage_id if available; otherwise match on vessel_imo + eta window ±72 hours using a scoring function that weights Event API = 0.5, AIS = 0.3, terminal messages = 0.2. Flag mismatches with a score <0.6 as discrepancies for human review. Define a discrepancy threshold of 60 minutes for ETA vs ATA; treat larger gaps as operational exceptions that may increase fuel usage and customs hold time.
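A minimal sketch of that scoring function, assuming candidates carry a list of agreeing sources (the dict shapes are assumptions for illustration):

```python
from datetime import datetime

# Weights and thresholds from the matching rules above.
SOURCE_WEIGHTS = {"event_api": 0.5, "ais": 0.3, "terminal": 0.2}
ETA_WINDOW_H = 72
DISCREPANCY_THRESHOLD = 0.6

def match_score(candidate: dict, arrival: dict) -> float:
    """Score a candidate record against an arrival (0..1)."""
    if candidate["vessel_imo"] != arrival["vessel_imo"]:
        return 0.0
    if candidate.get("voyage_id") and candidate["voyage_id"] == arrival.get("voyage_id"):
        return 1.0  # exact vessel_imo + voyage_id match
    gap_h = abs((candidate["eta_utc"] - arrival["eta_utc"]).total_seconds()) / 3600
    if gap_h > ETA_WINDOW_H:
        return 0.0  # outside the +/-72h ETA window
    return sum(SOURCE_WEIGHTS.get(s, 0.0) for s in candidate["agreeing_sources"])

def needs_review(score: float) -> bool:
    return score < DISCREPANCY_THRESHOLD  # route to human review
```

Keeping the weights in one table makes it easy to retune them against pilot data without touching the matching logic.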

Integrate exception handling into user interfaces and machine interfaces simultaneously. Push high-severity exceptions to carrier operations teams via API notifications and to terminal planners via UI dashboards. Provide users with suggested corrective actions: reroute to alternate berth, request immediate bunker adjustment, or pre-clear customs documents. Record operator decisions and time-to-resolution for later analysis.

Measure performance with these KPIs and targets: discrepancy_rate (<5% within 6 months), mean_time_to_align (<4 hours), SLA_adherence (≥98%), bunker_variance (±3% of reported on‑board), and economic_impact_estimate per arrival. Use past three months as baseline and run an October pilot on a leading Asia-Europe lane with 500 captured arrivals to validate savings and model accuracy.

Apply transformation rules that reduce duplicated events and improve utilisation of system interfaces: drop duplicates within 2 minutes from the same source, merge events sharing identical vessel_imo+event_origin+timestamp, and store raw payloads for lineage. Annotate records with an events_history array so analysts can understand the timing patterns that led to discrepancies along the voyage.
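The two dedup rules can be sketched as one pass over time-ordered events (a sketch assuming each event carries `source`, `vessel_imo`, `event_origin` and a numeric `ts` in seconds):

```python
def dedupe(events, window_s=120):
    """Drop a repeat of the same vessel_imo+event_origin from the same
    source within 2 minutes, and merge events that share an identical
    vessel_imo+event_origin+timestamp across sources."""
    kept, last_seen = [], {}
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["source"], ev["vessel_imo"], ev["event_origin"])
        prev_ts = last_seen.get(key)
        if prev_ts is not None and ev["ts"] - prev_ts < window_s:
            continue  # duplicate within the 2-minute same-source window
        if any(k["vessel_imo"] == ev["vessel_imo"]
               and k["event_origin"] == ev["event_origin"]
               and k["ts"] == ev["ts"]
               for k in kept):
            continue  # identical vessel_imo+event_origin+timestamp: merge
        last_seen[key] = ev["ts"]
        kept.append(ev)
    return kept
```

In production the dropped raw payloads would still be stored for lineage, per the rule above; only the canonical stream is deduplicated.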

Govern data flows with versioned API contracts, OAuth2 tokens per partner, rate limits tuned to peak operations (recommended 1,200 calls/min for Event subscriptions), and retention policies (raw for 12 months, aggregated for 36 months). Maintain an industry-wide schema registry so partners can align on field semantics and reduce mapping efforts beyond initial integration.

DCSA API          | Key fields captured                                                  | Action                                                   | SLA / Target
Event API         | event_type, timestamp, location, vessel_imo, voyage_id               | Real-time capture, webhook first, debounce 2 min         | Webhook delivery <30s; retry twice
Voyage API        | voyage_id, eta_utc, planned_port_calls, cargo_manifest               | Populate canonical voyage record, flag manifest mismatches | Sync every 4 hours; updates within 1 hour
Port Call API     | port_call_id, berth, alongside_time, berthed_time, customs_status    | Align port-side status, surface customs holds            | Update <15 min after local event
Reference Data API | locations, terminals, carrier_codes                                 | Resolve names/IDs, reduce mapping errors                 | Weekly refresh; hotfixes within 24h
Location / AIS    | lat, lon, sog, cog, timestamp                                        | Supplement ETA/ATA, fuel burn estimation                 | Stream latency <60s

Run a three-month pilot with these steps: deploy integrations on a leading lane in October, ingest 500 past and live arrivals to train the reconciliation model, iterate rules to reduce false positives to <8% and measure economic impact monthly. Report results to customs partners to shorten clearance windows and to OPS teams to reduce waiting time that drives bunker consumption. Use lessons from the pilot to expand industry-wide and scale interfaces so users see aligned arrivals data beyond local silos.

Identify critical endpoint set for vessel arrival alignment: required DCSA APIs and payloads

Implement this minimal endpoint set to align vessel arrivals: Port Call (voyage/eta), Event Notifications (webhook), Transport/Movement, Booking, Equipment/Container, Terminal Interface, Reference/Location and Party APIs.

Port Call payload: include voyageId, vesselIMO, vesselName, voyageNumber, portCode (UN/LOCODE), scheduledArrival (ISO‑8601), estimatedArrival, scheduledDeparture, berthingWindowStart, berthingWindowEnd, berthCode, draftMeters, nextPortCode, and sequenceVersion. Use numeric fields for TEU and draft, and provide sourceSystem and lastUpdatedBy. For example: scheduledArrival: “2026-03-10T14:00:00Z”. Transmit full voyage snapshot on first sync and deltas thereafter.

Event Notifications payload: eventType, eventTimestamp, relatedId (voyageId, containerNumber, bookingReference), locationCode, statusCode, details, sequenceNumber, and idempotencyKey. Send JSON:API-formatted POSTs (application/vnd.api+json) to subscriber endpoints, retrying with exponential backoff. Mark events as automatic or manual and include eventProvenance to identify the system that generated the event.

Transport/Movement payload: transportId (GUID), shipmentId, bookingReference, carrierBookingReference, billOfLadingNumber, originUNLOC, destinationUNLOC, mode, containers:[{containerNumber,sizeType,status}], cargoType, weightKg, loadedOnVoyageId, and currentStatusTimestamp. Provide manifestReference and estimatedOnboardTime when available to help terminal planning.

Booking payload: bookingReference, shipperPartyId, consigneePartyId, commodities, totalTEU, containerRequirements, portCutoffTimes:{terminalCutoff,gateCutoff,docsCutoff} with timezone, requestedPickupDate, and confirmedStatus. Use these values to drive terminal slotting and appointment systems; store bookingVersion for reconciliation.

Equipment/Container payload: containerNumber, isoSizeType, tareKg, grossKg, currentLocationCode, lastFreeTime, containerStatus, and sealInfo. Connect these records to transportId and voyageId so terminal systems and customers see a single source of truth.

Terminal Interface payload: terminalCode, berthCode, availableCraneCount, plannedBerthStart, plannedBerthEnd, gateSlots (timestamp windows), yardOccupancyPercent, and serviceLevels. Design the model so terminals can transmit berth confirmations and slot assignments back into the Port Call and Booking domains.

Reference and Party payloads: partyId (carrier, terminal, shipper, consignee), names, roles, contactMethods, and locationList. Supply UN/LOCODE and standardized role terms to promote consistent matching across the constellation of systems involved in shipping.

Protocol and implementation rules: adopt JSON:API content-type, require RFC3339 timestamps, enforce mandatory keys listed above, support PATCH for delta changes, and version endpoints. Use webhooks for automatic event delivery and provide a pull API for full sync. Rate-limit guidance: allow 5 requests/sec per client for sync endpoints and 1,000 webhook events/min with backoff on 429 responses.

Operational recommendations: transmit ETA updates more frequently the closer a vessel is – for example, every 15 minutes when >48 hours out, every 5 minutes within 6 hours, and immediate on actualArrival/berth. Include sequenceNumber and lastProcessedEvent in responses so consuming systems can resume without duplicates and maintain idempotency.
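That proximity-based cadence can be sketched as a small helper (the 10-minute middle band between 6 and 48 hours is an assumption; the text above only fixes the outer bands):

```python
def eta_update_interval_s(hours_to_arrival: float) -> int:
    """Suggested ETA publish cadence: denser updates closer to arrival.

    >48h out: every 15 min; inside 6h: every 5 min (per the guidance
    above); the 10-min band in between is an assumed interpolation."""
    if hours_to_arrival > 48:
        return 15 * 60
    if hours_to_arrival > 6:
        return 10 * 60  # assumed intermediate cadence
    return 5 * 60
```

Actual arrival and berthing events would bypass this schedule entirely and publish immediately, as the recommendation states.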

Monitoring and commissioning: expose metrics for deliverySuccessRate, averageLatencyMs, and processingErrors per endpoint. Run a commissioning checklist that includes schema validation, webhook retries, authentication (mTLS or OAuth2), and end‑to‑end tests between carrier, terminal and customer systems. Assign a cross‑functional team to develop and monitor these items during the first sprint.

Mapping and data model guidance: map local fields to canonical names (voyageId, portCode, scheduledArrival) in an integration layer; preserve original source identifiers in a sourceRef array. Use a changeLog model that records who changed what and when to drive downstream reconciliation and customer notifications.

Security and governance: require token scopes per endpoint and limit data returned by role. Record consent and contractual terms for data sharing and include auditTrail entries for transmitted events so customers and terminals can verify provenance.

Start implementation from a prioritized checklist: 1) Port Call and Event webhooks, 2) Transport and Booking sync, 3) Terminal interface and Equipment, 4) Reference and Party consolidation, 5) monitoring and commissioning tests. This ordering reduces integration work compared with ad hoc approaches and lets the team deliver visible arrival alignment for shipments within weeks.

Translate port call events into a canonical arrival timeline: mapping rules and timestamp precedence

Apply a five-step mapping and timestamp-precedence rule set to generate a single canonical arrival timeline for each ship and shipment: map raw events to canonical phases, select the highest-priority timestamp for each phase, attach source confidence, and flag conflicts for human review.

Define canonical phases as: approach (vessel within 24 NM of port), pilotOnBoard, alongside/berth, startCargoOps (receipt of first container move), completeCargoOps, and departed. Map source events into these phases: AIS position reports, pilot/port authority manifests, terminal operating system (TOS) berth events, carrier operation messages (SOC/COC), gate-in/gate-out container scans, and vessel logbook entries. Store one canonical timestamp per phase and keep original event lists for audit and reconciliation.

Timestamp precedence (highest to lowest): 1 – Port authority / pilot timestamps for pilotOnBoard and berth authorisations (authoritative). 2 – Terminal operating system timestamps for alongside/berth and startCargoOps. 3 – Terminal/container gate scans for receipt and container moves. 4 – Carrier operational messages (ETA/ATA, SOC/COC) for planned and operator-confirmed milestones. 5 – AIS-derived timestamps (position crossing the X NM threshold, stopped at berth) for automated detection. Always normalise to UTC ISO 8601 and attach a source identifier and confidence score.

Resolve conflicts with deterministic rules: if a higher-priority timestamp exists, use it for the canonical phase. If lower-priority data precedes a higher-priority timestamp by more than threshold T1 = 30 minutes for berth/alongside or T2 = 120 minutes for approach/ETA, retain both timestamps, mark the canonical phase as “suspect”, and set sourceConfidence = low. If AIS indicates alongside earlier than TOS by <30 minutes, prefer TOS but record AIS as supporting evidence. For missing higher-priority data, promote the next available source but record a promotionReason and expectedAccuracy (%) based on source type.
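The precedence and suspect-flag rules above can be sketched as one selection function (a sketch; the source-type labels and the returned dict shape are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Highest to lowest precedence, mirroring the ordered list above.
PRECEDENCE = ["port_authority", "tos", "gate_scan", "carrier_msg", "ais"]
T1 = timedelta(minutes=30)  # berth/alongside suspect threshold

def canonical_berth_time(candidates: dict) -> dict:
    """Pick the canonical berth timestamp from {sourceType: datetime}."""
    source = next(s for s in PRECEDENCE if s in candidates)  # best available
    ts = candidates[source]
    # Lower-priority evidence preceding the canonical time by > T1 -> suspect.
    suspect = any(
        ts - other_ts > T1
        for s, other_ts in candidates.items()
        if PRECEDENCE.index(s) > PRECEDENCE.index(source)
    )
    return {
        "timestamp": ts,
        "source": source,
        "suspect": suspect,
        "promoted": source != PRECEDENCE[0],  # caller records promotionReason
    }
```

When the port-authority feed is missing, the function promotes the next source and flags it, so the promotionReason and expectedAccuracy bookkeeping described above can happen in the caller.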

Implement clock and timezone controls: require all sources to submit timestamps in UTC. For known clock skew, apply source-specific offsets calculated from historical comparisons (store last-offset and stdev). Reject timestamps older than 7 days for arrival phases unless accompanied by a signed port authority record. Apply a maximum correction of 60 minutes automatically; larger corrections require manual review.

Recommended canonical event schema (example fields): eventType, canonicalTimestampUTC, sourceType, sourceId, sourceConfidence (0-1), rawTimestamps[], berthId, portUNLocode, vesselIMO, containerCount, affectedContainers[], phaseDurationMinutes. Use these fields to produce KPI outcomes such as reduction in ETA variance, earlier customer notifications, and measurable move-time reduction per container.

Operational guidance for adopters and alliances: integrate these rules into carrier and terminal APIs to enable consistent messages through DCSA-aligned endpoints. Share promoted rules with shippers and customers so systems can expand automated workflows and optimise berth planning. Track five adoption metrics: percent phases with authoritative source, average promotion frequency, percent suspect flags resolved within 24 hours, mean deviation between canonical and carrier ETA, and reduction in early/late notifications. Digitising these mappings will increase visibility across industries, help optimise container moves, and move the industry beyond fragmented timestamps so carriers, terminals and shippers receive earlier, more reliable shipment information alongside container-level receipts and outcomes.

Validate carrier and container identifiers before ingestion: check digits, code lists and rejection handling

Validate container numbers and carrier codes at the API gateway using ISO 6346 check-digit logic and authoritative code lists before any downstream ingestion.

  • Exact container format and check-digit verification

    • Accept only 11-character ISO 6346 entries: owner code (3 letters) + equipment category (1 letter: U, J or Z) + serial (6 digits) + check digit (1 digit).
    • Compute the check digit: map letters to numeric values (A=10, B=12, C=13 … Z=38 with gaps at 11, 22, 33), multiply each character value by 2^position (position 0 for first character), sum, take sum mod 11; if remainder = 10 set check digit = 0. Reject immediately on mismatch.
    • Reject entries with incorrect length, non-ASCII characters, non-letter characters in the owner code, or serials outside 000001–999999; log the exact failure reason and source (API call ID or spreadsheet row).
  • Carrier code validation and code-list management

    • Match carrier identifiers against an authoritative registry (BIC for owners, SCAC for US ops or your agreed carrier list). Treat unknown codes as quarantined until resolved.
    • Implement real-time lookups when internet access is available; fall back to a signed, timestamped cached snapshot if offline. Allow cached validity for a configurable window (default 24–72 hours) and record the snapshot version in each event.
    • Apply mapping for alliances and vessel-operating partners: maintain a canonical carrier-to-operational-carrier table used by APMS and downstream systems; update mapping on every alliance change and capture who applied the override.
  • Pre-ingestion checks for spreadsheets and bulk loads

    • Provide a client-side validator (Excel macro or lightweight JS) that flags rows before upload and returns a CSV of rejected rows with error codes; this reduces manual corrections and speeds digitising of legacy spreadsheets.
    • For bulk ingest, run a fast pre-scan that segregates valid, soft-fail (unknown carrier), and hard-fail (check-digit mismatch) rows. Only ingest valid rows; return a structured rejection file that the submitter can re-upload after fixes.
  • Rejection handling policy and error taxonomy

    1. Use standardized error codes so integrations can act automatically:
      • ERR01 – FORMAT_INVALID (length/charset)
      • ERR02 – CHECKDIGIT_MISMATCH
      • ERR03 – CARRIER_UNKNOWN
      • ERR04 – MAPPING_OVERRIDE_REQUIRED
    2. Define automatic actions per code:
      • ERR02: hard reject, notify sender, block until corrected.
      • ERR03: soft quarantine, attempt lookup from alternate services for up to 24 hours, then escalate to carrier ops if still unresolved.
      • ERR04: accept only with signed override from authorized user; record audit trail.
  • Monitoring, KPIs and SLAs

    • Track rejection rate, mean time to remediate (MTTR) and fraction of quarantined records resolved within SLA. Target goals: rejection rate <0.1% of ingest volume and MTTR <4 hours for operational ports/arrivals.
    • Instrument dashboards that show trends and spike detection so operations spot changes in carrier lists or mass spreadsheet errors faster than manual review can.
  • Integration points and operational safeguards

    • Validate before sending data to customs, APMS, fuel and bunkering systems. Incorrect IDs cause billing mismatches (fuel), customs delays, and mis-routed arrivals.
    • When an external system (APMS, terminal system) sends updates, attach the validation snapshot version so receivers know which code list produced the decision.
    • Use secure internet or L-band satellite links where shore connectivity is unreliable; offline validation must still record the cached list ID until internet sync finishes.
  • Governance, updates and development practices

    • Automate nightly code-list pulls and allow manual forcing for immediate changes; log who performed manual updates and why. Notify downstream teams of any change that affects carrier mappings or specifications.
    • Include unit tests for check-digit logic in CI/CD and a repository of real-world test vectors (valid and invalid numbers) used during development and QA.
    • Maintain a public changelog so partners know when your validation rules or lists changed; link the changelog ID in event metadata for traceability.
  • Operational examples

    • Scenario: a spreadsheet upload from an agent currently sends 5,000 rows. Run a pre-scan: 4,990 valid, 8 ERR02, 2 ERR03. Return a rejection CSV with row IDs, error codes and suggested fixes; ingest the 4,990 immediately to meet arrival deadlines.
    • Scenario: a carrier alliance changes prefixes overnight. Apply mapping override, run reconciliation against captured arrivals for the last 48 hours, and roll a corrected feed to customs and eBOL systems to avoid downstream mismatches.
  • Practical checklist to implement today

    1. Deploy check-digit validation at the API edge.
    2. Integrate one authoritative code registry with real-time and cached modes.
    3. Create rejection CSV templates for spreadsheet users and automate pre-scan in the upload UI.
    4. Set KPI alerts for rejection spikes and retain audit trails for all manual list updates.
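The ISO 6346 check-digit rule described above can be sketched as a small gateway validator (a sketch; mapping error codes such as ERR01/ERR02 to responses is left to the gateway):

```python
import re
import string

# Build the ISO 6346 letter table: A=10 upward, skipping multiples of 11.
LETTER_VALUES = {}
_v = 10
for _c in string.ascii_uppercase:
    if _v % 11 == 0:
        _v += 1  # 11, 22 and 33 are skipped
    LETTER_VALUES[_c] = _v
    _v += 1

def iso6346_check_digit(first10: str) -> int:
    """Check digit for owner code + category + serial (10 characters)."""
    total = sum(
        (LETTER_VALUES[ch] if ch.isalpha() else int(ch)) * (2 ** i)
        for i, ch in enumerate(first10)
    )
    return (total % 11) % 10  # remainder 10 maps to check digit 0

def validate_container_number(unit: str) -> bool:
    if not re.fullmatch(r"[A-Z]{3}[UJZ]\d{7}", unit):
        return False  # would surface as ERR01 FORMAT_INVALID
    return iso6346_check_digit(unit[:10]) == int(unit[10])  # else ERR02
```

Building the letter table programmatically rather than hard-coding it keeps the gap rule (no multiples of 11) visible and testable in CI, as recommended above.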

Following these steps reduces false positives, speeds processing of arrivals, aligns data with customs and APMS flows, improves fuel/billing accuracy, and allows your teams to achieve consistent, auditable ingestion while still accommodating real operational changes and alliance-driven mappings.

Authenticate and connect to DCSA services: OAuth flows, API keys and sandbox-to-production steps

Use OAuth 2.0 client credentials for machine-to-machine integrations and Authorization Code with PKCE for any application that acts on behalf of a user; store client secrets in a secrets manager and restrict scopes to the minimum required.

Require TLS 1.2+ for all calls over the internet, validate certificates, and enforce TLS pinning for mobile apps. For public clients, implement PKCE and short-lived access tokens with refresh tokens rotated on first use; for confidential clients prefer mutual TLS if the provider supports it.

Register each application in the DCSA developer portal with a unique client_id and an environment-specific secret or key. Use sandbox credentials for development and testing, tag tests that touch ebls or customs flows, and document which API versions your integration calls so you can gain clarity on breaking changes.

When you receive sandbox tokens, automate token caching and renewal: request a new token at 90% of token lifetime, retry failed token exchanges with exponential backoff, and log token errors to a secure audit stream rather than printing secrets to console. A smart local proxy that injects tokens and enforces rate limits reduces development friction.
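The cache-and-renew-at-90% pattern can be sketched like this (a sketch: `fetch_token` is a hypothetical callable wrapping the client-credentials exchange; retry/backoff and audit logging are omitted for brevity):

```python
import time

class TokenCache:
    """Cache an OAuth access token and renew it at 90% of its lifetime.

    `fetch_token` performs the token exchange and returns
    (access_token, expires_in_seconds)."""

    def __init__(self, fetch_token, now=time.monotonic):
        self._fetch, self._now = fetch_token, now
        self._token, self._renew_at = None, 0.0

    def get(self) -> str:
        if self._token is None or self._now() >= self._renew_at:
            token, expires_in = self._fetch()
            self._token = token
            self._renew_at = self._now() + 0.9 * expires_in  # renew early
        return self._token
```

Injecting the clock (`now`) keeps the renewal logic unit-testable without real waits, which fits the CI/CD practices recommended elsewhere in this piece.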

Use API keys only for low-risk telemetry or partner identification where OAuth is not available; never use API keys as a substitute for user authentication. Treat API keys as secrets, rotate them quarterly, and block keys that show anomalous call patterns.

Follow this sandbox-to-production checklist: 1) complete contract and DCSA onboarding, 2) pass security and API conformance tests in sandbox, 3) provide a production support contact and SLA, 4) present audit logs and a manual rollback plan, 5) submit mTLS certificates if required, and 6) schedule a production cutover window with carriers and partners. If a carrier like Hapag-Lloyd or a customs gateway requires an October deployment window, coordinate early and confirm who sends final confirmation receipts.

Instrument every production call with correlation IDs, track request/response times, and capture HTTP status and business-level receipts for EDI-equivalent flows such as ebls and booking confirmations. Tag events by partner, shippers, carrier, and operational venue so downstream teams can filter by customs, environmental reporting, or fuel-tracking requirements.

Keep migration manual steps minimal: automate certificate upload, client registration, and scope approvals where possible, and maintain a checklist that shows which partners are connected and which still require manual onboarding. A single automated smoke test that queries a booking and validates a tracked receipt reduces cutover risk.

During development, emulate partner behavior: build a stub that sends realistic webhook payloads and validates your acknowledgements, including ebls acceptance and receipt messages. Run load tests against sandbox that simulate peak operational traffic from carriers, shippers, and customs gateways so you can tune retries and concurrency.

Monitor usage and business KPIs after production go-live: track token exchange rates, failed authentication calls, average call latency, and the number of receipts not delivered to partners. Use those metrics to prioritize fixes that directly affect shippers, carriers and industries relying on timely EDI/EBL updates and environmental or fuel reporting.