A high definition map is the kind of map a self-driving car reads. It is not the map you open on your phone. It is a centimetre-accurate, machine-readable description of every lane line, traffic sign, stop bar, and road edge along a route, packaged so that an autonomy stack can match it to live sensor data and know exactly where the vehicle sits within its lane.
This guide explains what an HD map actually contains, how it is produced and updated, where it fits in an autonomous driving stack, and the open questions that still divide the industry.
What an HD Map Actually Contains
An HD map encodes three layers of road information that a consumer street map does not.
Geometric layer: the precise 3D shape of the road surface, lane centrelines, lane boundaries, kerbs, and road edges, with horizontal accuracy of 10-20 cm. Each lane is a polyline rather than a single road centreline, and lane width is captured continuously rather than averaged.
Semantic layer: machine-readable attributes attached to the geometry. Speed limit, direction of travel, lane type (regular, bus, bike, HOV), turn restrictions, lane connectivity at intersections, stop-line positions, pedestrian crossing zones. This is what lets the autonomy stack reason about legal manoeuvres without inferring them from raw vision.
Landmark layer: 3D positions of traffic signs, signals, and other persistent features the vehicle's perception stack can match against in real time. This is the layer that powers map-aided localisation.
A typical HD map for a metropolitan area is hundreds of gigabytes uncompressed, far larger than a consumer street map of the same region.
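The three layers above can be pictured as a data structure. The sketch below is a deliberately simplified, hypothetical schema for illustration only; real formats such as NDS.Live, Lanelet2, or OpenDRIVE are far richer, and every field name here is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Landmark:
    kind: str        # e.g. "stop_sign", "traffic_light" (landmark layer)
    position: tuple  # (x, y, z) in metres, map frame

@dataclass
class Lane:
    lane_id: str
    centreline: list            # geometric layer: [(x, y, z), ...] polyline
    left_boundary: list         # polyline for the left lane line
    right_boundary: list        # polyline for the right lane line
    speed_limit_kph: float      # semantic layer: legal speed
    lane_type: str              # "regular", "bus", "bike", "hov"
    successor_ids: list = field(default_factory=list)  # lane connectivity

@dataclass
class MapTile:
    tile_id: str
    lanes: dict      # lane_id -> Lane
    landmarks: list  # landmark layer, used for map-aided localisation
```

Note that each lane carries its own polyline and attributes, rather than hanging off a single road centreline; that is the structural difference from a consumer map.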
Why Self-Driving Cars Need a Map at All
Cameras, lidar, and radar can perceive what is around the vehicle right now. They cannot perceive what is around the next corner. An HD map functions as a long-range prior: the vehicle knows the road geometry, the upcoming sign positions, and the lane topology far beyond sensor range, and uses that prior to plan smoother, safer, more decisive manoeuvres.
The map also handles edge cases sensors fail at. A lane line covered in snow, an obscured stop sign, a sun-glare moment, a faded crosswalk: with the map as a backup, the vehicle still knows where the lane line should be and where the stop bar should be. Without the map, every one of those moments becomes a degraded driving event.
Finally, the map encodes the rules. Whether a left turn is allowed, whether a lane is bus-only at this hour, whether the speed limit just dropped: these are semantic facts that vision can sometimes read but a curated map can guarantee.
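As a toy illustration of the long-range prior, here is a sketch that walks lane connectivity to answer "what speed limit applies a given distance ahead", well beyond sensor range. The data layout and names are assumptions for the example, not any vendor's schema.

```python
# Each lane record is (length_m, speed_limit_kph, successor_lane_id).
def speed_limit_ahead(lanes, current_lane_id, distance_m):
    lane_id, remaining = current_lane_id, distance_m
    while lane_id is not None:
        length, limit, successor = lanes[lane_id]
        if remaining <= length:
            return limit          # the limit in force at that point
        remaining -= length       # carry on into the successor lane
        lane_id = successor
    return None                   # ran off the mapped network

lanes = {
    "a": (200.0, 50.0, "b"),  # 200 m of 50 km/h, then lane b
    "b": (150.0, 30.0, None), # 150 m of 30 km/h
}
```

A planner can use an answer like this to begin decelerating before the lower limit is even visible.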
How HD Maps Are Built
Three production pipelines dominate.
Survey-grade lidar fleets. Companies like TomTom, HERE, and dedicated mapping arms inside automotive OEMs operate fleets of survey vehicles equipped with high-end lidar, multi-camera rigs, and survey-grade GNSS-INS. Each vehicle records dense point clouds and imagery as it drives. Backend pipelines stitch the data, extract lane lines and signs, and produce the HD map. This is the highest accuracy approach. It is also the most expensive and the slowest to refresh.
Crowdsourced from production fleets. Mobileye's Roadbook and Tesla's data engine pull sensor signatures from millions of customer vehicles. Each vehicle uploads compact features (sign detections, lane line samples) rather than raw video. The backend aggregates across vehicles, filters noise, and updates the map continuously. The cost per kilometre is far lower than survey-grade. The accuracy is close enough for most ADAS and L2+ use cases and approaching what L4 needs.
Hybrid. A survey-grade baseline is built once and then a crowdsourced delta layer is applied for changes. Most modern providers run some flavour of this. The survey gives a clean foundation; the crowdsource gives freshness.
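The crowdsourced pipeline hinges on robust aggregation: many vehicles report noisy measurements of the same feature, and the backend reduces them to one estimate while rejecting outliers. A minimal sketch, assuming each vehicle reports a lateral lane-line offset at the same road position (the threshold and structure are illustrative):

```python
import statistics

def aggregate_observations(samples, min_reports=5):
    """samples: lateral offsets in metres reported for one map point."""
    if len(samples) < min_reports:
        return None  # not enough independent evidence to touch the map
    # median is robust to the occasional bad detection
    return statistics.median(samples)

reports = [1.82, 1.79, 1.80, 2.40, 1.81, 1.78, 1.80]  # one outlier
```

With the median, the single 2.40 m outlier has no effect on the aggregated value, which is the basic reason per-vehicle noise washes out across a large fleet.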
Localisation: Matching the Map to Reality
A car with an HD map still needs to know where on the map it currently sits. GNSS gives roughly 5-10 m accuracy in clear sky and worse in cities. That is not good enough for lane-level autonomy.
The vehicle solves this with map-aided localisation. The perception stack detects landmarks (signs, lane lines, lamp posts) in real time and matches them against the HD map's landmark layer. With enough matches, the vehicle's pose is known to a few centimetres, the same accuracy as the map itself. The math is essentially a tightly coupled fusion of GNSS, IMU, wheel odometry, and visual or lidar landmark associations.
This is also where map matching becomes part of the autonomy stack. The classic map-matching problem (snapping noisy GNSS to road geometry) generalises to snapping noisy multi-sensor pose estimates to a centimetre-accurate map.
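A bare-bones sketch of the landmark-matching step described above: associate each detected landmark with its nearest neighbour in the map's landmark layer, then correct the rough GNSS pose by the mean residual. A production stack runs a full tightly coupled GNSS/IMU/odometry filter rather than this; every name here is illustrative.

```python
def correct_pose(gnss_pose, detections, map_landmarks):
    """gnss_pose: (x, y) rough fix. detections: detected landmark
    positions already projected into the map frame via gnss_pose.
    map_landmarks: (x, y) positions from the HD map's landmark layer."""
    dx_sum = dy_sum = 0.0
    for det in detections:
        # nearest-neighbour association against the landmark layer
        nearest = min(map_landmarks,
                      key=lambda lm: (lm[0] - det[0]) ** 2 + (lm[1] - det[1]) ** 2)
        dx_sum += nearest[0] - det[0]
        dy_sum += nearest[1] - det[1]
    n = len(detections)
    # shift the pose so the detections line up with the map
    return (gnss_pose[0] + dx_sum / n, gnss_pose[1] + dy_sum / n)
```

The more landmarks match, the better the residual averages out, which is why the landmark layer's density matters as much as its accuracy.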
Keeping the Map Fresh
Roads change. A new lane line appears, a sign moves, a construction zone closes a turn. An HD map that reflects last quarter's reality will mislead the autonomy stack today, sometimes dangerously.
The freshness problem is one of the hardest in the field. Three approaches are in production use.
Periodic survey. Quarterly, monthly, or weekly survey-vehicle passes. Reliable but slow and expensive.
Anomaly detection from the fleet. Production vehicles compare what they see to what the map says. Disagreements trigger a flag. Over enough vehicles, the flags converge on real changes and the map is updated.
Real-time tile delivery. The vehicle holds a local HD map cache and pulls only the tiles it is about to enter. Changes propagate to the cloud, then to vehicles, in minutes rather than weeks.
The state of the art is fleets with both survey and crowdsourced inputs and tile-based delivery, with map updates rolling out to vehicles continuously rather than as bulk releases.
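The fleet anomaly-detection pattern can be sketched in a few lines: each vehicle that disagrees with the map files a flag, and only when enough distinct vehicles flag the same element does the backend treat it as a real-world change. The element keys and threshold below are assumptions for the example.

```python
from collections import defaultdict

class ChangeDetector:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.flags = defaultdict(set)  # element_id -> set of vehicle ids

    def report(self, element_id, vehicle_id):
        """One vehicle disagrees with the map about one element.
        Returns True once enough independent vehicles agree."""
        self.flags[element_id].add(vehicle_id)
        return len(self.flags[element_id]) >= self.threshold
```

Using a set of vehicle ids means one vehicle driving the same road ten times still counts as a single witness, so a faulty sensor on one car cannot trigger a map update on its own.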
The Vision-First Counter-Argument
Tesla's official position is that HD maps are a crutch. The argument: a sufficiently capable perception system should be able to read the road as well as a human, and any map will eventually go stale. Tesla relies on its in-vehicle vision stack and inferred lane geometry, with no centimetre-accurate map prior.
The counter-argument from the rest of the industry is that an HD map is a safety prior, not a substitute. It does not replace vision; it backs vision up. When a lane line is obscured or a stop sign is missing, the map fills the gap. When a sign reads "Speed Limit 30 except 6-9am", the map encodes the rule unambiguously. The defence-in-depth view is that an autonomy stack with a fresh HD map plus strong perception is safer than perception alone, even when perception is excellent.
The disagreement is genuine and unresolved. Most of the industry is converging on hybrid approaches: lighter map dependency than first-generation L4 stacks, but not the fully mapless approach Tesla advocates.
Standards and Formats
There is no single dominant HD-map format. The landscape splits into a few competing standards.
OpenDRIVE / OpenSCENARIO (ASAM): OpenDRIVE describes static road networks and OpenSCENARIO describes dynamic driving scenarios. Both originated in automotive simulation; OpenDRIVE remains widely used there and increasingly appears in production map exchanges.
NDS / NDS.Live (Navigation Data Standard): an automotive-industry consortium format, with NDS.Live designed for tile-based delivery to production vehicles.
Lanelet2 (open source, from KIT): used by Autoware and a growing number of academic stacks.
Proprietary: HERE, TomTom, and Mobileye each maintain internal formats with format-specific tooling. Customers consume them via SDK rather than raw files.
A production autonomy stack often holds the map in one canonical internal representation and ingests from whichever vendor formats it has licensed.
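That ingestion step is essentially an adapter layer: one small parser per licensed vendor format, all emitting the same canonical record so the rest of the stack never sees vendor-specific schemas. The field names on both sides of this sketch are invented for illustration.

```python
# Hypothetical vendor record shapes -> one canonical lane record.
def from_vendor_a(rec):
    return {"lane_id": rec["id"],
            "speed_limit_kph": rec["spd"],
            "centreline": rec["geom"]}

def from_vendor_b(rec):
    return {"lane_id": rec["laneId"],
            "speed_limit_kph": rec["maxSpeedKmh"],
            "centreline": rec["centerline"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def ingest(source, record):
    """Normalise a vendor record into the canonical internal form."""
    return ADAPTERS[source](record)
```

Swapping suppliers then means writing one new adapter, not reworking the planner or localiser.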
Where the Mapping Industry Is Going
Three trends are clear.
Crowdsourced freshness wins on cost and coverage for everything except the most safety-critical L4 deployments. Five years ago HD maps were exclusively survey-grade. Today most major vendors run hybrid pipelines.
Open formats are gaining ground. lanelet2, OpenDRIVE, and NDS.Live make it easier for AV developers to switch suppliers, build internal tools, and avoid lock-in. The first-generation closed proprietary HD-map model is under pressure.
The map is shrinking in scope. Modern autonomy stacks rely on the map for semantic information (rules, lane topology) and rough geometry, but lean on perception for fine-grained dynamic detail. The map handles what is stable; perception handles what is changing. The result is a smaller, lighter map that updates faster.
How MapAtlas Fits
MapAtlas does not build HD maps for L4 autonomous deployments. MapAtlas focuses on consumer-grade and B2B mapping, geocoding, and routing for products that need accurate addresses, isochrones, and route optimisation rather than centimetre-accurate lane geometry. For an L4 stack you want a dedicated HD-map vendor.
What MapAtlas does provide is the upstream and downstream of the autonomous-driving pipeline. The Map Matching API snaps noisy GNSS traces from connected fleets to road geometry, which is the same primitive that powers fleet management, ADAS analytics, and map-aided localisation at lower precision tiers. The Geocoding API and Search API deliver address-grade location data for fleet ops, customer pickup, and delivery routing. The Isochrone API drives travel-time analysis for mobility-as-a-service planning.
For a deeper look at how vehicle traces become clean routes, see What Is Map Matching. For the basics of how a coordinate becomes a place, see What Is a Geocode.
Frequently Asked Questions
What is a high definition map?
A high definition (HD) map is a centimetre-accurate, machine-readable map of the road network designed for autonomous vehicles and advanced driver-assistance systems. Unlike consumer street maps, HD maps encode lane geometry, lane connectivity, traffic signs, signals, stop lines, road markings, and 3D landmarks with positional accuracy of 10-20 cm. The vehicle uses the HD map as a prior, then matches it to live sensor data (camera, lidar, radar) to localise itself within its lane and anticipate the road ahead.
How are HD maps different from Google Maps or OpenStreetMap?
Consumer maps are designed for humans: they show streets, names, and points of interest at metre-level accuracy. HD maps are designed for machines: they encode geometric and semantic detail at centimetre accuracy, with lane-level topology, sign positions in 3D, and machine-readable rules (speed limits, lane restrictions, turn allowances) that an autonomy stack can consume directly. Google Maps and OpenStreetMap are not sufficient for level 4 autonomous driving on their own, but they are useful as base layers and as input to HD-map production pipelines.
How are HD maps kept up to date?
Three patterns dominate. Survey-based: dedicated lidar-equipped survey vehicles drive each road periodically and reprocess the HD map. Crowdsourced: production vehicles in the fleet upload sensor anomalies (a missing lane line, a new construction zone, a moved sign) which trigger map updates. Hybrid: a survey baseline is maintained quarterly and a crowdsourced delta layer captures change in between. Real-time delivery to the vehicle uses tile-based updates over LTE or 5G so only changed areas download, not the whole map.
Do all autonomous vehicles use HD maps?
Most do, but not all. Waymo, Cruise, Mobileye, Baidu Apollo, and most level 4 deployments rely heavily on HD maps. Tesla famously avoids HD maps in favour of a vision-only approach, arguing that maps go stale and that a sufficiently capable perception stack should not need them. The industry consensus is moving toward HD maps as a safety prior with vision and lidar handling the long tail, but there is genuine debate. The map question is one of the defining architectural choices in modern autonomy.

