TL;DR: AI assistants invent plausible but wrong addresses at rates ranging from 6% for chain hotels to 38% for independent vacation rentals. The fix is not to correct the model. Publish one unambiguous ground truth using Schema.org Place markup, verified coordinates, and a canonical external identifier, then keep that truth consistent across every platform the business appears on.
Ask ChatGPT for the address of a three-star hotel in Porto and it will probably answer with a street name, a number, and a postal code. The answer will sound confident. For the major chains the answer will usually be correct. For the independent boutique property two streets away, the answer has a meaningful probability of being wrong.
This is not a rare edge case. It is a predictable output of how language models generate text, and it has direct consequences for anyone whose business depends on being found at a specific location.
The Mechanics of a Location Hallucination
A language model does not store a database of addresses. It stores a statistical distribution over tokens. When asked for an address, it predicts a sequence of tokens that looks like an address for that type of venue in that city.
If the training data contained the real address many times, consistently, and across authoritative sources, the prediction converges on the correct string. If the address appeared rarely, inconsistently, or not at all, the model interpolates. It picks a street that sounds right for the neighbourhood, a number that fits the block, a postal code that matches the local pattern.
The output is grammatically valid, geographically plausible, and often completely wrong.
Sample Audit: Hallucination Rates by Query Type
We ran 500 location queries through three leading AI assistants in April 2026. Each query asked for the address of a specific venue. Answers were compared against the venue's verified address on file with MapAtlas GeoEnrich.
The table below shows the share of responses containing at least one material address error (wrong street, wrong number, wrong postal code, or wrong city). Numbers are directional and specific to this sample.
| Query type | ChatGPT | Perplexity | Gemini |
|---|---|---|---|
| Chain hotel | 6% | 4% | 7% |
| Independent boutique hotel | 19% | 14% | 22% |
| Vacation rental | 38% | 29% | 41% |
| Independent restaurant | 24% | 18% | 27% |
| Landmark or attraction | 9% | 5% | 8% |
Source: MapAtlas sample audit, April 2026, n=500 queries.
Two patterns stand out. First, hallucination rate scales with how sparse and inconsistent the venue's web footprint is. Vacation rentals, which often exist on a single listing platform with no independent homepage, suffer most. Second, Perplexity hallucinated the least across every query type, likely because its retrieval layer grounds more answers in live sources rather than in parametric memory.
A Worked Example
A query issued in April 2026: "What is the address of Casa do Vale guesthouse in Porto?"
Hallucinated answer from a leading assistant:
Casa do Vale is located at Rua de Santa Catarina 142, 4000-442 Porto, Portugal.
Verified answer from the property's own records and MapAtlas Geocoding:
Casa do Vale, Rua do Vale 38, 4200-512 Porto, Portugal.
Wrong street, wrong postal code, wrong side of the city. The hallucinated answer puts the guest in a shopping district three kilometres from the actual guesthouse. The error is not random. Rua de Santa Catarina is the most famous commercial street in Porto and appears heavily in training data for Porto accommodation queries. The model defaulted to the strongest statistical prior for the city.
Why Structured Data Changes the Outcome
A listing page with a properly formed Schema.org Place or LodgingBusiness JSON-LD block gives the model something it can extract rather than invent.
```json
{
  "@context": "https://schema.org",
  "@type": "LodgingBusiness",
  "name": "Casa do Vale",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Rua do Vale 38",
    "postalCode": "4200-512",
    "addressLocality": "Porto",
    "addressCountry": "PT"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 41.1621,
    "longitude": -8.5937
  },
  "identifier": {
    "@type": "PropertyValue",
    "propertyID": "wikidata",
    "value": "Q00000000"
  }
}
```
Three features of this block matter for hallucination reduction:
- Structured fields. The model does not have to parse a sentence. Street, postal code, city, and country are separate keys.
- Coordinates that match the address. A crawler can verify that the latitude and longitude fall within the postal code polygon. Mismatches flag the data as low confidence.
- A stable external identifier. Wikidata or a Google Place ID links the listing to a canonical entity. The model can reconcile the address against an authoritative source rather than relying on training-data frequency.
When these three conditions hold, extraction replaces generation. The probability of a hallucinated answer drops sharply.
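The coordinate check is the easiest of the three to express in code. The sketch below is a minimal illustration of the idea, not a production validator: it parses the JSON-LD above and flags the listing when its coordinates fall more than a few kilometres from the centre of its postal-code district. The `POSTAL_CENTROIDS` values and the `geo_matches_postal_code` helper are assumptions for illustration; a real crawler would test the point against actual postal-code polygons rather than centroids.

```python
import json
import math

# Illustrative centroids for Porto postal-code prefixes (assumed values);
# in practice these would come from a postal-code polygon dataset.
POSTAL_CENTROIDS = {
    "4200": (41.168, -8.600),
    "4000": (41.149, -8.606),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_matches_postal_code(jsonld: str, max_km: float = 3.0) -> bool:
    """Flag listings whose published coordinates sit far from their postal-code area."""
    data = json.loads(jsonld)
    lat = data["geo"]["latitude"]
    lon = data["geo"]["longitude"]
    prefix = data["address"]["postalCode"].split("-")[0]
    centroid = POSTAL_CENTROIDS.get(prefix)
    if centroid is None:
        return False  # unknown postal code: treat as low confidence
    return haversine_km(lat, lon, *centroid) <= max_km
```

Run against the Casa do Vale block above, the check passes; swap in the hallucinated 4000-442 postal code and it fails, which is exactly the low-confidence signal a crawler would use.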
The NAP Consistency Layer
Schema on the listing page is necessary but not sufficient. AI systems cross-check the address against other public sources: Google Business Profile, OpenStreetMap, Yelp, Tripadvisor, booking platforms, and the open web. When these disagree, confidence falls and the model becomes more likely to hedge or generate.
This is why Name, Address, Phone (NAP) consistency across platforms is a stronger predictor of citation than any single signal. A listing with perfectly formed schema but a conflicting address on Google Business Profile will still perform poorly. See NAP consistency for AI search for the mechanics.
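A minimal sketch of what that cross-check looks like, assuming three hypothetical records for the same venue pulled from different platforms. The normalisation rules and the sample phone numbers are illustrative only; the point is that cosmetic differences such as spacing and formatting should be collapsed so that only material conflicts, like a wrong street, surface as disagreement.

```python
import re
from collections import Counter

def normalise_nap(name: str, address: str, phone: str) -> tuple:
    """Collapse cosmetic differences so only material conflicts remain."""
    clean = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    digits = re.sub(r"\D", "", phone)  # keep digits only
    return (clean(name), clean(address), digits)

# Hypothetical records for the same venue as published on different platforms.
records = {
    "own_site":  ("Casa do Vale", "Rua do Vale 38, 4200-512 Porto", "+351 220 000 000"),
    "gbp":       ("Casa do Vale", "Rua do Vale 38, 4200-512 Porto", "+351220000000"),
    "directory": ("Casa do Vale", "Rua de Santa Catarina 142, 4000-442 Porto", "+351 220 000 000"),
}

normalised = {src: normalise_nap(*rec) for src, rec in records.items()}
majority, _ = Counter(normalised.values()).most_common(1)[0]

for src, nap in normalised.items():
    status = "consistent" if nap == majority else "CONFLICT"
    print(f"{src:10s} {status}")
```

Here the first two records differ only in phone formatting and agree after normalisation; the directory record carries the hallucination-feeding conflict that needs fixing at the source.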
What Tends to Fix Hallucination Risk
Four measures move the needle most in audits we have run:
1. Publish verified coordinates alongside the address. A written address is a string. Coordinates are a verifiable fact. MapAtlas Geocoding converts raw addresses into precise latitude and longitude at scale and flags inputs that do not resolve cleanly. A minimal sketch of this resolve-and-flag step follows this list.
2. Wrap location facts in JSON-LD. The Place, LodgingBusiness, Hotel, Restaurant, and LocalBusiness types all accept address, geo, and identifier fields. Missing fields are where the model starts guessing.
3. Reconcile to a canonical identifier. Link the listing to a Wikidata QID or a Google Place ID. This gives AI systems a primary key to deduplicate against.
4. Enrich with nearby context. Hallucinations are not limited to the address field. Models also invent nearby landmarks, transit stops, and walk times. Verified proximity data, generated by MapAtlas GeoEnrich, anchors these claims too. Location-specific FAQs are an effective surface for exposing this data.
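For step 1, the sketch below shows one way to resolve raw addresses and flag the ones that do not geocode cleanly. It uses the public OpenStreetMap Nominatim endpoint as a stand-in for whatever geocoder a pipeline actually relies on; the `geocode_or_flag` function and its single-match rule are assumptions for illustration, not a description of MapAtlas Geocoding.

```python
import requests

def geocode_or_flag(address: str) -> dict:
    """Resolve a raw address to coordinates; flag anything that does not
    resolve to exactly one candidate."""
    resp = requests.get(
        "https://nominatim.openstreetmap.org/search",
        params={"q": address, "format": "json", "limit": 2},
        headers={"User-Agent": "listing-audit-sketch"},  # Nominatim requires a UA
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()
    if len(results) != 1:
        # zero hits or an ambiguous address: review before publishing
        return {"address": address, "status": "flagged", "candidates": len(results)}
    return {
        "address": address,
        "status": "resolved",
        "lat": float(results[0]["lat"]),
        "lon": float(results[0]["lon"]),
    }

print(geocode_or_flag("Rua do Vale 38, 4200-512 Porto, Portugal"))
```

Addresses that come back flagged are exactly the ones an AI assistant is most likely to invent around, so they are the ones to fix first.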
The Business Cost of a Hallucinated Address
A wrong address surfaced by an AI assistant does not just embarrass the model. It sends a real guest to the wrong place. The downstream effects compound:
- A cancelled booking, or worse, a no-show.
- A negative review that mentions the wrong location, which then becomes training data for the next model generation.
- Reduced citation confidence for the listing going forward, because the public web now contains contradictory signals.
The asymmetry is important. A hallucinated address hurts the listing even when the listing itself is innocent. The fix is not to correct the model directly, which is not possible, but to make the ground truth unambiguous enough that the model has no reason to generate in the first place.
How to Check Your Own Exposure
The free MapAtlas AEO Checker evaluates a listing against 29 structured signals, including address schema, coordinate presence, NAP consistency, and external identifiers. Listings that pass these checks are materially less likely to be misrepresented in AI answers. Listings that fail are the ones where the model has to guess.
Location hallucinations are not a quirk of any one assistant. They are a predictable consequence of training on an open web where the same business appears with slightly different addresses across dozens of sources. The fix is to publish one ground truth in a format AI systems can extract, and to make that ground truth consistent everywhere else the business is represented.
Related reading:
- Location-specific FAQs for AI search
- SEO was keyword to keyword, now it is database to database
- NAP consistency for AI search
- Check your AI visibility score for free
Frequently Asked Questions
What is an AI address hallucination?
An AI address hallucination is when a large language model returns a specific street address, postal code, or coordinate that looks plausible but does not correspond to the real location of the business, landmark, or property being described. It is not a minor rounding error. The model has synthesised an address that does not exist, belongs to a different venue, or combines a real street with the wrong city. For listings this is particularly damaging because the user may travel to the wrong location before realising the answer was fabricated.
Why do AI assistants hallucinate addresses?
Language models generate text by predicting the most likely next token, not by looking up facts. When an address is underrepresented, inconsistent across the web, or blocked from crawling, the model fills the gap with a statistically plausible string: a street name that sounds right for the city, a postal code pattern that matches the region, a number that feels typical. Without a structured ground-truth source to anchor the answer, the model has no mechanism to distinguish a memorised fact from a generated one.
How often do location hallucinations happen in practice?
In a MapAtlas sample audit conducted in April 2026 across 500 location queries spanning hotels, vacation rentals, restaurants, and landmarks, address-level hallucination rates ranged from roughly 6% for well-known chain hotels to 38% for independent vacation rentals. Chain hotel and landmark queries performed best; long-tail listing queries such as vacation rentals performed worst. The rate is directional and varies by model, language, and freshness of the underlying data, but the pattern is consistent: the less structured data a venue exposes, the more the model invents.
Does Schema.org structured data reduce hallucinations?
Yes, when the data is verified and consistent across sources. Publishing a Place or LodgingBusiness JSON-LD block with accurate geo coordinates, a validated postal address, and cross-references to authoritative identifiers such as Wikidata or Google Place ID gives the model a ground-truth anchor it can extract and cite. Inconsistent schema, for example coordinates that disagree with the written address, tends to lower confidence rather than raise it.
How do I audit my listings for hallucination risk?
Run the listing URL through the free MapAtlas AEO Checker at mapatlas.eu/ai-seo-checker. The checker evaluates 29 structured signals AI systems use to anchor location facts, including geo coordinates, Place schema, NAP consistency across platforms, and the presence of nearby-context fields. Pages missing these signals score high on hallucination risk because the model has to guess instead of extract.

