Travelers relying on artificial intelligence (AI) tools like ChatGPT for itinerary planning are encountering significant issues, including being directed to non-existent locations or receiving unsafe advice. Experts warn that verifying all AI-generated travel information is crucial to avoid potential dangers and frustrations.
Key Takeaways
- AI travel tools can suggest non-existent places.
- Incorrect information can lead to dangerous situations, especially in remote areas.
- Roughly a third of users report false or insufficient information from AI travel tools.
- AI's method of generating responses can lead to 'hallucinations' or made-up facts.
- Verifying all AI-generated travel advice is essential for safety and accuracy.
AI's Misleading Travel Suggestions
The rise of AI in trip planning has brought both convenience and serious challenges. Miguel Angel Gongora Meza, director of Evolution Treks Peru, shared an incident where two tourists were preparing for a hike in the Andes. They showed him a screenshot from an AI tool detailing a trek to the 'Sacred Canyon of Humantay.'
Gongora Meza quickly identified the problem.
"There is no Sacred Canyon of Humantay!" he stated. "The name is a combination of two places that have no relation to the description."The tourists had paid nearly $160 to reach a rural road near Mollepata, without a guide or a real destination.
Fact Check
According to one survey, 30% of international travelers now use generative AI tools or dedicated travel AI sites for trip organization.
This misinformation posed a serious risk. Gongora Meza explained that such errors can be life-threatening in Peru's high-altitude regions. "The elevation, the climatic changes and accessibility [of the] paths have to be planned," he noted. An AI program that stitches images and names together into a fantasy can leave travelers at an altitude of 4,000 meters without oxygen or phone signal.
Real-World Travel Disasters Caused by AI
The problem is not unique to Peru. Dana Yao and her husband experienced a similar issue while planning a hike to Mount Misen on Japan's Itsukushima island. ChatGPT instructed them to hike to the summit for sunset and then use a ropeway for descent.
"That's when the problem showed up," said Yao, a travel blogger. "[When] we were ready to descend [the mountain via] the ropeway station. ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed. So, we were stuck at the mountain top."
Background on AI Travel Tools
AI tools like ChatGPT, Microsoft Copilot, Google Gemini, and specialized travel sites such as Wonderplan and Layla have become popular for generating travel ideas and itineraries. While they can offer helpful suggestions, their reliance on large datasets can sometimes lead to factual inaccuracies or invented information.
A 2024 BBC report highlighted further AI errors. Layla, a travel AI tool, briefly claimed that an Eiffel Tower existed in Beijing, and it proposed to one British traveler a marathon route across northern Italy that was wholly impractical. The traveler commented, "The itineraries didn't make a lot of logical sense. We'd have spent more time on transport than anything else."
Why AI Generates False Information
These issues stem from how AI models operate. Rayid Ghani, a distinguished professor in machine learning at Carnegie Mellon University, explained that AI programs like ChatGPT do not understand truth the way humans do. "It doesn't know the difference between travel advice, directions or recipes," Ghani said. "It just knows words. So, it keeps spitting out words that make whatever it's telling you sound realistic, and that's where a lot of the underlying issues come from."
User Experience Data
- 37% of AI travel users reported insufficient information.
- 33% said their AI-generated recommendations contained false information.
Large language models analyze vast amounts of text data to predict statistically probable word sequences. Sometimes this produces accurate information. Other times, it results in what experts call a "hallucination," where the AI simply invents facts or scenarios. Because AI presents both true and false information with the same level of confidence, users find it hard to distinguish between them.
The Problem of AI Hallucinations
In the case of the "Sacred Canyon of Humantay," Ghani believes the AI likely combined regional words that sounded appropriate. AI tools also lack a true understanding of the physical world. An AI might mistake a 4,000-meter walk through a city for a challenging 4,000-meter mountain climb, overlooking the crucial context of terrain and elevation.
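To make that failure mode concrete, here is a minimal sketch of the statistical word prediction described above, built on a toy bigram table. The vocabulary and weights are invented for illustration (real models learn billions of such statistics from text); the point is only to show how chaining individually plausible words can produce a place name that appears in no source document.

```python
import random

# Toy bigram table: each word maps to plausible next words with weights.
# All entries here are invented for illustration only.
bigrams = {
    "<start>":  [("Sacred", 0.5), ("Humantay", 0.5)],
    "Sacred":   [("Valley", 0.6), ("Canyon", 0.4)],
    "Canyon":   [("of", 1.0)],
    "Valley":   [("of", 1.0)],
    "of":       [("Humantay", 0.7), ("Colca", 0.3)],
    "Humantay": [("<end>", 1.0)],
    "Colca":    [("<end>", 1.0)],
}

def generate():
    """Sample one phrase by repeatedly picking a statistically likely next word."""
    word, out = "<start>", []
    while word != "<end>":
        words, weights = zip(*bigrams[word])
        word = random.choices(words, weights=weights)[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)

# "Sacred Valley" and "Humantay" both occur in real text about Peru, so
# "Sacred Canyon of Humantay" is a statistically plausible sequence even
# though no such place exists.
for _ in range(5):
    print(generate())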
Beyond chatbots, AI-generated content can also mislead visually. A Fast Company article shared an incident where a couple traveled to Malaysia to see a scenic cable car they saw on TikTok. They discovered no such structure existed; the video was entirely AI-generated. This shows how AI can create and spread fictional content for engagement or other purposes.
Blurring Lines Between Reality and AI
The use of AI is subtly changing how we perceive the world. In August, content creators noticed that YouTube had used AI to alter their videos without permission, editing the clothing, hair, and faces of real people. Netflix also faced criticism in early 2025 for AI efforts to "remaster" old sitcoms, which distorted the actors' faces.
These examples illustrate how AI can make small, unnoticed changes, blurring the lines between reality and an AI-generated world. For travelers, this means the information they receive might not accurately reflect the places they plan to visit.
"If you are there, how will you turn this [around]? You're already on a cool trip, you know?" - Javier Labourt
Javier Labourt, a licensed clinical psychotherapist, emphasizes the mental health benefits of travel, such as interacting with diverse cultures and fostering empathy. He worries that AI hallucinations, by feeding users misinformation, create a false narrative about a place even before travelers leave home. This can undermine the very benefits travel offers.
Protecting Against AI Misinformation
Efforts are underway to regulate AI, with proposals in the EU and US suggesting watermarks or other features to identify AI-generated content. However, Ghani notes that detection is an "uphill battle." He suggests that "mitigation is a more reliable solution today than prevention."
While regulations might help identify AI-generated images or videos, they may not solve issues with AI chatbots making up facts during conversations. Experts, including Google CEO Sundar Pichai, suggest that hallucinations might be an "inherent feature" of large language models. This means user vigilance remains the primary defense.
Ghani advises travelers to be as specific as possible in their AI queries and to verify absolutely everything. He acknowledges the difficulty when asking about unfamiliar destinations. If an AI suggestion seems too perfect, a double-check is necessary. Ultimately, the time spent verifying AI information can sometimes make the process as laborious as traditional trip planning.
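One practical way to act on that advice is to check every AI-suggested place name against an independent data source before booking anything. The sketch below is one hedged approach, not an established tool: it queries OpenStreetMap's public Nominatim geocoder (the endpoint and parameters are real; the script name in the User-Agent header is made up) and flags names that return no match.

```python
import json
import urllib.parse
import urllib.request

def place_exists(name: str) -> bool:
    """Return True if OpenStreetMap's Nominatim geocoder finds the place.

    A miss does not prove a place is fictional (spellings and coverage
    vary), but a hit is a useful independent signal before you book.
    """
    url = ("https://nominatim.openstreetmap.org/search?"
           + urllib.parse.urlencode({"q": name, "format": "json", "limit": 1}))
    # Nominatim's usage policy requires an identifying User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "trip-checker-demo/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return len(json.load(resp)) > 0

# One real trailhead town from the article, one AI invention.
for name in ["Mollepata, Peru", "Sacred Canyon of Humantay, Peru"]:
    status = "found" if place_exists(name) else "no match - verify by hand"
    print(f"{name}: {status}")
```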
Labourt encourages travelers to maintain an open mind and adaptability when plans go awry. "Try to shift the disappointment [away from] being cheated by someone," he recommended. The ability to adjust and find joy even when things don't go as planned is a key aspect of successful travel, with or without AI assistance.