The most recent release of the European Retail Barometer by our partner Solita features artificial intelligence as its main subject.
With AI having already established itself as a vital mechanism for optimization, personalization, and other improvements, we engaged Future Mind and Solita experts to discuss the difficulties involved in deploying AI within retail organizations.
What follows are their insights on the obstacles to AI adoption in the retail sector, along with the strategies they use to overcome them.
In Solita’s Barometer, 37.5% of retailers cite data quality, integration, and regulatory complexity as key barriers, and nearly 30% struggle to estimate ROI. These challenges are structural because they reflect fragmented processes, unclear ownership, and legacy architectures, not limits in AI technology itself. AI exposes weaknesses that already exist. That is precisely why execution should not be delayed.
Start with focused use cases anchored in clear KPIs, define ownership, learn from the gaps you uncover, and scale deliberately. Data quality and maturity improve through structured action, not waiting for perfect conditions.
When implementing AI-based solutions, I rarely encounter technological barriers; far more often, the barriers are organizational. The main challenges that block the success of client projects are concentrated in three areas:
Among the barriers we've encountered when working with clients on AI-based solutions, one theme stands out: AI isn't just about learning something new; it's about unlearning old ways of working and relearning how to create value with AI.
Fragmented and biased data, legacy systems that slow integration, and the complexity of scaling responsibly under new regulations, especially in Europe, all create obstacles.
Cultural hurdles matter as well: building AI literacy, overcoming resistance to change, and ensuring teams embrace continuous learning. Addressing these challenges demands a focus that goes beyond technology alone.
Successfully implementing AI-based solutions requires governance, lifecycle management, and cross-organization collaboration to ensure you're focusing on where AI can really drive value for the long term.
Two groups of challenges are most common: data and integrations. AI systems are only as effective as the quality of the information they operate on. Incomplete, fragmented, or inconsistent data can significantly reduce the value of an implementation. The second area is security and proper embedding of AI within the infrastructure – correct permission management, access control, and oversight of which systems and processes the assistant is allowed to interact with. It is also important to ensure transparency and auditability of outputs, so the organization can realistically assess how the model is performing. Overcoming these barriers usually determines whether AI becomes a real value-creating support for the business, or merely an interesting experiment.
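The permission-management and auditability requirements described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's API: it assumes an assistant whose access to systems is governed by an allow-list, with every authorization decision written to an audit trail so the organization can review what the assistant attempted.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of AI assistant access control: an allow-list of
# systems the assistant may touch, plus an audit trail of every attempt.
# System names and the policy shape are illustrative assumptions.

@dataclass
class AssistantPolicy:
    allowed_systems: set[str]
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, system: str, action: str) -> bool:
        permitted = system in self.allowed_systems
        # Every decision is recorded, permitted or not, so the
        # assistant's behavior stays transparent and auditable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "action": action,
            "permitted": permitted,
        })
        return permitted

policy = AssistantPolicy(allowed_systems={"product_catalog", "faq_search"})
assert policy.authorize("product_catalog", "read") is True
assert policy.authorize("order_database", "write") is False
```

In practice this logic lives in middleware between the model and downstream systems; the point is that the allow-list and the audit log are defined by the organization, not by the model.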
In driving our internal AI transformation, I've been increasingly focused on the constant evaluation and judgment calls required to determine where and how to apply different types of AI technologies effectively. Geopolitics is a fairly new and significant driver, making sovereignty a key consideration in the process. How do we navigate and maneuver between American AI service providers offering the latest and greatest features and the need to be self-sufficient and keep our customers’ and our own data safe and secure?
The rapid advancement in LLMs designed for agentic code generation has also made cost management very visible. If one developer can burn through hundreds of euros in tokens in one day, the value of their work must be properly evaluated and understood.
Closely related, I’ve recently spent a significant amount of time studying AI service providers’ terms and conditions. Contrasting my findings with the public discourse on agentic coding around the world makes me question how much enterprise-related agentic coding happens with consumer-grade tools, and how sustainable that can be in the long run.
The biggest barrier I encounter is not resistance to change, lack of ambition, or even technology maturity. It is data quality. AI solutions are entirely dependent on reliable, structured and consistent data. Yet in many retail organizations, core operational data—especially around inventory—is inconsistent, delayed, or simply incorrect.
In my dialogues with companies in the grocery segment, particularly around store operations, this challenge becomes very tangible. Grocery is high-frequency, low-margin, operationally complex, and extremely sensitive to inventory accuracy. There are many sources of error: mis-scans, theft, manual overrides, and other operational deviations. Each may seem small individually, but cumulatively they add up, making reliable AI predictions very difficult.
Typical challenges emerge when retailers want to move directly into AI-driven forecasting, replenishment optimization, dynamic pricing, or personalized recommendations. But even if inventory accuracy is 97-98%, on a turnover of billions the remaining 2-3% of errors is still significant: the AI is being trained on noise. You cannot automate what you cannot trust. AI amplifies the quality of your data, both good and bad.
Interestingly, in forecasting discussions today, the question is no longer only about improving forecast accuracy. It is increasingly about understanding the confidence level in the forecast. Decision-makers want to know: How much can we rely on this prediction? Sometimes, this confidence is less about exact precision and more about ensuring the store does not run out of stock—acting as a kind of operational insurance. Confidence intervals, explainability, and understanding these operational contingencies are becoming as important as the forecast number itself. Without trusted operational data, both accuracy and confidence suffer.
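The "operational insurance" idea above can be made concrete. The sketch below is one common way to act on forecast confidence rather than the point forecast alone: instead of ordering the expected demand, order up to a quantile of the demand distribution so the stockout probability stays below a target. The normality assumption and the example numbers are mine, not from the experts quoted here; the forecast mean and standard deviation would come from whatever forecasting model is in use.

```python
from statistics import NormalDist

def order_up_to(mean_demand: float, std_demand: float,
                service_level: float) -> float:
    """Order quantity covering demand with P(no stockout) >= service_level,
    assuming demand is roughly normal around the forecast."""
    # z is the standard-normal quantile for the chosen service level.
    z = NormalDist().inv_cdf(service_level)
    return mean_demand + z * std_demand

# Assumed example: forecast of 120 units with std. dev. of 15.
point_forecast = order_up_to(120, 15, 0.50)  # the median equals the forecast
insured_order = order_up_to(120, 15, 0.95)   # buffer for a 95% service level
```

The gap between the two numbers is the safety stock, and it grows with forecast uncertainty, which is exactly why decision-makers increasingly ask about confidence, not just accuracy.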
In summary, while AI offers immense potential for retailers, the consensus among experts is that successful adoption hinges on overcoming foundational structural and organizational hurdles.
Technical barriers are often secondary to issues such as fragmented and low-quality data, undefined objectives, and the need to unlearn legacy workflows.
By prioritizing robust data governance, aligning cross-functional teams on clear KPIs, and managing cost and security risks, retailers can transform AI from an experimental initiative into a trusted engine of value generation.