There is a lot of noise today around Artificial Intelligence in manufacturing. Advanced analytics, Machine Learning, Deep Learning, Large Language Models, foundational models, AI agents… the list keeps growing, and so does the hype.
Yet, despite the apparent technological maturity, many manufacturers still struggle to scale AI beyond isolated pilots. A predictive maintenance proof of concept here, a quality model there, maybe a chatbot connected to a historian. Interesting initiatives, but rarely transformative at an enterprise level.
This article is not about inventing new AI algorithms. It is about what enables AI to scale and create sustained business impact across plants, lines, and sites.
And what we see today is that the main blockers are not the ones most people think.
When AI initiatives fail to scale, we often hear the same explanations: there is not enough AI talent, the models are not good enough, the technology is not mature yet.
In our experience, most of the time, none of these are the main problem.
AI skills are not that scarce anymore. They are growing fast. Universities are producing more graduates with solid foundations in data science, machine learning, and generative AI. It is a hot topic, and talent will continue to flow in that direction.
AI models are also not the issue. For most industrial use cases (anomaly detection, process forecasting and optimization, real-time quality monitoring, decision support…) we already have more than enough proven algorithms. From classical statistical models to deep learning and LLM-based systems, the toolbox is rich and mature.
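To ground that claim, even a few lines of classical statistics already cover a basic anomaly-detection need. The sketch below flags an outlier with a simple z-score; the readings are invented for illustration, and real deployments would of course use more robust methods.

```python
# The "mature toolbox" point in practice: a classical statistical model
# already covers many anomaly-detection needs. A minimal z-score
# detector, no ML framework required; the values are made-up examples.

values = [78.1, 78.3, 78.2, 78.4, 78.2, 91.7, 78.3]  # one obvious outlier

mean = sum(values) / len(values)
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

for i, v in enumerate(values):
    z = (v - mean) / std
    if abs(z) > 2:  # a common, simple threshold
        print(f"sample {i}: value {v} is anomalous (z = {z:.1f})")
```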
The solution is not “waiting for the next breakthrough model”. The real problems lie elsewhere.
Most manufacturing companies have invested massively in data generation and storage over the last decade. Historians, MES, LIMS, CMMS, ERP, data lakes, cloud platforms… data is everywhere.
But data availability does not necessarily mean data usability.
The core issue is that companies are still too focused on collecting data, and not enough on contextualizing, structuring, and governing it.
AI does not fail because there is not enough data. It fails because the data has no consistent meaning across systems, sites, and time.
This shows up in several recurring pain points: the same sensor named differently at every site, tags stored without units or asset context, KPIs defined differently from one plant to the next, and datasets that cannot be joined without manual reconciliation.
Yes, data quality matters; volume matters. But what matters even more is governance: Who owns each dataset? What does each tag and KPI actually mean? Which system is the source of truth? How are definitions allowed to change over time?
Without clear answers to these questions, every AI project starts by redoing the same work: understanding the data, cleaning it, mapping it, reconciling definitions. This does not scale.
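To make that repeated work concrete, here is a minimal Python sketch of the reconciliation every project ends up redoing: the same reactor temperature arriving under three site-specific conventions. All site names, tags, and units here are hypothetical; the point is that without a governed catalog, this mapping is rebuilt by hand for every use case.

```python
# The same measurement arrives under different names, units, and
# conventions at each site. All tag names here are hypothetical.

RAW_READINGS = [
    {"site": "plant_a", "tag": "TT-101.PV",      "value": 78.3},   # Celsius
    {"site": "plant_b", "tag": "Reactor1_TempF", "value": 172.9},  # Fahrenheit
    {"site": "plant_c", "tag": "AI_4012",        "value": 351.4},  # Kelvin
]

# Without governance, every team rebuilds a mapping like this by hand.
# With governance, it is defined once and owned by someone accountable.
TAG_CATALOG = {
    ("plant_a", "TT-101.PV"):      ("reactor_1.temperature", "celsius"),
    ("plant_b", "Reactor1_TempF"): ("reactor_1.temperature", "fahrenheit"),
    ("plant_c", "AI_4012"):        ("reactor_1.temperature", "kelvin"),
}

TO_CELSIUS = {
    "celsius":    lambda v: v,
    "fahrenheit": lambda v: (v - 32) * 5 / 9,
    "kelvin":     lambda v: v - 273.15,
}

def contextualize(reading: dict) -> dict:
    """Resolve a raw reading to one canonical name and unit."""
    name, unit = TAG_CATALOG[(reading["site"], reading["tag"])]
    return {"site": reading["site"], "measure": name,
            "value_c": round(TO_CELSIUS[unit](reading["value"]), 2)}

for r in RAW_READINGS:
    print(contextualize(r))
# Same physical quantity, one consistent meaning across all three sites.
```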

Many AI initiatives operate, often successfully, on local, implicit models: a few tags, a data table, a dataset prepared for one specific use case. What is often missing is a shared, enterprise-wide data model that accurately reflects how the business works. This includes assets and their hierarchy, production processes, products and materials, and the relationships between them.
When this enterprise model exists and is well governed, AI solutions become reusable and scalable by default. When it doesn't, every solution remains confined and never grows beyond a nice pilot.
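As a rough illustration of what such a shared model can look like, the sketch below encodes an ISA-95-style equipment hierarchy in plain Python. The structure and all names are illustrative assumptions; a real model would also cover products, materials, and process definitions.

```python
# A minimal sketch of a shared enterprise data model, loosely following
# the ISA-95 equipment hierarchy (enterprise > site > area > line > unit).

from dataclasses import dataclass, field

@dataclass
class Equipment:
    name: str
    tags: dict[str, str] = field(default_factory=dict)  # canonical tag -> unit

@dataclass
class Line:
    name: str
    equipment: list[Equipment] = field(default_factory=list)

@dataclass
class Area:
    name: str
    lines: list[Line] = field(default_factory=list)

@dataclass
class Site:
    name: str
    areas: list[Area] = field(default_factory=list)

@dataclass
class Enterprise:
    name: str
    sites: list[Site] = field(default_factory=list)

acme = Enterprise("acme", sites=[
    Site("plant_a", areas=[
        Area("packaging", lines=[
            Line("line_1", equipment=[
                Equipment("filler_1", tags={"speed": "bottles/min",
                                            "temperature": "celsius"}),
            ]),
        ]),
    ]),
])

# Because every site follows the same structure, an AI use case written
# against one line can be pointed at any other line without remapping.
```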
In manufacturing, AI initiatives rarely fail because the first use case does not work. They fail because the second, third, or tenth use case becomes too expensive, too slow, or too complex to deploy. If scalability is not designed in from day one, every new AI initiative accumulates technical debt, integration effort, and organizational friction.
Nowadays, many architectures are built to “just make the data available.” Scalability is addressed later, if ever.
This is where technology selection matters.
Traditional industrial connectivity gateways are excellent at connecting assets and protocols. They are very good at moving data from A to B. But they are not designed to be the backbone of an enterprise data model.
To scale AI, companies need infrastructure and software that expose data through a shared, governed namespace, decouple data producers from data consumers, and let every new application connect once instead of integrating with each system individually.
This is exactly why industrial IoT edge platforms and data brokers are gaining traction. They are not just pipes, they are enablers of scalable architectures, and therefore key AI enablers.
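As an illustration of what publishing into such a backbone can look like, here is a minimal sketch that pushes a contextualized reading into a broker-backed namespace. It assumes the paho-mqtt 2.x client library, a hypothetical broker address, and one common enterprise/site/area/line/equipment/measure topic convention; none of this is tied to a specific product.

```python
# A minimal sketch of publishing contextualized data into a broker-backed
# namespace instead of wiring another point-to-point link.
# Broker address and topic layout are illustrative assumptions.

import json
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.example.com", 1883)

topic = "acme/plant_a/packaging/line_1/filler_1/temperature"
payload = {"value": 78.3, "unit": "celsius",
           "timestamp": "2024-05-01T10:00:00Z"}

# retain=True means any consumer that connects later immediately receives
# the last known value: the broker, not each app, holds the current state.
client.publish(topic, json.dumps(payload), qos=1, retain=True)
client.disconnect()
```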
There is an uncomfortable truth in manufacturing digitalization: the more digitally mature a plant is, the harder it becomes to integrate new digital solutions.
This is because today, digital maturity often means many digital systems. And many digital systems usually mean many point-to-point integrations between them.
Every new promising AI application or solution then requires new connectors, new data mappings, and new point-to-point integrations with the systems already in place, each one adding cost, delay, and maintenance burden.
This is why lighthouse plants often struggle to move fast with new technologies, despite being highly digital.
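The arithmetic behind this friction is simple but worth making explicit. With point-to-point wiring, the number of integrations can grow with the square of the number of systems; with a shared backbone, it grows linearly:

```python
# Back-of-the-envelope arithmetic for why mature plants slow down.
# With point-to-point wiring, n systems can need up to n*(n-1)/2 links;
# with a shared broker/namespace, each system connects once: n links.

def point_to_point(n: int) -> int:
    return n * (n - 1) // 2

def hub_and_spoke(n: int) -> int:
    return n

for n in (5, 10, 20):
    print(f"{n} systems: up to {point_to_point(n)} integrations "
          f"point-to-point vs {hub_and_spoke(n)} via a shared namespace")
# 5 systems: up to 10 vs 5
# 10 systems: up to 45 vs 10
# 20 systems: up to 190 vs 20
```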
In the near future, we will see more and more AI products marketed as plug-and-play for manufacturing. Some of them will be powerful. Some will deliver real value.
But plug-and-play does not mean “connect to one system and you’re done”.
In a typical plant, relevant data is spread across multiple silos. Plug-and-play therefore often translates into a separate integration effort for each of them: the historian, the MES, the LIMS, the ERP.
And, as we’ve seen, the more digitalized the plant, the higher this integration cost.
This is exactly why a single source of truth and a shared data model are prerequisites, not something that is “nice to have”.
When AI solutions can connect once to a well-structured, governed data namespace, they truly become scalable. Not because AI models are smarter, but because the data foundation behind them is.
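To illustrate, here is the consumer side of the same hypothetical namespace used in the earlier publishing sketch: one connection and one wildcard subscription cover every site and line, so a new AI application integrates with the namespace rather than with each silo.

```python
# A minimal sketch of the "connect once" consumer side, using the same
# assumed paho-mqtt 2.x client and hypothetical topic layout as above.

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # The topic structure doubles as context:
    # acme/<site>/<area>/<line>/<equipment>/<measure>
    _, site, area, line, equipment, measure = msg.topic.split("/")
    reading = json.loads(msg.payload)
    print(f"{site}/{line}/{equipment} {measure} = "
          f"{reading['value']} {reading['unit']}")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("broker.example.com", 1883)
# One subscription, every site, area, line, and equipment:
client.subscribe("acme/+/+/+/+/temperature", qos=1)
client.loop_forever()
```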
AI amplifies what is already there.
If you have low-quality data, AI will generate low-quality models.
If your data landscape is fragmented, AI will amplify fragmentation.
If your data model is inconsistent, AI will amplify inconsistency.
If you have high-quality data and your architecture is scalable by design, AI will finally deliver on its promise.
The companies that will win with AI in manufacturing are not the ones chasing the latest model. They are the ones investing in context, structure, and governance today, so that every future AI capability can plug in once and scale everywhere.
That, in our opinion, is where the real business impact lies.