
Artificial Intelligence at Scale in Manufacturing: The Real Bottleneck is not AI


There is a lot of noise today around Artificial Intelligence in manufacturing. Advanced analytics, Machine Learning, Deep Learning, Large Language Models, foundational models, AI agents… the list keeps growing, and so does the hype.

Yet, despite the apparent technological maturity, many manufacturers still struggle to scale AI beyond isolated pilots. A predictive maintenance proof of concept here, a quality model there, maybe a chatbot connected to a historian. Interesting initiatives, but rarely transformative at an enterprise level.

This article is not about inventing new AI algorithms. It is about what enables AI to scale and create sustained business impact across plants, lines, and sites.

And what we see today is that the main blockers are not the ones most people think.

What is not preventing AI from scaling

When AI initiatives fail to scale, we often hear the same explanations:

  • “We lack data to train complex algorithms”
  • “The models are not mature enough”
  • “We don’t have enough data scientists”

In our experience, most of the time, none of these are the main problem.

AI skills are not that scarce anymore. They are growing fast. Universities are producing more graduates with solid foundations in data science, machine learning, and generative AI. It is a hot topic, and talent will continue to flow in that direction.

AI models are also not the issue. For most industrial use cases (anomaly detection, process forecasting and optimization, real-time quality monitoring, decision support…) we already have more than enough proven algorithms. From classical statistical models to deep learning and LLM-based systems, the toolbox is rich and mature.
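To make this concrete, a classical rolling z-score already covers a large share of anomaly-detection needs on sensor streams. A minimal sketch in Python, using only the standard library (the window size and threshold below are illustrative, not recommendations):

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(stream, window=20, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the rolling statistics of the last `window` points."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(i)
        history.append(value)
    return flagged

# A stable signal (indices 0-29) followed by one spike at index 30:
signal = [10.0, 10.1, 9.9] * 10 + [25.0] + [10.0] * 5
print(zscore_anomalies(signal))  # -> [30]
```

Nothing here is exotic; the point is that techniques of this kind have been proven for decades, and tuning them to a process is far cheaper than waiting for a new class of model.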

The solution is not “waiting for the next breakthrough model”. The real problems lie elsewhere.

The real challenge: Data without context does not scale

Most manufacturing companies have invested massively in data generation and storage over the last decade. Historians, MES, LIMS, CMMS, ERP, data lakes, cloud platforms… data is everywhere.

But data availability does not necessarily mean data usability.

The core issue is that companies are still too focused on collecting data, and not enough on contextualizing, structuring, and governing it.

AI does not fail because there is not enough data. It fails because the data has no consistent meaning across systems, sites, and time.

This shows up in several recurring pain points:

  1. Data quality and quantity are a must, but governance is what makes them scalable. 

Yes, data quality matters; volume matters. But what matters even more is governance:

  • Where is the single source of truth for the data?
  • Which digital system owns which process?
  • How do production events, states, and measurements relate to each other?
  • How do we ensure consistency of digital products across plants?

Without clear answers to these questions, every AI project starts by redoing the same work: understanding the data, cleaning it, mapping it, reconciling definitions. This does not scale.

  2. Lack of a full enterprise data model.

Many AI initiatives operate (and often succeed) on local, implicit models: a few tags, a data table, a dataset prepared for one specific use case. What is often missing is a shared, enterprise-wide data model that accurately reflects how the business works. This includes:

  • Event-driven architectures that consistently expose what happens, when it happens, and where it happens.
  • A clear operational hierarchy
  • OT ontologies and semantic models that encode relationships
  • Knowledge graphs that allow reasoning, not just querying

When this enterprise model exists and is well governed, AI solutions become reusable and scalable by default. When it doesn’t, every solution remains confined and never grows beyond a nice pilot.
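As a sketch of what such a shared model can look like, the example below encodes a minimal ISA-95-style operational hierarchy (the node names and levels are invented for illustration). Once every measurement or event is anchored to a node in one shared tree, “what happened where” has a single meaning across sites:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in a shared operational hierarchy
    (e.g. enterprise -> site -> area -> line)."""
    name: str
    level: str
    children: dict = field(default_factory=dict)

    def add(self, child: "Node") -> "Node":
        self.children[child.name] = child
        return child

    def path(self, *names: str) -> "Node":
        """Resolve a path such as ("site1", "line3") from this node."""
        node = self
        for n in names:
            node = node.children[n]
        return node

# Illustrative hierarchy:
enterprise = Node("acme", "enterprise")
site = enterprise.add(Node("site1", "site"))
line = site.add(Node("line3", "line"))

print(enterprise.path("site1", "line3").level)  # -> line
```

A real enterprise model is of course richer (equipment states, events, material flows, semantic relationships), but the principle is the same: one governed structure that every solution resolves against, instead of one implicit model per project.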

  3. Scalability must be a design principle, not an afterthought.

In manufacturing, AI initiatives rarely fail because the first use case does not work. They fail because the second, third, or tenth use case becomes too expensive, too slow, or too complex to deploy. If scalability is not designed in from day one, every new AI initiative accumulates technical debt, integration effort, and organizational friction.

Nowadays, many architectures are built to “just make the data available.” Scalability is addressed later, if ever.

This is where technology selection matters.

Traditional industrial connectivity gateways are excellent at connecting assets and protocols. They are very good at moving data from A to B. But they are not designed to be the backbone of an enterprise data model.

To scale AI, companies need infrastructure and software that:

  • Enforce structure and semantics
  • Integrate full enterprise data models
  • Support event-driven patterns
  • Decouple producers and consumers
  • Make data discoverable and reusable

This is exactly why industrial IoT edge platforms and data brokers are gaining traction. They are not just pipes, they are enablers of scalable architectures, and therefore key AI enablers.
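The decoupling such platforms provide can be illustrated with a minimal in-process publish/subscribe sketch. The site/area/line/metric topic convention is an assumption for the example, not a reference to any specific product; the key property is that producers and consumers never reference each other:

```python
from collections import defaultdict
from fnmatch import fnmatch

class Broker:
    """Toy broker: producers publish to topics, consumers subscribe
    to wildcard patterns; neither side knows about the other."""
    def __init__(self):
        self.subscriptions = defaultdict(list)  # pattern -> callbacks

    def subscribe(self, pattern, callback):
        self.subscriptions[pattern].append(callback)

    def publish(self, topic, payload):
        for pattern, callbacks in self.subscriptions.items():
            if fnmatch(topic, pattern):
                for cb in callbacks:
                    cb(topic, payload)

broker = Broker()
received = []
broker.subscribe("site1/*/line3/temperature",
                 lambda topic, payload: received.append(payload))

broker.publish("site1/packaging/line3/temperature", 71.5)  # matched
broker.publish("site1/packaging/line3/pressure", 2.1)      # ignored
print(received)  # -> [71.5]
```

A new consumer (an AI application, a dashboard) subscribes to the namespace once; no producer needs to change, which is exactly the property that point-to-point integrations lack.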

The paradox of digital maturity

There is an uncomfortable truth in manufacturing digitalization: the more digitally mature a plant is, the harder it becomes to integrate new digital solutions.

This is because today, digital maturity often means many digital systems. And many digital systems usually mean many point-to-point integrations between them.

Every new promising AI application or solution then requires:

  • Connections to multiple historians
  • Interfaces with MES, quality systems, maintenance systems
  • Breaking data silos
  • Custom logic to reconcile the data models associated with each of these systems

This is why lighthouse plants often struggle to move fast with new technologies, despite being highly digital.
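The arithmetic behind this paradox is simple: possible point-to-point integrations grow quadratically with the number of systems, while connections to a shared hub or broker grow linearly. A small illustration:

```python
# n systems, each potentially integrated with every other:
# n * (n - 1) / 2 point-to-point links.
def point_to_point(n: int) -> int:
    return n * (n - 1) // 2

# n systems, each connected once to a shared hub/broker: n links.
def hub_and_spoke(n: int) -> int:
    return n

for n in (5, 10, 20):
    print(n, point_to_point(n), hub_and_spoke(n))
# At 20 systems: 190 possible point-to-point links vs 20 broker connections.
```

Not every pair of systems needs a direct link in practice, but even a fraction of those 190 links is enough to make each new solution progressively harder to deploy.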

The myth of “plug-and-play” AI products

In the near future, we will see more and more AI products marketed as plug-and-play for manufacturing. Some of them will be powerful. Some will deliver real value.

But plug-and-play does not mean “connect to one system and you’re done”.

In a typical plant, relevant data is spread across multiple silos. Plug-and-play therefore often translates into:

  • Many point-to-point connections
  • Custom mappings
  • Hidden complexity pushed downstream

And, as we’ve seen, the more digitalized the plant, the higher this integration cost.

This is exactly why a single source of truth and a shared data model are prerequisites, not something that is “nice to have”.

When AI solutions can connect once to a well-structured, governed data namespace, they truly become scalable. Not because AI models are smarter, but because the data foundation behind them is.
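“Connect once” can be made concrete with a toy governed namespace: the application issues a single pattern query against shared, structured keys instead of integrating with each source system separately. The key convention and values below are invented for illustration:

```python
from fnmatch import fnmatch

# Illustrative governed namespace: structured keys, one place to look.
namespace = {
    "site1/line3/historian/temperature": 71.5,
    "site1/line3/mes/batch_id": "B-1042",
    "site1/line3/cmms/last_maintenance": "2024-05-02",
    "site2/line1/historian/temperature": 68.9,
}

def discover(pattern: str) -> dict:
    """One query against the shared namespace replaces
    per-system, point-to-point integrations."""
    return {k: v for k, v in namespace.items() if fnmatch(k, pattern)}

# Everything known about site1/line3, regardless of source system:
print(discover("site1/line3/*"))
```

The AI application never learns which historian, MES, or CMMS produced each value; the namespace and its governance absorb that complexity once, for every consumer.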

AI is a multiplier, not a foundation

AI amplifies what is already there.

If you have low-quality data, AI will generate low-quality models.

If your data landscape is fragmented, AI will amplify fragmentation.

If your data model is inconsistent, AI will amplify inconsistency.

If you have high-quality data and your architecture is scalable by design, AI will finally deliver on its promise.

The companies that will win with AI in manufacturing are not the ones chasing the latest model. They are the ones investing in context, structure, and governance today, so that every future AI capability can plug in once and scale everywhere.

That, in our opinion, is where the real business impact lies. 

