January 2026

By The FSO INSTITUTE

AI and Machine Learning to Improve Operational Performance

SESSION OVERVIEW

The January 2026 Manufacturing Health Roundtable (MHRT) focused on how Artificial Intelligence (AI) and Machine Learning (ML) are moving from experimental pilots into practical, scalable capabilities that can materially improve manufacturing performance. The discussion—anchored by Bryan Griffen, Executive Director of OMAC—combined real-world deployment experience with MHRT research findings to highlight both the opportunity and the constraints manufacturers face today. The clear conclusion: AI is no longer a technology problem. It is a data, governance, and leadership execution problem.

1) Why This Moment Is Different: AI Is Now Practical at Scale

A core theme of the session was that AI in manufacturing is not new. Neural networks, expert systems, and advanced control techniques have existed for decades. Bryan outlined his own experience deploying AI dating back to the late 1980s and 1990s for predictive maintenance, process optimization, and quality control. What historically limited adoption was not the math—it was the cost of infrastructure, lack of reliable data access, scarce expertise, and poor usability.

Those barriers have largely fallen. Computing power is inexpensive and abundant, connectivity is pervasive, data storage is cheap, and—most importantly—AI tools are now human-centered. Large Language Models (LLMs) and modern interfaces allow engineers, operators, and leaders to interact with AI without deep data science expertise. Bryan compared today’s adoption curve to smartphones: once expensive and clunky, now invisible and indispensable. AI is following a similar—but steeper—trajectory.

The implication for manufacturers is strategic. Organizations that begin building AI capability now will compound value through better data discipline, faster learning cycles, and institutional knowledge capture. Those that delay may find themselves technically able to “buy AI later,” but organizationally unprepared to use it effectively.

2) The Non-Negotiable Foundation: Trusted, Contextualized Data

Both the live discussion and MHRT research converged on one dominant constraint: AI only works when data is consistent, contextualized, and trusted. Poor data quality was cited as the single largest gap preventing effective AI deployment.

Bryan emphasized OMAC’s role in addressing this challenge through interoperability and open standards. Many participants know OMAC through PackML, which began as a packaging machine state model but has evolved into a structured foundation for consistent machine data. That consistency is critical—AI systems cannot reliably learn, optimize, or predict when every machine, line, or OEM describes the same condition differently.

The MHRT survey reinforced this point strongly. Respondents repeatedly noted that incomplete, inaccurate, or subjective data undermines AI outcomes. Manual data entry—particularly for downtime reasons, fault codes, and maintenance records—introduces variability that skews model outputs. Even where sensor data exists, it is often disconnected from operational context (machine state, production intent, maintenance history), limiting its value. The prevailing sentiment was clear: more data does not equal better AI; better-structured data does.
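
The gap between raw and contextualized data can be illustrated with a minimal sketch. The field names, tag IDs, and state labels below are illustrative assumptions, not details from the session; the point is that a sensor value only becomes AI-ready once it is joined with machine state, production intent, and maintenance history:

```python
from dataclasses import dataclass, asdict

# A raw signal as it often arrives: a value with no operational context.
raw = {"tag": "TT-101", "value": 87.4, "ts": "2026-01-15T08:02:11Z"}

@dataclass
class ContextualizedReading:
    """Illustrative record pairing a sensor value with operational context."""
    tag: str
    value: float
    ts: str
    machine_state: str      # e.g., a PackML-style state such as "Execute"
    production_order: str   # what the line was producing at the time
    last_maintenance: str   # reference into the maintenance history

def add_context(reading: dict, state: str, order: str, maint: str) -> ContextualizedReading:
    # Join the raw point with the context an AI model needs to interpret it.
    return ContextualizedReading(
        tag=reading["tag"], value=reading["value"], ts=reading["ts"],
        machine_state=state, production_order=order, last_maintenance=maint,
    )

record = add_context(raw, state="Execute", order="PO-4711", maint="2026-01-03")
print(asdict(record))
```

The same temperature reading means something very different during cleaning than during production—which is why structure, not volume, is the binding constraint.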

3) Where AI Is Already Delivering Measurable Value

Despite foundational challenges, the group aligned on four operational areas where AI is already producing real results:

Energy optimization: Frequently cited as the fastest payback opportunity, AI-driven energy optimization can deliver double-digit reductions in consumption—sometimes approaching 25% depending on baseline performance. AI identifies inefficient operating patterns, suboptimal setpoints, and variability-driven waste while balancing throughput, quality, and emissions constraints.

Predictive maintenance: AI is enabling a shift from reactive and time-based maintenance to predictive and prescriptive strategies. By detecting subtle patterns across vibration, temperature, current, and downtime precursors, AI can forecast failures earlier and recommend interventions. Bryan cited examples where downtime was reduced by as much as 50% in specific environments.
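
A toy sketch shows the spirit of this pattern detection. The data, window size, and threshold below are illustrative assumptions (production systems use far richer models across multiple signals); it simply flags a vibration reading that drifts well outside its recent baseline:

```python
import statistics

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Flag values that deviate sharply from a rolling baseline (illustrative)."""
    flags = []
    for i, x in enumerate(readings):
        if i < window:
            flags.append(False)  # not enough history yet to judge
            continue
        baseline = readings[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # guard against zero spread
        flags.append(abs(x - mu) / sigma > z_threshold)
    return flags

# Hypothetical vibration trace with a spike at the end.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.02, 4.8]
print(flag_anomalies(vibration))  # only the final spike is flagged
```

The value of such early flags is that they trigger an inspection before a failure, shifting maintenance from reactive to predictive.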

Quality inspection (especially vision): Vision systems combined with pattern recognition are outperforming manual inspection at speed and consistency. An example shared involved detecting chipped or broken confectionery products at extremely high line speeds—removing defects before packaging to reduce scrap, rework, and customer complaints. Beyond defect detection, AI supports yield improvement and root cause analysis when linked to process data.

Operator decision support: Many participants viewed decision support as one of the most scalable use cases. AI accelerates troubleshooting, standardizes best practices, and reduces learning curves—especially valuable in environments facing workforce turnover and skills gaps. Importantly, AI enhances operator judgment rather than replacing it.

4) The LLM Inflection Point: Scaling Human Expertise

A significant portion of the discussion focused on Large Language Models (LLMs) as a step-change capability. LLMs are transforming documentation, training, troubleshooting, and engineering productivity by making institutional knowledge searchable and scalable.

Bryan’s message was direct: organizations not already using LLMs are falling behind—not because of technology limitations, but because competitors are accelerating learning and productivity. MHRT participants shared examples where AI compressed work that once took weeks into days through rapid analysis and iterative refinement.

However, the group was equally clear on boundaries. LLM outputs must be validated. AI is powerful but imperfect, particularly when fed poor data or applied outside its strengths.

The consensus model was iterative: AI proposes, humans validate, experts finalize. As Bryan summarized, AI is a decision-presenting tool, not a decision-making tool.

5) The Real Barriers: Integration, Cybersecurity, and Ownership

MHRT research confirmed that lack of system integration is a major reason AI is not used more effectively today. Legacy equipment, proprietary OEM architectures, and fragmented IT/OT systems make end-to-end data pipelines difficult to assemble. Participants discussed practical approaches to modernizing older assets using edge devices and data collectors that translate legacy signals into standardized, PackML-aligned data—avoiding wholesale equipment replacement.
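
A minimal sketch of such an edge translator follows. The numeric status codes are hypothetical (legacy OEM status words vary widely), and the state names are drawn from the PackML state model; the point is that every machine ends up reporting the same condition the same way:

```python
# Illustrative edge-translator sketch: map a hypothetical legacy OEM status
# word onto a standardized PackML-style state name, so downstream systems
# see consistent machine data without replacing the equipment.
LEGACY_TO_PACKML = {
    0: "Stopped",
    1: "Idle",
    2: "Execute",
    3: "Held",
    4: "Suspended",
    5: "Aborted",
}

def translate(legacy_status: int) -> str:
    """Return a standardized state name; unknown codes are flagged, not guessed."""
    return LEGACY_TO_PACKML.get(legacy_status, "Undefined")

print(translate(2))  # a running machine reports "Execute", line-wide
```

Keeping unknown codes explicit ("Undefined") rather than guessing preserves the data trust that the AI models downstream depend on.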

Cybersecurity and the IT/OT divide emerged as persistent friction points. Successful patterns described during the session involved clear responsibility boundaries (e.g., IT owning infrastructure, OT owning shop-floor data usage) combined with secure architectures that restrict vendor access beyond the firewall. Without this clarity, AI initiatives stall amid governance concerns.

The MHRT survey also highlighted vendor lock-in risk, uncertainty around edge versus cloud compute decisions, and unclear ownership of OT networks as recurring obstacles. These challenges are rarely technical—they are organizational.

6) Leadership, Governance, and the Human-Centered Model

Perhaps the strongest alignment between discussion and research was around leadership execution. While survey respondents did not explicitly blame leadership capability, their written comments revealed a deeper issue: lack of ownership and accountability.

Multiple respondents noted that if leadership does not clearly define objectives, articulate a roadmap tied to business outcomes, and actively model adoption, AI initiatives are “destined to fail.” Several compared AI adoption to TPM—requiring sustained leadership commitment, dedicated resources, and cultural reinforcement. Lip service does not drive results.

This directly supported Bryan’s framing of AI as a leadership and governance challenge rather than a software deployment. Organizations that succeed assign clear ownership, fund dedicated teams, and integrate AI into daily work—not side projects. Smaller companies often move faster because CEOs can push adoption directly; larger organizations must overcome structural inertia despite having greater resources.

7) MHRT Research Synthesis: What Must Change

The MHRT survey data crystallized four critical insights:

  1. Data quality and context are the primary constraints, not algorithms.
  2. Integration and ownership gaps prevent scaling beyond pilots.
  3. ROI challenges stem from unclear roadmaps, not lack of value.
  4. High-performing organizations focus on narrow, economically grounded use cases—targeting high-downtime lines, energy-intensive assets, or quality loss drivers—and scale only after proving value. Those pursuing AI as a generic “digital initiative” struggle to justify investment and sustain momentum.

8) What’s Next

MHRT will continue this topic in February, bringing in experts to address the specific barriers surfaced: data contextualization, integration strategies, governance models, and scaling AI responsibly across operations.

Bottom line: AI and ML are already delivering operational performance improvements today. However, scalable impact depends on disciplined foundations—trusted data, interoperable systems, cybersecurity, clear ownership, and a human-centered approach that pairs AI speed with human judgment. Organizations that treat AI as a capability—not a tool—will operate faster, smarter, and more resiliently.

Interested in learning more about the Manufacturing Health Roundtable?

Let's Connect!

Fill out the form below, and one of our team members will reach out to discuss your company's needs.