Artificial intelligence (AI) will have numerous applications in the manufacturing sector—from digital twins and quality control to production efficiency and predictive analytics. The future will surely have AI’s fingerprints all over it.
Generative AI, which uses algorithms to generate content, such as text, images, video, code or 3D designs, has specific applications in industrial manufacturing, such as writing programs and creating efficient designs.
The development of these generative applications is coming primarily from upstream of the factory or plant. Original equipment manufacturers (OEMs), such as machine builders and system integrators, are curious about AI, but the use cases are coming from the automation suppliers. Beckhoff, Siemens, Yaskawa and Emerson already have generative-AI offerings in various stages of development.
Two factors explain this upstream development: the trend toward technology democratization and the cost of AI.
First, there’s a general trend toward the democratization of technology: a supplier can provide automation that replaces technically skilled workers or eliminates the need to build that same technology in-house. Why spend the money to develop it when you can buy it off the shelf, or with minimal modification, from an automation supplier?
The second reason is the amount of resources needed for AI. ChatGPT, a generative pre-trained transformer, is one of the most well-known, and it costs $700,000 a day to run, according to SemiAnalysis, a semiconductor research firm.
Development costs vary depending on the AI, but large-scale projects can exceed $500,000. A single training run of an AI model can consume more power than 100 American households use in a year. Since 2012, the amount of compute used in large AI training runs has doubled every 3.4 months, according to an analysis released by OpenAI. And then there’s the expense of maintaining the hardware and software: current central-processing-unit (CPU) architectures aren’t optimized for AI algorithms, which pushes workloads onto specialized accelerators.
Development, training and maintenance costs aside, the energy costs are staggering. AI workloads require parallel computing, which can mean 100 processors working together. Powering a single data-center server rack can cost $30,000 a year, so 100 racks could tally a $3 million energy bill, according to Sunbird, which makes data-center-infrastructure-management software.
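Both scaling figures are easy to sanity-check with back-of-the-envelope arithmetic; the per-rack cost and doubling period are the numbers cited above from Sunbird and OpenAI:

```python
# Quick check of the scaling figures cited above.
annual_cost_per_rack = 30_000          # USD per data-center rack per year
racks = 100
energy_bill = annual_cost_per_rack * racks
print(f"${energy_bill:,}")             # → $3,000,000

# Doubling every 3.4 months compounds to roughly 11.5x per year:
growth_per_year = 2 ** (12 / 3.4)
print(round(growth_per_year, 1))       # → 11.5
```

That compounding rate is why training costs dominate the discussion: an annual order-of-magnitude growth in compute quickly outruns any single-year hardware budget.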
These two factors could lead to another as-a-service business model. We’ve seen software-as-a-service, robotics-as-a-service and production-as-a-service; AI-as-a-service could be the next wave, given the cost of development, maintenance and energy, as well as the availability of AI from automation suppliers.
Emerson, for example, has released AI-based software called Revamp, which runs in the cloud and converts DeltaV control-system and safety-system code in plant-modernization projects. The software uses continuously updating AI models: each system feeds data back into the cloud-based software as it is modernized, creating learning algorithms that perpetually get smarter and faster at converting legacy code. The AI engine analyzes native files from the existing distributed control systems, safety instrumented systems or programmable-logic-controller (PLC) backups, while using a global library of projects to sort, select and automate engineering tasks. The project is documented automatically, and significant portions can be generated in the DeltaV control system.
The Advanced Robotics for Manufacturing Institute is one of the manufacturing institutes linked together within Manufacturing USA. It recently announced eight projects for second-round funding with plans to award nearly $1.5 million, bringing the total contribution to around $3.3 million across these eight projects.
A project team representing Ohio State University, CapSen Robotics, Yaskawa and Robins Air Force Base designed and deployed an AI robotic system capable of producing component geometries for metal-forming in automotive manufacturing, factory machinery, power plants and military equipment.
USC and Siemens have collaborated on a project that uses AI imitation and reinforcement learning to scoop precise amounts of granular and paste-like materials more safely, with robots replacing humans at the task.
Beckhoff’s TwinCAT Chat Client for its TwinCAT XAE engineering environment makes it possible to use large language models (LLMs), such as OpenAI’s ChatGPT, to develop a TwinCAT project in control programming. The TwinCAT Chat Client enables AI-supported engineering to automate tasks such as the creation or extension of function-block code, as well as code optimization, documentation and restructuring. The client connects to the LLM’s host cloud (Microsoft Azure, in the case of ChatGPT) and provides communication with the PLC development environment. The client’s release is forthcoming, and Beckhoff recommends reviewing generated code before implementing it.
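Beckhoff hasn’t published the client’s internals, but the general pattern it describes is familiar: wrap the engineering request in a prompt, send it to a hosted LLM, and return generated structured text for an engineer to review. The sketch below is purely illustrative, not Beckhoff’s API; the `build_codegen_request` helper and prompt wording are assumptions, and the actual network call is left abstract.

```python
# Hypothetical sketch of LLM-assisted PLC engineering in the spirit of
# TwinCAT Chat Client. Assembles a chat-style request asking a hosted LLM
# to draft an IEC 61131-3 function block; sending it is left abstract.
import json

def build_codegen_request(task: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completion payload for a PLC code-generation task."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("You are a TwinCAT engineering assistant. "
                         "Respond only with IEC 61131-3 Structured Text. "
                         "All generated code must be reviewed by an "
                         "engineer before deployment.")},
            {"role": "user", "content": task},
        ],
    }

request = build_codegen_request(
    "Write a function block that debounces a digital input with a "
    "configurable delay using a TON timer."
)
print(json.dumps(request, indent=2))
```

In the real client, the connection to the LLM’s host cloud and the insertion of the response into the PLC project are handled for the user; the point of the sketch is only that the engineering task travels as a prompt and the code comes back as text to be reviewed.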
Predictive analytics is another AI application that could become generative if a company could bridge the divide between collecting sensor data and turning that analysis into generated work orders in enterprise-asset-management (EAM) or computerized-maintenance-management-system (CMMS) software.
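None of the products in this article close that loop end to end, but the bridge the paragraph describes can be sketched: an anomaly score computed from sensor data crosses a threshold and a work order is generated for the CMMS. Everything here is hypothetical, including the z-score threshold, the asset name and the `WorkOrder` structure; a real deployment would push the order into EAM/CMMS software through its API.

```python
# Hypothetical bridge from predictive analytics to a generated work order.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class WorkOrder:
    asset_id: str
    priority: str
    description: str

def check_vibration(asset_id: str, readings: list[float],
                    baseline: list[float], z_limit: float = 3.0):
    """Return a WorkOrder if the latest reading is an outlier vs. baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = (readings[-1] - mu) / sigma
    if z > z_limit:
        return WorkOrder(
            asset_id=asset_id,
            priority="high" if z > 5 else "medium",
            description=f"Vibration {z:.1f} sigma above baseline; "
                        "inspect bearing.",
        )
    return None  # within normal range: no order generated

baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
order = check_vibration("PUMP-101", [1.0, 1.2, 2.5], baseline)
print(order.priority)  # → high
```

The hard part in practice isn’t the anomaly detection itself but the integration: mapping sensor tags to CMMS asset records and deciding when a statistical flag is trustworthy enough to open an order automatically.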