Shadow AI Shines Light on Escalating Concerns

Unauthorized tools are unintentionally expanding manufacturers' attack surfaces.


AI tools are transforming production floors and supply chains, driving efficiency and innovation. Among other things, generative AI solutions are being used to sift through service manuals, assess inventory availability, optimize procurement, and preemptively identify potential equipment failures—all of which improve productivity, reduce operational disruption, and lower maintenance costs.

That said, many of these AI tools are unauthorized, leaving IT leaders concerned about IP protection, safety standards, and operational integrity.

Unsanctioned Tools Pose Security, Privacy Threats

Although they can be beneficial to working professionals, unauthorized AI tools—also known as shadow AI—create new risks for sensitive data exposure. A single security vulnerability in an unsanctioned, third-party AI tool can result in a data breach with devastating consequences for a manufacturing company.

For example, if employees are uploading sensitive supplier data into unvetted AI tools, a breach in one of these tools could easily result in serious compliance issues and potential regulatory exposure. Needless to say, all these third-party AI tools should be vetted and approved by IT personnel before such tools are integrated into the network. Unfortunately, this is not happening.

According to a recent ManageEngine study of 700 working professionals and IT decision makers across the U.S. and Canada, shadow AI is rampant across the manufacturing sector. In fact, a whopping 79 percent of manufacturing employees say they know co-workers who are using unapproved AI tools.

What's more, without obtaining company approval, employees are inputting sensitive corporate information into these tools. The vast majority of employees (71 percent) input personal tasks and documents; another 46 percent share general company details, and 44 percent enter client details.

Interestingly, there is some naivety among manufacturing employees, as many blindly assume that third-party AI tools have adequate privacy and security measures in place. Practically all (98 percent) manufacturing employees are confident that shadow AI tools protect the information they enter, and nearly half (45 percent) believe there is little to no risk in using such AI tools.

Security Teams Struggle to Catch Up

Although many IT leaders (66 percent) have identified unauthorized AI users on the network, most IT respondents (77 percent) agree that employees are adopting AI faster than IT can assess it, and 85 percent admit that it's challenging to control unauthorized AI use.

Also, it's not just rank-and-file manufacturing employees who are drawing the ire of IT personnel; senior leadership is underestimating the risks of unvetted AI tools as well. Most IT leaders (62 percent) believe other senior leaders in their organization are underestimating shadow AI risks. This is a dangerous situation.

Organization-wide AI alignment is key. All personnel, including senior leadership, should undergo training programs that cover the inherent risks of shadow AI. Additionally, organizations should provide sandbox environments where employees can safely test new AI tools, and workers who follow the organization's AI best practices should be acknowledged or rewarded.

The survey results suggest that working professionals in manufacturing are particularly amenable to AI education, as 66 percent explicitly said they want better shadow AI education.

Also, manufacturing employees want to see clear policies and a list of approved tools. Only one-third of these employees say their company has clear AI policies. However, the vast majority (70 percent) do want clear policies put into place. And roughly half (51 percent) want a list of officially sanctioned tools as well.

Despite the security risks, AI is certainly the future, and it should be embraced. Without a doubt, AI tools can help manufacturing professionals automate routine tasks, such as scheduling production maintenance, assisting with inventory management, facilitating demand forecasting, and preemptively identifying potential equipment failures. 

Given the new U.S. administration, many manufacturing companies are now using generative AI tools to assess tariff threats and enable just-in-time inventory management.

Although AI tools should absolutely be integrated into manufacturing workflows, IT leaders need to spearhead this initiative by vetting and sanctioning all third-party AI tools. During the vendor due diligence process, IT teams should seek out cloud-based AI tools that offer robust security and compliance measures. To ensure supply chain integrity, it's prudent for IT to ask potential vendors for security compliance verifications (such as ISO 27001 or Cyber Essentials Plus).

Also, IT should regularly audit these third-party AI tools' security protocols, as well as any open-source repositories from which the tools' developers pull.

If it's financially feasible, another route is for IT personnel to build a proprietary AI stack in-house. Some IT teams build on top of open-source models, enhancing them through retrieval-augmented generation (RAG) to ensure that sensitive data never leaves the network.
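At its core, the RAG approach works by retrieving relevant internal documents and prepending them to the prompt before it reaches a locally hosted model. The minimal sketch below illustrates that flow; the document text and function names are hypothetical, the bag-of-words similarity is a stand-in for a real embedding model, and the actual model call is omitted.

```python
from collections import Counter
import math

# Hypothetical internal document store. In practice this holds sensitive
# data (service manuals, supplier records) that never leaves the network.
DOCUMENTS = [
    "Pump P-102 requires bearing lubrication every 500 operating hours.",
    "Supplier Acme Corp lead time for hydraulic valves is six weeks.",
    "Conveyor belt tension should be checked at the start of each shift.",
]

def _vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a production system would use embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k internal documents most similar to the query."""
    qv = _vectorize(query)
    ranked = sorted(DOCUMENTS, key=lambda d: _cosine(qv, _vectorize(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved internal context; the call to a
    locally hosted model would go after this step."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the lead time for hydraulic valves?"))
```

Because retrieval and prompt assembly happen entirely on internal infrastructure, the sensitive documents are only ever exposed to the in-house model, not to a third-party service.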

To be sure, shadow AI poses a host of security and compliance threats. According to the study, 44 percent of manufacturing employees admit to entering confidential client data into AI tools without confirming company approval, and another 46 percent admit to sharing internal company data with unsanctioned AI tools. If a data breach were to occur in one of these third-party tools, the financial repercussions for a manufacturing firm could be devastating.

That said, there's an opportunity here as well. If IT personnel can root out shadow AI and roll out safe, secure AI tools that manufacturing employees feel empowered to use, their companies can gain a palpable competitive advantage.
