Presented by Dell Technologies


Shadow AI is here to stay. The unsanctioned use of generative AI outside the purview of IT poses several threats to businesses large and small.

Twenty-eight percent of workers report currently using gen AI at work, and more than half of them do so without their employer's approval, according to a Salesforce survey of 14,000 workers [1]. Those numbers will grow as more employees realize that generating content is a rocket booster for productivity.

The good news for IT leaders? Shadow AI presents a great opportunity to modernize your IT governance strategy. This framework, through which IT leaders balance support for business objectives against risk mitigation, keeps even the most seasoned CIOs awake at night.

IT governance requirements frequently change to meet business needs and shifting risk profiles. Regardless, it is vital to keep pulling the levers that control strategic alignment and risk management even as the entire IT department races to deliver value.

The case for governing gen AI

Trying to rein in shadow AI is akin to putting toothpaste back into its tube. In that vein, it’s not unlike shadow IT. And yet, it’s also very different.

With shadow IT, employees subscribed to SaaS services with a corporate credit card and spent time learning how to use them. Shadow AI is potentially even riskier because employees can simply sign up and start creating content -- no credit card required.

The risk is that employees consume gen AI services in an unsafe manner, presenting a governance nightmare. For example, some staff may paste corporate IP -- such as product specs and other sensitive blueprints -- or personally identifiable information -- such as home addresses, phone numbers and email aliases -- into prompts submitted to public gen AI systems.

Employees may query such services for info on corporate strategy, unaware that they might divulge critical secrets to those who can reverse engineer prompts to gain a competitive edge. And staff who incorporate information about patents or other trade secrets into prompts can expose your organization to legal and copyright issues.

Unfortunately, although business leaders acknowledge the risks posed by gen AI, many organizations lack mature policies and processes to govern these tools.

Indeed, most organizations aren’t moving fast enough to put guardrails in place that ensure safe use: 69% of companies surveyed by KPMG were in the initial stages of evaluating gen AI risks and risk mitigation strategies, or had not begun at all [2].

Banning gen AI use runs the risk of courting sneaky shadow AI that puts your organization at risk for data breaches, compliance violations and reputational damage. And with 44% of IT decision-makers attesting that they are at early to mid-stage on their gen AI journeys, it's incumbent upon IT to bring the rest of the business along for the ride [3].

To that end, IT leaders should work with managers in legal, compliance and risk departments on a centralized gen AI strategy. Together, they must decide how employees can use such tools, articulate those policies to employees and develop education and training to reinforce responsible use.

An AI governance playbook

Such a strategy requires protocols to protect corporate data, and it should dovetail with, reflect and extend the department’s existing IT governance strategy. Organizations should:

Institute AI governance policies: Establish guidelines addressing AI usage within the organization. Define what constitutes approved AI systems, vet those applications and clearly communicate the consequences of using unapproved AI.

Provide approved tools: Giving employees vetted AI applications that help them perform their jobs reduces the incentive to seek out unauthorized tools.

Formalize training: Educate employees on how to use approved gen AI services ethically and responsibly, as well as the risks associated with inputting sensitive content into restricted gen AI systems.

Audit and monitor use: Regular audits and compliance monitoring mechanisms, including software that sniffs out anomalous network activity, can help you detect unauthorized AI systems or applications.

Encourage transparency and reporting: Create a culture where employees feel comfortable reporting the use of unauthorized AI tools or systems. This will help facilitate rapid response and remediation to minimize the fallout of incidents.

Communicate constantly: Gen AI tools are evolving rapidly, so you’ll need to refresh your AI policies and guidelines regularly and communicate changes to employees.

Your gen AI insurance policy

Good governance is like insurance. It’s better to have it and not need it than need it and not have it. And gen AI is like any other emerging technology that requires regular care and feeding.

However, it’s also unlike traditional AI tools in that the ease with which people can adopt it makes it harder to manage. Failure to guide employee adoption of gen AI could invite risky behaviors.

As you upgrade your governance model to account for gen AI technologies, you’ll want to get your data house in order. You’ll identify which data is sensitive and proprietary and is best processed by gen AI systems under your control, preferably on premises.

Gen AI presents a new frontier in the broader AI ecosystem. Trusted partners can help steer you through the learning curves.

Starting with virtual assistants, Dell is building business cases that can help customers get started on their gen AI journeys from the comfort of their datacenters. Leveraging open-source LLMs, you can control how you deploy your gen AI systems, protecting your corporate data.

Ultimately, bringing AI to your data may just be your best governance practice.

Learn more at dell.com/ai.

Clint Boulton is Senior Advisor, Portfolio Marketing, APEX at Dell Technologies.


1. "The Promises and Pitfalls of AI at Work," Salesforce, October 2023
2. "Generative AI: From Buzz to Business Value," KPMG, June 2023
3. "Generative AI Pulse Survey," Dell Technologies, September 2023


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com