Presented by Dynatrace


There is enormous hype around AI in the business world right now. New copilot tools, virtual assistants and LLMs are developed every week to assist organizations with a multitude of functions. As these technologies become more pervasive, every organization needs an internal code of conduct to mitigate the risks and govern how AI is used.

In essence, AI governance is an extension of existing privacy and security rules, because it builds on the data processing practices organizations already have in place. In the same way that organizations have policies defining who can access their data and how it should be handled, an AI code of conduct clearly defines what constitutes acceptable use of AI. By being proactive in establishing these guidelines, organizations can enable their teams to adopt these tools quickly to stay at the forefront of innovation, while ensuring they are doing so in a safe and secure manner.

How to craft an AI code of conduct

It's important that AI policy is driven both from above and below, with executive and employee support, and both groups have strong incentives to engage. Executives need to ensure their use of AI keeps them in compliance with upcoming regulations such as NIS2, while employees want clear guidance on which tools they can use, and for which use cases.

Here are four key considerations for IT leaders implementing their own AI code of conduct.

1. Integrate AI into existing procurement processes

An AI code of conduct should first outline the steps that employees need to complete before they can procure or begin using any new tools. Many organizations are tempted to reinvent the wheel to accommodate AI tools, but this creates a significant amount of unnecessary work. Instead, they should subject any AI tool to the same rigorous procurement process that applies to any other product with data security implications.

The procurement process must also take into consideration the organization’s privacy and ethical standards, to ensure these are never compromised in the name of new technology. Whenever someone wants to enable a new AI tool, it should be subject to the same safeguards as any other technology product the organization uses.

In addition, to streamline decision-making around new AI tools, organizations should centralize requests under a cross-departmental governance board, with representatives from multiple areas of the business, including engineering, business, security and legal teams. To succeed, this board must act as a facilitator rather than a blocker. It needs mechanisms in place to weigh the individual use cases, business applications and cost benefits of different AI tools and select the best ones for the organization.

2. Ban free tools with unclear privacy and security guardrails

Given the wide range of use cases for readily available AI tools, organizations could quickly find that different teams are using their own solutions, often with free personal licenses rather than paid commercial ones. However, what these employees often don’t realize is that free tools’ privacy rules are typically more lenient and can give vendors access to the company’s data. It’s important to be conscious of the privacy policies of AI tools when using them in an enterprise environment, and to only use them under a commercial license.

To address this risk, an AI code of conduct should stipulate that free tools are categorically banned from use in any business context. Instead, employees should be required to use an approved, officially procured, commercially licensed solution with full privacy protections.

3. Take a security-first approach to vendors and partners

Every organization needs to remain aware of how its technology vendors use AI in the products and services it buys from them. To enable this, an AI code of conduct should also include policies that help the organization keep track of its vendor agreements. Solution owners need to ensure that the terms of those agreements don’t change and that security remains a priority as AI is introduced into the products they use.

For example, AI-powered text summarization and transcription tools built into video conferencing solutions can be invaluable for note taking, but could expose sensitive data if vendors use that data to train LLMs. One recent example involved a popular collaboration tool, where it was revealed that the provider was using the messages and files shared between its users to train its AI capabilities, without their consent. To counteract this risk, organizations should build their AI code of conduct with data security as a core consideration. Even if this means restricting certain uses of AI, such as forbidding AI assistants from transcribing meetings, it significantly limits the organization’s exposure to inappropriate or unmanaged AI usage.

4. Demonstrate how to use AI effectively

In many organizations, AI remains a relatively new tool. While employees may have a basic understanding of how AI works, they often lack the knowledge needed to get the best results from it.

For example, there is a common misconception that tools like ChatGPT “think” like a human and can retain the full context of previous questions they’ve been asked. In reality, an AI tool can only remember conversational context in a limited way, which can affect the reliability of the output it generates. As a result, IT support teams often find themselves repeatedly fielding the same questions from employees struggling with similar AI-related challenges.

To address this, an AI code of conduct should also help to establish employee expectations for what the technology can and can’t do, and what skills they need to make the most of it. Organizations can support this by implementing a training program to brief teams on how AI tools work as their knowledge grows and the technology evolves. While team members don’t need specialist AI skills, they should be equipped with the basic knowledge needed to use the tools effectively.

Creating a virtuous cycle of AI conduct

No organization operates in a vacuum, so in addition to governing their own internal use of AI, organizations must hold every company they work with accountable for its practices and ensure it upholds the same standards.

Organizations should also establish a trust center and design their AI policies to be public facing. This allows them to be transparent about how they use and implement AI tools, so their customers and other stakeholders have clarity.

By putting positive, proactive AI policies into the public domain, organizations can learn from one another and shape their own codes of conduct accordingly. The goal is to create a virtuous cycle of AI governance, where organizations work together to ensure these game-changing technologies are used responsibly, successfully and to the benefit of all.

Alois Reitbauer is Chief Technology Strategist at Dynatrace.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com