
September 16, 2024


4 AI Risks CISOs are Most Concerned About and How to Overcome Them

As the AI agenda gains momentum in Canada, more CISOs are concerned about driving successful AI adoption without compromising safety. Learn about the four key AI risks they’re facing and how to address them in this expert-led blog.

CDW Expert
What's Inside
  • AI adoption continues to rise in Canada

    An IBM report revealed that the proportion of enterprises in Canada deploying AI went from 34 percent to 37 percent in 2023, signalling continued AI adoption.

  • 4 AI risks CISOs are most concerned about

    Learn how CISOs can address data integrity, privacy and security, training and monitoring, and AI talent risks with expert advice from KJ Burke, Field CTO — Hybrid Technologies at CDW Canada.

  • CDW AI advisory for CISOs

    CDW AI advisory services help CISOs strategize safe AI adoption with in-house experts, an extensive partner ecosystem and over 20 years of Canadian digital transformation experience.


Canadian organizations have been increasingly adopting AI over the past year. An IBM report revealed that the proportion of enterprises in Canada deploying AI went from 34 percent to 37 percent in 2023, while adoption remained steady in the rest of the world.

Our 2024 Canadian Cybersecurity Study also highlighted growing AI adoption for cybersecurity use cases. For instance, 37.5 percent of surveyed organizations in the financial services industry reported mature AI implementations. 

From automating manual processes to empowering the workforce, AI applications promise transformative results for business. But for CISOs, adopting AI isn’t a singular decision. They must ensure organizational readiness in terms of data security, change management and risk mitigation.

Ultimately, successful AI adoption hinges on an organization’s ability to identify and address potential risks.

In this blog, KJ Burke, Field CTO — Hybrid Technologies at CDW Canada, sheds light on the four key AI risks CISOs are most concerned about and how they can prepare for seamless AI transformation.

1. Risks concerning data integrity


“If data is oil, curated data is gasoline,” says Burke, highlighting the importance of data quality and integrity for fueling AI applications.

Modern AI systems, such as generative AI models, shouldn’t be trained on crude data. They need well-maintained, clean datasets if they are to produce reliable, error-free outputs.

Even an outdated Excel spreadsheet or incompatible user documentation could be considered crude data in the real world. If you were to build an AI chatbot on top of such data, it may end up giving the wrong advice to your customers. This can have a negative effect on customer confidence, lead to lost business and potentially sink the investment that went into creating the chatbot.

This is because an AI model doesn’t know anything on its own – it learns from the knowledge you provide it. If the knowledge isn’t reliable, it will most likely make mistakes.    

Identifying your gasoline-like data

So, how can you identify when a dataset lacks integrity? Look for the three tenets of data integrity (a short code sketch after this list shows how such checks might look in practice):

  • Accuracy – Does the data correctly represent what it’s meant for? A sales report with the wrong numbers is an example of inaccurate data.
  • Completeness – Do you have as much data as required to train the model? Missing rows in an Excel spreadsheet represent incomplete data.
  • Quality – Is the data free from discrepancies and biases? A dataset that is accurate but skewed by demographic or racial biases is an example.
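To make these tenets concrete, here is a minimal sketch of automated integrity checks using pandas. The column names, rules and thresholds are hypothetical; real checks would be driven by your own data dictionary.

```python
import pandas as pd

def check_integrity(df: pd.DataFrame) -> dict:
    """Run basic completeness, accuracy and quality checks on a dataset."""
    report = {}

    # Completeness: flag columns with missing values
    report["missing_by_column"] = df.isna().sum().to_dict()

    # Accuracy: domain rules (hypothetical rule: sale amounts must be non-negative)
    if "sale_amount" in df.columns:
        report["negative_sales_rows"] = int((df["sale_amount"] < 0).sum())

    # Quality: duplicates, plus a rough skew check on a categorical field
    report["duplicate_rows"] = int(df.duplicated().sum())
    if "region" in df.columns:
        report["region_distribution"] = (
            df["region"].value_counts(normalize=True).round(2).to_dict()
        )

    return report

# Example usage with a toy dataset
df = pd.DataFrame({
    "sale_amount": [120.0, -5.0, None, 300.0],
    "region": ["West", "West", "West", "East"],
})
print(check_integrity(df))
```

A report like this won’t fix the data for you, but it makes crude data visible before it ever reaches a model.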

How to ensure data integrity

There is a high chance that your organization already has heaps of crude data that is unfit for AI models. But by running integrity checks, you can bring clean data to the table. Here’s how:

  • Review for human error – Build a scalable process that checks and removes human error from the data.
  • Scrutinize data sources and collection mechanisms – Check where your organization gets its data from, along with the mechanism for storing and processing data.
  • Educate employees on data integrity – Build a culture of data integrity within your organization by educating employees about the importance of preserving data.

By focusing on data integrity upfront, you can help ensure that your AI pilot doesn’t deviate from expectations. This lays a strong foundation for de-risking your AI investment, no matter the use case.

2. Risks concerning data privacy and security

AI tools have become widely accessible, which means anyone can now prompt an AI model without supervision. This presents a critical data privacy and security concern for organizations’ sensitive data.

You can address this risk by answering three questions:

  • Where is your data stored?
  • Who has access to your data?
  • How well-guarded do you keep your data?

Take the example of Apple’s implementation of OpenAI services in its iOS 18 release. Siri, the smart assistant in iOS, can access ChatGPT when you ask it a complex question, but ChatGPT’s backend is not allowed to store any of your data.

At the same time, the IP address of the device requesting the response is also masked. This ensures that while users can benefit from a third-party AI system, their risk exposure is minimal.
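Organizations can apply the same principle by stripping sensitive details before a prompt ever leaves their environment. Here is a minimal, hypothetical sketch of pre-prompt redaction; the two patterns shown are illustrative only, not a complete PII filter.

```python
import re

# Hypothetical redaction rules; a production filter would use a vetted PII library
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the prompt is sent out."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email jane.doe@example.com or call 416-555-0199 about the renewal."))
# -> "Email [EMAIL] or call [PHONE] about the renewal."
```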

Four key pillars of risk in an organization

Burke also has a nuanced way of looking at data security and privacy risks as four key pillars.

“Each of these is the stakeholder that we’re trying to service, but they’re also an example of what the risk profile is for AI tools,” says Burke.

  • Coworker – Risk is contained within the tools and data a coworker can access and is limited in nature.
  • Team – Risk now expands to a team’s collective data, which requires an active set of permissions, access controls and privacy safeguards. An AI model may expose data that’s intended for only selected users, as the sketch after this list illustrates.
  • Organization – At the organizational level, it’s not just a team but the entire ecosystem of users interacting with the AI. All of the data housed within the organization, such as SharePoint or OneDrive data, could potentially end up in the wrong hands.
  • Platforms – Customer-facing AI that uses organizational data to solve customer problems directly represents a reputational risk to the organization.
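One common mitigation at the team and organization levels is to enforce existing permissions before any document reaches the model. Below is a minimal sketch of permission-aware retrieval; the document store, ACL structure and user IDs are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    content: str
    allowed_users: set[str]  # hypothetical ACL attached to each document

DOCUMENTS = [
    Document("Team roadmap", "Q3 priorities...", {"alice", "bob"}),
    Document("HR salary bands", "Confidential...", {"hr_admin"}),
]

def retrieve_for_user(user_id: str, query: str) -> list[Document]:
    """Return only documents this user is already entitled to see.

    The AI model is then grounded exclusively in this filtered set,
    so it cannot leak data the user couldn't access directly.
    """
    permitted = [d for d in DOCUMENTS if user_id in d.allowed_users]
    q = query.lower()
    return [d for d in permitted if q in d.content.lower() or q in d.title.lower()]

print([d.title for d in retrieve_for_user("alice", "roadmap")])  # ['Team roadmap']
print([d.title for d in retrieve_for_user("alice", "salary")])   # []
```

The design choice here is that access control happens before retrieval, not in the prompt, so a clever question can’t talk the model into revealing what the filter never gave it.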

Start small with a low-risk AI use case

Burke recommends that organizations start with a low-risk use case for their AI pilot and move forward from there. “When it comes to AI systems, we would want to implement something very simple and then we should innovate and do something more complex.”

This allows organizations the bandwidth to curb risk and learn the implications at a low-stakes stage. The lessons learned at this stage can help to reduce the risk on larger-scale initiatives in the future.

3. Risks concerning model training and monitoring

An AI model, such as GPT-4o or PaLM 2, is built using specialized training techniques to meet particular requirements. These generative AI models can carry out a variety of tasks such as summarization, content generation, reasoning and so on.

If you want to build a similar model that is private to your organization, there are several risks that come into play:

  • Improper data capture – The model fails to learn from the training data correctly, resulting in performance issues.
  • Overfitting – The model may run well on training data but can’t produce the same results on new data (see the sketch after this list).
  • Computing costs – Custom models require a massive load of computing resources, which presents cost risks.
  • Model security – Cyberattacks and data corruption in the model can result in harmful outcomes when it is deployed.
  • Scalability – The training must account for future scalability, or else the model will become obsolete.
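Overfitting in particular is easy to detect early by holding out a validation set and comparing scores. Here is a minimal sketch, assuming scikit-learn; the toy dataset and the deliberately unconstrained decision tree are illustrative stand-ins for your own model and corpus.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for your training corpus
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# An unconstrained tree tends to memorize the training data
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)
print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")

# A large gap between the two scores is a classic overfitting signal
if train_acc - val_acc > 0.1:
    print("Warning: model likely overfits; consider regularization or more data.")
```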

To train or not to train

The risks associated with model training mainly apply to organizations that need to train their own models. In many cases, this may not even be necessary.

Whether you should train your own model depends on the complexity and needs of your AI use case. If you’re building something simple, there are other methods available to achieve the same outcomes.

| Method | When to prioritize | Risks involved |
| --- | --- | --- |
| Building your own model | When a highly specific solution is needed that cannot be met by existing models; when dealing with proprietary or highly specialized data | Data quality issues; overfitting and underfitting; resource consumption; security risks |
| Using a pre-trained foundation model | For quick deployment of AI capabilities; when the organization lacks deep AI expertise | |
| Fine-tuning a model | When a task-specific solution is needed but resources are limited; for improving performance on specialized datasets | |
| Using RAG (retrieval-augmented generation) | When generating content that requires high accuracy and reliability; for applications needing up-to-date or specific external information | Integration complexity; dependency on external information sources |
| Using a third-party solution | For non-core applications where quick deployment is crucial; when AI is needed for basic tasks like OCR, sentiment analysis, etc. | Data privacy issues; dependency on vendors; limited customization |
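Of these methods, RAG is often the gentlest on-ramp because the base model stays untouched and your data stays in your own store. Here is a minimal, framework-free sketch of the retrieve-then-generate loop; the knowledge base, the word-overlap scoring and the generate_answer stub are hypothetical stand-ins for a real vector store and model API.

```python
# Hypothetical knowledge base; in practice this would be a vector store
KNOWLEDGE_BASE = [
    "Our support line is open 9am-5pm ET on weekdays.",
    "Premium plans include 24/7 chat support.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap; real systems use embeddings."""
    words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate_answer(query: str, context: list[str]) -> str:
    """Stand-in for a call to a generative model, grounded in retrieved context."""
    prompt = f"Answer using only this context: {context}\nQuestion: {query}"
    return prompt  # a real implementation would send this prompt to an LLM

print(generate_answer("When is support open?", retrieve("When is support open?")))
```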

Burke recommends identifying the method that suits your organization best to avoid taking on additional risk. Methods that involve training or fine-tuning your own models usually carry a more complex risk profile.

For simpler use cases, you can go with an easier development method that comes with less risk and better aligns with your goals. 

4. Risks concerning AI talent and skills

While AI has become much more prominent in the workforce lately, the job market has not necessarily kept up with the demand for AI-related skills. As such, organizations that target AI adoption in the near future may face hiring challenges.

Another point of concern is that new AI technologies are flooding the market, making it hard for even experienced AI professionals to keep up with the latest tools in the space.

Along with finding new talent, upskilling existing employees may also become a risk. Employees may need extensive training and awareness on how to use AI systems safely before an organization-wide adoption can be planned.

Failing to address these talent risks may stall or derail AI implementation across the organization.

Build internal competency by partnering with AI experts

This is where partnering with an established solutions provider like CDW can help you in rolling out your AI projects.

We ensure you have access to seasoned AI experts who can guide you toward the right AI applications for your business or help you obtain the necessary infrastructure to build your AI solutions.

Our AI competencies span multiple industries and domains, covering major AI use cases such as AI chatbots, high-performance computing and generative AI. Alongside solutions for enterprise players, we also serve government entities and startups that want to maximize their business efficiency with AI.

CDW AI advisory for CISOs

CDW Canada has more than 20 years of experience in bringing transformation to Canadian organizations. We bring nationwide expertise across business, government, education and healthcare to help organizations make the most out of their AI investments.

Our extensive partner ecosystem allows us to provide the underlying technology, top talent and leadership workshops to get your AI projects off the ground.

Whether you seek guidance on the first steps leading to your AI pilot or need advisory on safe AI adoption, our experts can help you meet your organization’s digital transformation goals.

Our AI competencies cover a wide range of AI applications in Canada, including:

  • Generative AI
  • High-performance computing (HPC)
  • Chatbots and large language models (LLMs)
  • Contact centre modernization
  • Predictive analytics
  • Data readiness/governance

From use case discovery to PoC development and ultimately taking your solution to production, we facilitate the entire AI lifecycle to strengthen your organization’s ability to mitigate risks.