Brand and Network Considerations When Adopting AI Corporately

Jane-Michele Clark
Director of Business Strategy
Palo Alto
AI
clock

Estimated reading time 

6

min

calendar icon

August 28, 2024

July 14, 2025

Table of content

It’s hard to imagine that the use of Copilot, ChatGPT and other AI tools has become commonplace in only a year, yet it has.

As an employer, you can rejoice in the shortened time requirements for writing code, correspondence and plan outlines, as well as the ease of automating routine tasks, but there are also privacy and security concerns to consider.

The story about the company’s whose senior executives’ employment contracts, complete with salaries and benefits, became available to the rest of the company through an internal ChatGPT query went viral – and likely gave many Boards of Directors heart palpitations, wondering what propriety data could end up in the wrong hands.  Given the degree of truly sensitive data that has been made public in the past year, they were right to worry.  So should we all.

In addition to revisiting, and re-writing protocols to protect internal privacy, there are ethical matters to address.

For instance, what if employees are using an AI tool /chatbot to create work for which a client is being billed?  If the client thinks the work was produced by internal experts, yet that’s not the case – and this is not disclosed – irreparable harm could befall the relationship.

What if employees are using it to do their work, so they can have a second, or third, remote job on the side – and the employer doesn’t know?  Again, erosion of trust may be the least of the concerns.

What Should You Do to Protect Corporate Reputation and Morale?

  1. Determine if and/or how you want employees to use AI.
  2. Detail what uses of an AI tool are acceptable – and all instance in which that is not the case.
  3. If you decide to allow specific types of employees to use AI for specific tasks, they should be required to clearly identify the portions of the work that was machine generated vs. what can be considered proprietary, or even corporate intellectual property.
  4. Ensure that AI-generated work be peer-reviewed, as these tools are not 100% accurate, and often superficial work – and work that may not reflect the most recent trends and/or technical advances.
  5. As mentioned, re-write your access protocols (think “least privilege) to prohibit/prevent the uploading of private and/or proprietary data.
  6. Specify the consequences for violating corporate policies and prescribed practices related to AI use.
  7. Assess your network security posture with an AI lens – and make necessary changes.
  8. Continue to review and update your AI implementation approach and policies.

Other AI Adoption Concerns, Including Security

Anand Oswal is the Senior Vice President and General Manager of Network Security at cyber security leader Palo Alto Networks. On August 2, 2024, he wrote a blog post entitled, “Promise and Perils of Building AI Into Your Business Applications”, that also appeared in Forbes:

“Artificial intelligence (AI) is at the forefront of business innovation. But although AI feels like a relatively new concept, 83% of technology service providers already use generative AI in their businesses.

Business use of AI apps spans nearly every type of application, including supply chain optimization, process automation, customer service chatbots, virtual assistants, data analysis, logistics monitoring, fraud detection, competitive intelligence and more. But there are risks involved with this new technology. Take, for example:

  • Airlines, hotels and online travel businesses are building LLM-powered virtual assistants to let you self-manage your bookings. But what if the organization rushed that application to market without considering supply chain vulnerabilities in the app ecosystem – including corrupt AI and machine learning (ML) packages and model vulnerabilities?
  • Pharmaceutical enterprises are trying to use their past research, trials and outcomes to train models, thereby accelerating their ability to take their next drug to the market. But what if the organization leverages an open-source model that was trained on poisoned data, leading to incorrect or misleading trial results?
  • Real estate companies are building online apps to help you find your next property and build the most appropriate offer based on the market data. But what if the application was subject to prompt injection attacks that let bad actors arbitrage the market at the expense of everyday home buyers?

No matter where you may sit on the AI adoption spectrum, it’s clear that the businesses that are embracing AI are winning a competitive edge. But it’s not as easy as plugging an AI model into your existing infrastructure stack and calling it a win. You’re adding a whole new AI stack, including the model, supply chain, plug-ins and agents – and then giving it access to sensitive internal data for both training and inference. This brings a whole new set of complexities to the security game.

So, how does a business harness the potential of AI without compromising security?

  • The journey to securing AI-powered applications starts with discovery. You must be able to see every component of your AI app ecosystem – including AI apps, models, inference and training datasets, and plug-ins.
  • Next, you must understand your security posture to identify and remediate against possible risks in the supply chain and the configuration, as well as data exposure risks to your AI apps. By identifying your highest-risk applications, you can investigate your training dataset risks and potential level of risk to your organization.
  • Then, you must protect against runtime risks. These are the risks your app is exposed to once it’s deployed and exposed to the outside world. Attackers are aware of the speed at which new AI applications are being developed and rushed to market, and they’ve devised an increasing arsenal of AI-specific attacks in the hopes of exploiting new, untested components and weaknesses in the overall security posture of these applications. Enveloping your AI application components with runtime protection mechanisms helps you shield your model against misuse—like prompt injection techniques to leak your customer data or attackers using your models to generate malware.

The promises of AI can’t be overstated. But the risks must be acknowledged with the same fervor to see it live up to its full potential. A comprehensive security solution will help you confidently build AI-powered apps by securing your journey to AI, from design to build to run.”

To learn how you can best protect your cloud, edge, network, endpoints and people in our brave new AI world, please contact us at [email protected], or 1.877.238.9944.

Download PDF

Back to insights, resources and news
arrow

Similar insights