Education and knowledge are incredibly important. This is especially true when experimenting with new and innovative technology. The downside is that you could be playing with fire and not even know it until it is too late.
Over the past several months, the conversation about generative AI has focused on the positives and potential uses. These conversations rarely address the potential downsides, and that is a mistake. Education isn't happening at the level it needs to.
To be clear, this post is not intended to call out or embarrass anyone, but rather to raise awareness of the need to ensure your teams are well educated.
Suspicion becomes reality
A few weeks ago, I ran across something we have long suspected was taking place, though not with such sensitive information. I learned that ChatGPT was not just being used to create confidential content; it was also being fed highly regulated information to create that content.
First, let’s back up. We all know that ChatGPT, OpenAI’s public interface to its generative AI service, is used across all industries and just about all walks of life. Just ask any audience who has used ChatGPT, and most of the hands will go up. This wide use has led to suspicion that ChatGPT could inadvertently be used with confidential information. The hope among executives was that employees would be knowledgeable and refrain from using it with sensitive information. However, that assumes a degree of understanding of how ChatGPT, and more broadly generative AI, works. Unfortunately, that is not the case.
One broad use of ChatGPT discloses confidential information
I always welcome the opportunity to share what I have learned and educate those inside and outside of the IT ranks. A few weeks ago, I was invited to present an update on generative AI to a group of investor relations officers (IROs). In this case, the lead-up prep call and presentation uncovered an issue that CIOs and executives need to be aware of.
In the prep call, it became apparent that a few of the folks were already using ChatGPT. While simply using ChatGPT is not concerning, what they were using it for raised my eyebrows. A few of the IROs mentioned they were using ChatGPT to draft the press releases for their financial results. For those not familiar with the process, this means feeding non-public financial results and information into ChatGPT to build a draft of the release that will ultimately be made public.
These quarterly and annual financial releases are highly regulated and sensitive, as they contain non-public financial data and forward-looking information. That data could provide pre-public insights that lead to financial trades before its public release.
Based on the prep call, we thought it would be good to ask the audience at the presentation how many were using ChatGPT…and how many were using ChatGPT to build their financial releases. A significant number raised their hands.
One of the questions raised was ‘What is the likelihood that someone can get access to that information?’ At the time, I explained that while it was possible, the risk was probably relatively low today, but would likely go up in short order. I didn’t realize at the time just how short that order would be.
Risk of confidential information disclosure goes up
The presentation to the group of IROs was on a Friday. On Sunday I flew from Los Angeles to New York for a CIO event. On Monday, three days after my presentation, I was chatting with a fellow CIO and shared what I had learned. He then shared that he was recently made aware of a major financial institution that was actively mining this information.
Hence, the risk level that I had judged relatively low three days earlier went to high; it was only a matter of time before someone connected the dots. That is assuming they had not already connected the dots and discovered the information. If they have, it means institutional investors could be making trades on a company before the public release. Regulatory bodies like the Securities and Exchange Commission (SEC) will likely have something to say here.
Education empowers innovation
This example of pre-public financial data disclosure is concerning, but it is more indicative of the importance of education. Some organizations have taken a more drastic approach by blocking access to new tools like ChatGPT. Of course, there are ways around those blocks.
The important lesson here is not to block access to innovation. Embrace it…but also educate to provide guidance on how it works and what guardrails should be used. For example, with ChatGPT, use it to build content, but do not feed it confidential information.
First, check with your IRO and ensure that they are not submitting confidential information to ChatGPT. Second, develop a plan to educate your organization on the ways companies are using generative AI, and provide guidance so employees can make good, well-informed decisions.
Today, organizations are taking broad actions like blocking access to ChatGPT or to all .AI domain names. History has shown that broad steps like these are rarely fruitful. A better approach is to create guardrails and education programs that a) encourage the use of innovation and b) limit the risk of misuse.
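To make the guardrail idea concrete, here is a minimal sketch in Python of a pre-submission screen that flags prompts likely to contain non-public financial data before they reach a generative AI service. Everything here is an assumption for illustration: the pattern list, the screen_prompt and submit_to_ai names, and the commented-out gateway call are hypothetical, and a real deployment would route through an approved AI gateway using your organization’s data loss prevention (DLP) policy rather than a hard-coded regex list.

```python
import re

# Hypothetical patterns that often signal non-public financial data.
# A real deployment would use an organization-specific DLP policy.
SENSITIVE_PATTERNS = [
    r"(?i)\bnon[- ]public\b",
    r"(?i)\bmaterial\s+non[- ]public\s+information\b",
    r"(?i)\bdo not distribute\b",
    r"(?i)\bQ[1-4]\s+20\d{2}\s+(results|earnings)\b",
    r"(?i)\bEPS\b",
    r"\$\d[\d,]*(\.\d+)?\s*(million|billion)",  # dollar figures
]

def screen_prompt(prompt: str) -> list[str]:
    """Return the sensitive patterns that the prompt matches."""
    return [p for p in SENSITIVE_PATTERNS if re.search(p, prompt)]

def submit_to_ai(prompt: str) -> None:
    """Block the request when the prompt looks like it contains
    confidential financial information; otherwise pass it along."""
    hits = screen_prompt(prompt)
    if hits:
        print("Blocked: prompt appears to contain confidential data.")
        for h in hits:
            print(f"  matched pattern: {h}")
        return
    # send_to_model(prompt)  # hypothetical call to your approved AI gateway
    print("Prompt passed screening; forwarding to the model.")

if __name__ == "__main__":
    submit_to_ai("Draft a press release: Q3 2023 results, EPS of $1.42, "
                 "revenue of $350 million. Non-public until Thursday.")
```

A crude keyword screen like this will miss some things and over-flag others. The point is not the regexes themselves but the design choice: routing generative AI use through a checkpoint where policy can be enforced and employees can be pointed to education, instead of blocking the tools outright.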