DENVER - As more people take advantage of generative AI programs like ChatGPT, companies would do well to put procedures in place for their use, cybersecurity analysts said during a June 13 panel at the Unconventional Resources Technology Conference (URTeC).

While artificial intelligence (AI) tools can speed up work, without proper safeguards they could leak intellectual property into public view or open the door to malware.

Policies and a cybersecurity mindset are important, but only to a point — the last line of defense comes down to the human element.

As such, cybersecurity has become more critical than ever, panelists said during URTeC’s Cybersecurity for Unconventional Technology session.

Even ChatGPT, an AI chatbot, should be vetted by the IT department before employees use it, cybersecurity consultant Leon Hamilton said.

“When you start using these technologies on the corporate infrastructure, you have to be aware that cybercriminals could be uploading malicious software that could be damaging to your infrastructure,” he said.

And getting people to stop using ChatGPT altogether seems to be a non-starter.

Catharina "Dd" Budiharto, president and CEO of Cyber Point Advisory, said she learned the hard way that locking something down can result in workarounds that are more vulnerable than the original platform. She locked down her children’s smartphones, making it impossible for them to use certain apps.

“They outsmarted us,” she said, noting a friend set up social media accounts for them. “The risk expanded.”

In short, she said, it’s human nature to “do it anyway” when told no.

“We cannot say no,” she said. If companies stop their employees and users from using generative AI, they will use it anyway, but beyond the company’s control. And, ultimately, “we will not have visibility into it.”

She urged a different solution: “Instead of saying no, it’s yes — and,” which allows people to use the tool, but delineates how to use it safely.

Benefits and risks

Scott Moore, Devon Energy’s manager of digital security, said “everyone” is rushing toward AI.

“We all are going to be living with some sort of AI on whatever devices we have from here on out,” he said.

There are benefits, but also risks. “We’re still trying to figure it out,” he said.

Better understanding will come through use cases.

David Llorens, principal at Risk Consulting, said those use cases will vary based on different jobs and workflows and that it will be hard to restrict access to generative AI.

“They’re going to have to use it, whether you like it or not. The cat is already out of the box,” he said.

And, Budiharto said, information sharing and collaboration will help all companies in the industry use these tools more safely.

“One organization alone cannot figure this out,” she said.

She compared the emergence of generative AI to the early adoption of smartphones.

“When iPhones were proliferating the enterprise organizations, everyone was panicking” from legal to security to operations, she said. “Together we were able to integrate the personal device into the enterprise level.”

Of course, one of the draws of generative AI is that it can be used to “write better emails, make the contracting process go faster” and much more, ConocoPhillips Chief Information Security Officer Annessa McKenzie said. “While generative AI may seem scary, well the internet seemed scary” too, at first, she said.

The human element

Devon’s Moore said cybersecurity comes down to awareness and education. The goal is for employees to understand that rules are in place not to inconvenience them but to make their operations safer and more secure.

And much like the oil and gas industry intensified its focus on safety in the field, it can do so with cybersecurity, panelists said.

Llorens said companies can cultivate a cybersecurity culture by introducing cybersecurity moments at the outset of meetings, much as many oil and gas companies open meetings with safety moments.

Part of the problem the industry faces, Budiharto said, is the seemingly competing priorities and fears of the IT and operational technology (OT) sides of businesses.

The conflict comes from the fear of the unknown.

“OT assumes IT doesn’t know anything about the field, or IT says, ‘You don’t know what we see from a cybersecurity threat level,’” she said. “But once we break down the barriers, the technology piece is not the difficult one. It’s the culture and the people.”