ChatGPT May Leak Trade Secrets; U.S. FTC to Focus on AI Violations

Technology Author: Yunfeng Zhang Apr 20, 2023 12:55 AM (GMT+8)

Companies using generative artificial intelligence tools such as ChatGPT may be putting customer information and trade secrets at risk, according to a new report from Israeli cybersecurity firm Team8.

Network Security, Cloud Security

According to the report, the proliferation of new AI chatbots and collaboration tools could expose some companies to data breaches and legal risks. The report's authors are concerned that hackers could use chatbots to gain access to sensitive corporate information or launch attacks against companies. In addition, confidential information fed to chatbots today could be used by AI companies in the future.

Major technology companies, including Microsoft and Alphabet, are looking to improve chatbots and search engines through generative AI technology, using data from the Internet to train their models to provide users with a one-stop Q&A service. If confidential or private data is used to feed these tools, it will be difficult to remove the information in the future, according to the report.

"Enterprise use of generative AI may result in sensitive information, intellectual property, source code, trade secrets, and other data being accessed and processed by others, whether through direct user input or via channels such as APIs, including customer and private information as well as confidential information," the report states. Team8 rated this risk as "high," though it considers the risk "manageable" if appropriate precautions are taken.

In the report, Team8 emphasizes that chatbot queries are not fed into large language models to train the AI, contrary to recent reports suggesting that such prompts could be seen by others. "As of this writing, large language models cannot update themselves in real time, and therefore cannot return information entered by one person to another, which effectively allays such concerns. However, it is not inconceivable that this approach will be used in training future versions of these models," the report says.

The report also identified three other "high-risk" issues related to the integration of generative AI tools, highlighting the threat of sharing more and more information through third-party applications. Microsoft has integrated some of its AI chatbot capabilities into Bing search and Office 365 tools.

Microsoft Corporate Vice President Ann Johnson participated in drafting the report. A Microsoft spokesperson said, "Microsoft encourages a transparent discussion of cyber risks in the security and AI space."

The explosive popularity of ChatGPT, the chatbot developed by OpenAI, has sparked calls for regulation. As companies rush to adopt the technology to improve efficiency, there is widespread concern that it could also be used for nefarious purposes.

Federal Trade Commission (FTC) officials said Tuesday that the agency will focus on companies that misuse AI technology to violate anti-discrimination laws or engage in deceptive practices.

FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya, appearing at a congressional hearing, were asked about concerns related to recent innovations in artificial intelligence. The technology could be used to create convincing "deepfakes," which could enable more fraud and other illegal activity.