According to a recent survey by Reuters/Ipsos, a growing number of workers in the United States are turning to ChatGPT for assistance with routine tasks, despite concerns that prompted companies like Microsoft and Google to limit its use. ChatGPT, a chatbot program powered by generative AI, has gained attention worldwide as companies seek ways to leverage its capabilities in various aspects of their operations. However, security firms and businesses have expressed worries about potential leaks of intellectual property and strategic information that could arise from its usage.
Anecdotal reports abound of workers using ChatGPT to help with tasks such as drafting emails, summarising documents, and conducting preliminary research.
Of those polled in an online AI survey conducted by Reuters/Ipsos between July 11 and 17, 28 per cent said they regularly use ChatGPT in their work routines. Yet only 22 per cent said their employers explicitly permitted the use of such external AI tools.
By contrast, 10 per cent of respondents said their bosses had explicitly banned external AI tools, while roughly 25 per cent were unsure of their company's stance on the technology.
ChatGPT became the fastest-growing consumer application on record shortly after its launch in November 2022, generating both excitement and alarm. Its rapid rise brought OpenAI, the program's developer, into conflict with regulators, particularly in Europe, where the company's data collection practices drew criticism from privacy watchdogs.
The concern stems from the fact that human reviewers may read the chats users generate. Researchers have also found that similar AI systems can reproduce data absorbed during training, posing a risk to proprietary information.
Ben King, VP of customer trust at corporate security firm Okta (OKTA.O), said users often do not understand how their data is used by generative AI services, which is critical for businesses. Because many of these services are free and carry no formal contract, they may escape the risk-assessment processes companies normally apply to vendors.
While OpenAI refrained from commenting on the implications of individual employees using ChatGPT, the company highlighted a recent blog post addressing corporate partners' data usage concerns and clarifying that their data would not be used to further train the chatbot without explicit permission.
Google's Bard, for instance, collects data from users, including text and location information. Alphabet-owned (GOOGL.O) Google declined to comment further on the matter.
Microsoft (MSFT.O) did not provide an immediate response to requests for comment.
An employee at Tinder, the US-based dating app, said that despite an unofficial ban on ChatGPT, staff still use it for "harmless tasks" such as composing emails and general research. Other companies, including Coca-Cola and Tate & Lyle, are experimenting with ChatGPT while keeping a close eye on data security.
Despite the productivity gains these tools promise, experts advise caution over information security vulnerabilities. Maliciously crafted prompts, for instance, can manipulate AI chatbots into divulging sensitive information, making careful implementation and monitoring essential.