What Are the Risks of Using ChatGPT at Your Workplace?
Created by OpenAI, an artificial intelligence company based in San Francisco, ChatGPT is likely the most widely used AI chatbot in 2023. Trained on immense amounts of human language data and fine-tuned with Reinforcement Learning from Human Feedback (RLHF), ChatGPT predicts and constructs sentences to deliver highly human-like text responses within a matter of seconds.
ChatGPT rose to prominence in late 2022. Launched as a free application, it attracted more than one million users by early December 2022. Within two months of launch, it reached an estimated 100 million users in January 2023, making it the fastest-growing consumer application to date. While predominantly used in English, ChatGPT can generate responses in a variety of languages, provided the model has been trained on enough linguistic data for a given language. Understandably, this accessibility has also increased the software's use in non-English-speaking contexts.
ChatGPT in the workplace
As ChatGPT can answer general inquiries and assist with composing emails, articles, and code, the application can, on the surface, seem highly resourceful for the workplace. Such technology, with its promise of automated human-like writing and advising, seems to carry the potential to revolutionize the workplace. Less time spent on uncomfortable emails and monotonous tasks means fewer funds spent on certain administrative roles. With this advanced automation, ChatGPT is another piece of technology poised to replace human jobs, with some claiming it represents the future of work.
While research by TalentLMS reveals that 70% of the workforce has used ChatGPT in the workplace, it also warns that very few people are trained to use AI properly. Few software applications pose no risk at all, and there are several reasons to exercise caution when using this popular application at work.
ChatGPT Risk 1: Data privacy concerns
While OpenAI states that it does not sell user data from ChatGPT, it is important to remember that ChatGPT is built on a complex language model trained on vast amounts of data from the Internet and, potentially, data entered by its users. This raises valid concerns about data leakage: ChatGPT may retain sensitive information, learn from it, and then problematically reproduce it in future conversations with other users. Sharing proprietary information with external parties can in itself breach confidentiality agreements, and entering it into ChatGPT makes that exposure even harder to control. As such, users should not, under any circumstances, share company data of a sensitive or confidential nature.
These concerns have led large firms such as JPMorgan Chase to restrict employees' use of ChatGPT. In March 2023, worry over data protection prompted Italy's data protection authority to issue a temporary emergency order requiring OpenAI to stop using Italians' personal data in its training foundation. The investigation is still underway, and OpenAI has blocked access to ChatGPT for users in Italy until it can respond adequately to the officials. Given the extensive amount of data stored on OpenAI's servers, a data breach resulting from a cyber attack could have severe consequences.
ChatGPT Risk 2: Outputs from ChatGPT may generate inaccurate or misleading information
Another major limitation to consider is the validity of ChatGPT’s responses. The model is trained to predict the next word in a sequence of text, not to verify the truthfulness of the information it provides. OpenAI acknowledges this limitation, noting that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Because the application is not trained to separate fact from fiction, companies should treat its output with caution. Additionally, because ChatGPT’s training data largely ends in 2021, it has limited knowledge of global developments after that date. It may therefore present biased, outdated, or inaccurate content, which may in turn harm your company’s operations.
ChatGPT Risk 3: ChatGPT outputs may impact your company’s Google SEO score
While the debate on this is still ongoing, concerns have emerged that ChatGPT’s automated output could adversely affect a company’s SEO score. As Google continuously optimizes its search results for users, spam-like content (such as ChatGPT output published without fact-checking) may be filtered out and ranked poorly. In theory, Google could detect ChatGPT output by analyzing its text patterns and ‘call out’ websites that rely on it. By employing techniques like the Giant Language Model Test Room (GLTR), which flags text whose word choices are statistically too predictable, Google could potentially determine whether a human authored a page and consequently penalize it for non-human content.
What to avoid when using ChatGPT at work
Sharing sensitive or confidential information
As mentioned previously, you should not enter sensitive or confidential information into ChatGPT, as doing so could jeopardize a business’s operations and expose you or your company to potential lawsuits and fines. It is also crucial to check whether ChatGPT’s output infringes on anyone else’s intellectual property rights. This can be done with software designed to detect AI-generated text and trace the origin of a given passage (such as ZeroGPT), although such tools are still being refined.
Relying excessively on ChatGPT
While ChatGPT does come with its conveniences, it is easy for employees to come to rely on it excessively. This may compromise critical thinking – ChatGPT’s output should not be assumed to be accurate, and it should by no means be the basis for key decisions. Overuse may also leave employees more ‘mentally idle’; while the AI can help with some administrative tasks, higher-level work, such as policy refinement or quality control, still requires human input.
Overlooking ethics associated with ChatGPT
Another commonly overlooked aspect of ChatGPT concerns ethics. Because ChatGPT was developed with massive amounts of human language data as its foundation, it ‘recycles’ information to produce new content.
As such, there is no simple way to determine whether the content ChatGPT produces for you is original enough for commercial use, so it may infringe upon third-party intellectual property (IP) rights without your immediate knowledge. Conversely, it is equally possible for ChatGPT to misuse your intellectual property.
Like most software applications, ChatGPT is not without risks. Before incorporating this tool into your workplace, carefully evaluate potential threats that may arise. By thoroughly understanding its risks and providing comprehensive training for employees, you can ensure the responsible and effective use of ChatGPT in your workplace.
Esme Lee is a science writer and editor in the UK, carrying a passion for tech copywriting. She has a background in educational neuroscience and holds a PhD from the University of Cambridge.