Open AI, ChatGPT: To Be, or Not to Be, That Is the Question
Hamid Reza Saeidnia, Department of Knowledge and Information Science, Tarbiat Modares University
ChatGPT (Generative Pre-trained Transformer) was launched by OpenAI as a chatbot in November 2022 [1]. It is built on OpenAI's GPT-3 family of large language models and fine-tuned with both supervised and reinforcement learning (an approach to transfer learning) [2]. Basically, ChatGPT is an AI-based chatbot that can answer any question, according to its developer OpenAI. ChatGPT can understand complex topics in various specialized fields such as physics, mathematics, and programming; it can also solve complex academic-level problems, generate blog content, compose lyrics, create HTML pages, and more [3]. Among ChatGPT's advantages is its deep grasp of written language and even of programming languages [2]. In fact, ChatGPT can solve coding challenges and even work through some mathematical proofs. ChatGPT is a smart, adaptable, multi-purpose chatbot that can be used as an artificial intelligence assistant to facilitate content creation and even to improve software development workflows.
ChatGPT is newly introduced, however, and it has its disadvantages [1]. It may produce a response that appears to be correct but in reality is not, and depending on the user's level of knowledge, this may be difficult to detect [4]. ChatGPT may also carry biases and behave in a biased way, it still cannot ask questions to clarify our queries, and it sometimes produces overly long outputs [3]. The release of this powerful technology has brought many challenges with it. Some consider it smarter than humans, some call it a destructive technology, and others consider it a great helper for writing scientific and research studies. The most interesting point is that this technology has already been credited by researchers as a co-author. In general, the world of artificial intelligence is very exciting, and this technology can help humans at a very high level in almost any field, something that was considered science fiction in past decades but is now part of reality.
Can ChatGPT be considered a disruptive technology, or is it non-destructive?
But here is the question: "Can ChatGPT be considered a disruptive technology, or is it non-destructive?"
First, here is ChatGPT's own answer to this question:
As an AI language model, I do not have the capability to assess the effect of ChatGPT on businesses and industries. However, as ChatGPT is primarily a conversational tool that uses natural language processing to communicate with humans, it is unlikely to be considered a disruptive technology or non-destructive, as it does not necessarily replace any existing products or services. Rather, it complements and enhances them by providing a new way of interaction and communication with customers and users.
At first glance, ChatGPT can be considered safe, but the main problem is that its developer, OpenAI, has acknowledged that ChatGPT can present misinformation. In other words, ChatGPT can express information in a way that seems correct when, in reality, the information is wrong.
Can misinformation be considered safe?
Misinformation can never be considered safe, because it can lead to a range of negative consequences such as confusion, misunderstandings, and incorrect decisions [5]. In some cases, misinformation can also harm individuals' health and safety [6]. It is therefore important to verify information against reliable sources before accepting it as true [5]. It is worth noting that misinformation refers to false information shared by a source, person, or platform that does not know the information is wrong; when the source knowingly spreads false information, it is called disinformation. This distinction suggests that ChatGPT may not know that it is producing misinformation! If we accept this assumption, can ChatGPT still be considered intelligent? It is therefore always recommended to verify the information provided by an AI language model against multiple sources.
A study titled "The End of Online Exam Integrity?" introduces ChatGPT as a tool that can challenge online exams [7]. In fact, ChatGPT can appear much scarier than the cases mentioned above. Noam Chomsky has said that "ChatGPT is basically high-tech plagiarism." In Chomsky's view, ChatGPT "may have value for some things, but it is not clear what" [8].
ChatGPT and high-tech plagiarism
Different solutions have been proposed to deal with ChatGPT's high-tech plagiarism in academic publishing, such as banning ChatGPT, adjusting teaching styles, and detecting AI-produced content. One study presents a different way to prevent high-tech plagiarism: using non-fungible tokens (NFTs) to create an immutable (unchangeable) record of created texts may be a solution [2, 9]. That ChatGPT can be used for academic misconduct in online exams, for writing believable scientific abstracts, or as a tool that thinks on behalf of a person can be truly disturbing, especially when one considers how much more advanced this tool will be in the next ten years. In contrast to this fear, however, one can also point to the good aspects of the technology, for example, as a tool to raise a student's academic performance, provided, of course, that the student does not rely on ChatGPT to do his or her homework.
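To make the NFT proposal more concrete, here is a minimal, hypothetical sketch in Python of the core idea behind an immutable record of a created text: compute a cryptographic fingerprint of the manuscript and bundle it with provenance metadata, which is the kind of payload an NFT could anchor on a blockchain. The function names (fingerprint, make_provenance_record) and the metadata fields are illustrative assumptions, not part of any actual minting API, and the sketch deliberately omits the blockchain step itself.

import hashlib
import json
from datetime import datetime, timezone

def fingerprint(text: str) -> str:
    # SHA-256 changes completely if even one character of the text changes.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def make_provenance_record(text: str, author: str, title: str) -> dict:
    # Bundle the hash with basic provenance metadata; this is the kind of
    # payload an NFT could point to, making the registered text tamper-evident.
    return {
        "title": title,
        "author": author,
        "sha256": fingerprint(text),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    manuscript = "To be, or not to be, that is the question..."
    record = make_provenance_record(manuscript, "H. R. Saeidnia", "Example manuscript")
    print(json.dumps(record, indent=2))
    # Verification: recompute the hash of a received text and compare it
    # with the registered record; any mismatch reveals alteration.
    print(fingerprint(manuscript) == record["sha256"])  # True for the unmodified text

Anyone who later receives a copy of the text can recompute the hash and compare it with the registered record, so alterations or unattributed reuse become detectable without trusting any single party.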
It seems that most information databases, publishers, universities, and even governments in different countries are now looking for policies on how artificial intelligence tools should be used. It should be noted that progress in artificial intelligence and the production of tools such as ChatGPT must keep pace with the policies developed in that field; otherwise, it can create dangerous aspects for human life. Finally, to answer the question "Can ChatGPT be considered a disruptive technology, or is it non-destructive?": surely in the coming years the global effects of ChatGPT on the jobs, behavior, and performance of researchers and of every ChatGPT user will become well known.
References
- Alshater MM. Exploring the role of artificial intelligence in enhancing academic performance: A case study of ChatGPT. Available at SSRN. 2022.
- Mohammadzadeh Z, Ausloos M, Saeidnia HR. ChatGPT: high-tech plagiarism awaits academic publishing green light. Non-fungible token (NFT) can be a way out. Library Hi Tech News. 2023.
- Saeidnia HR. Using ChatGPT as a Digital/Smart Reference Robot: How May ChatGPT Impact Digital Reference Services? Information Matters. 2023;2(5).
- Pavlik JV. Collaborating With ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator. 2023:10776958221149577.
- West JD, Bergstrom CT. Misinformation in and about science. Proceedings of the National Academy of Sciences. 2021;118(15):e1912444117.
- Cacciatore MA. Misinformation and public opinion of science and health: Approaches, findings, and future directions. Proceedings of the National Academy of Sciences. 2021;118(15):e1912437117.
- Susnjak T. ChatGPT: The End of Online Exam Integrity? arXiv preprint arXiv:2212.09292. 2022.
- Open Culture. Noam Chomsky on ChatGPT: It’s “Basically High-Tech Plagiarism” and “a Way of Avoiding Learning”. 2023 February 10.
- Saeidnia HR, Lund BD. Non-fungible tokens (NFT): a safe and effective way to prevent plagiarism in scientific publishing. Library Hi Tech News. 2023;40(2):18-9.
Cite this article in APA as: Saeidnia, H. R. (2023, June 7). Open AI, ChatGPT: To be, or not to be, that is the question. Information Matters, Vol. 3, Issue 6. https://informationmatters.org/2023/06/open-ai-chatgpt-to-be-or-not-to-be-that-is-the-question/