As AI-powered apps become more prevalent in academia and in schools, there are concerns that they may compromise the integrity of research, or the assurance of learning. However, these concerns are often based on misconceptions about what AI apps can and cannot do.
Firstly, many academics and teachers assume that AI apps can do everything from conceiving a project to publishing original research at the press of a button. However, this is more fantasy than argument, and it reveals an uninformed understanding of what AI apps can and cannot do. AI apps are tools, and like any tool, they have limitations: some things they do remarkably well, and others they do terribly.
[Image: Handwriting and books (generated by Bing)]
Secondly, there are concerns that AI apps will compromise the integrity of research. However, academic integrity is not an isolated concept. The integrity of any research project depends on various factors, such as funding opportunities, the desire to secure tenure, and journal editors’ personal preferences. Leading academic publishers like Elsevier now accept the use of generative AI apps like ChatGPT if they are "used to improve readability and language of the work."
Thirdly, one concern surrounding the use of AI apps is the fear that students will rely solely on this technology to write their essays. However, this fear may be unfounded if sound assessment practices are in place. An AI app is simply a tool, not a substitute for critical thinking and original research. Furthermore, using AI apps does not guarantee a good grade or success in academia, as they cannot replace the knowledge, skills, and effort required to produce a high-quality essay.
As such, educators can encourage students to use ChatGPT as a supplementary tool to enhance their writing and research skills rather than as a replacement for them. In my experience, when offered the option to create something from scratch or to use a ChatGPT-generated text, many students opt for the former over the latter. By being open and transparent about AI apps and ChatGPT, educators can help students develop the skills they need to succeed in their academic pursuits while also taking advantage of the benefits that AI technologies like ChatGPT can provide.
Fourthly, there are concerns that AI is biased. However, biases exist in many aspects of research, from the way libraries are organized to the way laws are drafted. The same is true of AI. Google search is “biased” too, yet academics rarely talk about it. When you look something up on Google, you do not necessarily get the results most relevant to your search; you get the results that have been search-engine optimized (SEO). Try searching for the word “apple,” for instance: almost all the results on the first page concern Apple the company, not the fruit.
Finally, there are concerns that using AI-generated text is plagiarism. However, plagiarism means presenting another person's work as your own. When you use an AI app to generate text, you are not presenting another person's work as your own. You can use AI apps to improve the readability of your work, but under the current guidelines for academic publishing you cannot list an AI app like ChatGPT as a co-author. ChatGPT is not a human, even if it superficially seems to act like one.
In conclusion, while there are some valid concerns about the use of AI in academia and schools, many of these concerns are based on misconceptions. It is important for academics to have an informed understanding of what AI apps can and cannot do, as well as how they can be used to enhance research integrity and readability. With the right approach, AI can be a valuable tool for researchers and teachers alike.
For concrete examples of how to use AI apps in writing, see my e-products such as mini-guides, Notion or ClickUp templates in my Gumroad shop.