Text-based Generative AI applications (e.g., ChatGPT, Gemini, Co-Pilot, Deepseek) often yield suboptimal results for research purposes, with limited functionality and transparency. These applications are designed to simulate text-based conversations and can answer some questions from web sources, but they are not suitable for serious research.
Briefly, here are a few reasons why they might not be suitable:
- Generating misleading data: Some Generative AI applications may create fabricated references or data that do not actually exist.
- Results are not reproducible: Generative AI text generators produce different results each time they are used, but reproducible results are necessary for serious and ongoing research.
- Variable or poor-quality results: Results often draw on Wikipedia pages, online books, and other freely available materials.
- Paid resources are not accessible: Generative AI chat apps cannot provide access to many quality articles, books, and datasets because they cannot bypass paywalls to reach this content.
- You cannot filter the results: Generative AI lacks the advanced filtering options that specialized research databases offer, so you cannot filter by criteria such as date, source type, language, or media type.
- You cannot choose peer-reviewed sources: You cannot directly filter for peer-reviewed sources to ensure quality and academic relevance.
- May give incorrect answers: Some Generative AI applications may present incorrect information or answer a completely different question.
What Should Students and Researchers Use Instead?
Students should always use reliable commercial or Open Access sources.
Özyeğin University resources, academic journal databases such as the IEEE or Emerald collections, high-quality Open Access journal collections such as SpringerOpen, and discovery tools such as Google Scholar or ProQuest should be preferred.