From “Can AI think?” to “Can AI help thinking deeper?”: Is use of Chat GPT in higher education a tool of transformation or fraud?

YALÇIN DİLEKLİ, SERKAN BOYRAZ

Abstract


This research investigated whether using ChatGPT prompts students to think more deeply, as evidenced in their reflection reports. The study was designed as a qualitative case study. Participants were five graduate students in the Curriculum and Instruction department of Aksaray University's Social Sciences Institute, all of them teachers of various subjects working at different levels of state schools. It was found that the majority of participants accepted any information ChatGPT presented with a citation as true, did not feel the need to check the reliability of that information, and could be manipulated by ChatGPT during self-evaluation. Moreover, although they prepared reflective reports comparing their own essays with ChatGPT's output, answered questions designed to prompt critical and reflective thinking, and had completed a graduate-level course on teaching higher-order thinking skills, they demonstrated the expected higher-order thinking performance only to a limited extent. The onus should therefore be on educators to pioneer positive examples of how to use ChatGPT and to guide learners in harnessing its potential with critical thinking, rather than to dismiss it as a tool to be avoided.



References


Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3), 1-13. https://doi.org/10.30935/cedtech/13152

Bisconti, P., Orsitto, D., Fedorczyk, F., Brau, F., Capasso, M., De Marinis, L., Schettini, C., Eken, H., Merenda, F., & Forti, M. (2023). Maximizing team synergy in AI-related interdisciplinary groups: An interdisciplinary-by-design iterative methodology. AI & Society, 38(4), 1443-1452. https://doi.org/10.1007/s00146-022-01518-8

Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. Longman.

Carbonell, J. G., Michalski, R. S., & Mitchell, T. M. (1983). Machine learning: A historical and methodological analysis. Techniques and Methodology, 4(3), 69-78.

Chung, H. M., & Silver, M. S. (1992). Rule‐based expert systems and linear models: An empirical comparison of learning‐by‐examples methods. Decision Sciences, 23(3), 687-707. https://doi.org/10.1111/j.1540-5915.1992.tb00412.x

Cornish, F., Gillespie, A., & Zittoun, T. (2014). Collaborative analysis of qualitative data. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 79-93). SAGE.

Costa, A. L. (1985). Developing minds: A resource book for teaching thinking. Association for Supervision and Curriculum Development.

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1-12. https://doi.org/10.1080/14703297.2023.2190148

De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1-15. https://doi.org/10.3389/fpubh.2023.1166120

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., ..., & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Fuchs, K. (2023). Exploring the opportunities and challenges of NLP models in higher education: Is Chat GPT a blessing or a curse? Frontiers in Education, 8, 1166682.

Hariri, W. (2023). Unlocking the potential of ChatGPT: A comprehensive exploration of its applications, advantages, limitations, and future directions in natural language processing. Computation and Language, 1-23. https://doi.org/10.48550/arXiv.2304.02017

Iskender, A. (2023). Holy or Unholy? Interview with Open AI’s ChatGPT. European Journal of Tourism Research, 34, 1-11. https://doi.org/10.54055/ejtr.v34i.3169

Johns, A. M. (1986). Coherence and academic writing: Some definitions and suggestions for teaching. TESOL Quarterly, 20(2), 247-265. https://doi.org/10.2307/3586543

Kumar, U., Dubey, B., & Kothari, D. P. (2022). Research methodology: Techniques and trends. CRC Press.

Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI ground truth really true? The dangers of training and evaluating AI tools based on experts’ know-what. MIS Quarterly, 45(3), 1501-1525. https://doi.org/10.25300/MISQ/2021/16564

Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Digital Health, 5(3), 105-106. https://doi.org/10.1016/S2589-7500(23)00019-5

McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276-282.

Mitrović, S., Andreoletti, D., & Ayoub, O. (2023). ChatGPT or human? Detect and explain. Explaining decisions of machine learning model for detecting short ChatGPT-generated text. Computation and Language, 1-11. https://doi.org/10.48550/arXiv.2301.13852

Plebani, M. (2023). ChatGPT: Angel or demon? Critical thinking is still needed. Clinical Chemistry and Laboratory Medicine, 61(7), 1131–1132. https://doi.org/10.1515/cclm-2023-0387

Rawas, S. (2023). ChatGPT: Empowering lifelong learning in the digital age of higher education. Education and Information Technologies, 1-14. https://doi.org/10.1007/s10639-023-12114-8

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154. https://doi.org/10.1016/j.iotcps.2023.04.003

Sallam, M., Salim, N. A., Barakat, M., & Al-Tammemi, A. B. (2023). ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J, 3(1), 1-14. https://doi.org/10.52225/narra.v3i1.103

Schön, D. A. (1992). The reflective practitioner: How professionals think in action. Routledge.

Stahl, B. C., & Eke, D. (2024). The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 1-14. https://doi.org/10.1016/j.ijinfomgt.2023.102700

Swartz, R. J., & Parks, S. (2004). Infusing the teaching of critical and creative thinking into content instruction. The Critical Thinking Co.

Swartz, S., & McGuinness, C. (2004). Developing and assessing thinking skills project, Part 1. National Center for Teaching Thinking.

Turing, A. M. (2009). Computing machinery and intelligence. In R. R. Epstein (Ed.), Parsing the Turing Test (pp. 23-65). Springer Netherlands. https://doi.org/10.1007/978-1-4020-6710-5_3

Warrens, M. J. (2015). Five ways to look at Cohen's kappa. Journal of Psychology & Psychotherapy, 5, 1-4. https://doi.org/10.4172/2161-0487.1000197

Wegerif, R. (2007). Literature review in thinking skills, technology and learning. Open University: Technology and Learning.




DOI: https://doi.org/10.51383/ijonmes.2024.316

Copyright (c) 2024

This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0).