Sunday, 16 April 2023

A Training Course to Face the Challenges of Artificial Intelligence

Training staff and students on the use of AI tools must be part of any institutional policy that is implemented to deal with the challenges of the new technology. 


Carefully designed training programmes are an effective way to introduce students and staff to the challenges and potential of the new technology. Training must include ethical and legal issues arising from the use of AI tools, their potential benefits and limitations, and how to use them effectively in higher education.

A New Training Course

In this paper, we describe a course developed by senior academics titled Knowledge and Learning in the Age of ChatGPT. The course deals with fundamental questions about knowledge, its creation, verification and application, especially in an educational context.

The Rationale for the Course

As Artificial Intelligence (AI) tools like ChatGPT encroach on the realm of knowledge production, it becomes important that students and even teachers have a clearer understanding of how universities and colleges create, validate, and transmit knowledge. Rather than worry about how ChatGPT will undermine the integrity of educational processes, HE institutions must bring the technology to heel by treating it as simply another source of information that must be tested and verified like any other.

More than ever before, HE institutions must forge an environment where all knowledge is subject to critical evaluation and students are given a more explicit understanding of how knowledge is created and validated. Students must be taught that knowledge is fragile and vulnerable to manipulation and bias. With that realisation, and equipped with critical and analytical skills, students will be able to evaluate the output of AI technologies and make informed decisions on how to use and apply the information generated by AI tools.

HE institutions must also examine how AI can beneficially serve educational processes. For instance, AI has the potential to liberate education from the control of external agents such as the media, governments and politicians, or from domination by a particular perspective or set of beliefs. Tools like ChatGPT can provide learners with an immediate alternative view of the knowledge that is officially transmitted.

Overall, the course equips students and staff with the skills they need to navigate the opportunities and challenges that AI technologies bring to education, and to apply those technologies ethically.

Course Outline

Ideally, the course should be taught over two full days. However, a shortened version can be delivered in one day.

I. Introduction

Welcome, and introduction to the course.

A brief overview of the topics to be covered.

II. How ChatGPT Answers Questions

Explanation of how ChatGPT is trained on large data sets and how it generates responses.

Examples of how ChatGPT can be used to extract knowledge from text.

Distinguishing ChatGPT's output from information found on the Internet.

Discussion of the advantages and limitations of this technology.

Ethical issues arising from the use of ChatGPT.

III. ChatGPT in Higher Education

Personalized and self-directed learning.

Online tutoring and mentoring.

Automated grading of exams and assignments.

Translation, question answering, summarizing.

Literature search.

Curriculum development.

Generating course materials.

Improving access to higher education, both generally and for students with special needs.

Originality and plagiarism.

IV. Validation of Knowledge

The importance of the credibility and accuracy of knowledge, and the role of the university in ensuring them.

Importance of validated knowledge in making informed decisions, solving problems, and advancing understanding.

Traditional methods of knowledge validation - peer review, fact-checking, citation analysis, and expert opinion.

Challenges in validating online information and information produced by AI tools.

V. Hierarchies of Knowledge

Discussion of the hierarchies of knowledge, from data to information, knowledge, understanding, and wisdom.

Explanation of how these levels build upon one another and contribute to deeper insights.

VI. Knowledge Systems

Meaning of knowledge.

Different types of knowledge.

Overview of different knowledge systems and how they have created knowledge in the past.

Examples of indigenous, religious, and scientific knowledge systems.

Different approaches and perspectives in knowledge systems.

VII. Bloom's Taxonomy and Learning

The hierarchy of cognitive skills.

ChatGPT and the hierarchy of cognitive skills.

Explanation of Bloom's Taxonomy and its six levels of learning: remembering, understanding, applying, analysing, evaluating, and creating.

Discussion of how different types of questions and learning activities can promote higher-order thinking skills.

VIII. Critical Thinking

Thinking tools to evaluate information/knowledge.

Evaluation of sources.

Analysis of biases.

Application of logical reasoning.

Identifying logical fallacies.

External references.

IX. Limitations and Challenges

Ethical implications.

Explanation of the limitations of ChatGPT and other similar technologies, including the possibility of biased or flawed knowledge.

Discussion of the importance of critical thinking skills in evaluating knowledge from these sources.

AI technology may seem to make learning more exciting, but the excitement must be tempered with vigilance in ensuring the accuracy and quality of information.

© Espact Sdn. Bhd.


Monday, 10 April 2023

A conversation with ChatGPT on Plato's Allegory of the Cave


I asked ChatGPT to review the following.

Most formal education, whether in schools or universities, takes place in caves such as the one allegorized by Plato. What we learn constructs our reality. Unlike the cave people, our pupils have no means to escape the knowledge that is given to them. There is other knowledge, but it will never be discovered by the students, or at least not by most of them. Those who teach will not encourage it.

ChatGPT's Response

 As an AI language model, my role is to provide information and avoid personal opinions. However, I can say that the statement you provided is not entirely accurate. While Plato's allegory of the cave can be used as a metaphor for education, it is important to recognize that formal education is not necessarily limited to a particular perspective or set of beliefs. Furthermore, students are encouraged to seek knowledge beyond the classroom, and many educational institutions provide opportunities for students to explore different ideas and perspectives. Additionally, teachers play a crucial role in facilitating this process of discovery and critical thinking. It is important to recognize that formal education is constantly evolving and adapting to new knowledge and ideas.

Me

Yes, agreed. I am grateful for your input. But what if students begin to see or choose only to see knowledge generated by ChatGPT and other similar tools?

 ChatGPT

 It is important to recognize that while AI language models like ChatGPT can provide a vast amount of information and generate responses, they should not be the sole source of knowledge for students. It is essential for students to engage in critical thinking and evaluate the information they receive from various sources, including AI tools.

Additionally, formal education is not just about acquiring knowledge but also developing critical thinking, problem-solving, and analytical skills. These skills enable students to apply the knowledge they have gained to real-world situations and make informed decisions.

Therefore, while AI tools can be useful in supplementing students' learning, it is important for students to have a broad range of learning experiences that include engaging with teachers, peers, and other resources outside of AI tools.

Me

But the tools also offer to criticize and analyze. There is no escape, you see.

ChatGPT

While AI language models like ChatGPT are designed to provide accurate and reliable information, it is still important for students to engage in critical thinking and evaluate the information they receive, even if it comes from AI tools. AI tools are programmed by humans and are not immune to bias or errors.

In addition to critical thinking, students should also develop research skills that allow them to assess the credibility and reliability of sources, including AI tools. By learning to analyze and critique information, students can gain a deeper understanding of a subject and develop the skills necessary to make informed decisions and solve problems in their personal and professional lives.

It is also important for educators to provide guidance on how to use AI tools effectively and responsibly. This includes teaching students how to analyze and interpret the information provided by AI tools and how to evaluate the credibility and accuracy of the information. By doing so, students can learn to use AI tools as a valuable resource while also developing critical thinking and analytical skills.

Developing Institutional Policies on the Use of AI Tools





ChatGPT, a popular AI tool released in November 2022, has sparked public discussion and controversy among higher education institutions, their regulators, academics and students across the world. Not surprisingly, the first reactions have been about the impact of the technology on student behaviour. The discussions, both in this country and overseas, have tended to emphasise how students will resort to the new tool to violate academic integrity rather than how it will spur education forward. It almost seems as if higher education institutions everywhere are in a constant battle to keep their students within ethical boundaries. Perennial complaints about students plagiarising work from the Internet have deflected discussions on how the Internet and new technologies can and have aided educational processes and access to education. The popular media also takes more pleasure in reporting cases of student violations than in telling how the new technologies have assisted individuals in expanding their education.

In January 2023, the New York City Department of Education announced a ban on access to ChatGPT on its networks and devices over the same concerns. There has been very little discussion about how the new tools can be adopted to improve the delivery of higher education through universities, colleges and polytechnics.

This paper puts forward a proposal for higher education institutions to take a policy-driven approach to build on the potential of AI whilst introducing safeguards against abuse and the dangers inherent in the technology.

Technology and Education

From the quill and paper to the printing press and the production of books, technology has disrupted and nudged education to expand access and improve its delivery. Some innovations also brought with them fears that the new devices would undermine the way education is presented, especially the integrity of its processes. The fear of technology escalated in the last century when devices like photocopiers, sound recorders and handphones became affordable to students, and the worry was that these devices would undermine traditional practices in education. Then came the epoch-changing internet. And more fear. Communication became instant, and the world wide web gave access to vast sources of information, most of it freely available. Cutting and pasting replaced taking notes at lectures and the laborious copying from books. Search engines and browsers made information and whole libraries accessible with the click of a mouse or a tap on a screen.

Educational institutions were clear beneficiaries of digital technologies, not only in educational processes but also in advertising and promotion and in a wide range of administrative uses: registering students, grading academic work, and measuring academic performance. But these benefits were seldom publicized as much as students' use of the new technology to complete academic work such as essays and theses. With information and knowledge so widely available and so easily reproduced, plagiarism began to terrorize teachers, especially in universities and other higher education institutions. But more was to come.

Hovering in the wings of the changes brought about in the 20th century, and threatening to change all human enterprise, was Artificial Intelligence, with its promises of autonomous cars, humanoid robots, and the means to change the nature of work. Artificial Intelligence is already being used extensively in banking, healthcare, e-commerce, retail business, and transportation, to name only a few of its applications. A product, no doubt, of higher education itself, Artificial Intelligence is poised to change all education.

The shockwave that ChatGPT sent through universities and colleges across the world can be attributed to two main reasons. The first is the technology's staggering ability to interact with humans in natural language and answer questions on an incredible range of topics. ChatGPT is built on a large language model trained on vast amounts of text, much of it drawn from the internet, and it uses that training to generate answers and communicate them in natural language.
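For readers who want a concrete sense of how this question-and-answer interaction looks in software, the short Python sketch below shows how an application might send a question to a chat model and receive a natural-language reply through OpenAI's public API. It is an illustrative sketch only: the model name, the prompts and the key handling are assumptions, and the library interface differs between versions.

    # Illustrative sketch only: model name and prompts are assumptions,
    # and this targets the pre-1.0 openai Python package; newer versions
    # expose a different client interface.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

    def ask(question: str) -> str:
        """Send one question to a chat model and return its answer as text."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # assumed model; substitute whatever is available
            messages=[
                {"role": "system", "content": "You are a helpful tutor for university students."},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(ask("Summarise Plato's allegory of the cave in three sentences."))

Run with an API key set in the environment, the script prints the model's short answer; the same pattern could sit behind a tutoring or question-answering exercise of the kind discussed in the training course above.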

The second reason stems from the same concerns that academia has with the Internet - the fear that tools like ChatGPT will encourage academic dishonesty and further disrupt the teaching and learning processes. As mentioned at the outset of this paper, those were the concerns that led the New York City Department of Education to ban ChatGPT.

ChatGPT is not the only AI tool that has had an impact on higher education. Other AI tools already in use help with literature searches, formatting documents to citation rules, summarising large texts, and even analysing technical information. These are only some of the applications; more are likely to be produced.

A Policy-Driven Response to AI in Education

What is required now is for the regulators and institutions of higher education to develop a systematic and rational approach to the promise and challenges of the new tools. This involves creating a set of guidelines, procedures, and principles that address the potential and the challenges of the new tools, based on the goals and objectives of the institution concerned. Such an approach will provide consistency and transparency in decision-making, ensure that everyone within the institution is aware of the policies and procedures in place, and provide a framework for measuring success or progress. It will also avoid ad hoc and reactive decisions, and resistance from any die-hard Luddites that may be lurking in the shadows of the institution.

The National Council on Higher Education

It is in times like these that the need for a single policy-making authority in the higher education system becomes apparent. The higher education system in the country is one of the most diverse anywhere in the world. There are over 400 institutions of higher education, not including post-secondary colleges and vocational institutions. The institutions are established under seven separate pieces of legislation that have grown out of different political purposes over the last 50 years. They vary in their governance structures and sources of funding, and they operate with different levels of expertise in governance and educational management. The public-private division of institutions belies other divisions in the system: there are foreign branch campuses that are regulated, at least partially, by their parent institutions, and local institutions that deliver programs from foreign institutions and professional bodies.

The legislative structure of the system, evidently intended to add coherence to it, includes a national council on higher education vested with exclusive powers over policies on higher education. Established by an act of parliament, the council sets policies that bind institutions and regulators alike.

The National Council on Higher Education is the one agency that could have established policies to deal with AI's impact on higher education. Sadly, however, the council has been dormant for almost a decade. Since the law establishing the council (the National Council on Higher Education Act 1996) is still in force, it is doubtful whether any other agency or ministry can perform the functions of the council with the same authority.

The only other agency with legal authority to add commonality to the jumble is the Malaysian Qualifications Agency, which has issued an advisory on the subject.

The MQA Advisory

Even as this document is being written (4 April 2023), the Malaysian Qualifications Agency (MQA) has issued an Advisory Note (2/2023) titled Use of Generative Artificial Intelligence Technology (Generative Artificial Intelligence) in Higher Education (Google Translation). Happily, the advisory adopts a pro-innovation approach to the use of AI in higher education without the paranoia seen in some international responses.

The MQA Advisory concludes that Generative AI-based technology can be a good educational resource if used in a prudent and responsible manner to support and enrich students' learning experiences. Higher education providers need to provide clear guidance on the use of Generative AI without affecting the set learning outcomes or the principles of educational integrity.

The Advisory claims that it is issued in line with principles in MQA's Code of Practice for Programme Accreditation (2017) (COPPA), especially items 1.2.5, 2.2.1, 2.2.2, 4.2.1 and 7.1.1, and similar quality principles in the Code of Practice for Programme Accreditation: Open & Distance Learning (2019) and the Code of Practice for TVET Programme Accreditation (2020). Whilst the COPPA provisions that are cited have some obvious connection with what is stated in the Advisory, the recommendations directed variously at the institutions, the academics, and the students go much further than the current COPPA provisions. On the whole, however, the Advisory will encourage HE institutions to look more closely at how AI and its tools can be deployed in education. How much and how the institutions use the new technology is a matter left to the institutions.

UNESCO Guidance for Policy-makers – AI Essentials for Policy-makers

A UNESCO document, AI and Education - Guidance for Policy-makers, published in 2021 (https://unesdoc.unesco.org/ark:/48223/pf0000376709), is intended to help policy-makers understand the possibilities and implications of AI in teaching and learning. Although the UNESCO guidance does not anticipate the technology behind ChatGPT, it recognized that 'the very foundations of teaching and learning may be reshaped by the deployment of AI in education.' Despite its limitations, the document is a useful resource for those formulating policies on AI and education, particularly now that tools like ChatGPT have made the impact concrete.

Guidelines and Policies for Using AI Tools

This document encourages HE institutions to take a balanced approach to the technology, one that weighs its challenges against its capabilities and benefits in higher education.

It is proposed that higher education institutions adopt a measured, policy-based approach to AI tools rather than simply considering their negative impact on academic integrity. A carefully developed policy, as suggested here, will place the institution in command of the application of the tools. Through such policies, institutions can establish rules for the responsible and ethical use of the tools and periodically review how the tools are applied. The adoption of such a policy, and its publication to staff, students and possibly their parents, may itself be a deterrent against the unethical use of such tools.

Preliminary matters - AI Committee

  1. Establish, in accordance with the constitution of the institution, a committee or authority to advise the management of the institution on the technology. The role of the AI Committee (AIC) shall be described in the institution's AI policy (AIP). The AIC's responsibilities shall include the following.

    1. Advise management on the selection and procurement of the technology.

    2. Keep management informed on any changes that may be made to the technology as and when such changes are made.

    3. Develop strategies to publicize the institution’s use of the technology in accordance with the AIP.

    4. Devise strategies to incorporate institutional information and knowledge into datasets used by AI technology.

    5. Advise management on any matter arising from the use of the technology that may affect compliance with the rules or guidelines of the Malaysian Qualifications Agency.

    6. Advise management on any consequence that the use of the technology may have on the laws and regulations affecting the design and delivery of courses of study in the institution.

    7. Advise management on any use of the technology that may affect the terms of any contractual agreements with third parties such as collaborating institutions.

    8. Advise management on any other matter that the AIC feels should be brought to the attention of management for the safe and ethical use of AI in the institution.

    9. Carry out such other tasks relating to the institution's use of the technology as may be requested by management.

  2. Role of the top-level management.

Top management means the authority within the institution that has overall powers over the management of the institution, its staff and students as laid down in the founding statutes such as the Universities and University Colleges Act 1971, the Private Higher Educational Institutions Act 1996 and the Education Act 1996. Top management shall be responsible for the matters shown below.

    1. Establish policies and guidelines on the use of AI and other technology-based aids in the institution's educational processes.

    2. Develop the policies in consultation with the staff and students of the institution to secure their commitment to the policies.

    3. Describe the application of the policies and any actions that may be taken for non-compliance with the policies.

    4. Describe how the policies will be updated or changed.

    5. Publicize the policies on the institution's website and share them with government agencies and relevant partners of the institution.

    6. Carry out a survey on how the technology might affect the institution's educational processes and conduct periodic risk analyses.

    7. Carry out an evaluation of the potential impact on students and staff, considering the potential benefits and risks.

    8. Consider copyright and other intellectual property rights that might arise from the use of AI tools in the institution.

    9. Consider the protection of personal data and privacy, and develop policies and protocols to ensure that the AI tools are used in a fair, transparent, and unbiased way.

    10. Ensure that the policies cover AI use in both the academic and administrative activities of the institution.

    11. Review and compile a list of any third-party materials used by the institution that are generated by AI, and record the copyright and Intellectual Property (IP) rights attached to those materials.

    12. Review and compile a list of all institutional materials used internally or supplied to third parties, including students, and check whether IP rights are protected when those materials are used in AI applications.

  3. Suggested principles to be included in policies developed by the institution.

    1. Ethical Considerations: The use of AI tools like ChatGPT must be guided by ethical principles that ensure that students, faculty, and staff are treated fairly, equitably, and with respect. These ethical considerations must be grounded in the principles of autonomy and justice, and communicated in a non-threatening manner.

    2. Compliance with Laws and Regulations: The policies must comply with laws and regulations issued by the Ministry of Education, the Ministry of Higher Education and the Malaysian Qualifications Agency, and, where relevant, any guidelines issued by professional associations that may affect the institution's courses of study.

    3. Responsible Use: The use of AI tools must be responsible, transparent, and accountable. The institution must ensure that staff and students understand that the output from the tools may not always be accurate, reliable, or free from bias.

    4. The use of AI tools must be transparent, and the decision-making process that led to their use must be open to scrutiny. Institutions must also be accountable for the impact of AI tools on the individuals and groups affected by institutional use of output from the AI tools.

    5. Similarly, staff and students must be made to understand that they take responsibility for the use of the tools and the output from the tools. Responsibility for accuracy and reliability cannot be shifted to the tools.

    6. Training and Education: Institutions must provide adequate training and education to faculty, staff, and students on the use of AI tools like ChatGPT. This includes training on the ethical considerations of using AI tools, their potential benefits and limitations, and how to use them effectively.

    7. Data Protection and Privacy: Institutions must ensure that the use of AI tools complies with applicable laws and regulations related to data protection and privacy. This includes ensuring that data used by AI tools is collected and stored in a manner that protects the privacy and confidentiality of individuals.

    8. Collaboration: Institutions must collaborate with other institutions and organizations to advance the development and responsible use of AI tools in higher education. This includes sharing best practices, collaborating on research, and promoting the ethical use of AI tools.

    9. Continuous Monitoring: Institutions must continuously monitor the use of AI tools like ChatGPT to ensure that they are being used in accordance with this policy and ethical principles. This includes regularly reviewing the impact of AI tools on students, faculty, and staff, and making adjustments as necessary.

  4. Limitations:

    1. What is stated here are some general issues that institutions may have to take into account in developing their policies on the use of ChatGPT-type AI tools. Institutions and institutional practices vary across the sector, and as such what is proposed here must be taken only as a guide to the formulation of policies. Anyone reading this document must take responsibility for ensuring the accuracy, completeness or usefulness of the information and must assume all risks associated with the implementation of any policies based on the advice provided.