ChatGPT, a popular AI tool released in November 2022, has sparked public discussion and controversy among higher education institutions, their regulators, academics and students across the world. Not surprisingly, the first reactions have been about the impact of the technology on student behaviour. The discussions, both in Malaysia and overseas, have tended to emphasise how students will resort to the new tool to violate academic integrity rather than how it will spur education forward. It almost seems as if higher education institutions everywhere are in a constant battle to keep their students within ethical boundaries. Perennial complaints about students plagiarising work from the Internet have deflected discussion away from how the Internet and new technologies can aid, and have aided, educational processes and access to education. The popular media also takes more pleasure in reporting cases of student violations than in telling how the new technologies have helped individuals expand their education.
In January 2023, the New York City Department of Education announced a ban on access to ChatGPT on its networks and devices, citing the same concerns. There has been very little discussion about how the new tools can be adopted to improve how higher education is delivered through universities, colleges and polytechnics.
This paper puts forward a proposal for higher education institutions to take a policy-driven approach to build on the potential of AI whilst introducing safeguards against abuse and the dangers inherent in the technology.
Technology and Education
From the quill and paper to the printing press and the production of books, technology has disrupted and nudged education to expand access and improve its delivery. Some innovations also brought with them fears that new devices would undermine the way education is delivered, especially the integrity of its processes. That fear escalated in the last century when devices like photocopiers, sound recorders and handphones became affordable to students, and the worry was that they would undermine traditional practices in education. Then came the epoch-changing internet. And more fear. Communication became instant, and the world wide web gave access to vast sources of information, most of which was freely available. Cutting and pasting replaced taking notes at lectures and the laborious copying from books. Search engines and browsers made information and whole libraries accessible with a click of the mouse or a tap on a screen.

Educational institutions were clear beneficiaries of digital technologies, not only in their educational processes but also in advertising and promotion and in a wide range of administrative uses: the registration of students, the grading of academic work, and the measurement of academic performance. But these benefits were seldom publicized as much as students’ use of the new technology to produce academic work such as essays and theses. With information and knowledge so widely available and so easily reproduced, plagiarism began to terrorize teachers, especially in universities and other higher education institutions. But more was to come.
Hovering in the wings of the changes brought about in the 20th century, and threatening to change all human enterprise, was Artificial Intelligence, with its promises of autonomous cars, humanoid robots, and the means to change the nature of work. Artificial Intelligence is already being used extensively in banking, healthcare, e-commerce, retail and transportation, to name only a few of its applications. Itself a product of higher education, Artificial Intelligence is poised to change all of education.
The shockwave that ChatGPT sent through universities and colleges across the world can be attributed to two main reasons. The first is the technology’s staggering ability to interact with humans in natural language and to answer questions on an incredible range of topics. ChatGPT is built on vast quantities of text drawn largely from the internet, and it uses AI to distil an answer from those resources and communicate it in natural language.
The second reason stems from the same concerns that academia has with the Internet: the fear that GPT-style features will encourage academic dishonesty and further disrupt teaching and learning. As mentioned at the outset of this paper, those were the concerns that led the New York City Department of Education to ban ChatGPT.
ChatGPT is not the only AI tool that has had an impact on higher education. Other AI tools already in use help with literature searches, formatting documents to citation rules, summarising large texts, and even analysing technical information. These are only some of the applications; there are more, and more are likely to be produced.
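To make the summarising example concrete, the short sketch below shows how an institution might call a generative AI service to summarise a long passage of text. It is a minimal illustration only: the vendor, the model name and the prompt wording are assumptions rather than recommendations, and any real deployment would be governed by the kind of institutional policy proposed later in this paper.

```python
# Minimal sketch: asking a generative AI service to summarise a passage of text.
# The vendor (OpenAI), the model name and the prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def summarise(text: str, max_words: int = 150) -> str:
    """Return a plain-language summary of the supplied text."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # substitute whichever model the institution licenses
        messages=[
            {"role": "system",
             "content": "You summarise academic text accurately and concisely."},
            {"role": "user",
             "content": f"Summarise the following in at most {max_words} words:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content


# Example use (the passage would normally come from a document on file):
# print(summarise(open("chapter1.txt").read()))
```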
A Policy-Driven Response to AI in Education
What is required now is for the regulators of higher education and the institutions themselves to develop a systematic and rational approach to the promises and challenges of the new tools. Such an approach involves creating a set of guidelines, procedures and principles that address the potential and the challenges of the new tools in light of the goals and objectives of the institution concerned. It will provide consistency and transparency in decision-making, ensure that everyone within the institution is aware of the policies and procedures in place, and provide a framework for measuring success or progress. It will also avoid ad hoc and reactive decisions and head off resistance from any die-hard Luddites that may be lurking in the shadows of the institution.
The National Council on Higher Education
It is in times like these that the need for a single policy-making authority in the higher education system becomes apparent. The Malaysian higher education system is one of the most diverse found anywhere in the world. There are over 400 institutions of higher education, not including post-secondary colleges and vocational institutions. The institutions are established under seven separate pieces of legislation that grew out of different political purposes over the last 50 years. They vary in their governance structures and sources of funding, and they operate with different levels of expertise in governance and educational management. The public-private division of institutions belies other divisions in the system: there are foreign branch campuses that are regulated, at least partially, by their parent institutions, and local institutions that deliver programs from foreign institutions and professional bodies.
The legislative structure of the system, evidently designed to add coherence to it, includes a national council on higher education vested with exclusive powers over policies on higher education. The Council was established by an act of parliament, and institutions and regulators alike are bound by the policies it sets.
The National Council on Higher Education is the one agency that could have established policies to deal with AI’s impact on higher education. Sadly, however, the Council has been dormant for almost a decade. Since the law establishing it (the National Council on Higher Education Act 1996) is still in force, it is doubtful whether any other agency or ministry can perform its functions with the same authority.
The only other agency with legal authority to add commonality to the jumble is the Malaysian Qualifications Agency, which has issued an advisory on the subject.
The MQA Advisory
Even as this document is being written (4 April 2023), the Malaysian Qualifications Agency (MQA) has issued Advisory Note 2/2023, titled Use of Generative Artificial Intelligence Technology (Generative Artificial Intelligence) in Higher Education (the English title given here is a Google translation). Happily, the Advisory adopts a pro-innovation approach to the use of AI in higher education, without the paranoia seen in some international responses.
The MQA Advisory concludes that Generative AI-based technology can be a good educational resource if used in a prudent and responsible manner to support and enrich students' learning experiences. Higher education providers (HEPs) need to give clear guidance on the use of Generative AI without compromising the stated learning outcomes or the principles of educational integrity.
The Advisory states that it is issued in line with principles in MQA’s Code of Practice for Programme Accreditation (COPPA, 2017), especially items 1.2.5, 2.2.1, 2.2.2, 4.2.1 and 7.1.1, and with similar quality principles in the Code of Practice for Programme Accreditation: Open and Distance Learning (2019) and the Code of Practice for TVET Programme Accreditation (2020). Whilst the COPPA provisions cited have some obvious connection with what is stated in the Advisory, the recommendations directed variously at institutions, academics and students go much further than the current COPPA provisions. On the whole, however, the Advisory will encourage HE institutions to look more closely at how AI and its tools can be deployed in education. How much, and how, the institutions use the new technology is left to the institutions themselves.
UNESCO Guidance for Policy-makers – AI and Education
A UNESCO document, AI and Education: Guidance for Policy-makers, published in 2021 (https://unesdoc.unesco.org/ark:/48223/pf0000376709), is intended to help policy-makers understand the possibilities and implications of AI in teaching and learning. Although the UNESCO guidance does not anticipate the technology behind ChatGPT, it recognizes that ‘the very foundations of teaching and learning may be reshaped by the deployment of AI in education.’ Despite its limitations, the document is a useful resource for those now formulating policies on AI and education in response to tools like ChatGPT.
Guidelines and Policies for Using AI Tools
This document encourages HE institutions to take a balanced approach to the technology that will weigh its challenges against its capabilities and benefits in higher education.
It is proposed that higher education institutions adopt a measured, policy-based approach to AI tools rather than simply dwelling on their negative impact on academic integrity. A carefully developed policy, as suggested here, will place the institution in command of how the tools are applied. Through such policies, institutions can establish rules for the responsible and ethical use of the tools and periodically review how the tools are being used. The adoption and publication of such a policy to staff and students, and possibly their parents, may itself be a deterrent against the unethical use of the tools.
Preliminary matters - AI Committee
Establish, in accordance with the constitution of the institution, a committee or other authority to advise the management of the institution on the technology. The role of this AI Committee (AIC) shall be described in the institution’s AI policy (AIP). The AIC’s responsibilities shall include the following:
Advise management on the selection and procurement of the technology.
Keep management informed on any changes that may be made to the technology as and when such changes are made.
Develop strategies to publicize the institution’s use of the technology in accordance with the AIP.
Devise strategies to incorporate institutional information and knowledge into datasets used by AI technology.
Advise management on any matter arising from the use of the technology that may affect compliance with the rules or guidelines of the Malaysian Qualifications Agency.
Advise management on any consequences that the use of the technology may have under the laws and regulations affecting the design and delivery of courses of study in the institution.
Advise management on any use of the technology that may affect the terms of any contractual agreements with third parties such as collaborating institutions.
Advise management on any other matter that the AIC feels should be brought to its attention for the safe and ethical use of AI in the institution.
Carry out such other tasks on the use of the technology as management may request.
Role of top-level management
Top management means the authority within the institution that has overall powers over the management of the institution, its staff and students, as laid down in founding statutes such as the Universities and University Colleges Act 1971, the Private Higher Educational Institutions Act 1996 and the Education Act 1996. Top management shall be responsible for the matters set out below.
Establish policies and guidelines on the use of AI and other technology-based aids in the institution’s educational processes;
The policies shall be developed in consultation with the staff and students of the institution to secure their commitment to the policies;
Describe the application of the policies and any actions that may be taken for non-compliance with the policies;
Describe how the policy will be updated/changed;
Publicize the policies on the institution’s website and share the same with government agencies and relevant partners of the institution;
Carry out a survey on how the technology might affect the institution’s educational processes and conduct periodic risk analyses.
Evaluate the potential impact on students and staff, weighing the expected benefits against the risks.
Consider copyright and other intellectual property rights that might arise from the use of AI tools in the institution.
Consider the protection of personal data and privacy, and the development of policies and protocols to ensure that the AI tools are used in a fair, transparent, and unbiased way.
AI is used in both academic and administrative matters. Any policy that is developed by the institution must cover AI use across the different activities of the institution.
Review and compile a list of any third-party materials used by the institution that are generated by AI, and record the copyright and Intellectual Property (IP) rights attached to those materials.
Review and compile a list of all institutional materials used internally or supplied to third parties, including students, and check whether IP rights are protected when those materials are used in AI applications.
Suggested principles to be included in policies developed by the institution.
Ethical Considerations: The use of AI tools like ChatGPT must be guided by ethical principles that ensure that students, faculty, and staff are treated fairly, equitably, and with respect. These considerations must be grounded in the principles of autonomy and justice, and be communicated in a non-threatening manner.
Compliance with Laws and Regulations: The policies must comply with laws and regulations issued by the Ministry of Education, the Ministry of Higher Education and the Malaysian Qualifications Agency, and, where relevant, with any guidelines issued by professional associations that may affect the institution’s courses of study.
Responsible Use: The use of AI tools must be responsible, transparent, and accountable. The institution must ensure that staff and students understand that the output from the tools may not always be accurate, reliable, or free from bias.
The use of AI tools must be transparent, and the decision-making process that led to their use must be open to scrutiny. Institutions must also be accountable for the impact of AI tools on the individuals and groups affected by institutional use of output from the AI tools.
Similarly, staff and students must be made to understand that they take responsibility for the use of the tools and the output from the tools. Responsibility for accuracy and reliability cannot be shifted to the tools.
Training and Education: Institutions must provide adequate training and education to faculty, staff, and students on the use of AI tools like ChatGPT. This includes training on the ethical considerations of using AI tools, their potential benefits and limitations, and how to use them effectively.
Data Protection and Privacy: Institutions must ensure that the use of AI tools complies with applicable laws and regulations related to data protection and privacy. This includes ensuring that data used by AI tools is collected and stored in a manner that protects the privacy and confidentiality of individuals.
Collaboration: Institutions must collaborate with other institutions and organizations to advance the development and responsible use of AI tools in higher education. This includes sharing best practices, collaborating on research, and promoting the ethical use of AI tools.
Continuous Monitoring: Institutions must continuously monitor the use of AI tools like ChatGPT to ensure that they are being used in accordance with this policy and ethical principles. This includes regularly reviewing the impact of AI tools on students, faculty, and staff, and making adjustments as necessary.
Limitations:
Stated here are some general issues that institutions may have to take into account in developing their policies on the use of ChatGPT-type AI tools. Institutions and institutional practices vary across the sector, and as such what is proposed here should be taken only as a guide to the formulation of policies. Anyone reading this document must take responsibility for ensuring the accuracy, completeness or usefulness of the information and must assume all risks associated with implementing any policies based on the advice provided.
In the early 2000s, ‘Generative topics’ and the CCDT, or Collaborative Curriculum Development Tool, at the Harvard Graduate School of Education exposed educators to the definitions, the academic purpose and the tool’s applications. The objective: for educators to enhance Teaching for Understanding and for greater inclusion of technology in education. Two decades on, in 2023, the open source is evergreen and ‘Generative’ in practice, dynamic by definition.

The MQA Advisory Note (2/2023), “Use of Generative Artificial Intelligence in Higher Ed”, frames AI as a good educational resource to enrich the learning experience. This is optimistic and a perspective for more understanding. The onus of enriching the learning experience rests with the provider, through an engagement of guidance.

AI offers providers/institutions various capabilities: creating curriculum and instruction with the aid of AI, designing a homeroom of guidelines, initiating policy-based T&C for student familiarization and compliance, designing assessments, individualized education plans, channelling relevant content to a learner type or gap, digitizing static content into interactive content, and so on.

Terminologies such as ‘open source’ and ‘generative’ place limitless power in the hands of the common man. Students may seem more focussed today on: 1) weaponizing Generative AI, 2) improving their skill in generating the right prompts, and 3) proving their limitless subject matter knowledge.

According to the founders of OpenAI, ChatGPT’s USP is to deliver instant answers and solutions to assessments and questions. The concern is around the facts presented, as the AI model is itself known to fabricate. While students are aware of the opportunities, universities are in the process of reviewing policies and drafting new guidelines for reasons beyond the delivery of an enriched learning experience.

Mr Menon, your study/paper on the initiation of guidelines and new policies on the “Use of Generative Artificial Intelligence in Higher Ed” is a timely call for more discussions inviting all stakeholders.

Savio Fernandez
Thank you for the observations and comments, Savio.
Thank you for your comments, Anonymous.
Mr. Menon, what a great (but lengthy) article. Thanks for giving your learned views.
There is always the fear of technology by those who perceive only the threats and not the ways to mitigate them. In higher education, it is necessary for learners to assimilate knowledge, but more importantly they should apply it. “The proof of the pudding is in the eating”, so in my case, when I was teaching my students at Zhaoqing University, China (remotely, during the Covid-19 years), I scored them on the way they applied the knowledge they had learned more often than usual. Modern in-class quiz technology (Qinghua University’s Rain Classroom add-on to PowerPoint) meant that I could sneak these questions in, students had to answer them within the timeframe given (often 120 seconds), and we got instant results. I also gave rewards (“hong bao”) to the top/fastest scorers and published the “leaderboard” at the end of the class. This use of tech kept my students on their toes. Even if they had the AI robot, it would not have been fast enough to give them the answers! My use of tech (I had no choice, as I had to teach them from over 4,000 km away) helped to create as good (some say better) a learning environment for my students. The teaching evaluation was reasonable, so much so that the leadership of my university granted me the privilege (the only one of the 2,000 teaching staff) to teach entirely online for two and a half years. So IMHO, educators, especially those in higher education, should find ways to make use of technology and not fear it. Of course, whether the powers that be (some may not have taught a class recently) fully subscribe to this view is another matter.