Sunday 26 September 2021

Higher Education Minister Acted Ultra Vires, Rules High Court

 

Section 4A of the Universities and University Colleges Act 1971:

For the purpose of selecting a qualified and suitable person for the post of Vice-Chancellor or for any other post to which the Minister has the power to appoint under this Act, the Minister shall, from time to time, appoint a committee to advise him on such appointment.

The Minister of Higher Education acted ultra vires and in violation of natural justice in terminating the appointment of a member of the statutory committee established to advise the minister on the appointment of Vice-Chancellors. This was the finding of the Kuala Lumpur High Court in an application for judicial review (JR) of the minister’s action, brought by Dr Andrew Aeria, the dismissed member. The decision was delivered on 12 August 2021.

The Permanent Selection Committee for the Appointment of Vice-Chancellors

The committee in question is the Permanent Selection Committee for the Appointment of Vice-Chancellors (the Committee), established under section 4A of the Universities and University Colleges Act 1971 (the Act). The Committee’s function is to advise the minister in selecting qualified and suitable persons for the post of Vice-Chancellor in public universities. Section 4A was added to the Act in 2009 to ensure ‘greater accountability, transparency, professionalism and academic independence and autonomy in the process of the appointment.’ The section applies not only to the appointment of vice-chancellors and deputy vice-chancellors but also to other officials in the Ministry, such as the Director-General of Higher Education and the Deputy Directors-General. However, the Committee deals only with the appointment of vice-chancellors.

The High Court Decision

Dr Aeria was appointed to the Committee in 2018 for a term of three years, with provision for earlier termination on 30 days’ notice. Notwithstanding those provisions, his appointment was terminated in April 2020, when a new minister took office, on only four days’ notice. Dr Aeria filed his application for judicial review in 2020 and the matter was heard in August this year. Apart from declaring the minister’s actions ultra vires and contrary to natural justice, the court issued an order of certiorari quashing the minister’s decision to terminate Dr Aeria’s appointment. Further, the court declared that, in consequence of the quashing, Dr Aeria’s membership of the Committee was deemed to have continued from the date of his appointment to the date of the court’s order. Dr Aeria was awarded costs of RM5,000 and damages to be assessed by the court.

Wider Implications of the Case

The High Court’s decision may have wider implications than the reinstatement rights of someone wrongfully removed from a statutory committee. Despite the important role it plays, the Committee functions outside public scrutiny and oversight. Even insiders in the higher education sector are in the dark about how the Committee’s advice is reached and communicated to the Minister.

In fact, the very manner in which the Committee is presently constituted raises questions about compliance with section 4A. The section directs the Minister to establish a committee ‘from time to time’ to advise him on the appointment of any official whom the Minister is empowered to appoint under the Act. Section 4A makes no provision for the constitution of the committee or how it is to function. In any case, what the section envisages cannot, by any stretch of the language used, be described as a permanent committee. Nevertheless, what has transpired through bureaucratic processes in the Ministry of Higher Education (MOHE) is the establishment of a committee described as the ‘Permanent Selection Committee for the Appointment of Vice-Chancellors.’

Although it is the Minister who appoints members to the committee, there are documents (created by the MOHE) that deal with the terms of appointment, the responsibilities of members appointed to the committee and the criteria for the selection of Vice-Chancellors. The MOHE’s efforts in setting up the Committee and the attendant regulations no doubt contribute to good management and continuity in the Committee’s processes. However, if the criteria for the appointment of VCs are set by the Ministry, would that not interfere with the independence of the section 4A committee? A further factor, not considered in the High Court’s decision, is the legality of any appointments to the post of Vice-Chancellor made on the advice of the Committee during Dr Aeria’s absence from the Committee.

The substantive orders and declarations issued by the High Court in this judicial review would, it is submitted, support arguments in a future application to challenge the constitution of the Committee and perhaps even the decisions it makes in advising the Minister.

Judicial Review

Dr Aeria’s case establishes the court’s willingness to inquire, on an application for judicial review, into the propriety of appointments to and removals from the Committee. If that is so, then in appropriate circumstances a member of the Committee, or indeed any other party with an interest in the appointment of a Vice-Chancellor, may be able to apply for judicial review of the advice that the Committee gives to the Minister under the section.

Judicial review is a powerful tool for subjecting official decisions to an independent assessment of their lawfulness. Actions for judicial review play a key role in holding those vested with statutory powers to the limits of those powers. Not many in academia are willing to take such actions, and the High Court decision is therefore a tribute to Dr Andrew Aeria’s willingness to challenge the Minister’s decision.

Unanswered questions aside, there is no doubt that the High Court’s decision will strengthen the role of the section 4A Committee, prevent its manipulation by the Minister and ensure the independence of the members appointed to it.

Sunday 12 September 2021

OECD Artificial Intelligence (AI) Principles for responsible stewardship of trustworthy AI

The Recommendation on Artificial Intelligence (AI) is held out as the first intergovernmental standard on AI. It was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy.

The recommendations are the outcome of OECD research and discussions carried out over three years. The OECD found that this work demonstrated a need to shape a policy environment at the international level to ‘foster trust in and adoption of AI in society.’ The recommendations on AI complement existing OECD standards on privacy and data protection, digital security risk management, and responsible business conduct.

The Recommendation on AI contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”, for the purposes of the Recommendation. These terms are defined as follows.

·         AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

·         AI system lifecycle: AI system lifecycle phases involve:

i) ‘design, data and models’; which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building;

ii) ‘verification and validation’;

iii) ‘deployment’; and

iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

·         AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.

·         AI actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

·         Stakeholders: Stakeholders encompass all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.

Five high-level values-based principles

1.       Inclusive growth, sustainable development and well-being

a.       Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.

2.       Human-centered values and fairness

a.       AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

b.      To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

3.       Transparency and explainability

a.       AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

i. to foster a general understanding of AI systems;

ii. to make stakeholders aware of their interactions with AI systems, including in the workplace;

iii. to enable those affected by an AI system to understand the outcome; and

iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

4.       Robustness, security and safety

a.       AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

b.      To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

c.       AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

5.       Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

Five Recommendations for Policy Makers

6. Investing in AI research and development

a) Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.

b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection, to support an environment for AI research and development that is free of inappropriate bias, and to improve interoperability and use of standards.

7. Fostering a digital ecosystem for AI

Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.

8. Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.

b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

9. Building human capacity and preparing for labour market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.

b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.

c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits of AI are broadly and fairly shared.

10. International cooperation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.

b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.

c) Governments should promote the development of multi-stakeholder, consensus-driven technical standards for interoperable and trustworthy AI.

d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.