The OECD Recommendation on Artificial Intelligence (AI) is held out as the first intergovernmental standard on AI. It was adopted by the OECD Council at Ministerial level on 22 May 2019, on the proposal of the Committee on Digital Economy Policy. The Recommendation is the outcome of OECD research and discussions carried out over three years; the OECD found that this work had demonstrated a need to shape a policy environment at the international level to 'foster trust in and adoption of AI in society.' The Recommendation complements existing OECD standards on privacy and data protection, digital security risk management, and responsible business conduct.
The Recommendation on AI contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as "AI system" and "AI actors", for the purposes of the Recommendation. These terms are defined as follows.
· AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
· AI system lifecycle: The AI system lifecycle involves the following phases: i) 'design, data and models', a context-dependent sequence encompassing planning and design, data collection and processing, and model building; ii) 'verification and validation'; iii) 'deployment'; and iv) 'operation and monitoring'. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.
· AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.
· AI actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.
· Stakeholders: Stakeholders encompass all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.
Five high-level values-based principles
1. Inclusive growth, sustainable development and well-being
a. Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.
2. Human-centred values and fairness
a. AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
b. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.
3. Transparency and explainability
a. AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context and consistent with the state of the art:
i. to foster a general understanding of AI systems;
ii. to make stakeholders aware of their interactions with AI systems, including in the workplace;
iii. to enable those affected by an AI system to understand the outcome; and
iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.
4. Robustness, security and safety
a. AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.
b. To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system's outcomes and responses to inquiry, appropriate to the context and consistent with the state of the art.
c. AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
5. Accountability
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
Five Recommendations for Policy Makers
6. Investing in AI research and development
a) Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.
b) Governments should also consider public investment, and encourage private investment, in open datasets that are representative and respect privacy and data protection, to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.
7. Fostering a digital ecosystem for AI
Governments should foster the
development of, and access to, a digital ecosystem for trustworthy AI. Such an
ecosystem includes in particular digital technologies and infrastructure, and
mechanisms for sharing AI knowledge, as appropriate. In this regard,
governments should consider promoting mechanisms, such as data trusts, to
support the safe, fair, legal and ethical sharing of data.
8. Shaping an enabling policy environment for AI
a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.
b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems, to encourage innovation and competition for trustworthy AI.
9. Building human capacity and preparing for labour market transformation
a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.
c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits of AI are broadly and fairly shared.
10. International co-operation for trustworthy AI
a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.
b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
c) Governments should promote the development of multi-stakeholder, consensus-driven technical standards for interoperable and trustworthy AI.
d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.