Showing posts with label Artificial Intelligence. Show all posts

Monday, 21 April 2025

The Irreplaceable Voice: Will Artificial Intelligence Silence Human Song?


The first thing I do every morning is turn on the music. I start each day with Carnatic music from CDs arranged at random on the player. The music brings the stillness of the morning to life and starts my day on a positive note. The language, genre and style of the music may change, but music remains in the background all day long, at home and at work.

As I write, Maharajapuram Santhanam's distinctive earthy voice fills the room. Like all great voices, it is unmistakable—one that I doubt any human or algorithm can ever replicate.

Yet synthetic voices pervade the media to communicate news and other information and to persuade people to purchase goods and services. All we hear are the same few voices repeated over and over.

As artificial intelligence grows ever more sophisticated, I wonder whether this distinct—and, I believe, irreplaceable—pleasure of listening to a multitude of voices in Carnatic music will one day be supplanted by synthetic voices. Is this the beginning of us humans abdicating our individual differences to machines?

The Allure of the Human Voice

What makes a voice like Maharajapuram’s so captivating is not just its pitch or clarity, but its humanity—the slight tremble in a long-held note, the improvisational flourish in a raga, the faint breath between phrases, a cough, a clearing of the throat. These are not flaws; they are the fingerprints of a living artist. Carnatic music, like all great vocal traditions, thrives on this individuality. One singer’s rendition of a song does not sound like the same song rendered by another. Nor is one recording of a song identical to a later recording of the same song.

This diversity is not incidental; it is part of the tradition. I listen to well over a hundred singers, and only rarely do I mistake a rendering by one for another, even though I am no expert in the Carnatic tradition.

The Rise of Synthetic Sound

Yet, AI now threatens to flatten this richness. Already, tools exist to clone voices, generate "perfect" singing, or even compose new "performances" by long-dead artists. At first, this may seem harmless—a novelty, a tool for experimentation. But the danger lies in normalisation. If listeners grow accustomed to synthetic voices, will they still seek out the raw, unfiltered beauty of human song? If record labels can license an AI "Santhanam" to sing endlessly without fatigue or ageing, will they invest in living artists? The convenience of artificiality could quietly erode our connection to the real.

A Call for Self-Awareness

Our greatest challenge is not to reject AI, but to awaken to what it means to be human alongside it. We must recognise two worlds—the organic and the artificial—without blurring their boundaries. Just as we teach children to distinguish a photograph from a painting, we must now teach them to discern a living voice from a synthetic one, a human choice from an algorithmic suggestion. But this awareness must extend beyond sound to touch, intuition, and creativity. Our selves—and our agency—are not relics to be archived, but flames to be guarded.

Teaching Humanity

AI’s greatest danger is that it will make us forget who we are as humans and forget the multiple talents we are born with. To counter this demands a reimagining of education. Let us teach students to wield AI as a tool—for drafting ideas, transcribing melodies, or exploring creative possibilities—while fiercely preserving the sanctity of human expression. The new curricula must emphasise:

  • The role of human emotions and the human body in art (the breath behind a note, the callus on a violinist’s finger);
  • The ethics of authenticity (when to label AI, when to privilege human creation);
  • The courage of imperfection (why a cracked note can express more than a flawless one).

As Santhanam’s voice rises in a final, resonant phrase, I am reminded that technology has no inner life. A song synthesised by AI may delight the ear, but only a human voice can reach the soul of the listener. Our task is not to resist progress, but to insist that progress serves what machines can never replicate: the messy, glorious act of being alive, of being human.

"Let us use AI, but never mistake it for artistry. Let us listen to both worlds—but only bow before one."

Sunday, 16 April 2023

A Training Course to Face the Challenges of Artificial Intelligence

Training staff and students in the use of AI tools must be part of any institutional policy designed to deal with the challenges of the new technology.


Carefully designed training programmes are an effective way to introduce students and staff to the challenges and potential of the new technology. Training must cover the ethical and legal issues arising from the use of AI tools, the tools' potential benefits and limitations, and how to use them effectively in higher education.

A New Training Course

In this paper, we describe a course developed by senior academics titled Knowledge and Learning in the Age of ChatGPT. The course deals with fundamental questions about knowledge: its creation, verification and application, especially in an educational context.

The Rationale for the Course

As Artificial Intelligence (AI) tools like ChatGPT begin to encroach on the realm of knowledge production, it becomes important that students and even teachers have a clearer understanding of how universities and colleges create, validate, and transmit knowledge. Rather than worry about how ChatGPT will undermine the integrity of educational processes, HE institutions must bring the technology to heel as simply another source of information that must be tested and verified like any other.

More than ever before, HE institutions must forge an environment where all knowledge is subject to critical evaluation and students are given a more explicit understanding of knowledge creation and validation. Students must be taught that knowledge is fragile and vulnerable to manipulations and biases. With that realisation, and equipped with critical and analytical skills, students will be able to evaluate the output of AI technologies and make informed decisions on how to use and apply the information generated by AI tools.

HE institutions must also examine how AI can beneficially serve educational processes. For instance, AI has the potential to liberate education from the control of external agents like the media, governments and politicians or a particular perspective or set of beliefs. Tools like ChatGPT can provide learners with an immediate alternative view of the knowledge that is officially transmitted.

Overall, the course equips students and staff with the skills they need to navigate, ethically and effectively, the opportunities and challenges that AI technologies bring to the realm of education.

Course Outline

Ideally, the course should be taught over two full days. However, a shortened version can be delivered in one day.

I. Introduction

Welcome, and introduction to the course.

A brief overview of the topics to be covered.

II. How ChatGPT Answers Questions

Explanation of how ChatGPT works on large data sets.

Examples of how ChatGPT can be used to extract knowledge from text.

Distinguishing ChatGPT output from information on the Internet.

Discussion of the advantages and limitations of this technology.

Ethical issues arising from the use of ChatGPT.

III. ChatGPT in Higher Education

Personalised learning and self-directed learning.

Online tutoring and mentoring.

Automated grading of exams and assignments.

Translation, question answering, summarizing.

Literature search.

Curriculum development.

Generating course materials.

Improving access to higher education, both generally and for students with special needs.

Originality and plagiarism.

IV. Validation of Knowledge

The importance of credibility and accuracy of knowledge and the role of the university in that process.

Importance of knowledge in making informed decisions, solving problems, and advancing knowledge.

Traditional methods of knowledge validation: peer review, fact-checking, citation analysis, and expert opinion.

Challenges in validating online information and information produced by AI tools.

V. Hierarchies of Knowledge

Discussion of the hierarchies of knowledge, from data to information, knowledge, understanding, and wisdom.

Explanation of how these levels build upon one another and contribute to deeper insights.

VI. Knowledge Systems

Meaning of knowledge.

Different types of knowledge.

Overview of different knowledge systems and how they have created knowledge in the past.

Examples of how indigenous, religious, and scientific knowledge systems create knowledge.

Different approaches and perspectives in knowledge systems.

VII. Bloom's Taxonomy and Learning

The hierarchy of cognitive skills.

ChatGPT and the hierarchy of cognitive skills.

Explanation of Bloom's Taxonomy and its six levels of learning: remembering, understanding, applying, analysing, evaluating, and creating.

Discussion of how different types of questions and learning activities can promote higher-order thinking skills.

VIII. Critical Thinking

Thinking tools to evaluate information/knowledge.

Evaluation of sources.

Analysis of biases.

Application of logical reasoning.

Identifying logical fallacies.

External references.

IX. Limitations and Challenges

Ethical implications

Explanation of the limitations of ChatGPT and other similar technologies, including the possibility of biased or flawed knowledge.

Discussion of the importance of critical thinking skills in evaluating knowledge from these sources.

AI technology may seem to make learning more exciting, but the excitement must be tempered with vigilance in ensuring the accuracy and quality of information.

© Espact Sdn. Bhd.


Sunday, 12 September 2021

OECD Artificial Intelligence (AI) Principles for responsible stewardship of trustworthy AI

 The Recommendation on Artificial Intelligence (AI) is held out as the first intergovernmental standard on AI. The Recommendation was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy.

The recommendations are the outcome of OECD research and discussions carried out over three years. The OECD found that this work demonstrated a need to shape a policy environment at the international level to ‘foster trust in and adoption of AI in society.’ The recommendations on AI complement existing OECD standards on privacy and data protection, digital security risk management, and responsible business conduct.

The Recommendation on AI contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”, for the purposes of the Recommendation. These terms are defined as follows.

• AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

• AI system lifecycle: AI system lifecycle phases involve:

i) ‘design, data and models’, a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building;

ii) ‘verification and validation’;

iii) ‘deployment’; and

iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

• AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.

• AI actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

• Stakeholders: Stakeholders encompass all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.

Five high-level values-based principles

1. Inclusive growth, sustainable development and well-being

a. Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.

2. Human-centred values and fairness

a. AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b. To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

3. Transparency and explainability

a. AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

i. to foster a general understanding of AI systems;

ii. to make stakeholders aware of their interactions with AI systems, including in the workplace;

iii. to enable those affected by an AI system to understand the outcome; and

iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

4. Robustness, security and safety

a. AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

b. To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

c. AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

Five Recommendations for Policy Makers

6. Investing in AI research and development

a) Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.

b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection, to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.

7. Fostering a digital ecosystem for AI

Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.

8. Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled up, as appropriate.

b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

9. Building human capacity and preparing for labour market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.

b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.

c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits of AI are broadly and fairly shared.

10. International cooperation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively cooperate to advance these principles and to progress on responsible stewardship of trustworthy AI.

b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.

c) Governments should promote the development of multi-stakeholder, consensus-driven technical standards for interoperable and trustworthy AI.

d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.