The UN Secretary-General’s AI Advisory Body launched its Interim Report: Governing AI for Humanity in December 2023 and opened it for public comment until March 31, 2024. In response to the report’s call for closer alignment between international norms and how AI is developed and rolled out, NetMission.Asia, as a youth initiative, held an internal call for comments among its members to prepare a submission.
Below are the comments we submitted, organized by section of the interim report.
Opportunities and Enablers
The Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body highlights the potential benefits of AI in various sectors, including healthcare, education, and the economy. AI technologies can increase productivity, improve decision-making, and enhance the quality of life. However, to fully realize these opportunities, access to data, infrastructure and a skilled workforce are necessary. Governments, international organizations, private sectors, the technical community, and civil society all have a role in ensuring these enablers are in place.
The report emphasizes the importance of harnessing these opportunities while ensuring that AI is developed and used responsibly and ethically. This requires a focus on human-centered AI that prioritizes human well-being and human rights and promotes inclusivity and diversity. The report proposes the establishment of global AI governance institutions that can support international collaboration on data, computing capacity, and talent to achieve the Sustainable Development Goals (SDGs).
The report notes that the potential benefits of AI are not evenly distributed, and there are concerns that AI could exacerbate existing inequalities, creating an AI divide within a larger digital and developmental divide. The responsible development and use of AI require ongoing investment in research and development, as well as efforts to build capacity and ensure that AI is accessible to underserved communities. To achieve this, the report identifies specific opportunities and enablers, focusing on human augmentation rather than human replacement or alienation as the outcome. These include enhancing human well-being, economic growth, environmental sustainability, security, inclusivity, transparency, accountability, trust, ethical considerations, and human-AI interaction. Effective governance and regulation are necessary to ensure that AI is used responsibly and ethically.
Risks and Challenges
The Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body highlights several risks and challenges associated with the responsible development and use of AI, including potential biases and discrimination, job displacement, and ethical and environmental challenges. AI can perpetuate and amplify existing biases, leading to unfair treatment of individuals or groups, and the misuse of AI, such as its use for malicious purposes or the creation of deepfakes, also presents risks. The report emphasizes the need for a human-rights-based approach, focusing on transparency, accountability, and explainability of AI systems. Robust regulatory frameworks, public awareness and education about AI, and mechanisms to address the potential harm caused by AI are necessary to ensure responsible development and use. Reliance on decisions from biased AI can lead to real-world harms, including job displacement, privacy and security breaches, concentration of power, dependence on technology, and a lack of transparency, governance, and ethical consideration.
While AI offers significant opportunities, it also presents risks and challenges that must be addressed. To keep human lives and our environment at the center of all AI-integrated processes, a human-rights-based approach that focuses on transparency, accountability, and explainability of AI systems, together with robust regulatory frameworks, is necessary to ensure the responsible development and use of AI. AI governance frameworks, norms, and standards based on broad consensus, ethical guidelines, regulations and standards for AI safety and security, and the promotion of transparency and explainability in AI systems are all essential to address these challenges.
The section could have explored the challenges of encouraging and sustaining meaningful multistakeholder participation in policymaking discussions, especially on youth involvement and representation. NetMission.Asia strongly supports the participation of youth at key stages in policymaking discussions or youth-led AI initiatives.
Guiding Principles for the formation of new global governance institutions for AI
The Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body proposes guiding principles for new global governance institutions for AI, prioritizing human well-being, dignity, and autonomy, and promoting inclusivity and diversity, transparency and explainability, collaboration, responsible resource management, security, and continuous learning and improvement.
The report could benefit from more concrete examples of how the proposed guiding principles and institutional functions could be implemented in practice. It could also benefit from more discussion of the role of civil society and other stakeholders in the governance of AI, and from addressing the potential impact of AI on global power dynamics and the geopolitical implications of AI. The potential for conflict with existing governance cultures, and the difficulty of identifying universal or foundational guiding principles, could also have been discussed. For instance, distinct stakeholders should be given the opportunity to present their unique identities: in the GDC process, inputs received from the technical community were grouped together with those of civil society, blurring their crucial role in the holistic policy process.
Institutional Functions that an international governance regime for AI should carry out
The Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body (AIAB) proposes a framework for the responsible development and use of AI, highlighting the need for a global governance regime for AI. The report identifies several institutional functions for such a regime, including setting global standards and norms, promoting interdisciplinary collaboration, monitoring and evaluating AI impacts, ensuring access to AI, encouraging continuous learning and improvement, and addressing AI-related challenges.
However, the report could benefit from more concrete recommendations for action, a clearer roadmap for implementation, and increased engagement with stakeholders. The report proposes guiding principles for the formation of new global governance institutions for AI, including being human-centered, inclusive and participatory, transparent, accountable, ethical, and resilient.
Capacity building is crucial, particularly for developing countries, and the report emphasizes the importance of supporting capacity-building efforts at the national and regional levels. Establishing internationally recognized norms and standards is essential for ensuring the responsible and ethical development of AI, and mechanisms for monitoring AI applications across different sectors and domains should be established. Consideration should also be given to the potential for state-led efforts to camouflage barriers to education or capacity building as threats to state security, or to exploit such opportunities to incite regime change or entrench regime stability.
Dispute resolution mechanisms are also essential, and the report suggests the establishment of specialized dispute resolution mechanisms or the utilization of existing international legal frameworks.
To address the report’s shortcomings, the following recommendations are suggested:
- Include more concrete recommendations for action and provide a clear roadmap for the implementation of the proposed framework.
- Increase engagement with stakeholders, including the technical community, civil society, academia, and the private sector.
- Empower existing multistakeholder bodies such as the Internet Governance Forum (IGF) to initiate discussions that share AI-related best practices while informing and inspiring those who have the power to make decisions on these issues.
- Address the fact that institutions such as the UN are accessible only to national or stronger groups, which stifles regional and last-mile participation by different stakeholders.
- Make compliance by design, not by choice, a prerequisite for such an international governance regime for AI.
Other comments on the International Governance of AI section
The International Governance of AI section in the Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body emphasizes the need for a global approach to AI governance, recognizing that AI is a global phenomenon that affects all countries and regions. The report highlights the importance of public-private partnerships, capacity building, education, accountability, and transparency in AI governance.
Capacity building is crucial for the development and use of AI, ensuring that all countries and regions have the necessary resources and expertise to develop and use AI systems in a responsible, ethical, and equitable manner. Education and awareness-raising can help individuals and organizations understand the potential risks and harms associated with AI systems, and can promote responsible and ethical AI development and use. Accountability and transparency reinforce these goals and help address potential risks and harms.
The report could benefit from more concrete examples of how the proposed guiding principles and institutional functions could be implemented in practice. While the report notes that the responsible development and use of AI requires ongoing investment in research and development, as well as efforts to build capacity and ensure that AI is accessible to underserved communities, it could provide more specific recommendations for how this can be achieved.
The report could also benefit from more discussion of the role of civil society and other stakeholders in the governance of AI. While the report notes that the responsible development and use of AI requires collaboration and coordination among stakeholders, it could offer more specific guidance on how such collaboration can be structured.
Finally, the report could address the potential impact of AI on global power dynamics and the geopolitical implications of AI. As AI becomes increasingly integrated into various aspects of society, it has the potential to influence global power dynamics and affect international relations. The report could include a discussion of these issues and provide recommendations for how they can be addressed.
In short, the International Governance of AI section in the Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body provides a valuable contribution to the ongoing debate on AI governance. By emphasizing the need for a global approach, public-private partnerships, capacity building, education and awareness-raising, and accountability and transparency, the report can help ensure that AI is developed and used in a responsible, ethical, and equitable manner. However, the report could benefit from more concrete examples of how these principles can be implemented in practice, more engagement with stakeholders, and more discussion on the potential impact of AI on global power dynamics and the geopolitical implications of AI.
Other feedback on the Interim Report
The Interim Report on Governing AI for Humanity by The UN Secretary-General’s AI Advisory Body provides a valuable contribution to the debate on AI governance. The report highlights the need for a global governance regime for AI, covering a wide range of issues and emphasizing a human-centric approach and international cooperation.
The report’s recommendations, including the establishment of global standards and norms, interdisciplinary collaboration, monitoring and evaluation of AI impacts, ensuring access to AI, encouraging continuous learning and improvement, and addressing AI-related challenges, are practical and actionable.
However, there are areas where the report could be improved. Specifically, it could benefit from more concrete recommendations for action and a clearer roadmap for the implementation of the proposed framework. The issue of conflicting attitudes towards adopting hard-law or soft-law approaches to AI principles across the world should be addressed. Additionally, the report could benefit from more engagement with stakeholders, including civil society, academia, and the private sector.
Despite these limitations, the report is well-written and accessible, and covers a wide range of issues related to the development and use of AI, including ethical, legal, social, economic, and environmental issues.
Further research is needed to ensure that the development and use of AI align with human values and societal benefits. This includes exploring the potential impact of AI on global power dynamics and geopolitical implications.
Overall, the report provides a comprehensive and thoughtful analysis of the challenges and opportunities associated with AI, and offers concrete recommendations for the development and use of AI in a responsible, ethical, and equitable manner. By emphasizing the need for a human-centric approach and international cooperation, the report can help ensure that AI is developed and used in a way that benefits humanity as a whole.
Edited and submitted by Jenna Manhau Fung, Rida Ashfaq, Stella Teoh, Sameer Gahlot (on March 27, 2024)