How Does the AI Safety Agreement Embrace Designated Groups Six Months Later?

Statements and reports – 05.13.2024

The AI safety agreement (the ‘Bletchley Declaration’) was signed by 28 countries, which agreed on a risk-based approach to frontier AI and on the areas and types of risks to address, including social protection, health, education, and labour. It came just a few days after the US issued its first AI executive order, requiring safety assessments, civil rights guidance, and research on labour market impacts, accompanied by the launch of the US AI Safety Institute. In parallel, the UK introduced its own AI Safety Institute and the Online Safety Act, echoing the approach of the European Union and its Digital Services Act on online safety and the protection of minors.

Six months later, the White House reported completing its 180-day action plan; in Europe, the Council of Europe introduced its AI treaty on human rights, while the EU established an AI Office and regulatory sandboxes and formally invited national governments to appoint their AI regulators. Saudi Arabia established an international center for AI research and ethics, and the next AI Safety Summit will be held in Seoul, South Korea. How do governments cooperate on AI safety six months later, and how can they better address designated and vulnerable groups?

180-day action plan

Over 180 days, the Executive Order directed agencies to address a broad range of AI safety and security risks:

    • Managing Risks to Safety and Security, including the misuse of AI for engineering dangerous biological materials, generative AI and dual-use foundation models, expanding international standards development in AI, launching the AI Safety and Security Board, developing safety and security guidelines for critical infrastructure owners and operators, and piloting new AI tools to identify vulnerabilities in vital government software systems.
    • Standing up for Workers, Consumers, and Civil Rights, including principles and practices for employers and developers, guidance to assist federal contractors and employers, resources for job seekers, workers, and tech vendors and creators, guidance on AI’s nondiscriminatory use, principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs, nondiscrimination requirements in health programs, and the safety and effectiveness of AI deployed in the health care sector.
    • Harnessing AI for Good, including advancing AI’s use for scientific research, deepening collaboration with the private sector, developing energy-efficient AI algorithms and hardware, and working on pilots, partnerships, and new AI tools to advance clean energy and tackle major societal challenges.
    • Bringing AI Talent into Government, including the AI and Tech Talent Task Force, launching the DHS AI Corps, which will hire 50 AI professionals to build safe, responsible, and trustworthy AI to improve service delivery and homeland security, and guidance on skills-based hiring to increase access to federal AI roles.
 
AI Safety Dialogue

This action plan comes in parallel with efforts in Europe and other regions. In particular, the Council of Europe introduced an AI treaty designed to protect human rights, democracy and the rule of law. The European Commission established an AI Office, which will ensure the development and coordination of AI policy at the European level and supervise the implementation and enforcement of the forthcoming AI Act. The AI Act also introduces regulatory sandboxes, in which innovative AI systems can be developed, trained, tested and validated under the direct supervision, guidance and support of the national competent authority before they are placed on the market or put into service. On April 24, the Commission formally invited national governments to appoint their AI regulators. Other initiatives include the use of AI in public administration and governance, science, R&D and technology transfer, consortiums on language technologies, and investments in emerging technologies, talent and human capacity.

This vision aligns with national strategies and their focus on capacity. In particular, France’s national strategy aims to train and financially support at least 2,000 students, position at least one center of excellence among the top international ranks, and recruit 15 world-renowned foreign scientists. Germany is boosting AI research funding by nearly €1 billion; plans include 150 new AI research labs, expanded data centers, and accessible public datasets. Specifically, it will promote 50 ongoing measures focused on research, skills, and infrastructure development and complement them with 20 additional AI initiatives.

South Korea’s AI plan includes a 2.2 trillion won budget focused on large-scale projects in national defence, medicine and public safety, establishing AI graduate schools with the aim of cultivating 5,000 AI specialists, and strengthening public-private partnerships in artificial intelligence research and development. Singapore’s NAIS 2.0 vision aims to position Singapore as a leader in the field of AI and to use AI for the public good, with investments in AI compute, talent, and industry development.

Saudi Arabia, in cooperation with UNESCO, launched the International Center for AI Research and Ethics (ICAIRE) in Riyadh. The center aims to be a beacon of ethical AI practices, underscoring the importance of integrating values and ethical considerations into the rapidly evolving field of artificial intelligence. It will operate independently, with its own legal, financial, and administrative framework, striving to enhance the capabilities and legal aspects of AI and other frontier technologies in Saudi Arabia.

 

AI Safety and Designated Groups

National approaches to safety are developing in parallel, with a shared focus on high-risk and unacceptable-risk models and their oversight, regulatory sandboxes, effects on the environment and sustainability, and the talent, capacity, research and infrastructure behind them. However, governments are at different stages of turning this vision into reality: some have already introduced AI Safety Institutes, while others are still aiming to do so. Nations also differ in how they balance innovation and safety, with some prioritizing investment and innovation and others social protection (e.g. Germany’s “AI made in Germany”, the UK’s ambition to be an “international technology hub for emerging AI startups”).

Several areas still need to be addressed:

    • Policy loopholes – the Council of Europe’s AI treaty was criticized for potential exemptions for the public sector.
    • Security exemptions – the AI Act’s articles that override safety and protection measures in “specific cases and scenarios” (the unacceptable- and high-risk categories).
    • Vendor influence and monopolies – a limited number of vendors involved in critical infrastructure. In particular, notable investments in data centers in France ($4.3 billion), the UK (£2.5 billion), Germany (€3.2 billion) and Japan ($2.9 billion) were made by Microsoft.
    • Limited cases – the complexity of ontologies in health, education and labor requires a focus that goes beyond “high-level” national risks to address effects on specific conditions, cases, groups, ages, genders, and social and economic parameters. It also includes the accuracy and feasibility of particular models and systems, and the involvement of multiple solutions, patients, caregivers and data inputs.
    • Limited stakeholder involvement – last year’s AI Safety Summit was criticized for its “closed-door” format, the lack of involvement of interdisciplinary researchers and technologists, adopters and stakeholders, the missing convergence of social and technical knowledge and language, and the lack of participation beyond the G20 and its allies.
 
Opinion

Generative AI and foundation models can support vulnerable groups by fueling existing assistive technology ecosystems and agents, as well as learning, accommodation and accessibility solutions. However, these algorithms also pose challenges associated with transparency, understanding system outcomes, cognitive silos, potential misinformation and manipulation, and privacy and ownership breaches. These rights and categories are underpinned by AI-specific laws and by digital, accessibility and social protection frameworks.

There are different ways in which generative AI-associated systems may pose risks to these groups. In particular:

    • They may fuel bias in existing systems, such as automated screening and interviews, public services involving different types of physical and digital recognition, and contextual and sentiment bias. AI algorithms are known to discriminate against individuals with facial differences or asymmetry, different gestures and gesticulation, speech impairments, different communication styles, or assistive devices.
    • They may lead to manipulative scenarios, “addictive design”, cognitive silos and echo chambers. For instance, algorithms were used to spread misinformation among patients during the pandemic.
    • Language-based systems may attach negative connotations to group-related keywords and phrases (such as “disability”) or produce wrong outcomes because a public data set contains statistical distortions or erroneous entries.
    • Privacy – in some countries, public agencies were accused of using data from social media without consent to confirm patients’ health and disability status for pension programs.

South Korea to host second AI Safety Summit on May 21-22

Recent contributions:
WHO: Ethics and governance of artificial intelligence for health

OECD: AI to support people with disability in the labour market

SAPEA: AI & Scientific Mechanism
