The urgency for policies to protect people

Chapter Four of Amazing Artificial Intelligence

The potential of automated systems, new technologies and Artificial Intelligence for both good and harm has been well documented, and examples on both sides abound. In this publication alone, a small slice of those examples has been touched upon, both the good and the harm. Lawsuits are already being filed, and civil society groups are organizing to demand accountability from the Big Tech companies whose AI tools have harmed them, from invasion of privacy to unlawful surveillance to discrimination and bias. There are also already calls for data to be protected and for consumers to be informed about whether or not they are giving information away, especially sensitive data or even photographs. Furthermore, with the so-called “arms race” among Big Tech companies over who can reach the proverbial gold mine first with the best and fastest AI tools, it is almost inevitable that unintended harmful consequences will result, and the most vulnerable will be the first to be harmed. The sensible call is to slow this race down, but in all reality that will not be enough. Given the great potential for peril, what is needed are enforceable rules and regulations that will protect people.

Many recognize this need and have been calling for governments and regulators not only to catch up but to get ahead of AI developers before the potential for harm is realized. Unfortunately, these harms have already been realized in recent times, but that does not mean they should be allowed to continue.

Civil society groups such as Access Now, together with many other organizations following technology and AI, have called for human rights impact assessments for AI. These groups studied and analyzed what concrete actions can be taken and have written a full set of recommendations. In brief, they argue that “any form of AI or algorithmic impact assessment [should integrate] the human rights legal framework, so that it can unearth potential human rights harms, as well as propose effective mitigation strategies, including prohibition or withdrawal of systems, when harms do occur.”[41]

Access Now continues, “Our report therefore explores existing forms of impact assessments, from data protection impact assessments (DPIAs) to the impact assessment tool in Canada’s 2019 Directive on Automated Decision-Making, and highlights the shortcomings and best practices of these models. 

With more and more jurisdictions mandating impact assessment for AI systems, we have made some key recommendations, including the following:

Ensure input by civil society and those impacted, and disclose results: Alongside integrating a human rights framework into impact assessments for AI systems, we demand increased, meaningful, and resourced involvement of civil society and affected groups in organisations empowered to perform assessments and audits, as well as in standardization bodies, and meaningful public disclosure of the results of assessments and audits.

Create mechanisms for oversight if self-assessments fail to protect people: In the context of any self-assessment regimes, we demand the introduction of mechanisms that trigger independent audits and assessments, as well as clear avenues for people affected by AI systems, or groups representing them, to flag harms and thereby trigger investigations by enforcement bodies. 

Jointly develop a method for human rights-based AI risk assessment: Working with all relevant stakeholders, authorities should develop a model risk assessment methodology that explicitly addresses the human rights concerns raised by AI systems.”[42]

For a more detailed analysis and the full set of recommendations, see the full report.[43]

There are also various proposals and recommendations on the table right now being discussed globally, in the EU and in the US. These are not yet laws, but the hope is not only that they will become legally enforceable, and therefore have teeth to hold tech companies accountable, but also that policymakers will take this opportunity to get feedback from civil society, especially from the people who have already been harmed by biased, unsafe, discriminatory and invasive technologies, and from the public at large, so that these policies and regulations prioritize protecting people rather than tech corporations and their profits.

The proposals presented below are:

1) The UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted on November 23, 2021[44]

2) AI Liability Directive (Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence) Brussels, September 28, 2022[45]

3) Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People[46]

Key points of the proposals will be presented and discussed.

The content of the UNESCO Recommendation
 
The Recommendation aims to realize the advantages AI brings to society and reduce the risks it entails. It ensures that digital transformations promote human rights and contribute to the achievement of the Sustainable Development Goals, addressing issues around transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labour, healthcare and the economy.
 
Protecting data
The Recommendation calls for action beyond what tech firms and governments are doing to guarantee individuals more protection by ensuring transparency, agency and control over their personal data. It states that individuals should all be able to access or even erase records of their personal data. It also includes actions to improve data protection and an individual’s knowledge of, and right to control, their own data. It also increases the ability of regulatory bodies around the world to enforce this.
 
Banning social scoring and mass surveillance
The Recommendation explicitly bans the use of AI systems for social scoring and mass surveillance. These types of technologies are very invasive, they infringe on human rights and fundamental freedoms, and they are used in a broad way. The Recommendation stresses that when developing regulatory frameworks, Member States should consider that ultimate responsibility and accountability must always lie with humans and that AI technologies should not be given legal personality themselves.
 
Helping to monitor and evaluate
The Recommendation also sets the ground for tools that will assist in its implementation. Ethical Impact Assessment is intended to help countries and companies developing and deploying AI systems to assess the impact of those systems on individuals, on society and on the environment. Readiness Assessment Methodology helps Member States to assess how ready they are in terms of legal and technical infrastructure. This tool will assist in enhancing the institutional capacity of countries and recommend appropriate measures to be taken in order to ensure that ethics are implemented in practice. In addition, the Recommendation encourages Member States to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee auditing and continuous monitoring efforts.
 
Protecting the environment
The Recommendation emphasizes that AI actors should favor data-, energy- and resource-efficient AI methods that will help ensure that AI becomes a more prominent tool in the fight against climate change and in tackling environmental issues. The Recommendation asks governments to assess the direct and indirect environmental impact throughout the AI system life cycle. This includes its carbon footprint, energy consumption and the environmental impact of raw material extraction for supporting the manufacturing of AI technologies. It also aims at reducing the environmental impact of AI systems and data infrastructures. It incentivizes governments to invest in green tech, and if AI systems have a disproportionate negative impact on the environment, the Recommendation instructs that they should not be used.

United Nations Educational, Scientific and Cultural Organization (UNESCO), “Recommendation on the Ethics of Artificial Intelligence,” adopted November 23, 2021, UNESCO 2022.
https://unesdoc.unesco.org/ark:/48223/pf0000381137   

The UNESCO recommendations are clear and concise, and they emphasize that while AI systems are recognized for their potential for good, it should be made clear that this is exactly what they should be doing: working for the good of people and the planet. The almost 50-page document goes into detail on each recommendation and on how Member States are advised to implement the policies set forth in each area, together with civil society, business and technology. The biggest hole in this set of commendable recommendations, however, is that they are all voluntary. The Recommendation may be historic as the first-ever globally agreed set of recommendations adopted by a global body such as UNESCO, but it would be even more powerful if it were made into a legally binding global agreement, one that is legally enforceable and carries punitive measures for violators.

AI Liability Directive (Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence) Brussels, September 28, 2022

The proposal itself is meant to complement the EU AI Act, which will most likely go into law a few years from now, around the same time as this AI Liability Directive. In a nutshell, the current proposal is meant to empower EU citizens to sue companies in the EU for damages if they can prove that a company’s AI harmed them. For example, if a person can prove that the company’s AI discriminated against them for benefits or denied them a job position because of their ethnicity or some other protected characteristic, the company can be held liable for damages. The goal is transparency, with companies having to show that their AI does not discriminate against people. Combined with the AI Act, this will cover the areas where people are most vulnerable to harm, such as surveillance, policing, denial of benefits and health care. The EU’s logic is that the digital economy is expanding, and with it the AI tools that power the digital economy, and with that come higher risks of discriminatory algorithms. Discrimination and bias against minorities have already been raised by civil society campaigners demanding accountability from Big Tech.

This is definitely a step in the right direction, and hopefully the laws neither take too long to be approved nor get watered down along the way. One issue, however, is that while the directive empowers citizens, it also puts the onus on them to prove the harm. This is a little difficult to imagine in the real world: a person, or even a group of persons, going to court to prove the harm done to them by a tech company’s AI. Where is the proof supposed to come from if it is not as obvious as, for example, a paper trail or other materials the person already has access to? Still, the directive gives people a common avenue across the entire EU to seek redress for harm caused by AI and tech companies, which is definitely a good step forward.

Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People

The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy (OSTP) in October 2022. President Biden is known for his stance of pushing back on Big Tech and calling for stronger protections for the American people, and this Blueprint lays out the vision and proposed framework for moving forward with those protections.

The Blueprint has five principles:

1) Safe and Effective Systems

2) Algorithmic Discrimination Protections

3) Data Privacy

4) Notice and Explanation

5) Human Alternatives, Consideration, and Fallback

Safe and Effective Systems
You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.
 
Algorithmic Discrimination Protections
You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
 
Data Privacy
You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.


Notice and Explanation
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.
 
Human Alternatives, Consideration, and Fallback
You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Office of Science and Technology Policy, The White House, October 2022.
https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

It is important to note that this is not yet a law. But it is expected that this Blueprint, along with the proposed framework for implementing it, will move forward and be proposed as a bill to be enacted into law. This is key if it is to have any impact, because it has to become legally enforceable. Also important to note: the OSTP states, rightly so, that it does not take AI or super-sophisticated technology to do harm; sometimes all that is needed is simple technology. These principles therefore apply to all automated systems.

Even though it is not yet a law, it is an important step forward that the White House is pushing this, because the US is home to most of the Big Tech companies that these regulations should apply to and, more importantly, should hold accountable. If the Biden administration succeeds in getting this passed into some form of bill or law that makes these principles legally enforceable and enables punitive measures against tech companies, it will make a big impact on the rest of the industry, showing that companies can no longer act with impunity, unbothered by the harmful consequences they cause.

The latest proposal on policies and regulations is a call for a moratorium of at least six months. It comes from the Future of Life Institute, and while it gives good reasons for its proposal, there are already critiques of what is left unsaid in the letter. Here is the letter and, below it, an analysis of what is lacking.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
 
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
 
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
 
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
 
AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
 
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
 
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.
 
*The letter’s footnotes can be found at the link:
Future of Life Institute “Pause Giant AI Experiments: An Open Letter” March 22, 2023
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

The letter sounds sincere but somehow gives off an unrealistic reading of the world. AI developers are tripping over themselves to try to win this reckless race. Does the Future of Life Institute really think that its letter will stop the race? And what in the world will a six-month moratorium really achieve? Are they expecting to see realistic and substantial change in the programming and algorithms of chatbots in that short a time period? Microsoft’s unhinged chatbot already exists as a cautionary tale of what happens when a program does not spend enough time in the lab before it is released (thankfully only to a select few reviewers and not the general public).

Also, an excellent critique of the Future of Life letter flags: “The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.”[47]

The Future of Life letter also sounds really condescending with its proposal of enjoying an “AI summer”. This is clearly meant only for those who already benefit from AI and automated systems, and who are probably even owners, stockholders or beneficiaries of Big Tech or other corporations that have benefited greatly from the neoliberal traditional economy or the digital economy and their embedded systems of economic and social inequality.




[41] Leufer, Daniel, “Why we need human rights impact assessments for AI,” Access Now, November 10, 2022.

[42] Leufer, Daniel, “Why we need human rights impact assessments for AI,” Access Now, November 10, 2022.

[43] Nonnecke, Brandie and Dawson, Philip, “Human rights impact assessments for AI: analysis and recommendations,” Access Now, October 2022.

[44] United Nations Educational, Scientific and Cultural Organization (UNESCO), “Recommendation on the Ethics of Artificial Intelligence,” adopted November 23, 2021, UNESCO 2022.
https://unesdoc.unesco.org/ark:/48223/pf0000381137

[45] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), Brussels, 28.9.2022, COM(2022) 496 final, 2022/0303 (COD).

[46] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Office of Science and Technology Policy, The White House, October 2022.
https://www.whitehouse.gov/ostp/ai-bill-of-rights/

[47] Gebru, Timnit (DAIR); Bender, Emily M. (University of Washington); McMillan-Major, Angelina (University of Washington); and Mitchell, Margaret (Hugging Face), “Statement from the listed authors of Stochastic Parrots on the ‘AI pause’ letter,” March 31, 2023.
https://www.dair-institute.org/blog/letter-statement-March2023
