Afterthoughts

Chapter Five of Amazing Artificial Intelligence


There is no question that the advancement of technology, including fields still not fully understood, such as Artificial Intelligence and automated systems in all their elements, has the potential both for good and for incredible harm. Yes, advancements in this field have produced great strides in information technology, communications, and other areas. They have also made the processing of vast amounts of data, and data analytics, not only possible but faster.

These capabilities, though, have also enabled and continue to grow the digital economy, which, in its neoliberal form, has simply moved the inequitable free market from the traditional economy to a digital one: a platform economy that enriches Big Tech companies while leaving the disenfranchised just as poor as they were in the traditional economy. Save for a sparing few exceptions, success stories of those who make it out of the rat race and make it big, the neoliberal economic story stays the same.

This is the first issue with Artificial Intelligence. Technology advancing and making leaps and bounds for science is all well when it is done in the service of making life better for people; when the advancement harms people in order to profit, it is not acceptable. It is already well documented that the digital economy's platforms run on a new kind of natural resource: data. And as this publication explains, Artificial Intelligence tools are only as good as the data they are trained on.

Although Artificial Intelligence and automated systems span a vast expanse of fields, from finance to automobiles, this publication delved into two examples: text-to-image generation, and large language models and the race between chatbots. There are a couple of reasons for this. First, these two cases are at the forefront of the development of AI and its subsets, such as machine learning, deep learning and large language model training, and they have caught the world’s attention. This is no longer the AI of asking Siri what the weather is like. These are advancements of great strides, with generated images that, to an untrained eye, may look like the real thing, and chatbot exchanges that read like actual conversations.

However, take a deeper look and ask again: are these really great advancements?

The brazen data laundering done by Stability AI so that its text-to-image tool, Stable Diffusion, could run is absolutely inexcusable. The company knowingly used an academic research group to scrape copyrighted images – nearly 6 billion of them – to avoid paying the rightful owners. That is data laundering, or perhaps theft is a better word. It is good that the company is being sued by artists and by Getty Images. The chapter did not even delve into the unintended harmful consequences of releasing Stable Diffusion to the general public without filters, which allowed the malevolent to take the AI tool further and produce deepfakes: images that look real because the photo of the real-life person in them appears genuine rather than photoshopped or faked. These deepfakes are not always benign or made for a laugh; some are defamatory, others pornographic, and the victims are almost powerless as the images spread around the internet. Stability AI and others of course deny any responsibility, but the technology and images are out there, and on the internet there is no such thing as deleting.

On the advancement of large language models and chatbots, credit is due to language applications, which have always been seen as crucial because of the many practical ways they can help and assist the disabled and others who may otherwise have lost the ability to use language to communicate. The family of language models has a lot to contribute to society. There is also no doubt that the abilities of large language models have shown the marvels of machine learning, a subset of artificial intelligence, and how much more potential it holds given more time, testing and feedback. However, the so-called “arms race” among tech companies to make the best and “smartest” chatbots is not helpful. Instead, it may be harmful. One test reviewer stated that his conversation with the Microsoft chatbot unsettled him not because of potential factual errors but because the AI might learn to convince humans to inflict harm or, worse, learn to do it itself. That should be a clear red flag that the “arms race” needs simply to stop, re-evaluate and proceed slowly. The work is important, and precisely for that reason it should be done with care, so as not to create AI tools that spread disinformation, racism, misogyny and bias and become, as Chomsky and others have put it, something like the banality of evil.

These two cases presented in the publication, text-to-image generation and chatbots, are only two examples from the vast expanse of technologies at various stages of development. How are they relevant to all the other areas and fields using and developing AI? The harmful consequences to real people, and the almost too-terrifying-to-imagine danger of unhinged chatbots learning to manipulate or, even worse, to do harm by themselves, are a very loud and clear cautionary tale to all. If developers of AI, automated systems and big technology do not treat these as lessons to be learned, and such harms as things to be prevented from ever coming close to happening, then governments and other policymakers need to move much faster to put regulations in place. It is more effective to prevent than to chase and troubleshoot.

There is a bigger picture in all of this. People should not buy into the hype of AI. The excitement around ChatGPT conversing seemingly eloquently has deeply impressed many. However, this is just a marvelous display of machine learning. The system has been trained on volumes and volumes of data, including conversation, and can therefore converse. But when it hits gaps it cannot fill in answering, it auto-fills, sometimes with untruths or with things it has cobbled together that are simply not correct.

Chatbots are machine learning language programs, a subset of Artificial Intelligence. But just because “intelligence” is in the name Artificial Intelligence, and chatbots belong under it, does not mean they are intelligent. True human intelligence is much more complex: capable of moral thinking, holding potential beyond imagination, and so much more.

It cannot be emphasized enough that policies and regulations need to be put in place with the utmost urgency.

The various policy and regulatory proposals on the table all have their pros and cons. There are the human rights proposals from civil society, which demand that AI tools not harm human rights but instead take them into account in their programming. There is the UNESCO proposal, a globally agreed set of recommendations on the ethics of AI, which is historic but lacking, as it is voluntary and not legally enforceable. Then there is the EU AI Liability Directive which, while it does put the onus on the consumer, is a good step forward in holding Big Tech accountable across the EU, as it empowers citizens to sue tech companies if they can prove they were harmed by AI. If it passes into law, consumers will have a central place to seek redress from harmful AI tools, a positive step. And there is the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which centers on five core principles, all aimed at strengthening protections for citizens and pushing back on Big Tech. If it makes it all the way into legally enforceable law, it would have a big impact, as most Big Tech companies are based in the US.

Finally, even though only half the world has access to the digital world, it is crucial that the rights and protections being discussed in various spaces and governments cover everyone. All people deserve to have their human rights protected, including protection from the potential harms of these emerging technologies, automated systems, and AI and its tools. These technologies must be developed for the benefit and greater interest of the people, not the other way around.
