Once we recognise the human was never outside of artificial intelligence, we can use it to help create a digital society for the common good.
Artificial intelligence is both praised as a general solution to the most pressing social problems and loathed as a main cause of those very problems. Current debates focus mostly on the ‘intelligence’ part, which is misleading: the main moral and political implications stem from the fact that AI is ‘artificial’, a socio-technical artefact.
Since the 1950s, major socio-economic and earth-system trends have followed an exponential growth pattern, called the ‘great acceleration’. Digitalisation, as a major innovation trend of the last decades, follows the same pattern, most famously observable in the computer-chip manufacturing industry as the so-called ‘Moore’s law’ (neither a law nor without limits). The exponential growth of data and computational power places higher demands on people and resources in all steps of the process we now call ‘digitalisation’ and this applies especially to AI-based systems.
Modern hardware needs a variety of raw materials, including coltan mined and processed under socially unsustainable conditions. Whole regions of the world (mostly in the global south) are being transformed into the ugly flip-side of the brave new digital world. With respect to ecological sustainability, the energy needed to extract, process and ship the components, as well as to operate the modern computer systems behind big data and AI, is quite substantial. As a rule of thumb, a contemporary data centre consumes as much electricity as a small town, much of it for cooling (and that excludes life-cycle costs).
Nevertheless, we need information and communication technology (ICT) for the European energy revolution to happen. It can help us save energy and resources in other fields, such as mobility or electricity consumption in households; the overall energy balance depends on how, and to what end, the technology is deployed.
It is strange that technicians are so often reminded to put the human ‘in the loop’: the human was never outside it. It is humans who create technology, humans who use it and the human part of the socio-technical system called AI that provides the intelligence. Consequently, ‘AI does not make us more “intelligent”, only more computationally powerful.’
And while it is tempting for a technological civilisation to seek technical solutions to all of its problems, not every problem can be tackled by technology, however powerful the tools. Unless we change the underlying social conditions, digitalisation will aggravate the problems we want to solve and create additional ones.
In line with the 17 United Nations Sustainable Development Goals, the ‘ultimate goal of technology’ would be ‘to improve the human condition in a sustainable way for all of us and for our environment’. But even if this responsible understanding of innovation were to become a global standard, it would not protect us from unintended consequences, which create new problems or path-dependencies while solving old ones.
Norbert Wiener, who defined cybernetics in 1948 as the scientific study of control and communication in the animal and the machine, already knew that ‘we had better be quite sure that the purpose put into the machine is the purpose which we really desire’. But that does not answer what this purpose is and who ‘we’ are—two questions better asked right at the beginning, if innovation in AI is to be safe, trustworthy, reliable and sustainable.
Therefore, political action is needed beyond the digital sphere and that leads us to the non-computable question: in what type of future society do we want to live? We need public deliberation about that, independent of putative technical ‘necessities’. In the long run, ‘any development that does not boost trustworthiness will ultimately not succeed’.
Big-data-based AI calculations which are ‘good enough’ for ethically and epistemologically questionable business models consume large amounts of energy and are typically not trustworthy, for instance because of biased training datasets or machines that only pretend to learn, ‘which puts a question mark to the current broad and sometimes rather unreflected usage … in all application domains in industry and in the sciences’. Think, for instance, of determining the creditworthiness of individuals in this way.
Today’s networked society primarily uses AI to track users and personalise advertisements. What it lacks is an infrastructure designed to enhance ‘individual inclusion, personal development, environmental protection, fair competition and a functioning digital public sphere’, as well as ‘access to data and services such as cloud services, mobility platforms or a search index’: in other words, ‘the common good’. The global ‘free’ market and its powerful big-tech companies will not provide such an infrastructure, unless they are required to change unsustainable business models.
It will neither emerge from the ‘move fast and break things’, surveillance-capitalist model of Silicon Valley, nor will China’s mass-surveillance state capitalism be compatible with an open, emancipatory, digital-commons ICT infrastructure. Consequently, there is an urgent need for a ‘European way’ towards sustainable digitalisation, based on trust, responsibility and public ICT.
Trust as a building block also means ensuring good engineering practices, regulation by law and basic digital literacy. Technically, transparency and explicability play a central role. But if AI, understood as a socio-technical system, is really to become a base technology for further sustainable innovation, it must be accessible to everyone and made for people, in the common interest.
A ‘European public open space’ would provide a platform to discuss what this common interest looks like. This project for conceptualising a European public sphere is as yet only a vision but, embedded in an ecosystem of public ICT platforms, it could be a good start. Digital infrastructures which play a key role in everyday life should not be designed in favour of ‘surveillance capitalism’ and ‘networks of control’, which grow more powerful the more data they gather. As for web indices, the fundamental infrastructure behind search engines, projects such as the Open Web Index could secure this critical information infrastructure and restore Europe’s informational sovereignty, as well as ‘have a stimulating impact on digital innovations, in the field of search engines and for the European start-up and internet economy’.
These are just a few examples of possible components of a public ICT ecosystem. Based on truly sustainable, data-protection-friendly business models and green IT, it would serve citizens, companies and the state alike. Done well, such an infrastructure, with its inherent interoperability and data portability, could rapidly scale up globally. It could provide a different environment for trustworthy and responsible AI services in favour of the common good: in favour, that is, of vulnerable people on a vulnerable planet.
Reinhard Messerschmidt is an interdisciplinary social scientist holding a doctorate in philosophy. At the science-policy interface, he works on topics at the intersection of ethics, technology assessment, research and innovation, and sustainability. Stefan Ullrich is group lead of the research group @jwi_riot at the Weizenbaum Institute for Networked Society. As an informatician with a minor degree in philosophy, he critically examines the impact of ubiquitous information-technology systems on society.