DeepMind's Gato

1. The organization

This Data-Ethical Consultation concerns DeepMind's Gato. Gato is a "generalist AI" that can perform 600 different tasks, such as playing Atari games, captioning images, and chatting with humans. Google DeepMind is a company with a team of scientists, engineers, ethicists and more, focused on creating artificial general intelligence (AGI). For more information, see https://www.deepmind.com/publications/a-generalist-agent.

2. The AI technologies employed

Since Gato is a generalist AI that can perform many different tasks, multiple AI technologies are involved. Generally, the system uses a deep neural network (DNN) to process its input and decide whether to output text, button presses, game controls or something else (Reed et al., 2022). A DNN is a machine learning model consisting of a neural network with multiple hidden layers, and such networks can grow very large: the Gato model uses 1.18 billion parameters. Gato is instantiated as a transformer sequence model, a specific kind of DNN that differentially weights the significance of the input data (Reed et al., 2022).
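To illustrate what "differentially weighting the significance of the input data" means, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. The toy dimensions and random inputs are my own choices for illustration, not values from the Gato paper.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Each output is a weighted average of the values, where the weights
    express how relevant every input position is to every other position:
    this is the 'differential weighting' a transformer performs."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax over each row so the weights are positive and sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

# Toy example: a sequence of 4 tokens, each an 8-dimensional vector,
# attending to itself (self-attention).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.shape)  # (4, 4): one attention weight per pair of tokens
```

Each row of `weights` shows how strongly one token attends to every other token, which is what lets the model decide which parts of the input matter for a given output.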
During the training phase of Gato, data is collected from a wide range of relevant tasks and modalities. For example, in order to accurately caption images, data in the form of images needs to be collected, while for Gato to learn how to chat with humans, textual conversation data is needed. These different data sources are transformed into tokens and are all processed by the same transformer neural network (Reed et al., 2022). This allows the final system to perform 600 different tasks based on the same DNN with the same weights, which is what makes Gato revolutionary. Because some of Gato's tasks require embodiment (e.g., stacking blocks), it also involves robotics. However, as this is not relevant for the rest of this Mini DEC, I will not go into detail here.
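The multimodal tokenization described above can be sketched as follows. Gato serializes text and discretized image and action data into one flat token sequence for a single transformer (Reed et al., 2022); the vocabulary size, bin count, and bucketing scheme below are simplified stand-ins for illustration, not the paper's exact parameters.

```python
# Illustrative sketch (not DeepMind's code): flattening heterogeneous data
# into one token sequence so a single transformer can process it.
import numpy as np

VOCAB_TEXT = 32000  # assumed text vocabulary size (hypothetical)
N_BINS = 1024       # bins for discretizing continuous values (hypothetical)

def tokenize_text(token_ids):
    # Text is assumed to already be integer ids in [0, VOCAB_TEXT).
    return list(token_ids)

def tokenize_continuous(values, low=-1.0, high=1.0):
    # Continuous observations/actions are clipped and bucketed into N_BINS
    # discrete tokens, offset so they do not collide with text token ids.
    values = np.clip(values, low, high)
    bins = ((values - low) / (high - low) * (N_BINS - 1)).astype(int)
    return [VOCAB_TEXT + int(b) for b in bins]

# One training example mixing modalities into a single flat sequence:
sequence = tokenize_text([17, 512, 3]) + tokenize_continuous([0.25, -0.9])
print(sequence)  # → [17, 512, 3, 32639, 32051]
```

Because every modality ends up as integers in one shared sequence, the same network with the same weights can be trained on all 600 tasks at once.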

3. Ethical concerns

Working with a team of ethicists, DeepMind pays close attention to ethical issues in AI. It is mainly focused on two topics: technical safety, and ethics & society. DeepMind's stated position on technical safety is that AI systems can only benefit the world if they are made reliable and safe; dedicated AI safety teams within DeepMind develop approaches for safer AI. When it comes to ethics & society, DeepMind employs ethicists and policy researchers to understand how technical advances will impact society and how risks can be avoided or reduced. DeepMind also fosters public discussion around AI ethics.
Specifically for Gato, DeepMind dedicated a short section of the published paper to ethical considerations, mainly concerning bias in the training data and safety risks arising from malfunctioning real-world embodiment. It must be noted that DeepMind currently does not deploy Gato to any users.
In what follows, I will discuss in depth three ethical concerns raised by DeepMind's Gato.

The threat of artificial superintelligence for humanity

One could argue that Gato is a first step towards artificial superintelligence: a type of artificial intelligence that would surpass even the smartest humans. With artificial intelligence becoming so fast, powerful, and efficient, it can leave people feeling inferior and questioning what it means to be human. After all, humans like to think that what sets them apart from other living beings is their intelligence. Therefore, machines acquiring superintelligence poses a significant threat to how humans perceive themselves. Different concerns will need to be addressed, and questions answered. For instance, should policy making and legal procedures be transferred to superintelligent machines?
Furthermore, practical questions and concerns about AI systems replacing humans in the workplace are rightfully raised. The fear here is that humans will become unnecessary and that many products and places will lose their 'human touch'. In addition, because humans will potentially lose their jobs, there is a fear of becoming dependent on others or on machines.
Finally, superintelligent AI could become malevolent towards society, for example by cutting humans off from resources such as money, land, and water, or by constructing a total surveillance state. The common denominator in these cases goes beyond the fear of a total loss of human power that leaves humanity in the hands of machines: humans would actually be worse off in a world with such machines, where their freedom is restricted, living conditions are harsh, and they are unsafe. These are very serious issues that need to be addressed before superintelligent AI is actually designed and deployed.

The environmental impact of training large models

As mentioned above, Gato's network consists of roughly 1.2 billion parameters. Training large models like this unfortunately comes at a cost: they require power-hungry computers that demand a lot of energy and produce a significant amount of carbon emissions. That is, the total cost of producing a result in machine learning increases linearly with the number of (hyper)parameters in a model (Dhar, 2020). This is an ethical consideration of AI that is often overlooked. While it is difficult to measure the exact carbon footprint of companies or specific models, because tech companies are reticent about sharing such data, it can be assumed that training large models like Gato increases carbon emissions considerably (Dhar, 2020). Furthermore, not only does AI use a lot of energy, it is also deeply rooted in the exploitation of the earth: the many devices needed within companies like DeepMind alone require the extraction of costly and scarce metals, which has a huge impact on the environment (Dhar, 2020).
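The linear-scaling claim can be made concrete with a back-of-the-envelope calculation. The energy constant below is purely hypothetical, chosen only to illustrate the proportionality; it is not a measured figure for Gato or any real system.

```python
# Back-of-the-envelope sketch of the scaling claim (Dhar, 2020): if cost
# grows linearly with model size, multiplying the parameter count
# multiplies the energy use by the same factor.
JOULES_PER_PARAM = 1e-3  # hypothetical proportionality constant

def training_energy_joules(n_params, k=JOULES_PER_PARAM):
    # Linear cost model: energy is proportional to parameter count.
    return k * n_params

small_model = training_energy_joules(120e6)   # a 120M-parameter model
gato_scale = training_energy_joules(1.18e9)   # a Gato-scale model
print(gato_scale / small_model)  # ≈ 9.8: ~10x the parameters, ~10x the energy
```

Under this simple model, scaling from a modest network to Gato's 1.18 billion parameters multiplies the energy bill roughly tenfold, which is why generalist models raise the environmental stakes.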
The urgency of climate change presses us to find a solution to these problems. Especially with the growing popularity of AI and the recent advancements of generalist AI (requiring large models), it is essential that the environmental issues of AI are recognized and that solutions are sought.

The power of Google DeepMind

The last ethical consideration is not focused on Gato per se, but on Google DeepMind's position within the global AI "market". Google is one of the leading AI companies worldwide (others being Amazon and Facebook), and is becoming virtually unstoppable in the marketplace. This is dangerous, as AI is a very powerful tool that can cause great harm when misused. If one company holds all the power, it will have more or less free rein in deciding which systems to build and how to use them. Supervision and oversight by independent third parties will be difficult to arrange. Furthermore, it will be difficult to ensure that these large companies distribute wealth equally and to prevent inequalities between countries from increasing.
While the above applies to all kinds of AI technologies and applications, being the first company to develop an AGI has more far-reaching implications. Although there is disagreement about whether Gato is an actual AGI, it must be acknowledged that it is a successful first step in that direction. Achieving AGI would give any company a huge amount of power, as it is equivalent to creating an artificial being with the same level of intelligence as humans. Such AGIs would be suited to executing any task and would likely be in high demand, strengthening the company's position in the marketplace significantly. Moreover, because AGIs can perform more tasks, they also present more ways to cause harm. The companies designing AGIs decide on the implementation of safety measures and ethical and moral guidelines, placing great power in their hands.

4. Recommendations

With regard to the threat of artificial superintelligence for humanity, I believe it is important for companies to explain to people what it is they are doing, thereby fostering trust in AI. Furthermore, DeepMind could employ teams of engineers, ethicists and policy makers focused specifically on the effects of creating superintelligent AI, evaluating both short-term and long-term effects and identifying the different threats of superintelligence and their possible solutions. These teams could specify concrete guidelines to ensure that superintelligent AI systems are safe.
To account for the environmental impact of training large models, DeepMind should put effort into using sustainable materials and renewable energy. Furthermore, research could explore less costly ways of computing, for example by using the simplest models that suffice. For transparency about environmental impact, it would be best if companies shared their energy usage data; this would potentially also drive them to improve their practices. Lastly, policy could be deployed to require large tech companies to contribute to sustainability, for example by financially supporting environmental research or green initiatives.
In order to address the ethical considerations regarding the power of Google DeepMind, I suggest that DeepMind and other big companies continue publishing their methods and data, making them available to the public. Furthermore, companies could invest in setting up educational programs for people who have fewer resources to educate themselves, for example in the Global South. Apart from that, independent watchdogs could be established to guard companies' ethical guidelines and balance the power of AI.

Literature

Dhar, P. (2020). The carbon impact of artificial intelligence. Nature Machine Intelligence, 2, 423–425. https://doi.org/10.1038/s42256-020-0219-9

Reed, S. et al. (2022). A generalist agent. arXiv preprint arXiv:2205.06175.

---
author: Student2022A
year: ay21
type: MiniDEC
created: 2022-07-08
modified: 2023-08-01
---