IBM Watson

1. The organization

⇒ What is company or service or agency for which you'll be providing an abbreviated data-ethical consultation? (provide a brief description, website, etc.)

The company that will be evaluated in this DEC is IBM Watson, a sub-division of IBM, which sells multiple software- and hardware-related products. IBM Watson provides a range of AI-based applications and solutions. This DEC will focus on the healthcare-related solutions that IBM Watson provides; these fall under the Watson Health program. The company aims to sell its products to healthcare practitioners with the goal of improving and facilitating medical support and research. IBM Watson Health mostly sells products that aid in health-related decision-making (diagnosis, treatment recommendations, etc.).

More information regarding IBM Watson can be found on their website: https://www.ibm.com/watson

2. The AI technologies employed

⇒ What AI technology or technologies do they use and for what purposes?

As stated above, IBM Watson Health provides a multitude of AI-driven solutions. Every solution contains an AI component, but the nature of that component differs per solution.

Firstly, one of the most well-known solutions that IBM Watson provides is diagnostic imaging. Diagnostic imaging uses computer vision to automatically classify (diagnose) medical images (e.g. CT scans and MRI scans). Computer vision applications often make use of Convolutional Neural Network (CNN) architectures; these models are inherently opaque and contain black-box components.

Secondly, IBM Watson provides clinical decision support, which assists clinicians by bringing forward relevant patient data (from the health record) and relevant medical literature to support decisions regarding the treatment of patients. It is important to note that clinical decision support should be seen as a supporting tool for the clinician, not as a tool that gives recommendations on its own. Instead, it aims to provide relevant information to the clinician, thereby enabling the clinician to come up with a well-informed proposition. The AI technology behind this is mostly natural language processing, which can identify relevant documents and pieces of text based on inputted topics. For instance, the clinical decision support tool could be used to find the most recent advancements in the field of oncology if that information is relevant for a specific patient.

Lastly, IBM Watson Health provides a multitude of software services that leverage AI to optimize workflow and other processes within hospitals. This service category is too broad to identify a single underlying AI technology, as the technology used depends on the specific use case.
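The document-retrieval idea behind such a decision-support tool can be illustrated with a deliberately simplified sketch. This is not IBM's actual method; the toy corpus and the TF-IDF-like weighting below are assumptions chosen purely to show how a system can rank literature by relevance to an inputted topic, with rarer terms counting more.

```python
import math

# Toy corpus standing in for medical literature; purely illustrative.
docs = {
    "doc1": "new oncology treatment shows improved survival in trial",
    "doc2": "guidelines for imaging workflow in radiology departments",
    "doc3": "oncology trial reports side effects of new treatment",
}

def rank_documents(query, documents):
    """Rank documents by summed TF-IDF-style weight of shared query terms."""
    n = len(documents)
    tokenized = {name: text.split() for name, text in documents.items()}
    # Inverse document frequency: terms appearing in few documents weigh more.
    terms = {t for tokens in tokenized.values() for t in tokens}
    idf = {}
    for t in terms:
        df = sum(1 for tokens in tokenized.values() if t in tokens)
        idf[t] = math.log((1 + n) / (1 + df)) + 1
    query_terms = set(query.split())
    scores = {
        name: sum(idf[t] * tokens.count(t) for t in tokens if t in query_terms)
        for name, tokens in tokenized.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(rank_documents("oncology treatment", docs))  # → ['doc1', 'doc3', 'doc2']
```

A real system would of course use far more sophisticated language models, but the principle of surfacing the most query-relevant documents to the clinician is the same.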

3. Ethical concerns

⇒ Briefly discuss three ethical concerns that are raised by this organization's use of these AI technologies

Having established the core services that IBM Watson (Health) provides, this section will evaluate the company's current practices. First, a positive observation: IBM Watson clearly states that its services should be seen as supportive and not as autonomous decision-making tools. This is important, as it encourages a decision-making process in which a human remains 'in the loop'. Retaining such a structure is desirable because it prevents wrong decisions (in this case, recommendations) from reaching patients if the algorithm behind the tool malfunctions. Clinicians bring domain expertise and can contextualize data in cases where the algorithm falls short. IBM Watson seems to be aware of this and therefore portrays its services as complementary to clinicians, not as replacements for clinical staff.

Possible model bias

However, some ethical concerns also come forward when evaluating IBM Watson's current practices. Firstly, diagnostic imaging is used to diagnose patients based on image data, and the underlying models are trained before deployment. Under the GDPR, the legislation regarding the use of patient data to train models in Europe is very strict. These models are therefore often trained on data originating from countries with more flexible patient-data laws, most notably the USA and India. Training models on data from such a limited set of countries creates the risk that the diagnostic imaging models perform well for patients similar to those in the training data, but poorly for patients from ethnic minorities or ethnicities that are not represented in it. In other words, these models can be biased, which would lead to wrong diagnoses and, consequently, wrong treatments for the patients in question. It is therefore important that IBM Watson evaluates possible biases in its models caused by a lack of diversity in the training data.

The autonomy of the patient at risk

A second ethical concern that arises when evaluating IBM Watson's current practices is the autonomy of the patient. The clinical decision support tool that IBM Watson provides aims to give clinicians the information required to make the best treatment decision for a patient. However, the notion of the 'best' treatment is a subjective one: different patients may prefer some side effects over others. Viewed through the lens of the ethical ideal of shared decision-making between patient and clinician, it becomes clear that tools relying on a standardized set of values threaten the patient's autonomy. If a system does not incorporate an individual's personal values when presenting the clinician with a set of possibly effective treatments, the clinician will discuss only the suggested options with the patient, and the patient will never be informed about treatments that would have matched their preferences better but were not selected by the AI system.
It is therefore important that the clinical decision support tool does not base its suggestions on a standardized set of values and preferences regarding treatments. However, it is not entirely clear whether IBM Watson's treatment recommendation tools are currently able to incorporate personal preferences into their recommendations. Thus, in order to safeguard the patient's autonomy, IBM Watson should focus on the option to incorporate personal values into the decision-making process.
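The idea of incorporating patient values can be made concrete with a minimal sketch. The treatment names, attribute scores, and weights below are invented for illustration and do not describe any real IBM Watson product: each candidate treatment is scored against a patient-specific weighting of outcomes, so the same clinical evidence can yield different rankings for different patients.

```python
# Hypothetical treatment profiles with attribute scores in [0, 1]; invented for illustration.
treatments = {
    "treatment_A": {"efficacy": 0.9, "side_effect_burden": 0.7, "recovery_speed": 0.4},
    "treatment_B": {"efficacy": 0.7, "side_effect_burden": 0.2, "recovery_speed": 0.8},
}

def rank_for_patient(treatments, weights):
    """Rank treatments by a patient-specific weighted utility.

    Side-effect burden is undesirable, so it enters with a negative sign.
    """
    def utility(profile):
        return (weights["efficacy"] * profile["efficacy"]
                - weights["side_effect_burden"] * profile["side_effect_burden"]
                + weights["recovery_speed"] * profile["recovery_speed"])
    return sorted(treatments, key=lambda name: utility(treatments[name]), reverse=True)

# A patient who prioritizes efficacy above all else:
print(rank_for_patient(treatments, {"efficacy": 1.0, "side_effect_burden": 0.1, "recovery_speed": 0.1}))
# A patient who strongly wants to avoid side effects:
print(rank_for_patient(treatments, {"efficacy": 0.3, "side_effect_burden": 1.0, "recovery_speed": 0.3}))
```

Under the first weighting the high-efficacy treatment ranks first; under the second, the low-side-effect treatment does. This is exactly the kind of patient-dependent outcome that a standardized, one-size-fits-all value set cannot produce.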

Who holds the responsibility for the consequences of advice given by IBM Watson's services?

The final ethical concern regarding IBM Watson's practices revolves around the assignment of responsibility. As has been made clear above, the services that IBM Watson provides are mostly AI-based recommendation and diagnostic tools. Although these tools have been thoroughly tested and evaluated before deployment, it is inevitable that at some point they will make an incorrect recommendation or diagnosis. In some cases this will lead to a wrong treatment being selected for a patient, which can even be fatal. This raises questions regarding the responsibility for the recommendations made by IBM Watson's products. Although IBM Watson clearly states that its products should be used as an additional source of information in the process of making diagnoses and treatment recommendations, situations will still occur in which the tools provide incorrect information that the clinician and patient consider truthful. AI tools cannot be directly held responsible for their actions because they do not possess the capacity to experience negative consequences or repercussions; another agent must therefore be held responsible for the consequences of the recommendations made by the AI system. To avoid this unclarity, there must be a mutual agreement between the seller of the product (IBM Watson) and the users of the product (clinicians) about who holds the legal responsibility for the consequences of wrongful advice given by the product.

4. Recommendations

Having established three ethical concerns regarding IBM Watson's use of artificial intelligence, this section will provide recommendations to ensure that IBM Watson's practices can be regarded as ethical and well thought-through. Starting with the first ethical concern, the presence of bias in the AI-driven diagnostic products that IBM Watson sells: in order to prevent the use of products that contain biased models, it is important that IBM Watson tests its products on data originating from patients with different ethnic backgrounds. This will allow IBM Watson to determine the performance of its products across ethnicities and will reveal whether there is a bias towards any of them. If that is the case, the AI models at the core of the product need to be re-trained on more diverse data.
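A basic version of such a subgroup evaluation can be sketched as follows. The evaluation records and group labels below are fabricated for illustration; the point is simply that accuracy must be computed separately per demographic group, and a large gap between groups should flag the model for re-training.

```python
# Fabricated evaluation records: (group label, true diagnosis, model prediction).
records = [
    ("group_A", 1, 1), ("group_A", 0, 0), ("group_A", 1, 1), ("group_A", 0, 0),
    ("group_B", 1, 0), ("group_B", 0, 0), ("group_B", 1, 0), ("group_B", 0, 1),
]

def accuracy_by_group(records):
    """Compute diagnostic accuracy separately for each demographic group."""
    totals, correct = {}, {}
    for group, truth, prediction in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == prediction)
    return {group: correct[group] / totals[group] for group in totals}

def flag_bias(per_group, max_gap=0.1):
    """Flag the model if accuracy differs across groups by more than max_gap."""
    values = per_group.values()
    return max(values) - min(values) > max_gap

per_group = accuracy_by_group(records)
print(per_group)             # group_A scores 1.0, group_B only 0.25
print(flag_bias(per_group))  # True: the gap far exceeds the tolerance
```

An aggregate accuracy over all records would hide this disparity entirely; only the disaggregated view reveals that group_B is poorly served, which is precisely the failure mode described above.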

The second ethical concern that has been addressed is the threat that IBM Watson's products might pose to patients' autonomy. As has been made clear, one of the risks of using IBM Watson's products is that they might suggest certain treatments based on a standardized, pre-defined set of values and preferences. If these values do not align with the patient's personal values and preferences, the AI system will suggest treatments that do not reflect what the patient finds important. The patient will then make an ill-informed choice, and there will be no room for shared decision-making. As stated above, there seems to be consensus today on the ethical notion that patients should be able to influence the treatment they receive based on their personal values. It is therefore important that IBM Watson focuses on implementing a setting that enables the system to incorporate values and preferences into its AI-based recommendations, such that the system can provide customized and appropriate treatment recommendations for every individual, thereby respecting the individual's autonomy.

Lastly, in order to ensure that no responsibility gap occurs when IBM Watson's products are being used, I recommend that a formal agreement, in the form of a contract, is made before the products are deployed. This will provide clarity about the responsibility for the consequences of the recommendations made by IBM Watson's products. It is also important to make this clear to the patient, so that they are aware of who is responsible for the recommendations made to them. Although IBM Watson may already have a standardized contract that needs to be signed by the users of its products, no information about this could be found on their website. I therefore deem it useful to mention the possibility of using such formalized agreements to clarify the division of responsibility regarding recommendations made by IBM Watson's AI systems.