SlimTaal

Mini ethical consultation

1. SlimTaal

The organization covered in this data-ethical consultation is SlimTaal, a company that describes itself as an AI writing tool that helps users write professional texts.

2. The AI technologies employed

SlimTaal uses AI to generate professional texts. It does this by obtaining input from the user, rewriting it into a prompt, and providing that prompt to ChatGPT-4. There are two versions of the product: one for individuals and one for organizations. The individual version can be set up within a few minutes and only requires creating a free account, which gives the user access to the SlimTaal tool during a seven-day trial period.
For the organizational version, SlimTaal emphasizes a personalized approach: it collaborates with the client organization to identify their preferences and communication style, and then adapts the text tool to fit these specific wishes.
SlimTaal does not provide much information about its methods and AI technologies. It briefly describes its methods without going into depth about AI techniques. The method it does describe roughly works as follows: the user is guided through a series of questions about their writing wishes. This starts with selecting the type of text the user wants to produce; examples of text types are a custom sales email, a blog, a news article, and a quotation. Once the type is selected, the user is guided through questions specifically tuned to that type. If the user wants to write a custom sales email, for example, they are asked to enter their product or service, the benefits for the potential client, and whether or not the client has been contacted before. The SlimTaal tool then uses this information to generate a prompt. This prompt is sent to ChatGPT-4, and the result is adapted again by SlimTaal. However, no details are provided on how this adaptation of the prompt or the results is done.
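The guided question-to-prompt flow described above can be sketched as follows. This is a hypothetical illustration only: the template text, function names, and the question set for a sales email are all assumptions, since SlimTaal does not publish its actual prompt construction.

```python
# Hypothetical sketch of the question-to-prompt flow; none of these
# templates or field names come from SlimTaal itself.

TEMPLATES = {
    "sales_email": (
        "Write a professional sales email about {product}. "
        "Emphasise these benefits: {benefits}. "
        "{contact_note}"
    ),
}

def build_prompt(text_type: str, answers: dict) -> str:
    """Turn the user's guided answers into a single LLM prompt."""
    if text_type not in TEMPLATES:
        raise ValueError(f"unsupported text type: {text_type}")
    contact_note = (
        "The recipient has been contacted before."
        if answers.get("contacted_before")
        else "This is a first contact."
    )
    return TEMPLATES[text_type].format(
        product=answers["product"],
        benefits=", ".join(answers["benefits"]),
        contact_note=contact_note,
    )

prompt = build_prompt(
    "sales_email",
    {
        "product": "a grammar-checking plugin",
        "benefits": ["saves time", "fewer errors"],
        "contacted_before": False,
    },
)
print(prompt)
```

In this sketch the resulting prompt string would then be sent to the ChatGPT API, and the response post-processed; how SlimTaal performs that final adaptation step remains undocumented.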

3. Ethical concerns

SlimTaal provides its users with a tool that is easy to use, even for people who have no experience writing prompts or interacting with AI tools. Additionally, the company frequently advertises its services on public radio, which increases the chances of reaching a different audience than the regular users of AI tools. By making the tool this accessible and easy to use, SlimTaal helps users with all levels of IT knowledge join the developments in the field of AI, thereby helping to decrease inequalities of this type.
Additionally, SlimTaal ensures the safety and privacy of its users by following the GDPR (known in Dutch as the AVG). Personal data of users is only stored on private servers, and the answers that users provide to generate prompts are used only for this purpose, rather than to improve the training of GPT models.

Personal data is entered into ChatGPT

As mentioned in the section on AI technologies, SlimTaal's main service is rewriting prompts and outputs so that the result is a more professional and fitting text. This is done by transforming the user's answers to questions into prompts that are provided to ChatGPT. This method carries a risk that the target audience of this tool will likely not notice: if the user enters private information, or information that is private to their organization, this data also enters ChatGPT. SlimTaal mentions this practice in its general terms and conditions, in the following wording:

SlimTaal Terms of Use (translated from the Dutch original)

"If the information supplied by users of SlimTaal contains personal data, this personal data may be used in the relevant prompts. In that situation, Loo van Eck B.V. thus shares personal data via SlimTaal with the operator of ChatGPT: OpenAI Ireland Ltd."

This practice raises an ethical concern, because users are likely not aware that their data is being shared with OpenAI. Since OpenAI may use model inputs for training purposes, a user's data could be used in this way without their knowledge. SlimTaal specifies in its frequently asked questions that user data is not used to train its models. This is likely true for SlimTaal's own model, but with this set-up, the user's data can still be used to train a large language model. The wording therefore misleads the user into thinking that they can safely provide personal data, while this is not actually a safe practice.

Not getting your money’s worth

SlimTaal is a tool that simply transforms your information into suitable prompts, and then transforms the output of the ChatGPT model into suitable results. This might be an attractive feature for users without any previous experience with AI tools, but since this is a paid service, there are some ethical concerns. The only service that SlimTaal offers is, in essence, an easy guide to prompt engineering. If users entered the same information into ChatGPT themselves and asked it to write the text, would they not get similar results for free? SlimTaal does help the user by suggesting questions and providing a simpler interface, but beyond that it is unclear what the user is paying for. Since ChatGPT is a free tool, the user could decide to enter their requests themselves and save the money.

It should be mentioned that SlimTaal does consult with organizations to identify their specific writing styles and requirements, and adjusts its tool to these specific needs. The concern therefore mainly applies to individual users, who pay for a service that could likely be replaced by a step-by-step plan and a free ChatGPT account.

Dependence on the fairness of ChatGPT

A general risk of large language models is the appearance of harmful content or biases in the outputs of the model. SlimTaal uses the outputs from ChatGPT, which makes it directly dependent on the fairness of that model. Does it have any method of checking whether its outputs are fair and free of bias? Since the relation to ChatGPT seems central to the workings of the SlimTaal tool, the risks inherent in ChatGPT translate directly to SlimTaal. If the number of SlimTaal users increases, it will become difficult for SlimTaal to check the outputs for potential biases, and the company does not report any such checks or research on its website. Did it perform any research into potential biases in its outputs? How does it control for the risks of using ChatGPT? Or is there no control at all, leaving the quality entirely dependent on ChatGPT?
If there is no control at all, users could unintentionally use texts that contain harmful or biased content. Such bias can be hard to detect: representation bias, for example, is usually only discovered when considering many outputs together.

4. Recommendations

To address the concerns highlighted above, multiple recommendations can be made. The first concern covered the input of personal data into ChatGPT, and it can be resolved in multiple ways. First of all, SlimTaal could use a locally hosted GPT model, so that the inputs to this model are not shared with OpenAI. If this is infeasible for their approach, they could instead guide the user in entering their data: when asking the user to answer the questions, they could include a warning not to include personal or organizational data if the user does not want this data to be known by OpenAI.
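The second mitigation, warning users before personal data leaves their machine, could look roughly like the sketch below. The two regular expressions are illustrative only; a real deployment would need far more thorough PII detection (names, addresses, ID numbers, and so on), and the function name and warning text are assumptions.

```python
import re

# Illustrative only: a serious PII screen needs much broader coverage
# than these two example patterns.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
}

def pii_warnings(user_answer: str) -> list[str]:
    """Return one warning per PII category detected in the user's input."""
    warnings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(user_answer):
            warnings.append(
                f"Your answer seems to contain a possible {label}. "
                "Anything you enter here is shared with OpenAI; "
                "remove it if you do not want that."
            )
    return warnings

warnings = pii_warnings("Contact me at jan@example.nl or +31 6 12345678")
print(warnings)
```

Shown before the prompt is sent, such warnings would let users make an informed choice, rather than discovering the data sharing in the terms and conditions afterwards.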
The second concern, that users might not get their money's worth, can also be resolved by more open communication from SlimTaal. If SlimTaal were to communicate its methods to the user more clearly, the user could make a more informed decision about whether or not to invest in this service. The service could also be marketed more explicitly as a tool for people who are not accustomed to using AI tools, or who have trouble coming up with the right prompts.
Thirdly, there is the concern of using the outputs from ChatGPT without the ability to check for harmful or biased content. As SlimTaal does not provide information about the way in which it adapts the output, it cannot be said with certainty that no checks are performed on this output. However, by not disclosing this, the user gets no guarantee that the output is checked for potential biases. Assuming that SlimTaal does perform such checks, a solution would be to include an explanation of the changes that are made to the ChatGPT output. If SlimTaal does not perform these checks, it is recommended that they investigate their methods and publish their findings on their website.
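To make the third recommendation concrete, a minimal post-generation check could be structured as below. This is a toy illustration under strong assumptions: real bias auditing requires human review and far more sophisticated tooling than a term blocklist, and the flagged terms here are placeholders, not anything SlimTaal uses.

```python
# Toy illustration of a post-generation review step; the blocklist is a
# placeholder and does not represent SlimTaal's actual processing.

FLAGGED_TERMS = {"guarantee", "miracle"}  # placeholder terms only

def review_output(generated_text: str) -> tuple[str, list[str]]:
    """Pass the model output through a simple content check and
    return the text together with any flags raised."""
    lowered = generated_text.lower()
    flags = sorted(t for t in FLAGGED_TERMS if t in lowered)
    return generated_text, flags

text, flags = review_output("This miracle tool will boost your sales.")
print(flags)
```

Even a basic step like this, if documented, would give users some evidence that outputs are reviewed before being presented, rather than passed through unchecked.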