
AI and the concerns about privacy and ethics

Artificial Intelligence is moving ever closer to our daily lives, although it often remains invisible to most people. We are regularly reminded that this is happening: think of the algorithms in social media, person recognition on your phone and email fraud detection.

If government and healthcare start to embrace AI en masse, this trend will only be reinforced. As a result, there are growing concerns about privacy and ethical aspects. An important question is whether an AI profiles people ethically and makes predictions that are free of biased viewpoints; bias would (unintentionally) lead to discrimination.

Governments are aware of this and are trying to define approaches to prevent it. These approaches are mainly aimed at gaining insight into how an AI works: how do the models reach their conclusions, can that be made transparent, and can you prove to a judge afterwards that you acted responsibly and in accordance with all legislation? And, beyond the law, was the application ethically responsible?

Ethics and law are not the same

That brings us directly to the next topic. The law is a framework and should be followed. However, the law does not equal ethics. In Western societies, ethics is usually taken into account by democratically elected representatives, but certainly not always. For example, from an ethical perspective you should not distinguish between residents of country A and country B; after all, every person has the same basic rights to freedom, life and happiness. Yet governments continuously deviate from this principle based on who lives within their borders. There are good reasons for this from an economic and cultural perspective, but that does not make it ethical.

Ethics and privacy

Perspectives in ethics

There are different perspectives in ethics, which can lead to different outcomes. You can judge whether an action is ethical based on its consequences: killing one person to save ten lives would be ethically correct in this view. You can also approach it from a personal conviction, for example that killing someone is never acceptable; in that case the right choice would be to let ten people die instead of one. Another perspective takes the person's situation into account: if the one who would die is a family member or someone even closer (a child or parent), the ethical outcome is probably that this person chooses to preserve the family member's life over the others. You could argue that governments use the latter principle in policy making. But a government is not personally involved with its inhabitants, since it does not know everyone, so the personal ethical perspective does not apply.

The algorithms that form an AI have no feelings or moral values, but they can weigh the consequences of an output against its costs. For that reason, an AI can currently only apply the first, consequence-based perspective. Since several perspectives, as well as political motivations, have to be taken into account, it is wise to occasionally reconsider the outcome of an algorithm.
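As a small illustration of that consequence-based perspective, the sketch below weighs predicted outcomes against a cost matrix and simply picks the action with the lowest expected cost. The intervention scenario and the cost values are made-up assumptions, not our actual models.

    # Consequence-based decision making in miniature: the model has no moral
    # values, it only minimizes expected cost given a probability estimate.
    import numpy as np

    # costs[action][outcome]: action 0 = "do not intervene", action 1 = "intervene"
    costs = np.array([
        [0.0, 10.0],   # no intervention: cheap if nothing happens, costly if it does
        [1.0,  1.0],   # intervention: fixed cost regardless of outcome
    ])

    def choose_action(p_bad_outcome: float) -> int:
        """Pick the action with the lowest expected cost."""
        expected = costs @ np.array([1.0 - p_bad_outcome, p_bad_outcome])
        return int(np.argmin(expected))

    print(choose_action(0.05))  # low risk  -> 0 (do not intervene)
    print(choose_action(0.40))  # high risk -> 1 (intervene)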

Discrimination

A much-discussed topic within government is discrimination, especially in relation to AI. An AI can indeed discriminate, and will even do so if the data gives it reason to. There are well-known examples, such as a racist chatbot, but there are also much more subtle forms of discrimination that are undesirable from a government and healthcare perspective.

Discrimination by an AI can be prevented or greatly reduced. You can train a model in such a way that it does not discriminate, or discriminates much less than a person would with the same information. Describing exactly how this works is beyond the scope of this blog. In short: you automatically augment the part of the data that has a skewed distribution and let two models compete to assess to what extent the synthetic data corresponds to the real data. After that, the real training starts with a dataset that is much more balanced. This is not science fiction; we have already done this in practice.
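A rough sketch of that idea, assuming numeric, already scaled tabular features: a small generator and discriminator compete over the under-represented group, and the generator's synthetic rows are then added so that the final training set is balanced. This is an illustration of the principle, not our production pipeline.

    # Balance a skewed dataset by generating synthetic rows for the minority
    # group with a tiny GAN (two competing networks), then train on the result.
    import torch
    import torch.nn as nn

    def make_mlp(in_dim, out_dim, hidden=64, final=None):
        layers = [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)]
        if final is not None:
            layers.append(final)
        return nn.Sequential(*layers)

    def augment_minority(x_minority, n_new, noise_dim=16, steps=2000, lr=1e-3):
        """x_minority: float tensor of minority-group rows. Returns n_new synthetic rows."""
        n_features = x_minority.shape[1]
        gen = make_mlp(noise_dim, n_features)                 # produces synthetic rows
        disc = make_mlp(n_features, 1, final=nn.Sigmoid())    # judges real vs. synthetic
        opt_g = torch.optim.Adam(gen.parameters(), lr=lr)
        opt_d = torch.optim.Adam(disc.parameters(), lr=lr)
        bce = nn.BCELoss()
        real_labels = torch.ones(len(x_minority), 1)
        fake_labels = torch.zeros(len(x_minority), 1)

        for _ in range(steps):
            # Discriminator step: real rows should score 1, generated rows 0
            fake = gen(torch.randn(len(x_minority), noise_dim)).detach()
            d_loss = bce(disc(x_minority), real_labels) + bce(disc(fake), fake_labels)
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Generator step: try to make the discriminator score fakes as real
            g_loss = bce(disc(gen(torch.randn(len(x_minority), noise_dim))), real_labels)
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

        with torch.no_grad():
            return gen(torch.randn(n_new, noise_dim))

    # Usage: append the synthetic rows to the original data so both groups are
    # equally represented, then train the production model on the balanced set.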

Privacy

In Europe we have strict laws protecting the privacy of citizens (the GDPR). This has significant consequences for the application of information technology as a whole, and certainly for the use of artificial intelligence. The data must be anonymized or already anonymous. This can be done by removing all personal information before training, or the data must already be anonymous, as with certain photos or video images. Many AI projects therefore spend a lot of time anonymizing existing data. You can partially automate that process with AI: natural language processing can be used to detect and remove names, address details, bank account numbers and social security numbers. European law does not permit working with data that is still privacy sensitive, even if that data would not significantly change the outcome of the trained model.
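As an illustration of such automated anonymization (a sketch, not a complete solution): structured identifiers can be caught with regular expressions, and names or locations with a named-entity recognizer. The regex patterns below are deliberately simplified, and the spaCy model name is an assumption; any NER model with person and location labels would do.

    # Strip obvious personal data from free text before it reaches a model.
    import re
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed

    PATTERNS = {
        "[IBAN]":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),  # bank account (IBAN-like)
        "[SSN]":   re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b"),           # social security number (US-style)
        "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def anonymize(text: str) -> str:
        # 1. Structured identifiers via regular expressions
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        # 2. Names and locations via named-entity recognition
        doc = nlp(text)
        for ent in reversed(doc.ents):  # reversed so character offsets stay valid
            if ent.label_ in {"PERSON", "GPE", "LOC", "FAC"}:
                text = text[:ent.start_char] + "[REDACTED]" + text[ent.end_char:]
        return text

    print(anonymize("John Williams (IBAN NL91ABNA0417164300) lives in Amsterdam."))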

For example, an AI will rarely conclude that someone named John Williams has a higher risk of cancer than someone named Pete Harris. It cannot be ruled out entirely, as such a correlation could turn out to be statistically significant; if it did, it might be interesting to find out why.

Consequences for our product Intra

The aspects surrounding privacy and ethics directly affect our Intra product. Because we work with computer vision and often with cameras, we cannot clean up and anonymize the data before processing. Anonymizing during processing, by blurring or masking faces, is technically possible, but that is not permitted under European law. However, we can process data completely anonymously, and that is allowed, without having to rely on exceptions in the GDPR. We do this by using thermal cameras for detection instead of normal cameras.

To prevent discrimination in existing datasets, new data can be generated using generative and discriminative networks. This is also possible with video and images.

This AI assessment tool is a practical instrument for governments and healthcare institutions. It is useful but also quite labour-intensive; in any case, it is definitely worth reading, so that you have a clearer picture of your goals, methods and perspectives before you start. Of course, we can also help with this assessment.

Relevant sources
