What is an example of a system classified under 'Unacceptable Risk' in the EU AI Act?


In the context of the EU AI Act, systems that pose an 'Unacceptable Risk' are those that have the potential to cause significant harm to individuals' rights and safety, or that could undermine democratic processes or social values. Subliminal techniques that distort behavior fit this classification perfectly because they manipulate individuals in a way that bypasses conscious awareness, leading to actions that the individual is not fully aware of or does not consent to. Such techniques can lead to severe implications for personal autonomy and decision-making, raising ethical concerns about manipulation and consent in AI applications.

The other systems mentioned do raise ethical and safety concerns, but they do not meet the 'Unacceptable Risk' threshold as clearly as subliminal techniques do. Simplistic recommendation systems, while they may have limitations, generally do not manipulate behavior to a degree that compromises individual rights. Predictive policing systems can lead to serious issues such as bias and discrimination, but they are typically classified as 'high risk' rather than 'unacceptable risk'. Real-time monitoring without consent poses privacy and ethical concerns, yet it may not be classified as unacceptable in all contexts. Subliminal techniques therefore represent a more direct and profound violation of individual rights, aligning them with the EU AI Act's 'Unacceptable Risk' classification.
