What is considered an unacceptable risk in AI systems according to the EU Model?


Under the EU AI Act's risk-based model, the "unacceptable risk" tier covers AI practices considered incompatible with fundamental rights and public safety. Real-time remote facial recognition (a form of remote biometric identification) in publicly accessible spaces is treated as such a practice because it threatens privacy and personal data protection and can enable mass surveillance. These systems can also produce wrongful identification or profiling, undermining individual freedoms and societal norms.

The EU's approach reflects concern about the implications of such technologies for democratic values and human rights. Because these systems often operate without individuals' knowledge or consent, their deployment raises ethical questions and creates potential for misuse. The EU AI Act therefore prohibits outright, subject only to narrow exceptions, technologies deemed to pose unacceptable risk, including real-time remote facial recognition in public spaces.

By contrast, the other options, while they carry real risks, do not rise to the "unacceptable" tier under the EU framework; they are typically treated as high-risk instead. High-stakes medical diagnosis systems, for example, require robust safeguards and oversight but can be developed in ways that respect ethical standards and patient safety. Likewise, automated loan underwriting and AI in educational testing raise significant fairness and transparency concerns, yet they do not inherently violate fundamental rights to the same degree as uncontrolled facial recognition in public settings.
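The tiered classification described above can be sketched as a small data model. This is an illustrative simplification, not the Act's legal text: the tier names follow the EU AI Act's four-level scheme, but the example mapping and the `tier_for` helper are hypothetical, and real classification depends on context and statutory exceptions.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified version of the EU AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted with strict obligations and oversight"
    LIMITED = "transparency duties"
    MINIMAL = "no additional obligations"

# Illustrative mapping of the use cases discussed above (not an official list).
EXAMPLE_CLASSIFICATIONS = {
    "real-time remote facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "high-stakes medical diagnosis system": RiskTier.HIGH,
    "automated loan underwriting": RiskTier.HIGH,
    "AI in educational testing": RiskTier.HIGH,
}

def tier_for(use_case: str) -> RiskTier:
    # Hypothetical lookup; unknown use cases default to MINIMAL here
    # purely for the sake of the example.
    return EXAMPLE_CLASSIFICATIONS.get(use_case, RiskTier.MINIMAL)
```

The key point the sketch captures is that only the first use case is banned; the others remain lawful but are subject to heightened requirements.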
