Which issue arises when AI models perform differently across groups because of data biases?


Multiple Choice

Which issue arises when AI models perform differently across groups because of data biases?

- Bias in AI
- Fairness in AI
- Transparency in AI
- Privacy in AI

Explanation:
The correct answer is bias in AI. When the data used to train a model contain biases or are not representative of all groups, the model learns patterns that serve some groups better than others. The result is different accuracy or error rates across groups, even when overall performance looks good. Fairness in AI is the effort to reduce or eliminate such disparities, but the underlying problem described here is bias. Transparency and privacy concern other aspects of responsible AI: how decisions are explained and what data are kept private, rather than the unequal performance caused by biased data.
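A tiny synthetic sketch (all numbers invented for illustration) shows how overall accuracy can mask a per-group disparity of exactly the kind the explanation describes:

```python
# Hypothetical records: (group, true_label, predicted_label).
# The majority group is classified perfectly; the minority group is not.
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1),
    ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 0),
    ("majority", 1, 1), ("majority", 0, 0),
    ("minority", 1, 0), ("minority", 0, 1),
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(t == p for _, t, p in rows) / len(rows)

overall = accuracy(records)
by_group = {
    g: accuracy([r for r in records if r[0] == g])
    for g in ("majority", "minority")
}

print(overall)   # 0.8 -- looks acceptable in aggregate
print(by_group)  # {'majority': 1.0, 'minority': 0.0} -- stark disparity
```

Auditing accuracy (or error rates) per group, rather than only in aggregate, is how this kind of bias is usually detected in practice.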
