AI bias is a feature, not a bug. It can even serve as a measure of societal progress.
Let us unpack that.
Most data available today is intrinsically biased, and #AI outcomes reflect this. This is NOT an algorithm gone bad, a flawed training procedure, or fodder for the #AIforgood vs. evil AI debate.
Artificially fixing or resampling data populations to reflect what seems right doesn’t solve the underlying social challenges; it only sweeps them under the carpet and creates a smokescreen. In actuality, it dilutes the message, if not makes matters outright worse.
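To make this concrete, here is a minimal toy sketch (all data and numbers are hypothetical, invented for illustration): oversampling an under-represented group equalizes the group counts in the dataset, but the outcome disparity between groups survives the rebalancing untouched.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical toy data: (group, outcome) pairs where positive-outcome rates
# differ by group. This gap stands in for a real societal disparity, not a
# sampling artifact.
data = [("A", 1 if random.random() < 0.7 else 0) for _ in range(800)] + \
       [("B", 1 if random.random() < 0.4 else 0) for _ in range(200)]

def positive_rate(rows, group):
    outcomes = [o for g, o in rows if g == group]
    return sum(outcomes) / len(outcomes)

# "Fixing" the data: oversample group B with replacement until both
# groups are equally represented.
group_b = [row for row in data if row[0] == "B"]
balanced = data + random.choices(group_b, k=600)

print(Counter(g for g, _ in balanced))        # group sizes are now equal...
print(positive_rate(balanced, "A"))           # ...but the outcome gap
print(positive_rate(balanced, "B"))           # between groups persists
```

The rebalanced dataset looks demographically "fair," yet any model trained on it still learns the same group-conditional outcome gap, because resampling changes representation, not the underlying relationship the data encodes.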
Most businesses and tech-savvy organizations are well aware of this underlying disparity. Have you ever wondered why it seems such a struggle to persuade businesses to adopt #ethicalAI and #responsibleAI frameworks? Now you know why.
How to fix this?
Instead of concentrating predominantly on what’s wrong with #ArtificialIntelligence and how to address it via #AIethics and #AIgovernance, we should prioritize and fix specific societal problems, monitor how our values develop, and find practical ways to operationalize these concepts.
That is certainly abstract. Yes, and much harder than criticizing AI, but who said progress was going to be simple?
Where does AI come in?
AI is indeed the go-to technology of today. We can certainly use the rapid surfacing of AI bias as a symptom that points us to the *disease* itself, but we should never lose sight of the real issue here. No AI bias framework or responsible AI principle is going to fix #society for you.
Decide for yourself. Where would you rather spend your time – fixing the symptom, solving the problem, or striking a balance between the two?
As always, be sure to consult with AI and ethics experts to help support your decision making. And don’t forget to let us know what thoughts you would add to this piece in the comments below.