Next Executive Sessions:
After the interesting talk by Stephen Brobst (CTO #4), we now want to organise a debate on Big Data & Ethics. Pierre-Nicolas is proposing the following format: a short presentation by each participant, presenting his/her company’s vision, followed by a debate. Any suggestions? Who wants to participate?
Here is some inspiration on the topic.
The RTBF’s big data expert Pierre-Nicolas Schwab reflects on how to develop algorithms for PSM that match their values and mitigate the potential flaws of artificial intelligence. He invites EBU Big Data Initiative participants to join the RTBF workshop addressing this issue this December in Brussels.
“Most large companies are investing massively in Big Data technologies to leverage the value of their data. While many still consider Big Data as an inescapable business trend, concerns are growing regarding the impact of Big Data on our daily lives.
A ‘man vs machine’ Artificial Intelligence (AI) milestone was reached in March this year when the deep-learning program AlphaGo defeated Lee Sedol, one of the world’s best Go players. In an article I published before the match, I asked how advances in AI were changing our lives. Cathy O’Neil, a mathematician with a PhD from Harvard, expressed similar concerns at the USI conference in Paris in early June and will release her book “Weapons of Math Destruction” in September. The book elaborates on the concerns she has raised on her blog about ill-founded decisions triggered by algorithms and how big data “increases inequality and threatens democracy”. The title may be provocative, but the book promises to go beyond the filter-bubble effect made popular by Eli Pariser.
Because algorithms are only as good as those who build them, we need to open up the models, not only the data. Those models must be subject to criticism, peer review and third-party scrutiny. This will prevent the use of biased or even dangerous algorithms (e.g. the French university selection algorithm scandal revealed earlier this month) and will increase people’s trust in organizations that use algorithms. To illustrate the latter: the French tax authorities are now required to make public the criteria used to select a taxpayer for an audit. This shows that change is under way and that, as PSM, we must embrace and support it.
Not only must we avoid replicating flawed models (in particular recommendation algorithms that trap users in “filter bubbles”), but we, as public organizations, also have duties towards our democratic societies and their citizens. That is why we need to (1) engage in a global reflection on how our algorithms should be shaped to reflect our values and (2) pave the way for better practices that will inspire organizations in other industries.