Data ethics: It’s a thing
In light of the Ethics of Data Science conference that concluded yesterday, it’s worth looking at how ethical algorithms can actually be.
Think about it: from self-driving cars to automated personal assistants, artificial intelligence is well on its way to affecting every facet of our daily lives.
This means industries and governments are relying on machine learning to make important decisions that will have a real effect on the lives of consumers and citizens.
The conference is hosted by the University of Sydney, which tells us: “algorithms are a fundamental tool in everyday machine learning and artificial intelligence, but experts have identified a number of ethical problems. Models built with biased and inaccurate data can have serious implications and dangerous consequences, ranging from the legal and safety implications of self-driving cars and incorrect criminal sentencing, to the use of automated weapons in war.”
It’s unlikely the world will go all WarGames, but it is a possibility if the right data gets into the wrong hands.
So what can be done?
The university's data science centre uses data science to preserve natural resources, build intelligent systems, improve digital health and explore the human condition.
“It is important to note that algorithms are not unethical, it is the bias in sampling created by some implementations of them which is an issue,” Professor Cripps explained.
Taking domestic violence as an example, Professor Cripps said, “If an algorithm finds that a subgroup of the population is more likely to experience domestic violence, and on that basis continues to sample from that subgroup, then it is a self-fulfilling prophecy. To guard against this, a deep understanding of uncertainty and how to quantify it needs to be incorporated into algorithms.”
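The feedback loop Professor Cripps describes can be sketched with a toy simulation (an illustrative assumption of ours, not her actual model): two subgroups with identical underlying risk, where each round's sampling effort is naively reallocated in proportion to incidents observed so far. The initial bias toward one group then persists in the record, even though neither group is truly riskier.

```python
import random

random.seed(42)

# Two subgroups with the SAME true incident rate (hypothetical numbers).
TRUE_RATE = {"A": 0.1, "B": 0.1}
effort = {"A": 60, "B": 40}    # sampling starts slightly biased toward A
observed = {"A": 0, "B": 0}    # cumulative recorded incidents

for _ in range(50):  # 50 rounds of sampling
    for group, n in effort.items():
        # each sampled individual has the same chance of an incident
        observed[group] += sum(random.random() < TRUE_RATE[group]
                               for _ in range(n))
    # naive policy: next round's effort follows observed incidents --
    # this is the self-fulfilling feedback loop
    total = observed["A"] + observed["B"]
    if total:
        effort["A"] = round(100 * observed["A"] / total)
        effort["B"] = 100 - effort["A"]

print(observed)  # the recorded incidents mirror where we looked,
print(effort)    # not any real difference in risk between groups
```

The point of the sketch is that the data ends up "confirming" the initial allocation: quantifying the uncertainty in those observed counts, rather than treating them as ground truth, is exactly the guard Cripps recommends.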
When it comes to business ethics, the challenge is defining them in a digital world: when you profit from data use, how do you achieve ethical practice?
Jason Tan, CEO and co-founder of machine learning company Sift Science told SecurityIntelligence, “Each business needs to define for itself a clear North Star of what is right and what is wrong. That doesn’t have to get into the nitty-gritty of what is right and wrong — but establish a baseline of what they want for a cultural mindset so that everyone is guided by the principle of doing the right thing as much as possible.”
That isn’t so easy because what is considered right or wrong is often a grey area.
Accenture says: “The digital economy is built on data—massive streams of data being created, collected, combined and shared—for which traditional governance frameworks and risk-mitigation strategies are insufficient. In the digital age, analyzing and acting on insights from data can introduce entirely new classes of risk. These include unethical or even illegal use of insights, amplifying biases that exacerbate issues of social and economic justice, and using data for purposes to which its original disclosers would not have agreed, and without their consent. These and other practices can permanently damage consumer trust in a brand.”
No company wants that. And imagine if a small cap working in the data space misused the data it collected: its share price could plummet to a point from which it may never recover. For a small cap, especially a tech stock, brand strength is also imperative to long-term survival.
Accenture has released a slideshow titled Building Digital Trust. It covers best practices for data sharing and offers 12 guidelines for building a code of data ethics, including governance, transparency, privacy safeguards and respect for the people behind the data.
We’ll leave you with a link to this Forbes article, which looks at blockchain, cybersecurity, cloud computing, automation, AI and data ops, and considers just how many industries have moved into data use and how they are disseminating that data.