Artificial Intelligence + Algorithms = Assumptions!


Submitted by: Hazel Henderson

Posted: Aug 12, 2016 – 06:00 AM EST


Artificial intelligence (AI) has entered the public debate with DeepMind's recent win over a champion of the Chinese game Go. AI is about computers getting better at solving problems formerly thought too difficult for machines and best left to humans. Since the 1970s, a small group of computer specialists and mathematicians has based its hopes on teaching machines to follow the rule-based learning of human reasoning. They designed algorithms (coding these rules into software programs) which they hoped would enable computers to emulate human thought processes.

Today, these algorithms run more and more of our everyday lives. They decide whether our credit scores are good, whether we get hired for a job, whether we can board an airplane or be admitted to the country of our destination. Algorithms dominate the world's stock exchanges, evaluating most companies and deciding whether to buy or sell, as well as causing "flash crashes". Political campaigns are based on algorithms predicting which voters will turn out and which candidates they'll favor. The much-vaunted Internet of Things (IoT) uses algorithms to monitor our babies, open our locks, control our use of energy, steer our cars and oversee our fitness programs and diets. Algorithms control which ads we see online, monitor our buying habits and track our whereabouts via GPS and our smartphones. Increasingly, algorithms program drones and weapons systems.

This brave new world of algorithms and big data has taken over the economies of most post-industrial societies and the lives of their citizens. Most relish the new connectivity, social media and instant, always-on lifestyles – happily surrendering their most personal information and privacy. Few have asked about the assumptions and biases that human programmers may have built into all these algorithms.

New evidence is coming to light on how unconscious human biases can skew algorithms, including gender and racial biases that affect hiring. In New Scientist (July 16, 2016), Aviva Rutkin describes how algorithms with such hidden assumptions have denied people credit, jobs and even parole, and hails the new General Data Protection Regulation (GDPR) recently approved by the European Parliament. The GDPR calls on companies to prevent such discrimination, which filters through algorithms under the guise of mathematical impartiality. Rutkin also cites the US White House symposium on AI, which explored these issues and how Silicon Valley programmers and technocrats can arbitrarily dictate policies affecting citizens' lives.
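How historical bias filters into an "impartial" algorithm can be made concrete with a toy sketch. All data, groups and function names below are invented for illustration; real hiring models are far more complex, but the mechanism is the same: a naive scorer "trained" on biased past decisions reproduces that bias against equally qualified applicants.

```python
# Hypothetical illustration: a frequency-based scorer learns group
# membership as a predictive feature from biased historical records,
# even though the applicants' qualifications are identical.

historical = [
    # (group, qualified, hired) -- past human decisions encode the bias
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def hire_rate(records, group):
    """Fraction of qualified applicants from `group` who were hired."""
    outcomes = [hired for g, qual, hired in records if g == group and qual]
    return sum(outcomes) / len(outcomes)

def naive_score(group, qualified):
    """A scorer that blindly reuses historical hiring rates."""
    return hire_rate(historical, group) if qualified else 0.0

# Two equally qualified applicants receive different scores:
print(naive_score("A", True))  # 1.0
print(naive_score("B", True))  # 0.5
```

The math is "impartial" – it is the training data that carries the discrimination, which is exactly the pattern the GDPR provisions cited above are meant to address.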

An egregious example of a faulty algorithm was exposed by a team of Swedish scientists (The Economist, July 16, 2016), who discovered that some 40,000 brain-research studies based on functional MRI scans may be invalid because the algorithms they applied made faulty assumptions about blood-flow patterns. Another is the business-as-usual assumption in the International Energy Agency's forecasts, which have failed to track the shift from fossil fuels to renewable energy.

Debates about AI, robots, job losses and privacy and security issues have surfaced in many countries, including over who owns all this personal data being patterned into algorithms by spy agencies, social media companies, advertisers, marketers, insurance and banking firms and law enforcement. All this data is valuable and vital to our information-based economies. Jaron Lanier, a scientist at Microsoft Research, claims in Who Owns the Future? (2013) that Google, Facebook, Amazon, LinkedIn, Snapchat and Instagram should pay every user for each and every bit of their personal information. Lanier notes that their business models sell this data to advertisers, insurers, bankers and political campaigns, so each user should receive payment for their data – quite feasible with existing software.
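Lanier's micropayment idea really is feasible with ordinary accounting software. The sketch below is a hypothetical illustration, not Lanier's own design: the rates, record types and names are invented. Each time a platform monetizes a user's data record, a small credit accrues to that user, and the platform periodically settles the balances.

```python
# Hypothetical micropayment ledger for personal-data use.
# Rates are in tenths of a US cent and are invented for illustration.

from collections import defaultdict

RATE_PER_USE = {"ad_targeting": 2, "location": 1, "purchase_history": 5}

def settle(usage_log):
    """Sum the credits owed to each user from a log of (user, record_type)
    entries, one entry per monetized use of that user's data."""
    owed = defaultdict(int)
    for user, record_type in usage_log:
        owed[user] += RATE_PER_USE.get(record_type, 0)
    return dict(owed)

log = [("alice", "ad_targeting"), ("alice", "location"),
       ("bob", "purchase_history")]
print(settle(log))  # {'alice': 3, 'bob': 5}
```

The hard part is not the software but the business model: platforms would have to log and disclose every monetized use of each user's data.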

The public is awakening to the threat of big data as "Big Brother" even while acknowledging its potential benefits. We do not need many of the idiocies promoted for profit in the Internet of Things. For example, a Parks Associates survey found that 47% of US broadband households have privacy or security concerns about smart-home devices. Tom Kerber, the firm's Director of Research, cites recent media reports of hacking of baby monitors and connected cars and suggests that offering consumers a bill of rights might ease concerns. At a minimum, a standard for all smart devices should allow users to switch off their connectivity and operate them manually. A paper co-authored by researchers at DeepMind and Oxford University's Future of Humanity Institute advocates such "off switches" for AI systems (The Economist, June 25, 2016). How would such safeguards work in electronic finance? We as consumers also need greater access to the assumptions embedded in the algorithms that run our lives.
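What such an off-switch standard could look like in device software is easy to sketch. This is a minimal illustration under assumed design choices, not an existing specification: a smart device whose local controls keep working after the user disables all networking.

```python
# Hypothetical smart device honoring a user-facing connectivity off switch:
# remote (cloud) control is refused once networking is disabled, but
# manual, local operation always works.

class SmartThermostat:
    def __init__(self):
        self.connected = True   # cloud features on by default
        self.setpoint = 20      # degrees C, always locally controllable

    def disable_connectivity(self):
        """The user's off switch for all networking."""
        self.connected = False

    def remote_set(self, temp):
        if not self.connected:
            raise PermissionError("connectivity disabled: manual control only")
        self.setpoint = temp

    def manual_set(self, temp):
        # Local controls work regardless of network state.
        self.setpoint = temp

t = SmartThermostat()
t.disable_connectivity()
t.manual_set(18)        # still works offline
print(t.setpoint)       # 18
```

The design choice that matters is that the manual path never consults the network flag – the device degrades to an ordinary appliance instead of a brick.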

Greg Lindsay of the New Cities Foundation reports in Smart Homes and the Internet of Things that 66% of smartphone users fear these devices tracking their movements. The Atlantic Council's March 2016 seminar on "Smart Homes and Cybersecurity" concluded that it is already too late to protect homeowners and other users. Google founder Larry Page is pouring billions into developing flying cars ("Propeller Heads", Businessweek, June 6, 2016). Is this what the public wants? Engineering professor Mary Louise Cummings of Duke University testified at a recent Senate hearing on driverless vehicles, at which Google, General Motors and Ford were requesting over $3 billion in subsidies. She noted that these companies had done no real-world testing of driverless vehicles and doubted they could be both autonomous and safe. What would flying cars and drones for the 1% do to our already crowded skies and to quality of life for the 99%?

All this is why Ethical Markets proposes a new standard to shift the balance of power back to consumers and citizens: an Information Habeas Corpus. Britain adopted the rule of habeas corpus in 1215, assuring individuals' rights over their own bodies, and Parliament further codified it in 1679. Today, we need to extend this basic human right to our brains and to all the information we generate in all our activities. Time for the Information Habeas Corpus!