Seeking Ground Rules for A.I.

About seven years ago, three researchers at the University of Toronto built a system that could analyze thousands of photos and teach itself to recognize everyday objects, like dogs, cars and flowers.

The system was so effective that Google bought the tiny start-up these researchers were only just getting off the ground. And soon, their system sparked a technological revolution. Suddenly, machines could “see” in a way that was not possible in the past.

This made it easier for a smartphone app to search your personal photos and find the images you were looking for. It accelerated the progress of driverless cars and other robotics. And it improved the accuracy of facial recognition services, for social networks like Facebook and for the country’s law enforcement agencies.

But soon, researchers noticed that these facial recognition services were less accurate when used with women and people of color. Activists raised concerns over how companies were collecting the huge amounts of data needed to train these kinds of systems. Others worried these systems would eventually lead to mass surveillance or autonomous weapons.

How should we, as a society, deal with these issues? Many have asked the question. Not everyone agrees on the answers. Google sees things differently from Microsoft. A few thousand Google employees see things differently from Google. The Pentagon has its own vantage point.

This week, at the New Work Summit, hosted by The New York Times, conference attendees worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence. The results are included here.

But even the existence of this list sparked controversy. Some attendees, who have spent years studying these issues, questioned whether a group of randomly selected people was the best choice for deciding the future of artificial intelligence.

One thing is for sure: The discussion will only continue in the months and years to come.

The Recommendations

Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.

Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.

Privacy: Users should be able to easily opt out of data collection.

Diversity: A.I. technology should be developed by inherently diverse teams.

Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.

Trust: Organizations should have internal processes to self-regulate the misuse of A.I., such as a chief ethics officer or an ethics board.

Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.

Collective governance: Companies should work together to self-regulate the industry.

Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.

“Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

The leaders of the groups: Frida Polli, a founder and chief executive, Pymetrics; Sara Menker, founder and chief executive, Gro Intelligence; Serkan Piantino, founder and chief executive, Spell; Paul Scharre, director, Technology and National Security Program, The Center for a New American Security; Renata Quintini, partner, Lux Capital; Ken Goldberg, William S. Floyd Jr. distinguished chair in engineering, University of California, Berkeley; Danika Laszuk, general manager, Betaworks Camp; Elizabeth Joh, Martin Luther King Jr. Professor of Law, University of California, Davis; Candice Morgan, head of inclusion and diversity, Pinterest.
