For much of the tech industry, 2018 was the year of the artificial intelligence reckoning. As AI systems were integrated into more products and services, the technology's downsides became clearer. Researchers, companies, and the general public began to grapple with AI's limitations and its harmful effects, asking important questions such as: how is this technology being used, and for whose benefit?
This reckoning was most visible in a parade of negative headlines about algorithmic systems. This year saw the first deaths caused by self-driving cars; the Cambridge Analytica scandal; accusations that Facebook facilitated genocide in Myanmar; the revelation that Google was helping the Pentagon build drone surveillance tools; and ethical questions about human-sounding AI assistants. The research group AI Now described 2018 as a year of "cascading scandals" for the field, and that is an accurate, if disheartening, summary.
But these headlines need not be read as purely negative. Ultimately, scandal is better than unnoticed harm, and controversy can, in theory, help us improve.
Take facial recognition. It was one of the fastest-moving technologies of 2018, with successes like Chinese police identifying a suspect at a music concert and broadcasters using the technology to identify guests at the royal wedding, but also serious failings, including bias, false positives, and other potentially life-altering errors. Police forces around the world began to use facial recognition in the wild despite study after study showing serious shortcomings, and the technology's authoritarian potential became painfully clear in China, where it is one of many tools used to oppress the Uighur minority.
All this makes for uncomfortable reading, but as a result of these controversies, companies have begun to build tools to combat problems of bias, and big tech companies like Microsoft are now openly calling for the regulation of facial recognition. To read this news in a positive light: more controversy means more scrutiny, and, in the long run, more solutions.
Despite this cascade of scandals, 2018 also saw dozens, if not hundreds, of hopeful and positive deployments of machine learning and AI. There were small victories everywhere: in astronomy, where machine learning spotted new craters on the Moon and overlooked exoplanets; in fundamental scientific research, such as the use of AI to develop stronger metals and plastics; and in health care, where there were numerous examples of AI systems detecting diseases faster and more accurately than humans. New tools like Google's and Amazon's machine learning services, as well as accessible courses from organizations like Fast.ai, are putting artificial intelligence in more hands, and the results are largely useful and often inspiring.
These successes don't cancel out the major failures, but taken together they show that AI is a complex field. It doesn't move in a single moral direction; like any technology, it has been taken up by a diverse array of actors who use it to many different ends.
Looking at the year as a whole, one lesson stands out: AI is not magic. It is not an incantation that can summon venture capital and institutional trust on a whim; nor is it fairy dust that can be sprinkled over products and institutions for instant improvement. Artificial intelligence is a process: something to be examined, scrutinized, and, if all goes well, understood. In other words: long may the reckoning continue.
Final grade: B.
Report card 2018: AI
- AI tools become more accessible
- Countless new use cases are being found across different fields
- World-changing technology is just beginning to hit its stride
Needs improvement
- Potential for increased surveillance and aid to authoritarian states
- Major tech companies and governments deploy AI systems first and ask questions later
- It may all end in tears (maybe)