bots on social media — programmed to, for instance, reply to posts about climate change
by denying its basis in science — as AI.
Others would limit the term to highly complex
instantiations such as the Defense Advanced Research Projects Agency’s (“DARPA”)
Cognitive Assistant that Learns and Organizes (“CALO”)
or the guidance software of a
fully driverless car. We might also draw a distinction between disembodied AI, which
acquires, processes, and outputs information as data, and robotics or other cyber-physical
systems, which leverage AI to act physically upon the world. Indeed, there is reason to
believe the law will treat these two categories differently.
Regardless, many of the devices and services we access today — from iPhone autocorrect
to Google Images — leverage trained pattern recognition systems or complex algorithms
that a generous definition of AI might encompass.
The discussion that follows does not
assume a minimal threshold of AI complexity but focuses instead on what distinguishes
contemporary AI from previous or constituent technologies such as computers and
the Internet.
Why AI “policy”?
That artificial intelligence lacks a stable, consensus definition or instantiation complicates
efforts to develop an appropriate policy infrastructure. We might question the very utility
of the word “policy” in describing societal efforts to channel AI in the public interest.
There are other terms in circulation. A new initiative anchored by MIT’s Media Lab and
Harvard University’s Berkman Klein Center for Internet and Society, for instance, refers
to itself as the “Ethics and Governance of Artificial Intelligence Fund.”
Perhaps these
are better words. Or perhaps it makes no difference, in the end, what labels we use, so
long as the task is to explore and channel AI’s social impacts and our work is nuanced
and rigorous.
This essay uses the term policy deliberately for several reasons. First, there are issues
with the alternatives. The study and practice of ethics is of vital importance, of course,
and AI presents unique and important ethical questions. Several efforts are underway,
within industry, academia, and other organizations, to sort out the ethics of AI.
But
these efforts likely cannot substitute for policymaking. Ethics as a construct is
See Clay Dillow, Tired of Repetitive Arguing About Climate Change, Scientist Makes a Bot to Argue
for Him, POPULAR SCI. (Nov. 3, 2010), http://www.popsci.com/science/article/2010-11/twitter-chatbot-
trolls-web-tweeting-science-climate-change-deniers.
See Cognitive Assistant that Learns and Organizes, SRI INT’L, http://www.ai.sri.com/project/CALO (last
visited Sept. 17, 2017). No relation.
See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 532 (2015).
See Matthew Hutson, Our Bots, Ourselves, THE ATLANTIC, Mar. 2017, at 28, 28-29.
See MIT SCH. OF ARCHITECTURE & PLANNING, ETHICS AND GOVERNANCE OF ARTIFICIAL INTELLIGENCE,
https://www.media.mit.edu/groups/ethics-and-governance/overview/ (last visited Sept. 11, 2017).
E.g., IEEE, ETHICALLY ALIGNED DESIGN: A VISION FOR PRIORITIZING HUMAN WELLBEING WITH
ARTIFICIAL INTELLIGENCE AND AUTONOMOUS SYSTEMS 3 (2016),
http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf. I participated in this effort as a member of the
Law Committee. Id. at 125.