Amazon Is Watching You

The Internet giant is wiring homes, neighborhoods, and cities with cameras and microphones, and powering the nation’s intelligence services. Are we sure we can trust it?

for OneZero Medium

When you think of Amazon, you might think of comparison shopping from your couch, buying exactly what you want, for less than you’d pay at the store. You might think of a delivery person dropping a package at your door, right on time, and how if there’s anything amiss you can send it back for a full refund. You might think of asking Alexa to play a song or a TV show or turn on the lights, and the marvel of how it all just works (usually). You might think of a Prime members’ discount on avocados at Whole Foods, which Amazon acquired in 2017.

Amazon’s reputation for serving its customers with low prices and ruthless efficiency might help to explain why, in survey after survey, the Seattle-based company ranks as America’s most valuable — nay, most loved — brand. One recent study found that Amazon is the second most-trusted institution of any kind in the United States, ahead of Google, the police, and the higher-education system, and trailing only the U.S. military. At a time when an endless string of privacy and election scandals has left Facebook’s reputation in smoldering ruins, and Google’s has been dented by YouTube’s radicalization and content moderation woes, Amazon’s is stronger than ever.

But Amazon’s public image as a cheerfully dependable “everything store” belies the vast and secretive behemoth that it has become — and how the products it’s building today could erode our privacy not just online but also in the physical world. Even as rival tech companies reassess their data practices, rethink their responsibilities, and call for new regulations, Amazon is doubling down on surveillance devices, disclaiming responsibility for how its technology is used, and dismissing concerns raised by academics, the media, politicians, and its own employees.

“We’re all hoping they’re not making a panopticon,” says Lindsey Barrett, staff attorney at Georgetown Law’s Institute for Public Representation. Last month, the institute, serving as counsel to a coalition of 19 watchdog groups, called on the Federal Trade Commission to investigate Amazon for alleged violations of the federal law protecting children’s online privacy. Among other concerns, the groups found that Amazon’s Echo Dot Kids Edition smart speaker retained children’s voice recordings and personal data even after parents tried to delete them. Amazon blamed at least part of the problem on a software bug, which it says it has since fixed.

While the outcome of that case remains to be seen, the complaint represents just the tip of the iceberg. The Amazon of today runs enormous swaths of the public internet; uses artificial intelligence to crunch data for many of the world’s largest companies and institutions, including the CIA; tracks user shopping habits to build detailed profiles for targeted advertising; and sells cloud-connected, A.I.-powered speakers and screens for our homes. It acquired a company that makes mesh Wi-Fi routers that have access to our private Internet traffic. Through Amazon’s subsidiary Ring, it is putting surveillance cameras on millions of people’s doorbells and inviting them to share the footage with their neighbors and the police on a crime-focused social network. It is selling face recognition systems to police and private companies.

“Amazon is essentially selling fear.”

The Amazon of tomorrow, as sketched out in patents, contract bids, and marketing materials, could be more omnipresent still. Imagine Ring doorbell cameras so ubiquitous that you can’t walk down a street without triggering alerts to your neighbors and police. Imagine that these cameras have face recognition systems built in, and can work together as a network to identify people deemed suspicious. Imagine Ring surveillance cameras on cars and delivery drones, Ring baby monitors in nurseries, and Amazon Echo devices everywhere from schools to hotels to hospitals. Now imagine that all these Alexa-powered speakers and displays can recognize your voice and analyze your speech patterns to tell when you’re angry, sick, or considering a purchase. A 2015 patent filing reported last week by the Telegraph described a system that Amazon called “surveillance as a service,” which seems like an apt term for many of the products it’s already selling.

Behind it all is a company whose leaders too often see privacy concerns as overblown, internal dissent as insignificant, and the potential for abuse of Amazon’s technologies as someone else’s problem.

“Just because tech could be misused doesn’t mean we should ban it and condemn it,” said Andy Jassy, the head of Amazon’s cloud business, at the Code Conference earlier this month. He suggested that banning face recognition systems, which can be potent surveillance tools but tend to discriminate against people of color, would be akin to banning email or knives — two pieces of technology, one new and one very old, that could also be employed for good or ill. “You could use a knife in a surreptitious way,” he said.

Everything Amazon is building, it should be said, has the potential to be used for good. Its doorbell cams can catch porch pirates; its face recognition software can help authorities track down suspects; Alexa can be useful around the house in myriad ways (usually). But when you put together the pieces, the company’s cloud-connected eyes and ears can take on an Orwellian dimension that makes Facebook and Google — the two tech companies that tend to loom largest in the privacy fears of consumers — look modest by comparison.

Ears in the living room

Let’s start with the company’s Alexa-powered devices, such as the Amazon Echo, Echo Dot, and Echo Show, which have become a staple of smart homes. They put Internet-connected, always-on microphones, and in some cases cameras, inside your kitchen, living room, or bedroom. For the paranoid, that alone might be reason to shun them: Why risk opening a permanent portal between the Internet and your family’s most intimate spaces? But buyers have embraced them anyway: Amazon says it has sold more than 100 million Alexa-enabled devices, and their owners trust the company to guard their recordings in the process.

Amazon says that Echo devices only start recording when they hear a pre-set wake word, such as “Alexa.” Unfortunately, the system is far from foolproof — just about every Alexa user has been taken by surprise when the device mishears the wake word and suddenly activates. Alexa often picks up and stores snippets of conversation by mistake, and on at least one occasion, Amazon accidentally sent those recordings to a random stranger. It doesn’t help that Amazon has neglected data privacy features that rival smart speakers include. It’s harder to delete your recordings from an Echo than from a Google Home or Apple HomePod, and Amazon was slow to put a physical shutter on the camera of devices like the Echo Show to guard against accidental recording. (Even Facebook’s Portal has one.)

For a company that’s leading a revolution in the relationship between humans and machines, Amazon’s attitude toward privacy fears has at times seemed dismissive. In an interview last year on Slate’s podcast If Then, I asked Amazon’s vice president of Alexa Engine Software, Al Lindsay, to name one Amazon-related data privacy concern he thought was valid, or one privacy-related challenge his team was working on. He said he couldn’t think of a single one.

But watchdogs outside Amazon have found a few. In April, Bloomberg reported that Amazon employs thousands of human contractors in offices around the world to listen to the recordings of unsuspecting Alexa users. The aim isn’t surveillance, but rather an effort to improve the device’s software. That’s a common practice among companies training A.I. — human users of intelligent digital assistants like Alexa and Siri are part of the training process whether they know it or not — but Amazon wasn’t explicitly disclosing the mechanics of the system to users. Apple’s Siri and Google’s Assistant also use human reviewers, but they take steps to anonymize users’ recordings, whereas Alexa’s recordings are tied to each user’s account number, device serial number, and first name.

Then there’s the allegation that Amazon’s Echo Dot Kids Edition has been violating the federal Children’s Online Privacy Protection Act, or COPPA. In addition to retaining kids’ personal information even after parents tried to delete it, the device allegedly stored that information indefinitely, and used a flawed method of parental consent. Barrett of the Institute for Public Representation said she found it hard to believe a company so large could be committing such straightforward privacy violations on a device explicitly intended for and marketed to kids.

In a statement for this story, the company said it has strict protocols in place to protect customers’ privacy and security, and noted that customers can review and delete their recordings at any time in the Alexa app or at the online Alexa Privacy Hub.

When you consider that Amazon is putting Alexa devices in cars, hotel rooms, classrooms, and even children’s hospitals, it’s curious that the company isn’t making more of a public-relations push around privacy and security. It seemed like that might finally be changing last month, when Amazon touted a suite of new Alexa privacy features, including the ability to say, “Alexa, delete everything I said today.” But the feature turned out to be rather byzantine: Instead of simply deleting your recordings the first time you asked, Alexa directed you to open the Alexa app on your smartphone and navigate a long series of menu options to enable the command. Even then, you could only delete your recordings by voice one day at a time, not over a longer period. Deleting everything is an eight-step process that requires the Alexa app. And Amazon still doesn’t offer an option to auto-delete your recordings, as Google does.

Amazon has so far held off on putting voice ads in Alexa, which may help to reinforce its image as a company that sells products to you, rather than making you the product. But that doesn’t mean your Echo isn’t collecting and monetizing data on you. It already tracks your purchases and music choices to use in product recommendations. And while it hasn’t attracted the same scrutiny as Google or Facebook, Amazon’s digital advertising business is quietly booming: It’s now the third-largest behind the duopoly. Amazon is expected to capture nearly 10 percent of the $130 billion U.S. market in 2019, according to one recent estimate.

Eyes on the street

Then there’s Ring, the “smart doorbell” startup that Amazon acquired for $1 billion in early 2018. Whereas other Internet giants mostly confine their snooping to users’ online behavior, Ring lets Amazon — and you — monitor other people’s actions in the real world. Its Wi-Fi-connected devices, mounted outside the doors of homes and businesses, continuously survey a 30-foot radius, capturing video whenever they detect motion. Users can watch the footage in real time, and can pay a fee to store and watch recordings.

Those surveillance capabilities aren’t novel: Businesses and mansions have long had expensive security camera systems. But in the same way that online commerce existed before Amazon brought it to the masses, Ring has mainstreamed home surveillance by putting doorbell cams into a simple, $200 package and by marketing them aggressively to ordinary homeowners — as well as police departments. Amazon is now the dominant player in the doorbell camera market, part of a home surveillance-camera market that one analyst predicts will be worth $10 billion by 2023.

Not content to let users surveil their own front yards, Ring has turned its cameras into semi-public dragnets via an app called Neighbors. Neighbors lets Ring owners upload, share, and comment on each other’s surveillance footage, and gives them the option to make it available to the police. “Ring’s Community Alerts help keep neighborhoods safe by encouraging the community to work directly with local police on active cases,” Amazon said in a statement.

That feature has some law enforcement agencies giddy. A report by CNET found that police departments from Houston to Hammond, Indiana, are partnering with Amazon and offering citizens discounted Rings, while encouraging them to share footage on Neighbors. Thanks to Ring, “our township is now entirely covered by cameras,” a police commander in Bloomfield, New Jersey, told CNET. The police chief of Mountain Brook, Alabama, told the site that access to residents’ Ring footage via Neighbors gave his department the equivalent of citywide security camera coverage, for virtually nothing. Amazon does not require users to share footage with police, but its portal for law enforcement makes it easy for officers to request footage from any given user — a request that many people would likely find awkward to refuse. Users technically remain anonymous in the app, though their general location is evident. Ring told OneZero that it is committed to protecting users’ privacy, and noted that it does not support programs that require customers to share footage with the police in return for discounts on the device.

You might think a giant tech company moving into literal, physical surveillance at a time of heightened online privacy concerns would tread lightly. In Amazon’s case, you’d be wrong. Under Ring founder Jamie Siminoff — who originally pitched the idea for his startup on Shark Tank — the company’s internal messaging has been militant. In 2016, Siminoff handed out camouflage-print t-shirts to employees, and declared war on “dirtbag criminals.”

But the company’s own security, at least prior to the Amazon acquisition, has at times appeared lax: In separate incidents, Ring was found to be storing customers’ home Wi-Fi passwords in plain text, and sending tiny, 20-millisecond packets of audio data to servers in China, where the government aggressively monitors Internet traffic. Ring moved quickly to address those two problems, and there’s no evidence they led to any harm. Beginning in 2016, Ring also gave research and development employees in Ukraine access to users’ personal video recordings for analysis, according to a 2018 report by The Information. Amazon later told The Intercept that the practice was limited to videos publicly shared on the Neighbors app, but declined to say when that policy had taken effect. “As a security company on a mission to reduce crime in neighborhoods, security is at Ring’s core and drives everything we do,” Ring told OneZero in a statement. “Nobody can view a user’s video recordings unless the user allows it or shares them.”

Amazon has embraced Ring’s crime-fighting ethos. “I can think of no nobler mission,” said Amazon’s vice president of devices, Dave Limp, at an event in September 2018. Earlier this year, Ring placed targeted Facebook ads that showed residents of Mountain View, California, actual surveillance footage of a woman who appeared to be trying to break into a car.

“Amazon is essentially selling fear,” says Chris Gilliard, an English professor at Macomb Community College who researches discriminatory uses of technology. “They’re selling the idea that a more surveilled society is a safer one.” But that depends on whether you’re the person being surveilled. Gilliard believes Ring and Neighbors could actually make society less safe for people of color, who are disproportionately identified as “suspicious” on social networks with a neighborhood-watch component. In the past, a resident might tell a neighbor or call the police if they saw a person they thought looked out of place, “but they probably weren’t going to broadcast it to the whole neighborhood,” he says. “Now they are.”

Brains in the cloud

Though most people still think of Amazon primarily as an online retailer, the bulk of its profits now comes from Amazon Web Services, whose cloud servers power nearly half the internet, by some estimates. For smaller websites, AWS serves mostly as an infrastructure provider. But for some of the world’s largest institutions and corporations, AWS also mines and analyzes data, decoding text and images, making predictions and recommendations.

Those clients include major law enforcement and intelligence agencies, such as the Department of Homeland Security, the Department of Defense, and the CIA. Amazon set up a special division of AWS in 2017 to handle classified government intelligence. AWS’s clients also include Palantir, the Silicon Valley big-data firm co-founded by Peter Thiel, which provides software for U.S. Immigration and Customs Enforcement, or ICE. (Amazon is also reported to have made a pitch directly to ICE.) Now AWS is vying with Microsoft Azure for a mammoth $10 billion contract to bring the Pentagon onto its cloud, after Google dropped out under pressure from its own employees.

AWS’s most controversial service is Rekognition, a platform that uses machine learning to analyze images and video footage. Among other features, Rekognition offers the ability to match faces found in video recordings to a collection of faces in a database, as well as facial analysis technology that can pick out facial features and expressions. A 2018 report from the American Civil Liberties Union (ACLU) highlighted how Amazon has been marketing its face recognition capabilities to law enforcement agencies, and has partnerships under way with police in Orlando, Florida, and Washington County, Oregon.

At a developer conference in Seoul, according to NPR, Amazon’s Ranju Das explained that police in Orlando have “cameras all over the city,” which stream footage for Amazon to analyze in real time. Rekognition can then compare the faces in the surveillance video to a collection of mugshots in a database to reconstruct the whereabouts of a “person of interest.” A CNET report in March detailed how the Washington County Sheriff’s Office used Rekognition to identify and apprehend a shoplifting suspect.

One well-documented problem with face recognition software is inaccuracy, particularly when it comes to identifying people of color. In a 2018 test by the ACLU, Rekognition incorrectly matched 28 members of Congress to mugshots of people who had been arrested, and members of color were overrepresented among the false matches. Amazon argues the study is misleading, because the ACLU ran the test at the service’s default confidence threshold of 80 percent, rather than the 99 percent threshold Amazon recommends for law enforcement uses, and because it used an outdated version of the Rekognition software.
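To see how that threshold works in practice, here is a minimal sketch of the kind of call a Rekognition customer can make through AWS’s boto3 Python SDK. The collection name and image file are hypothetical, and the snippet assumes an AWS account with a face collection already indexed; the point is simply that the confidence cutoff is a parameter the customer chooses, not something Amazon enforces.

```python
# Hypothetical sketch: querying an indexed face collection at two
# different confidence thresholds. Requires AWS credentials and boto3.
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

# Hypothetical probe image, e.g. a frame pulled from surveillance video.
with open("probe_photo.jpg", "rb") as f:
    image_bytes = f.read()

# 80 is the service default; 99 is the level Amazon says it recommends
# for law enforcement uses.
for threshold in (80, 99):
    response = rekognition.search_faces_by_image(
        CollectionId="mugshot-collection",   # hypothetical collection name
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,        # minimum similarity to report
        MaxFaces=5,
    )
    matches = response["FaceMatches"]
    print(f"threshold={threshold}: {len(matches)} candidate match(es)")
    for match in matches:
        face_id = match["Face"]["FaceId"]
        print(f"  {face_id} similarity={match['Similarity']:.1f}")
```

Whether a given face comes back as a “match” depends entirely on that one number, which is why the choice of threshold became the crux of the dispute over the ACLU’s findings.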

The idea of building A.I. tools and face recognition for the likes of Palantir and ICE doesn’t sit well with some of Amazon’s own employees and shareholders — especially at a time when immigrants are being separated from their families and children coming across the border are being denied basic human rights. An anonymous Amazon employee wrote on Medium last fall that some 450 workers had signed a letter to CEO Jeff Bezos calling on the company to drop Palantir and stop supplying Rekognition to police departments. “Companies like ours should not be in the business of facilitating authoritarian surveillance,” the employee wrote. In an accompanying interview, the employee said the letter had been met by Amazon’s leaders with “radio silence.”

That’s not surprising, given that AWS vice president Teresa Carlson had touted the company’s “unwavering commitment” to police and military uses of face recognition at a security conference in July 2018. Gizmodo reported in November 2018 that AWS CEO Andy Jassy reaffirmed the company’s marketing of Rekognition to law enforcement in an internal meeting. He portrayed the technology as largely positive, highlighting the work of an organization that is using the software to help find and rescue victims of human trafficking. Jassy added that if Amazon discovered customers violating its terms of service, or people’s constitutional rights, it would stop working with them. But when Carlson was asked previously whether Amazon had drawn any red lines or guidelines as to the type of defense work it would do, her answer was clear: “We have not drawn any lines there.”

In a February 2019 blog post, Amazon suggested that it would be open to some form of national legislation on face recognition to promote transparency and respect for civil rights, among other goals. That followed Microsoft’s far more direct calls for regulation of the technology in 2018. But where Microsoft made a stark moral argument for reining it in, conjuring images of a 1984-esque dystopia, Amazon has consistently defended its use. “In the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement,” the company said in the blog post.

Surveillance as a service

Patent filings can’t tell you what a company is actually going to build. Often, a firm’s lawyers are just trying to cover as many bases as possible, in case some facet of its intellectual property might one day become relevant to the business. But patents can still reveal what a given company thinks its competitive environment might look like in the years to come. And if Amazon’s patent filings tell us anything, it’s that the company sees the expansion of surveillance capabilities as a major part of its future.

Some of these revolve around Alexa and voice recognition. One 2017 application describes a “voice sniffer” algorithm that could pick out keywords for targeted advertising from a conversation between friends. Another proposes to infer people’s health or emotional state from coughs, sniffles, or their tone of voice — again, with targeted advertising as a potential use case.
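The filings describe these ideas only at a high level, but a toy sketch conveys the basic mechanism: transcribe speech, scan it for trigger words, and map the hits to ad categories. The keywords, categories, and transcript below are all invented for illustration; nothing here reflects how an actual Amazon system works.

```python
# Toy illustration of the "voice sniffer" concept from the patent filing:
# scan transcribed speech for keywords and map them to ad categories.
# Every keyword and category here is invented.
AD_TRIGGERS = {
    "vacation": "travel deals",
    "beach": "travel deals",
    "headache": "cold and flu remedies",
    "puppy": "pet supplies",
}

def sniff_keywords(transcript: str) -> set:
    """Return the ad categories suggested by words in a transcript."""
    words = {word.strip(".,!?").lower() for word in transcript.split()}
    return {category for keyword, category in AD_TRIGGERS.items() if keyword in words}

print(sniff_keywords("We should take a vacation to the beach this summer."))
# -> {'travel deals'}
```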

Amazon also has some big ideas for the future of Ring. Earlier this month, Quartz reported that Amazon received trademarks that could cover devices such as cameras mounted on cars, baby monitors, and “home and business surveillance systems.” A separate patent filed back in 2015 makes clear that Amazon has been thinking along these lines since long before it bought Ring: It described a system by which package delivery drones could be hired by customers to fly over a specified target and shoot spy footage. Amazon referred to the idea as “surveillance as a service.”

It’s doubtful even Amazon would ever roll out a product with a name or function quite that blatantly dystopian. But there’s one other set of patents, unsealed in November 2018, that sounds like something the company might actually be working on. They describe how a network of cameras sharing data could be used in tandem with software to automatically identify people whose faces appear in a database of suspicious persons. As CNN first reported, it sounds a lot like a roadmap to incorporating face recognition with Ring and the Neighbors app. An ACLU attorney described it as a “disturbing vision of the future” in which people can’t even walk down a street without being tracked by their neighbors.

In a statement to OneZero, Ring reiterated that a patent application doesn’t necessarily imply a product in development. “We are always innovating on behalf of neighbors to make our neighborhoods better, safer places to live, and this patent is one of many ideas to enhance the services we offer,” the company said.

Smart speakers, A.I. voice assistants, doorbell cameras, and face recognition in public spaces all have their upsides. They offer convenience, peace of mind, and the potential to solve crimes that might otherwise go unsolved. But put them all together — with one very large company controlling all the data and partnering closely with police and intelligence agencies — and you have the potential for a surveillance apparatus on a scale the world has never seen.

Assuming no one’s going to stop Amazon from building this network, the question becomes whether we can trust the company to be responsible, thoughtful, and careful in designing its products and guarding the copious amounts of data they collect — and whether we can trust all the entities that use Amazon’s technology to do so responsibly.

The cloud over us all

In contrast to Facebook, which has spent recent years apologizing for privacy lapses and pledging to fix itself — however ineffectively — or Apple, which has made privacy an explicit selling point, Amazon has so far shown little concern for the ethical implications of its sprawling surveillance capabilities. While Microsoft is declining to sell face recognition technology to police, and Google isn’t selling its face recognition technology at all, Amazon is hawking it to police departments around the country.

In case there remained any doubt about Amazon’s stance — or lack of one — on the societal responsibilities involved in developing its surveillance technologies, CTO Werner Vogels put it to rest at a company event in May. In an interview with the BBC, Vogels explained that it isn’t Amazon’s role to ensure that its face recognition systems are used responsibly. “That’s not my decision to make,” he said. “This technology is being used for good in many places. It’s in society’s direction to actually decide which technology is applicable under which conditions.”

That tone is set at the very top. At a tech conference in San Francisco in October 2018, Bezos framed the company’s military contracts as part of a patriotic duty. And he compared his company’s development of high-tech surveillance tools to the invention of books, which he said have been used for both good and evil. “The last thing we’d ever want to do is stop the progress of new technologies,” Bezos said, according to a CNN report. One might hope that the last thing Amazon would want to do is develop new technologies that cause harm, but apparently that’s of lesser concern to its executives than standing in the way of innovation.

Bezos did sound one brief note of concern, only to dismiss it in the same breath: “I worry that some of these technologies will be very useful for autocratic regimes to enforce their rule… But that’s not new, that’s always been the case. And we will figure it out.”

The idea that innovation’s march is inevitable, and that it’s not up to companies to guide how new technologies they create are used, has been embodied in the strategies of tech companies for decades. It’s implicit in the ethos of “move fast and break things,” in the belief that it’s better for innovators to ask for forgiveness than for permission. But as the social costs of these innovations have grown heavier, once-transgressive platforms such as Facebook, Google, and Twitter — urged on by their own employees — have come to accept the view that they are indeed responsible, at least to some extent, for their products’ impact on society, and that they have some power to shape it proactively.

Amazon, it seems, has not joined them in this view.

Gilliard, the researcher who has been studying how Ring and Neighbors could affect minorities, says he suspects Amazon will not be able to maintain such a blasé attitude toward the impacts of its products — intended or unintended — much longer. “Amazon has not had their Cambridge Analytica moment yet,” he told me.

What might that moment look like for Amazon? Gilliard hesitated for a moment, and then sketched out a hypothetical scenario. He noted that Amazon’s Key in-home delivery service allows delivery people to drop off packages inside customers’ garages, using a smart lock. (It had initially tried dropping them off inside people’s front doors.) Gilliard imagines an Amazon delivery person of color entering a customer’s garage, and the customer getting an alert of a suspicious person from one of Amazon’s own products, such as Ring or the Neighbors app. The situation could turn ugly. “I really, really hope this doesn’t happen: I think someone’s going to get hurt,” Gilliard said. “An Amazon employee is going to get hurt, arrested, assaulted.”

In the absence of a massive, headline-dominating fiasco, or a sudden crisis of conscience on the part of Amazon employees and executives, the best defense against the company’s surveillance overreach might be regulation. In May, San Francisco became the first major U.S. city to ban the use of face recognition by its police and other city agencies. Defenders of the technology found the action premature and drastic. But Amazon’s own anonymous employee, whose identity was verified by Medium, warned that if we don’t act soon, “the harm will be difficult to undo.” It’s equally hard to imagine police departments giving up access to the Neighbors app once they’ve come to rely on it, unless they’re compelled to do so. And while one might hope that Amazon would stop short of putting face recognition in doorbells — or on cars, or on drones — its leaders have given us no indication that they see a problem with it.

Amazon has overcome societal trust barriers before. It launched as an online bookstore in 1995, just one year after the very first online purchase in Internet history. Amazon, along with eBay, helped to persuade the public that using their credit card online wasn’t as crazy as the skeptics thought.

Even as the company has expanded in seemingly incongruous directions, it has maintained a rigorous focus on streamlining processes that used to be cumbersome, from rapid and cheap delivery to controlling smart gadgets by voice. You can see the same impulse at work in the way Rekognition automates the onerous task of comparing a single criminal suspect’s face to hundreds of thousands of mugshots in a database, or how Ring makes it seamless to alert neighbors and the police when something suspicious happens on your street. But it’s worth asking, before we’ve made surveillance as easy and ubiquitous as a one-click Amazon purchase, whether society might be better off keeping certain tasks a bit cumbersome after all.

Update: An earlier version of this story misidentified the market category in which Amazon, via Ring, is the dominant player. It is the dominant player in the market for doorbell cameras. An earlier version misidentified a free feature of Ring as a paid feature. Users can watch live footage for free, or store and watch recordings for a fee. An earlier version did not make clear the precise nature of police access to the Ring system. They have access to Ring footage shared via the Neighbors app.
