Ethics In Tech & Lack Thereof

Sleeping Under The Cell Tower

By Vahid Razavi

The Five Areas of Ethics In Tech

1- Equality in Tech

It’s no secret that the technology industry has a racial and gender equality problem, or that there’s a digital divide in the United States every bit as wide as the gap between rich and poor. And while all levels of government — federal, state, and local — have acknowledged and are taking steps to address the digital divide, and tech companies recognize and are tackling their own equality problem, there is still a tremendous amount of work left to be done.

Much has been made of the “bro culture” that has for so long pervaded Silicon Valley. Documentaries and books like Emily Chang’s Brotopia, an exposé of Silicon Valley sexism replete with salacious depictions of Valley sex parties, have gotten plenty of attention in recent years, and for good reason. And just as the world is gaining glimpses into a microcosm where “business” might include pitching investors in a hot tub, along comes the #MeToo movement and a societal shift in tolerance and expectations.

Unfortunately, according to data compiled by virtual event solutions company Evia, women — who make up more than half of the US workforce — occupy less than 20 percent of all tech jobs. What’s worse, women today hold a smaller share of computer sciences jobs than they did in the 1980s.51 One of the reasons for this is, ironically, tech itself — the share of women in computer sciences plateaued around 1984, and the subsequent decline came just as more and more homes were getting personal computers. A narrative that computers are for boys took hold, reinforced by product development and marketing aimed mostly at men and boys, and it informed and nurtured the burgeoning techie culture. Surveys have shown that girls become interested in tech careers around age 11 but soon thereafter lose interest; experts blame a lack of tech education, a lack of female mentors, and general gender inequality.

As in so many other areas, education is the key to tackling disparities. Nonprofit organizations like Girls Who Code and TechGirlz are working hard to close the gender gap in technology by targeting girls at a young age. The results have been promising — Girls Who Code founder Reshma Saujani claims we’re on track to achieve gender parity in computer sciences by 2027.

Of course there’s a lot more to it than just pay and representation. Women in tech endure situations and even dangers that their male colleagues never face. Bethanye Blount, who later co-founded and became CEO of startup compensation platform Cathy Labs, found herself using keynote speeches at tech conferences not only to discuss her company’s products and services but also to warn women attendees to cover their drinks, because sexual harassment and even assault were all too common at such events. Anita Sarkeesian, who launched a Kickstarter campaign to fund a series of videos examining sexism in video gaming, received rape and death threats for her efforts. Focusing solely on gender pay and representation gaps overlooks the daily trials and tribulations that women in technology must endure.

Race, however, and not gender, remains the bigger obstacle when it comes to attaining executive and management positions in the tech industry. Ascend Foundation, a business organization that represents Asian Americans, analyzed data from the US Equal Employment Opportunity Commission for the San Francisco Bay Area and found that the racial gap in tech leadership positions between white men and minority men was larger than the gender gap between white men and white women. In 2015 white women were 31 percent more likely than Latino men to be executives, 88 percent more likely than Asian men, and 97 percent more likely than Black men.53 While minorities, especially Blacks and Latinos, have made significant progress in penetrating the nation’s tech industry, they are still shamefully underrepresented in computer and math jobs — Blacks held less than 8 percent of all computer and math positions and Latinos less than 7 percent, according to a 2018 Brookings Institution study. Recent trends don’t look very promising, either; Black representation in science and math jobs has actually shrunk over the past decade. I don’t know what the stats are for Persian men, and I’m not the kind of guy who thinks there’s a racial motivation behind every mistreatment or slight, but I’d also be blind if I didn’t consider the role racial prejudice and bias may have played throughout my career.

Statistics and awareness are one thing; taking meaningful action to combat the problem is quite another. Ethics In Tech is a solutions-based endeavor, and the solution here starts with better education: specifically, boosting computer sciences and math education starting as young as possible. We’re not just talking about university-level courses — our country must dramatically expand exposure and access to entry-level tech skills as well. Computer sciences education should be more inclusive of girls and minorities; witness the success of coding programs and boot camps like Black Girls Code and #YesWeCode, and work-oriented skills programs like Year Up. Organizations including Byte Back and Per Scholas are stepping in to offer often overlooked entry-level computer and IT certification courses. Tech companies have launched online certification courses as well. Ethics In Tech supports these programs and looks forward to making its own contributions to solving this persistent problem.

2- Employer-Employee Relations

Much of the problem with employer-employee relations stems from the way business owners and managers in the United States generally view their workers. Much of this country’s labor history has been fraught with struggle as workers have fought for their rights and Big Business has pushed back — often with deadly force. The days of strikes ending in mass casualties and physically hostile bosses may be long over, but as many tech sector workers (especially those at the lower rungs of the economic ladder) can attest, there are plenty of lingering and new injustices today. There is so much work left to be done to improve the overall relationship between the titans of tech and what novelist Douglas Coupland called their “Microserfs.” I’ve traveled around the world, and there’s no doubt that other countries are light-years ahead of the US in employer-employee relations. For contrast, I’d like to revisit the plight of those overworked and underpaid Amazon warehouse workers one more time. I really wonder: What sort of executive board gets together and comes up with policies like forbidding workers from sitting on the job, or giving them just one half-hour lunch break and two 15-minute breaks over the course of a 10-hour shift? What kind of “human resource” should be expected to pack more than 1,200 items per day? Who decided it was ethical to force mandatory overtime shifts on workers and punish those who refuse by docking vacation time?

To me it feels like many of the biggest tech players, the ones that attract the best and brightest recruits, are dangling shiny objects before their workers as part of a concerted effort to propagate what I call the work-life balance myth. Tech firms might let you bring your Maltipoo to work, they might have a kick-ass video game arcade or pool tables or a climbing wall, and their cafeterias might serve to-die-for sushi or quinoa bowls, but there’s no such thing as a free lunch. Can you really call it “work-life balance” if you’re working 50-60 hours a week and sitting in traffic or commuting for another 10 hours on top of that? Can you really maximize worker happiness when employees are treated like cattle, constantly monitored from badge-in to badge-out, with security patrols everywhere? Treatment is even worse for the low-wage — usually subcontracted — custodians, security personnel, receptionists, and others. They are truly second-class citizens at Amazon and other companies. Workers subcontracted to deliver packages for Amazon started blowing the whistle on repeated instances of wage theft in 2017, staging a protest outside the company’s Eagan, Minnesota fulfillment center on a frigid November day. Many of the demonstrators were East African immigrants who fled political, ethnic, and other persecution in Ethiopia, Eritrea, and Somalia, only to discover a new set of problems in the United States. “In my country I was a professional civil engineer, and I was working this job to pay my bills and to be able to study and get back to my profession,” one protester explained. “This has been shocking to see how this country treats immigrants doing work people seem to take for granted.” He continued:

“I am incredibly disappointed not only with the company but also with my experience working in America. This experience is something that too many people face and it has changed my mentality about fairness in America. All of the things I heard about racism and unfairness were shown to be true. My hope is that this company doesn’t represent the rest of the country.”

Sadly, it does represent the general direction in which employment in the US seems to be heading. Witness how leading players in the “sharing economy” are loath to even call their workers employees. Ride-hailing giant Uber’s stupendous global success and wealth are only possible because of the hard work of its “driver partners,” yet the company squeezes those drivers in a race to the bottom of ever-lower earnings and unsympathetic treatment. Ask any random Uber driver and they’ll bristle at the very term “driver partner.” They’ll tell you it’s a one-way “partnership” in which the driver does the grunt work and takes all the risk for the enrichment of a behemoth company that, because its drivers are technically not employees, avoids conferring the rights and benefits to which employees are entitled. Uber doesn’t have to pay its workers a minimum wage or overtime, or offer them benefits. It also avoids lawsuits and other liabilities under its current scheme.

There are hopeful signs that this might be changing soon. A number of lawsuits have been filed against Uber and other “sharing” or “gig economy” companies, and courts are beginning to rule in favor of workers. This hasn’t happened without serious setbacks along the way. In 2015, the California Labor Commission issued a ruling in favor of a former Uber driver, ordering the company to reimburse her for costs she incurred while she was a driver; the decision held that Uber was liable for these costs because its drivers are employees. However, in April 2018 a federal judge in Philadelphia ruled that Uber drivers aren’t employees but rather independent contractors. The judge’s “logic” in the ruling strained credulity — he posited that Uber doesn’t exert enough control over its drivers for them to be considered employees; that because they’re free to turn off the app and pee, grab a bite to eat, nap, or do whatever else they please, they’re really not employees after all. But again, if you ask a full-time Uber driver whose living wage — if she can even earn that much — depends on a bonus pay scheme requiring 40 or more hours of work a week, she’ll likely laugh at the much-ballyhooed driver “freedom” touted by the company. A step forward came shortly after the Philadelphia ruling when the California Supreme Court, in May 2018, essentially scrapped a decades-old test for determining the employment status of a worker, making it much easier to classify people as employees rather than independent contractors. It remains to be seen whether this will prove a transformative ruling, but some sort of major reckoning seems to be drawing closer as the public and the officials they elect grow increasingly eager to put an end to the “Wild West” era of largely unregulated gig economy startups.

Within companies, efforts are being made in fits and starts to improve employer-employee relations. At Amazon, a human resources program called Connections engages employees with daily questions about various work-related issues in a bid to better understand the company’s massive workforce and its concerns. However, some Amazonians have worried that they could face repercussions and reprisals for answering these questions honestly, while others have questioned whether Connections is an effective tool to begin with. Much better received has been Forte, Amazon’s revamped employee review program, which now focuses more on worker strengths than on weaknesses, a notable shift for a company notorious for its often ruthless and Darwinian review and critique process.

As a former Amazonian, I think I am in a position to assert that HR programs might look good on paper, but all the programs in the world can’t make up for some really stupid policies and actions, mistakes that anyone with a bit of common sense ought to be able to avoid. I’ll give you one example: Due to my back issues I was fitted for an ergonomic chair, but believe it or not, as part of Amazon’s cost-cutting principles, two employees are assigned to each desk! Who wants to work for a company where management doesn’t care enough about employees to give them their own desks? Or where the burdens of the workday are so onerous? Who wants to spend eight hours each day in a cold environment where people don’t think about one another except to try to out-compete them?

I was so overworked at Amazon. Some of my partners had whole teams assigned to them; I had to manage those partners with just the help of a part-time solutions engineer. I never felt like I had the support I needed. I had four bosses in one year, for goodness’ sake! Beyond all this, there was just a glaring lack of humanity at Amazon. Don’t get me wrong: a lot of Amazonians are individually caring people, and when I was laid up in the hospital I got a gift basket from my partners wishing me well. But I got nothing from Amazon.

3- The Environmental Impact of Technology Development

According to a recent article in Time magazine concerning a report from Greenpeace, the cloud — and the associated networks that deliver the data — consumes so much energy that if it were a country, it would rank as the world’s sixth-largest consumer of electricity. After this leading environmental protection group called out tech companies about the problem, some of the biggest energy consumers among the bunch — including Apple, Google, Facebook, and others — vowed to dramatically slash energy consumption at their data centers. They followed through on their promise, with some powering their data centers completely from renewable energy sources; Apple and Facebook built new data centers running on 100 percent renewable energy. Amazon, which was at first reluctant to follow suit, soon did so as well, with AWS issuing a low-key yet important statement highlighting its new “long-term commitment to achieve 100% renewable energy usage for our global infrastructure footprint.” By 2018 AWS had achieved 50 percent renewable energy usage. It has half a dozen solar farms in Virginia generating a combined 200 megawatts, as well as a trio of wind farms in North Carolina generating 458 megawatts. These renewable projects are expected to deliver over two million megawatt hours of energy onto the power grid to power AWS data centers in Ohio and Virginia. That is enough electricity to power nearly 200,000 US homes each year, or approximately as many homes as there are in the city of Atlanta, Georgia. Additionally, AWS, which announced its first carbon-neutral region back in 2011, now offers five carbon-neutral regions for its customers to utilize. There’s a ripple effect, too, since AWS servers host Netflix, Pinterest, Spotify, Vine, Airbnb, and many other websites.
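That homes figure is easy to sanity-check. Assuming the two million megawatt hours is an annual figure and that an average US household consumes roughly 10.7 MWh of electricity per year (both are my assumptions; neither number is spelled out in the AWS announcement):

$$
\frac{2{,}000{,}000\ \text{MWh/year}}{10.7\ \text{MWh/household/year}} \approx 187{,}000\ \text{households},
$$

which squares with the “nearly 200,000 US homes” claim.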

The massive size, influence, and financial resources of the leading tech companies make them ideally placed to take the kind of meaningful action needed to combat climate change and the other ecological destruction that threatens the very existence of humanity on Earth. Where governments lack the political will, efficiency, and speed to drive these necessary changes, the tech industry, with its obsession with innovation and its breakneck pace of development, is stepping up to fill the gaps that government can’t or won’t. This isn’t just limited to the United States. Climate change affects the entire planet, and companies like Amazon are implementing solutions across all the countries in which they operate. China, which adds millions of new Internet users with each passing year, is one of the world’s leading actors in the fight against climate change, and Amazon is planning to open new data centers running on renewable energy to keep up with the tremendous and ever-growing demand there.

This is about more than just doing right by the planet. A Google spokesperson told TechRepublic that the company views renewable energy as a business opportunity. But business without ethics (in this case the ethics of planet over profit) isn’t a sustainable model, and Google and other tech companies know it. That’s why Google quit the American Legislative Exchange Council, better known by its infamous acronym ALEC, following companies like Microsoft out of the pro-business lobby. Google explained its decision by accusing ALEC of lying about climate change.

The group rejects the international consensus of around 97 percent of climate scientists that human activity, especially carbon emissions, is driving climate change. ALEC, which says it “provides a constructive forum for state legislators and private sector leaders to discuss and exchange practical, state-level public policy issues,” has drafted sample legislation that would prohibit the implementation of climate-mitigating regulations and laws, including the Trump-scuppered Obama-era Clean Power Plan. ALEC is almost completely funded by corporations, special interest and lobby groups, and trade associations. Its model legislation is often adopted word-for-word by state legislatures, once tragicomically without even removing traces of its provenance. ALEC’s proposals are almost entirely Republican-sponsored; only 10 percent of bills based on ALEC model legislation are sponsored by Democrats. The most common subjects of its model bills are immigration and climate change, followed by guns and crime.

One of the most controversial bills ever put forth by ALEC resulted in the Castle Doctrine Act, which gives occupants of a home the right to protect life and property with the use of deadly force. ALEC also wrote bills outlawing sanctuary cities for undocumented immigrants, bills against disclosing the components of the proprietary fluids used in hydraulic fracturing (fracking), bills against firearm controls, and bills opposing measures to combat climate change, to name but a few.

Public backlash against ALEC’s overtly pro-corporate, anti-planet agenda led to a mass exodus of the group’s members beginning in 2014, when Microsoft — followed by Google, Facebook, Yelp, Yahoo, Uber, and Lyft — said they would either quit or not join the group. Soon even some of the worst corporate actors, including fossil fuel companies, were dropping out. ExxonMobil, which had given ALEC more than $1.8 million (part of the more than $35 million the group has spent on deceiving the public about climate change), even quit the group in 2018.

4- Privacy

There is no doubt that technology is invading our privacy. Your device probably knows more about you than your best friend. As our work and personal lives become more connected and integrated with the technology that powers our world, we often let our guard down. We have become so used to and dependent on pervasive, invasive technology that we’ve grown comfortable with ubiquitous surveillance in one form or another. We store intimate and valuable personal data on our devices without giving too much thought to who else can access it or what they’re doing with it. In the landmark 2014 Supreme Court decision Riley v. California, the majority wrote that “the fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought.” As technology continues to develop faster than lawmakers can guarantee privacy protections, both government and corporations are getting away with tracking and surveilling us in ways that were not too long ago the stuff of science fiction, if they could be imagined at all. They glean immense amounts of data about our locations, communications, online searches, purchases, and much, much more. It’s not just privacy that’s compromised by all of this. Our right to freedom of expression, our security (both online and off), and the equality promised in the Constitution and other documents are all infringed, sometimes in shocking ways. We are all faced with having to choose between embracing the latest technologies and protecting our civil liberties. It shouldn’t be this way, and that’s why privacy and privacy rights are a central focus of Ethics In Tech.

Back in 1999, Sun Microsystems CEO Scott McNealy raised eyebrows and ire in and beyond the tech industry when he infamously called consumer privacy issues a “red herring.” “You have zero privacy anyway,” McNealy told a group of reporters. “Get over it.”59 McNealy’s statement shocked people back then, but it has only grown truer with each passing year. In 2010, Facebook co-founder Mark Zuckerberg essentially reiterated McNealy’s sentiment, updated for the burgeoning social media era, when he declared that privacy was no longer a “social norm.” Zuckerberg asserted that “people have gotten really comfortable not only sharing more and different kinds of information, but more openly with more people.” He noted that when he started Facebook a lot of his fellow Harvard students didn’t get it; they asked him why on Earth they would want to broadcast their private information and images across the Internet.

Of course, we now know it was only a matter of time before Facebook launched Beacon, its controversial advertising system released in 2007 that allowed companies to track users’ online activities.60 This concerned and angered those who were paying attention — admittedly not so many folks in those days — and who understood the wider societal implications of intrusions on online privacy. Electronic Frontier Foundation called changes in Facebook’s privacy policy a ploy to get users to share even more information, accusing the company of actually reducing the amount of control people had over their personal data. The worst part of the new policy, according to EFF, was that Facebook now treated information including names, profile pictures, current city, gender, networks, pages “Liked,” and Friend lists as “publicly available information,” meaning you no longer had the option of keeping it private.

The Office of the Privacy Commissioner of Canada completed an investigation of Facebook’s privacy policies and practices in September 2010, concluding that changes made by the company brought it more in line with Canada’s privacy laws. One of the investigators’ main concerns was that third-party developers of games and other popular apps had virtually unrestricted access to Facebook users’ personal data. While the Canadian inquiry found that Facebook was making strides toward addressing some of its most egregious privacy violations, it remained concerned that the company was expanding the categories of user data it made available for all to see, and that it was not giving users enough control over privacy settings.61

In Norway, the national consumer council has repeatedly taken Facebook to task for these and other practices. A 2018 report accused both Facebook and Google of embedding “dark patterns” — exploitative design choices — into their interfaces in a dubious effort to persuade users to share as much personal data as possible. The Norwegian Consumer Council report concluded:

Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.

The popups from Facebook, Google, and Windows 10 have design, symbols, and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option…

Users are given an illusion of control through privacy settings. Firstly, Facebook gives the user an impression of control over use of third party data to show ads while it turns out that the control is much more limited than it initially appears. Secondly, Google’s privacy dashboard promises to let the user easily delete user data, but the dashboard turns out to be difficult to navigate, more resembling a maze than a tool for user control.

The combination of privacy intrusive defaults and the use of dark patterns nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed, and freely given.

In 2014, the now-infamous UK-based political consulting firm Cambridge Analytica began collecting the personal information of tens of millions of Facebook users, data that was then allegedly used by the company to influence voters in favor of its client politicians. In late 2015 The Guardian reported that presidential candidate Sen. Ted Cruz (R-TX) was utilizing psychological data based on research on what would eventually be revealed to be 87 million Facebook users — data largely taken without users’ explicit consent — in an attempt to gain an edge over his GOP presidential rivals. Cambridge Analytica, which was funded by the reclusive hedge fund billionaire and Republican donor Robert Mercer, used these psychological profiles (as did conservative super PACs backed by the Mercer family) to subject voters to “behavioral micro-targeting” in a bid to help win votes.63 In 2018 Christopher Wylie, a Canadian former Cambridge Analytica employee turned whistleblower, came forward with even more disturbing details regarding the stunning size and scope of the company’s data collection and Facebook’s role in it. He called himself the chief architect of “Steve Bannon’s psychological warfare mindfuck tool.”

In addition to the Cruz and Trump campaigns, Cambridge Analytica also worked to influence the outcome of the 2016 UK Brexit referendum and the 2018 Mexican general election. Facebook CEO Zuckerberg was forced to apologize for what he called a “mistake” and a “breach of trust.” He took out full-page ads in major papers both in the United States and abroad promising “to do better.” He was also subjected to a multi-day congressional grilling from which he emerged chastised yet largely unscathed, even as public trust in the world’s most prominent social media platform eroded considerably. During the congressional hearings, Zuckerberg indicated he was open to ethics regulations in the tech industry, possibly including a rule suggested by Sen. Amy Klobuchar (D-MN) that would require Facebook to notify users of a data breach within 72 hours. This is the standard established in the new General Data Protection Regulation (GDPR) implemented in the European Union in May 2018.

However, if you listen carefully to Zuckerberg’s testimony and apology you’ll notice that while he calls it an “issue” and concedes there was a “breach of trust,” he never uses the term “data breach.” Some current and former Facebook officials argued that users who took personality quizzes and installed other data-leeching apps had actually consented to sharing their data. “Unequivocally not a data breach,” tweeted a defiant Andrew “Boz” Bosworth, Facebook’s consumer hardware VP. “No systems were infiltrated, no passwords or information were stolen or hacked.”

The emergence of the mass surveillance state over the past two decades has arguably been the most disturbing development affecting the privacy of just about everyone on the planet. This is not an overstatement, as former National Security Agency contractor turned whistleblower Edward Snowden revealed in his Earth-shaking leaks beginning in 2013. Snowden copied at least tens of thousands, and perhaps as many as 200,000, NSA and Pentagon documents, many of them classified. He shared them with journalists including Glenn Greenwald, Laura Poitras, and Ewen MacAskill, who over time published them in papers and on news sites around the world, including The Guardian, The Washington Post, The New York Times, and Der Spiegel. The Snowden leaks revealed previously unknown details regarding the scope and scale of the NSA’s global surveillance program, which operated with the close cooperation of British, Australian, and Canadian intelligence. Through its PRISM program, the NSA collected Internet communications with the active help of leading telecom and tech firms including Microsoft, Google, Yahoo, Apple, and Facebook.

The NSA utilized a big data analysis and data visualization tool it called Boundless Informant to catalog its worldwide surveillance, a direct contradiction of the agency’s assurances to Congress and the American people that it was not collecting data on millions of individuals. Also revealed was XKeyscore, a data retrieval system consisting of a series of user interfaces, backend databases, servers, and software. According to Snowden, with XKeyscore:

“You could read anyone’s email in the world, anybody you’ve got an email address for. Any website: You can watch traffic to and from it. Any computer that an individual sits at: You can watch it. Any laptop that you’re tracking: You can follow it as it moves from place to place throughout the world. It’s a one-stop-shop for access to the NSA’s information. You can tag individuals… Let’s say you work at a major German corporation and I want access to that network, I can track your username on a website on a form somewhere, I can track your real name, I can track associations with your friends and I can build what’s called a fingerprint, which is network activity unique to you, which means anywhere you go in the world, anywhere you try to sort of hide your online presence, your identity.”

The NSA spied on the social media profiles of millions of Americans and countless millions more around the world in a truly global effort to discover and track connections between American citizens and suspected terrorists. In service of its never-ending worldwide War on Terror, the US government, with the assistance of major tech and telecom companies, had been engaged in a sweeping and illegal surveillance dragnet of phone and online communications since 2001, including warrantless wiretapping and monitoring of Americans’ communications. Under President Barack Obama, Justice Department officials acknowledged that the NSA was guilty of “overcollection” of domestic communications, but claimed such illegal acts were unintentional and that the practice had been corrected. However, Obama then quietly signed into law a reauthorization of the Foreign Intelligence Surveillance Act (FISA) Amendments Act of 2008, which permits the warrantless wiretapping of phone and electronic communications in which at least one of the parties is a foreigner. At times Snowden’s revelations were downright bizarre — the NSA infiltrated popular online gaming and social media communities including World of Warcraft and Second Life, and it spied on some of America’s closest allies and other world leaders, including the Pope.

Snowden’s bombshell leaks shattered what was already a badly cracked facade of privacy. People today — especially the younger generations — have almost no expectation of privacy. According to a 2015 Pew Research poll, only 6 percent of Americans were “very confident” that government agencies would safeguard the privacy of their personal information, with the same percentage expressing high confidence that telecom companies would do likewise. More than two-thirds of respondents said they had little or no confidence that social media sites, search engines, or online video sites would protect their data.

Of course, the erosion of privacy isn’t just limited to our online lives. Just about everything you do is being watched or tracked. Take a walk down the street of any major city and you’ll see buildings bristling with cameras. We’re constantly being watched in cities and towns big and small across the nation. Fear of terrorism and the growing affordability of closed-circuit television (CCTV) cameras and other technology have fueled the trend toward the ubiquitous surveillance we more or less take for granted. Indeed, an April 2013 New York Times/CBS poll found that Americans overwhelmingly favor installing video surveillance cameras in public places. Fully 78 percent of respondents said surrendering their privacy and civil liberties was an acceptable tradeoff for greater security.

Granted, this was just a week after the Boston Marathon bombing, but one is left to wonder whether those 78 percent have ever heard Benjamin Franklin’s prescient warning that “those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” In that case, the town of Tiburon, California, a short drive north of San Francisco’s Golden Gate Bridge, surely must not deserve either. The affluent bayside community of 9,100 residents, almost all of them white, is served by just two roads in and out of town. Tiburon is an extremely safe place, yet its Town Council decided to spend $200,000 to place six security cameras (critics call them “insecurity cameras”) along those roads to monitor every vehicle entering or leaving town. Police run license plate checks on individual vehicles all the time, but civil libertarians are alarmed by what they call “scope creep,” which raises serious Fourth Amendment questions: these cameras not only record your license plate, they also run the plate number through a law enforcement database and cross-check it against vehicles known or believed to be involved in crimes. The most famous crime caught on camera in Tiburon was the 2011 theft of celebrity chef Guy Fieri’s yellow Lamborghini by a local teenager. But in a super-safe town where there hasn’t been a murder in a decade and where fewer than one car a month was stolen before the cameras were installed, many concerned residents and civil liberties groups near and far have vehemently opposed such mass surveillance. “Comprehensive location tracking, which is made possible by advances in technology such as these license-plate readers, reveal all kinds of intimate details about a person’s life,” Stephanie Martin, an ACLU attorney, told KQED. “Visits to the Alcoholics Anonymous meeting, the gay bar, the union hall, the abortion clinic, and so on.”
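To make the “scope creep” worry concrete, here is a minimal sketch, in Python, of the lookup logic at the heart of an automated license-plate reader system. Everything in it — the plate numbers, the list names, the check_plate and log_read helpers — is hypothetical, not Tiburon’s actual software; the point is how trivially a new watch list, or indefinite retention of every read, can be bolted on once the plumbing exists:

```python
# Hypothetical sketch of an automated license-plate reader (ALPR) hotlist check.
from datetime import datetime, timezone

# Hotlists each plate is cross-checked against. Nothing in the code itself
# stops an operator from adding, say, a list of cars seen near a protest or
# a clinic -- that is the "scope creep" civil libertarians fear.
HOTLISTS: dict[str, set[str]] = {
    "stolen_vehicles": {"7ABC123", "4XYZ789"},
    "felony_warrants": {"2DEF456"},
}

def check_plate(plate: str) -> list[str]:
    """Return the names of every hotlist containing this plate."""
    return [name for name, plates in HOTLISTS.items() if plate in plates]

def log_read(plate: str) -> None:
    """Record every read, hit or not. The retained timestamps and plates add
    up to the comprehensive location tracking the ACLU warns about."""
    hits = check_plate(plate)
    stamp = datetime.now(timezone.utc).isoformat()
    print(stamp, plate, hits if hits else "no hit")

log_read("7ABC123")  # matches the stolen_vehicles list
log_read("5GHI000")  # no hit, but the read is still logged
```

Note that the no-hit case is logged anyway; it is the retention of innocent reads, not the hotlist matching, that turns a crime-fighting tool into a location-tracking one.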

Back across the Golden Gate Bridge in San Francisco it’s much more than just a few cameras monitoring incoming and outgoing traffic. “There are not a lot of spots left where there’s not some sort of private or public surveillance camera,” Nadia Kayyali of Electronic Frontier Foundation told CBS San Francisco. “The idea that you can sort of meet in a public place and quietly have a conversation… is really not realistic anymore.” Indeed, according to San Francisco police, the average city resident appears on camera dozens of times every day, depending on their movements. Kayyali argues that the level of monitoring to which we are all subjected on a daily basis is outpacing the NSA’s surveillance.

The rise of facial recognition technology raises the prospect of even more invasive and pervasive surveillance, both public and private in origin. Of course, such new technologies are always touted as a way to decrease terrorism or crime and to increase security. Such is the case with Face-Int, a database of facial profiles of thousands of terrorism suspects harvested from social media sites including Facebook and YouTube as well as other online forums. Face-Int was created by a company called Terrogence, which was bought by the Israeli firm Verint in 2017. More than 35,000 online videos and photos were stored and analyzed for Face-Int, making Terrogence yet another tech company that, like Cambridge Analytica, was able to secretly capitalize upon Facebook’s openness for its own dubious purposes.74 It doesn’t take much imagination to envision a “scope creep” scenario in which Face-Int’s use is expanded beyond terrorism and into the realm of the political — or even for the benefit of corporations or other private actors. Tech companies including Amazon have positioned themselves to be major players in facial recognition technology marketed to law enforcement, a development that even many workers at those companies vehemently oppose. In June 2018 Amazon employees sent a letter to Jeff Bezos asking him to order the company to stop selling facial recognition software to law enforcement agencies and to end its business dealings with Palantir, a data mining company started by Peter Thiel. Thiel is a billionaire investor who co-founded PayPal and who curiously considers himself a libertarian. Palantir develops predictive policing tools, the kind of technology that was until quite recently the stuff of sci-fi films like “Minority Report.” But “pre-crime” has gone from fantasy to reality, with firms like Palantir and Amazon at the cutting edge. More than 100 Amazon employees signed the letter, which read:

Dear Jeff,

We are troubled by the recent report from the ACLU exposing our company’s practice of selling AWS Rekognition, a powerful facial recognition technology, to police departments and government agencies. We don’t have to wait to find out how these technologies will be used. We already know that in the midst of historic militarization of police, renewed targeting of Black activists, and the growth of a federal deportation force currently engaged in human rights abuses — this will be another powerful tool for the surveillance state, and ultimately serve to harm the most marginalized. We are not alone in this view: Over 40 civil rights organizations signed an open letter in opposition to the governmental use of facial recognition, while over 150,000 individuals signed another petition delivered by the ACLU.

We also know that Palantir runs on AWS. And we know that ICE relies on Palantir to power its detention and deportation programs. Along with much of the world we watched in horror recently as U.S. authorities tore children away from their parents. Since April 19, 2018 the Department of Homeland Security has sent nearly 2,000 children to mass detention centers. This treatment goes against U.N. Refugee Agency guidelines that say children have the right to remain united with their parents, and that asylum-seekers have a legal right to claim asylum. In the face of this immoral U.S. policy, and the U.S.’s increasingly inhumane treatment of refugees and immigrants beyond this specific policy, we are deeply concerned that Amazon is implicated, providing infrastructure and services that enable ICE and DHS.

Technology like ours is playing an increasingly critical role across many sectors of society. What is clear to us is that our development and sales practices have yet to acknowledge the obligation that comes with this. Focusing solely on shareholder value is a race to the bottom, and one that we will not participate in. We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used. We learn from history, and we understand how IBM’s systems were employed in the 1940s to help Hitler. IBM did not take responsibility then, and by the time their role was understood, it was too late. We will not let that happen again. The time to act is now. We call on you to:

1. Stop selling facial recognition services to law enforcement.

2. Stop providing infrastructure to Palantir and any other Amazon partners who enable ICE.

3. Implement strong transparency and accountability measures that include enumerating which law enforcement agencies, and companies supporting law enforcement agencies, are using Amazon services, and how.

Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations.

Sincerely, Amazonians

Of course our government has been using technology to surveil us for a very long time — and it’s been watching some of us a lot more than others. Perhaps the all-time classic example of this is the FBI’s COINTELPRO (COunter INTELligence PROgram). Originally launched by FBI Director J. Edgar Hoover in 1956 to monitor and thwart US communists, within two months the program was targeting the growing civil rights movement. In one of the most dubious acts of surveillance in US history, Robert F. Kennedy, attorney general during the John F. Kennedy administration, authorized the wiretapping of the Rev. Martin Luther King Jr.’s phone. William Sullivan, the head agent in charge of COINTELPRO, reflected the prevalent thinking among American law enforcement when he warned that “we must mark [King]… as the most dangerous Negro of the future in this nation.” Mind you, this was right before the civil rights icon was awarded the Nobel Peace Prize for his leading role in the nonviolent struggle against Jim Crow segregation, disenfranchisement, and other racial injustices. Under COINTELPRO the FBI bugged King’s home and hotel rooms, then sent anonymous letters in a bid to encourage him to kill himself. Agents even sent King’s wife a recording of what they claimed was her husband with an alleged mistress. When King slammed the FBI for ignoring horrific and sometimes murderous crimes committed with impunity by members of the still-powerful Ku Klux Klan, Hoover was infuriated. He even called the honorable King the nation’s “most notorious liar.” As is still the case in the United States today, identifying and condemning racism was too often seen as a worse offense than the racism itself. Would you expect it to be any other way in a nation built on a foundation of genocide and slavery?

That other slain giant of the 1960s civil rights movement, Malcolm X, was also targeted under COINTELPRO. FBI agents infiltrated his Organization of Afro-American Unity during the final months of his life, fomenting division and conflict among the group’s members in the lead-up to his assassination. Other targeted groups included the Black Panther Party, the American Indian Movement, the Nation of Islam, the Socialist Workers Party, the Ku Klux Klan, the National Lawyers Guild, the women’s liberation movement, and other groups and causes on both the Left and the Right.77 But the very worst of the surveillance, infiltration, psychological operations, illegal harassment, violence, and even assassinations (most notably of Black Panther leaders Fred Hampton and Mark Clark in Chicago in 1969) carried out under COINTELPRO and associated programs was reserved for Black people fighting for the justice and equality long promised but long denied by their country.

Too many people think such unconscionable — indeed unconstitutional — racial surveillance is a thing of the distant past. It’s not. Federal, state, and local law enforcement have infiltrated, tracked, or monitored Black Lives Matter and other groups. During the 2014 BLM protests surrounding the fatal police shooting of Michael Brown in Ferguson, Missouri, FBI agents tracked activists as they traveled across the country to participate in the demonstrations.

And it’s not just racial justice groups. Documents obtained by The Intercept shed light on surveillance activities that went far beyond the online intelligence gathering that had previously been reported. Although heavily redacted, the documents suggested that federal agents had staked out the homes and vehicles of people associated with Black Lives Matter. In Memphis, police went so far as to create fake social media accounts to spy on BLM; upon admitting this, the city’s chief legal officer called such surreptitious surveillance “simply good police work.” In one somewhat humorous incident, Massachusetts State Police accidentally tweeted a photo of a computer screen with progressive groups bookmarked in the browser window, confirming the suspicions of activists in Occupy, Black Lives Matter, Antifa, and other social justice movements.

There is also a long history in the United States of government and private companies or interests working together to draft watch lists and databases of targeted groups and individuals. But especially since 9/11, it seems as if we are constantly being watched. Our government watches us. Corporations watch us, especially the ones we work for. Our devices watch us — I remember laughing when a college professor predicted back in the 1990s that one day even our toilets and refrigerators would monitor us. Well, the Internet of Things is here, and it’s here to stay. While my own fridge is of the old-school variety, I can report that plenty of refrigerators and other home appliances are now Internet-connected. And yes, at the 2018 Consumer Electronics Show in Las Vegas, Kohler unveiled its Numi multifunctional toilet, powered by Amazon’s Alexa. You can ask Numi to prep the bidet, play songs from your stored playlists (it’s got speakers), and — in a potential game-changer for romantic relationships everywhere — even lower the seat!

If there’s a silver lining to global mass surveillance, it’s that people all over the world are now aware of it and pushing back against its excesses. Informed people really can make a difference, as we saw in 2012 during the coordinated protests against a pair of proposed laws, the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), which critics, including myself, argued would have infringed on online free speech. Organized by Fight for the Future, the protests — both online and off — involved over 100,000 websites and millions of people. Millions of Americans emailed their members of Congress in opposition to the bills. A Google petition collected more than 4.5 million signatures. Wikipedia and Reddit led the online protests by temporarily shutting down their sites and redirecting users to a page opposing the bills. Other sites — including Google, Flickr, and Mozilla — prominently expressed their opposition, while companies that supported the measures, including GoDaddy, were targeted with boycotts. Amid all this, the Obama White House put out a statement promising the administration would “not support legislation that reduces freedom of expression, increases cybersecurity risk, or undermines the dynamic, innovative global Internet.” The protests worked; neither SOPA nor PIPA became law. We would do well to remember the level of vigilance and action necessary to defeat measures that infringe upon our rights. As EFF said, “If the victory against SOPA/PIPA taught us anything, it’s that whether or not the Internet will remain a place that everyone can access reliably and affordably to share, connect, and create freely depends on us.”

Another evolving fight over privacy involves terms of service. In May 2018 the Supreme Court ruled unanimously in Byrd v. United States that the driver of a rental car had a reasonable expectation of privacy in the vehicle even though he wasn’t authorized to operate it under the terms of the rental agreement. In the case at hand, Pennsylvania state troopers pulled over Terrence Byrd, who was driving a rental car his girlfriend had loaned him; she was the only driver authorized by the rental agreement. The troopers used this as justification to search the vehicle, in which they discovered 49 bricks of heroin. Byrd claimed the search violated the Fourth Amendment’s prohibition of unreasonable search and seizure. Two lower courts ruled against him, but the Supreme Court decided in his favor. In essence, the high court was saying that your expectation of privacy shouldn’t depend on the fine print in a rental contract or other agreement. In other cases, however, courts — including the Supreme Court — have ruled that merely agreeing to terms of service can waive your Fourth Amendment rights. Such was the case in United States v. DiTomasso, in which AOL reported a man for online transmission of child pornography. The court ruled that:

A reasonable person familiar with AOL’s policy would understand that by agreeing to the policy, he was consenting not just to monitoring by AOL as an ISP, but also to monitoring by AOL as a government agent. Therefore, DiTomasso’s Fourth Amendment challenge fails as to the emails.

The DiTomasso ruling cites Smith v. Maryland, in which the Supreme Court distinguished between the “contents of communication” and the ancillary information incidentally disclosed by such communication. The former — the content — is protected under the Fourth Amendment, while the latter, often metadata, forfeits such constitutional protection upon disclosure. DiTomasso also cites United States v. Jones, in which the Supreme Court found that, under the Fourth Amendment, installing a GPS tracking device on a vehicle and monitoring its movements amounts to a search. Justice Sonia Sotomayor wrote that:

It may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties… This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.

5- Artificial Intelligence

AI is the hot topic around town these days, with seemingly everyone abuzz about the progress, possibilities, and perils of artificial intelligence development. The burgeoning creation of thinking machines excites humanity with possibilities, but it also raises serious ethical questions that must be addressed early on. Not only are ethicists concerned with ensuring machines do no harm to humans, they must also grapple with questions of the moral status of the machines themselves. As workplace automation continues apace, how can we best manage the inevitable disruption and displacement that will result? How do we best distribute the wealth generated by machines? How do we guard against AI bias (yes, it’s a thing; a sketch follows below)? How do we protect AI against adversaries? How will AI affect the way we behave and interact, not only with other humans but with AI as well? How can we safeguard against AI mistakes, and how on Earth will we proceed once the singularity (the point at which AI evolves past the level of human intelligence into what some believe will be a superintelligence that could mean the end of humanity) is reached? How do we manage unintended consequences? What rights, if any, will we bestow upon the intelligent machines we create?87 These and other critical questions must be answered if we are to proceed ethically and responsibly with AI development.
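For readers who want to see what “AI bias” looks like in practice, here is a minimal sketch in Python using entirely synthetic data — the “skill” and “group” features and every number are invented for illustration, not drawn from any real hiring system. A model trained on historical decisions that favored one group will cheerfully reproduce that favoritism, even when the two groups are identical on merit:

```python
# Sketch: a model trained on biased historical decisions reproduces the bias.
# All data here is synthetic; "skill" and "group" are hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)      # identically distributed in both groups

# Historical labels: group 0 was hired more often at the same skill level.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 1.0, n)) > 0.5

# Train on skill AND group membership -- the model learns the old prejudice.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate for group {g}: {rate:.2f}")
# Despite identical skill distributions, group 1's predicted hire rate comes
# out markedly lower -- "bias in, bias out" in two dozen lines.
```

Note that simply dropping the group column helps less than you might hope whenever other features act as proxies for it, which is why auditing a system’s outcomes, not just its inputs, matters.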

As a spiritual man and as a student of the late Rev. Dr. Howard Thurman, I believe that all life is holy. If all life is holy, then AI should be ethically bound to the preservation of life. “Thou shalt not kill” should be one of the fundamental principles guiding the development of AI. If it is, then it is highly unethical to work for or promote the tech companies that do not respect this core guiding principle. There are tech companies creating instruments of war, instruments whose only purpose is to kill other human beings. Many of these companies have grown fantastically wealthy offering Death and Destruction as a Service. Others have been complicit in maintaining and promoting the machinery of systemic oppression. Unfortunately, war and oppression are highly lucrative endeavors, and both corporate executives and Pentagon planners, those twin pillars of the military-industrial complex, are drooling over the possibilities of profit and power they see in artificial intelligence.

When most people think of AI and the battlefield of the future, they imagine scenes from The Terminator or other sci-fi flicks of their youth. What’s more likely in the shorter term is that AI will be an enabler, like electricity or the internal combustion engine in earlier eras. AI will improve the accuracy and efficacy of weapons systems, making them faster, more agile, and better able to respond quickly to rapidly shifting facts on the battlefield. The ability to rapidly process the overwhelming flood of incoming data during wartime will be critical to achieving mission success, and AI is the key to realizing that ability. Of course, at some point Skynet, the AI overlord of The Terminator franchise, will become a very real possibility. Elon Musk and the late, great Stephen Hawking are among the many leading minds who have said they fear AI could spark World War III or worse. With stakes as high as the potential extinction of humanity, wouldn’t it be prudent to proceed according to well-defined and universally accepted ethics rules for AI?

The military-industrial complex has given us some of the most important things in our lives today. The now-indispensable Internet, for example, was born half a century ago from the Defense Department’s Advanced Research Projects Agency (ARPA). It would be naive to advocate the total dismantling of the military-industrial complex, but ethics are more important than ever in this area. Just as society developed an ethical framework to govern use of the Internet, it must now devise rules governing the use of artificial intelligence. And just as watchdogs, whistleblowers, and other activists sound the alarm whenever government, corporate, or other powers attempt to flout the rules (the fight over net neutrality comes to mind), we must remain ever vigilant in the face of rapidly developing AI technology and its burgeoning applications.

Much more than the military — which is, after all, in the business of killing people — tech companies are wrestling with and devising guidelines for the ethical development and use of artificial intelligence. Google recently released a set of AI ethics principles that include being socially beneficial, avoiding algorithmic bias, being safe and accountable, and protecting privacy. Google explicitly stated that it would not develop or deploy AI for weapons, illegal surveillance, violations of international law or human rights, or “technologies that cause or are likely to cause overall harm.” While Google was widely applauded for addressing concerns regarding the ethical use of AI, digital rights advocates including Electronic Frontier Foundation have argued that ethics policies like Google’s don’t go far enough. Google, for example, hasn’t committed to any sort of independent, transparent review to ensure it adheres to its own ethics principles. And while Google’s ethics code commits the company to avoiding human rights abuses, its willingness to bend to the will of the Chinese government in order to do business in one of the world’s biggest consumer markets gives the lie to its own stated principles.

As I said, when most people imagine the potential dangers of AI, they picture a “Terminator”-type situation in which “awakened” machines far surpass humans in intelligence and then either exterminate us or make us their slaves — or, worse, their food source à la “The Matrix.” The actual dangers we are likely to face from AI are far less apocalyptic yet still quite perilous. Take deepfakes, for example: AI software that renders artificial facial imagery. A malicious actor could use deepfake software to create fake pornographic content in which a celebrity’s face — or yours — is digitally superimposed onto the body of an adult film actor in an X-rated video. Ariana Grande, Gal Gadot, and Taylor Swift are among the celebrities who have already been targeted.
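For the technically curious, the classic face-swap approach behind the original deepfake code pairs one shared encoder with two identity-specific decoders. Below is a deliberately toy sketch in Python (PyTorch); the tiny fully connected layers and 64x64 input are my own simplifications and nowhere near a working deepfake tool, but the swap trick — encode person A, decode with person B’s decoder — is the real idea:

```python
# Toy illustration of the shared-encoder / two-decoder face-swap idea.
# Architecture sizes are hypothetical simplifications.
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    # Shared by both identities: learns a generic face representation.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())

def make_decoder() -> nn.Module:
    # One per identity: learns to render that person's face.
    return nn.Sequential(nn.Linear(256, 64 * 64), nn.Sigmoid())

shared_encoder = make_encoder()
decoder_a = make_decoder()  # would be trained only on faces of person A
decoder_b = make_decoder()  # would be trained only on faces of person B

# Training (omitted here) reconstructs A through decoder_a and B through
# decoder_b, forcing the shared encoder to capture pose and expression.

# The swap: encode a frame of person A, then decode it with B's decoder,
# rendering person B's face with person A's pose and expression.
frame_of_a = torch.rand(1, 1, 64, 64)   # stand-in for a video frame
with torch.no_grad():
    fake_b = decoder_b(shared_encoder(frame_of_a))
print(fake_b.shape)  # torch.Size([1, 4096]) -- a flattened 64x64 image
```

The unsettling part is how little is required: consumer hardware, publicly scraped photos, and freely available code are enough, which is exactly why the celebrity cases above appeared so quickly.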
