Ethics In Tech & Lack Thereof

Sleeping Under The Cell Tower

By Vahid Razavi

Bugsplat

“Thou shalt not kill” keeps playing over and over again in my head as I think about some of the more sinister advances and applications of technology. Killing is made a lot easier when you don’t even regard the people you’re killing as fully human. Dehumanization has always been a part of armed struggle, but technology allows this species of ours to satiate its bloodlust in a sanitary manner never before witnessed in the annals of warfare. Sanitary for the killer, at least. Applying technology to the business of death and destruction carries inherently dehumanizing effects. Take the example of unmanned aerial drone warfare. Most people don’t know that the US drones raining so much destruction upon militants and innocent civilians alike, in an arc stretching from North Africa through the Middle East to South Asia, aren’t locally controlled. They’re operated from facilities in the United States, where kids from the second video game generation are commanded by officers from the first to execute life-and-death missions. They do this from the comfort and safety of air-conditioned control rooms half a world away from the men, women, and children they’re killing. It’s easy enough to dehumanize the “enemy” in wartime when you’re facing them on the field of battle. It’s even easier to rationalize killing, maiming, terrorizing, and displacing people when you don’t have to see their faces or understand anything about their lives because you’re blowing them up from 8,000 miles away. Before too long you find yourself fully immersed in the perversely absurd, and you must choose whether you will swim in the shit or drown in it.

Iraq is a country that had absolutely nothing to do with the September 11, 2001 terror attacks, but it was nevertheless cited to justify Bush’s illegal war. In preparation for the 2003 US-led invasion and occupation, the Pentagon’s war planners created a computer model to estimate how many innocent civilians were likely to be killed in a given aerial bombardment. They called their program “Bugsplat.” That was also what some of them called their victims. When the military was planning its “shock and awe” campaign of massive bombardment of heavily populated central Baghdad, Gen. Tommy Franks was informed of 22 proposed targets where bombing would likely result in “heavy bugsplat,” meaning more than 30 civilian deaths per raid. The good general approved all 22 bombings.

There is another valuable public function of Ethics In Tech, and that’s raising public awareness. Just today I was discussing how cloud computing giant Salesforce may be profiting from one of the Trump administration’s internationally condemned policies. The company may be gaining revenue from the administration’s cruel tactic of separating undocumented refugee children — some of them just infants — from their parents in a bid to deter families from fleeing deadly violence and economic privation (much of which was caused by decades of US foreign policy). Salesforce has a lucrative contract with US Customs and Border Protection, which uses the company’s products in committing Child Abuse as a Service. Most people don’t think of this when they think of Salesforce. If they’re in the industry, they might think of its customer success stories or of Dreamforce, the company’s annual extravaganza conference in San Francisco. If they’re from the Bay Area, they’ll probably think of the Salesforce Tower, San Francisco’s first supertall skyscraper, which now looms above the rest of the glistening city skyline. But very few people, if any, are thinking about Salesforce’s role in bolstering the Trump administration’s racist policies and actions targeting both documented and undocumented immigrants. That’s because they don’t know about it. Part of what Ethics In Tech does is raise public awareness about the crimes and misdeeds of tech companies and other industry actors that the public generally views positively or neutrally.

Considerable ink has been expended exploring ethical issues arising from the intended uses of technology, but we must also consider and plan, as best we can, for the unintended consequences and uses of tech. Take Facebook, for example. When its stock plummeted amid allegations of data misuse and fallout from the Cambridge Analytica scandal, it didn’t just affect Facebook; it was a drag on nearly the entire tech sector. And who would have imagined, back when Facebook launched, that it would come to be used as one of the leading platforms for online bullying? Some have argued that tech is in a state of crisis that requires skilled crisis managers to find solutions. Ian I. Mitroff, professor emeritus at the University of Southern California’s Marshall School of Business and Annenberg School for Communication, asserts that we need a government agency similar to the Food and Drug Administration to monitor the social impact of technology and protect society from its dangers:

We must establish panels composed of parents, social scientists, child development experts, ethicists, crisis management authorities — and kids — to think of as many ways as they can in which a proposed technology could be abused and misused.

Mitroff argues that this isn’t just the right move ethically; it’s also good for companies’ bottom lines:

Ideally, tech companies would do this on their own. Indeed, research has shown that companies that are proactive in anticipating and planning for crises are substantially more profitable than those that are merely reactive. Crisis management is not only the ethical thing to do, it’s good for business; it heads off major crises before they are too big to fix.
