Former FTC chief technologist Ashkan Soltani believes it is time for companies to formalize and test not only the security of a product but also how it can be abused.
TECHNOLOGY HAS NEVER RESTRICTED ITS EFFECTS TO THOSE INTENDED BY ITS CREATOR: it disrupts, reshapes, and backfires. Even as the unintended consequences of innovation have multiplied in the twenty-first century, tech companies have frequently relegated thinking about technology's second-order effects to the occasional embarrassing congressional hearing, scrambling to prevent unexpected abuses only after the damage has been done. According to one Silicon Valley watchdog and former federal regulator, that is no longer acceptable.
Former FTC chief technologist Ashkan Soltani plans to speak on Monday at the USENIX Enigma security conference in Burlingame, California, about an overdue reckoning for move-fast-and-break-things tech firms. He believes it is time for Silicon Valley to treat the potential for unintended, malicious use of its products with the same seriousness that it treats their security. From Russian disinformation on Facebook, Twitter, and Instagram to YouTube extremism to drones grounding air traffic, Soltani argues that tech companies must consider abusability: the possibility that people will use their technology to harm others or the world.
“There are hundreds of examples of people finding ways to use technology to harm themselves or others,” Soltani said in an interview ahead of his Enigma talk. “We need to consider all of the possible outcomes. Not only in ways that hurt us as a company but also in ways that hurt those who use our platforms, as well as other groups and society.”
There is precedent for this kind of paradigm shift. As Soltani points out, many software companies did not invest heavily in security until the 2000s, when they began to take the threat of hackers seriously. They began hiring their own security engineers and hackers, and audits for hackable vulnerabilities in code were elevated to a critical component of the software development process. Today, most serious tech firms not only attempt to break their code’s security internally but also bring in external red teams to hack it, and even offer “bug bounty” rewards to anyone who alerts them to a previously unknown security flaw.
“Security guys were once considered a cost center that got in the way of innovation,” Soltani recalls from his previous job as a security administrator for Fortune 500 companies. “Fast forward 15 or 20 years, and we’re in charge.”
When it comes to abusability, however, tech companies are just getting started. Yes, major technology companies such as Facebook, Twitter, and Google have large anti-abuse teams. But those teams are frequently reactive, relying heavily on users to report inappropriate behavior. According to Soltani, most companies still do not devote significant resources to the problem, and even fewer hire external consultants to assess their abusability. An outside perspective, he contends, is essential for thinking through the unintended uses and consequences that new technologies create.
He points out that Facebook’s role as a disinformation megaphone in the 2016 election demonstrates how even a large team dedicated to stopping abuses can be blind to the most devastating ones. “Historically, abuse teams were primarily concerned with abuse on the platform itself,” Soltani explains. “Now we’re talking about societal and cultural abuse, as well as abuse of democracy. I would argue that Facebook and Google did not begin with abuse teams thinking about how their platforms could be used to abuse democracy and that this is a new phenomenon in the last two years. That is something I would like to formalize.”
According to Soltani, some tech companies are beginning to address the issue, albeit often belatedly. Following 2016, Facebook and Twitter removed thousands of disinformation accounts. WhatsApp, which has been used to spread violent calls and false news from India to Brazil, finally restricted mass message forwarding earlier this month. DJI has placed geofencing limits on its drones to keep them out of sensitive airspace, in an attempt to avoid disasters such as the paralysis of Heathrow and Newark airports caused by nearby drones. These are all cases, Soltani says, where companies were able to limit abuse without limiting their users’ freedoms: Twitter, for example, did not need to ban anonymous accounts, nor did WhatsApp need to weaken its end-to-end encryption.
“I believe Black Mirror has done more to educate people about the potential pitfalls of artificial intelligence than any White House policy paper.” – ASHKAN SOLTANI
Soltani believes that such lessons must now be applied at every tech firm, just as security flaws are formally classified, checked for, and scrubbed out of code before it is released or exploited. “You need to define the problem space and the history in order to compile and classify different types of attacks,” Soltani says. Even more importantly, tech companies must work to anticipate the next type of sociological harm that their products may cause before it occurs, rather than after the fact.
That kind of prediction can be extremely difficult, and Soltani recommends that tech companies consult those who make it their job to predict unintended consequences of technology: academics, futurists, and even science fiction authors. “We can use art to consider potential dystopias that we want to avoid,” Soltani says. “I believe Black Mirror has done more to educate people about the potential pitfalls of artificial intelligence than any White House policy paper.”
During his time at the FTC—first as a staff technologist in 2010 and then as its chief technologist in 2014—Soltani was involved in the commission’s investigations into privacy and security issues at Twitter, Google, Facebook, and MySpace, the types of cases that have highlighted the FTC’s growing role as a Silicon Valley watchdog. In several of those cases, the FTC placed the companies “under order” for making deceptive claims or engaging in unfair trade practices, a type of probation that has resulted in tens of millions of dollars in fines for Google and will almost certainly result in far more for Facebook as punishment for the company’s latest privacy scandals.
However, Soltani believes that such regulatory enforcement will not solve the abusability problem. Victims of the indirect abuse he warns about frequently have no relationship with the company, making accusations of deception impossible. Even in the absence of an immediate regulatory threat, though, Soltani contends that companies should be concerned about reputational damage or knee-jerk government reactions to the next scandal. As an example of the latter, he cites the contentious FOSTA anti-sex-trafficking law, passed in early 2018.
All of this means that Silicon Valley must devote the same thought and resources to abusability that it has for years devoted to security, not to mention growth and revenue. “There are opportunities to at least inform some of the known unknowns in academia, research, and science fiction,” Soltani says. “And possibly some of the unknown unknowns as well.”