Rob Reich, Mehran Sahami and Jeremy M. Weinstein are professors at Stanford University and the authors of System Error: Where Big Tech Went Wrong and How We Can Reboot. The opinions expressed in this commentary are their own.
Frances Haugen’s spellbinding testimony before Congress last week riveted the attention of lawmakers and the country. She claimed that Facebook’s own research found that the platform is harmful to teenage girls and amplifies dangerous speech that has led to violence and death, but that those findings were ignored or buried. The testimony — which CEO Mark Zuckerberg has said created a “false picture of the company” — followed her explosive whistleblowing about how Facebook prioritizes engagement over safety and profits over people. Haugen’s revelations drew comparisons to the tobacco company insiders who testified about the addictive qualities of cigarettes and what Big Tobacco knew about them.
It’s hard not to see parallels between Haugen’s declaration of Facebook’s “moral bankruptcy” and the branding of tobacco companies as irredeemable organizations that could not be trusted. But is Facebook really no different from a tobacco company — peddling a fundamentally toxic product that is addictive, unhealthy and has powerful second-hand effects on others, one that no amount of tweaking and tuning could possibly reform?
We don’t think so. Facebook itself is not the problem. Neither is Mark Zuckerberg. The problems of algorithmic amplification and prioritizing engagement over safety are ubiquitous on social media. We see the same fundamental dynamic in other platforms built around user-generated content such as YouTube, Twitter, Snapchat and TikTok. What’s more, we see a similar dynamic in end-to-end encrypted messaging services like WhatsApp (owned by Facebook), Signal and iMessage, which optimize for user privacy, but leave those services open to potential abuse by child pornographers, human traffickers and terrorist organizations, despite the companies saying they try to remove such content and to encourage their users to report it.
The problems in Big Tech are systemic, not just the fault of a single company. New technologies continue to transform how we live, work and engage with one another, yielding enormous benefits. But the push to scale new technologies quickly and achieve market dominance makes it even more likely that societal harms aren’t fully considered until the negative consequences become evident and inescapable. Examples abound. There are facial recognition tools that offer the hope of greater safety at a significant risk of misuse by law enforcement agencies, and ride-sharing apps that deliver seamless transportation options while leaving gig workers vulnerable, without the wage and benefit protections given to regular employees. The companies have said they’re committed to providing minimum earnings and health care benefits, but drivers claim they haven’t kept those promises.
So what’s to be done? It appears from Zuckerberg’s note to Facebook staff that he understands more than apologies are needed — and that he recognizes these companies have lost the public trust and legitimacy to manage the negative consequences of the technologies they produce. Self-regulation as a pathway out of this mess is a dead end.
But while Republicans and Democrats demonstrated unusual unity on their dislike of Facebook, the big question is whether they can agree on what government should do next.
There are a lot of places to start. Part of the reason we find ourselves loving to hate Facebook is that there don’t seem to be many other options available. After all, we’ve spent so much time building our friend networks and posting on the platform that rebuilding it all on some other social network hardly seems reasonable. If we want meaningful alternatives, we need mandated standards for interoperability between social networking platforms, so that new competitors can offer useful alternatives to the entrenched players. That means new social networking apps would be able to access your friend networks on Facebook, Twitter or anywhere else if you (and your friends) give permission. This would make it easier for you to switch to a new platform that, say, offered better privacy guarantees or promoted less toxic content, while leveraging all the friendships or followers you had already built on Facebook or Twitter. That not only means more platform options that might better match your preferences, but it also creates competitive pressure for Facebook and other social media companies to improve their own policies.
Congress should pass federal privacy legislation that demands transparency about the personal data that social media companies collect and requires them to fully inform users — in accessible language — while seeking consent regarding how that data can be used. It should mandate data portability across platforms, which would enable users to easily bring their personal data to a new platform (say, to move your profile from Facebook to a different app). And it should require those building algorithms to check them for bias and assess potential harms, and then publicly report on their findings. Even a heavily polarized Congress should be able to agree that it’s a good idea for citizens to know how algorithms make high-stakes decisions in their lives, both in the public sector and in private companies.
There are even harder issues to deal with, like reining in online misinformation. Progress on content moderation will require new regulations that deftly navigate Democratic concerns about misinformation and Republican worries about the censorship of right-wing voices. Congress can begin with laws that require external and independent researchers to be able to access the mountains of data about users and the effects of algorithmic amplification on exposure to misinformation, hate speech, incitements to violence and other dangerous forms of content. Congress could mandate audits of the algorithms that decide what content gets amplified, and provide for external accountability to determine whether platforms are adhering to their stated content moderation policies.
This has been a turning point for Big Tech, not just for Facebook. Progress is possible, and it begins with diminishing the concentrated power in the hands of a few executives who have shown that the inescapable pull of greater profits will always tip the scales when tough choices need to be made. Yes, the events have been disruptive for Facebook. And they should be just as disruptive for other social media companies. But more than anything else, Congress must get to work on reforming Big Tech.