These days, before buying or consuming food or drinks, many consumers first check an item’s origin, its wholesomeness, its safety, its ingredients, its nutritional value, and of course its price.
Manufacturers are free to combine a variety of ingredients (from different sources) and sell the resulting product as long as it is legally safe to do so, but that doesn’t mean it will necessarily be nutritious. The supply chain is clear: first come suppliers, then manufacturers, then distributors, then storefronts, and each link in the chain has its own regulations and regulators.
If a consumer gets sick from a product acquired in a given storefront, every company (and regulator) in the chain is potentially liable.
Let’s now compare this to the manufacturing of information by the traditional media. Here we have a manufacturer (which collects the news from different sources), a product (the news report itself), a product distributor, and a storefront (radio, TV, newspaper, website, etc.). Before consuming content from traditional media, consumers can easily check its origin, wholesomeness, accuracy, safety, and of course price, just like any other product on the market.
Similarly, if a consumer is harmed by a news report (for example, one that knowingly or unknowingly recommends a dangerous medicine), the consumer has several options: pursue legal action (lawsuits), stop supporting that medium, pressure advertisers to pull their ads, damage the medium’s reputation, or (in the case of radio and TV) petition to have the medium’s license revoked.
Now let’s transport this concept to social media platforms. In this case, we have the manufacturer of information, the distributor (the Internet), and the storefront (the social medium).
On social media, the consumers themselves manufacture the content. But while the origin of traditional media content is well known, on social media the manufacturer can be unidentified or unidentifiable. In other words, the chain of accountability here is broken.
So, if a social media platform displays dangerous information (such as content that damages public health or incites violence), consumers have little protection or recourse. In fact, in the U.S., Section 230 of the Communications Decency Act of 1996 (enacted as part of the Telecommunications Act of 1996) shields online platforms from being held liable for content posted by their users.
The narrative is that Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey, for example, don’t have the authority to censor or restrict free speech; they are merely the messenger, it is said, not the message. In effect, while traditional media are held responsible if the content they publish proves dangerous, social media platforms are exempt, even though they are also data miners and thus presumably know exactly where the content on their platforms is generated.
The free speech argument is also invalid because the platforms are private enterprises, and as such they can choose what to publish from the content they receive from users. (The First Amendment constrains only government action; indeed, it protects the platforms’ own right to carry what they want.)
Another narrative is that social media platforms are like telephone companies that simply connect users, so carrier “C” cannot be held responsible if user “A” endangers user “B” over “C’s” network. This, of course, is not completely accurate, since “C” is responsible if user “A” makes robocalls or scam calls to user “B.” (Dom Serafini)