The internet, arguably the greatest invention in human history, has gone wrong. We can all feel it. It’s harder than ever to know whether we’re engaging with friends or foes (or bots), we know we’re constantly being watched in the name of better ad conversion, and we live in constant fear of clicking something and getting scammed.

The failures of the internet stem largely from the failure of the big tech monopolies, particularly Google and Facebook, to verify and protect our identities. Why haven’t they?

The answer is that they have no incentive to do so. In fact, the status quo suits them, thanks to Section 230 of the Communications Decency Act, passed by the United States Congress in 1996.


But things may be about to change. This term, the Supreme Court will hear Gonzalez v. Google, a case with the potential to reshape, or even kill, Section 230. It’s hard to imagine a scenario in which repeal wouldn’t kill the social media platforms we use today. That would present a golden opportunity for blockchain technology to replace them.

How did we get here?

A key enabler of the internet’s early development, Section 230 states that web platforms are not legally responsible for content posted by their users. As a result, social networks like Facebook and Twitter are free to publish (and profit from) anything their users post.

The plaintiff in the case now before the court believes that internet platforms are responsible for the death of his daughter, who was killed by Islamic State-affiliated attackers in a Paris restaurant in 2015. He argues that algorithms developed by YouTube and its parent company, Google, “recommended ISIS videos to users,” aiding the terrorist organization’s recruitment and ultimately facilitating the Paris attack.

Section 230 gives YouTube a lot of cover. If a user posts defamatory or, as in the case above, violent content, the platform may serve that content to many viewers before any action is taken. While the platform determines whether the content violates the law or its terms of service, a lot of damage can be done. But Section 230 protects the platform.


Imagine YouTube after a repeal of Section 230. Would it have to put the 500 hours of content uploaded every minute into a review queue before anyone else could watch it? That wouldn’t scale, and it would strip away much of the immediacy that makes the site’s content compelling. Or would it let content flow as it does now but accept legal responsibility for every copyright infringement, incitement to violence, or slanderous word spoken in its billions of videos?

Once you pull the Section 230 thread, platforms like YouTube start to fall apart quickly.

Global implications for the future of social media

The case centers on a US law, but the issues it raises are global. Other countries are also grappling with how best to regulate internet platforms, particularly social media. France recently ordered manufacturers to install easily accessible parental controls on all computers and devices and banned the collection of minors’ data for commercial purposes. In the United Kingdom, a coroner formally found that Instagram’s algorithm contributed to the suicide of a teenage girl.

Then there are the world’s authoritarian regimes, whose governments are stepping up censorship and manipulation efforts by harnessing armies of trolls and bots to sow misinformation and mistrust. The lack of any viable form of identity verification for the vast majority of social media accounts makes this situation not only possible but unavoidable.

And the beneficiaries of a post-Section 230 economy may not be who you’d expect. Far more people would file lawsuits against the major technology platforms. In a world where social networks could be held legally responsible for content posted on their platforms, they would need to assemble armies of editors and content moderators to review every image or word posted on their sites. Given the volume of content posted to social media in recent decades, the task seems almost impossible, and the upheaval would likely be a win for traditional media organizations.

Looking further ahead, the demise of Section 230 would completely change the business models that have fueled social media’s growth. Suddenly, platforms would be responsible for a nearly limitless supply of user-created content, even as increasingly strict privacy laws restrict their ability to collect massive amounts of user data. It would require a total reengineering of the concept of the social network.

Many people misunderstand platforms like Twitter and Facebook. They think the software they use to log in, post content, and view content from their network is the product. It is not. Moderation is the product. And if the Supreme Court strikes down Section 230, that completely changes the products we think of as social media.

This is a great opportunity.

In 1996, the Internet consisted of a relatively small number of static websites and message boards. It was impossible to predict that its growth would one day cause people to question the very concepts of freedom and security.

People have fundamental rights in their digital activities as well as in their physical ones, including privacy. At the same time, the common good demands some mechanism to separate fact from misinformation, and honest people from fraudsters, in the public sphere. Today’s Internet meets neither of these needs.

Some argue, either overtly or implicitly, that a more sensible and healthy digital future requires difficult compromises between privacy and security. But if we are ambitious and intentional in our efforts, we can achieve both.


Blockchains make it possible to protect and prove our identities simultaneously. Zero-knowledge technology means we can verify information (age, for example, or a professional qualification) without revealing any of the underlying data. Soulbound tokens (SBTs), decentralized identifiers (DIDs), and some forms of nonfungible tokens (NFTs) will soon allow a person to carry a cryptographically verifiable, unique identity across any digital platform, current or future.
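The selective-disclosure idea above can be sketched in a few lines. This is a toy illustration, not real zero-knowledge cryptography or a real verifiable-credential library: the `issue_claim` and `verify_claim` functions are hypothetical, and the HMAC is a stand-in for the issuer’s digital signature (a production system would use public-key signatures or zero-knowledge proofs, so the verifier would not hold the issuer’s secret). The point is the data flow: the issuer sees the birthdate, but the verifier sees only the signed derived claim.

```python
import hashlib
import hmac
import json

# Hypothetical issuer signing key. In practice this would be a
# public/private key pair; a shared HMAC key is used here only to
# keep the sketch self-contained.
ISSUER_KEY = b"issuer-secret-key"

def issue_claim(birth_year: int, current_year: int) -> dict:
    """Issuer checks the birthdate privately, then attests only to
    the derived claim -- the birthdate never leaves this function."""
    claim = {"age_over_18": current_year - birth_year >= 18}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_claim(credential: dict) -> bool:
    """Verifier confirms the issuer's attestation without ever
    seeing the underlying birthdate."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue_claim(birth_year=2000, current_year=2025)
print(cred["claim"])       # {'age_over_18': True}
print(verify_claim(cred))  # True
```

A tampered credential (say, flipping the claim without re-signing) fails verification, which is the property that lets a platform trust an age attestation without collecting the document behind it.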

This is good for all of us, whether in our work, personal, or family lives. Schools and social media will be safer places, adult content can be reliably restricted by age, and deliberate misinformation will be easier to track down.

The end of Section 230 would be an earthquake. But if we take a constructive approach, it can also be a golden opportunity to improve the Internet we know and love. With our identities established and cryptographically proven on-chain, we can better demonstrate who we are, where we are, and who we can trust.

Nick Dazé is Co-Founder and CEO of Heirloom, a company dedicated to providing no-code tools that help brands create secure environments for their customers online through blockchain technology. Dazé also co-founded PocketList and was an early team member at Faraday Future ($FFIE), Fullscreen (acquired by AT&T) and Bit Kitchen (acquired by Medium).

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed herein are those of the author alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
