Helping people to navigate the harms of social media.

By Tom Barton

A little over twenty years ago, nobody had heard of Facebook. But fast forward to today and about one-third of the entire human race uses it regularly.

Social media can be a good thing. It allows people to stay in touch with far-away friends and family and means that small businesses can grow and reach new customers. It can also help people to find out what’s going on in the world without having to rely on corporate media. However, its harms are rapidly beginning to outweigh its benefits.

At the heart of the issue is the fact that social media platforms like Facebook, Instagram, X (formerly Twitter), TikTok and others are, at their most basic level, profit-driven companies. Their goal is to make as much money as they possibly can. In practice, this means maximising the amount of time that users spend on their platforms and, in turn, the number of adverts they can be shown.

At the individual level, social media platforms’ efforts to maximise time on site have played a significant role in exacerbating the ongoing mental health crisis in developed countries, especially among young people. With features like “views”, “shares”, “likes” and “followers”, social media platforms have – by design – become spaces where users find themselves competing for social validation.

Constantly seeking digital validation increases anxiety and is detrimental to users’ self-esteem. This anxiety and lack of self-esteem make people crave social validation even more, which keeps them coming back to the apps that created the problems in the first place. This vicious cycle is highly profitable for social media companies, but detrimental to the mental well-being of entire generations of young people.

Beyond individual harms, social media is also driving political polarisation and crippling democracies’ ability to function. Features like “recommendations” and “feeds” are mostly to blame for this. Once again, the algorithms that run these features are designed to maximise time on site. This means pushing whatever content gets the most views and clicks, regardless of how harmful or untrue it is.

An untrue but shocking story will get far more views and engagement than a true but unremarkable one. But algorithms aren’t designed to maximise truth – they are designed to maximise engagement. The result? Rampant conspiracy theories and false information that hinder and drown out important public discussion.

Another psychological trigger that social media algorithms have learned to exploit is moral outrage. Content that expresses or provokes outrage — especially when tied to a sense of right and wrong — spreads much faster than content that doesn’t. Over time, and across billions of users, this creates an environment where divisive, emotionally charged posts are rewarded, while calm, balanced ones get ignored. The end result of this is that people get pushed further into ideological corners.
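The incentive problem described above can be made concrete with a toy sketch. The posts, numbers and scoring weights below are invented purely for illustration – no platform’s real ranking code is public – but they show the core point: when a feed is sorted only by predicted engagement, nothing in the formula rewards accuracy.

```python
# A deliberately simplified sketch of engagement-based ranking.
# All posts, figures and weights are hypothetical; this is not any
# real platform's algorithm.

posts = [
    {"title": "Calm, accurate policy explainer", "clicks": 120, "shares": 10, "accurate": True},
    {"title": "Outrageous (and false) rumour", "clicks": 9000, "shares": 1500, "accurate": False},
]

def engagement_score(post):
    # Rank purely on engagement signals -- truthfulness never enters the formula.
    return post["clicks"] + 5 * post["shares"]

# The false but provocative post lands at the top of the feed.
ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked:
    print(engagement_score(post), post["title"])
```

Because accuracy appears nowhere in the scoring function, the shocking rumour outranks the careful explainer every time – which is exactly the dynamic the article describes.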

The growing hatred and hostility between the political left and right in the United States, Europe and beyond can largely be traced to trends taking place in online spaces. It is no coincidence that the number of democracies around the world has been in steady decline since social media became ubiquitous in the early 2010s.

Overall, the situation is pretty bleak. But one organisation is meeting the challenge head-on.


The Center for Humane Technology (CHT) was founded in 2018 by a collective of former big-tech employees and insiders—including Tristan Harris, Aza Raskin, and Randima Fernando—who had seen firsthand the harms of digital products that are designed to hijack human attention. Building on earlier work like Harris’s 2014 “Time Well Spent” initiative, they launched CHT to push beyond awareness into real change.

CHT’s mission is straightforward. They want to, in their own words, “realign technology with humanity’s best interests.” In other words, they want to help guide us away from harmful and exploitative digital technologies towards ones that support well-being, strengthen democracy, and foster a healthy and credible information ecosystem.

CHT works on three levels.

Level 1 – “Articulating the Challenge”
CHT is very effective in how it uses storytelling to inform and change minds. They helped produce The Social Dilemma, a Netflix documentary seen by tens of millions of people, which laid bare how social media manipulates emotions, spreads misinformation, and harms mental health and democracy. They also host the popular podcast Your Undivided Attention, where Tristan Harris and Aza Raskin interview experts to reveal how tech hijacks the brain.

Level 2 – “Identifying Interventions”
Beyond raising awareness, they partner across sectors to develop and implement effective strategies and policies. They advise policymakers (including the U.S. Congress), share policy frameworks, and offer design guides and toolkits for ethical technology.

Level 3 – “Empowering Humanity”
Finally, CHT equips leaders and technologists with practical tools. Their Foundations of Humane Technology course, a free online program, has equipped over 10,000 technologists with the principles of humane technology design. They also run events, trainings, and private briefings to help change-makers implement ethical practices.

That said, the path forward is far from easy. The elephant in the room is the sheer scale of the problem. CHT themselves admit that the time for change was years ago and that social media is now too embedded in society for the damage to ever be fully reversed.

They also face powerful opposition. Tech companies, their shareholders and their billionaire owners all benefit from the current situation and are unlikely to support regulation. Meanwhile, politicians across the spectrum have discovered how effectively social media can be used to win elections, making them hesitant to reform the very platforms they rely on.

But CHT also has some key strengths. Unlike many critics of tech, they are not just looking backwards at what went wrong; they are helping to shape what comes next. As emerging technologies, particularly AI, gain momentum, CHT is developing the tools to ensure that they are built on stronger, more humane foundations.

Another of their biggest strengths lies in public awareness. By helping people understand how their attention is being monetised, and how their mental health and democratic systems are being shaped by invisible algorithms, CHT hopes to shift cultural expectations and values around technology. If enough people start to demand something better, the tech industry may eventually have no choice but to adapt.

CHT isn’t just sounding the fire alarm—they’re giving people the tools to put the fire out and stop the next one before it starts.

