
The Last Days Of Social Media

By James O’Sullivan

James O’Sullivan lectures in the School of English and Digital Humanities at University College Cork, where his work explores the intersection of technology and culture.

At first glance, the feed looks familiar, a seamless carousel of “For You” updates gliding beneath your thumb. But déjà vu sets in as 10 posts from 10 different accounts carry the same stock portrait and the same breathless promise — “click here for free pics” or “here is the one productivity hack you need in 2025.” Swipe again and three near‑identical replies appear, each from a pout‑filtered avatar directing you to “free pics.” Between them sits an ad for a cash‑back crypto card.

Scroll further and recycled TikTok clips with “original audio” bleed into Reels on Facebook and Instagram; AI‑stitched football highlights showcase players’ limbs bending like marionettes. Refresh once more, and the woman who enjoys your snaps of sushi rolls has seemingly spawned five clones.

Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks. 

These are the last days of social media as we know it.

Drowning The Real

Social media was built on the romance of authenticity. Early platforms sold themselves as conduits for genuine connection: stuff you wanted to see, like your friend’s wedding and your cousin’s dog.

Even influencer culture, for all its artifice, promised that behind the ring‑light stood an actual person. But the attention economy, and more recently, the generative AI-fueled late attention economy, have broken whatever social contract underpinned that illusion. The feed no longer feels crowded with people but crowded with content. At this point, it has far less to do with people than with consumers and consumption.

In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet’s largest repositories of AI‑generated spam. Research has found what users plainly see: tens of thousands of machine‑written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half‑coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney.

It’s all just vapid, empty shit produced for engagement’s sake. Facebook is “sloshing” in low-effort AI-generated posts, as Arwa Mahdawi notes in The Guardian; some, like “Shrimp Jesus,” are even amplified by the platform’s own recommendation algorithms.

Human and synthetic content are becoming increasingly difficult to tell apart, and platforms seem unable to police the difference, or uninterested in trying. Earlier this year, Reddit CEO Steve Huffman pledged to “keep Reddit human,” a tacit admission that floodwaters were already lapping at the last high ground. TikTok, meanwhile, swarms with AI narrators presenting concocted news reports and “what‑if” histories. A few creators do append labels disclaiming that their videos depict “no real events,” but many don’t bother, and many consumers don’t seem to care.

The problem is not just the rise of fake material, but the collapse of context and the acceptance that truth no longer matters as long as our cravings for colors and noise are satisfied. Contemporary social media content is more often rootless, detached from cultural memory, interpersonal exchange or shared conversation. It arrives fully formed, optimized for attention rather than meaning, producing a kind of semantic sludge, posts that look like language yet say almost nothing. 

We’re drowning in this nothingness.

The Bot-Girl Economy

If spam (AI or otherwise) is the white noise of the modern timeline, its dominant melody is a different form of automation: the hyper‑optimized, sex‑adjacent human avatar. She appears everywhere, replying to trending tweets with selfies, promising “funny memes in bio” and linking, inevitably, to OnlyFans or one of its proxies. Sometimes she is real. Sometimes she is not. Sometimes she is a he, sitting in a compound in Myanmar. Increasingly, it makes no difference.

This convergence of bots, scammers, brand-funnels and soft‑core marketing underpins what might be called the bot-girl economy, a parasocial marketplace fueled in large part by economic precarity. At its core is a transactional logic: Attention is scarce, intimacy is monetizable and platforms generally won’t intervene so long as engagement stays high. As more women turn to online sex work, plenty of men are eager to pay for their services. And as these workers try to cope with the precarity imposed by platform metrics and competition, some can spiral, forever downward, into a transactional attention-to-intimacy logic that eventually renders them more bot than human. To hold attention, some creators increasingly behave like algorithms themselves, automating replies, optimizing content for engagement or mimicking affection at scale. The distinction between performance and intention must surely erode as real people perform as synthetic avatars and synthetic avatars mimic real women.

There is loneliness, desperation and predation everywhere.

“Genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks.”

The bot-girl is more than just a symptom; she is a proof of concept for how social media bends even aesthetics to the logic of engagement. Once, profile pictures (both real and synthetic) aspired to hyper-glamor, unreachable beauty filtered through fantasy. But that fantasy began to underperform as average men sensed the ruse, recognizing that supermodels typically don’t send them DMs. And so, the system adapted, surfacing profiles that felt more plausible, more emotionally available. Today’s avatars project a curated accessibility: They’re attractive but not flawless, styled to suggest they might genuinely be interested in you. It’s a calibrated effect, just human enough to convey plausibility, just artificial enough to scale. She has to look more human to stay afloat, but act more bot to keep up. Nearly everything is socially engineered for maximum interaction: the like, the comment, the click, the private message.

Once seen as the fringe economy of cam sites, OnlyFans has become the dominant digital marketplace for sex workers. In 2023, the then-seven-year-old platform generated $6.63 billion in gross payments from fans, with $658 million in profit before tax. Its success has bled across the social web; platforms like X (formerly Twitter) now serve as de facto marketing layers for OnlyFans creators, with thousands of accounts running fan-funnel operations, baiting users into paid subscriptions. 

The tools of seduction are also changing. One 2024 study estimated that thousands of X accounts use AI to generate fake profile photos. Many content creators have also begun using AI for talking-head videos, synthetic voices or endlessly varied selfies. Content is likely A/B tested for click-through rates. Bios are written with conversion in mind. DMs are automated or outsourced to AI impersonators. For users, the effect is a strange hybrid of influencer, chatbot and parasitic marketing loop. One minute you’re arguing politics, the next, you’re being pitched a girlfriend experience by a bot. 

Engagement In Freefall

While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren’t connecting or conversing on social media like they used to; they’re just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as “mostly reliable” — down from roughly two-thirds in the mid-2010s. Young adults register the steepest collapse, which is unsurprising; as digital natives, they better understand that the content they scroll past wasn’t necessarily produced by humans. And yet, they continue to scroll.

The timeline is no longer a source of information or social presence, but more of a mood-regulation device, endlessly replenishing itself with just enough novelty to suppress the anxiety of stopping. Scrolling has become a form of ambient dissociation, half-conscious, half-compulsive, closer to scratching an itch than seeking anything in particular. People know the feed is fake; they just don’t care.

Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

The Great Unbundling

Social media’s death rattle will not be a bang but a shrug.

These networks once promised a single interface for the whole of online life: Facebook as social hub, Twitter as news‑wire, YouTube as broadcaster, Instagram as photo album, TikTok as distraction engine. Growth appeared inexorable. But now, the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens.

Since Elon Musk’s takeover, X has shed at least 15% of its global user base. Meta’s Threads, launched with great fanfare in 2023, saw its number of daily active users collapse within a month, falling from around 50 million active Android users at its July launch to only 10 million the following month. Twitch recorded its lowest monthly watch-time in over four years in December 2024, just 1.58 billion hours, 11% lower than the December average from 2020-23.

“While content proliferates, engagement is evaporating.”

Even the giants that still command vast audiences are no longer growing exponentially. Many platforms have already died (Vine, Google+, Yik Yak), are functionally dead or zombified (Tumblr, Ello), or have been revived and died again (MySpace, Bebo). Some notable exceptions aside, like Reddit and Bluesky (though it’s still early days for the latter), growth has plateaued across the board. While social media adoption continues to rise overall, it’s no longer explosive. As of early 2025, around 5.3 billion user identities — roughly 65% of the global population — are on social platforms, but annual growth has decelerated to just 4-5%, a steep drop from the double-digit surges seen earlier in the 2010s.

Intentional, opt-in micro‑communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram. 

But the old practices are still evident: Substack is full of personal brands announcing their journeys, Discord servers host influencers disguised as community leaders and Patreon bios promise exclusive access that is often just recycled content. Still, something has shifted. These are not mass arenas; they are clubs — opt-in spaces with boundaries, where people remember who you are. And they are often paywalled, or at least heavily moderated, which at the very least keeps the bots out. What’s being sold is less a product than a sense of proximity, and while the economics may be similar, the affective atmosphere is different, smaller, slower, more reciprocal. In these spaces, creators don’t chase virality; they cultivate trust.

Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber‑only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate. A lot of people seem to be fine with slop, but as more start to crave authenticity, the platforms will be forced to take note.

From Attention To Exhaustion

The social internet was built on attention, not only the promise to capture yours but the chance for you to capture a slice of everyone else’s. After two decades, the mechanism has inverted, replacing connection with exhaustion. “Dopamine detox” and “digital Sabbath” have entered the mainstream. In the U.S., a significant proportion of 18‑ to 34‑year‑olds took deliberate breaks from social media in 2024, citing mental health as the motivation, according to an American Psychiatric Association poll. And yet, time spent on the platforms remains high — people scroll not because they enjoy it, but because they don’t know how to stop. Self-help influencers now recommend weekly “no-screen Sundays” (yes, the irony). The mark of the hipster is no longer an ill-fitting beanie but an old-school Nokia dumbphone. 

Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd. Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?

These are the last days of social media, not because we lack content, but because the attention economy has neared its outer limit — we have exhausted the capacity to care. There is more to watch, read, click and react to than ever before — an endless buffet of stimulation. But novelty has become indistinguishable from noise. Every scroll brings more, and each addition subtracts meaning. We are indeed drowning. In this saturation, even the most outrageous or emotive content struggles to provoke more than a blink.

Outrage fatigues. Irony flattens. Virality cannibalizes itself. The feed no longer surprises but sedates, and in that sedation something quietly breaks: social media no longer feels like a place to be; it is a surface to skim.

No one is forcing anyone to go on TikTok or to consume the clickbait in their feeds. The content served to us by algorithms is, in effect, a warped mirror, reflecting and distorting our worst impulses. For younger users in particular, their scrolling of social media can become compulsive, rewarding their developing brains with unpredictable hits of dopamine that keep them glued to their screens.

Social media platforms have also achieved something more elegant than coercion: They’ve made non-participation a form of self-exile, a luxury available only to those who can afford its costs.

“Why post a selfie when an AI can generate a prettier one? Why craft a thought when ChatGPT can produce one faster?”

Our offline reality is irrevocably shaped by our online world: Consider the worker who deletes their LinkedIn account, or never created one, excluding themselves from professional networks that increasingly exist nowhere else; or the small business owner who abandons Instagram, watching customers drift toward competitors who maintain their social media presence. The teenager who refuses TikTok may find herself unable to parse the references, memes and microcultures that soon constitute her peers’ vernacular.

These platforms haven’t just captured attention, they’ve enclosed the commons where social, economic and cultural capital are exchanged. But enclosure breeds resistance, and as exhaustion sets in, alternatives begin to emerge.

Architectures Of Intention

The successor to mass social media is, as already noted, emerging not as a single platform, but as a scattering of alleyways, salons, encrypted lounges and federated town squares — those little gardens.

Maybe today’s major social media platforms will find new ways to hold the gaze of the masses, or maybe they will continue to decline in relevance, lingering like derelict shopping centers or dying online games, haunted by bots and the echo of once‑human chatter. Occasionally we may wander back, out of habit or nostalgia, or to converse once more as a crowd, among the ruins. But as social media collapses on itself, the future points to a quieter, more fractured, more human web, something that no longer promises to be everything, everywhere, for everyone.

This is a good thing. Group chats and invite‑only circles are where context and connection survive. These are spaces defined less by scale than by shared understanding, where people no longer perform for an algorithmic audience but speak in the presence of chosen others. Messaging apps like Signal are quietly becoming dominant infrastructures for digital social life, not because they promise discovery, but because they don’t. In these spaces, a message often carries more meaning because it is usually directed, not broadcast.

Social media’s current logic is designed to reduce friction, to give users infinite content for instant gratification, or at the very least, the anticipation of such. The antidote to this compulsive, numbing overload will be found in deliberative friction, design patterns that introduce pause and reflection into digital interaction, or platforms and algorithms that create space for intention.

This isn’t about making platforms needlessly cumbersome but about distinguishing between helpful constraints and extractive ones. Consider Are.na, a non-profit, ad-free creative platform founded in 2014 for collecting and connecting ideas that feels like the anti-Pinterest: There’s no algorithmic feed or engagement metrics, no trending tab to fall into and no infinite scroll. The pace is glacial by social media standards. Connections between ideas must be made manually, and thus, thoughtfully — there are no algorithmic suggestions or ranked content.

To demand intention over passive, mindless screen time, X could require a 90-second delay before posting replies, not to deter participation, but to curb reactive broadcasting and engagement farming. Instagram could show how long you’ve spent scrolling before allowing uploads of posts or stories, and Facebook could display the carbon cost of its data centers, reminding users that digital actions have material consequences, with each refresh. These small added moments of friction and purposeful interruptions — what UX designers currently optimize away — are precisely what we need to break the cycle of passive consumption and restore intention to digital interaction.
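
The friction itself is trivial to build; what is missing is the will. As a purely illustrative sketch, assuming a hypothetical `FrictionGate` component (nothing any platform actually ships), a 90-second reply cooldown could be as little as this:

```python
import time

COOLDOWN_SECONDS = 90  # the hypothetical 90-second reflection window


class FrictionGate:
    """Hold a drafted reply until a reflection window has elapsed."""

    def __init__(self, cooldown: float = COOLDOWN_SECONDS):
        self.cooldown = cooldown
        self._drafts: dict[str, float] = {}  # draft_id -> time first drafted

    def draft(self, draft_id: str) -> None:
        # The clock starts the moment the user hits "reply".
        self._drafts[draft_id] = time.monotonic()

    def seconds_remaining(self, draft_id: str) -> float:
        started = self._drafts.get(draft_id, time.monotonic())
        return max(0.0, self.cooldown - (time.monotonic() - started))

    def can_post(self, draft_id: str) -> bool:
        # Posting unlocks only after the pause, curbing reflexive replies.
        return draft_id in self._drafts and self.seconds_remaining(draft_id) == 0.0
```

The point is not the code but the design stance: the delay is surfaced to the user as a feature, rather than optimized away as friction.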

We can dream of a digital future where belonging is no longer measured by follower counts or engagement rates, but rather by the development of trust and the quality of conversation. We can dream of a digital future in which communities form around shared interests and mutual care rather than algorithmic prediction. Our public squares — the big algorithmic platforms — will never be cordoned off entirely, but they might sit alongside countless semi‑public parlors where people choose their company and set their own rules, spaces that prioritize continuity over reach and coherence over chaos. People will show up not to go viral, but to be seen in context. None of this is about escaping the social internet, but about reclaiming its scale, pace, and purpose.

Governance Scaffolding

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos?

A public-service model wouldn’t require state control; rather, it could be governed through civic charters, much like public broadcasters operate under mandates that balance independence and accountability. This vision stands in stark contrast to the current direction of most major platforms, which are becoming increasingly opaque.

“Non-participation [is] a form of self-exile, a luxury available only to those who can afford its costs.”

In recent years, Reddit and X, among other platforms, have either restricted or removed API access, dismantling open-data pathways. The very infrastructures that shape public discourse are retreating from public access and oversight. Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects.

Some initiatives gesture in this direction. Meta’s Oversight Board, for example, frames itself as an independent body for content moderation appeals, though its remit is narrow and its influence ultimately limited by Meta’s discretion. X’s Community Notes, meanwhile, allows user-generated fact-checks but relies on opaque scoring mechanisms and lacks formal accountability. Both are add-ons to existing platform logic rather than systemic redesigns. A true public-service model would bake accountability into the platform’s infrastructure, not just bolt it on after the fact.

The European Union has begun exploring this territory through its Digital Markets Act and Digital Services Act, but these laws, enacted in 2022, largely focus on regulating existing platforms rather than imagining new ones. In the United States, efforts are more fragmented. Proposals such as the Platform Accountability and Transparency Act (PATA) and state-level laws in California and New York aim to increase oversight of algorithmic systems, particularly where they impact youth and mental health. Still, most of these measures seek to retrofit accountability onto current platforms. What we need are spaces built from the ground up on different principles, where incentives align with human interest rather than extractive, for-profit ends.

This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems. The key is diversity, delivering an ecosystem of civic digital spaces that each serve specific communities with transparent governance.

Of course, publicly governed platforms aren’t immune to their own risks. State involvement can bring with it the threat of politicization, censorship or propaganda, and this is why the governance question must be treated as infrastructural, rather than simply institutional. Just as public broadcasters in many democracies operate under charters that insulate them from partisan interference, civic digital spaces would require independent oversight, clear ethical mandates, and democratically accountable governance boards, not centralized state control. The goal is not to build a digital ministry of truth, but to create pluralistic public utilities: platforms built for communities, governed by communities and held to standards of transparency, rights protection and civic purpose.

The technical architecture of the next social web is already emerging through federated and distributed protocols like ActivityPub (used by Mastodon and Threads) and Bluesky’s Authenticated Transfer (AT) Protocol, or atproto (a decentralized framework that allows users to move between platforms while keeping their identity and social graph), as well as various blockchain-based experiments, like Lens and Farcaster.

But protocols alone won’t save us. The email protocol is decentralized, yet most email flows through a handful of corporate providers. We need to “rewild the internet,” as Maria Farrell and Robin Berjon argued in a Noema essay. We need governance scaffolding, shared institutions that make decentralization viable at scale. Think credit unions for the social web that function as member-owned entities providing the infrastructure that individual users can’t maintain alone. These could offer shared moderation services that smaller instances can subscribe to, universally portable identity systems that let users move between platforms without losing their history, collective bargaining power for algorithm transparency and data rights, user data dividends for all, not just influencers (if platforms profit from our data, we should share in those profits), and algorithm choice interfaces that let users select from different recommender systems.

Bluesky’s AT Protocol explicitly allows users to port identity and social graphs, but it’s still early days, and portability across protocols and platforms remains extremely limited, if not effectively non-existent. Bluesky also allows users to choose among multiple content algorithms, an important step toward user control. But these models remain largely tied to individual platforms and developer communities. What’s still missing is a civic architecture that makes algorithmic choice universal, portable, auditable and grounded in public-interest governance rather than market dynamics alone.

Imagine being able to toggle between different ranking logics: a chronological feed, where posts appear in real time; a mutuals-first algorithm that privileges content from people who follow you back; a local context filter that surfaces posts from your geographic region or language group; a serendipity engine designed to introduce you to unfamiliar but diverse content; or even a human-curated layer, like playlists or editorials built by trusted institutions or communities. Many of these recommender models do exist, but they are rarely user-selectable, and almost never transparent or accountable. Algorithm choice shouldn’t require a hack or browser extension; it should be built into the architecture as a civic right, not a hidden setting.
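
None of this requires exotic machinery. As a minimal sketch, assuming a simplified `Post` record and purely hypothetical scoring functions, algorithm choice amounts to letting the user swap one scoring function for another:

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    created_at: float  # Unix timestamp
    region: str


# Each "ranking logic" is just a scoring function the user can select.
def chronological(post: Post, viewer: dict) -> float:
    # Pure recency: the feed as a timeline, no engagement weighting.
    return post.created_at


def mutuals_first(post: Post, viewer: dict) -> float:
    # Accounts that follow you back float to the top; recency breaks ties.
    bonus = 1e12 if post.author in viewer["mutuals"] else 0.0
    return bonus + post.created_at


def local_context(post: Post, viewer: dict) -> float:
    # Posts from your own region or language group come first.
    bonus = 1e12 if post.region == viewer["region"] else 0.0
    return bonus + post.created_at


RANKERS = {
    "chronological": chronological,
    "mutuals_first": mutuals_first,
    "local": local_context,
}


def build_feed(posts: list[Post], viewer: dict, choice: str) -> list[Post]:
    # Same data, different window on it: re-rank by the selected logic.
    ranker = RANKERS[choice]
    return sorted(posts, key=lambda p: ranker(p, viewer), reverse=True)
```

The sketch makes the later point literal: the underlying data never changes; only the scoring function, and therefore the world the user sees, does.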

“What if we treated these platforms as public utilities rather than private casinos?”

Algorithmic choice can also breed new hierarchies. If feeds can be curated like playlists, the next influencer may not be the one creating content, but the one editing it. Institutions, celebrities and brands will be best positioned to build and promote their own recommendation systems. For individuals, the incentive to do this curatorial work will likely depend on reputation, relational capital or ideological investment. Unless we design these systems with care, we risk reproducing old dynamics of platform power, just in a new form.

Federated platforms like Mastodon and Bluesky face real tensions between autonomy and safety: Without centralized moderation, harmful content can proliferate, while over-reliance on volunteer admins creates sustainability problems at scale. These networks also risk reinforcing ideological silos, as communities block or mute one another, fragmenting the very idea of a shared public square. Decentralization gives users more control, but it also raises difficult questions about governance, cohesion and collective responsibility — questions that any humane digital future will have to answer.

But there is a possible future where a user, upon opening an app, is asked how they would like to see the world on a given day. They might choose the serendipity engine for unexpected connections, the focus filter for deep reads or the local lens for community news. This is technically very achievable — the data would be the same; the algorithms would just need to be slightly tweaked — but it would require a design philosophy that treats users as citizens of a shared digital system rather than cattle. While this is possible, it can feel like a pipe dream. 

To make algorithmic choice more than a thought experiment, we need to change the incentives that govern platform design. Regulation can help, but real change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

Digital Literacy As Public Health

Perhaps most crucially, we need to reframe digital literacy not as an individual responsibility but as a collective capacity. This means moving beyond spot-the-fake-news workshops to more fundamental efforts to understand how algorithms shape perception and how design patterns exploit our cognitive processes. 

Some education systems are beginning to respond, embedding digital and media literacy across curricula. Researchers and educators argue that this work needs to begin in early childhood and continue through secondary education as a core competency. The goal is to equip students to critically examine the digital environments they inhabit daily, to become active participants in shaping the future of digital culture rather than passive consumers. This includes what some call algorithmic literacy, the ability to understand how recommender systems work, how content is ranked and surfaced, and how personal data is used to shape what you see — and what you don’t.

Teaching this at scale would mean treating digital literacy as public infrastructure, not just a skill set for individuals, but a form of shared civic defense. This would involve long-term investments in teacher training, curriculum design and support for public institutions, such as libraries and schools, to serve as digital literacy hubs. When we build collective capacity, we begin to lay the foundations for a digital culture grounded in understanding, context and care.

We also need behavioral safeguards: default privacy settings that protect rather than expose, mandatory cooling-off periods for viral content (deliberately slowing the spread of posts that suddenly attract high engagement), algorithmic impact assessments before major platform changes, and public dashboards that expose platform manipulation (coordinated or deceptive behaviors that distort how content is amplified or suppressed) in real time. If platforms are forced to disclose their engagement tactics, these tactics lose power. The ambition is to make visible the hugely influential systems that currently operate in obscurity.
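
To make the cooling-off idea concrete: assuming a recommender that applies an amplification multiplier to each post, a minimal sketch might damp that multiplier once a post’s engagement velocity spikes. The threshold and the logarithmic curve below are illustrative assumptions, not any platform’s actual policy:

```python
import math


def cooling_factor(velocity: float, threshold: float = 100.0) -> float:
    """Damp the algorithmic boost of posts that spread unusually fast.

    `velocity` is interactions per minute; `threshold` is the rate above
    which the cooling-off kicks in. Returns a multiplier in (0, 1].
    """
    if velocity <= threshold:
        return 1.0  # ordinary posts circulate normally
    # Above the threshold, amplification shrinks smoothly as velocity
    # grows, slowing viral spread without blocking the post outright.
    return 1.0 / (1.0 + math.log1p((velocity - threshold) / threshold))
```

A post spreading at twice the threshold rate would keep roughly 60% of its boost, one at twenty times the rate under 25%: friction that scales with virality instead of a blunt takedown.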

We need to build new digital spaces grounded in different principles, but this isn’t an either-or proposition. We also must reckon with the scale and entrenchment of existing platforms that still structure much of public life. Reforming them matters too. Systemic safeguards may not address the core incentives that inform platform design, but they can mitigate harm in the short term. The work, then, is to constrain the damage of the current system while constructing better ones in parallel, to contain what we have, even as we create what we need. 

The choice isn’t between technological determinism and Luddite retreat; it’s about constructing alternatives that learn from what made major platforms usable and compelling while rejecting the extractive mechanics that turned those features into tools for exploitation. This won’t happen through individual choice, though choice helps; it also won’t happen through regulation, though regulation can really help. It will require our collective imagination to envision and build systems focused on serving human flourishing rather than harvesting human attention.

Social media as we know it is dying, but we’re not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces where the metrics that matter aren’t engagement and growth but understanding and connection, where algorithms serve the community rather than strip-mine it.

The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones. The question is whether we will do this or whether we will continue to drown.