I suffered from an early version of this when I was one of the top Elite Yelpers in my city about a dozen years ago. I started to notice that any time I went somewhere, I spent much of my time looking for flaws to post about in my reviews instead of enjoying myself. I still noticed and mentioned the positive things, but for most people, seeing the good is the default when going out. Elevating flaws that would otherwise have been quickly dismissed was having a negative impact on my quality of life. Additionally, friends who also started doing reviews were becoming increasingly critical of everything and everyone in their lives, which made me realize I was doing the same. The entire social scene around the local Yelp community was getting increasingly hostile and petty. When I stepped back and looked at it from the outside, it was shocking how much it was changing everyone's personalities.
In the end I quit writing reviews for Yelp and exited the community even though much of my social life at that point had become subsumed by it. Luckily I was still at an age where it was easy to rebuild my social life. For many, social media has become their primary social outlet and without it, they don't have anything else to turn to. Even without the pandemic, I suspect that it's more difficult to build an offline social life than it used to be, especially for the young.
This could be true, but I personally found negative reviews valuable. While I'm not familiar with Yelp, I tend to read negative/mixed reviews on Amazon products much more carefully than positive ones, and constructive criticism on HN more carefully than blindly agreeing comments. I think we all want a "balanced" view, elusive as it is. While there's no single ruleset to define what's balanced and what's not, I hope we keep discussing it.
Years back, when Cracked used to be decent, there was an article by John Cheese (which one, I forget now) that contained the most profound sentence I have ever read in my life:
> The difference, according to the people who study this sort of thing, is recognizing whether your reaction is designed to actually help you fix the thing you're mad about, or just satisfying the adrenaline and dopamine rush you get from lashing out (the latter, after all, is what makes anger so addictive).
Apparently it's from 2099, retro-loaded to present day.
My thoughts - why do likes accumulate to a user? You want likes to signal that the article/comment is interesting, but what value is there in assigning the number of likes to the person who posts it? Take Stack Overflow: I haven't used it for ages but my points are still there; I feel they should die off after a while. This would remove the karma chasing you get on Reddit and would, perhaps, make things more reasonable. It's nice to get the serotonin kick when you accumulate thousands of points on a post, but it's fundamentally meaningless and encourages weird behaviour.
I think you might be onto something. What examples do we have of products that purely use likes or upvotes that benefit the thing shared without benefitting the user?
For the greater good? You probably wouldn't get many 12 year olds, but that is perhaps what you want. Getting points for comments isn't really anything anyway.
Comments on YouTube and ArsTechnica have vote counts, but the count is only attached to the comment, not the user. The votes are just used for sorting.
How does the sorting work on Youtube? It's not by upvotes, it's not by date, it's not any obvious combination of the two (as newer comments with more upvotes can still be sorted after older comments with fewer upvotes). Them using a hidden total-like-count might be an explanation.
If you don't surface newer comments all the upvotes will skew towards older comments. I expect there's an element of randomness. Number of replies, whether the uploader put their badge on it or replied are probably also taken into account.
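YouTube's actual ranking model is not public, but the behavior described above (newer comments with fewer upvotes outranking older, heavily upvoted ones) is consistent with a simple engagement-plus-time-decay score. Here is a toy sketch; every weight and parameter name is invented for illustration:

```python
def comment_score(upvotes, replies, age_hours, creator_hearted,
                  base_age=2.0, gravity=1.5):
    """Toy score mixing the signals speculated above.

    All weights are invented for illustration; YouTube's real
    ranking model is not public.
    """
    engagement = upvotes + 2 * replies + (10 if creator_hearted else 0)
    # Decay by age so fresh comments get surfaced even with fewer votes.
    return (engagement + 1) / (age_hours + base_age) ** gravity

# A newer comment with fewer upvotes can outrank an older, heavily
# upvoted one, matching the behavior observed above:
old = comment_score(upvotes=500, replies=3, age_hours=720, creator_hearted=False)
new = comment_score(upvotes=40, replies=5, age_hours=6, creator_hearted=False)
```

With these made-up numbers, `new > old`, even though the older comment has more than ten times the upvotes.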
This is precisely why I opted to exclude voting and "likes" from Sqwok (https://sqwok.im).
From the outset I wanted to build a discussion site that was entirely focused on live conversation, without the gimmicks that have become so ubiquitous across the social media landscape and beyond.
In the real world we signal our approval of a conversation by either engaging or walking away. Other people sense our liking of it by seeing our engagement, not a cheap binary sticker we throw up.
> In the real world we signal our approval of a conversation by either engaging or walking away
What does 'engaging' mean, exactly? You also signal your disapproval by engaging. If I tell someone to go fuck themselves after they tried to get me alone in a dark alley... I engaged with them, I didn't walk away with a tacit approval of their attempted assault and say I remained disengaged.
If someone creeps on me online and I tell them to fuck off, the creepy cunt they are... then I engaged with them too. If I didn't engage and instead walked away, I might have a stalker to deal with because I never said no.
`engage` is doing too much work and, in my mind, it's an inhuman term. You're applying the logic of a toilet cubicle to a human.
s/engage/talked to? Whether you tell them they're delightful or a creepy perp, you're _talking_ to them. You're expending energy, which is the opposite of an up/down vote or a like, which requires no energy and doesn't add (or subtract) anything from the conversation.
Of course as you point out, not all conversations are positive ones, but I like the idea of optimizing for more conversation in general.
Also I don't claim that it's going to work out or that all voting is bad (in general on HN it seems useful), but I like the idea of trying something different.
> In the real world we signal our approval of a conversation by either engaging or walking away. Other people sense our liking of it by seeing our engagement, not a cheap binary sticker we throw up.
So, is that what your ranking is based on for Sqwok? Clicks and comments? Is it having the effect you'd hoped?
It's based on realtime activity and time decay so far, with ideas for enhancing it as the site grows. So far the reception has been positive and people appreciate that it's just focused on conversation.
Def aware that scaling it past a small user base may be challenging, but I also see opportunities to experiment in that regard.
Then again, I could find out that the majority of people prefer to simply "like" something.
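Sqwok's actual formula isn't described beyond "realtime activity and time decay", but a minimal sketch of that kind of ranking (similar in spirit to HN's well-known gravity formula) could look like this; the function name, window, and constants are all assumptions:

```python
def rank_score(recent_messages, age_hours, gravity=1.8):
    """Illustrative ranking by recent activity with time decay.

    recent_messages: message count in a trailing window (the
    "realtime activity" signal). `gravity` controls how quickly
    older conversations sink. This is just one common shape for
    such rankings, not Sqwok's real formula.
    """
    return recent_messages / (age_hours + 2) ** gravity

# A busy one-hour-old conversation outranks a quieter day-old one:
fresh = rank_score(recent_messages=30, age_hours=1)
stale = rank_score(recent_messages=50, age_hours=24)
```

The key design choice is that only recent activity counts, so a conversation must stay alive to stay visible, rather than coasting on accumulated votes.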
I fear this dynamic will promote highly controversial topics. Even here on HN, a topic with many comments and few upvotes has a higher temperature and more reported comments. It sounds like you are selecting for exactly that.
That said, it's always hard to know until you have a large user base.
Trolls are graded on the inverse of the ratio of characters they type vs. characters typed back at them. If you can get people to type novels back at your poorly spelled, no caps, no punctuation incomplete sentence, you win.
Trolls will continue winning in ad-supported social media. A troll is fantastic at engaging large numbers of eyeballs. A low-quality trollish sentence is quick to write; drop a few of them here and there and you got a few lurkers graduate to what the platform can now claim as “active users”. It’s a symbiotic relationship.
(In this regard, as far as I’m concerned, for a novel type of social networking platform it’s much more important to be paid, rather than play around with likes and algorithms. Long-term business model will prevail—if your paying users are the advertisers, you’ll align with them sooner or later.)
> A low-quality trollish sentence is quick to write; drop a few of them here and there and you got a few lurkers graduate to what the platform can now claim as “active users”. It’s a symbiotic relationship.
Agree, trolls will always try to game the system. I like the idea of optimizing for quality users vs quality content. You can always add optional controls for conversations as well e.g. account age restriction.
I'm not really blaming trolls for gaming the system. Rather, I'm saying that any ad-based platform will one way or another evolve to have a UI that encourages trolls, because they help its bottom line.
It is not just social media; it is the way our society is structured. Even if social media did not exist, the press would still reward people who act outraged. Just look at newspaper op eds and cable TV talk shows with discussion panels. Sports fans know that ESPN sells outrage and living vicariously through athletes/celebs as a business model.
The crux of the issue is that society rewards attention whoring behavior. I would love to see our leaders promote more "do, not tell" behavior.
Yes! I grew up on Donahue and the Geraldo Rivera show, and Maury too. Those shows' ratings were built entirely upon outrage. Social media is an iteration which allows all of us to join the audience. Though I do admit to a romantic longing for the 90s internet. It was total discovery and horror and obscenity and surprise, but zero outrage. It is hard for me to be mad about anything online now after spending hours on Cult of the Dead Cow and having my art downloaded by thousands of East Germans to jack off to. It's confusing, and the lies I tell myself, but those were good times. The solution is not censorship but meaningful engagement with how human subjectivity is actually constructed, versus the dissonance between the individuated sovereign subject we are told we are and how we need each other to construct an ideological order/a safe polite backdrop we all agree on so we can privately go on about our lives.
Donahue wasn't a news program, though. It was daytime garbage programming. These shows were a small part of TV programming.
What I remember is how I didn't like the actual news that came on after the local news, because of how boring it was. That, at least, was an actual news program.
I also grew up watching ESPN sports center. There was never any outrage at that time. It was just showing highlights of games with minimal added commentary. The internet then destroyed all these business models and turned everything into a tabloid.
Growing up I can remember at the grocery store when checking out there was always a tabloid newspaper that every other week would have on the cover how Elvis faked his own death or something just ridiculous.
Outrageous, this can't be!
But seriously, it is saddening sometimes to see so much mindshare and engagement wasted on poorly thought out "solutions" to whatever issue is currently trending on Twitter. Even worse, the constant social media outrage machine seems to reduce the inherent kindness that most people have in them (before discovering Twitter).
It seems that the problem is more insidious than this study hints at. We know from the behaviour on social media some morally righteous outrage is satiated by furiously liking or sharing something that you might agree with, but most people just harumph and move on once they've satisfied their itch to "do something".
However there is a certain element in society that gets truly over-stimulated by this stuff - the over-amplified likes and shares are making it seem like the outraged community you are aligned with is much larger than it actually is. Or more precisely, the number of people actively engaged and willing to undertake actions to back up their likes looks much bigger than it actually is. This pushes our over stimulated friends into over-reactions.
This has now fomented a lot of extreme acts - hitting the streets, burning stuff, occupying buildings - safe in the completely misleading knowledge that your in-group is much larger than it actually is.
The outrage machine has been an interesting social experiment but now people are getting killed because of it and it's probably time to nuke Facebook and Twitter from orbit unless they start taking moderation seriously.
Nuking platforms isn't the answer; the real answer is to educate the populace to recognize such phenomena and train them not to be misled. Otherwise new outrage machines will just be created.
Who does the educating and who decides what that education should be? No doubt the most extreme across the spectrums will eagerly volunteer to do both while insisting that they're the moderate center.
Well, for example, schools? We already teach children history, basic statistics, media literacy, basic epistemology, etc.
That does not mean that schools teach kids what to think. Rather, we teach kids how to think, and what the common pitfalls are.
The problem is that this means that a bad education leaves you vulnerable to manipulation. That, and the fact that it takes 10-20 years for the effect to reach society.
I like the idea, but people seem very resistant to things like being de-programmed from Qanon and the like. A lot of the anti-intellectualism that underpins vaccine refusal is grounded in a deep mistrust of authority. It will be quite difficult to achieve without stomping on a lot of sources of misinformation, which in this age is like playing whack-a-mole in a 10,000 acre field.
However, deplatforming seems to work. Nuking the platforms doesn't seem too extreme to me given the harm they are causing.
"deep mistrust of authority" won't likely go away if people see "stomping on a lot of sources of misinformation" every day.
This isn't a new situation. People spread hoaxes, but also true and unwanted info, in the wars when governments tried to suppress unfavorable news from the fronts. Mistrust / distrust cannot really be remedied by massive shows of force.
How else is it supposed to be managed? When a mere 12 sources of disinformation can generate widespread panic, we have a massive problem that needs a sledgehammer to fix.
Taking a sledgehammer to human communication does not seem efficient or free of side effects to me.
I grew up in late Communist Czechoslovakia. The party cracked down on unwanted information really hard. They had a 13,000 strong secret police with extensive powers to do that, and forty years of experience how to strangle dissent. Kids were schooled in the One and Only Correct Way of Thinking, up to and including university, where you had to clear several classes of Marxism-Leninism before being allowed to, say, build bridges. Things like Xerox machines were tightly controlled and subject to registration, to curtail spread of illegal leaflets.
And yet rumors spread within days of anything happening, and distrust of the government and whatever it told the public was absolutely rampant.
If anything, at the end of the day they sledgehammered themselves.
One problem is the design of the “Like” button. The target audience - the recipient - of the act of pressing this button is other people. They will see a higher number next to the thumbs up. If this is the visible result, this is what people will use it for: to show support for the opinions they agree with. The stronger the opinion, the stronger the support. Often the content people agree with is not actually what they find useful for themselves.
But what if we designed the “Like” button to be directed not at the others, but at yourself. I mean, when you press the “Like” button it has consequences for you. If you “Like” useless content, then you will get more useless content in the future. Would that change the dynamics of “liking”? Would it make people think more carefully about what they like if their future content recommendations depended on it?
When you upvote an item, you don’t simply increment the counter and make that item rank higher for everyone else. Instead, you connect more strongly to the other users who upvoted that item before you. The stronger you connect to someone, the more weight their upvote has for you, and the higher their other upvoted content ranks in your recommendations. This creates a feedback loop where you use the upvote button not to influence others, but to direct your future recommendations. This is a “filter bubble” - one that you very consciously form.
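A minimal sketch of this affinity-based scheme might look like the following; the class name, weights, and data layout are all made up for illustration:

```python
from collections import defaultdict

class AffinityFeed:
    """Sketch of the proposal above: an upvote strengthens your tie
    to earlier upvoters, and their upvotes then rank content higher
    for you. Illustrative only; names and weights are invented."""

    def __init__(self):
        self.upvoters = defaultdict(list)   # item -> users who upvoted, in order
        self.affinity = defaultdict(float)  # (me, other) -> connection strength

    def upvote(self, user, item):
        # Connect the new upvoter to everyone who upvoted before them.
        for earlier in self.upvoters[item]:
            self.affinity[(user, earlier)] += 1.0
        self.upvoters[item].append(user)

    def score(self, user, item):
        # An item's rank for `user` is the summed affinity to its
        # upvoters, not its raw vote count.
        return sum(self.affinity[(user, u)] for u in self.upvoters[item])

feed = AffinityFeed()
feed.upvote("alice", "post1")
feed.upvote("bob", "post1")   # bob now has affinity 1.0 toward alice
feed.upvote("alice", "post2")
```

After this sequence, "post2" scores higher for bob than an item upvoted by strangers, which is exactly the consciously formed filter bubble described above.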
This may have been true-ish some years ago, and it certainly was my expectation, and how many others expected the systems to work.
However, I was shocked to find that I would not see the posted content of pages I had liked, and would instead see other stuff that was often outrageous.
I can't remember when twitter rolled out the 'curated' instead of chronological change for higher engagement / more advertising value - but at this point it was long ago too.
This was shocking to many people who created 'pages' for businesses and such as well. Even Kim Komando (on air, via her broadcasts/podcasts) railed against the algorithm hiding her stuff from people who had liked her Facebook page (while it meanwhile offered to 'boost' her presence in people's feeds for high fees).
So I'm not sure if the systems were originally set up to run that way, but they certainly have not been for the past many years, afaik.
Other people I have spoken with have been shocked that the system has not been working as they expected, and wondered why they 'liked' a page when they don't get all the info the page posts - many not believing that new posts from pages they were really interested in were being censored from appearing in their feed.
Most platforms use a black box ML model (a deep neural net) to optimize for business objectives such as time spent which correlates more with the number of ads viewed than if they optimized for "Likes".
These systems see far fewer "Likes" than implicit signals such as which posts the user read (i.e., didn't scroll past). So they use all of those signals, and likes are unlikely to have a strong impact on the output of the algorithm.
The algorithms are opaque by nature. Even the developers of these deep models won't be able to explain to you why a user's recommendation list is ranked the way it is. Now imagine a user contemplating the consequences of liking a certain item on their future recommendations. It's hopeless.
As a result, the developers learn not to rely on these "like" signals and the users learn to provide likes only to affect the directly observable changes - the like counter.
If you know that the content you like will get more visibility then it incentivizes people to upvote what they want other people to see, as opposed to what they themselves want to see.
I remember the way that old style internet forums used to work. It was the posts that generated the most active discussion that naturally "rose to the top" simply because forums were designed to display the posts in order of recent activity. And there was no signal such as "likes" or "votes". So you had to read through the entire thread to decide what you thought about an issue or what your takeaway should be.
They were also self-contained. What you wrote on a forum about skyscrapers didn't have an impact on how you were perceived on a forum about bocce ball. Sites like Reddit that aggregate all of these interests into one place, with one reputation score across all interests, end up leaking the outrage from the most contentious areas into everything else. It used to be that niche hobbies were safe from this but more and more of them are getting overflow from the geysers of outrage coming from the more mainstream areas. If you're the moderator of one of these niche subreddits, trying to keep this out will often lead to raids and complaints to higher management who can replace you at will. This puts more power into the hands of the outrage manufacturers to leak over into everything and everywhere they want.
Most forums did have reputation systems though, which I would argue were the primitive form of likes. But I agree, I do miss the old forum style of communities.
True, but I always found that those with higher reputations were given preferential treatment or seen as “superstars” of the forum. Similar to how people with large followings are treated these days.
I hypothesize that any "short reactions" - reaction buttons, shares, and ~200-character posts - are used to express outrage and hate because those are the only emotions that can be accurately expressed through such a limited medium.
"Fuck you" has a lot more meaning and impact than a "thank you" without additional context; anger encourages short and impulsive actions, while kindness necessitates understanding and sophisticated social interaction. "Thank you for helping me out when my wife threw me out of the apartment" has more punch than just "fuck you", but it also requires more involvement to even produce the situation that began the chain of events. Researched arguments are nearly impossible to find in the sea of impulsive reactions.
An example of this is the difference between 4chan's /b/ and /r9k/ boards. /b/ is pure random, with nothing off limits except what is illegal under the law, while /r9k/ is similar except that no duplicate text or images are permitted across the entire history of the board. While posts on /r9k/ remain vulgar, insults are far less prominent.
Specialized groups/boards/subreddits are also similar where content and researched opinions are the main point of those groups by their nature.
This isn't particularly surprising. Some folks like attention, and giving them attention for something they are doing results in them doing that thing more. Social media simply taps into that dopamine loop in order to sell page views and ad clicks.
Perhaps the saddest testament to this was a troll who was banned from Twitter and told a journalist "I feel like I don't exist anymore." That is a really awful place to end up. It is also an unhealthy place to be.
David Brin (sf writer) shoe-horned a scene into one of his books where a "terrorist" deliberately poisons a populist politician with a sci-fi drug designed to cure addiction to stop him from making angry speeches.
I have been talking to more people in the real world about social issues and the big topics of the year... and what I'm finding, which shouldn't surprise me, is that people are a lot more moderate than you would be led to believe from social media or online conversations.
It's not that everybody is at the exact center, but most of the people I've encountered will have a side but be pretty far away from the loud extreme narrative you see online and in the news.
Yeah it's like, social media has you thinking there are literally only two or three overarching positions you can hold, but in real life things are more nuanced. I know a lot of EU immigrants and anti-fascists who voted for Brexit, racists who voted Remain, sex workers who are against the current de-stigmatization of sex work, trade/student union types who are toxic in the workplace, trans people who don't like JK Rowling but identify with her because they have also been alienated from the online trans community, effective activists who refuse to use their social media platforms to share activist messages, former kids of the care system who as adults are very wary of people who say they want to adopt or foster, and so on. Social media has everyone thinking x therefore y (and therefore z) when it just is not the case. I can tell when someone is Very Online because they assume I hold a series of political positions based on very surface-level observations about me, or vice versa: they assume that because I hold a political position on something, I don't know about or haven't experienced something else (even though they also haven't).
> “Our studies find that people with politically moderate friends and followers are more sensitive to social feedback that reinforces their outrage expressions,” Crockett said. “This suggests a mechanism for how moderate groups can become politically radicalized over time — the rewards of social media create positive feedback loops that exacerbate outrage.”
That would seem to be the path-to-radicalization, and there certainly seem to be more people at the extremes. BUT there is a really broad group of people not at the extremes resisting radicalization much less interested in expressing their ideas online because the backlash is too much, they don't feel confident enough in their opinions, or they simply don't care that much.
The problem in this country, and perhaps the world at large, is that a lot of people seem to watch the extremists argue with the fringes and take away that these situations represent most of reality. The truth, hopefully, is that there are actually an enormous number of relatively quiet people who just need to learn how to identify other moderate people of differing views, and how to, when to, and really TO have conversations about their differences and understand and empathize with why somebody else holds a different position.
There's nothing more vicious and outraged than centrist twitter, but I've always attributed that to the fact that a lot of centrist twitter are trying to make it a career.
I don't do social media, except for HN (which has somehow avoided this problem... I think?). My wife, on the other hand, uses Twitter. She likes philosophy, current affairs and a good logical debate. Her social anxiety means that she doesn't like stressful human interaction but in spite of this, she will argue her side of a contentious issue even if she's on the unpopular side and at risk of a pile on. For that reason, she never engages in pile-ons or "cancelling". Probably because of this, she has a surprising number of pretty famous sceptics, writers, philosophers and scientists following her (and she really is a "nobody" in any of these areas).
What annoyed the hell out of her is that her top tweet is one that she typed while drunk after reading something that made her blood boil.
I don't think that likes and shares teach people to express more outrage. I think that the root cause here is that we reward outrage. We love to hear somebody bitch lyrical, more than we like a good joke. We live to read about some "Karen" getting her comeuppance.
I'm thankful for "Like" and "Share" being positive words. It would be much worse if they were "Dislike" and "Shame".
The reinforcement strategy behind AI recommendations (on Facebook, YouTube, Spotify) is to continue showing more similar content. This leads to extremism.
To encourage diversity, we the programmers need to intentionally change the algorithm to present opposing views. Mix some 80s disco in with that punk rock. Remind people that there is another perspective.
We can also encourage "thank you"s - actively creating backlinks so that people know where the thought came from originally, and can learn more.
Hacker News has a great system of self-moderation where downvoting is allowed, but only after a user has earned around 500 karma to establish their credibility. First contribute well, then earn the right to criticise. I hope we can find ways to translate this model into other domains, even real life.
Dislike and Shame are essentially how downvotes on reddit (and HN?) work.
I’m sure somewhere this has been analyzed in depth, but a few years ago it seems the YouTube algorithm underwent a fundamental change in which it shifted from recommending videos based on what other people viewing a video liked to recommending videos based only on your own personal history.
Outrage culture has to be one of the worst mainstream aspects of "normal" social media interaction. Then it spawned cancel culture, which went completely off the rails (IMO).
If the consequences had any semblance of consistency or proportionality, then that may be the case.
Instead, you have some people's lives ruined for a dumb joke on Twitter, and others getting away scot free with actual sexual assault because the substantive and evidence-supported accusations were covered up before they could go viral.
As it stands, I think cancel culture more closely resembles the 2 minutes' hate from 1984 than actual justice.
The most disturbed I've been by the woke wave is the multiple times I've heard Weinstein's actions referred to, in person, as "sexual harassment" by low-information online types. The outrage and canceling of random actors for commenting on a woman's body on Twitter once, and the huge daily reactions to other celebrity trivialities, have flattened out the entire subject until a physically aggressive serial rapist becomes just another sexual harasser.
Not when there is little if any due process. And the opposite happens too: a favorite son like Mr. Cuomo, despite the many accusations, enjoyed great support from all the major players in the party and the mainstream media (an Emmy award?) until the water could not be bailed out any longer.
So, yeah, thanks but no thanks. There is no due process and no consequences for false accusers.
I agree that people should face the consequence of their actions - heinous acts warrant a response.
What I can't get behind is people sifting through the entire (online and offline) history of a person in the hopes of finding some evidence of wrongdoing, no matter how small or trivial, and then starting a cancel campaign. That kind of behavior is just plain creepy and stalkerish, bordering on the psychotic.
One act expresses disagreement, the other - cancel culture - is tied to getting a person fired from their job or otherwise cutting off their ability to feed themselves. One of these is vastly more significant than the other.
I would argue firings are extremely rare compared to brigading of voting, commenting, reporting systems and trending topics. A reasonable interpretation of “cancel culture” would include the less severe ways of kicking content/people off of platforms as well.
Facebook’s denial is particularly unpersuasive. It ignores that there may be unintended consequences of the explicit decisions in their algorithm. The analysis in the article also somewhat overlooks that it is quite possible that content showing women with less clothing is simply more popular.
One thing I’ve never seen Facebook recognize is that the basic concept of their news feed is a self-reinforcing loop. They choose to show a user a particular type of photo, and then the photos that user interacts with are primarily composed of that particular type, which coincidentally is the only thing Facebook displays to them.
HN seems to avoid this to some degree. Or at least you can't just express outrage without a reasonable explanation, because the community is pretty good at downvoting useless and low-effort posts. I suppose outrage may still be common (see recent Apple news) but it seems geared toward more productive conversation than just shouting at each other.
HN also visually avoids gamifying everything. Pay attention to how interactive likes on a site like Twitter are, it's literally a little heart that pops up. A lot of those sites look like slot machines.
I hadn't thought about HN like that, but you're right: the up/down vote arrows are not visually prominent, and public vote counts aren't visible. I think it also helps that most people on HN are cynical enough about useless internet points that they don't chase them much just to boost the number.
Sort of. Back then we’d end up with one forum for a topic run as an independent website/fiefdom. This is one thing if it’s about your hobby but another if it’s about your profession. Imagine if posting on HN was important to your career and pg and dang decided you were a threat to them personally. This same thing can happen now with groups on the corporate platforms, but it was worse when individuals ran the entire sites.
I wonder if there wouldn't be value in making the process of liking and sharing more onerous, so it's not just one click away but requires work, at least for any negative sentiment to be re-shared.
Everyone would then have to put that small effort in balance with the actual amount of interest/outrage they feel.
I bet most re-shares and likes would not happen if they required any sort of effort.
If the outrage is not enough to deserve actual effort to make it heard, then the outrage is probably not worth being heard.
It would be interesting to take the question a step further. Was expressing more outrage a result of the Likes and Shares, or is it that the individual has discovered a place where there are no consequences or questioning for expressing founded or unfounded outrage?
In person, if you express outrage you run the risk of being directly challenged. In a system where the simplest forms of feedback are positive in nature (Like, Share), you most likely find you can express outrage without negative consequences.
IMO 'likes' push extreme views more than anything else, especially for news. The majority of people in the middle of the spectrum are losing their voices. The extreme right and left push their views on everyone, while we lazy people in the middle read regular news stories but are too lazy to like them because they are normal. The only way to turn this around is to only allow dislikes in social media: the extreme right would dislike the extreme left, and vice versa, and we, the normal majority, would win because no one bothers to dislike a normal view of the world.
dang, the article from yale.edu is just a few links down. possibly benefit from a merge with that link? the yale.edu article has more to it and is the source institution
In all seriousness though, this is one of the most important recent studies. Social media is an experiment that has been having profound effects in our society and people are just starting to realize it.
Please downvote or upvote to express your outrage.
I initially thought this was saying that train passengers were more likely to be manipulated by social media toward outrage, which probably rings true to anybody familiar with transit twitter.