Is the assumption that non-"tech industry" communities (e.g. voat, parler, ovaries, gab, truth, lemmy, mastodon, 4chan, 8chan, etc.) are less likely to be a problem or to negatively impact teens than the mainstream "big tech" ones (e.g. facebook, twitter, youtube, tiktok, reddit, etc.)?
I think if you run a website as a main source of your business, profitable or not, you're in the tech industry. It's a question of scale, not industry or purpose classification.
The thing with those alternative communities is that they sort of orbit around the larger tech platforms. Their agenda is set by the news-of-the-day within certain X/FB/YouTube subcommunities. It's sort of analogous to wire services in traditional media.
Additionally, people that post on those platforms originally gained notoriety on the bigger tech platforms, and took their audience with them.
Not my point. The original comment said the tech industry can decide to break up the federal government because they don't want to be forced to clean up their act. Societies should be stronger than any industry and fight to maintain freedom, health, peace, and prosperity. If the tech industry is against that, then they should be the ones broken up.
> Societies should be stronger than any industry and fight to maintain freedom, health, peace, and prosperity.
I think (I hope!) we all agree with this sentiment.
But societies also need to be stronger than states, especially in an age of connection and sharing.
States are the main source of uncertainty and violence in the world right now, and I think it's reasonable to hope that the internet will bring the age of peace we pray for.
Obviously the social media giants are not it. They are closer to states than they are to algorithms.
But I'm wary of siding with states over web apps. What we need are healthier (meaning, chiefly, more decentralized and less rent-seeking) web apps.
Exactly, societies need to be stronger than states too and really need to act early. States can become one person or party and it's game over for a long time. Actually, the American Constitution is pretty great at preventing this exact outcome and I still have a lot of faith in it.
but the constitution is just a piece of paper with some words written on it.
Without an active civic society protecting what is enshrined in the document, it is all but powerless.
> They are closer to states than they are to algorithms
This seems like nonsense. All the tech industry does is convince people. It doesn't force anyone to do anything. States have a monopoly on violence. No one holds a gun to anyone's head forcing them to consume <insert content you disagree with>. In a country of equals, everyone's opinion, including <position you disagree with>, should hold equal sway, and be resolved via democratic due process.
Just because many people hold <position you disagree with> and vote for <politician you find repugnant> doesn't give you any sort of reasonable justification to limit the freedom of others to advocate (including on social media) for it.
I agree with everything you've said with regard to the justice of the matter, but I don't think that there is a free market at work in social media.
* So-called "intellectual property" laws dramatically skew what can and cannot be shared
* Censorship at the behest of world governments is rampant, and completely overran anything representing a nonviolent scientific dialogue during the recent COVID19 pandemic
* States, with their monopoly on the legitimate initiation of force, pick winners and losers at every level of the experience, from chip makers to the duopolistic mobile OS vendors to their app stores to the social media offerings. Sure, network effect may describe the reason people join and stay, but the availability of places to join and stay is in no sense a market phenomenon
Consider: the major social media barons meet with POTUS all the freakin' time. Do you suppose that's just because they enjoy his company?
> So-called "intellectual property" laws dramatically skew what can and cannot be shared
Agree! let's get rid of these :)
> Censorship at the behest of world governments is rampant
Agree! States have always pursued censorship to maintain power. That doesn't contradict the point that social media companies themselves are not state actors, and are not the problem.
> States ... pick winners and losers
I'm not sure I'm 100% on board here. States may thumb the scales, but the fact of the existence of FAANG/MANGO seems much more like a market phenomenon than an interventionist project.
> social media barons meet with POTUS all the freakin' time
There is almost no clearer display of corporate self-preservation than social media vendors kowtowing to the president.
Much of what you're outlining is standard run of the mill corruption. The US Government (and others) is acting in contradiction to its stated principles. This is not a new phenomenon, and seems in the category of core human governance challenges.
I think you may have misunderstood my comment - or perhaps misunderstood the consequences of the censorship regime.
If anything, it seemed like the denialism was amplified by the censorship. What fell by the wayside was the serious, rigorous dialogue that had previously been the best thinking in epidemiology and public health.
I was a moderator and frequent contributor to /r/ebola during the 2014 outbreak; during that time I reached out and began to form relationships with (and respect spectrums for) various epidemiologists and academic departments. And it was really hard during the COVID19 pandemic to watch people like John Ioannidis, David Katz, Sunetra Gupta, Michael Levitt, etc. be totally cut out of the conversation while a group of second-stringers who were willing to toe the corporate line took their place.
Was it your experience that the censorship worked to _stem_ denialism? It seemed to me that it made it much louder and much worse, muddying the water of genuine discussion and research.
The idea that real, serious scientific debate was stymied by social media platform policies doesn't pass the smell test for me. Facebook/twitter/et al were making good faith efforts to stop the flood of downright harmful misinformation, and government didn't force them to do it. None of even the most questionable scientists were ever silenced. Those folks had the right wing press broadcasting their worst ideas to the world; they didn't even need social media when they could get on Fox News every day of the week.
It was the final attempt of social media even trying to be something more than a cancer. Now? Every social media platform (especially Facebook and twitter) would have zero problems being the driver of modern day pogroms, complete with running betting markets on the outcomes, if it would keep their share prices up.
> None of even the most questionable scientists were ever silenced.
...a literal nobel laureate, a literal Einstein scholar, and literally the author of the most cited paper in the history of open publishing were all censored.
Multiple scholars of the Hoover Institution. The director of Oxford Center for EBM. An author of the most widely-assigned textbook in preventative epidemiology. Two editors-in-chief of BMJ publications. Literally the BMJ itself had articles removed from Facebook! The British Medical Journal was censored from Facebook, dude!
Tenured professors from Yale, Johns Hopkins, Oxford, Harvard, and Stanford (several from Stanford in particular) had their work either totally removed or subjected to shadowban-style censorship.
What can you possibly be talking about? I'm broadly anti-credentialist, but I can't fathom not noticing what happened: The world's foremost experts were silenced; we all watched it happen.
Let's not mince words here: there was a _thunderous_ chorus of the world's top experts opining against lockdowns. And social media depicted something entirely different, and entirely false. It wasn't like... close. Lockdowns never gained anything resembling mainstream support in the actual real world of epidemiology.
David Katz, Michael Levitt, Carl Heneghan, Monica Gandhi, Scott Atlas, Vinay Prasad, Eran Bendavid, Sunetra Gupta, John fucking Ioannidis (my personal favorite author of medical science for over a decade prior to COVID19, and arguably the most accomplished medical scientist of our generation)... I can go on and on and on. How on earth are you conducting your "smell test"?!
All the most impressive minds of our age were cast aside so some second-stringers from suburban Virginia, who had been collecting a paycheck from NIH and CDC but not doing anything resembling continuing education at their alma maters, could babble nonsense about interdiction and hold aloft the Imperial study which they obviously didn't understand (and which all of us who read it knew was destined to be retracted from the word go).
There were a tiny few serious academics who endorsed lockdowns. And some were genuine experts who simply got it wrong. I respect Carl Bergstrom and Marc Lipsitch enormously, and I give them credit for sticking their heads above the parapet - I think they genuinely believed in horizontal interdiction and, although they were absolutely wrong, I don't think they were intentionally being propagandistic.
And I don't think they went out intending to be amplified as they were. I only wish their other work were amplified as much as when it was convenient for the lockdown narrative.
...but it's simply, totally false that accomplished academics and experts weren't censored. I can't even approach that with a straight face.
A lot of people with credentials join the grift train, yes. Apparently it's quite profitable. Listing many of them isn't really an argument that the grift is true.
What a bizarre and reckless take. I thought this 'no true scotsman' nonsense was put to bed in 2022.
By this metric, who is _not_ a grifter? You have to be Scott Gottlieb or Peter Daszak - shilling pseudoscience while sitting on the boards of corporations making billions from the pandemic - to _not_ be a grifter? Is that it?
> Literally the BMJ itself had articles removed from Facebook!
These people got their stuff published in the British Medical Journal, so nobody in the scientific community had the slightest problem seeing it.
Facebook posted a fact check where the story was shared pointing out some problems with it. They didn’t “censor” anything. It was frankly entirely reasonable and the BMJ should have done better in the first place. Facebook did “combat bad speech with more speech”, the thing you’re supposed to do, and the cranks absolutely lost their minds.
In any case, the danger is over now and we can rest easy knowing that Facebook won’t lift a finger to prevent millions from being misled about vaccines causing autism. They’ll sell ads alongside the posts! phew
...let's get our facts straight here. I hope we can agree on this nutshell:
* During phase III of the Pfizer trial, there was an unblinding event which was not initially disclosed. At first, it appeared that it might only have been a few dozen participants, but later disclosures showed that it was more serious.
* The BMJ learned of this - again, only knowing about a few dozen patients - from the regional director of the contractor carrying out one of the arms of the trial, who was fired the same day she reported the unblinding to the FDA (as required by law). This disclosure included photographs of documents, in the study area, with unblinding information on them.
* The BMJ published what was, in retrospect, an extremely cautious report, even though by that time it was becoming clear that the problem went even beyond mass unblinding and into falsified data, so much so that the contractor's quality control check team were overwhelmed trying to catch up in the days between Jackson's termination and the publication of the report.
* In response, Facebook added an inane "fact check", calling the BMJ a "news blog", and which got several of the above facts wrong. In fact, the "fact check" didn't actually make any coherent assertions about the actual content of the article at all. It seemed its primary function was to add an insinuation of doubt, via scary red boxes, about the BMJ report, without any critique of the substance or merits.
* Three days later, Facebook went further - preventing the story from being shared at all, and adding warnings to users commenting on the article (in places where it had already been shared) that they risked having their accounts degraded or terminated for spreading misinformation.
* All the while, board members of Pfizer (one of whom was a former FDA commissioner) were permitted to deny these assertions and smear the whistleblowers (in what, in retrospect, turns out to have been actual misinformation) with no "fact checks" or prohibitions on sharing.
* Months later, Facebook acknowledged that they took these actions at the urging of the White House.
...I don't think it's the least bit far-fetched to call this "censorship".
Facebook 'reduced distribution,' they didn't block. And again, your original claim was that social media somehow blocked scientific debate, which is categorically false. All these claims are hand-waving away the fact that this was published in the BMJ from the outset.
Facebook could throw all their servers in a wood chipper today and it would have zero effect on scientific debate in the world.
All that a state does is convince people. States don't really exist. They're fictional constructs that sometimes convince a police officer to break into a murderer's home and kidnap him. And most of us agree that's a good thing. However sometimes they convince a police officer to break into a protestor's home and kidnap him. And some of us agree that's a bad thing. Other times they convince bomb makers to make bombs and convince aircraft mechanics to attach them to airplanes and convince pilots to fly over hospitals and press the release button. That's bad too - sadly not everyone agrees on that.
You've written this with a certain sardonic tone, seemingly in efforts to show the person to whom you're responding that their view necessarily leads to the particular brand of anarchism you're espousing.
And I must say, I find your argument and phraseology very convincing. I agree with everything you've said here; states are not imbued with any particular magic. They simply convince people to do things that, if people weren't filled with the mindset of exceptions that seem to come when engaging in public services, they'd never ever do.
I have a degree in political science, and I wish that the reading material required to get that degree displayed more of the technique you've used here.
I mean, it's good prose but it's just sort of hand-waving away all the history of how we ended up with modern states. States solve a lot of problems, they're not perfect but I'm pretty passionate about not living in walled cities because there are hordes of raiders who go around enslaving everyone.
I think you both may have misunderstood my comment. It's not about history. It's simply a rebuttal to the idea that something which "only convinces" is less influential than a state. States themselves also fall into that category, and therefore we can see that things in that category can be so influential they need forceful restraint.
This could be amended to "States have a monopoly on legitimate violence". Your comment seems to deny the existence of "legitimacy" as a concept. How do you distinguish between legitimate and illegitimate use of force?
No, states can't do violence because they don't have hands, so they can't hold guns or bats. The violence is done on behalf of the state by some of the people it convinces, mostly police officers and soldiers.
Deprecations in all forms are always a shitshow. There isn’t a particular pattern that “just works”. Anybody who tells you about one: best case scenario, it just worked for them because of their consumers/users, not because of the method itself.
The best I have seen is a heavy-handed in-editor strikethrough with warnings (assuming the code is actively being worked on), and even then it’s at best a 50/50 thing.
50% of the developers would feel that using an API with a strike through in the editor is wrong. And the other 50% will just say “I dunno, I copied it from there. What’s wrong with it??”
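For what it's worth, that strikethrough usually comes from a deprecation marker the tooling understands. A minimal Python sketch (`fetch_all`/`fetch_page` are made-up names; editors like PyCharm or VS Code render callers of `fetch_all` struck through once the warning is in place):

```python
import warnings

def fetch_page(n: int) -> list:
    """The replacement API."""
    return [f"item-{n}"]

def fetch_all() -> list:
    """Deprecated: use fetch_page() instead."""
    # stacklevel=2 makes the warning point at the caller's line,
    # which is what the editor strikes through.
    warnings.warn(
        "fetch_all() is deprecated; use fetch_page() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return fetch_page(0)
```

Even with the warning firing on every call, half the callers will just silence it, which is the 50/50 split above.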
> One guy on the Internet is--and always will be--an anecdote
That's true of course. The problem, in my view, is that this is how everyone on the internet acts, especially the "reviewers" or "builders" or "DIYers". It's not just you, so don't take this as a personal attack.
Almost all articles and videos about tech (and other things now too) do the equivalent of "unboxing review". When it's not strictly an unboxing, it's usually like "I've had this phone/laptop/GPU/backpack/sprinkling system/etc for a month, and here is my review"
I stopped putting much weight on online reviews and guides because of that. Almost everyone who does them uses whatever they are advertising for _maybe_ a month and moves on to the next thing. Even if I'm looking for an older thing, all reviews are from the month (or even day) it was released, and there is very little to none a year or two after because, understandably, they don't get views/clicks. Even when there are later reviews, they are in the bucket of "This thing is 3 years old now. Is it still worth it in 2025? I bought a new one to review and used it for a month"
Not to mention that when reviewers DO face a problem, they contact the company, get a replacement and just carry on. Assuming everyone will be in the same position. From their perspective, it's understandable. They can't make a review saying "Welp, we got a defective one. nothing to see here". On the other hand, if half the reviewers faced problems, and documented it, then maybe the pattern would be clearer.
Yes, every reviewer is a "one guy on the internet" and "is--and always will be--an anecdote". No one is asking every reviewer to become Consumer Reports and test hundreds of models and collect user feedback to establish reliability scores. But at the same time, if each did something similar, it would be a lot more useful than what they do now.
I'll give you a concrete example off the top of my mind --a Thermapen from ThermoWorks.
When I was looking for "the best kitchen thermometer" the Thermapen was the top result/review everywhere. Its accuracy, speed and build quality were all things every review outlined. It was a couple of years old by then and all the reviews were from 2 years ago. I got one and 6-8 months later, it started developing cracks all over the body. A search online then showed that this is actually a very common issue with Thermapens. You can contact them and they might send you another one of the older models if they still have them (they didn't in my case) but it'll also crack again. Maybe you can buy the new one?
It may sound petty to put that one example in the spotlight, but a very similar thing happened to me with a Pixel 4, a Thinkpad P2, Sony wireless headphones, a Bose speaker, and many more that I'm forgetting. All had stellar "3 week use" reviews. After 6 months to a year, they all broke down in various ways. Then it becomes very easy to know what to search for, and the problems are "yeah, that just always happens with this thing"
You're entirely right that this kind of content, and the people who create it, fosters cynicism. But in the end, I am powerless to do anything to counter said cynics. Unfortunately anything that I write otherwise will just invoke more cynicism, and I'm sure in the eyes of the cynics, it's justified.
These DIY NAS build blogs have a bit of a formula: here's my criteria, here are the parts that I chose to meet that criteria, and here's what I think after I've built and tested it to the best of my ability.
If I had my choice, my blog would inspire people to understand their own criteria and give them the confidence to go build something unique that meets that same criteria. This absolutely happens, but it's the exception rather than the rule. The rule is that people choose to replicate these DIY NAS builds part-for-part.
I'm as confident in this DIY NAS as I've been for the ones I created in the past. The times there were issues with these builds (eg: the defective C255X/C275X CPUs from Intel), I've updated those blogs with all the details I can muster about those issues.
That was a funny period of time because you could very transparently see the clear application of a corporate team that was tasked with improving the “startup speed KPI”.
During that time IE startup time went from a dozen or so seconds to almost instantaneous. It was even faster than Chrome sometimes. But that was just the startup. The application wasn’t ready to accept any user input or load anything for another 10 or 15 seconds. Sometimes it would even accept input for a second, then block the input fields again.
It’s the same mentality as all those insanely slow webapps that think some core React feature for an “initial render” or a splash screen will save them from their horrific engineering practices.
Google did a great job communicating Chrome's improvements in speed (both startup and prefetch) and reliability (isolated and sandboxed tabs) during its launch. When you saw it, you knew that it was basically game over for any browser that had chosen to stagnate until then. They destroyed the competition.
lol, what? You’re gonna hold 20 people hostage on the bus until some enforcers navigate a busy city to ticket a person who is likely to wipe their ass with the ticket? What country is that exactly?
Seriously, other than law enforcement, what else can you do to someone who brazenly refuses to follow the rules? Even law enforcement (at least in the US) highly depends on where you live. In left leaning states and cities, DAs are not very likely to prosecute small crimes like not paying a bus fare because they know it’ll make them unpopular next election. I live in a very left leaning county and state, and it swings between center and left every 4 years or so. The swing is always “look how awful that guy was. He prosecuted vulnerable people for petty crimes for no reason”. Cops don’t wanna have to deal with all the paperwork to book a guy for a couple of nights before they get released and do it all over again. If they know the person will not get prosecuted because there is no political capital to do so, why bother with the theatrics and all the paperwork of arresting them? Brazenly refusing to pay the bus fare and getting in a verbal altercation with the driver and everyone on the bus is a fun afternoon for some people.
You end up with an outcry from the rich “liberals” (for lack of a better word), who never take the bus in the first place, complaining about how enforcing fares on buses is harming the poor who can’t afford transportation and pushing people away from public transportation.
It’s pretty infuriating. I started biking to work 2 years ago and try to bike almost anywhere I can. Mostly to lose weight, but also to put my money where my mouth is. I voted for every levy and prop to improve bike-ability and public transportation of the city in the last 10 years, and figured I’m a hypocrite if I expect others to bike and take the bus and I never do. My tolerance for the homeless on buses has been dropping as I have to deal with them more and more. I was always an “It’s our failure in not helping them. If I can’t help, the least I could do is let them be” kind of person. Now every other week I end up with a negative interaction with someone on the bus or at a bus stop. Every time I air my grievances with people I know (who never take the bus), I always have to find myself on the defensive somehow.
For me, 2023 was an entire year of weekly demos that, looking back, were basically a "Look at this dank prompt I wrote" followed by thunderous applause from the audience (which was mostly, but not exclusively, upper management)
Hell man, I attended a session at an AWS event last year that was entirely the presenter opening Claude and writing random prompts to help with AWS stuff... Like thanks dude... That was a great use of an hour. I left 15 minutes in.
We have a team that's been working on an "Agent" for about 6 months now. It started as prompt engineering, then they were like "no, we need to add more value" and developed a ton of tools and integrations and "connectors" and evals etc. The last couple of weeks were a "repivot" going back full circle to "Let's simplify all that with prompt engineering and give it a sandbox environment to run publicly documented CLIs. You know, like Claude Code"
The funny thing is I know where it's going next...
But did it work? This is the sticking point with me now. I've seen slides, architecture diagrams, job descriptions, roadmaps and other docs now from about a dozen different companies doing AI Agent projects. And while it's completely feasible to build the systems they're describing, what I have not seen yet is evidence of any of them working.
When you press them on this, they have all sorts of ideas like a judge LLM that takes the outputs, comes up with modified SOPs and feeds those into the prompts of the mixture-of-experts LLMs. But I don't think that works, I've tried closing that loop and all I got was LLMs flailing around.
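For concreteness, the loop I tried closing looks roughly like this (`call_llm` and `judge` are stubs standing in for real model calls; this is a sketch of the control flow, not a working agent):

```python
def call_llm(prompt: str) -> str:
    # Stand-in for any chat-completion client.
    return f"answer to: {prompt[:40]}"

def judge(output: str) -> str:
    # The "judge LLM": turns the worker's output into a revised SOP
    # that gets folded back into the next round's prompt.
    return f"Be more specific than: {output[:40]}"

def run_with_judge(task: str, rounds: int = 3) -> str:
    sop = "Follow the task literally."
    output = ""
    for _ in range(rounds):
        output = call_llm(f"SOP: {sop}\nTask: {task}")
        sop = judge(output)  # feedback loop
    return output
```

With real models in both roles, my experience was that the SOPs drifted rather than converged: nothing grounds the judge's feedback, so the loop just flails.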
It hasn’t really worked so far. Pretty much exactly what you’ve described. I don’t even really work on that team, but “a judge LLM” low-key triggered me just because of how much I’ve been hearing it over the last couple of months.
I think the reason for the recent pivot is to “keep the human in the loop” more. The current thinking is they tried to remove the human too much and were getting bad results. So now they just want to make the interaction faster and let the human be more involved, like how we (developers) use Claude Code or Copilot by checking every interaction and nudging it towards the right/desired answer.
I got the sense that management isn’t taking it well though. Just this Friday they gave a demo of the new POC where the LLM is just suggesting things and frequently asking for permissions and where to go next and expecting the user to interact with it a lot more than the one-shot approach before (which I do think is likely to yield better results tbh) but the main reaction was “this seems like a massive step backward”
I think long-term, just having a single LLM responsible for everything will win out compared to brittle and complex subagent hierarchies. Most uses of "subagents" today are just workarounds for LLM limitations: lack of instruction following, context length, non-determinism, or "hallucinations".
All of these are things that will need to be solved long-term in the model itself though, at least if the AI bubble needs to be kept alive. And solving those things would in fact materially improve all sorts of benchmarks, so there's an incentive for frontier labs to do it.
I think this is why you get the back-and-forth pattern that GP mentioned. You start with a single model doing everything. Then you find all sorts of gaps that you start to plug ad-hoc, and decide that breaking it into subagents might help fix things. This works for a while, but then you realize that you lose out on the flexibility of a single model having access to the entire context, so you start trying to improve communication between subagents. But then a new model drops that fixes a lot of the things you originally had to work around, so you go back to a single-model setup. Rinse and repeat. It's a great VC-bubble-funded employment program though.
The general pattern seems to be that LLM+scaffolding performs better than LLM. In 6 months time a new model will incorporate 80% of your scaffolding, but also will enable new capabilities with a new layer of scaffolding.
I suspect the model that doesn’t need scaffolding is simply ASI, as in, the AI can build its own scaffolding (aka recursive self-improvement), and build it better than a human can. Until that point, the job is going to remain figuring out how to eval your frontier task, scaffold the models’ weaknesses, and codify/absorb more domain knowledge that’s not in the training set.
You are talking about context management stuff here, the solution will be something like a proper memory subsystem, maybe some architectural tweaks to integrate it. There are more obvious gaps beyond that which we will have to scaffold and then solve in turn.
Another way of thinking about this is just that scaffolding is a much faster way of iterating on solutions than pre-training, or even post-training, and so it will continue to be a valuable way of advancing capabilities.
I can't take anyone seriously who uses "prompt engineering" unironically. I see those emails come through at work and all I can do is roll my eyes and move on.
Assuming you mean "context engineering" as in "engineering the context for LLM prompts" - 0.
I take particular issue with the usage of the word "engineering" in this context, as in my practical experience what I witnessed was more akin to "try random things until it somewhat works" than anything I would associate with "engineering". But hey, it's a free country and people can use words whichever way they want. They just shouldn't be confused if no one keeps listening ;)
You don't. Envoy is great if you programmatically configure it, or if you have very small and simple configs. It can't be maintained by a human. But if you have tools that generate it programmatically based on other config, you can read through it.
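As a toy example of what "configure it programmatically" can look like: describe your services in a compact form and emit the verbose Envoy v3 bootstrap from that. The service names and hosts below are made up; only the field names follow Envoy's actual bootstrap schema.

```python
import json

def envoy_cluster(name: str, host: str, port: int) -> dict:
    """Expand a (name, host, port) triple into an Envoy v3 static cluster."""
    return {
        "name": name,
        "type": "STRICT_DNS",
        "load_assignment": {
            "cluster_name": name,
            "endpoints": [{
                "lb_endpoints": [{
                    "endpoint": {"address": {"socket_address": {
                        "address": host, "port_value": port}}}
                }]
            }],
        },
    }

# The human-maintained part: three lines instead of dozens.
services = {"users": ("users.internal", 8080), "billing": ("billing.internal", 9000)}

bootstrap = {
    "static_resources": {
        "clusters": [envoy_cluster(n, h, p) for n, (h, p) in services.items()]
    }
}
print(json.dumps(bootstrap, indent=2))
```

You maintain the `services` table; nobody has to read or hand-edit the generated blob except when debugging.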
My understanding was always that this is a way to monetize a text editor. How else do you monetize dev tools? Developers are used to very high quality free tools, unless you're one of the few old guards (like JetBrains, Microsoft or maybe Oracle) that can still sell IDEs and other dev tools, because 25 years ago open source dev tools were far from beginner friendly.
But how do you monetize a programming language, a text editor, a build system, a terminal emulator, etc. in 2025? The examples are deno, bun, mojo, nextjs, zed, earthly, warp, etc. They all know they can’t monetize the actual tool. You monetize services that you build around the tool, like a cloud/workers/deployment offering (basically compute), a sharing service, an AI service, etc. Once you have critical mass on your platform, you can find other easy services to offer. Like if Zed has a critical mass of users, maybe they offer “in-editor chat”. A small startup with just 3 devs working together can replace Slack with Zed. Maybe they offer an uptime check service. Why not? Maybe a file sharing service. Maybe a small wiki service, etc. All things that have a million other solutions. But if you have critical mass, someone will pay for those things.
I'm honestly all for it. As AI keeps poisoning all aspects of online content and online interactions, people who care about that sort of thing will have to move back to in person interaction more.
The person you're physically interacting with might be using AI in their workflow, and that's fine. I use AI too. I just don't want to "build a relationship" with AI. I don't care for AI "content". Art, blogs, articles, advertisements, even detestable things like sales and marketing are all forms of human relationships to me. It's fine if you wanna autogenerate it. I'm even "in the market" for autogenerated stuff, as I use AI bots too, but you can't try to sell me a 100% automated burger when I have the Fabricator 3000 too.
If I'm hungry and just want a burger, I'll get my Fabricator 3000 to generate one for me. If I'm in the mood for a human touch on food and a dining experience, I'll cook, go to a (reputable) restaurant or a friend's place who likes to cook. Maybe there is a market for you to run your Fabricator 3000 to generate a burger for me. maybe. I don't know why I'd buy it though when I can just get your prompt and feed it into my own Fabricator 3000...