You're right, pg should have spoken out against Palantir when Biden was in charge too. Just because he's right about them now doesn't mean he was always right about them and we should keep that in mind.
In short, the W3C adopted it because they thought it was a good idea, while browsers and screen readers both refused to implement it for various reasons: ambiguity with existing web content, and concerns about screen readers having to build and maintain their own independent implementations of the outline algorithm. Eight years and one entire standards organization after the thread above, the WHATWG finally dropped it.
I'm not going to lie, I don't have a lot of faith in the people making markup decisions for HTML these days. It was obvious that none of these tags made any sense, and anyone who knows what semantics means knew they would get semantically bleached the instant they hit end users. WordPress still uses B and I buttons for <strong> and <em>. That's never going to change, because emphasis and strong are just not things users understand, so they can't be surfaced in the UI. In fact, I can't even tell the difference myself when the documentation fails to explicitly assert it. Screen readers and web browsers render them the same way as <b> and <i>. At this point I have to wonder for whom exactly this markup was created, and what problem it sought to solve. I have no idea what was going on in the committee to take years of <h1> and <h2> meaning completely different things and think "what if <h1> meant the same thing as <h2> sometimes, if it's in a <section>?" or "what if <h3> didn't mean <h3> when it's in an <hgroup>?" This would have been a great place to introduce an <h> tag. Did they just want to avoid breaking backwards compatibility while at the same time not caring about it? I just don't understand...
Meanwhile everybody, from users to search engines to social media platforms to forums to article writers, is still waiting for a <spoiler> tag.
I think, if I remember correctly, that b and i are used to only alter the style of text, while strong and em are used for emphasis in something like a paragraph of text.
So you could use b in something like UI breadcrumbs, but if you wanted to strongly highlight something as the author of some text, you'd use strong.
I'm not really sure we need these though. As for my breadcrumbs example, I think everyone would use CSS rather than a b element.
So, I kinda understand the idea, but I've never needed it.
A thing that comes to mind is that inner monologue in books is sometimes set in italics, so i would make more sense there than em. But when you want to emphasize or stress a word or phrase, em makes more sense than i.
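A rough sketch of how that distinction might look in markup (the breadcrumb and the example sentences are mine, purely illustrative, not from any spec):

```html
<!-- Purely visual bolding: the current breadcrumb item is conventionally
     bold, but nothing is being "emphasized" by the author -->
<nav aria-label="Breadcrumb">
  <a href="/docs/">Docs</a> /
  <a href="/docs/html/">HTML</a> /
  <b>Semantics</b>
</nav>

<!-- Authorial stress and importance -->
<p>You should <em>never</em> store passwords in plain text.
<strong>Doing so may get you fired.</strong></p>

<!-- Inner monologue: italic by typographic convention, not emphasis -->
<p><i>I wonder if anyone reads these tags,</i> she thought.</p>
```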
I would prefer to use a neutral <h> tag, as was proposed in the XHTML 2 spec. It always made more sense to let the browser infer the place in the hierarchy.
I've seen this done with the existing tags and appropriate nesting: <h1> for the masthead, <h2> for major subdivisions in a huge page (essentially sub-mastheads), then <h3> for everything else, with styling (and nesting in ToCs and such) dictated by how deeply they're nested in <section> tags (or <div>s with an appropriate class). <h3> here becomes a neutral header, and <h4> and below are just not used (nor is <h2> on short/medium pages).
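A minimal sketch of that kind of setup (the CSS and the section titles are mine, just to illustrate styling by nesting depth rather than by tag name):

```html
<style>
  /* Deeper selectors have higher specificity, so more deeply nested
     headings get the smaller sizes. A real stylesheet would cover
     more levels and the ToC generation would live elsewhere. */
  section h3                  { font-size: 1.5em;  }
  section section h3          { font-size: 1.25em; }
  section section section h3  { font-size: 1.1em;  }
</style>

<h1>Site masthead</h1>
<h2>Major subdivision of a huge page</h2>

<section>
  <h3>First topic</h3>
  <section>
    <h3>Sub-topic</h3>
    <section>
      <h3>Sub-sub-topic</h3>
    </section>
  </section>
</section>
```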
How does that help with a document structure if everything on the h3 level is the same?
I wish there were a neutral <h> element that could then be rendered as an arbitrary <hN>; sometimes I have documents with headings 8 levels deep.
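For what it's worth, the closest workaround I know of today is ARIA's heading role, which accepts an arbitrary level even though the native tags stop at <h6>:

```html
<!-- Native headings stop at <h6>; role="heading" with aria-level exposes
     deeper levels to assistive technology. Visual styling is up to you. -->
<div role="heading" aria-level="7">Seventh-level heading</div>
<div role="heading" aria-level="8">Eighth-level heading</div>
```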
Going back a lil further to 2011… We had a lil fight[1][2] about the value of using perfect HTML semantics in the community. The outline algorithm was the big defense for <section>, but its problems were clear then: https://www.smashingmagazine.com/2011/11/pursuing-semantic-v...
The APA even removed sex addiction from the DSM-V, which isn't the end-all-be-all of what is or isn't a mental disorder, but is indicative of how science has broadly rejected the idea of sex addiction being a meaningful disorder.
Although the addiction model is controversial and not widely accepted, most professionals do acknowledge that psychological problems can drive maladaptive sexual behavior that causes significant personal and professional impairment. It’s important to stress that pornography use alone is not considered maladaptive, irrespective of local cultural norms that seek to restrict the sexual behavior of consenting adults.
AASECT (the largest professional body of sex therapists in the United States) has a definition of Out of Control Sexual Behavior, and Compulsive Sexual Behavior Disorder is retained by the ICD. Most consumers of pornography do not meet either definition, but it’s important to recognize that some do.
It's pretty standard among browsers. The risk should be about equal to someone spoofing the domains that the browser downloads software updates from, and you can turn it off via prefs if you really don't want it.
Yes, you can donate to the Mozilla Foundation. But there's no guarantee any such donation will actually fund Firefox development, as opposed to some completely unrelated non-technical project of theirs.
And in fact their FAQ for donations doesn't even mention Firefox under their "How will my donation be used?" section:
> At Mozilla, our mission is to keep the Internet healthy, open, and accessible for all. The Mozilla Foundation programs are supported by grassroots donations and grants. Our grassroots donations, from supporters like you, are our most flexible source of funding. These funds directly support advocacy campaigns (i.e. asking big tech companies to protect your privacy), research and publications like the *Privacy Not Included buyer's guide and Internet Health Report, and covers a portion of our annual MozFest gathering.
LLMs are not a general public benefit. Artists whose work is trained upon by text-to-image models aren't made any more whole just because Meta has to share its weights—it just means it's even cheaper for the folks impersonating them or effortlessly ripping off their style to keep doing so.
Meta really does not need to be subsidized when they have so many resources at hand—if LLMs are really hard to train without that much data, then perhaps that's a flaw with the approach instead of something the world has to accommodate.
Maybe it's just me but if I felt like my application's error messages weren't easy enough to understand I'd try to improve the messages instead of throwing all the context at an AI and hoping for the best.
People have been trying to get compilers and runtimes to generate better errors for decades, and sites like StackOverflow exist to backfill the fact that this is a really hard problem. If an AI can get you a better explanation synchronously, doesn't that in fact represent an improvement in the "messages"?
No, because all the AI is doing is making up statistically plausible-sounding nonsense. The best-case output is a correct summary of the documentation page - why add a huge amount of power use alongside massive privacy invasion just to deal with that?
I have read and re-read this article and I don’t understand how this is better for any purpose other than “we put AI in something, increase our stock price!”
That sounds like a generic argument against any AI integration, though. "All they do is make up statistically plausible sounding nonsense" is definitionally true, but sorta specious as it turns out that nonsense is often pretty useful. In particular in this case because it gives you a "summary of the documentation page" you'd otherwise have to go look up, something we know empirically is difficult for a lot of otherwise productive folks.
No. Humans can have actual domain knowledge plus contextual awareness which leads them to actually understand the subject by means of their education, and thus make guesses based on more than linguistic and syntactic plausibility. Educated guesses can be wrong, but are by definition not merely "plausible sounding nonsense."
Yep. The Web console could just link to some documentation.
The link could even be parameterized so the URLs or other elements related to the error replace placeholders in the doc. But I'm sure a developer is capable of enough abstraction to replace example data themselves.
Agreed! It would be really helpful if the console just showed me some documentation, but if Google manages to make something similar to GitHub Copilot, then it could potentially be a game changer.
Sega has long had an intentionally loose policy with regards to fanart and fan games and rarely issues takedowns for projects that don't generate profit. It doesn't seem to have hurt them at all.
Capcom too. An endless number of Mega Man fan games. Quite a few Mega Man bands making music, performing live. Projects that have been active for literal decades now. Do they issue silly takedowns and destroy all this culture? No, they invite them to play at their official events.
It's just corporations like Nintendo that suck. Nintendo is definitely not alone in that group, either. We all need to stop giving them money.
I feel ya, but I am sympathetic to the argument that Nintendo's IP is uniquely valuable and their brand is globally distinctive. They prize their reputation, and the Nintendo authenticity seal has been a thing since forever.
The speed increases are nothing to sneeze at; I've moved a few Vite projects over to Bun and even without specific optimizations it's still noticeably faster.
A specific use case where Bun beat the pants off Node for me was making a standalone executable. Node has a very VERY in-development API for this that requires a lot of work and doesn't support much, and all the other options (pkg, NEXE, ncc, nodejs-static) are out-of-date, unmaintained, support only a single OS, etc.
`bun build --compile` worked out-of-the-box for me, with the caveat of not supporting native node libraries at the time—this 1.1 release fixes that issue.
Bun's standalone executables are great, but as far as I'm aware unlike Deno and Node there's no cross compilation support, and Node supports more CPU/OS combinations than either Deno or Bun. Node supports less common platforms like Windows ARM for example (which will become more important once the new Snapdragon X Elite laptops start rearing their heads [1]).
We'll add cross-compilation support and Windows arm64 eventually. I don't expect much difficulty from Windows ARM once we figure out how to get JSC to compile on that platform. We support Linux ARM64 and macOS arm64.
You have a narrow view of what a beautiful experience is. It does not require professional-level voice acting.
It is not unfair that, in order to have voice acting, you must have someone perform voice acting. You don't have the natural right to professional-level voice acting for free, nor do you need it to create beautiful things.
The tech is simply something that may be possible, and it has tradeoffs, and claiming that it's an accessibility problem does not grant you permission to ignore the tradeoffs.
> You don't have the natural right to professional-level voice acting for free
I also don't have the natural right to work as a professional-level voice actor.
"Natural rights" aren't really a thing, the phrase is a thought-terminating cliché we use for the rhetorical purpose of saying something is good or bad without having to justify it further.
> The tech is simply something that may be possible, and it has tradeoffs, and claiming that it's an accessibility problem does not grant you permission to ignore the tradeoffs.
A few times as a kid, I heard the meme that the American constitution allows everything then tells you what's banned, the French one bans everything then tells you what's allowed, and the Soviet one tells you nothing and arrests you anyway.
It's not a very accurate meme, but still, "permission" is the wrong lens: it's allowed until it's illegal. If you want it to be illegal to replace voice actors with synthetic voices, you need to campaign to make it so, as that isn't the default. (Unlike with using novel tech for novel types of fraud, where fraud is already illegal and new tech doesn't change that).