Another proposal specifies certain legacy RegExp features, such as the RegExp.prototype.compile method and the static properties RegExp.$1 through RegExp.$9. Although these features are deprecated, they unfortunately cannot be removed from the web platform without introducing compatibility issues, so standardizing their behavior and getting engines to align their implementations is the best way forward. This proposal is important for web compatibility.
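For reference, a quick sketch of what those legacy features do in practice (the de-facto behavior most engines already share, which is what the proposal pins down):

    // The legacy static "match memory" properties:
    /(\d+)-(\d+)/.exec("12-34");
    RegExp.$1;          // "12"
    RegExp.$2;          // "34"

    // The legacy compile method, which recompiles an existing RegExp in place:
    const re = /foo/;
    re.compile("bar", "gi");
    re.source;          // "bar"
    re.flags;           // "gi"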
Interesting view. Is this better than a "let it break" approach?
Link rot already claims N% of websites per year. I wonder if cleaning up APIs like this one would increase N noticeably.
I think breaking JS tends to be much more destructive than dead links. At least with a dead link there's a fairly clear non-technical fix. With a web standards break, maybe 8 years ago you hired a contractor to build a website that uses a library that uses a library that uses a library that uses some JS feature that is now broken, so your website is now completely broken. Some of those libraries are on old unmaintained versions, where the only path forward is either a non-trivial breaking upgrade or finding alternative libraries. Getting things working again is a huge undertaking, not just a matter of "don't use that weird JS feature anymore", and in that situation it seems reasonable to blame the browser for the breaking change.
It's also maybe more widespread than you'd think. Adding `global` seemed safe for a long time, but ended up breaking both Flickr and Jira because they both use a library that broke: https://github.com/tc39/proposal-global/issues/20
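The usual failure mode is environment detection. A purely hypothetical sketch of the kind of check that a newly defined `global` can flip (not the actual code from that issue):

    // Hypothetical illustration only. The library assumes a defined `global`
    // means "we are running in Node", then reaches for Node-only objects.
    var isNode = typeof global !== "undefined";
    if (isNode) {
      // Once browsers expose `global`, this branch runs in the browser too
      // and blows up because `process` doesn't exist there.
      var cwd = global.process.cwd();
    }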
> Interesting view. Is this better than a "let it break" approach?
Or perhaps a branch-and-let-the-old-stagnate approach?
Strip the deprecated features to create RegEx2/RegExNG/whatever[1] and build the new features on that. Old code can keep using the old version as it always has, new code can use all the nice new shinies, and the new version doesn't have to worry about backwards compatibility with the broken parts of dark-age UAs. Also make sure everything in the new spec can be polyfilled for cases where new code needs to work on those older UAs.
[1] or while we are branching off, why not answer the complaint that regexes usually aren't actually regular (in the mathematical sense of regular languages) and rename them completely? Maybe SearchExpressions? Or SearchExp if you want something smaller. Or SExp if you want something even shorter and don't mind attracting bad puns.
ECMA is pretty bad about this. For example, in ES5 the RegExp prototype was itself a regexp. In ES6 it became an ordinary object, which broke some things. In ES8 they kept it as an ordinary object, but changed a bunch of regexp methods to special-case the RegExp prototype object. This churn is pointless and baffling.
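Roughly, the observable churn looked like this (a sketch; exact behavior varied by engine and spec revision):

    // ES5: RegExp.prototype was itself a regexp, so e.g.
    //   Object.prototype.toString.call(RegExp.prototype)  -> "[object RegExp]"
    //
    // ES2015 (ES6): it became an ordinary object, so generic accessors started throwing:
    //   RegExp.prototype.source  -> TypeError in early ES2015 engines
    //
    // ES2017 (ES8): still an ordinary object, but source/flags/toString gained
    // special cases for %RegExp.prototype% itself, so today you get:
    RegExp.prototype.source;      // "(?:)"
    RegExp.prototype.flags;       // ""
    RegExp.prototype.toString();  // "/(?:)/"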
Writing code, not strings, means you can do everything to it that you can do to code. You can type check code. You can lint it. You can optimize it, compile it, validate it, syntax highlight it, format it with tools like Prettier, tree shake it...
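To make the contrast concrete, a generic sketch (assuming React is installed; not code from the article):

    // Markup written as code is visible to tooling:
    const React = require("react");
    const el = React.createElement("a", { href: "https://news.ycombinator.com" }, "Hacker News");
    // A linter or type checker can flag a misspelled prop or an unused import here,
    // and a bundler can tree-shake anything this expression never references.

    // The same markup inside a string is opaque unless every tool grows a template plugin:
    const tmpl = '<a v-bind:herf="url">Hacker News</a>';  // the "herf" typo sails through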
Both IntelliJ and Emacs do all of these things for Vue code. That seems to cover >90% of developers' tooling needs.
You've sent us hundreds of emails using many accounts. We responded to all of them until it became obvious that it was only energizing the problem, and even then never stopped responding completely. I doubt there has ever been a case of forum moderators treating a user with greater patience. It isn't infinite though.
Since you won't stop posting abusive and misleading comments to HN, we're not going to unban you.
Welp. It's identical. I even tried to defend the current link with "Well, some cool things can happen when you hook up a UI to a shader..." but the UI doesn't even do anything except turn the rain off and on.
There's lightning too in that other one! That's so cool.
When your site grows large and you move it to a hosted service, or want to point it at a Web Application Firewall or a DDoS mitigator, you might want to use a CNAME record to point your hostname at another, flexible hostname that the vendor manages depending on your traffic and needs.
Now, if your website is hosted at the apex domain (“example.com”), you can’t do that. But there is no issue with the “www” hostname being a CNAME record. So if you want any scaling flexibility, now or in the future, you should go with the www hostname from the beginning.
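Roughly, in zone-file terms (hostnames are placeholders): the apex has to stay an address record because a CNAME can't coexist with the SOA/NS records that live there, while www is free to track whatever hostname the vendor hands you.

    ; illustrative zone snippet only
    example.com.      3600  IN  A      203.0.113.10             ; apex: must be an address record
    www.example.com.  3600  IN  CNAME  edge.vendor-cdn.example. ; www: can follow the vendor's hostname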
Granted, his blog was knocked offline by HN. But would a CNAME have saved his WordPress site?
His face is arguably too red. But on average it’s fine. (Amusing: is this comment correct, or unconsciously biased by a lack of knowledge of what Native Americans actually look like? I admit the latter is possible.)
Humans interpret colors thanks to context. When you strip away context, it’s easy to come up with things that fool you. (Optical illusions are the limit case of this.)
New users can submit up to 2 stories every 3 hours. Bad users can submit up to 1 story every 3 hours.
Looks like the bot is submitting at precisely that limit, but the displayed timestamps aren't precise enough to tell for sure. An interested user could check the API to get the exact times.
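For the curious, a sketch of what that check could look like against the public HN Firebase API (https://github.com/HackerNews/API); the username is a placeholder, and built-in fetch assumes Node 18+:

    // Pull a handful of a user's items and print exact submission times.
    const BASE = "https://hacker-news.firebaseio.com/v0";

    async function recentStoryTimes(username, limit = 20) {
      const user = await (await fetch(`${BASE}/user/${username}.json`)).json();
      for (const id of (user.submitted || []).slice(0, limit)) {  // newest items first
        const item = await (await fetch(`${BASE}/item/${id}.json`)).json();
        if (item && item.type === "story") {
          console.log(new Date(item.time * 1000).toISOString(), item.title);
        }
      }
    }

    recentStoryTimes("somebotaccount");  // placeholder username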
I wish there were some way to contact the bot author. I want to see the code. It's no small feat to write an effective bot.
That's quite a lot. For a site with a single feed where fewer than 50 articles surface each day, one would think one story a day would be more than sufficient for new users.
I think zero stories for new users until 100 karma would be better. It is going to be almost impossible to build 100 karma through comments with automation.
By allowing new users to submit stories, they can build up enough karma to downvote, all through automation. One a day would just slow the process, not stop the spammers.
Eh, I think I disagree. If a story is good for HN, it doesn't matter where it came from or who posted it, or whether it was a bot. That fact is true independently of any other consideration.
If LN turns into anything, it'll be because of Lisp, not in spite of it. It really doesn't matter that the world is phobic to it when you alone are the blacksmith.
One could argue "See? It's running in JS. Doesn't that mean Lisp is useless?"
Maybe. But macros are a thing. And when you can generate React on the fly, without having to make a class for every single thing you want to do, the power disparity starts becoming very apparent.
$ rlwrap bin/lumen-node
> (load "arc.l")
> (print:compile:expand '(<a> href: "https://news.ycombinator.com" "Hacker News"))
React.createElement("a", {href: "https://news.ycombinator.com"}, "Hacker News")
> (print:html (<a> href: "https://news.ycombinator.com" "Hacker News"))
<a href="https://news.ycombinator.com">Hacker News</a>
> (print:html (<html> (<body> (<div> width: "100%" "Hello, world"))))
<html><body><div width="100%">Hello, world</div></body></html>
> (print:html (whitepage "Look ma, no Racket"))
Warning: Each child in an array or iterator should have a unique "key" prop.
Check the top-level render call using <html>. See https://fb.me/react-warning-keys for more information.
    in body
<html><body bgcolor="white" alink="blue">Look ma, no Racket</body></html>
You can see it's actually JS, since you get all the same warnings you'd normally get in a Node REPL. (It's literally running on Node.)
And (print:html (msgpage 'shawn toofast*)) spits out a page saying "You're submitting too fast. Please slow down. Thanks."
I find it much easier to generate HTML this way than with traditional methods, and the result is much more maintainable. But it's not fair to claim that something new is inherently better. Time will tell.
I'm most interested in any unexpected warts, but there don't seem to be any so far.