
That's a pretty good example. The summary is actually useful, yet it still annoys me.

But I'm not usually reading the comments to learn; it's just entertainment (i.e. distraction). And similar to images or videos, I find human-created content more entertaining.

One thing to make such posts more palatable could be if the poster added some contribution of their own. In particular, they could state whether the AI summary is accurate according to their understanding.


I definitely read the comments to learn. I love when there's a post about something I didn't know about, and I love when HN'ers can explain details that the post left confusing.

If I'm looking for entertainment, HN is not exactly my first stop... :P


Interesting example! I've been learning AVX512 by using it to optimize Huffman coding. I found _mm512_permutexvar_epi8 and used it to do byte-indexed lookups, but _mm512_permutex2var_epi8 means I can get by with 2 shuffles and 1 comparison instead of 4 shuffles and 3 comparisons. In the end, on my CPU (AMD 9950x), changing to _mm512_permutex2var_epi8 only sped up compression by ~2%.

Compared to Huff0[1] (used by Zstd), my AVX512 code is currently ~40% faster at both compression and decompression. This requires using 32 data streams instead of the 4 used by Huff0.

[1] https://github.com/Cyan4973/FiniteStateEntropy


Oh, this is cool, I wanted to look into using SIMD for Huffman as well.

For decode, do you use AVX512 to speed up decoding by caching the decoded results of short codewords?

Do you decode serially, or use the self-synchronizing nature of Huffman codes to decode the stream from multiple offsets in parallel? I haven't seen the latter done in SIMD before.

Are there any new SIMD instructions you'd like to see in future ISA extensions?

OpenPOWER has proposed a scalar instruction to speed up prefix-code decoding: https://libre-soc.org/openpower/prefix_codes/


Also just stopped reading at that point. The idea seemed quite clever.


The peak of human civilization, before robots took over.


I could be wrong, but I think that many times the researchers don't care about the exact function. It could be something like 1/log(log(n)).


Yes, I am very aware that many times they don't, but that doesn't mean they shouldn't!

Fortunately, in many cases, even when the detail is omitted from the headline theorem, they did in fact do the work and it is in fact stated lower down in the body of the paper; or in other cases they fail to state it but it can be easily assembled from a few parts. That's why I was asking.

Sometimes, though, it's a big complicated thing and they were just like, eh, let's not bother figuring out exactly what it is. To which I say: laziness! You're just making other people do a proof-mine later! You've already done this much; do the work to get a concrete bound, because you can do it better than later proof-miners can!

I won't say that in an ideally functioning mathematics proof-mining would never be necessary; that'd be like saying that in well-written software one would never need to refactor. But c'mon, mathematicians should at least do what they can to reduce the necessity of it.


https://yuri.is/not-julia/ is a good write-up of one person's opinion on the problems of Julia. I'm much less experienced with Julia but I somewhat agree. There's too much focus on "magic" instead of correctness for me to try building serious software using Julia. An amazing language in many aspects though.


Some of the response in the Julia community: https://discourse.julialang.org/t/discussion-on-why-i-no-lon...


He says he'll port his performance optimizations to the original game once he's done with his game / romhack. Otherwise he'd always have to update two codebases whenever he finds a new optimization.


Back then I remember switching to Chrome for these reasons:

1. It felt faster than Firefox (and still does)

2. Process separation for tabs: Firefox used to crash the whole browser, whereas Chrome would crash just a single tab.

3. It shipped with built-in support for Flash and PDFs


Speed is why I use Safari now. It feels like Chrome did compared to FF back in the day.

It’s painful using Chrome by comparison. If it wasn’t for Chrome’s better dev tools, I doubt I’d use it much.


The only reason Safari feels fast is that it has avoided implementing features. There's stuff that's been present in Chrome/Firefox for 10 years but still doesn't work in Safari.

Does Google still show the old search results page design to Safari users? It did that for a long time.


Frankly, Safari works.

If those features slow down the web then that’s a knock on those features. Plus I’ve yet to see a single one of these supposedly “missing” features actually matter in the real world.


May I recommend Orion? It's based on Safari's engine, but IMO improves massively on its UI; for example, it comes with a fantastic (opt-in) tree-style tab browser. But most importantly, it supports Chrome and Firefox extensions, and most of them just work out of the box.


Not gonna lie, a tree-style tab browser sounds like awful UI to me, but… I’m glad you like it.

Supporting the insecure bloatware that is Chrome & FF extensions tho? Massive convenience factor, I get it, but… ewwww


Orion is fantastic.

I still use Firefox since it is cross-platform, but if I switch to a MacBook as my dev machine I might go all in on Orion as well.


It's funny how massive the impact of "bundle a bunch of proprietary stuff into our open browser" was on the browser wars. Between H264, MP3, Flash and PDF, Firefox never had a chance.


I can't remember Firefox ever being included in installers for Flash updates, antivirus software, etc. with "install by default" checked, either.

Perhaps Chrome did succeed mostly on its own merits, but it wasn't above techniques used by the likes of Bonzi Buddy and the Ask Toolbar to get the job done.


To top it off, it could be argued that Chrome is a worse piece of spyware than Bonzi Buddy ever was.

You don't put a single character into the address bar of Chrome without notifying Google.

Yes, even if you are typing a domain rather than planning to search for something, you have told Google about it, which means they know everything you visit: internal websites at work and everything else.


The problem is that there simply wasn't a better option at the time.

Ogg Vorbis was a novelty at best, and it was the only even moderately adopted open-source competitor to any of the items listed that was available at the time.

HTML5 was only just published when Chrome launched. So Flash was at that point the only option available to show a video in the browser (sure, downloading a RealPlayer file was always an option, but it was clunky, creators didn't like people being able to save stuff locally, and RealPlayer was not open source either). Chrome in fact arguably accelerated the process of getting web video open sourced: Google bought On2 in 2010 to get the rights to VP8 (the only decent H.264 competitor available at that point) so they could immediately open source it. The plan was in fact to remove H.264 from Chrome entirely once VP8/VP9 adoption ramped up[1], but that didn’t end up happening.

Flash was integrated into Chrome because people were going to use it anyway, and having Google distribute it at least let them both sandbox it and roll out automatic updates (a massive vector for malware at the time was ads pretending to be Flash updates, which worked because people were just that used to constant Flash security patches, most of which required a full reboot to apply; Chrome fixed both of those issues). Apple are the ones who ultimately dealt the death blow to Flash, and it was really just because Adobe could not optimize it for phone CPUs no matter what they tried (even the few Android releases of Flash that we got were practically unusable). That also further accelerated the adoption of open source HTML5 technologies.

PDF has been an open standard (ISO 32000) since 2008. While I don't know if pressure from Google is what did it, that wouldn't surprise me. Regardless, the Chrome PDF reader, PDFium, is open source[2], and Mozilla's equivalent project from 2011, PDF.js, is also open source.[3] Both of these projects replaced the distinctly closed-source Adobe Reader plugin that was formerly mandatory for viewing PDFs in the browser.

Chrome is directly responsible for eliminating a lot of proprietary software from mainstream use and replacing it with high-quality open source tools. While they've caused problems in other areas of browser development that are worthy of criticism, Chrome's track record when it comes to open sourcing their tech has been very good.

[1]: https://blog.chromium.org/2011/01/html-video-codec-support-i...

[2]: https://github.com/chromium/pdfium

[3]: https://github.com/mozilla/pdf.js


Exactly. It could even be turtles all the way down, with new building blocks of physics becoming relevant as we go smaller and smaller (and back in time).


Would that be morally close to torturing an equally intelligent real worm?


That’s exactly what stuck in my head. I’m 99.9% sure the answer is no. That little tickle in my brain is the interesting part to think about.

