OlivierLi's comments

Yep, that's pretty much it. There's no pushing of the version from the privileged process to the renderers, only pulling. So there aren't many renderers unblocking at the same time to create any kind of herd.
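A minimal sketch of that pull model, with made-up names (nothing here is Chromium's actual API): the privileged side bumps a shared version counter and never signals anyone; each renderer reads it on its own schedule, so there is no simultaneous wakeup to form a herd.

```python
# Sketch of the pull model: the privileged process bumps a shared version
# counter; renderers read it whenever they next need it. Nothing signals
# the renderers, so they cannot all wake at once. Names are hypothetical.
from multiprocessing import Array, Process, Value


def renderer(version, seen, idx):
    # A renderer "pulls" the current version at its own pace.
    seen[idx] = version.value


def run_demo(n_renderers=3):
    version = Value("i", 0, lock=False)        # shared counter in shared memory
    seen = Array("i", n_renderers, lock=False)  # what each renderer observed
    version.value = 7                           # privileged side updates in place
    procs = [Process(target=renderer, args=(version, seen, i))
             for i in range(n_renderers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(seen)


if __name__ == "__main__":
    print(run_demo())  # each renderer independently observed the same version
```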


This is interesting stuff. I'm one of the authors of the change, and I'm working on the next iteration. If you have an example of what you describe, I'd love to take a look!


Right now it's falling back on a Mojo IPC. The next step is indeed a shared-memory mutex :)


Hey, one of the authors here. That's pretty much it. That said, I'm currently working on the next iteration of this, which would indeed share more. For now it's not super trivial, because it needs a cross-platform, condition-variable-like abstraction that works across shared memory. The pthread-based one is not that bad, and I'm hacking on it.
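For illustration only, the wait/notify-over-shared-state pattern in question might be sketched like this, with Python's `multiprocessing.Condition` standing in for the `PTHREAD_PROCESS_SHARED` condition variable a C++ implementation would place directly in shared memory (all names here are hypothetical):

```python
# One process waits on shared state; another updates the state and signals.
# multiprocessing.Condition (built on semaphores) stands in for a
# process-shared pthread condition variable living in shared memory.
from multiprocessing import Condition, Process, Value


def waiter(cond, state, result):
    with cond:
        # Classic condvar discipline: re-check the predicate in a loop,
        # since wakeups may be spurious.
        while state.value == 0:
            cond.wait()
        result.value = state.value


def run_demo():
    cond = Condition()
    state = Value("i", 0)
    result = Value("i", -1)
    p = Process(target=waiter, args=(cond, state, result))
    p.start()
    with cond:
        state.value = 5  # publish the new state under the lock...
        cond.notify()    # ...then wake one waiter
    p.join()
    return result.value


if __name__ == "__main__":
    run_demo()
```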


In systems performance I would advise never thinking of any workload as unidimensional (i.e., assuming that any file system optimization either improves IO latency or is useless).

Issuing individual truncates of 1B files, for example, can be just as much of a CPU problem as an IO one.
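A rough way to see the CPU component of such a workload is to compare process CPU time against wall time, as in this hypothetical sketch (iteration counts and file sizes are invented):

```python
# Repeatedly truncate a tiny file and report (wall_time, cpu_time).
# When cpu_time is a large share of wall_time, the workload is spending
# its budget in the syscall path rather than waiting on the disk.
import os
import tempfile
import time


def truncate_cost(iterations=20000):
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    try:
        wall0, cpu0 = time.perf_counter(), time.process_time()
        for i in range(iterations):
            os.truncate(path, i % 2)  # alternate between 0 and 1 bytes
        return time.perf_counter() - wall0, time.process_time() - cpu0
    finally:
        os.unlink(path)
```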


But why wouldn't using one of many CPU cores be sufficient?


Using some scripts/parsers to take DTrace/perf/Instruments/ETW data and convert it to Perfetto was one of the most exciting moments of my performance engineering career. It's such a powerful thing compared to every single other workflow I've ever used.

It just shows contention in a way that is so hard to see otherwise.

If this tool wraps some of that in an easier-to-use package, it's going to be a great tool for some.
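As a minimal sketch of that scripts/parsers idea (the sample tuples are invented): profiler samples can be rewritten as Chrome JSON trace events, a format the Perfetto UI opens directly.

```python
# Convert (timestamp_us, duration_us, name, pid, tid) samples into Chrome
# JSON trace events, which https://ui.perfetto.dev can load as-is.
import json


def to_chrome_trace(samples):
    events = []
    for ts, dur, name, pid, tid in samples:
        events.append({
            "ph": "X",   # "complete" event: begin + duration in one record
            "ts": ts,    # microseconds
            "dur": dur,
            "name": name,
            "pid": pid,
            "tid": tid,
        })
    return {"traceEvents": events}


if __name__ == "__main__":
    trace = to_chrome_trace([(0, 1500, "ParseStyles", 1, 10),
                             (200, 900, "IndexedDBWrite", 2, 20)])
    with open("trace.json", "w") as f:
        json.dump(trace, f)  # open this file in the Perfetto UI
```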


"It just shows contention in a way that is so hard to see otherwise."

Could you elaborate on this? I've never used it myself.


You start with a trace event you care about and see that it's too slow. The first step is that the thread-state indicators above your event show explicitly that it's not because the code is slow: you are constantly getting descheduled.

Then you navigate to the CPU tracks, find the thread(s) that were running instead of the one you care about, and directly inspect their stacks. Sometimes almost nothing is running. Maybe the contention is not even on the CPU.

So the "parsing styles was slow" conclusion you would get from only looking at histograms turns into "parsing styles was slow because I couldn't get my fonts, and that was slow because IndexedDB was hogging the hard drive from another process".

Edit: I should mention the user-provided trace events are very important here. You get flow arrows clearly highlighting task posting and IPC responses, as well as user interactions. Entry/exit-style uprobes on functions are great, but I found they won't get you all the way there on a large application.
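The flow arrows mentioned here are themselves just trace events in the Chrome JSON format: a flow start on the posting thread and a flow finish on the executing thread, tied together by a shared id. A hypothetical helper (all values invented):

```python
# Build a flow-start / flow-finish pair linking a task post on one
# (pid, tid) to its execution on another, per the Chrome trace-event format.
def flow_pair(flow_id, name, post, run):
    post_pid, post_tid, post_ts = post
    run_pid, run_tid, run_ts = run
    return [
        {"ph": "s", "id": flow_id, "name": name, "cat": "ipc",
         "pid": post_pid, "tid": post_tid, "ts": post_ts},
        {"ph": "f", "id": flow_id, "name": name, "cat": "ipc", "bp": "e",
         "pid": run_pid, "tid": run_tid, "ts": run_ts},
    ]
```

Note that for the arrows to render, each flow event must fall inside a duration event on its thread.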


Thanks!


I'm curious: how does it work on Windows?


Merry Christmas :D


> Merry Christmas :D

Seeing as I was born on Christmas Day, I got only the one present, which I figured was cheating. My parents said the one present would have to do :]


Merry Christmas and happy birthday :)

My son was born on the 26th and he is not happy either.


No mention of any language server implementation?

I highly recommend rtags (even if it's not strictly LSP).

https://github.com/Andersbakken/rtags


For C++, using ccls[1] is way better.

[1] https://github.com/MaskRay/ccls


clangd is really good these days (clangd-9); I suggest you give it a try. It's incredibly easy to set up if your distribution packages it. I'm using it with Emacs' lsp-mode, and it's great.


The high-tech finance sector is also non-negligible.


Tip: setting cd as an alias for pushd makes the whole thing much more intuitive.

