
Scribe aaS? ;)


It's pretty useful when you have some distribution layer (e.g., a pubsub system).

Consider 10-15 applications running on a host, all of them listening to data being distributed by another service. Instead of each of them opening a connection to that service, they would all connect to this sidecar, and the sidecar would merge the distribution of data (and subscriptions) to the pubsub system.
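
A rough sketch of the merge (all names here are hypothetical - the point is one upstream subscription per topic, fanned out to N local clients):

  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  // Hypothetical sidecar: local apps subscribe here; the sidecar keeps
  // exactly one upstream subscription per topic and fans messages out.
  class Sidecar {
    using Handler = std::function<void(const std::string&)>;
    std::map<std::string, std::vector<Handler>> local_subs_;

  public:
    void subscribe(const std::string& topic, Handler h) {
      auto& handlers = local_subs_[topic];
      if (handlers.empty())
        upstream_subscribe(topic);       // first local subscriber: one upstream sub
      handlers.push_back(std::move(h));  // later subscribers are merged
    }

    // Called by the pubsub client when a message for `topic` arrives.
    void on_upstream_message(const std::string& topic, const std::string& msg) {
      for (auto& h : local_subs_[topic]) h(msg);  // fan out to local apps
    }

  private:
    void upstream_subscribe(const std::string& /*topic*/) {
      // the single connection to the pubsub system lives here
    }
  };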


What would be the advice for Canadians who want to start a company, but are unable to do so because they are on a TN/H1B?

As I see it my only options are:

a) Move back to Canada, start company there. Does this impede getting VC funding? Can I somehow move back to the US if my company becomes successful?

b) Wait until I get a greencard.

Further question:

c) Is it better to go TN->PERM or TN->H1B->PERM? What are the drawbacks of the first option?


Get some tax advice!!! If you own more than 10% of a foreign company your US taxes become very, very complicated. I relinquished my green card after my first tax filing. If you want to be in the US, found your company in the US.


Good advice. Always get tax and corporate advice. There's no legal issue with going from a TN to a green card, but because the green card backlogs make the process take so long, you could end up in the difficult position of needing to renew your TN or travel internationally mid-process and being unable to. So an intervening H-1B is helpful, just not required.


I'm confused. I thought on a TN or H1B I'm not even allowed to start a company in the US.


Your option A involved incorporating a Canadian company and moving to the US. My comment was on your option A. It is extremely painful to own a Canadian company and have USA tax filing obligations. If you absolutely must do this, make sure your company’s fiscal year falls on the calendar year boundary. I found the reporting onerous and chose to stay in Canada.


Oh yeah, for option A I meant to just stay in Canada until my company hopefully becomes profitable, and then "transfer" it to the US and move there if I can somehow self-sponsor/sponsor through the company.


IMO, passing a lambda for synchronized code makes it much easier to read (going off of my experience working with folly::Synchronized).
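
For those who haven't used it, the shape is roughly this (a sketch; check the folly docs for the exact signatures):

  #include <folly/Synchronized.h>

  #include <vector>

  folly::Synchronized<std::vector<int>> values;

  void record(int x) {
    // The critical section is exactly the lambda body: the lock is
    // taken before it runs and released when it returns, and the
    // protected vector is only reachable through the parameter.
    values.withLock([&](std::vector<int>& v) {
      v.push_back(x);
    });
  }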


The C++ was probably compiled without optimizations. In an optimized binary there won't even be any calculation done at runtime - see for yourself: https://godbolt.org/z/Mhhzhdr7c
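
The benchmark is essentially this shape (a sketch, not necessarily the exact code behind the link); with -O2, GCC and Clang fold the whole loop at compile time:

  #include <cstdint>
  #include <cstdio>

  int main() {
    std::int64_t sum = 0;
    for (std::int64_t i = 1; i <= 1'000'000; ++i) sum += i;
    // At -O2 the loop is folded away entirely; the binary just prints
    // the precomputed constant 500000500000 (= 1,000,000 * 1,000,001 / 2).
    std::printf("%lld\n", static_cast<long long>(sum));
  }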


I did it, and now c++ is slightly faster than Rust :)


But likely not at calculating that sum (which, nitpicking, isn’t “calculating the sum of 1M numbers”, but “calculating the sum of integers 1 through 1,000,000”), but at printing a constant.

It's possible the C library you used is faster than Rust's at converting this particular integer to a string (performance will vary between libraries, and not necessarily with one always being faster than the other) and writing it out - but it's also possible that the C compiler did the string conversion at compile time too, and compiled it down to a call to puts (if you printed a newline at the end of your string).


over here we never say more faster, instead we say fasterer or simply fasterrr


oops, edited


About time. The amount of value you get from random conversations with your coworkers is too much to pass up. The breakdown of communication over text also sucks.


What's better about git? I haven't used svn, but I've used perforce, mercurial, and git professionally (and git personally), and I find all of them to provide the same "feature set" for basic development: have your own branch, and merge it into the main branch when done/reviewed.

Merging seems the same on all 3 version control systems I've used... I've heard that git branching is better(?), but haven't seen that being used anywhere really.


> I haven't used svn

Yeahhhh. Svn is centralized. You must be connected to the server to do any source control work. There is no local repository, there are no local commits. Every commit is global on the server. When you make a commit, it is pushed to the server and your coworkers can fetch it. You don't make commits locally and fiddle around and then push.

Also Svn doesn't have branches per se. You just use subdirectories. It does have facilities for merging directories and managing these "branches", but it feels real weird to be switching branches with 'cd'.

It's a very different world.
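
Concretely, a "branch" is just a cheap copy of a directory inside the repository, and switching rewires your working copy:

  svn copy ^/trunk ^/branches/my-feature -m "Create feature branch"
  svn switch ^/branches/my-feature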

A quick read: https://svnbook.red-bean.com/en/1.7/svn.tour.cycle.html


> You must be connected to the server to do any source control work. There is no local repository...

That's not technically correct. You can create a repository on your machine, on the local FS.

But of course it is more reliable to run it on a server, even just for yourself. If you're on Windows, VisualSVN is a one-click solution for those who just want the thing working.


Am I understanding you correctly if I compare it to making the argument that, technically, you could run any Internet website from your laptop and network connectivity isn't actually required for using the web?


I am referring exactly to "must be connected to the server to do any source control work. There is no local repository...", which is plainly wrong by any measure.

The SVN client supports the "file:" protocol just as well as "svn:". A server is not mandatory with SVN - you can work with repositories on your local HD or a network share.
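
For example, a purely local setup with no server process at all:

  svnadmin create ~/repos/project
  svn checkout file://$HOME/repos/project project-wc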


"Server" was perhaps the wrong word. "Central repo", if you like, regardless of connection method. What I meant is that "svn checkout" does not make a new repository, as "clone" does in decentralized source control systems. You must interact with the central repository (wherever it is stored, locally or over the network) to do version control work.


I have used svn in a decentralised manner, pretty much as the person above me describes, for years. It's super simple to do: you create a local repo, populate it from "origin", and when you want to commit your local changes, you make a "foreign merge" request from "origin".

Your argument that this can't be done in svn because "there's a central repo" could be translated to git as: "yeah, you can't work on your machine without internet because you won't be able to push to GitHub".


> Also Svn doesn't have branches per se. You just use subdirectories. It does have facilities for merging directories and managing these "branches", but it feels real weird to be switching branches with 'cd'.

Also, this means that it's possible to do some horrifying things with branches and tags, like making a merge commit which is isolated to a single directory, or checking out a tag and making a commit to it.

Hopefully no one is actually depending on these workflows being possible, because they make project history extremely hard to follow.


> I've heard that git branching is better(?), but haven't seen that being used anywhere really.

How are you merging without branches?

Git is mostly faster and more flexible than svn, and the merging works far better. Unless svn's merging has improved in the past decade or so, which is entirely possible.

When I switched to git from svn, the main differences were: merging was usable, making new branches and switching branches and such were _instant_ instead of many seconds, and I could work more flexibly (git doesn't require being connected to the server).


Yes, that's actually the thing about branching in SVN. Everyone remembers how awful it was 10+ years ago under SVN 1.4 and earlier, but it has improved immensely since then. Combined with modern client tooling (e.g. TortoiseSVN), problems with merging have been almost non-existent for a long time.

I certainly wouldn't call SVN modern, but it's very well maintained and has never lost code on me. Many git-like features also exist now, such as being able to stash some changes in order to pivot to something else for a bit. Except for the central server being a problem for some use cases, SVN just works.


I meant, using branches in a way that's "better" - whatever people who use git mean when they say that.

As I said, I haven't used SVN. It just seems like perforce and mercurial are basically "identical" for the ways I use them at least.


> Merging seems the same on all 3 version control systems I've used... I've heard that git branching is better(?), but haven't seen that being used anywhere really.

Much better working merges were the reason many people moved off SVN, but since then SVN has just gotten better at it.


You have to use something similar to https://github.com/facebook/folly/tree/main/folly/experiment... to solve this problem.

It's a nasty bug that everyone encounters when first working with coroutines. (Similarly, everyone will encounter dangling references, because nothing actually runs until you co_await the task.)
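
For anyone who hasn't hit it yet, the capture bug looks roughly like this (a sketch assuming a lazily started task<T> type): the temporary closure dies at the end of the full expression, but the coroutine frame only holds a pointer to it, not a copy.

  task<int> make_task() {
    int x = 42;
    // BUG: the closure (holding the by-value copy of x) is a temporary
    // destroyed at the end of this statement - but the lazily started
    // coroutine body only runs later, at co_await time, and reads
    // through the dead closure.
    return [x]() -> task<int> { co_return x; }();  // use-after-free
  }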


C++20 coroutines demonstrated a side of committee-led modern C++ that leaves the impression it is ultimately designed by and for library writers. Instead of being pointed at a third-party library or told to "wait for C++23" for a better experience, I'd love to see the related machinery released in the standard library at the same time.


Alternatively, you can pass the things you would have captured as arguments to the lambda (by value!), and they remain valid for the duration of the coroutine. So you can have a lambda returning a coroutine, like:

  task<Foo> t = [foo]() {
    // The inner coroutine lambda has no captures; its parameter f is
    // copied into the coroutine frame, so it outlives this outer call.
    return [](auto f) -> task<Foo> {
      co_await something();
      co_return f;
    }(foo);
  }();


To me it's not obvious. I wouldn't think that an app could inject JS into a website just because I'm using a web browser embedded in their app.


> I was taught that to allocate memory was to summon death itself to ruin your performance. A single call to malloc() during any frame is likely to render your game unplayable. Any sort of allocations that needed to happen with any regularity required writing a custom, purpose-built allocator, usually either a fixed-size block allocator using a freelist, or a greedy allocator freed after the level ended.

Where do people get their opinions from? It seems like opinions now spread like memes - someone you respect, or who has done something in the world, says it, and you repeat it without verifying any of their points. Gamedev seems to have the biggest "C++ bad and we should all program in C" community out there.

If you want a good malloc impl just use tcmalloc or jemalloc and be done with it


I'm a sometimes real-time programmer (not games - sound, video, and cable/satellite crypto). malloc(), even in Linux, is anathema to real-time coding (because deep in the malloc libraries are mutexes that can cause priority inversion). If you want to avoid the sort of heisenbugs that occur once a week and cause weird sound burbles, you don't malloc on the fly - instead you pre-allocate from non-real-time code and run your own buffer lists.
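
A minimal single-threaded sketch of that pre-allocate-and-freelist approach (a toy illustration - a real one needs a lock-free stack or per-thread pools):

  #include <cstddef>

  // Fixed-size block pool: every allocation happens once, up front, on
  // a non-real-time thread; the real-time thread only pops/pushes nodes.
  struct Block {
    Block* next;
    alignas(std::max_align_t) unsigned char data[512];
  };

  class BlockPool {
    Block* head_ = nullptr;

  public:
    explicit BlockPool(std::size_t count) {  // startup, non-real-time
      for (std::size_t i = 0; i < count; ++i) {
        Block* b = new Block;  // the only heap allocation that ever happens
        b->next = head_;
        head_ = b;
      }
    }

    void* acquire() {  // O(1), no syscalls, no heap lock
      Block* b = head_;
      if (!b) return nullptr;  // pool exhausted - caller's policy decides
      head_ = b->next;
      return b->data;
    }

    void release(void* p) {  // O(1), returns the block to the freelist
      Block* b = reinterpret_cast<Block*>(
          static_cast<unsigned char*>(p) - offsetof(Block, data));
      b->next = head_;
      head_ = b;
    }
  };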


Mutexes shouldn't be able to cause priority inversion; there's enough info there to resolve the inversion unless the scheduler doesn't care to - i.e., you know the priority of every thread waiting on it. I guess I don't know how the Linux scheduler works, though.

But it's not safe to do anything with unbounded time on a realtime thread, and malloc takes unbounded time. You should also mlock() any large pieces of memory you're using, or at least touch them first, to avoid swap-ins.
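
For reference, the page-pinning part looks roughly like this on POSIX systems (minimal error handling):

  #include <stdio.h>
  #include <sys/mman.h>

  int main() {
    // Lock all current and future pages into RAM so the real-time
    // threads never stall on a page fault or swap-in.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
      perror("mlockall");
    // ... start real-time threads ...
  }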


If you have to wait on a mutex to get access to a shared resource (like the bookkeeping inside your malloc's heap), then you have to wait in order to make progress - and if the thread that's holding it is at a lower priority and gets pre-empted by something lower than you but higher than it, then you can't make progress (unless your mutex gives the thread holding it a temporary priority boost when a higher-priority thread contends for the mutex).

(this is not so much an issue with linux but with your threading library)

I'm completely in agreement that you shouldn't be mallocing - that was kind of my point. If you just got a key change from the cable stream and you can't get it decoded within your few-millisecond window before the on-the-wire crypto changes, you're screwed (I chased one of these once that only happened once a month, when you paid your cable bill .....)
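
For reference, POSIX exposes exactly that "temporary priority boost" as a mutex protocol; a minimal sketch:

  #include <pthread.h>

  pthread_mutex_t heap_lock;

  void init_lock() {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // A thread holding this mutex is boosted to the priority of the
    // highest-priority thread blocked on it, preventing the inversion.
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&heap_lock, &attr);
    pthread_mutexattr_destroy(&attr);
  }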


> (this is not so much an issue with linux but with your threading library)

If your threading library isn't capable of handling priority inheritance, then it's probably Linux's fault for not making it easy enough to do. This is a serious issue on AMP (aka big.LITTLE) processors: if everything waits on the slow cores with no inheritance, then everything will be slow.


Aside from the performance implications being very real (even today, the best first step in micro-optimizing is usually to kill/merge/right-size as many allocations as possible), up through ~2015 the dominant consoles still had very little memory and no easy way to compact it. Every single non-deterministic malloc was a small step toward death by fragmentation. (And every deterministic malloc would see major performance gains with no usability loss if converted to, e.g., a per-frame bump allocator, so in practice any malloc you were doing was non-deterministic.)
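
A per-frame bump allocator is only a handful of lines - roughly this shape (a sketch, not any particular engine's API):

  #include <cstddef>
  #include <cstdint>

  // Per-frame "bump" arena: allocation is a pointer increment, and the
  // whole frame's memory is released at once by reset() - no free list,
  // no fragmentation, fully deterministic.
  class FrameArena {
    std::uint8_t* base_;
    std::size_t cap_;
    std::size_t used_ = 0;

  public:
    FrameArena(void* buffer, std::size_t cap)
        : base_(static_cast<std::uint8_t*>(buffer)), cap_(cap) {}

    void* alloc(std::size_t n,
                std::size_t align = alignof(std::max_align_t)) {
      std::size_t p = (used_ + align - 1) & ~(align - 1);  // align up
      if (p + n > cap_) return nullptr;  // frame memory budget blown
      used_ = p + n;
      return base_ + p;
    }

    void reset() { used_ = 0; }  // call once per frame, after the frame ends
  };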


If this person was taught game dev any time before about 2005, that would have still been relevant knowledge. Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.

> If you want a good malloc impl just use tcmalloc or jemalloc and be done with it

This wasn't applicable until relatively recently.


> Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.

... it still does? I had a case a year or so ago (on then-latest Linux/GCC/etc.) where a very sporadic allocation of 40-something bytes (very exactly: inserting a couple of int64s into an unordered_map at the wrong time) in a real-time thread was enough to go from "ok" to "unusable".


i suppose so.

Modern engines generally have a memory handler, which means that mallocs are usually couched in some type of asset management. You are also discouraged from suddenly extending the working memory of the scene. When I was doing gamedev, even then, there was no reason to do a big malloc, because everything was already done for you, with good guardrails.


I mean, if it's a custom memory handler, pool allocator, etc., it's not what people generally mean by malloc, which is the call to the libc function.


If you go way back in the blog author's archives, probably about ten years now, you will find another memory-related rant on how multisampled VST instrument plugins should be simple and "just" need mmap.

I did, in fact, call him out on that. I did not know exactly how those plugins worked then (though I have a much better idea now), but I already knew that it couldn't be so easy. The actual VST devs I shared it with concurred.

But it looks like he's simply learned more ways of blaming his tools since then.


As always, there is some truth to it - the problem with the MSVCRT malloc described in this blog article is living proof - but these days it's definitely not a rule that holds in 100% of cases. Modern allocators are really fast.


Strong agree. I recently wrote a semi-popular blog post about this. https://www.forrestthewoods.com/blog/benchmarking-malloc-wit...

It's interesting that LLVM suffers so horrifically using the default malloc. I really wish the author had done a deeper investigation into exactly why.


Discussed here:

Benchmarking Malloc with Doom 3 - https://news.ycombinator.com/item?id=31631352 - June 2022 (30 comments)

