Take a CO2 meter into a hotel, office, convention center, etc. in summer. In many, it will read 1200+ ppm; I've seen up to 2500. This is due to "green" interior air recycling that just happens to save companies a few dollars on air conditioning. How much of society is operating under this lower-IQ condition?
I take it as a sign of typical increasing corporate dysfunction. Obvious problems, some even easy and uncontroversial, don't get fixed. Why?
The people who can fix them are not in control. The org must be very top-down. But Steve Jobs had a top-down style, so what's the difference? It's: using and caring about the product.
It's top-down direction with the people at the top not using/caring about the product. Presumably they're concerned with other things, like efficiency, stocks, clout.
Also, if you had a majorly obvious bug, you could email steve@apple.com, and he would forward it to a VP, who would be fired if it wasn't fixed ASAP. I knew a guy who lost his job that way, so it's not just a myth. Steve really was like that.
The wrath of Steve was a real thing that people feared.
On the one hand, this kind of thing seems like a mercurial, dictatorial management style. On the other hand, you do need a way to cut through the levels of hierarchy in a large company so that fixes like this don't always get bogged down in process and stakeholder meetings.
It seemed to work for Apple/Steve Jobs, but I'm not convinced it would work for everyone.
This is very interesting and disturbing. We are outsourcing our decision making to an algorithmic “Mentalist” and will reap a terrible reward. I need to wean myself off the comforting teat of the chatbot psychic.
Useful take, thanks for mentioning specifics. Some of these I wasn't aware of.
- What makes load balancing easier with SSE? I imagine that balancing reconnects would work similarly to WS.
- Compression might be a disadvantage for binary data, which WS specializes in.
- Browser inspection of SSE does sound amazing.
- Mobile duplex antenna is way outside my wheelhouse, sounds interesting.
Can you see any situation in which websockets would be advantageous? I know that SSE has some gotchas itself, such as limited connections (6) per browser. I also wonder about the nature of memory and CPU usage for serving many clients on WS vs SSE.
I have a browser game (few players) using vanilla WS.
- Load balancing is easier because your connection is stateless. You don't have to connect to the same server when you reconnect, and your up traffic doesn't have to go to the same server as your down traffic. WebSockets tend to come with a lot of connection context. With SSE you can easily kill nodes, and clients will reconnect to other nodes automatically (see the sketch after this list).
- The compression is entirely optional, so when you don't need it, don't use it. What's great about it, though, is that it's built into the browser, so you're not having to ship a decompressor to the client first.
- The connection limit of 6 only applies to http1.1, not http2/3. If you are using SSE you'll want http2/3. But generally you want http2/3 from your proxy/server to the browser anyway, as it has a lot of performance/latency benefits (you'll want it for multiplexing your connections anyway).
- In my experience CPU/memory usage is lower than with websockets. Obviously, languages with virtual/green threads (Go, Java, Clojure) make them more ergonomic to use. But a decent async implementation can scale well too.
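To make the load-balancing point concrete, here is a minimal TypeScript sketch of the browser side, assuming a hypothetical `/events` SSE endpoint behind a load balancer (the endpoint name and the `render` function are illustrative, not from the post above):

```ts
// Down traffic: one long-lived, stateless SSE stream. If the node serving
// it dies, the browser reconnects on its own, and the load balancer is
// free to route the new connection to any other node.
const source = new EventSource("/events");

source.onmessage = (e: MessageEvent) => {
  // Each message is self-contained; no per-connection session state lives
  // on the server, so it doesn't matter which node produced it.
  render(JSON.parse(e.data));
};

source.onerror = () => {
  // Built-in behavior: the browser retries automatically (honoring any
  // `retry:` interval the server sent). There is no reconnect code to write.
  console.log("stream dropped; browser will reconnect to whichever node the LB picks");
};

declare function render(state: unknown): void; // game-specific, assumed to exist
```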
Honestly, and this is just an opinion: no, I can't see when I would ever want to use websockets. Their reconnect mechanisms are just not reliable enough, and their operational complexity isn't worth it. For me at least it's SSE or a proper gaming netcode protocol over UDP. If your browser game works with websockets, it will work with SSE.
I appreciate the answers. For others reading, I also just ran across another thread where you posted relevant info [0]. In the case of my game, I'm going to consider SSE, since most of the communication is server to client. That said, I already have reconnects etc. implemented.
In my research I recall some potential tradeoffs with SSE [1], but even there I concluded they were minor enough to consider SSE vs WS a wash [2], even for my uses. Looking back at my bookmarks, I see that you were present in the threads I was reading; how cool. A couple of WS advantages I am now recalling:
SSE is one-way, so for situations with lots of client-sent data, a second connection has to be opened (with overhead). I think this came up for me since if a player is sending many events per second, you end up needing WS. I guess you're saying to use UDP, which makes sense, but that has its own downsides (firewalls, and WebRTC/WebTransport not being ready).
Compression in SSE would be negotiated during the initial connection, I have to assume, so it wouldn't be possible to switch modes or mix in pre-compressed binary data without reconnecting or base64-ing binary. (My game sends a mix of custom binary data, JSON, and gzipped data which the browser can decompress natively.)
Edit: Another thing I'm remembering now is order of events. Because WS is a single connection and data stream, it avoids network related race conditions; data is sent and received in the programmatically defined sequence.
With http2/3 it's all multiplexed over the same connection, and as far as your server is concerned that up request/connection is very short-lived.
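For what it's worth, here's a sketch of what that looks like from the browser, reusing the hypothetical endpoints from the sketch earlier in the thread: over http2/3 each up request is just another short-lived stream multiplexed onto the connection the SSE stream already holds open.

```ts
// Down traffic: one long-lived SSE stream.
const events = new EventSource("/events");
events.onmessage = (e: MessageEvent) => applyState(JSON.parse(e.data));

// Up traffic: plain POSTs. Over http2/3 these don't open new TCP
// connections; they're multiplexed as short-lived streams alongside the
// SSE stream, so the "second connection" costs almost nothing.
function sendUp(event: object): void {
  void fetch("/input", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

declare function applyState(state: unknown): void; // game-specific, assumed to exist
```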
Yeah, mixed formats for compression is probably a use case (like you said, once you commit to compression with SSE there's no switching during the connection). But then you still need to configure compression yourself with websockets. The main compression advantage of SSE is that it's not per message, it's for the whole stream. The implementations of compression with websockets I've seen have mostly been per-message compression, which is much less of a win (I'd get around 6:1, maybe 10:1 with the game example, not 200:1, and pay a much higher server/client CPU cost).
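A minimal sketch of what whole-stream compression can look like server-side, assuming Node and a client that sent `Accept-Encoding: gzip` (the setup is illustrative, not from the post above): a single gzip context spans the entire stream, so structure repeated across messages keeps compressing.

```ts
import { createServer } from "node:http";
import { createGzip, constants } from "node:zlib";

const server = createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Content-Encoding": "gzip", // assumes the client advertised gzip support
    "Cache-Control": "no-cache",
  });

  // Z_SYNC_FLUSH pushes each write to the client immediately instead of
  // letting it sit in the compressor's buffer.
  const gzip = createGzip({ flush: constants.Z_SYNC_FLUSH });
  gzip.pipe(res);

  // One compression context for the life of the stream: every message
  // benefits from the dictionary built up by the messages before it.
  const timer = setInterval(() => {
    gzip.write(`data: ${JSON.stringify({ t: Date.now() })}\n\n`);
  }, 1000);

  req.on("close", () => {
    clearInterval(timer);
    gzip.end();
  });
});

server.listen(8080);
```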
Websockets have similar issues with firewalls and TCP. So in my mind if I'm already dealing with that I might as well go UDP.
As for ordering, that's part of the problem that makes websockets messy (with reconnects etc.). I prefer to build resilience into the system, so in the case of that demo I shared, if you lose your connection and reconnect, you automatically get the latest view (there's no playback of events that needs to happen). SSE will automatically send the last received event ID on reconnect (so you can play back missed events if you want; not my thing personally). I mainly use the event ID as a hash of the content: if the hash is the same, don't send any data, since the client already has the latest state.
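Here's a sketch of that hash-as-event-ID idea, assuming Node and with made-up function names: the `id:` field carries a hash of the view, the browser echoes it back in the `Last-Event-ID` header when it reconnects, and an unchanged view costs no bytes.

```ts
import { createHash } from "node:crypto";

// Hash the serialized view so the event ID doubles as a content fingerprint.
function viewHash(view: unknown): string {
  return createHash("sha256").update(JSON.stringify(view)).digest("hex");
}

// Write the view only if the client doesn't already have it.
function writeView(
  res: NodeJS.WritableStream,
  view: unknown,
  lastEventId?: string,
): string {
  const id = viewHash(view);
  if (id === lastEventId) return id; // client already has the latest state
  res.write(`id: ${id}\ndata: ${JSON.stringify(view)}\n\n`);
  return id;
}

// On a fresh connection, read what the client last saw:
//   const lastEventId = req.headers["last-event-id"] as string | undefined;
// then call writeView(res, currentView(), lastEventId) on each tick.
```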
By design, with the way I build things (CQRS), up events never have to be ordered with down events. Think about a game loop: my down events are basically a render loop. They just return the latest state of the view.
If you want to order up events (rarely necessary), I can batch on the client to preserve order. I can use a client timestamp/hash of the last event (if you want to get fancy), and the server orders and batches those events in sync with the loop, i.e. everything it got in the last X time (like blockchains/trading systems). This is only per-client ordering, not distributed ordering across clients; otherwise you get into Lamport clocks etc.
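A minimal sketch of that client-side batching, with hypothetical names throughout: inputs queue locally with a sequence number and timestamp, then flush as one ordered batch per tick, so per-client order survives however individual requests race.

```ts
type UpEvent = { seq: number; at: number; payload: unknown };

let seq = 0;
let queue: UpEvent[] = [];

// Inputs go into a local queue in the order they happened.
function enqueue(payload: unknown): void {
  queue.push({ seq: seq++, at: Date.now(), payload });
}

// Flush in sync with the loop: everything gathered in the last X ms goes
// up as a single ordered batch, like the trading-system example above.
setInterval(() => {
  if (queue.length === 0) return;
  const batch = queue;
  queue = [];
  void fetch("/input", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}, 50);
```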
I've been burnt too many times by thinking websockets will solve the network/race conditions for me (and then failing spectacularly), so I'd rather build the system to handle disconnects than rely on ordering guarantees that sometimes break.
Again, though, my experience has made me biased. This is just my take.
An intriguing idea! I like this approach for being an innovative interface to SQL. I wonder if it would reduce cognitive load when interfacing with the DB.
I'm a game dev and often need to avoid situations where I'm using '.map' to iterate an entire array, for performance reasons. It would feel odd to use the concept, knowing it wasn't really iterating and/or was using an index. Is that how it works?
It’s exactly what Entity Framework does in dotnet. It allows you to query the database like it’s an enumerable.
In fact, in EF, an IQueryable (which is the interface you use to query a SQL dataset) implements IEnumerable. So you can 100% manipulate your dataset like a normal array/list.
Sure it comes with its own shenanigans but 90% of the time it’s easy to read and to manipulate.
Performing a query with EF can do stuff that can't be done with `IEnumerable`, so that a filter()/.Where() can actually generate a WHERE clause instead of looping over every record.
Yes, of course it generates the corresponding SQL and doesn't iterate over the table.
But in the framework's code, IQueryable implements IEnumerable; it's just a totally different implementation, but for the developer it's 100% the same API, and so any IQueryable can be used where an IEnumerable is expected.
This is a hazard that trips people up commonly. If you use an IQueryable where an IEnumerable is expected, it will use brute-force iteration semantics, and not do things like generating a WHERE clause. Linq provides similar extension methods for both interfaces, but you need to be sure your call resolves to the right interface, otherwise you'll end up doing things like pulling the whole table into memory.
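Since EF itself is C#, here's the trap sketched as a TypeScript analogy (every name is made up): the queryable composes SQL for as long as you stay on its API, and the moment you fall back to plain iteration, the whole table gets materialized.

```ts
declare function executeSql<T>(sql: string): T[]; // hypothetical DB driver call

// Analogous to IQueryable: builds SQL lazily, but is also iterable,
// which is exactly where the hazard bites.
class Queryable<T> implements Iterable<T> {
  constructor(private table: string, private filters: string[] = []) {}

  // Like IQueryable.Where(): composes the query, touches no rows yet.
  where(sqlPredicate: string): Queryable<T> {
    return new Queryable<T>(this.table, [...this.filters, sqlPredicate]);
  }

  toSql(): string {
    const where = this.filters.length
      ? ` WHERE ${this.filters.join(" AND ")}`
      : "";
    return `SELECT * FROM ${this.table}${where}`;
  }

  // Like IEnumerable: iterating executes the query and materializes rows.
  *[Symbol.iterator](): Iterator<T> {
    yield* executeSql<T>(this.toSql());
  }
}

const users = new Queryable<{ age: number }>("users");

// Filter pushed to the database: SELECT * FROM users WHERE age > 30
const adults = users.where("age > 30");

// Hazard: iterate first, filter in memory; the whole table comes back.
const inMemory = [...users].filter((u) => u.age > 30);
```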
I find it notable that tokens don't necessarily express people's feelings. Put another way, tokens aren't how people feel; they're how they write.
Samstave mentioned in this thread that Twitter is a 'global sentiment engine'. I'm sure that's literally true. Sentiment measurement is only accurate to the degree that people are expressing their real feelings via tokens. I can imagine various psychological and political reasons for a discrepancy.
If you did sentiment analysis of publicly known writings of North Korean administrators, would that represent their feelings?
I think the interplay with free speech is interesting here: In a setting where people feel socially and legally safe to express their true opinion, sentiment analysis will be more accurate.