Someone mentioned "App Store" during a meeting, probably accidentally. It's a known killer, the tech world equivalent of sneezing while carrying an infectious disease.
It's worth making, but there are some serious drama-queen vibes here that make it feel pretty overblown. If an Uber is late to the airport, a reasonable person doesn't threaten the driver with covering the cost of their flight.
If an Uber driver caused you to miss a flight by driving in circles around a parking lot at a speed at which you couldn't exit the vehicle, don't you think it would be reasonable for the customer to ask Uber to make it right?
Fair enough, there is a difference. But now we're not looking at a missed flight so much as attempted kidnapping, imprisonment, or some other much more serious crime. Which is interesting to think about with the Waymo example, but hard to take seriously in the context of the video, since the rider declines to do what the customer service rep asks them to do (or at least appears to, for the sake of producing additional outrage for their video).
> good enough to stream YT, in my experience, so presumably already good enough to attend meetings
YT needs bulk throughput, while meetings need low latency and consistent quality. YT can seem smooth for much longer despite massive amounts of retransmission and packet loss; meetings fall apart rapidly with even a tiny bit of either.
I don't know how meaningful it is any more, but with long polling with a short timeout and a gracefully ended request (i.e. chunked encoding with an EOF chunk sent rather than a disconnection), the browser would always end up with one spare idle connection to the server, making subsequent HTTP requests for other parts of the UI far more likely to be snappy, even if the app had otherwise been left idle for half the day.
I guess at least this trick is still meaningful where HTTP/2 or QUIC aren't in use
I am not. Everything above 30 MHz or so is line of sight, in the consumer-products sense of the phrase. We can talk about niche modes like tropospheric ducting or super-high-power troposcatter, but ducting is irregular and unpredictable, and troposcatter requires many kilowatts of power.
Pretty much everything from VHF up (including all of wifi) is line of sight only, not just the mm-wave stuff. Cell phone networks only work because the telcos pay big money to put their transceivers, power, and data links up on towers high above the terrain.
Store-and-forward meshes assume the nodes will eventually see each other, but down at 5 ft above the ground they often won't.
This is the first "rewrite it in Rust" reason I've heard that actually makes total sense, congratulations. The ability to safely hack on ancient code really does sound attractive.
Do you have to ask Hetzner nicely for this? They have a publicly documented 10G uplink option, but that is for external networking and IMHO heavily limited (20 TB cap). For internal cluster I/O, 20 TB could easily become a problem.
It sounds like you might know the answer to this. Would it be straightforward to use this for sandboxed headless file conversion? You can do that already with LibreOffice, but it's a monster amount of unsafe code that's difficult to containerize securely
Regarding sandboxing - everything in WebAssembly is heavily sandboxed already, and the app requires cross-origin isolation in the browser so that we can use SharedArrayBuffers.
So that's likely no worse than running LibreOffice containerized on a server.
Oh whoa, a 5-minute video for exactly this :) Apologies for making you be my Google. Yep, everything in wasm makes things much easier to work with, especially if you want to run it on a client device.
I just containerized LibreOffice to do docx->pdf conversion, but now I'm wondering - what parts seem particularly gnarly to you? My naive strategy is to mount an external volume to put/collect files, then call `soffice` inside the container to process them. We generate all the source docx files ourselves, so I'm not worried about injection from that angle.
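In case it's useful, the conversion step can be sketched like this. `soffice --headless --convert-to pdf --outdir` is the standard LibreOffice CLI; the paths and timeout below are illustrative, not from the actual setup:

```python
import subprocess
from pathlib import Path

def docx_to_pdf_cmd(src: Path, outdir: Path) -> list[str]:
    """Build the headless LibreOffice command for a docx -> pdf conversion."""
    return [
        "soffice",
        "--headless",             # no GUI, suitable for a container
        "--convert-to", "pdf",    # target format
        "--outdir", str(outdir),  # where the resulting .pdf lands
        str(src),
    ]

cmd = docx_to_pdf_cmd(Path("/in/report.docx"), Path("/out"))
# Inside the container, with the volume mounted at /in and /out:
# subprocess.run(cmd, check=True, timeout=120)
```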
I think it's the centralized nature of moderation that needs fixing, rather than moderation itself. Real-world moderation doesn't work by having a central censor; it involves like-minded people identifying with a group and having their access to conversation enabled by that identification. When the conversation no longer suits the group, the person is no longer welcome. I think a technical model of this could be made to work.
I looked semi-seriously at doing a Twitter clone around the time Bluesky was first announced, and to solve this I'd considered something like GitHub achievement badges (e.g. organization membership), except that instead of a fixed set, badges could be created by anyone, and trust relationships could exist between them. For example, a programming language community's existing organizations might wish to maintain a membership badge - the community's existing CoC would then govern who carries that badge, extending the community's existing conduct expectations out to the platform.
Since within the tech community these expectations are relatively aligned, trust relationships between different badges would be quite straightforward to imagine (e.g. Python and Rust community standards are very similar). Outside tech, similar things might be seen in certain areas of politics, religion or local cultural areas. Issues and dramatics regarding cross-community alignment would naturally be confined only to the neighbouring badges of a potential trust relationship, not the platform as a whole.
I like the idea of badge membership and badge trust being the means by which visibility on the platform could be achieved. There need not be any big centralized standards for participation, each user effectively would be allowed to pick their own poison and starting point for building out their own visibility into the universe of content. Where issues occur (abusive user carrying a highly visible badge, or maintainer of such a badge turning sour or suddenly giving up on its reputation or similar), a centralized function could still exist to step in and potentially take over at least in the interim, but the need for this (at least in theory) should be greatly diminished.
A web of trust over a potentially large number of user-governed groupings has some fun technical problems to solve, especially around making it efficient enough for interactive use. And from a usability perspective, onboarding a brand-new account would be another challenge.
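Running with the idea, the visibility lookup could be a bounded walk over the badge-trust graph. A hypothetical sketch (the badge names, graph, and hop limit are all made up for illustration):

```python
from collections import deque

# Badges trust other badges; a post is visible to a viewer if its author
# holds a badge reachable (within max_hops) from a badge the viewer has
# chosen as a starting point.

TRUSTS = {                       # badge -> badges it trusts (illustrative)
    "python-community": {"rust-community"},
    "rust-community": {"python-community", "go-community"},
    "go-community": set(),
}

def visible_badges(start: str, max_hops: int = 2) -> set[str]:
    """BFS over the trust graph, bounded so lookups stay interactive."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        badge, hops = frontier.popleft()
        if hops == max_hops:
            continue                      # don't expand past the hop limit
        for nxt in TRUSTS.get(badge, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen
```

The hop limit is what confines cross-community drama to neighbouring badges: a distant badge's misbehaviour simply never enters your visible set.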
Running on little sleep but thought it was worth trying to sketch this idea out on a relevant thread.
> involves like-minded people identifying into a group and having their access to conversation enabled by that identification.
I don't think it has anything to do with "identification." It has to do with interest. If your groups are centered around identity then that will be prioritized over content.
Content needs little moderation. Identity needs constant moderation.
The whole point of online discussion, IMO, is not to join some little hive mind where everyone agrees with each other (e.g. many subreddits), but rather to have discussion between people with different information bases and different viewpoints. That's why it's valuable: you learn new things and are exposed to different points of view.
I like HN's approach of too many downvotes hiding the comment entirely. Maybe a combination of that, along with charging a small fee to create an account (even with crypto), and limiting the amount of posts/time you can spend on the site, might keep the spam/bots down considerably.
You could also have a global member limit, to encourage competition from other small sites and keep big echo chambers from forming.
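The downvote-hiding part of that is simple to sketch (the threshold below is made up, not HN's actual cutoff):

```python
# Once a comment's net score drops below the cutoff, collapse it entirely.

HIDE_BELOW = -4  # illustrative threshold

def visible_comments(comments: list[dict]) -> list[dict]:
    """Filter out comments whose net score fell below the cutoff."""
    return [c for c in comments if c["score"] >= HIDE_BELOW]

thread = [
    {"id": 1, "score": 12},
    {"id": 2, "score": -9},   # heavily downvoted: hidden
    {"id": 3, "score": -2},   # downvoted, but still shown
]
shown = visible_comments(thread)
```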
Yeah, I think friend-to-friend (F2F) networks are the most natural and take the most reasonable approach to spam resistance.
I don't think badges will work, because who assigns the badges? Friend groups IRL are generally not led by a single tyrannical leader. You just end up making a forum with a single owner.
Money-laundering conviction to tech worker sounds like a great story; I'm sure plenty of people besides me would love to hear more about your background if you were willing to share.