MerrimanInd's comments | Hacker News

I love this. I spent my holidays hearing non-technical family members complain about their ever-deteriorating Windows experiences, issues that make me righteously angry at Microsoft.

IMO the next important unblocker for Linux adoption is the Adobe suite. In a post-mobile world one can use a tablet or phone for almost any media consumption. But production is still in the realm of the desktop UX, and photo/video/creative work is the most common form of output. An Adobe CC Linux option would enable that set of "power users". And regardless of their actual percentage of desktop users, just about every YouTuber or streamer talking about technology is by definition a content creator, so opening Linux up to them would have a big effect on adoption.

And yes, I've tried most of the Linux alternatives, like GIMP, Inkscape, DaVinci, RawTherapee, etc. They're mostly /fine/ but it's one of the weaker categories of FOSS alternatives IMO. It also adds an unnecessary learning curve. Gamers would laugh if they were told that Linux gaming was great, they just have to learn and play an entirely different set of games.


Photoshop (for example) largely works in Wine, although it's not stable enough for production usage. The problem is the CC itself and the installer, which is unimaginably bloated and glued to the Internet Exp... I mean Edge Web View and many other Windows-only things.


Your entire comment could have been written 20 years ago and it would fit just about perfectly.


Like any opposition party, the anti-big-tech crowd is actually a loose coalition of different goals and interests. I've noticed that as these platforms get through the earlier stages of "will it even work", the differences in values are becoming more pronounced and controversial. The primary two groups seem to be those who value federation and see centralized control and algorithms as the threat, and those who value encryption and see surveillance as the threat. Obviously these two things aren't mutually exclusive and we all want to see new platforms that can solve for both. But there's a quite distinct difference in the primary priority and the consequent technical decisions.

I hope that if we can be aware that this is a broad set of technologies driven by a broad set of goals, then we can be a bit more gracious when a project doesn't perfectly align with our personal values, and instead look for the common ground.


Thanks for this comment, you've said exactly what I've been thinking.

I'm definitely in the sect of people who have "detach from big centralised tech, be self-hostable & interoperable" as the main priority, with E2EE being a nice extra. So it's always interesting when I read articles from the other side who see privacy, maximal E2EE & zero metadata as their #1 priority. They entirely dismiss protocols as junk for reasons I would never think or care about. But these things do matter to them, and they are just as important to them as my priorities are to me.

It strikes me as a near impossible balancing act for a project like Matrix to please everyone. They are clearly trying.

I will also note that there's a volume difference in the messaging being sent out. The privacy/security people are often very loud & critical, with good reason from their perspective. For example this article. That makes the discourse seem more negative than the overall sentiment probably is.


I'd go a few layers even broader than this article and say that the modern tech industry has an abysmal track record when building tools for non-software technical fields. Tech builds either its own software-oriented workflows or the most dumbed-down consumer-oriented workflow it can. Law is an excellent example of a field with a very high degree of fidelity, philosophy, and process, yet it can only ever have partial crossover with software development methodologies. Tech often treats someone like a lawyer as either a substandard developer or an advanced consumer without making a real effort to understand the context and needs of highly complicated yet non-software professions.


Another reason to have a second compiler is for safety-critical applications. In the assessment of safety-critical tools, if something like a compiler has a second, redundant implementation, then each of them can be certified to a lower criticality level since they cross-check each other. When a tool is single-sourced, the level of qualification required goes up quite significantly.


rustc (via Ferrocene) is already being qualified, and from what I hear it’s been fairly easy to do so, for various reasons.


Yeah it is, and that's a great effort; I've worked with that team on various things. But the industry is still itching for a second compiler with no crossover (it can't just be another LLVM frontend or rustc fork) for those certification reasons. Not that people want to replace rustc! It's just a cert requirement.


Yeah certainly wouldn't hurt :)


IMO Zen Browser fixed a lot of the Firefox UI pain points while keeping what I like about it. It would be a smart move to make the Zen UI the canonical version of Firefox, especially since features like vertical tabs, folders, pins, split screen, and new tab previews are more in the power-user use case and Chrome has entirely dominated the casual user demographic.


I think you're right but there's also an opportunity to sell picks when everyone is digging for gold. Like AI-driven VS Code forks, you have AI companies releasing their own browsers left and right. I wonder if Mozilla could offer a sort of white-labeling and contracting service where they offer the engine and some customization services to whatever AI companies want their own in-house browsers. But continue to offer Firefox itself as the "dumb" (from an AI perspective) reference version. I'm not sure exactly what they could offer over just forking Chromium/Firefox without support but it would be a great way to have their cake and eat it too.


> This would be career suicide in virtually any other technical field.

The cognitive load is unavoidable and in some ways worse in industries with highly technical names.

At one point in my career I was an engine calibrator at a large automotive OEM. Our lexicon included physics and industry terms (BMEP, BTDC, VVT, etc.), a large software package where every variable, table, and function was an acronym (we had about 75k tunable parameters, each with an acronym), and all the internal company jargon and acronyms you'd expect in a large corporation. But every name was as technical and functional as the author would desire.

During my first month I was exhausted. I would doze off in afternoon meetings or pass out in my car as soon as I pulled into the driveway. I finally mentioned this to a more senior coworker and his insight was that my brain was working overtime because it was busy learning another language. He was entirely right! The constant mental effort was a very real and tangible load. He relayed an anecdote from when he went to South America on his honeymoon: despite him and his wife having taken ~4 years of HS/college Spanish, the mental work they had to do just to function nixed about half the daily activities they had planned, purely due to exhaustion. That was what I was experiencing.

The idea that more technical and specific names reduce mental load does not track with my experience. The complexity is intrinsic, not incidental, and I don't think it has much to do with the specific names chosen.


In mobile telephony, one of the first things new hires are told is “don’t even try to work out what all the acronyms stand for; it won’t help”. You just have to eat all the alphabet soup. Worse is that they nest: you can have acronyms where every letter stands for another acronym. Writing a thousand words without using a single noun is easy. And of course all the short ones are overloaded. Is an AP an Application Processor or an Access Point? Depends on which subfield the person you’re talking to is from.

But they’re a necessary evil, since MSISDN is still less cumbersome than Mobile Station International Subscriber Directory Number.


I thought this was a wonderful example of "some things are just intrinsically challenging to represent in your brain":

https://www.youtube.com/watch?v=6ZwWG1nK2fY

Apparently they've found structural differences in the brains of people undergoing London's famously difficult taxi qualification.

I think I saw a video that said people studying for "the knowledge" as it's known report massive fatigue.


The article's complaint (as I read it) is more about incidental load: names that force you to context-switch just to figure out what category of thing you're dealing with.


I worked for a company that had 8-12 different employee passwords across various systems. There was no SSO, each password had different requirements, and each required changes at different intervals ranging from 30 to 90 days. Consequently every employee had a post-it note directly on their laptop with most or all of their passwords. The outdated IT security policy was so strict that real-world security was abysmal.


Meshcore is another alternative. I haven't done a deep dive into either but have heard that they both fix some Meshtastic issues.

https://meshcore.co.uk/


One of the main differences with MeshCore is that client nodes don't repeat messages; only dedicated repeater nodes repeat, with the idea that they should be placed in more ideal locations.
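
As a rough sketch of that difference (illustrative only, not MeshCore or Meshtastic source; the node roles, flood flag, and function names are assumptions for the example):

    # Illustrative sketch only -- not actual MeshCore/Meshtastic firmware code.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        dest: str        # destination node id
        hops_left: int   # remaining hop budget
        payload: str

    def deliver(payload: str) -> None:
        print("delivered:", payload)

    def rebroadcast(pkt: Packet) -> None:
        print("rebroadcast,", pkt.hops_left, "hops left")

    def handle_packet(node_id: str, role: str, pkt: Packet, flood: bool) -> None:
        if pkt.dest == node_id:
            deliver(pkt.payload)                      # addressed to us
        elif pkt.hops_left > 0 and (flood or role == "repeater"):
            pkt.hops_left -= 1
            rebroadcast(pkt)                          # forward it onward

    # Flood-style mesh: every node forwards traffic it overhears.
    handle_packet("client-1", "client", Packet("node-9", 3, "hi"), flood=True)
    # Repeater-only model: an ordinary client stays quiet; only a dedicated,
    # well-placed repeater forwards the same packet.
    handle_packet("client-1", "client", Packet("node-9", 3, "hi"), flood=False)
    handle_packet("rpt-1", "repeater", Packet("node-9", 3, "hi"), flood=False)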

Just don't mention MeshCore anywhere around Meshtastic, or they'll kickban you.


> Just don't mention MeshCore anywhere around Meshtastic, or they'll kickban you.

That's not the problem. And I've also mentioned Meshcore on their Discord with no threats of banning or anything of the sort. I've also seen people come into the group with "Meshtastic sucks and Meshcore is best", and the worst from admins was 'we have no problem discussing, but that tone was overly harsh'.

Liam Cottle, the head of Meshcore, ran the first Meshtastic map from scraped MQTT data. However, he was grabbing and saving everything: public channels, direct messages, GPS, telemetry. Everything. 1.5 years ago, people were going to his map and snooping on Defcon Meshtastic DMs, since even one node reporting to MQTT would send everything. And at the time, DMs were simply filtered by the UI but were encrypted with the same shared key as everything else.

There was a general expectation that the data was ephemeral. Liam basically created this data problem by saving and making available everything sent to MQTT.

Meshtastic devs ended up having to tighten down the public MQTT broker a bunch. They also made the phone clients more restrictive about what gets sent to MQTT, and added an "OK to forward MQTT" flag to the data packets. And 2.5 introduced PKI TOFU for direct messages to prevent leakage.
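
A toy model of the failure mode being described (a sketch under assumptions, not Meshtastic's actual packet format, key handling, or MQTT topics; all names here are hypothetical):

    # Toy model only -- not Meshtastic's real packet format or crypto.
    SHARED_CHANNEL_KEY = "default-psk"   # everyone on the default channel holds this

    def decrypt(packet, key):
        # stand-in for symmetric decryption: anyone holding the channel key
        # can read anything encrypted with it, direct messages included
        return packet["body"] if packet["key"] == key else None

    def gateway_upload(packets):
        # a gateway node that forwards everything it hears to MQTT
        return list(packets)

    def app_view(packets, my_id, key):
        # "privacy" enforced only at display time, in the client UI
        return [decrypt(p, key) for p in packets if p["to"] in (my_id, "broadcast")]

    def scraper_view(packets, key):
        # a map/scraper saving the raw MQTT feed ignores the UI filter entirely
        return [decrypt(p, key) for p in packets]

    traffic = [
        {"to": "broadcast", "key": SHARED_CHANNEL_KEY, "body": "hello mesh"},
        {"to": "alice",     "key": SHARED_CHANNEL_KEY, "body": "DM meant for alice"},
    ]
    uploaded = gateway_upload(traffic)
    print(app_view(uploaded, "bob", SHARED_CHANNEL_KEY))   # DM hidden by the UI
    print(scraper_view(uploaded, SHARED_CHANNEL_KEY))      # DM fully readable

Filtering at the node before anything hits MQTT, or per-recipient keys (the PKI approach), fixes this at the source rather than in the app.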

Aside from the personnel difficulties, the technical issues with Meshcore are similar at node capacity too. Messages still don't get delivered near capacity. Core requires infrastructure nodes. It's more like APRS+LoRa than anything like a mesh.


It's unfair to assign that much blame to any one person. I think it's more fair to say that the Meshtastic community as a whole has a problem with people making overly narrow assumptions about the goals and about what use cases Meshtastic is intended for, suitable for, or usable for. As a result, the community was able to do a lot of development work seemingly without considering that there could even be privacy concerns. And then they had to scramble to retrofit a lot of privacy controls that would have been obvious requirements all along to people coming at the project with a different mindset.

Some people want Meshtastic to be rock-solid communication infrastructure for use in a doomsday or disaster scenario. Some people want to use it to undermine the importance of cellular communications networks. Some people want it to be used much like CB radio as a local public conversation channel. Some people envision it used mostly with stationary transmitters, while other people want to use it entirely with mobile nodes. I use it primarily for group location sharing (many to many), since the location sharing capabilities Apple and Google provide for their smartphone platforms only easily support one-to-one or one-to-several location sharing.


It seems that at scale Meshcore is much better. The more nodes you get, the worse it gets with Meshtastic after a certain point. With Meshcore you now have entire regions connected in a single mesh with hundreds of nodes.


Sad to see open source communities being so insecure that they feel threatened by an alternative project. Both can coexist and competition is good.


The Meshtastic community is almost as toxic as the ham radio one.


MeshCore app is way better than the Meshtastic one.


Agreed. I ran the comms for my Burning Man camp and everyone kept getting confused by the channels mess, among other usability issues. I like where MeshCore is going; I just wish the repeater nodes could run on gateway hardware so they don’t become the choke point with a half-duplex radio (bs like 8 full-duplex channels on the RAK Wireless gateway)


I've heard two arguments for these rewrites that don't always come up in these discussions. There are fair counterpoints to both but I think they add valuable dimensions to the conversation, or perhaps explain why a rewrite may not seem justified without them.

* It's becoming increasingly difficult to find new contributors who want to work with very old code bases in languages like C or C++. Some open source projects have said they rewrote to Rust just to attract new devs.

* Reliability can be proven through years in use but security is less of a direct correlation. Reliability is a statistical distribution centered around the 'happy path' of expected use and the more times your software is used the more robust it will become or just be proven to be. But security issues are almost by definition the edgiest edge cases and aren't pruned by normal use but by direct attacks and pen testing. It's much harder to say that old software has been attacked in every possible way than that it's been used in every possible way. The consequences of CVEs may also be much higher than edge case reliability bugs, making the justification for proactive security hardening much stronger.


Yeah, I get the point about attracting young blood. But I wonder if the core utils which have been rewritten got rewritten by the original maintainers? And again the question why not simply write something new. With a modern architecture etc rather than drop in replacements.

On your second point: I wonder how the aviation, space, and car industries do it. They rely heavily on tested/proven concepts. What do they do when introducing a new type of material to replace another one, or when a complete assembly workflow gets updated?


> And again the question why not simply write something new.

The world isn’t black or white. Some people write Rust programs with the intent that they be drop-in compatible replacements for some other program. (And, by the way, that "some other program" might itself be a rewrite of an even older program.)

Yet others, such as myself, write Rust programs that may be similar to older programs (or not at all), but definitely not drop-in compatible programs. For example, ripgrep, xsv, fd, bat, hyperfine and more.

I don't know why you insist on a world in which Rust programs are only drop-in compatible rewrites. Embrace the grey and nuanced complexity of the real world.


> And again the question why not simply write something new.

There is a ton of new stuff getting written in Rust. But we don't have threads like this on HN when someone announces a new piece of infra written in Rust, only when there's a full or partial rewrite.

Re automotive and other legacy industries, there's heavy process around both safety and security. Performing HARAs and TARAs, assigning threat or safety levels to specific components and functions, deep system analysis, adding redundancy for safety, coding standards like MISRA, etc. You don't get a lot of assurances for "free" based on time-proven code. But in defense there's already a massive push towards memory safe languages to reduce the attack surface.


> why not simply write something new.

Because of backwards compatibility. You don’t rewrite Linux from scratch to fix old mistakes; that’s making a new system altogether. And I’m pretty sure there are some people doing just that. But still, there’s value in rewriting the things we have now in a future-proof language, so we have a better but working system until the new one is ready.


Sorry. I will answer this because I feel people got a bit hung up on the “new” thing. Might be a language barrier. I really understand the reasons, backwards compatibility etc. The point I tried to make is that we spend tons of time either maintaining software that was written or “born” 50 or so years ago, or rewriting things in the same spirit. I mixed my comments with the security aspect, which might have muddled a lot of what I tried to say with the “new” part. One sees this also on HN. I love the UNIX philosophy and also the idea of POSIX. But it’s treated as if it is the holy grail of OS design and, in the case of POSIX, the only true cross-platform schema. Look also at the boot steps a CPU has to run through: pretending to be a 40-year-old variant and then starting up features piece by piece. Well, I hope that clears up my point :)


Writing tools that are POSIX compatible doesn't mean one puts it on the pedestal of the "holy grail of OS design." I've certainly used POSIX to guide design aspects of things I build. Not because I think POSIX is the best. In fact, I think it's fucking awful and I very much dislike how some people use it as a hammer to whinge about portability. But POSIX is ubiquitous. So if you want your users to have less friction, you can't really ignore it.

And by the way, Rust didn't invent this "rewrite old software" idea. GNU did it long before Rust programmers did.


Yes, but GNU did it to put them under the GPL. Or that was my understanding.


So then your original comment should be amended to say, "and this is actually all fine when the authors use a license I personally like." So it's not actually the rewriting you don't like, but the licensing choices. Which you completely left out of your commentary.

You also didn't respond to my other rebuttal, which points out a number of counter-examples to your claim.

From my view, your argument seems very weak. You're leaving out critical details and ignoring counterpoints that don't confirm your bias.


Sorry, I didn’t respond by intention. The thing with the license I actually didn’t bring up because I totally forgot about that part of the discussion. I saw comments a few weeks back going into the fact that it’s not just a Rust rewrite but also a relicense, with maybe shady intent. I don’t know. I don’t know much about this. To your comment: I don’t know when I actually made any claims? Nor did I claim that a rewrite is fine when it’s changing to a license I like. I just stated that the reason back then was not to rewrite in a more modern language with better security. I wasn’t around when this happened and have no real sense of whether at the time I would have liked or disliked the move. As it stands, the net positive was obviously great, otherwise a Linux as we know it might have been longer in the making. Or never. And yes my argument is weak because I’m actually not an expert on core utility development. I just voiced my feelings about the fact that we seem to move slowly forward or stand still in development rather than progressing to something else. And it seems that others see it differently and/or have a better perspective on that.


> And yes my argument is weak because I’m actually not an expert on core utility development.

Yes, and I'm trying to point out why your argument is weak. You said things like this:

> But I wonder if the core utils which have been rewritten got rewritten by the original maintainers? And again the question why not simply write something new. With a modern architecture etc rather than drop in replacements.

And it honestly just comes across as rather rude. People do write things that are new. People also write things that re-implement older interfaces. Sometimes those people are even the same people.

Like, why are you even questioning this? What does it matter to you? People have been re-implementing older tools for decades. This isn't a new thing.

> Nor did I claim that a rewrite is fine when it’s changing to a license I like.

When I pointed it out, your response was, "oh yeah but they did this other thing that made it all okay."


Damn I wanted to write “not with intention”.


Inviting inexperienced amateurs to wide-reaching projects does not seem to be a prudent recipe. Nay, it is a recipe for disaster.

