People said the exact same thing about web searches, and I think there are a lot of devs who would instantly search for every issue they hit.
Isn't this just better web search?
On the other hand, it definitely feels like it might be too big a step in the spoon feeding direction.
Writing code without AI feels like art, and writing it with AI feels like painting a wall: get it done quickly, cheaply, and good enough that people don't see issues.
It's the art part of engineering that's being lost; AI has no appreciation of elegance. It has no empathy for the cognitive overhead of bad code or poor-fit design patterns.
But should code be art? As much as there are '100s of ways to skin a cat', it is also deterministic at the end of the day. It either does, or does not, do what it was designed to do.
Sculptors can turn clay into wonderful pottery. Masons can turn it to brick. Both have their purposes, and it is wrong to assume everyone with a ball of clay is looking to make pottery.
I understand that, at the moment, part of the 'art' of code is legibility, concision, good documentation, following standards, etc. But when I need a quick script to automate a process I've done 100 times, I can personally fumble around in Python for an hour or two, or give the current trendy LLM a few shots and get to the same result. For me, I am happy to do it "quickly, cheaply, and good enough that people don't see issues." The same goes for things like iOS Shortcuts, Home Assistant automations, etc.
I wouldn't build a start-up on vibed code, though. I understand its limits.
imo it feels like art because there are an infinite number of needs to keep in mind when writing code, and everyone prioritizes them differently. Even when the output is the same, different implementations will have different effects elsewhere: performance, legibility, security, type safety, error proneness, etc.
So I would say "it either does, or does not, do what it was designed to do" isn't the full picture. I'm not sure it needs to truly be art though.
I would argue that your first two examples are exceedingly apt. Sure, sculptors can turn clay into works of art and masons can build cathedrals. However, a potter can also throw a basic jug to hold wine with no care put into it beyond being functional, and a mason can build a retaining wall.
These second examples aren't any less valuable, they solve real problems and improve people's lives. However, they aren't really art. Writing code is the same thing. I'm not creating art when I hack together yet another CRUD app that is basically plumbing together existing modules with a tiny bit of logic sprinkled on top, but it improves how our business functions and makes the employees who use the software more productive. That isn't art, but it's useful.
There is code out there that is art. But most programmers aren't writing it. We're writing the boring everyday stuff. Very few masons built cathedrals, but building a retaining wall is useful too.
This is true of basically any technology. The whole point of technology is essentially to reduce friction.
I have something ranging from learned helplessness to total indifference about taking care of horses, or how to appropriately pitch a tent to survive a cold night, because modernity doesn't require me to care about these things.
>What does your persistent storage layer look like on Talos?
Well, for its own storage: it's an immutable OS that you configure via a single YAML file; it automatically provisions the appropriate partitions for you, or you can even install the ZFS extension and have it use ZFS (no ZFS on root, though).
For application/data storage there's a myriad of options to choose from[0]; after going back and forth a few times years ago with Longhorn and other solutions, I ended up at rook-ceph for PVCs and I've been using it for many years without any issues. If you don't have 10gig networking you can even do iSCSI from another host (or nvmeof via democratic-csi but that's quite esoteric).
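To make that concrete, here's a minimal sketch of what asking rook-ceph for a block volume looks like from the application side. The claim name and the StorageClass name "rook-ceph-block" are just placeholders taken from Rook's example manifests, so substitute whatever your cluster actually defines:

    # Hypothetical PVC bound to a rook-ceph block StorageClass.
    # "rook-ceph-block" follows the Rook example manifests; adjust to your setup.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 10Gi

Workloads then mount the claim as usual, and rook-ceph handles the replication across nodes underneath.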
>How have you found its hardware stability over the long term?
It's Linux so pretty good! No complaints and everything just works. If something is down it's always me misconfiguring or a hardware failure.
It was years ago but I recall high CPU usage being an issue in particular.
In general it's just not as battle-tested as Ceph, and I needed something more bulletproof.
However, I will say this: I'm sure the CPU usage issue was fixed (I was watching the GitHub issue), and you might not need your distributed FS to be CERN-ready for your lab. The UI and built-in backups Longhorn offers are great for beginners, so I'd suggest giving it a try unless you already know you want Ceph or OpenEBS Mayastor for performance and so on.
My tech friends and I cannot wait for this agentic bubble to pop. Much like with the dotcom bubble, there's absolutely value in AI, but the hype is absurd and is actively hurting investment in reasonable things (like just good UX).
The hype and zealotry remind me of a cult. And the higher I go up the chain at my big tech company, the more culty people are in their beliefs, the less they believe AI can do their specific jobs, and the less they've actually tried to use AI beyond badly summarizing documents they barely read in the first place.
AI, as far as I can tell, has been a net negative for humans. It has made labor cheaper and answers less reliable, reduced the value we place on creativity and on professionals in general, enabled mass disinformation, and mostly made people lazier and less likely to learn the basics of anything. There are of course spots of brightness, but the hype bubble needs to burst so we can move on.
The belief that's kind of settled in for me after a few years of observation is that AI really is a force multiplier, as the hype claims. However, lots of things out there are terrible and shouldn't be force-multiplied (spam, phishing, scams, etc.), and the same goes for people who are very bad at their jobs. If their output is multiplied, it clearly can and will be very bad. I have already seen this play out at a small scale on some teams I've worked with.
For the maybe ~1-5% of people out there that have something valuable to contribute (that's my number, and I fully believe it) then I think it can be good, but those types also seem to be the most wary of it.
What depresses me is that all the people leading us into these stupid decisions re: AI will get bonuses and promotions after the bubble pops. All the useless effort putting AI everywhere will be forgotten, no one will care about or remember the idiotic decisions, and we will all be chasing the new new thing.
Sincerity will not win in the end. VC money and the quest for insurmountable tech driven cash flows is what drives everything. The age of software being driven by sincere engineers trying to build is dead outside niche projects.
How does it compare to komorebi? I've been using it for about 5 months with great success. I'm a Hyprland user when I'm on my personal machine, but for windows Komorebi has let me keep my muscle memory and workflow largely intact.
I think these are the most obvious differences between the two:
* By default, Komorebi uses dynamic tiling, while Jwno uses manual tiling.
* Komorebi has workspaces, Jwno works with Windows native virtual desktops instead.
* Komorebi is split into a daemon and a native command-line client that sends it commands over IPC, while Jwno usually operates all by itself.
There are definitely other differences that may matter to you, but these are the things that immediately came to mind. I don't run Hyprland, so I can't really comment on that comparison.
LLMs will spit out responses with zero backing and 100% conviction. People see citations and assume they're correct. We're conditioned for it thanks to... everything ever in history. Rarely do I need to check a Wikipedia entry's source.
So why do people not understand this? It is absolutely going to pour jet fuel on misinformation in the world. And we as a society are allowed to hold a higher bar for what we'll accept being shoved down our throats by corporate overlords who want their VC payout.
The solution is to set expectations, not to throw away one of the most valuable tools ever created.
If you read a supermarket tabloid, do you think the stories about aliens are true? No, because you've been taught that tabloids are sensationalist. When you listen to campaign ads, do you think they're true? When you ask a buddy about geography halfway across the world, do you assume every answer they give is right?
It's just about having realistic expectations. And people tend to learn those fast.
> Rarely do I need to check a wikipedia entry's source.
I suggest you start. Wikipedia is full of citations that don't back up the text of the article, and that's when there are citations to begin with. I can't count the number of times I've wanted to verify something on Wikipedia and there either wasn't a citation, or there was one on the topic that said nothing about the specific assertion being made.
I'm beginning to become disillusioned with these things. We're replacing thousands of jobs with a system that will almost certainly do a worse job than before, and the money is split between hospital shareholders and VCs.
I get that there's an efficiency (market) gain here. But these AI startups targeting automation of existing sectors seem like they're mostly just attempting to drive wealth inequality in a period of already terrible wealth inequality.
These are topics that we at Cenote also mull over. Right now, here's how we see things:
1. If we perform worse, we won't deliver any value to the owner and we'll soon be out of business.
2. It's our bet that AI agents can actually perform these monotonous, detailed tasks very well and that this will free up humans to take on higher value work.
3. That higher value work being: calling patients, educating them, helping facilitate patient care. This is ultimately the work the owners we talk to are excited for their teams to take on!
Depends on how you perform worse (if you do so). It takes a while for medical errors to occur (in human or automated systems), then there is a lag before the consequences reach the patient, and not all negative outcomes lead to complaints or lawsuits.
At this point a world where no one needs to work would be dystopian. Are we going to rely on the benevolence of our increasingly for-profit government? On the benevolence of our oligarchs to allow us the labor the robots aren't capable of doing yet? I see the promise of post-scarcity, but I haven't seen anything close to the technology it would require. Just greed, corner cutting, and rent collection for profit. I'd rather not see our medical back offices enshittified.
> It has no empathy for the cognitive overhead of bad code or poor-fit design patterns.

Cognitive Debt is the phrase to Google, btw.