Hacker News | naikrovek's comments

> As an avid reader (and sometimes writer) of technical books, it's sad to see the, perhaps inevitable, decline of the space.

When I think about this, I get a little bit scared. Imagine books going away, even if it's just the subcategory of technical books.

The printed word has been around for a long time. The number of things that have been printed has always gone up. It really bothers me that that's changing.

PDFs and websites are no substitute for printed paper bound in a cover. PDFs and websites are a fallback when the preferred medium isn't available; they are not supposed to be the preferred medium. All of the reasons that people have given over the years apply here: paper is superior for this.


For the (very) long term, books may be superior, but for non-illustrated fiction in the short term, eBooks and an eReader are vastly better. Reading synced across my phone and tablet, less space than a single paperback, and immediate delivery of the next book.

Mid-pipeline? No, but midday, oh yeah.

I use the shell all day every day and I got stopped at the SECOND question.

"lines that contain 'laugh'". lines of what? Doesn't tell you without looking at the answer.

genius.


async/await is confusing in every language it is implemented in. Whoever came up with this paradigm has (or at the time, had) zero idea about usability. It is the programming equivalent of a programmer-designed user interface: it makes perfect sense to the one person who wrote it; it's an enormous pain for everyone else.

Raw threads and doing everything manually makes far more sense to me.

And of course Goroutines in Go make the most sense but they’re not as performant as something like Zig by any means.


Yes, GitHub Enterprise Server is not free. And yes, you pay a license fee per user per month, billed annually, and the minimum license purchase is 10 users at something like $21/user/month. Any Microsoft discounts you qualify for will bring that down. You pay because you get support. You won't need it often, but when you do, you really need it.

It is easy to administer even for 15k users, and mostly it takes care of itself if you give it enough RAM and CPU for all the activity.

Downloading the virtual hard drive image from GitHub is easy and decrypting the code inside is borderline trivial, but I'm not going to help anyone do that. I've never had a need to do it.

As a server product it is good. I recommend it if you can afford it. It is not intended for private individuals or non-profits, though. It's for corporations who want their code on-premise, and for that it is quite good.


It really is amazing how much success Linux has achieved given its relatively haphazard nature.

FreeBSD always has been, and always will be, my favorite OS.

It is so much more coherent and considered, as the post author points out. It is cohesive; whole.


> It really is amazing how much success Linux has achieved given its relatively haphazard nature.

That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be experimented with in parallel.


That was my impression from diving into The Design & Implementation of the FreeBSD Operating System. I really need to devote time to running it long term.

Really great book. Among other things, I think it's the best explanation of ZFS I've seen in print.

Linux has turned haphazardry into a strength. This is impressive.

I prefer FreeBSD.


I like the haphazardry but I think systemd veered too far into dadaism.

THIS. As bad as launchctl on Macs. Solution looking for a problem so it causes more problems -- like IPv6

> Solution looking for a problem

Two clear problems with the init system (https://en.wikipedia.org/wiki/Init) are

- it doesn’t handle parallel startup of services (sysadmins can tweak their init scripts to speed up booting, but init doesn’t provide any assistance)

- it does not work in a world where devices get attached to and detached from computers all the time (think of USB and Bluetooth devices, WiFi networks).

The second problem was evolutionary solved in init systems by having multiple daemons doing, basically, the same thing: listen for device attachments/detachments, and handling them. Unifying that in a single daemon, IMO, is a good thing. If you accept that, making that single daemon the init process makes sense, too, as it will give you a solution for the first problem.
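For a concrete sense of the first point: a systemd unit declares its ordering constraints, and the manager starts everything whose constraints are satisfied in parallel. A minimal sketch (the service name and binary path here are made up):

```ini
# /etc/systemd/system/myapp.service -- hypothetical unit
[Unit]
Description=Example app
# only ordering/dependency constraints are declared; anything
# without a constraint on this unit may start concurrently
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```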


Yes, "a solution". We need a thing. Systemd is a thing. Therefore, we need systemd.

Not to get into a flame war, but 99% of my issues with systemd are that it didn't just replace init, but also NTP, DHCP, logging (this one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view logs), etc. It broke the fundamental historical concept of Unix: do one thing very well.

To make things worse, the opinionated nature of systemd's founder (Lennart Poettering) has meant many a sysadmin has had to fight with it in real-world usage (e.g. systemd-timesyncd's SNTP client not handling drift very well, or systemd-networkd not handling real-world DHCP fields). His responses ("Don't use a computer with a clock that drifts", or "we're not supporting a non-standard field that the majority of DHCP servers use") just don't jibe with the real world. The result was going to be ugly. It's not surprising that most distros ended up bundling chrony, etc.


> (this one is arguably necessary, but they made it complicated, especially if you want to send logs to a centralized remote location or use another utility to view logs)

It is not complicated at all. Recent enough versions of systemd support journal forwarding, but even without it, configuring rsyslog is extremely easy:

1. Install rsyslog

2. Create a file /etc/rsyslog.d/forwarding.conf

    $ActionForwardDefaultTemplate RSYSLOG_ForwardFormat
    *.* @@${your-syslog-server-here}:514
3. Restart rsyslog

4. Profit.
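For completeness, the journal-native forwarding mentioned above is systemd-journal-upload; assuming a receiver running systemd-journal-remote, the client side is roughly this (the server name is a placeholder):

```ini
# /etc/systemd/journal-upload.conf
[Upload]
URL=https://logs.example.com:19532
```

then enable systemd-journal-upload.service (19532 is systemd-journal-remote's default port).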


You can't be serious thinking that IPv4 doesn't have problems

Of course not.

But IPv6 is not the solution to IPv4's issues at all.

IPv6 is something completely different, justified post-facto with EMOTIONAL arguments, i.e. "You are stealing the last IPv4 address from the children!"

- Dual stack: unnecessary and bloated.

- Performance: 4x worse or more.

- No NAT or private networks (not in the same sense). People love to hate on NAT, but I do not want my toaster on the internet with a unique hardware serial number.

- Hardware tracking built into the protocol; the mitigations offered are BS.

- Addresses are a cognitive block.

- Forces people to use DNS (centralized), which acts as a censorship choke point.

All we needed was an extra prefix octet to select WHICH address space, i.e. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc.

I actually wrote a code project that implements this network as an overlay -- but it's not ready to share yet. Works though.

If I were to imagine myself in the room deciding on the IPv6 requirements, I expect the key one was 'track every person and every device everywhere all the time', because if you are just trying to expand the address space then IPv6 is way, way overkill. It's overkill even as future-proofing for the next 1000 years of all that privacy invading.


> All we needed was an extra pre space to set WHICH address space - ie. '0' is the old internet in 0.0.0.0.10 --- backwards compatible, not dual stack, no privacy nightmare, etc

That is what we have in IPv6. What you write sounds good/easy on paper, but when you look at how networks are really implemented, you realize it is impossible to do that. Network packets have to obey the laws of bits and bytes, and there isn't any place to put that extra 0 in IPv4: no matter what, you have to create a new protocol, i.e. IPv6. They did write a standard for how to send IPv4 addresses in IPv6, but anyone who doesn't have IPv6 themselves can't use that, and so we must dual-stack until everyone transitions.


Actually there is a place to put it... I didn't want to get into this but since you asked:

My prototype/thought experiment is called IPv40 a 40bit extension to IPv4.

IPv40 addresses are carried over Legacy networks using the IPv4 Options Field (Type 35)

Legacy routers ignore Option 35 and route based on the 32-bit destination (effectively forcing traffic to "Space 0"). IPv40-aware routers parse Option 35 to switch Universes.

This works right now but as a software overlay not in hardware.
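To make the mechanism concrete, here is a hedged sketch (my reconstruction, NOT the project's code; "Option 35" and the one-octet payload are taken from the description above) of how such an option could be encoded and walked:

```python
# Hypothetical sketch of the Option-35 scheme -- an IPv4 option is encoded
# as (type, length, data), and routers that don't recognize a type are
# supposed to skip it using the length byte, which is what makes legacy
# traffic fall back to "Space 0".
OPT_SPACE = 35  # hypothetical option type carrying the extra "space" octet

def encode_space_option(space: int) -> bytes:
    """Encode the extra address octet as a 3-byte IPv4 option."""
    return bytes([OPT_SPACE, 3, space])

def decode_space(options: bytes) -> int:
    """Walk an IPv4 options field and pull out the space octet, if any."""
    i = 0
    while i < len(options):
        opt = options[i]
        if opt == 0:        # End of Option List
            break
        if opt == 1:        # No-Operation (single byte, no length field)
            i += 1
            continue
        if opt == OPT_SPACE:
            return options[i + 2]
        i += options[i + 1]  # skip unknown option, as a legacy router would
    return 0                 # no option present: traffic stays in "Space 0"
```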

Just my programming/thought experiment which was pretty fun.

When solutions are pushed top-down like IPv6, my spider sense tingles: what problem is it solving? The answer is NOT 'to address the address-space limitations of IPv4'; that is the marketing, and if you challenge it you will be met with ad hominem attacks and emotional manipulation.


You didn't save anything as everyone needs to know the new extension before anyone can use it.

Hardware is important - fast routers can't do work in the CPU (and it was even worse in the mid 90's when this started), they need special hardware assistance.


All good points guys -- but my point was to see what is possible. And it was. And it was fun! Of course I know it will perform poorly and it's not hardware.

So you have to update every router to actually route the "non-legacy" addresses correctly. How is this different from IPv6?

That is the easy part - most of the core routers have supported IPv6 for decades - IIRC many are IPv6-only on the backbone. The hard part is that if there is even one client that doesn't have the update, you can't use the new non-legacy addresses, as it can't talk to you.

Just like today, it is likely that most clients will support your new address, but ISPs won't route it for you.


Yes of course I know all that. That was the whole point of the overlay first approach. i.e. Build a network that works over the existing network before adding any barriers to entry like specialized hardware requirements.

So either the new octet is in the least-significant place in an IPv40 address, in which case it does a terrible job of alleviating the IP shortage (everyone who already has IP blocks just gets 256x as much as they have now),

Or, it’s in the most-significant place, meaning every new ipv40 IP is in a block that will be a black hole to any old routers, or they just forward it to the (wrong) address that you get from dropping the first octet.

Not to mention it’s still not software-compatible (it doesn’t fit in 32 bits, all system calls would have to change, etc.)

That all seems significantly worse than IPv6 which already works just fine today.


> it’s in the most-significant place, meaning every new ipv40 IP is in a block that will be a black hole to any old routers, or they just forward it to the (wrong) address that you get from dropping the first octet.

Black hole for old routers. You run the software overlay until you can run the hardware.

I wrote a Linux kernel module and a node/relay daemon that runs on every host.

There is a (0.)0.0.0.0 dedicated LAN space that auto-assigns IPs. I called it the standard LAN party :) No more 10.0/192.168/etc. The gateway always has .1 -- sensible defaults.

Also 0-255. Adds 255 IPv4 internets of address space. Billions and billions of addresses; moar than enough.

Maybe one day I'll put on a fireproof suit and post it here for fun and see how much flame I get.


I almost completely agree with you, but IPv6 isn't going anywhere - it's our only real alternative. Any other new standard would take decades to implement even if a new standard is agreed on. Core routers would need to be replaced with new devices with ASICs to do hardware routing, etc. It's just far too late.

I still shake my head at IPv6's committee-driven development, though. My god, the original RFCs had IPsec support as mandatory, and the auto-configuration had no support for added fields (DNS servers, etc). It's like the committee was only made up of network engineers. The whole SLAAC vs DHCPv6 drama was painful to see play out.

That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there.


> still shake my head at IPV6's committee driven development, though. My god, the original RFCs had IPSEC support as mandatory and the auto-configuration had no support for added fields (DNS servers, etc). It's like the committee was only made up of network engineers. The whole SLAAC vs DHCP6 drama was painful to see play out.

So true.

> That being said, most modern IPv6 implementations no longer derive the link-local portion from the hardware MAC addresses (and even then, many modern devices such as phones randomize their hardware addresses for wifi/bluetooth to prevent tracking). So the privacy portions aren't as much of a concern anymore. Javascript fingerprinting is far more of an issue there

JS Fingerprinting is a huge issue.

Honestly if IPv6 was just for the internet of things I'd ignore it. Since it's pushed on to every machine and you are essentially forced to use it -- with no direct benefit to the end user -- I have a big problem with it.

So it's not strictly needed for YOU, but it solves some problems that are not a problem for YOU, and also happens to expand the address space. I do not think the 'fixes' to IPv6 do enough to address my privacy concerns, particularly against a well-resourced adversary. Seems like they just raised the bar a little. Why even bother? Tell me why I must use it without resorting to 'you will be unable to access IPv6-hosted services!' or 'think of the children!?' -- both emotional manipulations.


Browser / JS fingerprinting applies to IPv4, too. And your entire IPv4 home network is likely NAT'd behind an ISP DHCP-provided address that rarely changes, so it would be easy to track your household across sites. Do you feel this is a privacy concern? Why or why not?

> Tell me why I must use it without resorting to 'you will be unable to access IPv6 hosted services!' or 'think of the children!?' -- both emotional manipulations.

You probably don't see it directly, but IPv4 IP addresses are getting expensive - AWS recently started to charge for their use. Cloud providers are sucking them up. If you're in the developed world, you may not see it, but many ISPs, especially in Asia and Africa, are relying on multiple levels of NAT to serve customers - you often literally can't connect to home if you need or want to. It also breaks some protocols in ways you can't get around depending on how said ISPs deal with NAT (eg you pretty much can't use IPSEC VPNs and some other protocols when you're getting NAT'd 2+ times; BitTorrent had issues in this environment, too). Because ISPs doing NAT requires state-tracking, this can cause performance issues in some cases. Some ISPs also use this as an excuse to force you to use their DNS infra that they can then sell onwards (though this can now be mitigated by DNS over HTTPS).

There are some benefits here, though. CGNAT means my phone isn't exposed directly to the big bad internet and I won't be bankrupted by a DDOS attack, but there are other, better ways to deal with that.

Again, I do get where you're coming from. But we do need to move on from IPv4; IPv6 is the only real alternative, warts and all.


C'mon, that's just rude to Dada.

Linux is haphazard because it's really only the kernel. The analog of "FreeBSD" would be a Linux distro like Red Hat or Debian, etc. In fact, systemd's real goal was to get rid of Linux's haphazard nature... but it's, ahhh, really divisive, as everyone knows.

I chalk up early Linux's initial success to the license. It's the first real decision you have to make once you start putting your code out there.


Yes and no. There were also some intellectual property shenanigans around 4.3BSD-derived code, and then the really rough FreeBSD 5 series, with its initial experiments with M:N threading in the kernel and troubles with SMP.

Just another instance of Worse is Better?

I’m very sure this is a myth. Like any good myth, it makes sense on the surface but holds zero water once you look closely.

Code isn’t prose. Code doesn’t always go to the line length limit then wrap, and prose doesn’t need a new line after every sentence. (Don’t nitpick this; you know what I’m saying)

The rules about how code and prose are formatted are different, so how the human brain finds the readability of each is necessarily different.

No code readability studies specifically looking for optimal line length have been done, to my knowledge. It may turn out to be the same as prose, but I doubt it. I think it will be different depending on the language and the size of the keywords in the language and the size of the given codebase. Longer keywords and method/function names will naturally lead to longer comfortable line lengths.

Line length is more about concepts per line, or words per line, than it is characters per line.

The 80-column limit was originally a technical one only. It has remained because of backwards compatibility and tradition.


Finding the start of the next line is a challenge universal to both code and prose, and the longer the line the harder it gets, regardless of how good your vision is. I acknowledged that there are other factors with code (such as indentation or syntax highlighting), which is why 80 characters—wider than either newspaper or book—makes sense, unless your typical identifiers are really long.

It’s not baffling at all. They strongly value maintaining backwards compatibility guarantees.

For example, Windows 11 makes no backwards-compatibility guarantees for DOS software, but software they do guarantee compatibility for keeps working.

Enterprises need Microsoft to maintain these for as long as possible.

It is AMAZING how much inertia software has that hardware doesn't, given how difficult each is to create.


They've stopped caring as much about backwards compat.

Windows 10 no longer plays the first Crysis without binary patches for instance.


There's a big difference between enterprise-level software and games.

Windows earns money mainly in the enterprise sector, so that's where the backwards-compatibility effort is. Not gaming. That's just a side effect.

Anecdotally: you can run 16-bit games (swing; 1997) on Windows, but only if you patch 2-3 DirectX-related files.


The prototypical examples given in the past were for applications like Sim City, hardly bastions of enterprise software.

And with Win11, Microsoft stopped shipping 32-bit versions of the OS, and since they don't support 16-bit mode on 64-bit OSes, you actually can't run any 16-bit games at all.


Things that go through the proper channels are usually compatible. Crysis was never the most stable of games and IIRC it used 3DNow, which is deprecated - but not by Windows.

As a counter-anecdata, last week I ran Galapagos: Mendel's Escape with zero compat patches or settings, that's a 1997 3D game just working.


> Things that go through the proper channels are usually compatible.

But that's a pretty low bar - previously Windows went to great lengths to preserve backwards compatibility even for programs that are out of spec.

If you just care about keeping things working if they were done "correctly" then the average Linux desktop can do that too - both for native Linux programs (glibc and a small list of other base system libraries have strong backwards compatibility) as well as for Windows programs via Wine.


On paper maybe. In practice there's currently at least one case that directly affects me where Wine-patched Windows software still works on Windows thanks to said patch... but doesn't work under Wine anymore.

The 3.5mm audio jack is 75 years old, but electrically-compatible with a nearly 150-year-old standard.

Victorian teletypes can be hooked to a serial port with a trivial adapter, at least enough to use CP/M and most single-case OS'es.

Also, some programming languages have a setting to export code compatible with just Baudot characters: http://t3x.org/nmhbasic/index.html

So, you could feed it from paper tape and maybe Morse too.


> Victorian teletypes

Wait what? There were devices called teletypes in the Victorian era (ending in 1901)? What were they doing?


There's a recent steampunky hacked mashup called a Victorian teletype.

Of more interest, to myself at least, teleprinters have a long history:

* Early developments (1835–1846)

* Early teleprinters (1849–1897)

~ https://en.wikipedia.org/wiki/Teleprinter


Yeah speakers haven’t changed enough to make the 3.5mm connector obsolete.

Many new devices use a 2.5mm audio jack instead of the 3.5mm audio jack.

Yes, but that doesn’t obsolete the 3.5mm jack or the 1/4” jack. It’s just a different form factor of the same thing.

I really wish Plan9 got more attention when it came out.

All processes in Plan9 are given their own namespace. By mounting things into or unmounting things from the namespace, you grant or revoke access to specific parts of the filesystem. And because everything is a file in Plan9, the filesystem is the filesystem, the audio device is part of the filesystem, the video device is part of the filesystem, network interfaces are exposed via the filesystem, etc.
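As a sketch (from memory of the man pages, so treat the exact commands and flags as approximate), the namespace manipulation looks like this in Plan9's rc shell:

```rc
rfork n              # give this process a private copy of the namespace
import gateway /net  # mount another machine's network stack over /net
ramfs                # private, RAM-backed /tmp visible only in this namespace
unmount /mnt/wsys    # revoke this namespace's access to the window system
```

Nothing outside the process and its children sees any of these changes.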

Isolation by default.

In 1995.

Docker would never have been needed if operating systems had adopted this feature. The kraken known as Kubernetes might never have been needed if Plan9's features had been adopted.

It’s too late to change how things are, but it’s never too late to set things right.

We need an operating system which isolates child processes from their parent and from siblings, and from everything else unless access to specific things is granted at launch time.

We’ve built so much crap on top of our old operating systems that we view it as normal. We should not need docker or virtualization to have isolation. There is no technical need for those things, and they are each another layer on a stack that is maybe already too tall. They are points of failure and if operating systems were capable, we would not need them.

The source code and design of Plan9 can fit entirely inside one mind. It isn’t a huge behemoth. It takes single digit seconds to compile.

It could be the basis of something supreme.

If I were rich, among my other altruistic endeavors, I would be hiring folks to develop this OS into something a little more current and a little more fit for the environment we see in 2025 and beyond.

My point: one should not need docker to do what you have done. Default per-process isolation should be a core feature of the operating system.


> The kraken known as Kubernetes might never have been needed if Plan9s features were adopted.

Which Plan9 features exactly give me a unified API layer to handle workload scheduling incl. fault tolerance, flat networking across a cluster or service discovery? Containers are an implementation detail and not what Kubernetes is fundamentally about.


Let’s be clear about one thing: Kubernetes is an operating system on top of Linux which exists solely because operating systems don’t provide what it needs already. I’m saying that operating systems should provide scalable ways to launch applications securely across many physical machines natively. Plan9 offers that, and it has for 30 flippin years.

Plan9 has those things out of the box if you configure them. Fault tolerance, flat networking across a cluster, and service discovery. And if I’m wrong about that (my knowledge of both Plan9 and kubernetes is incomplete) then it would almost be trivial to implement given what plan9 has out of the box. In fact, I think the built-in network database can do all of these things if you put the relevant data in and use it. It was designed for these exact things.

Plan9 is designed to be deployed as lots of physical systems all working cooperatively. User systems and servers in a server room, both. A program that lives on computer A can run using the CPU of computer B and the networking of computer C. Natively. It can look up the address of any service via the network database (provided that info is put into the database when that service is started), and so on. Note that I am not talking about DNS; that is separate from the network database.
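The "CPU of computer B" part is the cpu(1) command; roughly (host name hypothetical, flags from memory):

```rc
# run the build on computerB's CPU; the local terminal's namespace
# (files, devices, environment) is exported to the remote process
cpu -h computerB -c mk all
```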

Plan9 is different and it is superior in many ways.

Unix was built with the assumption that end users had terminals and that computing was centralized at the server. That assumption is no longer even remotely true. Yet we still cling to it as if it is ideal. It is not.

Plan9 was built with the assumption that everyone had capable computers at their desks and that people seated together often worked on things together. Closer to where we are today, but not quite. Today we have near-supercomputers at our desks in the form of development machines and servers of all descriptions in the server room, both more powerful and less powerful than our local machines.

If Plan9 were designed today it would be different, but the core features would remain.

And if you look at the source for Plan9 you’ll see that they got a hell of a lot done with very few lines of code. They were very, very “pro-simplicity”. Go read it and see how they did it. Then count the lines of code in Kubernetes and see which is bigger and more complex and then ponder that for a bit. It would have been easier to write an operating system to handle those workloads natively than it was to write Kubernetes.


Oh, that's great to hear; go ahead and post some simple examples then, if you will. E.g. what does a cron job look like in Plan9's API?

Clearly,

> which exists solely because operating systems don’t provide what it needs already

means Plan9 provided those needs already.


I'm not sure, at the end of the day, it would have made that much difference. Those who are security and separation aware still are and plan9 would have been a much easier tool for those people.

Those people who are not still would not be, and having that separation ability wouldn't help. You see this with apps and other things that have the ability to limit their access: you want the app, so you just click through without reading what permissions it wants. So those who just want to use something without understanding its dangers still fall prey to nefarious actors.


It would have if security people really existed as a career at the time I think. But who knows.

They were thinking ahead when they made that OS. No one else was.


Plan9 was always ahead of its time.

> no way do do it correctly in MacOS

What? The MacOS Keychain is designed exactly for this. Every application that wants to access a given keychain entry triggers a prompt from the OS and you must enter your password to grant access.

