Hacker News

Lovely stuff. The industry would be so much better off if the family of BSDs had more attention and use.

I run some EVE Online services for friends. They have manual install steps for those of us not using containers. Took me half a day to get the stack going on FBSD, and that was mostly me making typos and mistakes. So pleased I was able to dodge the “docker compose up” trap.



As one of the guys who develops an EVE Online service: While you were able to get by with manual install steps that perhaps change with the OS, for a decent number of people it is the first time they do anything on the CLI on a unixoid system. Docker reduces the support workload in our help channels drastically because it is easier to get going.


I can sympathize. It makes sense.

But...

As a veteran admin I am tired of reading through Dockerfiles to guess how to do a native setup. You can never suss out the intent from those files - only make haphazard guesses.
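To illustrate the kind of guessing involved (a hypothetical Dockerfile fragment; the package names and the sed edit are made up for the example):

```dockerfile
FROM debian:bookworm-slim
# What a porter has to reverse-engineer: is libfoo-dev needed only at
# build time, or at runtime too? Is the sed edit below a workaround for
# a container quirk, or a hard requirement of the application?
RUN apt-get update && apt-get install -y --no-install-recommends \
        libfoo-dev libbar-dev \
    && sed -i 's/^#listen/listen/' /etc/foo/foo.conf
```

None of those "whys" are recorded anywhere, so a native setup means rediscovering them by trial and error.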

It smells too much like "the code is the documentation".

I am fine that the manual install steps are hidden deep in the dungeons away from the casual users.

But please do not replace Posix compliance with Docker compliance.

Look at Immich for an unfortunate example. They have some nice high-level architecture documentation, but the "whys" of the Dockerfile are nowhere to be found. That makes it harder to contribute, as it caters to the Docker crowd only and leaves a lot of guesswork for the Posix crowd.


Veteran sysadmin of 30 years... UNIX sysadmin and developer...

I have used docker+compose for my dev projects for about the past 12 years. Very tough to beat the speed of development with multi-tier applications.

To me Dockerfiles seem like the perfect amount of DSL, but still flexible because you can literally run any command as a RUN line and produce anything you want for a layer. Dockerfiles seem to get it right. Maybe the 'anything' seems like a mis-feature, but if you use it well it's a game changer.
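A sketch of what that small DSL looks like in practice (the app and file names are illustrative):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency list first so this layer is cached
# until requirements.txt actually changes.
COPY requirements.txt .
# RUN accepts any shell command; each one produces a cached layer.
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

A handful of verbs (FROM, COPY, RUN, CMD) cover most builds, and RUN is the escape hatch for everything else.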

Dockerfiles are also an excellent way to distribute FOSS to people who unlike you or I cannot really manage a systems, install software, etc without eventually making a mess or getting lost (i.e. jr developers?).

Are there supply chain risks? Sure -- like many package systems. I build my important images from scratch all the time just to mitigate this. There's also Podman with Containerfiles if you want something more FOSS friendly but less polished.

All that said, I generally containerize production workloads, but not with Docker. If a dev project is ready for primetime, I port it to Kubernetes. It used to be BSD jails.


> Dockerfiles are also an excellent way to distribute FOSS to people who unlike you or I cannot really manage a systems, install software, etc without eventually making a mess or getting lost (i.e. jr developers?).

Read what you just said:

> ... to people who unlike you or I cannot really manage a systems ...

These are people who should not be running systems.

> I build my important images from scratch all the time...

I doubt it, but assuming you're telling the truth, then you're a rare cookie because my clients don't even do that, and they're either government bodies with millions in funding or enterprises with 60,000 employees across the entire globe.

Again, the art of the operating system, and managing it, has been lost. It's been replaced with something that adds even more problems, security or otherwise, for the sake of convenience.

I hope everything works out super well for you, friend.


> These are people who should not be running systems.

I lol'd at that :) I was just trying to be more inclusive! Results have been mixed.

I do build images from scratch, but I also recognize I am atypical (like many around HN). As I get older (and older) I realize that in another time and place I would have been someone completely different -- but I got the internet era. Not complaining. It's worked out ok.

I wish you the best as well friend!


You couldn't be further from the truth, though.

What you're saying here is: someone new to this simply uses Docker and everything just works and is fine. The support is heavily reduced (for you, not the user) and so everything is good.

And that mentality is why we have crazy botnets doing terabytes-per-second attacks these days -- your users just firing up a VM, using "docker compose up", and walking away because "it just works". The reality is, that system falls out of date pretty quickly; an exploit is found and patched, but that patch never sees the light of day for that user.

It's awesome you can get a user up and running so quickly, but the sheer amount of work required to actually maintain a server is too much for the average EVE Online player trying to run some ESI tool.


How do people learn then?

Personally I learn best by doing. If I study before I try something the results are poorer compared to the reverse. Make an attempt. Fail. Understand. Then study.

It's not an either-or proposition, either.

Dockerfile freshness is up to the maintainer, like any FOSS.

We do not have botnets because we share information about how to run servers. People run botnets, and we have botnets, mostly because of economic or political incentives -- the same reasons people do a lot of things.

You are not wrong about the complexity of something like running an EVE online server being beyond the abilities of the non-professional, but that should not preclude the information from being shared.

Script kiddies have been around for a long time. Borrowing the work of the better engineers and adapting it for their less idealistic goals.


You don’t learn in production. We’re talking about running production workloads here, not a localised lab. You can learn just fine locally with or without docker and other tooling that “eases” deployment of software. But when it comes to production it’s best to have a solid idea of what you’re doing and what a real production system requires.

Sadly, when people learn locally with “docker compose up”, that becomes their baseline, their reality, and they believe everything else is taken care of for them. Actually, you’re still running a process that’s bound to a network port (but with extra steps, because you used a container), and the entire ecosystem around that still needs to be secured.
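A quick way to see this for yourself (the `web` service name is hypothetical; assumes a compose project with a published port):

```shell
docker compose up -d
# The published port is a real listener on the host, owned by docker-proxy:
ss -tlnp | grep docker-proxy
# And inside the container there is an ordinary process that still
# needs the same patching and hardening as any other:
docker compose exec web ps aux
```

The container boundary changes how the process is packaged, not whether it is exposed.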

That’s what’s been lost as of late :-(


Can you explain why "Docker compose" is a trap?


For my two cents, it discourages standardization.

If you run bare-metal, and the instructions to build a project say "you need to install libfoo-dev, libbar-dev, libbaz-dev", you're still sourcing it from your known supply chain, with its known lifecycles and processes. If there's a CVE in libbaz, you'll likely get the patch and the news from the same mailing lists you got your kernel and Apache updates from.
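On a Debian-style host that looks like this (using the hypothetical libfoo/libbar/libbaz names from above):

```shell
# Dependencies come from the distro archive, under its security process:
sudo apt-get install libfoo-dev libbar-dev libbaz-dev
# A later CVE fix for libbaz arrives through the exact same channel
# as every other security update on the box:
sudo apt-get update && sudo apt-get upgrade
```

One update mechanism, one advisory feed, for the whole stack.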

Conversely, if you pull in a ready-made Docker container, it might be running an entire Alpine or Ubuntu distribution atop your preferred Debian or FreeBSD. Any process you had to keep those packages up to date and monitor vulnerabilities now has to be extended to cover additional distributions.
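One way to check which distro a pulled image actually ships (the image name is illustrative):

```shell
docker run --rm some/ready-made-image cat /etc/os-release
# Frequently this reports e.g. Alpine or Ubuntu even though the host
# runs Debian or FreeBSD -- a second userland to track for CVEs.
```

Every distinct base image in your fleet is another package set whose advisories you now have to follow.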


You said it better at first: Standardization.

Posix is the standard.

Docker is a tool on top of that layer. Absolutely nothing wrong with it!

But you need to document towards the lower layers. What libraries are used and how they're interconnected.

Posix gives you that common ground.

I will never ask for people not to supply Dockerfiles. But to me it feels the same as if a project just released an apt package and nothing else.

The manual steps need to be documented. Not for regular users but for those porting to other systems.

I do not like black boxes.


The reason I moved away from Docker for self-hosted stuff was the lack of documentation and very complicated Dockerfiles with various shell scripts and service configs. Sometimes it feels like reading autoconf-generated files. I much prefer to learn whatever packaging method the OS uses and build the thing myself.


Something like Harbor easily integrates to serve as both a pull-through cache and a CVE scanner. You can actually block pulls of images with a given vulnerability type or CVSS rating.

You /should/ be scanning your containers just like you /should/ be scanning the rest of your platform surface.
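For a standalone sketch of what that scanning looks like, Trivy (the scanner Harbor bundles by default) can be run directly against an image (the registry and image name here are hypothetical):

```shell
# Scan an image and fail the pipeline on high-severity findings:
trivy image --severity HIGH,CRITICAL --exit-code 1 \
    myregistry.example/app:latest
```

The same gate Harbor applies at pull time can thus also run in CI, before the image is ever pushed.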


You've put that command in quotation marks in three comments on this topic. I don't think it's as prevalent as you're making out.


I wonder how it would work with the new-ish podman/oci container support?



