
RDS is not multi-instance, multi-region, or fault tolerant by default; you choose all of that in your instance config. The number of single-instance, single-region, zero-backup RDS setups I've seen in the wild is honestly concerning. Do devs think an RDS instance on its own, without explicit configuration, is fault tolerant and backed up? If you have an EC2 instance with EBS and auto-restart you have almost identical fault tolerance (yes, there are some slight nuances on RDS regarding recovery after a failure).

I just find that assumption a bit dangerous. Setting all of that up on RDS is easy, but it's not on by default.
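
For example, with the aws CLI the resilience features are opt-in flags; a rough sketch (identifier, instance class and credentials are placeholders):

    aws rds create-db-instance \
      --db-instance-identifier mydb \
      --db-instance-class db.t3.medium \
      --engine postgres \
      --allocated-storage 50 \
      --master-username dbadmin \
      --master-user-password 'change-me' \
      --multi-az \
      --backup-retention-period 7

Without --multi-az you get exactly the single-instance, single-AZ setup described above.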


As someone who has self-hosted mysql (in complex master/slave setups), then mariadb, memsql, mongo and pgsql, on bare metal, virtual machines and then containers, for almost two decades at this point... you can self-host with very little downtime, and the only real challenges are the upgrade path and getting replication right.

Now with pgbouncer (or whatever other flavor of sql-aware proxy you fancy) you can greatly reduce the complexity of managing conventionally complex read/write routing and sharding across replicas, enabling resilient, scalable, production-grade database setups on your own infra. Throw in the fact that copy-on-write and snapshotting are baked into most storage today and it becomes, at least compared to 20 years ago, trivial to set up disaster recovery as well. Others have mentioned pgBackRest, which further reinforces how easy these traditionally complex setups have become.
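
To give a flavor of how little config this takes, here's a minimal pgbouncer.ini sketch (hostnames, database name and pool sizes are made up; pgbouncer itself only pools connections, so the app or a routing-aware proxy still picks the _rw vs _ro entry):

    [databases]
    ; writes go to the primary, reads to a replica
    app_rw = host=10.0.0.10 port=5432 dbname=app
    app_ro = host=10.0.0.11 port=5432 dbname=app

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = scram-sha-256
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 20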

Beyond those two significant features there aren't many other reasons you'd need to go with hosted/managed pgsql. I've yet to find a managed/hosted database solution that doesn't have some level of downtime for applying updates and patches, so even going fully hosted/managed isn't a silver bullet. The cost of a managed DB is also several times that of the actual hardware it runs on, so there's a cost factor involved as well.

I guess all this is to say that there's never been a better time to self-host your database, and the learning curve is as shallow as it's ever been. Add to all of this that any garden-variety LLM can hand-hold you through the setup and management, including any issues you might encounter along the way.


Good at finding/fixing security vulnerabilities = Good at finding/exploiting security vulnerabilities.

Yes, but if it's not getting removed at the origin... it's not fixing the actual issue of the context/conversation surviving past an explicit "delete" request. Also, let's not forget that anyone proxying LLMs is also a man in the middle for any code that goes up/down.

79a6ed87’s comment applies to Codex cloud, not the codex CLI, which is what Terragon is using.

Anecdotally, I've found it very good in the exact same case for multi-agent workflows, as the "reviewer".

Monero's proof of work (RandomX) is very ASIC-resistant, and although it generates a very small amount of earnings, if you exploit a vulnerability like this across thousands or tens of thousands of nodes it can add up (8 modern cores running 24/7 on Monero would be in the 10-20c/day per node range). OP's VPS probably generated about $1 for those script kiddies.

Hit 1000 servers and it starts adding up. Especially if you live somewhere with a low cost of living.
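
Rough back-of-envelope using the 10-20c/day per node figure above (electricity is paid by the victims, so it's nearly pure margin):

    per node:     $0.10-0.20/day x 365  ≈  $36-73/year
    1,000 nodes:  $100-200/day          ≈  $36k-73k/year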

So $40 a year? Does that imply all Monero is mined like this, because it's clearly not cost-effective at all to mine legitimately?

I think so, but it's hard to say. It could be a lot of people with extra power (or stolen power) but their own equipment. I mine myself with solar power that would otherwise go to waste.

Another option:

Deliberate heat generation.

If it's cold and you're going to be running a heater anyway, and your heat is resistive, then running a cryptominer is just as efficient and returns a couple of dollars back to you. It effectively becomes "free" relative to running the heater.

If you use a heat pump, or you rely on burning something (natural gas, wood, whatever) to generate heat, then the math changes.


Yeah, the calculus on that becomes tricky when you have a heat pump, since the coefficient of performance is >1 vs resistive heating (often 3-4 depending on the temperature).
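
A rough worked example, assuming a COP of 3.5 and $0.15/kWh (both made-up but plausible numbers):

    3.5 kWh of heat via heat pump:       1.0 kWh of electricity ≈ $0.15
    3.5 kWh of heat via GPUs/resistive:  3.5 kWh of electricity ≈ $0.53
    -> mining has to earn ~$0.38 per 3.5 kWh of heat delivered
       just to break even against the heat pump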

I used a rack of GPUs to heat my house for a few years back when GPU mining was decently profitable, and my electricity bill was 3-4x higher than with the heat pump, so you have to keep a close eye on the math when you're running at or under profitability.


The manageability of having everything on one host is kind of nice at that scale, but yeah you can stack free tiers on various providers for less.

Just a note: you can very much limit CPU usage on Docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect a container to be very lightweight. That isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt or a misbehaving service.
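
For example, a sketch with the docker CLI (image name and limit value are placeholders):

    # cap the container at half a core
    docker run -d \
      --cpus="0.5" \
      --restart=unless-stopped \
      your-image:latest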

Another is running containers in read-only mode, assuming they support that configuration... it removes a lot of the potential attack surface.

Never looked into this. I would expect the majority of images would fail in this configuration. Or am I unduly pessimistic?

Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.

Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].

I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.

1: https://github.com/kubernetes/kubernetes/issues/48912
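
The docker CLI version of that pattern looks roughly like this (image and paths are placeholders; note that docker's --tmpfs mounts default to noexec,nosuid unless you override them, which sidesteps the emptyDir noexec issue above):

    docker run -d \
      --read-only \
      --tmpfs /tmp \
      --tmpfs /run \
      your-image:latest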


Readonly and rootless are my two requirements for Docker containers. Most images can't run readonly because they try to create a user in some startup script. Since I want my UIDs unique to isolate mounted directories, this is meaningless. I end up having to wrap or copy Dockerfiles to make them behave reasonably.

Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.


I like steering docker runs with docker-compose, especially with .env files: easy to store in repositories, easy to customise, with sane defaults.

Yeah agreed. I use docker-compose. But it doesn't help if the Docker images try to update /etc/passwd, or force a hardcoded UID, or run some install.sh at runtime instead of buildtime.

It's hit or miss... you sometimes have to make /tmp or another data directory writable, and some images just don't operate right because of initialization steps that happen on first run. It depends on the image... but a lot of your own apps can definitely be made to work with a limited, or no, writable surface.

Depends on the specific app's use case. Nginx doesn't work with it, but valkey will.

This is true, but it's also easy to set a limit at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. It's always a good idea to be familiar with your app's performance profile, but it can be easy to let that get away from you.

While this is a good idea, I wonder if doing it could allow an intrusion to go undetected for longer: how many people or monitoring systems would notice a small increase in CPU usage, compared to all CPUs being maxed out?

Soft and hard memory limits are worth considering too, regardless of container method.

This is a great shout actually. Thanks for pointing it out!

The other thing to note is that Docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images, video, or more importantly PDFs), one approach is to stick it on its own VM, cycle the docker container every hour and the VM every 12, and then still stay worried about it getting hacked and leaking secrets.

If I can get in once, I can do it again an hour later. I'd be inclined to believe that dumb recycling is not very effective against a persistent attacker.

I wonder if a crypto miner like this was a person doing the work, or just an automated thing someone wrote to scan IPs for known vulnerabilities and exploit them automatically.

Most of this is mitigated by running docker inside an LXC container (like proxmox does), which grants a lot more isolation than docker on its own; closer in nature to running separate VMs.

Too bad it straight up doesn't work in pve9 without heavy mods.

Illumos had a really nice stack for running containers inside jails and zones... I wonder if any of that ever made it into the linux world. If you broke out of the container you'd just be inside a jail which is even more hardened.

SmartOS constructed a container-like environment using LX-branded zones; it didn't create an in-kernel equivalent to Linux's namespaces and then nest that inside a zone. You're probably thinking of the KVM port to Solaris/illumos, which does run in a zone internally to provide additional protection.

While LX-branded zones were a really cool tech demo, maintaining compatibility with Linux long-term would be incredibly painful and you're bound to find all sorts of horrific bugs in production. I believe that Oxide uses KVM to run their Linux guests.

Linux has always supported nested namespaces and you can run Docker containers inside LXC (or Incus) fairly easily. Note that while it does add some additional protection (in particular, it transparently adds user namespaces which is a critical security feature most people still do not enable in Docker) it is still the same technology as containers and so kernel bugs still pose a similar risk.
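
For reference, a minimal sketch of that nesting setup with Incus (container name and distro are arbitrary; Incus containers are unprivileged with user namespaces by default):

    # hypothetical example: unprivileged system container that can run Docker inside
    incus launch images:debian/12 docker-host
    incus config set docker-host security.nesting=true
    incus exec docker-host -- apt-get update
    incus exec docker-host -- apt-get install -y docker.io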


Yes, it was SmartOS; bcantrill worked on it post-Oracle. I remembered illumos since it was the precursor.

I trust companies that immediately and regularly update their status/issues page and follow up any outages with proper and comprehensive post-mortems. Sadly this is becoming the exception these days and not the norm.

There really should be an HTTP header dedicated to "outage status" with a link to the service's outage details page... clients (for example, in this case, your code IDE) could intercept it and notify users.

503 is cool, and yes, there's the "well, if it's down, how are they going to put that up?" argument, but in reality most of the downtime you see is on the backend and not on the reverse proxies/gateways/CDNs, where it would be pretty trivial to add an issues/status header with a link to the service status page and a note.
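
Something like this, served from the edge during an incident (the X- header names and URL are hypothetical; Retry-After already exists):

    HTTP/1.1 503 Service Unavailable
    Retry-After: 300
    X-Status-Page: https://status.example.com/incidents/1234
    X-Status-Note: elevated error rates on the API backend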


Something something if everyone used HATEOAS this wouldn't have been a problem something something
