I ran a Django+Celery app behind Nginx back in the day. Most maintenance amounted to discovering a new failure mode:
- certificates not being renewed in time
- Celery eating up all RAM and having to be recycled (see the config sketch after this list)
- RabbitMQ getting blocked, requiring a forced restart
- random issues with Postgres that usually required a hard restart of PG (running low on RAM maybe?)
- configs having issues
- running out of inodes
- DNS not updating when upgrading to a new server (no CDN at the time)
- data centre going down, taking the provider’s email support with it (yes, really)
Bear in mind I’m going back a decade now, so my memory is rusty. Each issue was solvable, but each would happen at random, and even mitigating them was time that I (a single dev) was not spending on new features or fixing bugs.
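For what it’s worth, the Celery memory problem was common enough that Celery eventually grew knobs for it. A minimal sketch of the worker-recycling config, assuming Celery 4+; the app name and broker URL here are placeholders:

    from celery import Celery

    # Placeholder app name and broker URL.
    app = Celery("myapp", broker="amqp://guest@localhost//")

    # Recycle each worker process after it has run 100 tasks, so a slow
    # memory leak can never accumulate for long.
    app.conf.worker_max_tasks_per_child = 100

    # Also recycle any worker whose resident memory exceeds ~200 MB
    # (the setting is expressed in kilobytes).
    app.conf.worker_max_memory_per_child = 200_000

Back then you’d have done the equivalent with the old CELERYD_* settings or a cron job, but the idea is the same: assume the worker leaks and recycle it before it matters.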
Er… what? Even in today’s world with Docker, you have differences between dev and prod. For a start, one is accessed via the internet and requires a correct TLS config; the other is accessed via localhost.
Just FYI: you can put whatever you want in /etc/hosts; it’s consulted before DNS. So you can run your website on localhost with your regular hostname over HTTPS.
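For example (the hostname is made up, and mkcert is just one way to get a locally trusted certificate; any local CA works):

    # /etc/hosts — consulted before DNS, so the name resolves to your machine
    127.0.0.1   myapp.example.com

    # Browsers still need a certificate they trust; mkcert can generate one:
    $ mkcert -install                # one-time: install a local CA
    $ mkcert myapp.example.com       # writes myapp.example.com.pem / -key.pem

Point your local Nginx vhost at the generated cert and key, and dev traffic goes over https://myapp.example.com the same way prod does.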