Real-world systems often have to deviate from the "pure" version you run formal methods on. This could be how long you keep transaction logs, or how long rows stay tombstoned, etc. The longer the time period, the costlier it usually is, in total storage cost and sometimes performance too. So you have to compromise on where you set the time period.
Let's imagine that the process usually takes 1 minute and the tombstones are kept for 1 day. It would take something ridiculous to make the thing that usually takes 1 minute take longer than a day - not worth even considering. But sometimes there is a confluence of events that makes such a thing possible... For example, maybe the top-of-rack switch died. The server stays running, it just can't complete any upstream calls. Maybe it keeps retrying while the network is down (or just slowly timing out on individual requests and skipping to the next one). When the network comes back up, those calls start succeeding, but now the data is so much staler than you ever thought was possible or planned for. That's just one scenario, probably not exactly what happened to AWS.
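As a rough sketch of that failure mode (all names and numbers here are hypothetical, not from any real system): the worker retries forever, and nothing in the loop ever re-checks how stale the work item has become, so a multi-hour outage lets it "succeed" long after the 1-day tombstone window has expired.

    import time

    TOMBSTONE_TTL = 24 * 60 * 60  # tombstones are only kept for 1 day

    class NetworkError(Exception):
        """Stand-in for whatever the upstream client raises."""

    def send_upstream(item):
        """Hypothetical upstream call; raises NetworkError while the switch is down."""
        raise NetworkError

    def replicate(item, started_at):
        while True:
            try:
                send_upstream(item)   # starts succeeding again once the network is back
                return
            except NetworkError:
                time.sleep(30)        # keep retrying; nothing ever compares
                                      # time.time() - started_at against TOMBSTONE_TTL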
In my mind, anything that has an actual time period is bound to fail, eventually. Then again, I hang around QA engineers a lot, and when you hear the Selenium "wait until an element is on the page" horror stories, you realise it applies to software in general.
QA people deal with problems and edge cases most devs will never deal with. They're your subject-matter experts on 'what can go wrong'.
Anyway, the point is: you can't trust anything "will resolve in time period X" or "if it takes longer than X, time out". There are so many cases where this is simply not true, and it should be added to a "myths programmers believe" article if it isn't already there.
>You can't trust anything "will resolve in time period X"
As is, this statement just means you can't trust anything. You still need to choose a time period at some point.
My (pedantic) argument is that timestamps/dates/counters have a range based on the number of bits of storage they consume and the tick resolution. These ranges can be exceeded, and it's not reasonable for every piece of software in the chain to invent a new way to store time, or counters, etc.
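A back-of-the-envelope sketch of that range limit, assuming a 32-bit unsigned counter ticking once per millisecond (the layout behind the classic ~49.7-day uptime bugs; the specific widths here are an assumption, not taken from any particular system):

    BITS = 32       # width of the counter
    TICK_MS = 1     # one tick per millisecond
    wrap_days = (2 ** BITS) * TICK_MS / 1000 / 86400
    print(f"counter wraps after ~{wrap_days:.1f} days")  # ~49.7 days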
I've seen my fair share of issues resulting from processes with uptimes of over 1 year, and some with uptimes of 5 years. Of course the wisdom there is just "don't do that, you should restart for maintenance at some point anyway", which is true, but it still means we are living with a system that will theoretically break after a certain period of time, and we are sidestepping that by restarting the process for other purposes.
You can have liveness without a timeout. Think about it. Say you set a timeout of 1 minute in your application to transfer 500 MB over a 100 Mbps link. This normally takes 40 s, and this is that machine's sole job, so it fails fast.
One day, an operator is updating some cabling and switches you over to a 10 Mbps link for a few hours. During this time, every single one of your transfers is going to fail, even though if you were to inspect the socket, you would see it is still making progress on the transfer.
This is why we put timeouts on the socket, not the application. The socket knows whether or not it is still alive but your application may not.
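A minimal sketch of that distinction (host, port, and sizes are made up): the timeout set on the socket bounds each individual blocking recv(), i.e. "no progress at all for 30 seconds", while the receive loop itself has no overall deadline, so a transfer that merely slowed down keeps going instead of being aborted.

    import socket

    def fetch(host, port, size):
        buf = bytearray()
        # The 30 s applies to each blocking recv(), not to the whole transfer:
        # it only fires if the socket makes no progress at all for 30 seconds.
        with socket.create_connection((host, port), timeout=30) as sock:
            while len(buf) < size:
                chunk = sock.recv(65536)  # raises socket.timeout after 30 s of silence
                if not chunk:             # peer closed the connection
                    break
                buf.extend(chunk)         # progress was made, so keep going
        return bytes(buf)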
Yeah... it has felt kind of ridiculous over the years how many times I have tracked some bug I was experiencing down to a timeout someone added in the code of a project I was working with. I have come to the conclusion that the fix is always to remove the timeout: the existence of a timeout is, inherently, a bug, not a feature, and if your design fundamentally relies on a timeout to function, then the design is also inherently flawed.
How would you handle the case where some web service is making calls to a 3rd party and that 3rd party is failing in unexpected ways (e.g. under high load, or IPs not answering due to routing issues)? How do you avoid a snowball effect on your own service without using the timeout concept in any way?
You put the timeout on the socket, not your application. Your application shouldn't care how long it takes, as long as progress is being made, which the socket will know about, but you won't. If you put a timeout on your application and then retry, you'll just make the problem worse. Your original packets are still in a buffer somewhere and still will be processed. Retrying won't help the situation.
Sockets actually need a timeout because there is no signal that a client has disconnected. Eventually, maybe, a router along the path will be nice enough to send you a RST packet, but it isn’t guaranteed.
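For what it's worth, TCP keepalive is the usual way to get that missing "is the peer still there?" signal. A sketch below, using the Linux-specific option names (they differ on macOS and Windows), with placeholder intervals rather than recommendations:

    import socket

    def enable_keepalive(sock):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-only constants; other platforms expose different knobs.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # unanswered probes before the kernel drops the connection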
People put a lot of timeouts in code when there are humans in the loop that should handle the timeout. An outgoing socket (as is the case in this scenario) really should not have a timeout.
An incoming one might need a timeout if there is no other way to garbage-collect the connection, but, if at all possible, that should usually live in the higher layers, not the lower ones.
(Maybe read my other response to the person you responded to? I purposefully gave you a really short and matter-of-fact statement that fit into the discussion from the thread more broadly.)
I'm explicitly saying not to put timeouts in code… but you must put a timeout on a socket due to the way they work. Period. Or deal with the default, which is usually many minutes. Sockets time out when packets haven't been acknowledged for a long time, but you can set an idle timeout as well.
I continue to disagree: the socket does not need a timeout, it can simply go into an infinitely held state. Take a web browser (a very typical "outgoing socket" case): there is no value in either the browser or the socket having a timeout, as, if the user decides it takes too long, they will click Stop and/or Reload, which will close the socket. "I guess the remote side didn't send me a response packet within X seconds so I'll automatically stop the load and show the user an error" does not provide any benefit and can only lead to new failure edge cases.
I'm talking about the physical socket in the kernel here, not a hypothetical one. You can send packets (literally pulses of electricity) down it, but you don't know if anything happened until you get packets back. By default, the point at which the kernel gives up waiting is around half an hour, far longer than any human would reasonably wait.
My point is, you have to set this or accept the default timeout. The default is more than reasonable; anything less than minutes (with an s) is unreasonable.
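For reference, on Linux that kernel give-up point can also be tightened per socket with TCP_USER_TIMEOUT, the number of milliseconds that sent data may sit unacknowledged before the kernel drops the connection. A sketch; the constant only exists on Linux, and the 5-minute value is just an example:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Drop the connection if sent data goes unacknowledged for 5 minutes,
    # instead of the default, which works out to roughly half an hour.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 5 * 60 * 1000)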
How does the timeout help? Expose the lack of progress to the user and give them a way to give up; if they choose to walk away, then you stop. The only timeout should be in the head of a human who can make real decisions about how long too long is. The real problem is me knowing that if the software had waited a bit longer, it would have worked. Your timeouts just cause more busy work and are often the root cause of snowball effects.