
Oh yeah, DNS can be very painful. Definitely been burnt in the past. I generally lower the TTL to 5 minutes a day or so ahead of making any changes just to reduce risk, but it's made even worse by the fact that not everyone respects the TTL.
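If you want to sanity-check what TTL the world is currently seeing before you flip a record, something like this works (a quick sketch, assuming the third-party dnspython package; the domain is just a placeholder):

    # Check the TTL currently served for a record before changing it,
    # so you know roughly how long stale answers could keep circulating.
    # Assumes the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def current_ttl(name, rtype="A"):
        answer = dns.resolver.resolve(name, rtype)
        return answer.rrset.ttl  # TTL as reported by the resolver you asked

    print(current_ttl("example.com"))  # placeholder domain

Note that if you ask a caching resolver you'll see the remaining TTL rather than the full value configured at the authoritative server.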


I've taken to keeping my TTLs at 5 minutes as a default for personal stuff. The potential extra latency of a full lookup every time you access a not-very-often accessed resource is fine, and even for commonly accessed things the performance difference is negligible. Though I'm aware I'm putting a little extra load on DNS caches elsewhere, as they need to make extra recursive queries, so I might not do that for a high-traffic service.
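As a rough back-of-the-envelope for why the latency cost is negligible (all numbers below are invented purely for illustration):

    # Toy estimate of the average latency cost of a 300s TTL.
    # Assumes an uncached recursive lookup costs ~50ms; both figures are invented.
    TTL = 300            # seconds
    LOOKUP_COST_MS = 50  # assumed cost of a full recursive lookup

    for requests_per_second in (0.01, 1, 100):
        # a shared resolver only pays the full lookup once per TTL window
        miss_fraction = min(1.0, 1 / (requests_per_second * TTL))
        print(f"{requests_per_second} req/s: ~{miss_fraction * LOOKUP_COST_MS:.3f} ms extra per request")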

> not everyone even respects TTL

This used to be a problem with at least one common DNS cache, which would treat a very small value as an error and apply its own default (24 hours IIRC) instead. 10 minutes was fine, but at 9m59s it would not update until the next day (the threshold may not have been 10 mins, it could have been 500s (8m20s), but it was something of that order).
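Roughly the sort of logic it seemed to apply, sketched out (the threshold and default here are guesses from memory, as above, not the real values):

    # Sketch of the misbehaviour described above: a cache that treats a
    # "suspiciously small" TTL as an error and substitutes its own default.
    # The 600s threshold and 24h default are illustrative only.
    DEFAULT_TTL = 24 * 60 * 60   # 24 hours
    MIN_SANE_TTL = 600           # anything below this was assumed to be a mistake

    def effective_ttl(record_ttl):
        if record_ttl < MIN_SANE_TTL:
            return DEFAULT_TTL   # record won't refresh until the next day
        return record_ttl

    print(effective_ttl(600))  # 600   -> honoured
    print(effective_ttl(599))  # 86400 -> clamped to the default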

I'm pretty sure that is no longer a common DNS daemon, or if it is, that behaviour has been fixed, so these days I'm not really concerned for my projects. For work things I might be a bit more restrained with short TTLs, just in case (for personal projects I can take the “it is not my fault your DNS setup is broken” line, but that sort of attitude doesn't always fly in a commercial environment!).


I used to work for a very large DNS service that charged by query count. The metrics said reducing the DNS TTL from 24h to 10 mins only increased the number of queries by some small percentage (my memory is failing me, I want to say 10%), due to a hundred external factors, including companies not respecting TTLs. We usually recommended they keep it below 5 mins, and could show that query counts wouldn't scale linearly.
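A hand-wavy way to see why it's so far from linear: most authoritative queries come from a long tail of resolvers that each look a name up only a few times a day, so they never benefited from the long TTL in the first place, and shortening it mostly affects the handful of busy resolvers that were actually getting cache hits. A toy model (every number invented for illustration; only the shape of the curve matters):

    # Toy model of authoritative query volume vs TTL. Each caching resolver
    # asks at most once per TTL window, but a resolver that only looks a name
    # up a couple of times a day never benefits from a long TTL anyway.
    # All numbers are made up purely for illustration.
    def daily_queries(resolver_rates, ttl_seconds):
        cap = 86400 / ttl_seconds  # max cache refreshes per resolver per day
        return sum(min(rate, cap) for rate in resolver_rates)

    # 10,000 quiet resolvers (1 lookup/day) and 5 busy ones (5,000/day)
    population = [1.0] * 10_000 + [5000.0] * 5

    print(daily_queries(population, 24 * 3600))  # 24h TTL:   ~10,005 queries/day
    print(daily_queries(population, 600))        # 10min TTL: ~10,720 (about 7% more)
    print(daily_queries(population, 300))        # 5min TTL:  ~11,440 (about 14% more)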



