Hacker News

Some of the repos we work on date back to 2004 (going from CVS to Subversion to git, with developers moving from mostly Windows to mostly Linux to mostly macOS). If everybody were free to check in the debris of whatever IDE and/or OS they were working with at any given time, the codebase would be a terrible mess, especially since such debris tends to go unnoticed for ages, until suddenly it isn't.

Just like we have CI checks to make sure nobody accidentally commits a secret (like the GitHub host key thing last week), we have checks that prevent debris from being committed and ensure code is formatted according to agreed-upon coding standards.
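A check like that can be as simple as grepping the tracked file list. A minimal sketch (the function name and pattern list are illustrative, not OP's actual tooling):

```shell
# check_debris: read file paths on stdin, fail if any match a
# known-debris pattern (patterns here are illustrative).
check_debris() {
  ! grep -E -q '(^|/)\.DS_Store$|(^|/)Thumbs\.db$|\.swp$'
}

# In CI, one might run it against every tracked file:
# git ls-files | check_debris || { echo "IDE/OS debris is tracked" >&2; exit 1; }
```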

All of this might seem superfluous when the life expectancy of a repo is measured in months or single-digit years, but then, no solution or repo is more permanent than a quick throw-away one.

Which is why this isn't even a discussion but just a reality. Been there, done that, learned my learnings.



You're moving the goalposts here. Nobody is arguing for checking in debris like `.DS_Store` files; I'm totally on board with keeping that out, but that's covered by any .gitignore template my IDE generates.

Instead, you appear to be enforcing which arbitrary location the ignore rules for debris live in, and you seem to have spent considerable time implementing that, for entirely puritanical reasons.

And while you're free to play holier-than-thou at your job as you like, I'd give hell to anyone wasting my team's time like that.


There’s also an argument for centralising every exclusion in one config file (i.e. .gitignore): it makes it easier to review which exclusions are active.
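git already supports that kind of review: `git check-ignore -v` reports which file, line, and pattern cause a given path to be ignored. A quick sketch in a throwaway repo, assuming git is installed:

```shell
# Audit which ignore rule matches a path with `git check-ignore -v`.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf '.DS_Store\n*.swp\n' > .gitignore
git check-ignore -v .DS_Store
# Output names the source file, line number, and matching pattern,
# e.g. ".gitignore:1:.DS_Store	.DS_Store"
```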

As a DevOps guy (but with 30 years of experience in development too), one of my pet peeves is having to deal with a thousand different edge cases because someone decided that intellectual cleverness was more important than a holistic approach to design and architecture.


> Instead, you appear to be enforcing in which arbitrary location to place ignore rules for debris, seemingly having spent considerable time implementing that, for entirely puritanical reasons.

They're "puritanical reasons" right until you have to migrate to a new version control system, at which point they're the rules that all migration tools enforce -- because those tools are usually written against the two VCSes' specs and current implementations, rather than the myriad of alternate practices that software shops everywhere devise.

I've done migrations like these before -- while the parent poster may be doing all that for the wrong reasons (puritanism) they are absolutely right to do them.


So, just to be clear, when migrating from git to a hypothetical new VCS, you're saying that it will be beneficial to have exclusion rules for some files in the repository, and for others locally on every developer's machine, hopefully?

The "myriad of alternate practices" that you mention is a strawman: We're still talking about ignoring some file manager metadata file. Developers put a line into their .gitignore and are done with it. If that breaks your new VCS, maybe migrating to it isn't such a good idea?


> So, just to be clear, when migrating from git to a hypothetical new VCS, you're saying that it will be beneficial to have exclusion rules for some files in the repository, and for others locally on every developer's machine, hopefully?

Not some arbitrary files, but the ones that the VCS recommends, or at least allows, to be excluded locally, where that makes sense. See e.g. GitHub's note on the matter: https://docs.github.com/en/get-started/getting-started-with-...
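For reference, git itself provides two standard places for exclusions that stay out of the shared .gitignore; a small sketch (the TAGS entry and the global filename are just common conventions):

```shell
# Per-clone exclusions: .git/info/exclude is read like .gitignore
# but is never committed or shared with other developers.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'TAGS' >> .git/info/exclude
git check-ignore -q TAGS   # exits 0: TAGS is ignored in this clone only

# Per-user exclusions: a global excludes file applies to every repo
# for that developer (shown commented out so it doesn't touch ~/.gitconfig):
# git config --global core.excludesFile ~/.gitignore_global
# echo '.DS_Store' >> ~/.gitignore_global
```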

Also, this is not a hypothetical new VCS; I've seen this backfire several times, and it's usually not related to how good the new VCS is. E.g. when doing a ClearCase -> SVN migration years ago, just cutting out a few global exclusion rules that should have been local (e.g. tags files for people who liked etags, cscope.files for people who liked cscope) reduced the full-history migration time from several days to a few hours.

The two tools did not treat filename encoding the same way, so the migration tool had to walk through the exclusion list at every history point it synchronized. Due to how ClearCase presented its repositories (tl;dr userspace filesystem, years before FUSE) this was very slow on its first Linux versions (it had originally been offered for commercial Unices), not so much because ClearCase sucked but because it exposed a nasty quirk in the kernel's VFS implementation.

> The "myriad of alternate practices" that you mention are a strawman: We're still talking about ignoring some file manager metadata file. Developers put a line into their .gitignore and be done with it.

We're talking about ignoring file manager metadata files specific to some developers on some machines. .gitignore is global. Some details matter.

If you don't expect to ever do cross-platform migrations -- or even much cross-platform development, for that matter -- for repositories maintained across multiple platforms, then yes, sure, you don't need that. That doesn't mean OP has a twisted view of adding value. Maybe his team does need that.

Edit: FWIW, migrations are just the nasty point that bites you back years later and usually comes with a huge bill. But putting all local exclusion rules in a single global file backfires in all sorts of ways on large and/or long-lived codebases. Sooner or later some exclusion rule specific to one developer's environment will do the wrong thing in another developer's environment.


Secrets checks should be in git hooks as well, because if you’re only checking in CI then your secret has already been published.
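A minimal client-side version could be a pre-commit hook that scans the staged diff before the commit is ever created; the patterns and wiring below are just a sketch, not an exhaustive scanner:

```shell
# scan_staged: read a diff on stdin, fail if it contains obvious
# secret markers (patterns are illustrative, not exhaustive).
scan_staged() {
  ! grep -E -q 'AKIA[0-9A-Z]{16}|BEGIN (RSA |OPENSSH |EC )?PRIVATE KEY'
}

# Installed as .git/hooks/pre-commit, this blocks the commit locally:
# git diff --cached -U0 | scan_staged || {
#   echo "possible secret in staged changes; aborting commit" >&2
#   exit 1
# }
```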



