Hacker News | dapirian's comments

Hey, thanks for the question: we will support CI systems other than GitHub Actions, but the exact timeline of which and when is TBD.

With regards to cost, you can self-host GitHub Actions runners. Internally we use an autoscaling group of spot instances via k8s, which is much cheaper than GitHub-hosted runners. We actually didn't do it for the cost; we did it because we wanted a local cache to persist on machines during the day, for performance reasons. When cost is a factor, having an analytics system to help you optimize your jobs is super important.


In SF, $35/walk is cheap. If you've watched 10 hours of dog training videos on YouTube, you can command $100/hr (the going rate for an unqualified dog trainer), or $150+/hr for a qualified dog trainer.


I'm not trying to be silly; I guess I should watch one of those videos, but I can't imagine not being a pro dog walker on my very first walk. I guess I had the completely wrong view of this job as "low skill".


Especially if you're using a finite pool of CI runners. Lots of companies run their own CI runners, either for added security or to get persistent CI machines with a hot local cache so builds/tests are much faster. Then everyone's PRs end up blocked waiting for CI machines just because of people's weird push workflows or micro stacked PRs.


I think this definition is outdated: linters find real bugs now. In practice, the boundaries between linters, static analyzers, and security tools are basically nonexistent; it varies tool by tool. Also, autoformatters should handle code style, not linters. Linters used to flag code style issues, and many still do, but people should turn all those rules off and use autoformatters instead. Much easier/better.


I disagree that the definition is "outdated". Yes, linters find real bugs; I never said they couldn't. But tools typically called "linters" routinely also report things that aren't serious defects, while tools that focus on a specific class of issue (e.g., security vulnerabilities) tend to stay focused on it. A "static analyzer" is anything that analyzes code without running it, so linters and Static Application Security Testing (SAST) tools are both kinds of static analysis tools. Linters and SAST tools (for example) tend to focus on different kinds of rules. Under the hood they may use the same underlying technology, but they apply it differently, so it makes sense to have different words for them.

So let's look at ESLint's list of rules: https://eslint.org/docs/latest/rules/ Here are some examples:

* getter-return: Enforce `return` statements in getters

* no-constant-condition: Disallow constant expressions in conditions

* no-fallthrough: Disallow fallthrough of `case` statements

* no-irregular-whitespace: Disallow irregular whitespace

You might find real bugs with these linting rules, sure. But if I want to prioritize finding and fixing security defects, I'll prioritize fixing the security defects found by a tool designed to find the most likely kinds of security defects (e.g., look for XSS, SQL injection, etc.). Not these rules. You can have code that violates many ESLint rules and still works correctly for end-users.

That doesn't mean linters are useless. Quite the contrary: I think linters are really helpful & I strongly encourage their use. Fixing issues found by linters can not only fix real bugs but also sometimes encourage simpler code that is easier for both humans and machines to analyze. But I wouldn't prioritize updating a linter over updating other tools and fixes. Yes, update linters, but when you have limited resources, that's not where you spend your first hour.

I think there's a lot of legitimate debate about autoformatters, that's a different topic.


why?


Because it's not artificial intelligence by definition.


> As for squashing, I believe that everything should be rebased instead to maintain a proper, linear git history

Rebasing on your PR branches is up to you; that's subjective. But setting GitHub to rebase your PR commits onto your main branch as its merge strategy is not proper at all. It might 'work' for a tiny hobby project, but it does NOT work for any kind of team activity / professional environment. It would add an immense amount of bloat to your main branch, and git will eventually get slow as mud on every operation because of it. Fun fact: git performance does not scale linearly, and big monorepos hit this problem. Beyond performance, when looking through git history on the main branch:

- Every commit on the main branch should map 1:1 with a reviewed PR that passed CI tests/checks.

- The 'mapping' can just be done by having the PR number in the commit message title. GitHub does that for you with the settings in this blog post.

- It's not uncommon that someone has 30+ commits on their PR branch, many of which are in a totally broken state, have blatant typos, etc. People just don't (and it's not even desirable for them to) structure their local commits while working on a PR to be some kind of iterative working state that is useful for others to look at. It's beyond clutter to have these 30 broken commits with crummy commit messages like "update" rebased onto the main branch for all to see for all time.
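The squash side of this can be sketched in a throwaway repo. Everything here is invented for illustration (the PR number #123, file names, the three "wip" commits); it just shows that squash-merging a messy branch leaves a single PR-titled commit on main, whereas rebase-merging would copy every WIP commit there:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo base > f.txt && git add f.txt && git commit -qm "init"

# A typical messy PR branch: several throwaway "wip" commits.
git checkout -qb pr
for i in 1 2 3; do
  echo "$i" >> f.txt && git add f.txt && git commit -qm "wip $i"
done

# Squash style: all three WIP commits collapse into one reviewed
# commit on main, titled after the PR.
git checkout -q main
git merge -q --squash pr
git commit -qm "Add feature X (#123)"

git log --oneline   # main has just "init" and "Add feature X (#123)"
```

With a rebase-merge strategy, `git log` on main would instead show all three "wip" commits.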


Not necessarily, especially where performance is concerned. Say the project is smaller, git improves its performance, or computers get more efficient:

Regarding commits' relation to PRs: GitHub still maintains this with rebase and merge. It's linked in the UI when viewing any individual commit on a branch.

Broken commits are mitigated by testing each commit incrementally.
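One common way to do that incremental testing is `git rebase --exec`, which replays each commit and runs a command after every one, halting at the first commit whose check fails. A self-contained sketch in a throwaway repo (the per-commit "test suite" here is just a stand-in file check):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo base > README.md && git add README.md && git commit -qm "base"

# A feature branch with two commits.
git checkout -qb feature
echo one > a.txt && git add a.txt && git commit -qm "add a.txt"
echo two > b.txt && git add b.txt && git commit -qm "add b.txt"

# main moves on, so the rebase below genuinely replays the commits.
git checkout -q main
echo update >> README.md && git add README.md && git commit -qm "tweak README"
git checkout -q feature

# Replay each feature commit onto main, running the check after every
# commit; a real setup would substitute the project's test command.
git rebase -q --exec 'test -f a.txt' main
```

If any commit fails its check, the rebase stops there so the broken commit can be fixed up before merging.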

Commit messages must be reviewed and edited to make sense. Having 30 commits would be flagged in review. It's okay, and even encouraged, to make a bunch of commits during review and then require them to be rebased into logical, atomic commits prior to merging.


If each of your commits is tested and makes sense independently, they should be different PRs. It sounds to me like you're just smashing 5 stacked PRs into 1 with highly fussy, curated commits, which aren't guaranteed by GitHub settings to each pass CI.


PRs are just logical groupings of commits, which makes it efficient to group related commits together. You are correct that each of the commits could be a separate PR, which is part of the advantage of this pattern: code contribution is always atomic and easily organized. You organize these contributions in whichever way makes it easiest to get them merged efficiently.


I think the parent's point is that if you just choose to think of the PR as the "unit of change" of your software product, rather than commits, all of this extra discipline you're imposing on yourself goes away.

And what do you lose by making this decision?

The main branch, being the only place where the changes really matter, can still capture all of the necessary details of the changes (based on the PR) in the commit message when it's merged. This is the point where the curation is most useful, and where it's easiest to apply standards, run tests, and actually enforce that they're followed.


> all of this extra discipline you're imposing on yourself just goes away.

I think the extra discipline is just shifted to the PR in that case instead of the commits. What's lost is the ability to group related changes together in a PR, which can be more efficient than making many small PRs.


The extra discipline I'm talking about has to do with curating the commits in order to make them meaningful. Here's an example of this from the wild:

https://github.com/BurntSushi/ripgrep/pull/350#issuecomment-...

BurntSushi (the project maintainer) didn't care about the story the commits told, he cared about the entirety of the changes being made to the repo (at the PR level).

> What's lost is the ability to make code contributions of related changes together in a PR

You're considering each minor commit to be an independent code contribution. BurntSushi instead considered the PR to be the code contribution. The code changes are the same either way you look at it, and there is a single PR in both cases, so it's a matter of choosing your perspective.


I'm finding it difficult to follow the thread of discussion here, so I'll just blurt out some things:

* Most PRs that come in do indeed get squash-merged, but for a few reasons. Sometimes it's because the PR is really just one logical change and really should be one commit. Sometimes it's because the commit messages are not written in a way that I like, so I do a squash-merge in order to edit or rewrite the commit messages provided by the author.

* Sometimes I do a "rebase and merge" if the commit breakdown is good and so are the messages. It's somewhat rare, but some contributors get it right. I'm fine with a single PR containing multiple commits, but the common case is indeed "one PR = one commit."

* I don't usually levy this criticism because I think it's low-brow, but the OP here---"git commit messages are useless"---is pretty egregious clickbait. Halfway into the article, the OP acknowledges that they aren't useless, just useless in the context of a PR workflow. Which... I also don't 100% agree with, but it is usually true for very small changes. What an annoying submission.


Thanks for weighing in.

I agree the title is clickbait but I didn't mind the article because it may challenge the reader to reconsider why they're putting so much effort into individual commits, even if they're not convinced to abandon (PR branch) commit messages entirely.


Trunk Merge is an intelligent service that orchestrates merging pull requests to maintain a repository of code that always passes tests. Airbnb, Uber, Twitter, Robinhood, and many other big tech companies internally have their own sophisticated merge queues. Now, you can get the same thing without having to build it.


true story


100% agree that documenting the practices of your repo is a losing battle: automate it or don't bother. I don't think you go far enough here. Every file in your repo should minimally have an autoformatter and some kind of linter/static analyzer/validator set up. Even shell scripts, ci pipeline configs, dockerfiles, terraform, etc. I recommend https://docs.trunk.io ;)


Thanks for the comment! Go ahead and use it in private repos for personal use or on small teams for free - our pricing structure only targets private repo use by companies with eng teams > 10.

Even for ecosystems that do have tool dep pinning, folks are missing out because the linters/formatters you should be running aren't necessarily part of the language ecosystem you use. For example, you probably have shell scripts that should be run through shfmt and shellcheck, which aren't on npm, even if you otherwise use JavaScript.
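To make the value concrete, here is a hand-made sketch of the classic bug shellcheck flags as SC2086: an unquoted variable expansion that word-splits. The directory names are arbitrary examples:

```shell
tmp=$(mktemp -d) && cd "$tmp"

dir="My Documents"
mkdir -p "$dir"   # quoted: one directory, space preserved
mkdir -p $dir     # unquoted (shellcheck SC2086): word-splits into "My" and "Documents"
ls
```

Running `shellcheck` on this script would flag the second `mkdir` line; without it, the bug silently creates two wrong directories.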

Trunk is young, but we'll become the one-stop shop for the full development experience of highly productive, scalable repos: linting, formatting, code coverage, merge automation, test analytics, running/selecting tests, and more. What we've seen is companies & projects repeatedly recreating bad versions of this stuff in-house, which leads to a bad dev experience and is a waste of time & money.

