bikelang 21 hours ago

Git bisect was an extremely powerful tool when I worked in a big-ball-of-mud codebase that had no test coverage and terrible abstractions which made it impossible to write meaningful tests in the first place. In that codebase it was far easier to find a bug by finding the commit it was introduced in - simply because it was impossible to reason through the codebase otherwise.

In any high quality codebase I’ve worked in, git bisect has been totally unnecessary. It doesn’t matter which commit the bug was introduced in when it’s simple to test the components of your code in isolation and you have useful observability to instruct you on where to look and what inputs to test with.

This has been my experience working on backend web services - YMMV wildly in different domains.

  • kccqzy 20 hours ago

    Git bisect is never unnecessary. Even when you can easily test the components and find the bug that way, a bisect allows you to understand why the bug was introduced. This is wonderful in all places where there is a culture of writing long and comprehensive commit messages. You get to understand why the bug occurred from a previous commit message and you will write about that context in your bug fix commit message. And this becomes positive reinforcement. The better the commit messages are, the more useful it is to use git bisect or git blame to find the relevant commit messages.

    • kemayo 20 hours ago

      Yeah, bisect is really handy because often a bug will have been introduced as a side-effect of a change made to support something else, and if you don't know what new usage was introduced you're relatively likely to break that in the course of fixing the bug.

      You can avoid it via the "just look at every usage of this function and hold the entire codebase in your head" method, of course, but looking at the commit seems a bit simpler.

      • _alternator_ 17 hours ago

        ‘git blame’ is often more handy for finding the reason that the change was made, assuming you know the location of the bug. It tells you the commit and the commit message.

        • geon 8 hours ago

          Blame doesn’t help when the same code has been changed multiple times since the bug was introduced.

          • carlmr 6 hours ago

            While true, this is one reason I always introduce automated code-formatting early on. It makes git blame a bit more useful.
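
            And when a bulk reformat does land mid-history, newer Git (2.23+) can skip it in blame output - a minimal sketch, assuming the formatting commits' hashes are listed in a .git-blame-ignore-revs file:

                # blame past the pure-formatting commits listed in the ignore file
                git blame --ignore-revs-file .git-blame-ignore-revs path/to/file.py

                # or set it once per repo:
                git config blame.ignoreRevsFile .git-blame-ignore-revs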

        • kemayo 14 hours ago

          > assuming you know the location of the bug

          Well, yes. But `git bisect` is often the quickest way to find that, in a complex system.

  • diegocg 20 hours ago

    There are certainly other use cases. git bisect was enormously useful when it was introduced in order to find Linux kernel regressions. In these cases you might not even be able to have tests (e.g. a driver needs to be tested against real hardware - hardware that the developer who introduced the bug could not have), and as a user you don't have a clue about the code. Before git bisect, you had to report the bug and hope that some dev would help you via email, perhaps by providing some patch with print debug statements to gather information. With git bisect, all of a sudden a normal user was able to bisect the kernel by himself and point to the concrete commit (and dev) that broke things. That, plus a fine-grained commit history, entirely changed how bugs are found and fixed.

    • bsder 16 hours ago

      > With git bisect, all of a sudden a normal user was able to bisect the kernel by himself and point to the concrete commit (and dev) that broke things.

      Huh. Thanks for pointing that out. I definitely would never have thought about the use case of "Only the end user has specific hardware which can pinpoint the bug."

      • kragen 11 hours ago

        This is why operating systems are hard. It's not the architecture or the algorithms.

  • Volundr 20 hours ago

    If all you care about is fixing the bug, this is probably often true. Certainly bisect is not part of my daily workflow. Sometimes though you also need to know how long a bug has been in place, e.g. to track down which records may have been incorrectly processed.

    Edit: tracking down where something was introduced can also be extremely helpful for "is this a bug or a feature" type investigations, of which I have done many. Blame is generally the first tool for this, but over the course of years the blame can get obscured.

    • jayknight 19 hours ago

      Yes to both of these. In a healthcare setting, some bugs leave data that needs to be reviewed and/or corrected after it is identified and fixed.

      And also a fair number of bugs filed can be traced back to someone asking for it to work that way.

    • hinkley 18 hours ago

      See also Two Devs Breaking Each Other’s Features.

      I got to hear about a particularly irate customer during a formative time of my life and decided that understanding why weird bugs got into the code was necessary to prevent regressions that harm customer trust in the company. We took too long to fix a bug, then reintroduced it within a month, because the fix broke another feature and someone tried to put it back.

  • aidenn0 8 hours ago

    What about finding the commit a bug was fixed in?

    Example use #1: Customer using a 6-year-old version of the software wants to know if upgrading to a 4-year-old version of the software will solve their problem.

    Example use #2: The part of code that was likely previously causing the problem has been significantly reworked; was the bugfix intentional or accidental? If the latter, is the rework prone to similar bugs?
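
    For use #1, git (2.7+) even lets you rename bisect's terms so you can hunt for a fix instead of a regression - a sketch, with hypothetical tags and repro script:

        git bisect start --term-old=broken --term-new=fixed
        git bisect broken v1.0   # hypothetical 6-year-old release: bug present
        git bisect fixed v3.0    # hypothetical 4-year-old release: bug gone
        # with `git bisect run`, exit 0 maps to the *old* term, so the
        # script should exit 0 while the bug still reproduces:
        git bisect run ./reproduce-bug.sh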

    • geon 8 hours ago

      https://www.reddit.com/r/talesfromtechsupport/s/K2xme9A0MQ

      I once bisected to find a bug in a 6 month old commit. An off-by-one error in some array processing. I fixed the bug there to confirm. But on main, the relevant code didn’t even exist any more. It had been completely refactored away.

      I ended up rebasing the entire 6 months' worth of commits onto the bugfix, propagating the fix throughout the refactoring.

      Then a diff against main showed 3 lines changed in seemingly unrelated parts of the code, together triggering the bug. I would never have found them without bisect and rebase.

  • sfvisser 21 hours ago

    Even if you can reason through a code base a bisect can still be much quicker.

    Instead of understanding the code you only need to understand the bug. Much easier!

  • foresto 20 hours ago

    I find git bisect indispensable when tracking down weird kernel bugs.

  • Nursie 13 hours ago

    That was my first thought when reading this.

    It sounds like the author doesn't understand the codebase. If you're brute-forcing bug detection by bisecting commit versions to figure out where the issue is, something has already failed. In most cases you should have logs/traces/whatever that give you the info you need to figure out exactly where the problem is.

    • thfuran 11 hours ago

      Every system is imperfectly understood.

      • Nursie 8 hours ago

        This is very true, but it's a matter of degree.

funnymunny 16 hours ago

I used git bisect in anger for the first time recently and it felt like magic.

Background: We had two functions in the codebase with identical names and nearly identical implementations, the latter having a subtle bug. Somehow both were imported into a particular python script, but the correct one had always overshadowed the incorrect one - that is, until an unrelated effort to apply code formatting standards to the codebase “fixed” the shadowing problem by removing the import of the correct function. Not exactly mind bending - but we had looked at the change a few times over in GitHub while debugging and couldn’t find a problem with it. Not until we knew for sure that was the commit causing the problem did we find the bug.
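
A contrived sketch of the mechanism, with hypothetical module names:

    from buggy_impl import process_records    # subtly wrong duplicate
    from correct_impl import process_records  # later import wins, shadowing the buggy one
    # an automated cleanup that deletes one of these "redundant" imports
    # can quietly un-shadow the buggy version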

f311a 21 hours ago

I've used bisect a few times in my life. Most of the time, I already know which files or functions might have introduced a bug.

Looking at the history of specific files or functions usually gives a quick idea. In modern Git, you can search the history of a specific function.

    $ git log -L :func_name:path/to/a/file.c
You need to have a proper .gitattributes file, though.

  • nielsole 19 hours ago

    Alternatively if you do not have that set up, `git log -S` helps you find commits whose diff contain a specific string.
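
    For example (hypothetical string and path):

        git log -S 'func_name' --oneline -- path/to/
        # -G takes a regex instead of a literal string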

  • _1tan 21 hours ago

    Can you elaborate on the dependent .gitattributes file? Where can I find more information on the necessary content? Sounds super useful!

    • f311a 21 hours ago

      You need to specify the diff driver so that Git can correctly identify and parse function bodies:

          *.py diff=python
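
      The same trick covers the C file from the earlier example - the cpp diff driver ships with Git, as does python:

          *.c diff=cpp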

      • _1tan 19 hours ago

        Thanks!

  • adastra22 20 hours ago

    I use git bisect literally every day. We are clearly different people :)

    • hinkley 18 hours ago

      I don’t use it for myself often, but I use it fairly often when someone has to escalate a problem to me. And how you work when the shit hits the fan says a lot about you overall, IMO.

      • adastra22 17 hours ago

        Basically any time I'm like "huh, that's weird," even if it is not a bug, I bisect and see when that behavior was introduced. Because (1) it is trivial and takes no work (`git bisect run` is completely autonomous), and (2) it gets me to the commit that introduced the change, which has all the context that might tell me why it is acting that way.

        Nothing annoys me more than a codebase with broken commits that break git bisect.

        • hinkley 17 hours ago

          Ah, in that case: the JetBrains diff tool lets you annotate inside the diff window, and I can usually walk back to where a possible off-by-one error was first authored that way.

          It probably would be slightly faster to jump to bisect. But it’s not in my muscle memory.

          • adastra22 16 hours ago

            I'm not sure what you mean by "annotate inside the diff window"?

            If you mean see what commit added code, that's what git-blame is for.

            Bisect is for when you don't know which code, just the behavior.

            • hinkley 7 hours ago

              Git blame doesn’t show you why line 22 has a bug in it. It only shows you who touched it last. And that’s if nobody fucked up a merge.

              A single line of code can have half a dozen authors. You have to pick back through the history and keep running git blame until you determine who put the bug in that line, and why.

              If you show the side by side diff for a commit in JetBrains, you can click show annotations and it’ll show you the blame for the before. And then you can keep going back. For a large file that saves you having to go through all the other commits that were done in completely other parts of the file. Which can be a lot for large files.

              • adastra22 4 hours ago

                That sounds more complicated than git bisect. When I bisect, I have a short test script that confirms the bug. Usually I already have this, because it's part of the bug report/identification. I then run "git bisect run path/to/bug.sh". That's it -- it will output which commit caused the change. Those occasional times I need to confirm the presence of actual text, I use sh -c "git grep ..." as the test command.
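
                For reference, the whole loop is only a few commands - a sketch with a hypothetical known-good tag and test script:

                    git bisect start
                    git bisect bad                # HEAD exhibits the behavior
                    git bisect good v1.4.0        # hypothetical last-known-good release
                    git bisect run ./bug.sh       # exit 0 = good, nonzero = bad
                    git bisect reset              # return to where you started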

  • PaulDavisThe1st 18 hours ago

    I use this often, but it is sadly weak when used on C++ code that includes overloaded methods/functions:

      /* snip */
    
      void
      Object::do_the_thing (int)
      {
      }
    
      void
      Object::do_the_thing (float)
      {
      }
    
      /* snip */
    
    AFAICT, git log can never be told to review the history of the second version.

aag 11 hours ago

Make sure you know about exit code 125 for your test script. You can use it in those terrible cases where the test can't tell, one way or another, whether the failure you seek happened - for example, when there is an unrelated build problem.
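
A sketch of such a script, with hypothetical build and test commands:

    #!/bin/sh
    # exit code 125 tells `git bisect run` to skip commits it cannot judge
    make -j4 >/dev/null 2>&1 || exit 125   # unrelated build breakage: skip
    exec ./run-regression-test             # exit 0 = good, nonzero = bad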

I wrote a short post on this:

https://speechcode.com/blog/git-bisect

thombles 14 hours ago

One place bisect shines is when a flaky test snuck in due to some race condition but you can’t figure out what. If you have to run a test 100000 times to be convinced the bug isn’t present, this can be pretty slow. Bisecting makes it practical to narrow in on the faulty commit, and with the right script you can just leave it running in the background for an hour.
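
A sketch of such a script, assuming a hypothetical flaky test target and however many iterations give you confidence:

    #!/bin/sh
    # good only if every run passes; the first failure marks the commit bad
    for i in $(seq 1 1000); do
        ./run-flaky-test || exit 1
    done
    exit 0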

  • kragen 11 hours ago

    We really would benefit from a Bayesian binary search for this purpose, so you can get by with only running the test 1000 times in most cases.

paulbjensen 18 hours ago

I recently used git bisect to help find the root cause of a bug in a fun little jam of mine (a music player/recorder written in Svelte - https://lets-make-sweet-music.com).

My scenario with the project was:

- no unit/E2E tests

- no error occurring, either from Sentry tracking or in the developer tools console

- many git commits to check through, as GitHub's dependabot alerts had been busy in the meantime

I would say git bisect was a lifesaver - I managed to trace the error to my attempt to replace a file I had with the library I had extracted from it (http://github.com/anephenix/event-emitter).

It turns out that the file had implemented a feature that I hadn't ported to the library (to be able to attach multiple event names to call the same function).

I think the other thing that helps is to keep git commits small, so that when you do discover the commit that breaks the app, you can easily find the root cause among the small number of files/code that changed.

Where it becomes more complex is when the root cause of the error requires evaluating not just one component that can change (in my case a frontend SPA), but also other components like the backend API, as well as the data in the database.

eru 13 hours ago

> People rant about having to learn algorithmic questions for interviews. I get it — interview system is broken, but you ought to learn binary search at least.

Well, the example of git bisect tells you that you should know of the concept of binary search, but it's not a good argument for having to learn how to implement binary search.

Also just about any language worth using has binary search in the standard library (or as a third party library) these days. That's saner than writing your own, because getting all the corner cases right is tricky (and writing tests so they stay right, even when people make small changes to the code over time).

  • Arcuru 13 hours ago

    Unfortunately I can't find the reference now, but I remember reading that even though binary search was first described in the 1940s, the first bug-free implementation wasn't published until the 1960s.

    The most problematic line that most people seem to miss is the calculation of the midpoint index. Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision (the standard fix is `mid = low + (high - low) / 2`), but there are several other potential problems even in the simplest algorithm.

    • kragen 11 hours ago

      The overflow bug wasn't fixed until the 21st century; the comment you remember reading dates from before it was discovered.

      To be fair, in most computing environments, either indices don't overflow (Smalltalk, most Lisps) or arrays can never be big enough for the addition of two valid array indices to overflow, unless they are arrays of characters, which it would be sort of stupid to binary search. It only became a significant problem with LP64 and 64-bit Java.

      • eru 9 hours ago

        Agreed.

        Your comment is mostly true when you do binary search in something like an array, yes.

        But you can also do binary search over any monotonically increasing function.

    • eru 11 hours ago

      Agreed.

      > Using `mid = (low + high) / 2` has an overflow bug if you're not using infinite precision, but there are several other potential problems even in the simplest algorithm.

      Well, if you are doing binary search on e.g. items you actually hold in memory (or even disk) storage somewhere, like items in a sorted array (or git commits), then these days with 64 bit integers the overflow isn't a problem: there's just not enough storage to get anywhere close to overflow territory.

      A back of the envelope calculation estimates that we as humanity have produced enough memory and disk storage in total that we'd need around 75 bits to address each byte independently. But for a single calculation on a single computer 63 bits are probably enough for the foreseeable future. (I didn't go all the way to 64 bits, because you need a bit of headroom, so you don't run into the overflow issues.)

  • runeblaze 7 hours ago

    My personal mantra (that I myself cannot uphold 100%) is that every dev should, once in a while, do the exercise of implementing binary search from scratch in a language with arbitrary-precision integers (e.g., Python). It is the best exercise in invariant-based thinking, useful for software correctness at large.
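
    A sketch of the exercise, using the overflow-safe midpoint from the sibling thread (moot in Python, where integers are arbitrary-precision):

        def binary_search(xs, target):
            # invariant: if target is in the sorted list xs, its index is in [lo, hi)
            lo, hi = 0, len(xs)
            while lo < hi:
                mid = lo + (hi - lo) // 2  # overflow-safe form of (lo + hi) // 2
                if xs[mid] < target:
                    lo = mid + 1           # target can only be right of mid
                else:
                    hi = mid               # target is at mid or to its left
            # loop ends with lo == hi: the first index whose value is >= target
            return lo if lo < len(xs) and xs[lo] == target else None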

    • eru 4 hours ago

      Yes, it's a simple enough algorithm to be a good basic exercise---most people come up with binary search on their own spontaneously when looking a word up in a dictionary.

      Property based testing is really useful for finding corner cases in your binary search. See e.g. https://fsharpforfunandprofit.com/series/property-based-test... for one introduction.

dpflan a day ago

`git-bisect` is legit if you have to do the archaeological digging through history. Though there is the open question of how the git commit history is maintained: squash-and-merge vs. retaining all history. With squash-and-merge you're looking at the merged pull request, whereas with full history you can find the true code-level inflection point.

  • echelon 21 hours ago

    Can someone explain why anyone would want non-squashed PRs?

    For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this. Individual commits in a PR are testing and iteration. You don't want to read through that.

    Unless, of course, you're asking the engineer to squash on their end before making the PR. But what's the value in that ceremony?

    Each PR being squashed to 1 commit is nice and easy to reason about. If you truly care about making more semantic history, split the work into multiple PRs.

    For that matter, why merge? Rebase it on top. It's so much cleaner. It's atomic and hermetic.

    • rectang 21 hours ago

      Crafting a PR as an easily-consumed, logical sequence of commits is particularly useful in open source.

      1. It makes review much easier, which is both important because core maintainer effort is the most precious resource in open source, and because it increases the likelihood that your PR will be accepted.

      2. It makes it easier for people to use the history for analysis, which is especially important when you may not be able to speak directly to the original author.

      These reasons also apply in commercial environments of course, but to a lesser extent.

      For me, organizing my PRs this way is second nature and only nominal effort, because I'm extremely comfortable with Git, including the following idiom which serves as a more powerful form of `git commit --amend`:

          git add -p                               # stage only the hunks that belong in the earlier commit
          git commit --fixup COMMIT_ID             # record them as a fixup of that commit
          git stash                                # set aside the rest so the rebase sees a clean tree
          git rebase -i --autosquash COMMIT_ID~    # fold the fixup into place (git stash pop afterwards)
      
      An additional benefit is that this methodology doesn't work well for huge changesets, so it discourages the anti-pattern of long-lived topic branches. :)

      > For that matter, why merge? Rebase it on top.

      Yes, that works for me although it might not work for people who aren't going to the same lengths to craft a logical history. I have no interest in preserving my original WIP commits — my goal is to create something that is easy to review.

      BUT... the PR should ultimately be merged with a merge commit. Then when you have a bug you can run `git bisect` on merges only, which is good enough.
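
      Newer Git (2.29+) supports exactly that:

          git bisect start --first-parent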

      • Izkata 20 hours ago

        > 2. It makes it easier for people to use the history for analysis, which is especially important when you may not be able to speak directly to the original author.

        I've been on a maintenance team for ~5 years and this has saved me so many times in svn, where you can't squash, for weird edge cases caused by a change a decade or more ago. It's the reason I'm against blind squashes in git.

        My favorite was, around 2022, discovering something that everyone believed was released in 2015, but was temporarily reverted in 2016 while dealing with another bug, that the original team forgot to re-release. If the 2016 reversion had been squashed along with the other bug, I might never have learned it was intended to be temporary.

        I'm fine with manually squashing "typo" commits, but these individual ones are the kind where you can't know ahead of time if they'll be useful. It's better to keep them, and use "git log --first-parent" if you only want the overview of merges.

        • hinkley 18 hours ago

          Someone did this to code meant to cut our web crawler bandwidth, and nobody noticed for like two years after it got toggled back off. So stupid. We were able to shrink the cluster after enabling it again.

      • vjerancrnjak 21 hours ago

        I have an exactly opposite preference. Give me a big change to review. Going commit by commit or any imposed steps is not how I write code or how I understand code.

        If you did not approach it through literate programming, I just prefer all of the thousands of lines at once.

        • QuercusMax 21 hours ago

          Reviewing individual changes that may not even build properly is a waste of time; reviewing thousands of lines at once is also a bad idea.

          Each unit of code (PR, commit, CL, whatever you want to call it) you send for review should be able to stand on its own, or at the very least not break anything because it's not hooked into anything important yet.

        • rectang 20 hours ago

          You're in luck! For you, there is `git diff`.

          (And for me as well — both the individual commits and the PR-level summary are useful.)

          So, those of us who prefer commit-sized chunking don't have to do anything special to accommodate your preference.

          It doesn't go the other way, of course, if you present one big commit to me. But so long as your code is well-commented (heh) and the PR isn't too huge (heh heh) and you don't intersperse file renamings (heh heh heh) or code formatting changes (heh heh heh heh) which make it extremely difficult to see what you changed... no problem!

          • Izkata 20 hours ago

            > or code formatting changes (heh heh heh heh) which make it extremely difficult to see what you changed...

            One of the "individual commits saved me" cases was when one of these introduced a bug. They tried to cut the number of lines in half by moving conditions around, not intending to make any functional changes, but didn't account for a rare edge case. It was in a company-shared library and we didn't find it until upgrading it on one of our products a year or two after the change.

            • rectang 20 hours ago

              One of the reasons I don't like a policy of "every commit must pass CI" is that I prefer to perform verbatim file moves in a dedicated commit (which inevitably breaks CI) with no logical changes at all, then modify code as necessary to accommodate the move in a separate commit. It makes review and debugging much easier.

              • hxtk 12 hours ago

                This is my main use for branches or pull requests. For most of my work, I prefer to merge a single well-crafted commit, and make multiple pull requests if I can break it up. However, every merge request to the trunk has to pass CI, so I'll do things like group a "red/green/refactor" triplet into a single PR.

                The first one definitely won't pass CI, the second one might pass CI depending on the changes and whether the repository is configured to consider certain code quality issues CI failures (e.g., in my "green" commits, I have a bias for duplicating code instead of making a new abstraction if I need to do the same thing in a subtly different way), and then the third one definitely passes because it addresses both the test cases in the red commit and any code quality issues in the green commit (such as combining the duplicated code together into a new abstraction that suits both use cases).

              • dpflan 18 hours ago

                You are a model citizen with those file moves. Totally agree with how disruptive that can be for legibility and comprehension of changes.

      • hinkley 18 hours ago

        Also for a year from now, when I’m wondering wtf I was thinking when I put that bug into the code. Was I thinking of a different corner case? Or not at all?

    • borntyping 21 hours ago

      > Can someone explain why anyone would want non-squashed PRs?

      > For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this.

      I think cause and effect are the other way around here. You write and keep work-in-progress commits, without caring what's in them, when the history will be discarded and the team will only look at the pull request as a single unit; you write tidy, distinct commits when the history will be kept and individual commits will be reviewed.

      I've done both, and getting everyone to do commits properly is much nicer, though GitHub and similar tools don't really support or encourage it. If you work with repository history a lot (for example, you have important repositories that aren't frequently committed to, or maintain many different versions of the project) it's invaluable. Most projects don't really care about the history—only the latest changes—and work with pull-requests, which is why they tend to use the squashed pull request approach.

      • baq 20 hours ago

        It’s mostly because pull requests are what is being tested in CI, not individual commits. Might as well squash as nobody wants to deal with untested in-between crap.

        If you mean stacked PRs, yeah GitHub absolutely sucks. Gerrit a decade ago was a better experience.

        • adastra22 20 hours ago

          That's a problem with your CI. You can configure it to test all commits which make up the PR.

          • baq 20 hours ago

            My CI takes 20+ mins and costs meaningful $ per run. Not happening.

            • adastra22 20 hours ago

              > That's a problem with your CI.

              • baq 19 hours ago

                Yes, but not in the way you think.

    • adastra22 20 hours ago

      > For the 5% of engineers that diligently split each PR into nice semantic changes, I suppose that's nice. But the vast majority of engineers don't do this.

      Here's a simple reason: at my company, if you don't do this, you get fired.

      This is basic engineering hygiene.

      • devnullbrain 11 hours ago

        Yep.

        Those 5% shouldn't be approving PRs without it.

        Those 95% shouldn't be code owners.

        Nice things can be had, you just have to work for them.

    • bentcorner 21 hours ago

      > If you truly care about making more semantic history, split the work into multiple PRs.

      This exactly - if your commit history for a PR is interesting enough to split apart, then the original PR was too large and should have been split up to begin with.

      This is also a team culture thing - people won't make "clean" commits into a PR if they know people aren't going to be bisecting into them and trying to build. OTOH, having people spend time prepping good commits is potentially time wasted if nobody ever looks at the PR commit history aside from the PR reviewers.

      • hamburglar 20 hours ago

        If I have a feature branch, and as part of that feature change, I did a simple refactor of something, I definitely want that committed as two separate commits in the PR, because you can look at the changes in isolation and it makes them a LOT easier to follow. And if you prefer, you can look at the diff of the entire PR in one single view. I don’t see the downside.

        And please do not come back with “you shouldn’t do a refactor as part of a feature change.” We don’t need to add bureaucracy to avoid a problem caused by failure to understand the power of good version control.

        • normie3000 19 hours ago

          This bureaucracy has very low overhead. Squash-merge the feature and then the refactor, or the refactor then the feature. Also makes reviewing each quicker.

          • PaulDavisThe1st 18 hours ago

            the feature isn't ready for merge when the refactor happens ....

      • imron 7 hours ago

        > having people spend time prepping good commits is potentially time wasted if nobody ever looks at the PR commit history

        Good habits make good engineers.

        You never know which of your commits will cause a future problem so structuring all of them well means that when you need to reach for a tool like git bisect then your history makes it easy to find the cause of the problem.

        It takes next to no extra effort.

    • embedding-shape 21 hours ago

      > For that matter, why merge? Rebase it on top. It's so much cleaner. It's atomic and hermetic.

      With an explicit merge, you keep two histories, yet mostly care about the "main" one. With rebase, you're effectively forgetting there ever was a separate history, choosing to rewrite it when "effectively merging" (rebasing).

      There's value in both; it mostly seems to come down to human preference. As long as the people who will be working with it agree, I personally don't care which one, granted it's consistently applied.

      • hinkley 18 hours ago

        Rebasing is closer in practice to trunk-based development and CI, while merging and squashing move farther away.

      • eastbound 20 hours ago

        Squashed PRs show up as a single commit, while rebased PRs show up as multiple. The squash has the defect that you can’t rebase PRs that were stacked on top.

        But a MERGE… is a single commit on Master, while keeping the detailed history!

        - We just don’t use the merge because they are ugly,

        - And they’re only ugly because the visualizers make them ugly.

        It’s a tooling problem. The merge is the correct implementation. (and yet I use the rebase-fast-forward).

        • thfuran 11 hours ago

          You can squash after rebasing or merge a PR without squashing.

    • pizza234 21 hours ago

      > Each PR being squashed to 1 commit is nice and easy to reason about. If you truly care about making more semantic history, split the work into multiple PRs.

      I don't argue with your point (even if I am obsessive about commit separation), but one needs to keep in mind that the reverse also applies: on the other end of the spectrum, there are devs who create kitchen-sink PRs which include, for example, refactorings, which make squashed PRs harder to reason about.

    • koolba 21 hours ago

      > Can someone explain why anyone would want non-squashed PRs?

      So you can differentiate the plumbing from the porcelain.

      If all the helpers, function expansions, typo corrections, and general renamings are isolated, what remains is the pure additional functional changes on its own. It makes reviewing changes much easier.

    • SatvikBeri 21 hours ago

      Making human-readable commit history is not that hard with a little practice. It's one of the big benefits of tools like magit or jj. My team started doing it a few weeks ago, and it's made reviewing PRs substantially easier.

      • criemen 21 hours ago

        +1: EdaMagit has been a game changer for me wrt reordering commits, fusing them together, and writing at least 1-2 sentences of proper commit messages after the fact.

    • mikeocool 21 hours ago

      If you ever worked with stacked PRs, and the top one gets squashed and merged it often becomes a nightmare to rebase the rest of the PRs to bring them up to date.

      • rectang 21 hours ago

        I wish this was easier. I have a workflow that I use to create stacked PRs which involves changing the target branch of the next PR to `main` after merging its predecessor, but it is too fragile to institute as a policy.

        However, this is also just a more specific version of the general problem that long-lived, elaborate topic branches are difficult to work with.

      • baq 20 hours ago

        jj makes this mostly trivial - well worth checking out if you’d like to work this way, but GitHub gets in the way.

    • mkleczek 20 hours ago

          git merge --no-ff
          git log --first-parent
          git bisect start --first-parent

      The above gives you clean PR history in the main branch while retaining detailed work history in (merged) feature branches.

      I really don't understand why would I squash having git merge --no-ff at my disposal...

    • kragen 11 hours ago

      I would accept neither a pull request where the individual commits were testing and iteration, nor a pull request with hundreds of lines of changes in a single commit. (New code, maybe.) It's not about ceremony; it's about knowing what changed and why.

    • anonymars 20 hours ago

      Hoo boy is it fun to figure out where things went wrong when the real commit history was thrown away to make it look prettier. Especially a mistake from a merge conflict.

      • hinkley 18 hours ago

        I had to stop two devs from fisticuffs. One of them used merge and manufactured a bug in the other’s code and was making a stink about it in the middle of the cubicles.

          Merges lie in worse ways than rebase does. Hands down. With rebase I break my own shit. With merge I can break yours. Since your code is already merged into trunk, it has fewer eyes on it now, and it’s on me to make sure my code works with yours and not vice versa.

        • anonymars 16 hours ago

          I don't follow. In either case two branches are combined into one:

          With a merge commit anyone can see each original path, and the merge result (with its author), and even try the three-way merge again. With a rebase, the original context is lost (the original commits were replaced by the rebased versions).

          A rebase is a lossy projection of the same operation. A rebase lies about its history, a merge does not.

          • hinkley 7 hours ago

            Merge conflicts can be resolved with changes to code that doesn’t belong to the person who resolved the merge, and those changes are not attributed to that person. Git blame shows someone else as the person who introduced the bug. When you do a rebase you’re only modifying your own commits, or those of someone sharing your branch.

    • nixpulvis 21 hours ago

      I'd take fully squashed PRs over endless "fix thing" and "updated wip"... but if you work in a way that leaves a couple meaningful commits, that's even better. Sometimes I end up in this state naturally by having a feature branch, which I work on in sub branches, each being squashed into a single final commit. Or when the bulk of the logic is on one commit, but then a test case or two are added later, or a configuration needs changing.

      I like merge commits because they preserve the process of the review.

      • hinkley 18 hours ago

        What you have there is people hiding their neuroses and lack of commit hygiene, and that’s avoiding the problem, not fixing it.

      • echelon 21 hours ago

        > I like merge commits because they preserve the process of the review.

        I appreciate that, but I still can't square it with my world view.

        GitHub or whatever tool you use preserves all the comments and feedback and change history if you ever need to go back and reference it. Such instances are limited, and in my experience it's mainly politics and not technical when this happens. The team's PR discussion itself isn't captured in git, so it's lossy to expect this type of artifact to live in git anyway. It's also much less searchable and less first class than just going back to GitHub to access this.

        Ultimately, these software development artifacts aren't relevant to the production state of the software you're deploying. It feels muddled to put an incomplete version of it into your tree when the better source of truth lives outside.

        • nixpulvis 21 hours ago

          Usually the merge commit is what has the link to the PR/MR. So it's the best way to actually find it.

          > Ultimately, these software development artifacts aren't relevant to the production state of the software you're deploying. It feels muddled to put an incomplete version of it into your tree when the better source of truth lives outside.

          You could make the same claim about the entire history. Git is a development tool, production just needs a working cut of the source.

    • CogitoCogito 21 hours ago

      I have always been very careful with git histories and often rewrite/squash them before final review/merge. Often my rewritten histories have nothing to do with the original history and commits are logically/intuitively separated and individually testable.

      That said, very few people seem to be like me. Most people have no concept of what a clear commit history is. I think it's kind of similar to how most people are terrible written communicators. Few people have any clue how to express themselves clearly. The easiest way to deal with people like this is to just have them squash their PRs. This way you can at least enforce some sanity at review and then the final commit should enforce some standards.

      I agree on rebasing instead of straight merging, but even that's too complicated for most people.

    • vjerancrnjak 21 hours ago

      You can just inspect merge commits, you can also just bisect over merge commits.

      Splitting work into multiple PRs is unnecessary ritual.

      I have never reasoned about git history or paid attention to most commit messages or found any of it useful compared to the contents of the change.

      When I used git bisect with success it was on unknown projects. Known projects are easy to debug.

    • nothrabannosir 21 hours ago

      Because github doesn't support stacked diffs, basically.

      T_T

    • leptons 21 hours ago

      I manage a team of developers, and I don't think any of us squash commits, and I really don't care. It's been working fine for 8 years at this job.

      We keep our git use extremely simple, we don't spend much time even thinking about git. The most we do with git is commit, push, and merge (and stash is useful too). Never need to rebase or get any deeper into git. Doing anything complicated with git is wasting development time. Squashing commits isn't useful to us at all. We have too much forward velocity to worry that some commit isn't squashed. If a bug does come up, we move forward and fix it, the git history doesn't really figure into our process much, if at all.

      • ziml77 20 hours ago

        Squash is just something you do at the point of merging. It's a single option during the merge and doesn't have you doing any more work than a merge that you don't squash. I don't know about github, but I know in gitlab it's a simple checkbox in the merge request (and it can be set to be checked by default by the admin if they want).

        • leptons 19 hours ago

          That's great for you, but squashing commits doesn't do anything for our team. It hides somewhat useful information, which never seemed like a good thing to me.

          Other teams at my company are obsessive about squashing, and I am glad I don't work on those teams. They are frustrating to work with, they are notoriously slow to ship anything, and their product still breaks even with all their processes and hurdles to getting anything shipped. Teams that demand squashing simply don't impress me at all.

    • stefan_ 21 hours ago

      There is no free lunch: the people that can't be bothered to make atomic semantic commits are the same people that will ruin your bisect with a commit that doesn't build or has some other unrelated run failure. People that don't care can't be fixed by tools.

      The advice around PRs rings hollow; after all, they were invented by the very people that don't care - which is why they show all changes by default and hide the commits away, commit messages buried after 5 clicks. And because this profession is now filled with people that don't care, add the whole JIRA ticket and fix-version rigmarole on top - all kinds of things that show up in some PM's report but not in my console fixing an issue that requires history.

  • formerly_proven a day ago

    > with full history you can find the true code-level inflection point.

    "typo fix"

rf15 a day ago

Honestly, after 20 years in the field: optimising the workflow for when you can already reliably reproduce the bug seems misapplied because that's the part that already takes the least amount of time and effort for most projects.

  • nixpulvis 21 hours ago

    Just because you can reproduce it doesn't mean you know what is causing it. Running a bisect to find which commit introduced it will reduce the area you need to search for the cause.

    • SoftTalker 21 hours ago

      I can think of only a couple of cases over 20+ years where I had to bisect the commit history to find a bug. By far the normal case is that I can isolate it to a function or a query or a class pretty quickly. But most of my experience is with projects where I know the code quite well.

      • cloud8421 21 hours ago

        I think your last sentence is the key point - the times I've used bisect have been related to code I didn't really know, and where the knowledgeable person was no longer with the company or was on holiday.

        • SoftTalker 16 hours ago

          Even so, normally anything like a crash or fatal error is going to give you a log message somewhere with a stack dump that will indicate generally where the error happened if not the exact line of code.

          For more subtle bugs, where there's no hard error but something isn't doing the right thing, yes bisect might be more helpful especially if there is a known old version where the thing works, and somewhere between that and the current version it was broken.

        • nixpulvis 21 hours ago

          Exactly. And even if I do know the source pretty well, that doesn't mean I'm caught up on all the new changes coming in. It's often a lot faster to bisect than to read the log over the month or two since I touched something.

        • hinkley 18 hours ago

          Or they were barking up the wrong tree and didn’t know it yet, and the rest of us were doing parallel discovery.

          Tick tock. You need competence in depth when you have SLIs.

      • wyldfire 11 hours ago

        > By far the normal case is that I can isolate it to a function or a query or a class pretty quickly

        In general, this takes human-interactive time. Maybe not much, but generally more interactive time than is required to write the bisect test script and invoke `git bisect run ...`

        The fact that it's noninteractive means that you can do other work in the meantime. Once it's done you might well have more information than you'd have if you had used the same time manually reducing it interactively by trying to reduce the scope of the bug.

      • hinkley 18 hours ago

        I’ve needed CPR zero times and bisect around a dozen. You should know both, particularly for emergencies.

  • hinkley 18 hours ago

    I would add to nixpulvis’s comments that git history may also help you find a repro case, especially if you’ve only found a half-assed repro case that is overly broad.

    Before you find even that, your fire drill strategy is very, very important. Is there enough detail in the incident channel and our CD system for coworkers to put their dev sandbox in the same state as production? Is there enough of a clue of what is happening for them to run speculative tests in parallel? Is the data architecture clean enough that your experiments don’t change the outcome of mine? Onboarding docs and deployment process docs, if they are tight, reduce the Amdahl’s Law effect as it applies to figuring out what the bug is and where it is. Which is, in this context, also Brooks’s Law.

  • zeroonetwothree 14 hours ago

    Eh, not always. If you work in a big codebase with 1000s of devs, it can be quite tricky to find the cause of some bug when it’s in some random library someone changed for a different reason.

tarwich 14 hours ago

When I learned about git bisect I thought it was a little uppity - something I would never use in a practical scenario, working on large code bases. However, sometimes a bug pops up and we don't know when it started. We use git bisect not to place blame on a person, but to figure out the last point where the bug wasn't there, so we know what code introduced it. Yes, clean code helps. Sometimes git bisect is really nice to have.

utopiah 20 hours ago

I agree with the post.

I also think that, typically, if you have to resort to bisect you are probably in the wrong place. You should have found the bug earlier, so if you do not even know where the bug came from:

- your test coverage isn't sufficient

- your tests are probably not actually testing what you believe they do

- your architecture is complex, too complex for you

To be clear though I do include myself in this abstract "you".

  • imiric 18 hours ago

    I mean, sure—in a perfect world bugs would be caught by tests before they're even deployed to production.

    But few of us have the privilege of working on such codebases, and with people who have that kind of discipline and quality standards.

    In reality, most codebases have statement coverage that rarely exceeds 50%, if coverage is tracked at all; tests are brittle, flaky, difficult to maintain, and likely have bugs themselves; and architecture is an afterthought for a system that grew organically under deadline pressure, where refactors are seen as a waste of time.

    So given that, bisect can be very useful. Yet in practice it likely won't, since usually the same teams that would benefit from it, don't have the discipline to maintain a clean history with atomic commits, which is crucial for bisect to work. If the result is a 2000-line commit, you still have to dig through the code to find the root cause.

gegtik 13 hours ago

git bisect gets interesting when API signatures change over a history - when this does happen, I find myself writing version-checking facades to invoke the "same" code in whatever way is legal
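
A sketch of such a facade as a bisect test script, with hypothetical marker and commands:

    #!/bin/sh
    # invoke the "same" behavior in whichever form this commit's API allows
    if grep -q 'process_v2' src/api.h; then
        ./run-check --use-v2
    else
        ./run-check
    fi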

kfarr a day ago

Wow and here I was doing this manually all these years.

inamberclad 20 hours ago

'git bisect run' is probably one of the most important software tools ever.

anthomtb 21 hours ago

Binary searching your commit history and using version control software to automate the process just seems so...obvious?

I get that the author learned a new-to-him technique and is excited to share it with the world. But to this dev, with a rapidly greying beard, the article has the vibe of "Hey bro! You're not gonna believe this. But I just learned the Pope is Catholic."

  • Espressosaurus 21 hours ago

    Seriously.

    Binary search is one of the first things you learn in algorithms, and in a well-managed branch the commit tree is already a sorted straight line, so it's just obvious as hell, whether or not you use your VCS to run the bisect or you do it by hand yourself.

    "Hey guys, check it out! Water is wet!"

lloydatkinson a day ago

I’ve used bisect a couple of times but really it’s a workaround for having a poor process. Automatic unit tests, CI/CD, should have caught it first.

It’s still very satisfying to watch run though, especially if you write a script that it can run automatically (based on the existing code) to determine if it’s a good or bad commit.

  • nixpulvis a day ago

    It's not a workaround. In this case it seems like it, but in general you cannot always rely on your existing tests covering everything. The test you run in the bisect is often updated to catch something new which is reported. The process is often:

    1. Start with working code

    2. Introduce bug

    3. Identify bug

    4. Write a regression test

    5. Bisect with new test

    In many cases you can skip the bisect because the description of the bug makes it clear where the issue is, but not always.

    • Izkata a day ago

      Important addendum to 4 that can throw someone their first time - Put the new test in a new file and don't commit it to the repo yet. You don't want it to disappear or conflict with old versions of the test file when bisect checks old commits.
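
      Something like this, with hypothetical endpoints and an untracked test file:

          git bisect start HEAD v2.0   # bad, then known-good
          git bisect run python -m pytest test_regression_new.py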

      • nixpulvis 21 hours ago

        I've always liked having regression tests somewhat isolated anyway, so this works well with that.

      • lloydatkinson 21 hours ago

        This is one annoying footgun. It would be great if git could ignore some special .bisect directory during the entire process. This way the script doesn’t need a load of ../..

        • trenchpilgrim 21 hours ago

          Create a .bisect directory and stick a gitignore inside it that ignores the folder. Or, add .bisect/ to a global gitignore file.
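
          For the in-repo option, the directory can ignore itself:

              mkdir .bisect
              echo '*' > .bisect/.gitignore   # ignores everything in .bisect/, including this file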

        • 1718627440 16 hours ago

          You can check out the bisect script's commit in another directory. Or use `git bisect run $(git show ...)`.
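
          For the separate directory, git worktree keeps it cheap - a sketch with a hypothetical branch holding the scripts:

              git worktree add ../bisect-scripts scripts-branch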

  • masklinn 21 hours ago

    > Automatic unit tests, CI/CD, should have caught it first.

    Tests can't prove the absence of bugs, only their presence. Bugs or regressions will slip through.

    Bisect is for when that happens and the cause is not obvious.

  • slang800 a day ago

    Sometimes you notice a problem that your unit tests didn't cover and want to figure out where it was introduced. That's where git bisect shines.

    You can go back and efficiently run a new test across old commits.

  • tmoertel 21 hours ago

    I don't think it's that simple. For example: Show me the unit tests and CI/CD scripts you would write to prove your code is free from security holes.

    Yet, once you've identified a hole, you can write a script to test for it, run `git bisect` to identify what commit introduced the hole, and then triage the possible fallout.

  • lucasoshiro 20 hours ago

    Ideally, we would write bug-free code, but we can't. There are some tools to avoid bugs; tests are one of them. Those tools reduce bugs, but they don't eliminate them. Bisect doesn't replace tests; it only helps find where the bugs are happening. After finding and fixing a bug, it's a good idea to write a test covering it.

    To sum up: bisect and tests are not in opposite sides, they complement each other

  • trenchpilgrim a day ago

    "We write unit tests so our code doesn't have bugs."

    "What if the unit tests have bugs?"

monitron 21 hours ago

> the OG tool `git`

This phrase immediately turned the rest of my hair gray. I'm old enough to still think of Git as the "new" version control system, having survived CVS and Subversion before it.

  • c0brac0bra 21 hours ago

    But did you survive rcs?

    • kragen 11 hours ago

      Worse: PVCS!

    • smcameron 21 hours ago

      and sccs

      • shermantanktop 20 hours ago

        :raises hand

        At my mid 90s Unix shop, everyone had to use someone’s script which in turn called sccs. I don’t recall what it did, but I remember being annoyed that someone’s attempt to save keystrokes meant I had to debug alpha-quality script code before the sccs man page was relevant.

        Adding -x to the shebang line was the only way to figure out what was really going on.

  • rco8786 20 hours ago

    I still remember dragging my team kicking and screaming away from Subversion. Which, to be fair, was fine. I think GitHub’s rise was really what won it for git vs subversion. The others though, good riddance.

huflungdung a day ago

I hardly think binary search is an unknown algorithm even by beginner standards for someone from a completely different field

  • trenchpilgrim 21 hours ago

    https://xkcd.com/2501

    I know a lot of professional, highly paid SWEs and DevOps people who never went to college or had any formal CS or math education beyond high school math. I have a friend who figured out complexity analysis by himself on the job trying to fix up some shitty legacy code. Guy never got past Algebra in school.

    (If you're about to ask how they can get jobs while new grads can't - by being able to work on really fucking terrible legacy code and live in flyover states away from big cities.)

rr808 21 hours ago

Surely everyone has a CI pipeline that wont allow merges with failing tests?

  • jmount 21 hours ago

    This is for the case where you introduce the test after the failure.

  • ervine 21 hours ago

    More than one assumption in that sentence, ha!

    • trenchpilgrim 21 hours ago

      Including "code is delivered in a way that involves merges"

      • ervine 20 hours ago

        This feels like a "is a hotdog a sandwich?" situation.

        "Is sftp-ing to prod a merge?"

        • trenchpilgrim 20 hours ago

          My team follows good practice but I deal with a vendor who emails us a ZIP file :scream:

          • ervine 19 hours ago

            Honestly it's kind of refreshing to just push files to a server.

            • trenchpilgrim 18 hours ago

              I've been telling people for years, if the process to deploy to an environment is more complicated than one click, it's too complicated!

  • thealistra 20 hours ago

    But most CIs allow flaky tests :)