Hacker Times

I don't understand the argument here at all. Git flow seems pretty orthogonal to the concept of branch lifespan. We did the "merge early, merge often" approach with SVN 15 years ago and it was fine. We do git flow now and it's fine. It meshes perfectly with agile development, where you work on the smallest feature set that adds incremental value. That means your branch only ever lives long enough to do one tiny thing, and it gets merged as soon as it's working and no sooner. I've been following this approach for years with dozens of teams and it's very successful. And I've never run a rebase on purpose in my entire life.


What sort of repository and deployment structure are you using right now in which it works fine? I think that will influence the outcome a lot.

I was just working with a client that used something like git-flow, where develop was deployed to staging in order to test, master was always production, and the codebase was largely a monorepo. There were also a couple of other legacy repos, also using git-flow. It... "works", but it's also a needless source of pain. You need to PR to develop, then PR develop into master, and I'd inevitably get bitten with merge conflicts in the process. All that for what gain? I saw no upside to it. Maybe if you have steady release cycles, but git-flow never seemed to fit the whole disposable-branches, master-is-good-to-deploy-dozens-of-times-per-day sort of workflow I've gotten used to.


Continuous deployment triggered by a merge into develop is basically a degenerate case of gitflow where there is an automated release associated with every merge into develop.

So whether you are doing full gitflow or this degenerate case of it, you still shouldn't be getting merge conflicts when releasing develop (merging develop into master). This should basically be a fast-forward, with perhaps a merge commit for record keeping. The only way to get conflicts here is if some other process is changing master independent of the "releases" from develop.
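In commands, that release step is roughly the following (a sketch; the throwaway repo, file contents, and tag name exist only to make it self-contained):

```shell
set -e
# Throwaway repo just to make the sketch self-contained;
# branch names follow gitflow, everything else is made up
d=$(mktemp -d) && cd "$d"
git init -q -b master
git config user.email you@example.com
git config user.name you
echo v1 > app && git add app && git commit -qm "initial"

# develop accumulates merged feature branches
git checkout -qb develop
echo v2 > app && git commit -qam "merged feature"

# The release: with no out-of-band commits on master this is
# a pure fast-forward; --ff-only fails loudly if it isn't
git checkout -q master
git merge -q --ff-only develop
git tag -a v1.1.0 -m "release"   # tag for record keeping
```

If something other than releases has been touching master, the `--ff-only` merge refuses to run, which is exactly the signal you want.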

In gitflow those other changes are managed through the hotfix branching rules, which ensures that you resolve the conflict when you close the hotfix and back merge it into develop.
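Sketched in commands (hypothetical branch and tag names; the throwaway repo is only there for self-containment), the hotfix path looks something like:

```shell
set -e
d=$(mktemp -d) && cd "$d"   # throwaway repo for the sketch
git init -q -b master
git config user.email you@example.com
git config user.name you
echo v1 > app && git add app && git commit -qm "initial"
git branch -q develop

# Hotfix: branch from master, fix, merge back to master
git checkout -qb hotfix/1.0.1 master
echo fix >> app && git commit -qam "urgent fix"
git checkout -q master
git merge -q --no-ff hotfix/1.0.1 -m "hotfix 1.0.1"
git tag -a v1.0.1 -m "hotfix"

# ...then back-merge into develop, so any conflict with
# in-flight work is resolved once, here, not at release time
git checkout -q develop
git merge -q --no-ff hotfix/1.0.1 -m "back-merge hotfix"
git branch -qd hotfix/1.0.1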


Just pull develop into your feature branch daily. Merging develop to master is a one-lane highway, so merge conflicts only show up there if you did a production patch.
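As a concrete sketch of that daily sync (branch names, file contents, and the throwaway repo are illustrative):

```shell
set -e
d=$(mktemp -d) && cd "$d"   # throwaway repo for the sketch
git init -q -b develop
git config user.email you@example.com
git config user.name you
echo base > app && git add app && git commit -qm "base"

# Start a feature branch, do some work on it
git checkout -qb feature/widget
echo widget > widget && git add widget && git commit -qm "wip widget"

# Meanwhile develop moves on without you
git checkout -q develop
echo more >> app && git commit -qam "someone else's feature"

# The daily pull: fold develop into the feature branch so the
# eventual merge back to develop stays small and boring
git checkout -q feature/widget
git merge -q develop -m "sync with develop"
```

Done daily, each sync only ever carries one day's worth of upstream drift, which is where the "merge early, merge often" payoff comes from.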


> I've never run a rebase on purpose in my entire life

While it's great that you've found a branch strategy that works for you, I would hesitate to take recommendations from anyone who doesn't rebase their patches. Then it's just an elaborate way of doing CVS-style development.

Which there is nothing wrong with of course. That works great for many people. It's just not what git does best.


Git was built for giant open source projects where you have dozens of contributors making unplanned code changes to an organically growing code base. That's not a use case for any well-managed enterprise. Pull requests come a few at a time, represent a few days' work each, and rarely step on each other. Every conflict can be worked out face to face very quickly. If I have to manage a large group of devs, I break them up by vertical so they're not on top of each other.


This is the thing I've never really understood about the common objections to merging and resolving conflicts in larger projects. Sometimes people talk as if that happens all the time and it's not scalable if you have hundreds or thousands of developers all working on some huge code base. But how do you even get hundreds or thousands of developers working on the same huge code base so they keep getting in each other's way? Where is the modularity and coordination?

It always sounds like this happens at a hellish organisation where some Agile consultant once came in and said not to bother with boring stuff like management or software architecture. These must be strange places to work, because reportedly they also need to be prepared for requirements to change significantly every 27 seconds, and not being able to deploy changes in minutes is an existential threat even though the new feature has barely finished running the automated test suite and doesn't yet have any kind of user documentation, knowledge among the sales and support teams, etc.

Personally I do find rebasing and squashing to be very useful in workflows that involve Git, but there are plenty of effective ways to run a project and use a source control tool that don't rely on a 100% linear history with GitHub-style pull requests for change management.


Some people talk about merge conflicts as something inherently bad, but that's a flawed perspective. Merge conflicts are a great feature of a version control system. Given that multiple persons changed the same line, conflicts are the easy way out.

Conflicts have only given me a hard time when someone did something out of the ordinary. Even very large and loosely structured open source projects such as the Linux kernel see people naturally gravitate to different parts of the code. It's not often that a filesystem developer suddenly changes the networking stack, for example.


Exactly this. Whatever you call your branching methodology, if your branches last too long that's when the fun stops.

This article is full of bluster about how bad gitflow is, but very little actual reasoning why.

I don't like the ceremony all that much, but it's working really well at my current large corporate gig across thousands of repos and engineers.


"Whatever you call your branching methodology, if your branches last too long that's when the fun stops."

Doesn't git-flow explicitly keep a long-lived branch around by design? The whole concept of having "develop" run in parallel to master introduces a long-lived branch. That's typically the branch I'd always have the most trouble with.


That's assuming that there is anything going on on master at all. A gitflow master doesn't really count as a branch if it's just a permanent alias for the latest mainline release tag.


Often when I encounter (startup) teams using this git-flow inspired strategy there definitely are things going on with master that shouldn't be. For example:

Feature A is in staging ('develop') but can't get into prod ('master') yet. The opposite is true of Feature B: sales or management change the timeline because your largest customer needs it now, yet it conflicts with something in A, which another team still needs, so it can't go through staging first. So that feature branch goes straight into master without hitting develop. The whole deployment infrastructure was designed on the (false) assumption that develop is always deployable to staging and master to prod, rather than being able to quickly deploy and test feature branches at whim.

So now master and develop have diverged a bit. We could still get back to a "clean" git-flow at this point pretty easily, but if you're not careful this keeps compounding. I've seen it compound for months, and I've personally wasted days cleaning up conflicts that set in because teams I was consulting for weren't careful.

It's easy to have an inexperienced startup team make these git-flow based decisions early on, get stuck with them for some time, hit rapid growth, and then get themselves into a messy place.

My point is sure, use it if you have pretty steady release and deploy schedules. But if you're deploying dozens of times per day you need to design your infra and CD strategy to handle it. Typically I've seen teams with fragile deploy strategies they've adopted due to assumptions they made from starting with git-flow.

So why even do it? There are simpler strategies like GitHub Flow that force a team to design a deployment strategy that accepts that feature branches can be tested in a real environment, and master can always be delivered (or rolled back).


I have some long-lived branches lying around for work I only occasionally have time and energy to do. E.g., I've an "ALWAYS DEFERRED" feature patch for PostgreSQL I need to eventually finish. Every time I come back to it I've fallen thousands of commits behind, and neither a merge nor a rebase can save me, but a rebase bisection script I have does save me by identifying the one commit upstream, in order, that causes conflicts that I have to resolve manually, then resumes the rebase bisection.

Specifically, the algorithm is something like this (not tested; I've a script I should publish):

  bisect_rebase() {
    local ultimate_target=$1
    local target=$2

    b=$(git merge-base HEAD "$target")
    while [[ $b != $(git rev-parse "$target") ]]; do
      n2u=$(git log --oneline "$b..$ultimate_target" | wc -l)
      n=$(git log --oneline "$b..$target" | wc -l)

      # Try rebasing directly onto the target
      git rebase --onto "$target" "$b" && return 0

      # Conflicted with only one commit left before the
      # ultimate target: nothing left to bisect, stop
      if ((n2u == 1)); then
        git rebase --abort
        return 1
      fi
      if ((n == 1)); then
        # This one commit is the culprit; resolve by hand
        echo "Fix conflicts and exit 0 to continue or 1 to stop"
        if $SHELL -i; then
          git rebase --continue
          return 0
        else
          git rebase --abort
          return 1
        fi
      fi
      git rebase --abort
      # Recurse with the midpoint of $b..$target as the new
      # target: the oldest of the newest n/2 commits
      bisect_rebase "$ultimate_target" \
        "$(git log -n $((n/2)) --pretty=%h "$b..$target" | tail -n 1)" || return 1
      b=$(git merge-base HEAD "$target")
    done
  }



That is the scariest script I have ever seen in my life.

It makes me believe there is a real existential problem with git as a product.


I wrote it on the spot. The actual script is here: https://gist.github.com/nicowilliams/ea2fa2b445c2db50d2ee650...


The nice thing is that git is serverless, so you can clone a throwaway repo and run this sort of risky thing safely. It makes what's impossible in SVN possible, though maybe not easy.
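For example, a local clone is cheap (objects get hardlinked), so the risky history surgery can happen in a copy while the original stays untouched (paths and repo contents here are fabricated just to make the sketch runnable):

```shell
set -e
# Stand-in for the repo you want to protect; in real life
# $REPO would be your existing working copy
REPO=$(mktemp -d)
git init -q -b main "$REPO"
git -C "$REPO" config user.email you@example.com
git -C "$REPO" config user.name you
echo hello > "$REPO/f"
git -C "$REPO" add f && git -C "$REPO" commit -qm "init"

# Clone locally and do the dangerous rewrite in the copy
SCRATCH=$(mktemp -d)
git clone -q "$REPO" "$SCRATCH"
git -C "$SCRATCH" commit -q --amend -m "rewritten"

# Happy? Fetch the result back. Unhappy? Just delete it;
# the original repo never changed
rm -rf "$SCRATCH"
```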


... unless someone scripts pushing and pulling.

... unless changes were not already pushed and therefore 'saved'.

Git is inherently dangerous and overly complex - such a script shouldn't ever have to exist for so many reasons.


Oh get off your high horse. Git started as a collection of small programs and shell scripts, and as time went by more of those scripts became programs.


How it came to be is basically irrelevant when determining how materially useful it is, given all the bizarre artifacts of git. Nobody really seems to have mastered it, most people seem to grasp only a subset, and there are so many potential pitfalls. It's a very powerful, raw technology that hasn't been properly productized, or rather, 'it works how Linus wants it to for his reasons' and everyone else just gets dragged along.


> Nobody really seems to really have mastered it

You can speak for yourself. Don't speak for me or my colleagues.


I'll bet $1000 that zero of your colleagues have truly mastered git.

I've watched countless times as employee X did something wrong, the so-called 'git experts' gaggled around them, and they spent quite a bit of time trying to 'fix' the situation with varying approaches that their peer gitters barely understood. All blissfully unaware that their 'intelligence signalling' is woefully misplaced, as such situations should not arise in the first place, and if they do, they should be easily remedied.

Moreover, it doesn't matter that 'a few people' have very strong Git skills, the point is - the vast majority don't and there should absolutely be no need for them to - the problems most developers face re: code management are simply not complex enough to warrant the power of the tool. The failure to understand this among so many glib gitters is basically existentially problematic. The inability to grasp the difference between 'raw power' and usability, complexity and the resulting costs ... is the real problem.


What you describe is not what "Gitflow" prescribes though - you don't have a separate "develop" and "master" branch.



