Stop Using Pull Requests
Your team’s code review process is probably an expensive illusion of quality. Here’s what the evidence says, and what to do instead.
“Inspection is too late. The quality, good or bad, is already in the product. Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place.”
-- W. Edwards Deming, Out of the Crisis (1982)
Abstract
Pull requests have become the default code review mechanism in software teams everywhere. But the pull request was invented for open source projects, where strangers contribute code to repositories maintained by people who don’t know them. When private teams of trusted colleagues adopt the same process, they import a model designed for low trust into an environment that should operate on high trust. The result is a post-development inspection system that the evidence says catches few bugs, introduces enormous waiting time, incentivises large batches, and fractures teams into isolated individuals. Academic research, large-scale industry data from DORA, and a growing practitioner consensus all point in the same direction: there are better ways to build quality into software than inspecting it after the fact. This article examines the evidence and proposes an alternative — T*D, the union of Test-Driven Development, Trunk-Based Development, and Team-focused Development — as a path from slow, inspection-heavy workflows to fast-flowing teams that produce high-quality, safe systems.
TL;DR
Pull requests were designed for open source contributions from untrusted strangers. Applying them to trusted teams is a category error.
Peer-reviewed research shows code review’s primary value is knowledge transfer, not bug detection. Less than 15% of review comments relate to actual bugs.
Async PR workflows mean your code spends 86-99% of its lead time waiting. One organisation spent 130,000 hours in a single year waiting on PRs that received zero comments.
DORA research across 36,000+ professionals shows trunk-based development correlates with dramatically higher software delivery performance, and faster code reviews alone improve performance by 50%.
The alternative is T*D: Test-Driven Development (build quality in), Trunk-Based Development (integrate continuously), and Team-focused Development (review during creation, not after).
The transition is gradual: optimise PRs first, adopt Ship/Show/Ask, then move to pairing and trunk-based development as trust and automation mature.
Deming wrote those words about manufacturing in 1982, but they describe what happens in most software teams today with uncanny precision. A developer writes code in isolation, on a branch, for hours or days. Then they open a pull request. The code sits in a queue. Someone eventually looks at it, leaves a few comments -- mostly about naming and formatting -- clicks approve, and the code merges. The whole process feels thorough. It feels like quality control. But the evidence suggests it is mostly theatre.
This article is not against code review. Code review has real value -- but that value is knowledge transfer and shared understanding, not catching bugs. And a blocking, asynchronous pull request is one of the worst possible mechanisms for achieving it.
What follows is a synthesis of peer-reviewed academic research, large-scale industry data, and practitioner experience spanning two decades. The picture that emerges is consistent and, for many teams, uncomfortable: the way most organisations practice code review is an expensive ritual that slows delivery, encourages large batches, fractures team cohesion, and produces a false sense of security. Better alternatives exist, and they aren’t theoretical -- they are practised successfully by high-performing teams around the world.
A Solution Designed for Strangers
Pull requests have a specific origin story, and understanding it explains why they are a poor fit for most teams.
Linus Torvalds created the git-request-pull command in 2005, shortly after releasing Git. Before that, open source projects used email-based patch workflows: contributors mailed diffs to a mailing list, and the maintainer decided whether to apply them. In 2008, GitHub launched the web-based pull request, making it dramatically easier to accept contributions from the outside world -- from people the maintainers did not know.
This was a genuine innovation for open source. The pull request was designed as a gatekeeper mechanism for untrusted contributors submitting code to repositories they don’t own. It solved a real problem: how do you safely accept code from strangers?
But then something happened. Git became the dominant version control system. GitHub became the dominant platform. And teams everywhere adopted the pull request as their default workflow -- not because they had evaluated it against alternatives, but because it was there and everyone else was using it. As Martin Fowler observed: “I suspect that since pull requests are so popular, a lot of teams are using them by default when they would do better without them.”
The category error should be obvious. In an open source project, you are reviewing code from someone you may never have met, working in a codebase they may not fully understand, with no shared context about the team’s conventions or direction. A gatekeeper model makes sense. In a private team, you are reviewing code from a colleague who sits in the same stand-up, shares the same goals, and (presumably) has been hired precisely because you trust their competence. Yet the process is the same.
Thierry de Pauw puts it bluntly: “Pull requests are designed to make it easier to accept contributions from the outside world, from untrusted people we do not know about.” When your team adopts this model internally, you are importing a trust assumption that does not match your reality.
Even in open source, pull requests cause friction. Contributions sit unreviewed for weeks or months. Maintainer burnout is well documented. The process works better than emailing patches, but it is hardly frictionless. And the friction that is merely inconvenient in open source becomes genuinely destructive in a team that needs to integrate and deploy multiple times a day.
What Code Review Actually Finds
If you ask developers why they do code review, the most common answer is: to catch bugs. The evidence says this is largely a myth.
The most significant data comes from Microsoft Research. A 2015 IEEE study titled “Code Reviews Do Not Find Bugs: How the Current Code Review Best Practice Slows Us Down” found that only a very small percentage of code review comments had anything to do with bugs. Most were about structural issues and style. A landmark 2013 paper by Bacchelli and Bird -- “Expectations, Outcomes, and Challenges of Modern Code Review” -- studied code review at Microsoft through observation, interviews, and surveys and reached the same conclusion: while finding defects remains the main motivation for review, reviews are less about defects than expected. Less than 15% of issues discussed in code reviews relate directly to bugs. The primary benefits are knowledge transfer, increased team awareness, and the creation of alternative solutions.
A separate Microsoft Research study analysed 1.5 million review comments across five projects. The more files in a change, the lower the proportion of useful comments. Developers spent on average six hours per week reviewing others’ changes. Reviews containing more than 20 files were already too big for effective review.
Large-scale industry data tells the same story. One major technology company, after examining nine million code reviews internally, cited knowledge transfer as the primary source of code-review ROI, not bug finding. Up to 75% of code review comments affect software evolvability and maintainability rather than functionality.
None of this means code review is worthless. It means its value lies in a different place than most people assume. If your organisation justifies pull requests primarily as a bug-catching mechanism, you are optimising for a benefit that peer-reviewed research says is marginal. The knowledge transfer benefit is real -- but a blocking asynchronous queue is one of the worst ways to achieve it.
The Staggering Cost of Waiting
Here is a calculation that should disturb any engineering leader.
If a code change takes 10 minutes to make but waits 1 hour for review, it is waiting for 86% of its total lead time. If review takes 4 hours: 96% waiting. If the review doesn’t land until the next day -- which is common -- the code spends roughly 99% of its existence waiting for a human to look at it.
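The arithmetic behind those percentages is worth making concrete. A minimal sketch -- the 10-minute change is illustrative, not drawn from any study:

```python
def waiting_fraction(work_minutes: float, wait_minutes: float) -> float:
    """Share of a change's total lead time spent waiting for review."""
    return wait_minutes / (work_minutes + wait_minutes)

# 10 minutes of work, one hour in the review queue -> ~86% waiting
one_hour = waiting_fraction(10, 60)
# 4 hours in the queue -> ~96% waiting
four_hours = waiting_fraction(10, 4 * 60)
# next-day review (~24 elapsed hours) -> ~99% waiting
next_day = waiting_fraction(10, 24 * 60)

print(f"{one_hour:.0%} {four_hours:.0%} {next_day:.0%}")  # 86% 96% 99%
```

The formula makes the lever obvious: past the first hour of waiting, nothing about the work itself matters -- only the queue does.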
Martin Fowler cites a client that spent 130,000 hours in 2020 waiting for 7,000 pull requests that had no comments. Ninety-one percent of their PRs received no comments at all. The vast majority were rubber-stamped without substantive review, yet the process still imposed enormous delay.
The damage compounds through context switching. Research shows that developers wait an average of four days for a pull request review, that 86% of pull requests are handled under context-switching conditions, and that pull requests handled with context switching take 223% longer than those that are not. When a developer opens a PR and moves on to something else, rebuilding the mental context of the original work takes 30-60 minutes -- if they rebuild it at all.
Don Reinertsen’s The Principles of Product Development Flow provides the theoretical lens: batch size is an economic tradeoff between holding cost and transaction cost, and halving batch sizes halves queues and halves cycle time. Pull requests create a perverse incentive toward larger batches: because the transaction cost of getting a review is high (waiting, context switching, reviewer availability), developers batch more changes into each PR to amortise the review cost. Larger PRs take longer to review, reviews are less effective on large changes, and the cycle reinforces itself.
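A toy model makes Reinertsen’s point concrete. Assume a single reviewer works through a batch one item at a time at a fixed rate -- a deliberate simplification of his economics, not a faithful queueing model:

```python
def avg_review_delay(batch_size: int, items_per_hour: float) -> float:
    """Average delay per item when a reviewer processes a batch
    sequentially: the i-th item finishes after i / items_per_hour hours,
    so the mean delay works out to (batch_size + 1) / (2 * items_per_hour)."""
    finish_times = [i / items_per_hour for i in range(1, batch_size + 1)]
    return sum(finish_times) / batch_size

# A reviewer handling 4 items per hour:
print(avg_review_delay(16, 4))  # 2.125 hours per item
print(avg_review_delay(8, 4))   # 1.125 hours -- halving the batch roughly halves the delay
```

Even in this crude model, average delay scales linearly with batch size, which is exactly the relationship Reinertsen describes: smaller batches mean shorter queues and shorter cycle times.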
Charity Majors frames the cost financially. A 6-person team requiring days to deploy would need 24 people to match the output of a team deploying continuously -- roughly $3.6 million in unnecessary salary costs. A 10-person team shipping weekly would need 80 people: $14 million in waste. Her target: “Any merge triggers automatic deploy to production, completed in 15 minutes or less with no human intervention.”
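The arithmetic behind those figures is straightforward if you assume a loaded cost of roughly $200k per engineer -- an assumption consistent with the numbers quoted, not a figure from the source:

```python
def excess_salary_cost(team_size: int, equivalent_size: int,
                       salary: int = 200_000) -> int:
    """Cost of the extra heads needed to match a faster team's output.
    The $200k loaded salary is an assumed round number, chosen because
    it reproduces the figures cited, not a value from the source."""
    return (equivalent_size - team_size) * salary

print(excess_salary_cost(6, 24))   # 3600000 -> $3.6M in unnecessary salary
print(excess_salary_cost(10, 80))  # 14000000 -> $14M in waste
```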
Async Reviews: The Vicious Cycle
Dragan Stepanovic examined async code review workflows across more than 30 active repositories and observed a counter-intuitive pattern: teams using small PRs with async code reviews tended to have lower throughput than teams using large PRs. The reason is a feedback loop. Waiting for review leads to high work-in-progress (developers pull in new work while waiting). High WIP means less availability for reviewing. Less availability means more async handoffs. More async handoffs mean more waiting.
Instead of forming one team, developers become what Stepanovic calls “N teams of one person, often with different engineering cultures and coding practices.” The isolation is the opposite of what high-performing teams need.
Chelsea Troy identifies the root cause as insufficient shared context. Asynchronous code review places a massive demand on the reviewer’s time because they lack the context the author built over hours or days. Pairing is more efficient precisely because the context transfer is immediate -- the work of understanding and reviewing happens in a condensed amount of time relative to the amount of work being done.
A caveat is warranted: these are practitioner observations, not controlled experiments. The causal mechanism is plausible but not empirically proven. It is possible that teams with lower throughput default to async review for other reasons. But the observations are consistent across multiple independent practitioners, and the logic of WIP accumulation under queuing theory is well established.
What DORA Tells Us
If the practitioner arguments are directional, the DORA data provides the scale.
Accelerate, by Nicole Forsgren, Jez Humble, and Gene Kim, is based on four years of research across 23,000 surveys from over 2,000 organisations. Their findings on trunk-based development are unambiguous: developing off trunk rather than long-lived feature branches correlated with higher delivery performance. High-performing teams had fewer than three active branches at any time, branches lasted less than a day, and there were no code freezes or stabilisation periods.
The DORA capabilities framework identifies trunk-based development as a key technical capability. Elite performers who meet reliability targets are 2.3 times more likely to use trunk-based development. Crucially, the framework explicitly calls out “heavyweight code review processes” as a barrier -- they push developers toward larger batches and delay merges.
The 2023 State of DevOps Report, based on 36,000+ professionals worldwide, found that accelerating the code review process alone can lead to a 50% improvement in software delivery performance. Note what this says: the problem is not review itself, but the speed of review. Make review instant -- through pairing, rapid turnaround, or automation -- and you keep the benefit without the cost.
An important caveat: the DORA research is correlational, not causal. It shows that high-performing teams tend to use trunk-based development, not that trunk-based development causes high performance. Teams that adopt TBD may also be better-funded, more skilled, or have stronger engineering cultures. But the consistency of the finding across years and across tens of thousands of respondents makes it the strongest industry evidence available.
The Minimum CD initiative, co-created by Jez Humble, is blunt about the implication: daily integration to trunk is non-negotiable. If your team is not integrating to trunk daily, you are not doing continuous integration. Pull requests, as typically practised, fail this test.
A Growing Practitioner Consensus
ThoughtWorks placed “Peer review equals pull request” in the HOLD ring of their Technology Radar in April 2021 -- meaning “proceed with caution.” They noted that PRs create significant team bottlenecks, degrade review quality as overloaded reviewers begin rubber-stamping, and in one case, a client’s regulatory audit found that pull requests did not satisfy compliance because there was no evidence the code was actually read.
Kief Morris, also of ThoughtWorks and author of Infrastructure as Code, argues that pull requests add overhead designed for low-trust situations and that pull requests are not continuous integration -- CI is an alternative to pull requests, not a complement.
Dave Farley, co-author of Continuous Delivery, argues that pull requests are an artifact of branch-based development which deliberately isolates changes from mainline -- the opposite of what CI requires.
Jason Gorman asks the most piercing question: “Ask not so much ‘How do we do Pull Requests?’ but rather ‘Why do we need to do Pull Requests?’” His answer: pull requests are a symptom of low trust, not a solution for low quality. Teams should address the root causes -- skills development, professional standards, pair programming -- rather than institutionalising slow inspection.
And perhaps the most quotable line comes from Jessica Kerr, amplified by Kent Beck: “Pull requests are an improvement on working alone. But not on working together.”
T*D: Build Quality In
If pull requests are Deming’s inspection, what is the alternative? What does it look like to build quality into the process itself?
The answer is a union of three practices I’ll call T*D -- a deliberate echo of TDD, because all three names match the T*D pattern and all three share the same philosophy of shifting quality left.
Test-Driven Development
Write a failing test. Write the minimum code to pass it. Refactor. Repeat.
Microsoft and IBM studies found that TDD reduced pre-release defect density by 40-90% compared to projects not using it, at a cost of 15-35% more development time. Kent Beck’s insight was that automated testing replaces fear with confidence: when stress increases, developers run tests rather than skipping them.
TDD’s role in replacing PRs is straightforward: when every line of code is written to satisfy a test, the code has already been verified before it reaches anyone. The automated safety net catches regressions immediately. You do not need a human gatekeeper to tell you whether the code works -- the tests do.
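The loop is easy to see in miniature. A hypothetical `slugify` function, test first (red), then the minimum code to make it pass (green):

```python
import unittest

# Red: write the test first, for a function that does not yet exist.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: the minimum implementation that makes the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Refactor with the test as a safety net, then repeat with the next test.
```

Run with `python -m unittest`. By the time this code could reach a reviewer, its behaviour has already been verified -- which is the whole point.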
Trunk-Based Development
All developers merge to the main branch at least daily, working in small batches. Feature flags manage incomplete work.
The evidence, covered extensively above, is unambiguous: trunk-based development is correlated with higher delivery performance across every DORA metric. Paul Hammant’s trunkbaseddevelopment.com establishes that the ideal branch duration is one day maximum, with the smallest pieces being a quarter of a day. Feature flags eliminate the argument that “we need branches because features aren’t complete.”
TBD’s role in replacing PRs is structural: when changes are tiny (hours, not days), the review burden is minimal and can be handled through pairing or post-commit review. There is nothing to gate because there is nothing large enough to fear.
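A feature flag needs nothing exotic -- at its simplest it is a conditional around an entry point. A minimal sketch with a hypothetical in-process flag store (real teams typically use a config service or a product such as LaunchDarkly or Unleash):

```python
# Hypothetical in-process flag store; a real system would read flags
# from configuration so they can be flipped without a deploy.
FLAGS = {"new_checkout": False}

def is_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def legacy_checkout(cart: list[float]) -> float:
    return sum(cart)

def new_checkout(cart: list[float]) -> float:
    # The incomplete rewrite: merged to trunk daily, dark until the flag flips.
    return sum(cart)

def checkout(cart: list[float]) -> float:
    if is_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout([9.99, 5.00]))  # takes the legacy path while the flag is off
```

The half-finished `new_checkout` lives on trunk from day one, integrated and tested, without ever being exposed to users -- removing the last excuse for a long-lived branch.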
Team-Focused Development
Two or more developers work on the same code at the same time, providing continuous real-time review.
This is the component with the most nuance. The peer-reviewed evidence for pair programming’s quality benefits is substantial. Williams et al. (2000) found pairs produced 15% fewer defects. Hannay et al.’s 2009 meta-analysis -- the most comprehensive to date -- found a “small significant positive overall effect on quality”, strongest on complex tasks. Arisholm et al.’s 2007 study of 295 professional developers found a 48% increase in correct solutions on complex systems. Junior developers showed the strongest benefit: 149% improvement on complex tasks.
Most critically for our argument, Muller and Tichy (2005) directly compared pair programming to solo programming with peer review. Their central finding: pairs and solo developers with peer review achieve equivalent cost when quality is held constant. The review phase for solo developers adds enough overhead to match the cost of pairing. This means pair programming does not cost more than solo-plus-review -- it simply moves the quality assurance from after development to during it.
Ensemble (mob) programming extends this: the whole team works on the same thing, at the same time, on the same computer. Review happens instantly, after every line. The academic evidence for mob programming is still preliminary, but the practitioner consensus is strong: when the whole team creates together, there is no need for post-hoc inspection.
An honest caveat: no peer-reviewed study has directly proven that teams can safely eliminate post-hoc review when practising pair programming. The evidence shows cost-equivalence and comparable quality, not that one fully replaces the other. The argument that pairing eliminates the need for PRs is practitioner consensus extending academic evidence -- well-reasoned, but not empirically proven.
The Synthesis
When combined, these three practices create a system where quality is built in at every stage:
TDD means every line of code is verified by an automated test before it exists
TBD means changes are tiny, integrated frequently, and never far from mainline
TFD means human review happens continuously, during creation, not after
Fear of bugs? TDD catches them at creation time. Need for review? Pair programming reviews continuously. Integration risk? TBD integrates frequently in small batches. Knowledge sharing? Co-creation shares knowledge in real time.
The result is what Dave Farley describes as “Extreme Programming practices of ensemble programming and continuous code review that eliminate all waiting and waste.”
The Quality Journey: From Fear to Flow
I want to address the hardest question honestly: what if your team genuinely isn’t ready to drop pull requests?
This is a real concern. Jason Gorman acknowledges that the need for PRs often indicates a real skills gap. Thierry de Pauw argues that distrust-driven PR adoption signals deeper problems -- legacy code comprehension issues, blame cultures, or absent trust in engineers. No process fixes fundamental cultural dysfunction. But the answer is not to permanently institutionalise a slow inspection process. It is to address the root causes directly while gradually reducing dependence on gating review.
Here is a transition path, drawn from practitioner experience:
Stage 1: Optimise PRs. If you’re not ready to eliminate them, make them less harmful. Enforce smaller PRs (under 200 lines). Set review SLAs (under 4 hours). Automate all style and lint checks. Reduce required approvers to one. This alone can dramatically improve flow.
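The size limit is easy to automate. A hypothetical CI check that counts changed lines in a unified diff -- a real pipeline might shell out to `git diff --numstat` instead:

```python
def changed_lines(diff_text: str) -> int:
    """Count added and removed lines in a unified diff,
    skipping the +++/--- file headers."""
    return sum(
        1
        for line in diff_text.splitlines()
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    )

def pr_within_limit(diff_text: str, limit: int = 200) -> bool:
    """Gate for a hypothetical CI step: fail the build on oversized PRs."""
    return changed_lines(diff_text) <= limit

diff = """--- a/app.py
+++ b/app.py
@@ -1,2 +1,2 @@
-old = 1
+new = 2
"""
print(pr_within_limit(diff))  # True: only 2 changed lines
```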
Stage 2: Ship/Show/Ask. Rouan Wilsenach’s model, published on martinfowler.com, categorises changes into three types. Ship: merge directly to mainline without review -- for routine changes where the developer is confident. Show: open a PR, merge immediately without waiting, and let review happen post-merge. Ask: open a PR and wait for discussion -- for complex or uncertain changes. This reduces the proportion of work that needs blocking review and starts building the muscle of trust.
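The triage can be sketched as a tiny decision function -- the two booleans are hypothetical stand-ins for what is really the author’s judgement about risk and appetite for feedback:

```python
from enum import Enum

class Route(Enum):
    SHIP = "merge directly to mainline, no review"
    SHOW = "merge now, let review happen post-merge"
    ASK = "open a PR and wait for discussion"

def route_change(high_risk: bool, wants_feedback: bool) -> Route:
    """Hypothetical sketch of Ship/Show/Ask triage; the real decision
    is a confidence call, not two booleans."""
    if high_risk:
        return Route.ASK
    if wants_feedback:
        return Route.SHOW
    return Route.SHIP

print(route_change(high_risk=False, wants_feedback=False).name)  # SHIP
```

The useful property of the model is the default: most routine work should route to Ship or Show, leaving the blocking Ask path for the minority of changes that genuinely need discussion.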
Stage 3: Pair programming plus trunk-based development. Pair on all production code. Merge to trunk multiple times daily. Automated tests run on every commit. Post-commit review covers anything that wasn’t paired on.
Stage 4: Ensemble programming plus trunk-based development. The full team collaborates on code. Continuous review during development. Direct trunk commits. Feature flags for incomplete work.
Stage 5: Full T*D. All three practices operating together. Quality built in through TDD. Continuous integration through TBD. Continuous review through pair or ensemble. A fast automated deployment pipeline. Feature flags managing incomplete work. This is not a fantasy -- it is how the highest-performing teams in the industry operate.
Thierry de Pauw documents a specific intermediate approach: non-blocking continuous code review. Reviews happen on mainline after merging, on a per-feature level. Teams add a “To Review” column to their board. Developers check for pending reviews before starting new work. No hierarchical requirements -- seniors review juniors, juniors review seniors. Unreviewed code may reach production, but only if it passed automated testing. Notably, internal auditors in one organisation found this model superior to traditional checkbox-based compliance.
The key insight from Charity Majors is that speed and safety are not trade-offs -- speed IS safety. “Ship a single changeset by a single dev at a time, making it easy to isolate the owner of any problem, preventing the blast radius from expanding, and making it easy to fix while the intended effects of the code are fresh in their mind.”
Trust Is a Gradient
One final nuance. The argument that PRs are designed for “untrusted strangers” and therefore wrong for “trusted teams” implies a binary. In reality, trust is a gradient.
A small, co-located team of five people who pair daily has very high trust. The case against async PRs is strongest here. A large organisation with hundreds of engineers across multiple teams has lower cross-team trust. Within a team, pairing and trunk-based development may be ideal; across team boundaries, lightweight PRs with fast SLAs may still serve a purpose. For inner-source or platform teams that accept contributions from semi-external developers, a gatekeeper model may remain appropriate -- but even then, the goal should be fast, non-blocking review, not multi-day async queues.
For distributed teams with significant timezone overlap, pair programming via screen sharing works during overlap windows, with non-blocking post-commit review for work done outside overlap. For teams with minimal overlap, async PR workflows may be the least-bad option, but should be optimised aggressively: small changes, 4-hour review SLAs, automated quality gates, and a single required reviewer.
The antipattern diagnosis applies most strongly to same-team, same-context work where async blocking PRs replace trust that should already exist. As contributor trust decreases, some form of gatekeeping becomes more defensible. But it should always be as fast and as non-blocking as possible.
Conclusions
The evidence, while not without gaps, points in a clear and consistent direction.
Pull requests, as commonly practised -- large changes sitting in async queues for hours or days -- are an antipattern for private software teams. They were designed for a context (open source, untrusted contributors) that does not apply to most organisations. Peer-reviewed research shows they catch few bugs. Industry data shows the waiting time they impose constitutes almost all of a change’s lead time. Practitioner experience across dozens of independent voices confirms that better alternatives exist.
The alternative is not “no review.” It is better review -- built into the process rather than bolted on at the end. Test-driven development builds quality into every line. Trunk-based development keeps changes small and integration continuous. Pair and ensemble programming provide review that is immediate, contextual, and collaborative rather than delayed, decontextualised, and adversarial.
This is not about being reckless. It is about being rigorous in the right way. Deming’s insight, now more than 40 years old, still holds: you cannot inspect quality into a product. You must build it in. The teams that ship the fastest, with the fewest defects, are not the ones with the most elaborate gating processes. They are the ones that invested in the practices, skills, and trust that make gating unnecessary.
The path from where you are to where you want to be is gradual. Start by making your PRs smaller and faster. Then start asking which ones you don’t need at all. Then start pairing. Then start driving with tests. At each stage, the fear recedes a little, the flow increases, and the quality -- paradoxically, to those who equate inspection with safety -- gets better.
The question is not “How do we do pull requests better?” The question is “Why do we still need them?”
References
Peer-Reviewed Academic Research
Microsoft Research. “Code Reviews Do Not Find Bugs: How the Current Code Review Best Practice Slows Us Down.” IEEE, 2015. https://www.microsoft.com/en-us/research/publication/code-reviews-do-not-find-bugs-how-the-current-code-review-best-practice-slows-us-down/
Bacchelli, A. & Bird, C. “Expectations, Outcomes, and Challenges of Modern Code Review.” ICSE 2013. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ICSE202013-codereview.pdf
Bosu, A. et al. “Characteristics of Useful Code Reviews: An Empirical Study at Microsoft.” Microsoft Research, 2015. https://www.microsoft.com/en-us/research/publication/characteristics-of-useful-code-reviews-an-empirical-study-at-microsoft/
Williams, L. et al. “Strengthening the Case for Pair Programming.” IEEE Software, Vol. 17, No. 4, 2000. https://ieeexplore.ieee.org/document/854064/
Hannay, J.E. et al. “The Effectiveness of Pair Programming: A Meta-Analysis.” Information and Software Technology, Vol. 51, No. 7, 2009. https://www.sciencedirect.com/science/article/abs/pii/S0950584909000123
Arisholm, E. et al. “Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise.” IEEE Transactions on Software Engineering, Vol. 33, No. 2, 2007. https://ieeexplore.ieee.org/document/4052584/
Muller, M.M. & Tichy, W.F. “Two Controlled Experiments Concerning the Comparison of Pair Programming to Peer Review.” Journal of Systems and Software, Vol. 78, No. 2, 2005. https://www.sciencedirect.com/science/article/abs/pii/S0164121205000038
Madeyski, L. “An Empirical Study on Design Quality Improvement from Best-Practice Inspection and Pair Programming.” Springer LNCS Vol. 4034, 2006. https://link.springer.com/chapter/10.1007/11767718_27
di Bella, E. et al. “Pair Programming and Software Defects -- A Large, Industrial Case Study.” IEEE TSE, Vol. 39, No. 7, 2013. https://ieeexplore.ieee.org/document/6331491/
Nagappan, N. et al. “Realizing Quality Improvement Through Test Driven Development.” Microsoft/IBM study. https://www.researchgate.net/publication/258126622_How_Effective_is_Test_Driven_Development
Edmondson, A. “Psychological Safety and Learning Behavior in Work Teams.” Administrative Science Quarterly, 1999.
Industry Reports and Guides
Forsgren, N., Humble, J., Kim, G. Accelerate: The Science of Lean Software and DevOps. IT Revolution, 2018.
DORA/Google. “Capabilities: Trunk-based Development.” https://dora.dev/capabilities/trunk-based-development/
Google Cloud. “Accelerate State of DevOps Report 2023.” https://dora.dev/research/2023/dora-report/
ThoughtWorks. “Peer Review Equals Pull Request.” Technology Radar, April 2021. https://www.thoughtworks.com/radar/techniques/peer-review-equals-pull-request
SmartBear/Cisco. “Code Review at Cisco Systems.” 2006. https://static0.smartbear.co/support/media/resources/cc/book/code-review-cisco-case-study.pdf
Practitioner Sources
Deming, W.E. Out of the Crisis. MIT Press, 1982. Also: Deming Institute, “Dr. Deming’s 14 Points for Management.” https://deming.org/explore/fourteen-points/
Deming Institute. “Software Code Reviews from a Deming Perspective.” https://deming.org/software-code-reviews-from-a-deming-perspective/
de Pauw, T. “The Good and the Dysfunctional of Pull Requests.” ThinkingLabs, 2024. https://thinkinglabs.io/articles/2024/02/22/the-good-and-the-dysfunctional-of-pull-requests.html
de Pauw, T. “Non-Blocking, Continuous Code Reviews -- A Case Study.” ThinkingLabs, 2023. https://thinkinglabs.io/articles/2023/05/02/non-blocking-continuous-code-reviews-a-case-study.html
Stepanovic, D. “Async Code Reviews Are Killing Your Company’s Throughput.” 2021-2023. https://www.slideshare.net/kobac/async-code-reviews-are-killing-your-companys-throughput-248758692
Troy, C. “Reviewing Pull Requests.” chelseatroy.com, 2019. https://chelseatroy.com/2019/12/18/reviewing-pull-requests/
Morris, K. “Why Your Team Doesn’t Need to Use Pull Requests.” infrastructure-as-code.com, 2021. https://infrastructure-as-code.com/posts/pull-requests.html
Fowler, M. “bliki: Pull Request.” https://martinfowler.com/bliki/PullRequest.html
Fowler, M. “Continuous Integration.” Updated 2024. https://martinfowler.com/articles/continuousIntegration.html
Farley, D. “Continuous Integration and Feature Branching.” https://www.davefarley.net/?p=247 -- also “You NEED to Stop Using Pull Requests” (YouTube video).
Gorman, J. “Pull Requests, Defensive Programming -- It’s All About Trust.” Codemanship, 2020. https://codemanship.wordpress.com/2020/09/12/pull-requests-defensive-programming-its-all-about-trust/
Vocke, H. “You Might Be Better Off Without Pull Requests.” https://hamvocke.com/blog/better-off-without-pull-requests/
Beck, K. (via Twitter/X): “Pull requests are an improvement on working alone but not on working together.”
Kerr, J. “Those Pesky Pull Request Reviews.” jessitron.com, 2021. https://jessitron.com/2021/03/27/those-pesky-pull-request-reviews/
Majors, C. “How Much Is Your Fear of Continuous Deployment Costing You?” charity.wtf, 2021. https://charity.wtf/2021/02/19/how-much-is-your-fear-costing-you/
Reinertsen, D. The Principles of Product Development Flow. Celeritas Publishing, 2009.
Wilsenach, R. “Ship / Show / Ask.” Published on martinfowler.com. https://martinfowler.com/articles/ship-show-ask.html
Hammant, P. “Trunk Based Development.” https://trunkbaseddevelopment.com/
Minimum CD. “Minimum Viable Continuous Delivery.” https://minimumcd.org/
Meek, J. “Pull Requests are an Anti-Pattern.” Substack.
Zuill, W. “Mob Programming: A Whole Team Approach.” https://mobprogramming.org/
Beck, K. Test-Driven Development: By Example. Addison-Wesley, 2002.
Humble, J. & Farley, D. Continuous Delivery. Addison-Wesley, 2010.
Bogard, J. “Trunk-Based Development or Pull Requests: Why Not Both?” https://www.jimmybogard.com/trunk-based-development-or-pull-requests-why-not-both/