Lately I've been re-examining a lot of what I once thought to be best practices.
Here's a common software engineer origin story. You join a team fresh out of college with a blank slate. Your first code review happens and a senior engineer rips it to shreds.
"You should have used a factory pattern here"
"You should avoid using singletons"
"I think this query is going to be slow"
"You should write more tests to cover these edge cases"
You don't question much, and after several fixes and back-and-forth discussions your change is finally accepted. You feel proud and validated! Finally, you know what it feels like to be a real software engineer. Rinse and repeat dozens of times and soon you're promoted to a mid-level position. Now you find yourself repeating the same lessons to the new hires. The cycle starts anew. You glance over at Clean Code and The Pragmatic Programmer, gifted by your manager, sitting on your desk. The business keeps humming along, you think. You're doing something right.
Shipping at all costs
The beginning of my career was much like I just described. Our executives peered into their crystal balls and came up with a company vision. Their product manager minions then broke the vision down into a series of roadmaps and features, and tasked the engineering team with building these features as quickly as possible. The faster we could ship, the better. It was a simple equation, but always met with resistance from the engineering team. "We need to do it right", we would say. "We need to follow best practices".
Ultimately a compromise was made. We would ship as quickly as possible, but we required some extra time to also write tests and refactor the code as needed. This was the best of both worlds, we thought. We could still maintain our high level of code quality, which felt like a victory. But as time went on, I began to question the value of these best practices. The tests we wrote were often flaky and didn't catch the bugs we thought they would. The refactoring we did often introduced new bugs and took longer than expected. The best practices we followed often felt like cargo cult programming. We were doing them because we were told they were right, not because they actually helped us.
Cracks in the facade were forming, but it wasn't until I joined Stripe that I finally understood the bigger picture. Stripe has a culture around "shipping", although at first I didn't know what that meant. Hadn't I been shipping code my entire career? After joining, I was surprised to find that they didn't follow many of the best practices I had been taught. There weren't as many tests as I expected. They didn't use many design patterns. The codebase was foreign and lacked the object structure I was used to. Instead, everyone was focused on, and bought into, shipping.
The OOP myth
Stripe uses Ruby for most of its codebase, or at least did when I was there, and the first thing I noticed was how little OOP they used for such a highly OOP language. How could that be, when I was taught OOP was the best way to write software? Instead, most of the code was procedural. You have some bag of data and perform business logic on it using "commands", which are essentially functions. Teams have their own module containing a set of public data and commands for other teams to use, and private data and commands for their own use. The result was a structure like this:
lib/
  capital/
    commands/
      create_loan.rb
      get_loan_offers.rb
      make_loan_payment.rb
    data/
      loan.rb
      loan_offer.rb
    private/
      commands/
        calculate_interest.rb
        calculate_principal.rb
      data/
        interest.rb
        principal.rb
  issuing/
    ...
In the OOP world I would have expected to see a Loan "POJO" class, and then other classes like LoanService and LoanRepository to hold all the business and persistence logic. You'd be left to figure out how to wire all these classes together, often jumping through layers of interfaces and classes.
But in the Stripe world, the Loan class is a simple struct and the business logic all lives in the commands directory. A command is just an imperative set of instructions that encapsulates everything from performing the business logic to handling persistence. Our data layer did use an ORM framework with some OOP, but that was basically it.
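To make that shape concrete, here is a rough sketch of what a data struct and a command might look like. This is not Stripe's actual code; the module layout, field names, and the LoanStore persistence stand-in are all made up for illustration.

```ruby
require "securerandom"

module Capital
  module Data
    # A plain data bag: fields only, no behavior.
    Loan = Struct.new(:id, :amount_cents, :term_months, :status, keyword_init: true)
  end

  # Stand-in for the persistence layer (an ORM in the real thing).
  module LoanStore
    def self.insert(loan)
      (@rows ||= {})[loan.id] = loan
    end
  end

  module Commands
    # A command: validate, apply the business rules, persist. Nothing else.
    module CreateLoan
      def self.call(amount_cents:, term_months:)
        raise ArgumentError, "amount must be positive" unless amount_cents.positive?

        loan = Data::Loan.new(
          id: SecureRandom.uuid,
          amount_cents: amount_cents,
          term_months: term_months,
          status: :pending
        )
        LoanStore.insert(loan)
        loan
      end
    end
  end
end

# Callers on other teams just invoke the public command:
# Capital::Commands::CreateLoan.call(amount_cents: 50_000_00, term_months: 12)
```

There's no service object or repository interface to wire up first; the command is the entry point, and the data struct is just data.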
I was surprised by how easily I could jump in and understand another team's codebase. The flat hierarchy and the obvious "place" to put things were a revelation to me, and got me thinking: what is OOP actually trying to solve?
What makes code maintainable?
I thought OOP and design patterns were meant to make your code maintainable. Thinking more about this, I came up with the traits that, in my experience, make code maintainable:
- Readable: The code is easy to read and understand. It's easy to see what it's doing and why it's doing it. Code is read far more often than it is written.
- Discoverable: It's easy to understand how code is supposed to be used in common cases. For example, most users don't want to read through a massive man curl page to figure out how to make a simple HTTP request.
- Predictable: It's easy to understand what the code will do in a given situation, e.g. no side effects.
- Limited blast radius: It's easy to understand the impact of a change. For example, if I change this line of code, what else will happen?
- Debuggable: It's easy to understand what went wrong when something goes wrong.
- Testable: It's easy to write tests for the code. This includes being able to mock out dependencies and not having to set up a lot of state to test a small piece of code.
It turns out the command structure checks all of these boxes. It's dead simple to understand, with no layers of indirection trying to piece together OOP hierarchies. It encapsulates the business logic developers actually want to run in a single place, so you don't have to figure out how to wire together a bunch of objects on your own. And testing is straightforward: a command is just a function with easy-to-understand inputs and outputs.
What is worth testing?
OK, so maybe OOP isn't needed to write great code, but what about tests? I never bought into fancy code coverage metrics, but I tried to write varying levels of unit, integration, and end-to-end tests for all my features. Now I've come to realize that many tests are not worth writing. This boils down to a few ideas:
The Pareto Principle (80-20 rule) is particularly relevant to testing. After you've written a test for the happy path and a few error cases, the ROI for tests rapidly diminishes. Remember, tests are code too, and they need to be maintained. By adding more tests, you're making code harder to refactor and change. There's a hidden cost to tests that many devs don't consider.
Am I testing business logic or boilerplate code? I've seen tests where 75% of the code is just setting up the test and wiring dependencies together. You aren't getting much value out of these tests relative to their cost.
A good rule of thumb is simply asking yourself, "What is the cost of this code failing?" If you doubt your answer, then it's probably not worth testing. For example, I've seen an extraordinary amount of effort go into testing extreme edge cases that would never happen with a real user. That time would have been better spent working on features. At Stripe, the cost of its money-movement code failing would be catastrophic, so there is a lot of rigorous testing around it. But the same isn't true for other features.
Integration and E2E tests are much more valuable than unit tests. You can test a lot more with less code, although the tradeoff is that they are slower and more brittle. Investing in good integration and E2E tests (and the environment they run in) is one of the highest-ROI things you can do.
This is all to say that many devs get caught up in the idea that they need to test everything. In reality, lots of code is not worth testing, so focus on a few solid test cases and move on. If a feature is very high-stakes, I'll write a few extra test cases, although the bar for that is high. It's better value to the business to move fast and occasionally have a bug or two if that means you get to ship extra features.
If you're building medical device software then maybe ignore this advice :)
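To show what "a happy path and a few error cases" can look like in practice, here's a minimal sketch using Minitest against the hypothetical CreateLoan command sketched earlier. The command and its error behavior are assumptions carried over from that sketch, not a real API.

```ruby
# Assumes the CreateLoan sketch from earlier has been loaded.
require "minitest/autorun"

class CreateLoanTest < Minitest::Test
  # Happy path: a valid request produces a pending loan.
  def test_creates_a_pending_loan
    loan = Capital::Commands::CreateLoan.call(amount_cents: 10_000_00, term_months: 12)
    assert_equal :pending, loan.status
    assert_equal 10_000_00, loan.amount_cents
  end

  # One obvious error case; past this point, the ROI drops off fast.
  def test_rejects_a_non_positive_amount
    assert_raises(ArgumentError) do
      Capital::Commands::CreateLoan.call(amount_cents: 0, term_months: 12)
    end
  end
end
```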
The scalability myth
Developers are trained on "premature optimization is the root of all evil". We happily write O(N^2) algorithms and push them to production because "N will be small in practice" (and in rare cases it isn't), yet somehow this thinking often gets lost when it comes to system design.
I've seen a lot of over-engineering when it comes to system design. I'm plenty guilty of this myself. It turns out that modern hardware is incredibly fast and your data might not be as big as you think! While it's good to consider scale, very few of the features I thought would have scaling issues actually caused a problem. Most scaling issues I've seen were caused by bugs and not the actual feature itself. I'd rather ship a feature and have to turn it off due to a load issue than worry about imaginary scale problems. It's a good problem to have in most cases. If a feature becomes successful you can always improve its performance later on. You'd be surprised how many businesses could survive off a simple Docker container hosted on a cheap VPS instance these days.
I think this is partly caused by the engineering culture of a company. Much like Conway's Law, where systems tend to mirror the communication structures of the organizations that build them (e.g. org charts), devs will over-engineer their system designs to match the format of their system design review. In other words, if there is a fancy review process, engineers will think they need a system with all the bells and whistles.
Delivering value
The point I'm trying to make is that it's easy to get caught up in things that don't matter in the end. Focus on shipping changes as often and early as possible so users can get something in their hands to play with. Learn from their feedback and keep iterating. Soon you'll have successful software.
For example, one of my favorite practical techniques is to include a misc JSON column on my tables to stuff in random data that doesn't fit the rest of my schema. Both MySQL and PostgreSQL support JSON columns natively. At first I thought this was a horrible idea that would kill my database performance. But in practice, it hasn't been an issue at all. It's a great way to ship features quickly and not worry about slight schema changes. If a feature becomes successful, I can easily perform a migration to make it better.
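As a sketch of what this can look like, here's a hypothetical ActiveRecord migration adding such a column on PostgreSQL (on MySQL you'd use :json instead of :jsonb). The table and key names are made up.

```ruby
# A sketch assuming Rails/ActiveRecord on PostgreSQL; not a prescription.
class AddMiscToLoans < ActiveRecord::Migration[7.0]
  def change
    # A catch-all JSON column for data that doesn't fit the schema yet.
    add_column :loans, :misc, :jsonb, null: false, default: {}
  end
end

# Later, stash feature-specific odds and ends without touching the schema:
# loan.update!(misc: loan.misc.merge("referral_source" => "partner_xyz"))
```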
What happened to me, and I think many other engineers, is that we lose sight of what we're actually trying to achieve. We're not trying to write the best code that impresses other engineers, we're trying to deliver value to the business. The best code is the code that delivers the most value. This is not to say that we should write bad code, but that we should be pragmatic about what we're doing. We should be asking ourselves, "Is this the best use of my time?" and "Is this the best use of the company's time?" Sure, I could spend 40 hours making this code perfect, but is that really worth it? What if I could spend 10 hours and build something 95% as good?
This sounds soulless, but it's not. Looking back now, most of those meetings and long PR discussions over arbitrary code quality were a waste of time and didn't matter. Best practices can offer good advice, but they can also be abused when followed blindly. I've had to unlearn some of my enterprise Java habits.
I think the best engineers are the ones who can balance these two things. They can write code that is good enough, and they can ship it quickly. They understand when it's worth writing a test, refactoring, or using a design pattern, and also when it's not.