Wednesday, October 9, 2024

Real talk about career trajectories

Almost every time I scroll through LinkedIn, I run into one or more posts with some variation of the following:

  • You're wasting your life working for someone else, start your own business...
  • Invest in yourself, not traditional investments, so you can be super successful...
  • [literally today] Don't save for retirement, spend that money on household help so you can spend your time learning new skills...
  • etc.

These all follow the same general themes: investing in yourself allows you to start your own business, which allows you to get wealthy, and if you're working a 9-5 and doing the normal, recommended, financially responsible things, you're doing it wrong and will always be poor. I'm going to lay out my opinion on this advice, relate it to the career/life path I'm on, and describe what I would personally recommend.

Let's talk risk

The main factor that most people gloss over when recommending this approach to a career is the element of risk. When I last read up on this, at least in the tech sector, roughly 1/20 startups get to an initial (non-seed) funding round, and roughly 1/20 of those get to the point where founders can "cash out" (ie: sold, public, profitable, etc.). That's a huge number of dead bodies along the way, and the success stories are the outliers. When you hear about anyone's success with investing in themselves and becoming wealthy, there's a very heavy selection bias at work.

This is where personal wealth comes into play (eg: family wealth, personal resources, etc.), and why people who succeed at starting businesses usually come from money. For someone without financial resources, not working a steady job is a large financial risk: you usually won't make much money, and if your endeavor doesn't pan out, you might be left homeless (for example). People who already have resources can take that risk, repeatedly, and still be fine if it doesn't work out; normal people cannot. Starting a business is expensive, often in outlay costs in addition to opportunity costs. Personal resources mitigate that risk.

There's also an element of luck in starting a business: even if you have a good idea, good skills, good execution, etc., some of the chance of overall success will still come down to luck. This is something where you can do everything "right" and still fail. This risk is random, and cannot really be mitigated.

The other risk factor in owning and running a business comes later, if/when the business becomes viable. As a business owner, your fortunes go up or down based on the value of the business, and often you'll be personally liable for business debts while the company is smaller. In contrast, an employee is hired for a specific wage and/or compensation agreement, and that is broadly dependent on just performing job functions, not on how successful those functions make the business overall. Moreover, employees can generally ply their skills at any business, so if the business becomes bad to work at, they have mobility, whereas the business owner does not. Again, this is the risk factor.

But you still want to be wealthy...

Okay, so here's my advice to put yourself in the best position to be wealthy:

  1. Get really good at networking
  2. Buy rental property

#1 is the most important factor in overall wealth potential, in my opinion. In addition to being the primary factor in many types of jobs (eg: sales, RE agent), networking will get you opportunities. Also, being good at networking will make you good at talking to people, which is the primary job skill for managers. Since managers are typically compensated better than skilled workers, and are almost always the people who get elevated to C-level roles, this is the optimal career path in general for becoming wealthy, even if you don't start your own business.

#2 is the best possible investment type, at least in the US, because the government has made hoarding property the single most tax-advantaged investment possible. It is an absolutely awful social policy in general, as it concentrates wealth among the very rich, creates durable wealth inequality, and makes housing unaffordable for much of the middle class. The tax policy is the absolute pinnacle of corruption and/or stupidity in US government policy... but rental property is unequivocally the best investment class as a result of that policy, and unless you're in a position to compromise your own wealth for idealism, this is what you should invest in.

Parting thoughts

Most people will be employees (vs self-employed or business owners). If you are in a position and of the mindset to start a business, and are comfortable with the risk, that is a path you can choose. If that's not for you (as it's not for most people), but you still want the best chance of being reasonably wealthy, get really good at talking to people and maintaining social connections, and go into management: it's significantly easier than skilled or manual labor, and happens to pay significantly better as well. But at the end of the day, keep in mind that you don't have to luck into the 0.001% who are in a position to start a business, and then actually succeed at it, in order to make enough money to have a comfortable life, and there is more to life than just making lots of money.


Saturday, October 5, 2024

The value of knowing the value

Something I've been musing about recently: there's a pretty high value, for managers, in knowing the value of people within a business. I'm kinda curious how many companies (and managers within said companies) actually know the value of their reports, beyond just the superficial stuff (like job level).

Motivating experience: people leave companies. Depending on how open the company is with departures, motivations, internal communications, etc., this may be somewhat sudden and/or go unnoticed for some time. Sometimes the people who leave have specific historical knowledge which is not replicated anywhere else in the company, or general area expertise which cannot easily be replaced, or are fulfilling roles which would otherwise be more costly to the organization if that work were not done, etc. Sometimes these departures can have significant costs for a company, beyond just the lost nominal productivity.

Note that departures are entirely normal: people move on, seek other opportunities, get attractive offers, etc. As a company, you cannot generally prevent people from leaving, and indeed many companies implicitly force groups out the door sometimes (see, for example, Amazon's RTO mandates and managing people out via underhanded internal processes). However, in concept, a company would usually want to offer to pay what a person is worth to the company in order to retain them, and would not want someone to walk away whom they would have been willing to pay enough to stay. That's where knowing value comes into play: in order to do that calculation, you need the data, inclusive of all the little things which do not necessarily make it into a high-level role description.

Not having been a manager (in any serious sense), I don't really have a perspective on how well this is generally understood within companies, but my anecdotal experience would suggest that it is generally not tracked very well. Granted, everyone is replaceable in some sense, and companies do not want to be in a position where they feel extorted, but the converse is that effective and optimal people management means "staying ahead" of people's value in terms of proactive rewards and incentives. I'd imagine that even a company which treats their employees as cattle would be displeased if someone they would have been happy to retain at a higher rate walked, just because their management didn't accurately perceive their aggregate value to the organization.

All of this is to say: there's a high value in an organization having an accurate sense of the value that people have to it, in order to optimally manage the business relationship with those people. If people leave your company, and you figure out later that they had more value than you were accounting for, that's a failure of management within the organization, and might be a costly one.

Addendum: Lest anyone think this is in relation to myself and my current position, it is not. However, I have been valued both more and less than my own perception of my value at various points in my career, so I know there can be a disconnect there, and I've seen organizations lose valuable expertise and not realize it until the people were gone. I would surmise that this might be more of a general blind spot than many companies realize.

Sunday, September 8, 2024

Zero cost abstractions are cool

There is a language design principle in C++ that you should only pay for what you use; that is, that the language should not be doing "extra work" which is not needed for what you are trying to do. Often this is extrapolated by developers to imply building "simple" code, and only using relatively primitive data structures, to avoid possible runtime penalties from using more advanced structures and methodologies. However, in many cases, you can get a lot of benefits by using so-called "zero cost abstractions", which are designed to be "free" at runtime (they are not entirely zero cost; I'll cover the nuance in an addendum). These are cool, imho, and I'm going to give an example of why I think so.

Consider a very simple function result paradigm, somewhat ubiquitous in code from less experienced developers: returning a boolean to indicate success or failure:

bool DoSomething(); 

This works, obviously, and fairly unambiguously represents the logical result of the operation (by convention, technically). However, it also has some limitations: what if I want to represent more nuanced results, for example, or pass back some indication of why the function failed? These limitations are often the trade-off accepted in exchange for the ubiquity of a standard result type.

Processors pass result codes by way of a register, and the function/result paradigm is ubiquitous enough that this is well-supported by all modern processors. Processor register sizes depend on the architecture, but should be at least 32 bits for any modern processor (and 64 bits or more for almost all new processors). So, when passing back a result code, passing 64 bits of data costs the same as passing one bit (as in the above example) in terms of runtime performance. We can therefore rewrite our result paradigm as this, without inducing any additional runtime overhead:

uint64_t DoSomething();

Now we have an issue, though: we have broken the ubiquity of the original example. Is zero success now (as would be more common in C++ for integer result types)? What do other values mean? Does each function define this differently (eg: a custom enum of potential result values per-function)? While we have added flexibility, we have impacted ubiquity, and potentially introduced complexity which might negate any other value gained. This is clearly not an unambiguous win.

However, we can do better. Instead of a raw numeric type, we can define a class type which encapsulates the numeric value (and which is the same size, so it can still be passed via a single register). Eg:

class Result
{
public:
    explicit Result(int64_t nValue) : m_nValue(nValue) {}
    // Non-negative values indicate success; negative values indicate failure
    bool isSuccess() const { return m_nValue >= 0; }
    bool isFailure() const { return m_nValue < 0; }

private:
    int64_t m_nValue;
};

Now we can restore ubiquity in usage: callers can use ".isSuccess()" and/or ".isFailure()" to determine whether the result was success or failure, without needing to know the implementation details. Even better: this also removes any lingering ambiguity from the first example, as we now have methods which clearly spell out intent in readable language. Also, importantly, this has zero runtime overhead: an optimizing compiler will inline these methods to be assembly-equivalent to manual checks.

Result DoSomething();

//...
auto result = DoSomething();
if (result.isFailure())
{
    return result;
}

This paradigm can be extended as well, of course. Now that we have a well-defined type for result values, we could (for example) define some of the bits as holding an indicative value for why an operation failed, and then add inline methods to extract and return those codes. For example, one common paradigm from Microsoft uses the lower 16 bits to encapsulate the Win32 error code, while the high bits carry information about the error disposition and the component area which generated the error. The same approach can also express nuance in success values; for example, an operation which "succeeded", but had no effect, because preconditions were not satisfied.
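
To make that concrete, here's a minimal sketch of what such accessors might look like on the Result type above. The bit layout and method names (errorCode, component) are purely illustrative, loosely inspired by that style rather than matching any actual Microsoft API:

#include <cstdint>

class Result
{
public:
    explicit Result(int64_t nValue) : m_nValue(nValue) {}

    bool isSuccess() const { return m_nValue >= 0; }
    bool isFailure() const { return m_nValue < 0; }

    // Illustrative layout: low 16 bits hold an error code, and the next
    // 16 bits identify the component which generated the error
    uint16_t errorCode() const { return static_cast<uint16_t>(m_nValue & 0xFFFF); }
    uint16_t component() const { return static_cast<uint16_t>((m_nValue >> 16) & 0xFFFF); }

private:
    int64_t m_nValue;
};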

Moreover, if used fairly ubiquitously, this can be used to easily propagate unexpected error results up a call stack, as suggested above. One could, if inclined, add macro-based handling to establish a standard paradigm of checking for and propagating unknown errors, and with the addition of logging in the macro, the code could also generate a call stack in those cases. That compares fairly favorably to typical exception usage, for example, both in utility and in runtime overhead.
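
For illustration, a minimal sketch of that propagation paradigm, assuming the Result type above; the macro name, the DoSomethingElse function, and the logging are hypothetical placeholders rather than anything from a real codebase:

#include <cstdio>

// Illustrative propagation macro: check a Result, log the location on failure,
// and return it to the caller. Each frame that propagates adds a log line, so
// the log approximates a call stack for unexpected failures.
#define RETURN_IF_FAILED(expr)                                              \
    do                                                                      \
    {                                                                       \
        Result result_ = (expr);                                            \
        if (result_.isFailure())                                            \
        {                                                                   \
            std::fprintf(stderr, "Failure at %s:%d\n", __FILE__, __LINE__); \
            return result_;                                                 \
        }                                                                   \
    } while (0)

Result DoSomethingElse(); // hypothetical second operation

Result DoSeveralThings()
{
    RETURN_IF_FAILED(DoSomething());
    RETURN_IF_FAILED(DoSomethingElse());
    return Result(0); // success
}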

So, in summary, zero cost abstractions are cool, and hopefully the above has convinced you to consider that standard result paradigms are cool too. I am a fan of both, personally.

Addendum: zero cost at runtime

There is an important qualification to add here: "zero cost" applies to runtime, but not necessarily to compile time. Adding structures implies some amount of compilation overhead, and with some paradigms this can be non-trivial (eg: heavy template usage). While the above standard result paradigm is basically free and highly recommended, it's always important to also consider compile-time overhead, particularly when using templates which may have a large number of instantiations, since the small per-instantiation cost adds up. The more you know.



Wednesday, July 17, 2024

Why innovation is rare in big companies

I have worked for small and large tech companies, and as I was sitting through the latest training course this week on the importance of obtaining patents for business purposes (a distasteful but necessary thing in the modern litigious world), I was reflecting on how much more innovative smaller companies tend to be. This is, of course, the general perception also, but it's really the realities of how companies operate which motivate this outcome, and this is fairly easy to see when you've worked in both environments. So, I'm going to write about this a bit.

As a bit of background, I have two patents to my name, both from when I worked in a small company. Both are marginally interesting (more so than the standard big company patents, anyway), and both are based on work I did in the course of my job. I didn't strive to get either; the company just obtained them after the fact for business value. I have no idea if either has ever been tested or leveraged, aside from asset value on a spreadsheet.

Let's reflect a bit on what it takes to do projects/code at a typical big company. First, you need the project to be on the roadmap, which usually requires some specification, some amount of meetings, convincing one or more managers that the project has tangible business value, constraining the scope to only provable value, and getting it into the process system. Then you have the actual process, which may consist of ticket tracking, documents, narrowing down scope to reduce delivery risk, getting various approvals, doing dev work (which may be delegated depending on who has time), getting a PR done, getting various PR approvals (which usually strip out anything which isn't essential to the narrow approved scope), and getting code merged. Then there is usually some amount of post-coding work, like customer docs, managing merging into releases, support work, etc. That's the typical process in large companies, and is very reflective of my current work environment, as an example.

In this environment, it's very uncommon for developers to "paint outside the lines", so to speak. To survive, mentally and practically, you need to adapt to being a cog who is given menial work to accomplish within a very heavyweight, process-oriented system, and who is strongly discouraged from trying to rock the boat. Work gets handed down, usually defined by PMs and approved by managers in planning meetings, and anything you do outside of that scope is basically treated as waste by the organization, to be trimmed by all the various processes and barriers along the way. This is the way big companies operate, and given enough resources and inherent inefficiencies, it works reasonably well to maintain and gradually evolve products in entirely safe ways.

It is not surprising that this environment produces little to no real innovation. How could it? It's actively discouraged by every process step and impediment which is there by design.

Now let's consider a small company, where there are typically limited resources, and a strong driving force to build something which has differentiated value. In this environment, developers (and particularly more senior developers) are trusted to build whatever they think has the most value, often by necessity. Some people will struggle a lot in this environment (particularly those who are very uncomfortable being self-directed); others will waste some effort, but good and proactive developers will produce a lot of stuff, in a plethora of directions. They will also explore, optimize, experiment with different approaches which may not pan out, etc., all virtually unconstrained by process and overhead. In this environment, it's not uncommon to get 10x productivity from each person, even if only half of that work actually ends up being used in production (compared to the carefully controlled work product in larger companies).

But small companies get some side benefits from that environment, in addition to the overall increases in per-person effective productivity. Because developers are experimenting and less constrained by processes and other people's priorities, they will often build things which would never have been conceived of in meetings among managers and PMs. Often these are "hidden" things (code optimizations, refactoring to reduce maintenance costs, process optimizations for developer workflows, "fun" feature adds, etc.), but sometimes they are "interesting" things, of the sort which could be construed as innovations. It is these developments which typically give rise to actual advances in product areas, and ultimately lead to business value through patents which have meaning in the markets.

Now, I'd be remiss not to mention that a number of companies are aware of this fact, and have done things to try to mitigate these effects. Google's famous "20% time", for example, was almost certainly an attempt to address this head-on, by creating an internal environment where innovation was still possible even as the company grew (note: they eventually got too large to sustain this in the face of profit desires from the market). Some companies use hackathons for this, some have specific groups or positions which are explicitly given this trust and freedom, etc. But by and large, they are all just trying to replicate what tends to happen organically at smaller companies, which do not have the pressure or the resources to build all the systems and overhead that get in the way of their own would-be success.

Anyway, hopefully that's somewhat insightful as to why most real innovation happens in smaller companies, at least in the software industry.


Friday, July 5, 2024

How not to do Agile

Note: This post is intended to be a tongue-in-cheek take, based on an amalgam of experiences and anecdotes, and is not necessarily representative of any specific organization. That said, if your org does one or more of these things, it might be beneficial to examine if those practices are really beneficial to the org or not.

The following is a collection of things you should not do when trying to do Agile, imho; these practices run counter to the spirit of the methodology, will likely impede realizing its value, and/or demonstrate a fundamental misunderstanding of the concept(s).

Mandating practices top-down

One of the core precepts of Agile is that it is a "bottom-up" organization system, which is intended to allow developers to tweak the process over time to optimize their own results. Moreover, it is very important in terms of buy-in for developers to feel like the process is serving the needs of development first and foremost. When mandated from outside of development, even an otherwise optimal process might not get enough support over time to be optimally adopted and followed.

It is very often a sign of "Agile in name only" within organizations when this is mandated by management, rather than adopted organically (and/or with buy-in across the development teams). This is one of the clearest signals that an organization has not bought into Agile, and/or that management has a fundamental misunderstanding of what Agile is.

Making your process the basis of work

One of the tenets of Agile is that process is there in service of the work product, and should never be the focus of efforts. As such, if tasks tend to revolve around process, and/or are dependent on specific process actions, this should be considered a red flag. Moreover, if developers are spending a non-trivial amount of time on process-related tasks, this is another red flag: in Agile, developers should be spending almost all their time doing productive work, not dealing with process overhead.

One sign that this might be the case is if/when workflows are heavily dependent on the specifics of the process and/or tooling, as opposed to the logical steps involved in getting a change done. For example, if a workflow starts with "first, create a ticket...", this is not Agile (at least in spirit, and probably in fact). If the workflow is not expressed in terminology which is process and tooling independent, the org probably isn't doing Agile.

Tediously planning future release schedules

Many organizations with a Waterfall mindset always plan out future releases, inclusive of which changes will be included, what is approved, what is deferred, etc. This (of course) misses the point of Agile entirely, since (as encapsulated in the concept of Agile) you cannot predict the timeline for changes of substance, and this mentality makes you unable to adapt to changing circumstances and/or opportunities (ie: be "agile"). If your organization is planning releases with any more specificity than a general idea of what's planned for the next release, and/or it would be a non-trivial effort to include changes of opportunity in a release, then the org isn't doing Agile.

Gating every software change

The Agile methodology is inherently associated with the concept of Continuous Improvement, and although the two can be separated conceptually, it's hard to imagine an Agile environment which did not also emphasize CI. Consequently, in an Agile environment, small incremental improvements are virtually always encouraged, both explicitly via process ideals, and implicitly via low barriers. Low barriers are, in fact, a hallmark of organizations with high code velocity, and of effectively all highly productive Agile dev teams.

Conversely, if an organization has high barriers in practice to code changes (process-wise or otherwise), and/or requires tedious approvals for any changes, this is a fairly obvious failure in terms of being Agile. Moreover, it's probably a sign that the organization is on the decline in general, as projects and teams where this is the prevailing mentality and/or process tend to be fairly slow and stagnant, and usually in "maintenance mode". If this doesn't align with management's expectations for the development of a project, then the management might be poor.

Creating large deferred aggregate changes

One of the precepts of Agile is biasing toward small, incremental changes, which can be integrated and tested early as self-contained units. Obviously, large deferred aggregate changes are the antithesis of this. If your organization has a process which encourages or forces changes to be deferred and/or to grow in isolation, you're certainly not doing Agile, and might be creating an excessive amount of wasteful overhead as well.

Adding overhead in the name of "perfection"

No software is perfect, but that doesn't stop bad/ignorant managers from deluding themselves with the belief that by adding enough process overhead, they can impose perfection upon a team. Well-functioning Agile teams buck this trend through self-organization and control of their own process, where more intelligent developers can veto these counter-productive initiatives from corporate management. If you find that a team is regularly adding more process to try to "eliminate problems", that's not only not Agile, but you're probably dealing with some bad management as well.

Having bad management

As alluded to in a number of the points above, the management for a project/team has a huge impact on the overall effectiveness of the strategies and processes. Often managers are the only individuals who are effectively empowered to change a process, unless that ability is clearly delegated to the team itself (as it would be in a real Agile environment). In addition to this, though, managers shape the overall processes and mindsets, in terms of how they manage teams, what behaviors are rewarded or punished, how proactively and clearly they articulate plans and areas of responsibility, etc. Managers cannot unilaterally make a team and/or process function well, but they can absolutely make a team and/or process function poorly.

Additionally, in most organizations, managers end up being ultimately responsible for the overall success of a project and/or team, particularly when in a decision-making role, because they are the only individuals empowered to make (or override) critical decisions. A good manager will understand this responsibility, and work diligently to delegate decisions to the people most capable of making them well, while being proactively vigilant for intrusive productivity killers (such as heavy process and additional overhead). Conversely, a bad manager either makes bad decisions themselves, or effectively abdicates this responsibility through inaction and/or ignorance, and allows bad things to happen within the project/team without acknowledging responsibility for those events. If the project manager doesn't feel personally responsible for the success of the product (perhaps in addition to others who also feel that way), then that manager is probably incompetent in their role, and the project is likely doomed to failure in the long run unless they are replaced.

Take home

Agile is just one methodology for software development; there are others, and anything can work to various degrees. However, if you find yourself in a position where the organization claims to be "agile", but exhibits one or more of the above tendencies, know that you're not really in an organization which is practicing Agile, and their self-delusion might be a point of concern. On the other hand, if you're in a position to influence and/or dictate the development methodology, and you want to do Agile, make sure you're not adding or preserving the above at the same time, lest you be the one propagating the self-delusion. Pick something that works best for you and your team, but make an informed choice, and be aware of the trade-offs.


Monday, June 17, 2024

Status reporting within orgs

Story time:

Back in my first job out of school, when I didn't really have much broad context on software development, I was working for Lockheed Martin as an entry-level developer. One of my weekly tasks was to write a status report email to my manager, in paragraph form, describing all the things I had been doing that week, and the value that they provided for various projects and initiatives. This was something which all the developers in the team needed to do, and my understanding was that it was a fairly common thing in the company (and by assumption at the time, within the broader industry).

At some point, I was asked to also write these in the third person, which was a bit odd, until I realized what was going on with them. My manager was aggregating them into a larger email communication to his manager, in which he was informing his management as to all the value that his team was providing under his "leadership". I don't know to what extent he was taking credit for those accomplishments (explicitly or implicitly), but I do know that he didn't do anything directly productive per se: his entire job was attending meetings and writing reports, as far as I could tell (Lockheed had separate project management and people management, and he was my people manager, so not involved in any projects directly). I rarely spoke to him, aside from sometimes providing on-demand status updates, or getting information on how to navigate corporate processes as necessary.

Furthermore, my understanding was that there was an entire hierarchy of "people managers", who all did only that: composing and forwarding emails with status information, and helping navigate corporate processes (which in many cases, they also created). Their time was also billed to projects which their reports were attached to, as "management overhead".

I raise this point because, as my career progressed, I realized this was not uniform practice in the industry. In fact, practices like Agile explicitly eschew this methodology; Agile strongly promotes developer agency and trust, minimizing process overhead, developer team self-organization, and only reporting blockers as necessary. In large part, Agile is built upon the premise that organizations can eliminate status reporting and still get good outcomes in terms of product development; or, implicitly, the presumption that middle managers receiving and forwarding status reports provide little to no value to the overall organization.

I've always found that presumption interesting, and I think it's very much an open question (ie: value vs cost for management). I've experienced what I would consider to be good managers; these are people who do not add much overhead, but efficiently unblock development efforts, usually through some combination of asymmetric communications, resource acquisition, or process elimination (ie: dealing with process overhead, or eliminating it entirely). I've also experienced managers who, in my perception, provided very little value; these are people who frequently ask for status reporting information, mainly just "pass the buck" when blockers arise, and seem unable to reduce or offload any overhead (and/or add more). Those "low value" managers can also add substantial overhead, particularly when they do not understand technical details very well but nevertheless involve themselves in technical discussions, and thus require additional hand-holding and "managing up" to try to prevent them from making bad decisions (which might contravene or undermine better decisions made by people under them in the company hierarchy).

So, then, the next open question is: how does an organization differentiate between "good" and "bad" management, as articulated here (if they were so inclined)?

In my mind, I'd start by looking at the amount of status reporting being done (in light of the Agile take on the overall value of such), as a proxy for how much value the management is providing vs the overhead they are creating. Obviously, reports could also be surveyed as another indirect measurement of this, although there is inherently more bias there, of course. But generally speaking, ideally, status information should be communicated and available implicitly, via the other channels of value-add which a good manager is providing (eg: by providing general career advice to reports, and mitigating overhead for people, a manager should already be implicitly aware of the work efforts of their reports). If explicit status reporting is necessary and/or solicited, that could be a sign that the value-add from those other activities is not extensive, and a point of concern for the org.

Anyway, those are some of my thoughts on the topic, and as noted, different orgs have different approaches here. I don't think there's a "right" or "wrong" here, but I do think there are tangible trade-offs with different approaches, which means it is a facet of business where there is potentially room for optimization, depending on the circumstances. That, by itself, makes it an interesting point of consideration for any organization.


Sunday, June 9, 2024

Some thoughts on code reviews

Code reviews (https://about.gitlab.com/topics/version-control/what-is-code-review/) are a fairly standard practice in the industry, especially within larger companies. The process of having multiple developers look at code changes critically is found in several development methodologies (eg: extreme programming, pair programming, etc.), and they are often perceived as essential for maintaining a level of overall code quality. I'd imagine that no respectable engineering leader in any larger organization would accept a process which did not incorporate mandatory code reviews in some form.

So with that intro, here's a bit of a hot/controversial take: I think code reviews are overrated.

Before I dive into why I think this, a quick tangent for an admission: the majority of code I've actually written in my career, personally and professionally, has been done without a formal code review process in place (and thus, not code reviewed). I've also, personally, experienced considerably more contention and strife generated by code reviews than value-add. So I certainly have a perspective here which is biased by my own experience... but I also don't think my experience is that unique, and/or would be that dissimilar to the experience of many other people, based on conversations I've had.

So that being said, why do I not subscribe to the common wisdom here?

I know a manager, who shall remain unnamed here, for whom the solution for every problem (bug, defect, customer issue, design oversight, etc.) is always one of two things: we should have caught that with more code review, or we should have caught that with more unit testing (or both). His take represents the sort of borderline brainless naivety which is all too common among nominally technical managers who have never accomplished anything of significance in their careers, and have managed to leverage their incompetence into high-paying positions where they parrot conventional wisdom and congratulate themselves, while contributing no positive value whatsoever to their employing organizations.

The common perception of this type of manager (and I have known several broadly with this mindset) is that any potential product failure can be solved by more process, and/or more strict adherence to a process. There is not a problem they have ever encountered, in my estimation, for which their would-be answer is not either adding more process, or blaming the issue on the failure of subordinates to follow the process well enough. To these people, if developers just stare at code long enough, it will have no bugs whatsoever, and if code which passes a code review has a bug, it was because the reviewers didn't do a good enough job, and should not have approved it.

Aside: The above might sound absurd to anyone who has spent any time working in the real world, but I assure you that it is not. I know more than one manager whose position is that no code should ever be approved in a code review, and/or be merged into the company's repository, if it has any bugs whatsoever, and that if there are any bugs in the code, the code review approver(s) should be held accountable. I think this is borderline malicious incompetence, but some of these people have failed upward into positions where they have significant power within organizations, and this is absolutely a very real thing.

Back to code reviews, though: I think they are overrated based on a couple perceptions:

  • The most important factor in producing high-value code over time is velocity (of development), hands down
  • Code reviews rarely catch structural design issues (and even when they do, by that time, it's often effectively too late to fix them)
  • Code reviews encourage internal strife from opinionated feedback, which often has little value on overall code quality
  • Code reviews often implicitly bias heavily against people without as many social connections, and/or those who do not "play politics" within the org/teams (and conversely, favor those who do, encouraging that behavior)
  • As per above, code reviews are very often abused by bad managers, which can easily lead to worse overall outcomes for orgs

To be clear and fair, code reviews have some tangible benefits, and I wouldn't necessarily dispose of them all together, were I running a dev org. In particular, the potential benefits such as sharing domain knowledge, increasing collaboration, and propagating best-practices (particularly when mentoring more junior developers) are tangible benefits of code reviews or equivalent. There is a reasonably compelling argument that, with good management in place, and when not used for gating and/or misused for blame attribution, code reviews have sufficient positive value to be a good practice.

However, the risks here are real and substantial, and this is not something which is a clear win in all, or perhaps even most, cases. Code reviews impact velocity, and the positive value proposition must be reasonably high for them to have a net positive value, given that. You're not likely to catch many actual bugs in code reviews, and if your developers use them as a crutch for this (psychologically or otherwise), that's another risk. If you have management which thinks thorough enough code reviews will give you "pristine" code, you're almost certainly better off eliminating the reviews entirely (in concept), in my estimation (assuming you cannot replace those terrible managers). Code reviews are something which can have net positive value when used appropriately... but using them appropriately is something I've personally seen far less often than not.

That's my 2c on the topic, anyway.