Friday, November 1, 2024

Documenting a software implementation

Allow me to posit a hypothetical. Imagine you are at a company, and someone (perhaps you) has just completed a project to implement some complex feature, and you have been tasked with ensuring the implementation is documented, such that other/future developers can understand how the implementation works. For the sake of argument, we'll assume the intent and high-level design have already been documented, and your task is to capture the specifics of the actual implementation (for example, ways in which it diverged from the original design, idiosyncrasies of the implementation, corner cases, etc.). We'll also assume you have free rein to select the best tooling, format, storage, etc. for the documentation, with the expectation that all of these are considered in your work product.

Note: This is, in my experience, a not uncommon request from management, especially in larger companies, so it seems like a reasonable topic for general consideration.

Let's see how it plays out, looking at the various design aspects under consideration, and what the best selections for each might be.

Considerations:

Documentation locality

One aspect which is certainly worth considering is locality: that is, where the documentation will live. Documentation kept in an external location can be both hard to locate (particularly for developers who are less familiar with the org's practices), and hard to keep in sync with the code (because it's easy for the code to be updated, and the documentation to be neglected). In concept, the documentation should be as "close" to the code as possible. An oft-quoted downside of putting documentation within a source code repository is that it cannot be as easily edited by non-developers, but in this case that will not be a concern, since presumably only developers will have direct knowledge of the implementation anyway. So the best place for this documentation is in the code repository, and as close to the implementation code as possible, to minimize the chances of one being updated and not the other.

Language and dialect

This might seem like a trivial consideration, particularly if you've only worked within smaller orgs with relatively homogeneous cultures and dev backgrounds, but I would suggest that it is not. Consider:

  • Not all the developers may speak the same native language(s), and nuance may be lost when reading non-primary languages
  • Some developers (or managers) may object to casual nomenclature for business products, but conversely not all developers may want to read, or be capable of writing, business professional text
  • There's also the question of style; for example, writing in textual paragraphs, vs writing in terse bullet points and such

In the abstract, the choice of language and dialect should be such that:

  • All developers can read and understand the nuances expressed in the documentation
  • The language used does not create undue friction for either being too much, or not enough, "business professional"
  • The writing style should be able to express the flow and semantics of the code in a comprehensible manner, while allowing for the various special-cases
    • For example, there should be a manner in which to express special-case notes on specific areas of the implementation, like footnotes or annotations
    • There should also be a way to capture corner cases, and perhaps which cases are expected to work, and which are not

Sync with implementation

This was alluded to in the locality point, but it's important that the documentation stay in sync with the implementation, to the maximum extent possible. If the documentation is out of sync, then it is not only worthless for understanding that piece of the code, but perhaps even a net negative, as a developer trying to understand the implementation from the documentation might be misled, and waste time due to bad assumptions based on the documentation. So in addition to locality (ie: docs near the code), we want to ensure that it is as easy as possible for developers to update the documentation at the same time they make any code changes, so that they will be able and inclined to do so.

Expedient vs comprehensive

It would be a bit remiss to not also mention the trade-off, in the initial production of the documentation, between being expedient and being comprehensive, and how much the considerations above might impact the speed at which the documentation can be produced. Every real-world org is constrained by available resources and time, and presumably you will have some time limit for this project as well. So the quicker you can produce the documentation, and the more comprehensive it is, the better your performance on this task will be.

So, what to do?

Admittedly, those readers who have thought about or performed this task already probably have some good ideas at this point, and perhaps the more intelligent readers have already figured out where this is going, just based on the objective analysis above. To recap the considerations, though:

  • We need something in a form which we can produce quickly, but which is also as comprehensive a description of the implementation as possible
  • The language used for the documentation must be readable by all the developers who are familiar enough with the code to work on it, regardless of their native language(s)
    • The documentation must be unquestionably work appropriate (no swear words, slang, obscure references, etc.), but also terse enough to provide value without being excessively verbose
    • There must be some mechanism in the structure to provide footnotes for implementation choices, corner cases, tested inputs, etc.
  • The documentation should be as close to the code as possible, such that it's easy to find, and there is a minimal risk of it getting out of sync with the actual implementation over time
  • It must impose the smallest reasonable amount of overhead for updating the documentation along with changes to the implementation over time
    • Note: This is often the hardest thing to get "right" with docs in general, since the value add for future readers must be greater than both the initial production time, and the maintenance time, for documentation to be a net positive value at all

Now, the above might seem like a tall order with lots of hard-to-answer questions, but let me point out something which might make these decisions a bit easier. A programming language, such as it is, is fundamentally just a way to describe what you want the computer to do in a human-readable form. Suppose, hypothetically, that we selected the same language as the implementation for the documentation. That documentation would be:

  • Able to be produced reasonably quickly
  • Readable to all developers who would be familiar with the implementation code
    • Unquestionably work appropriate
    • Able to provide footnotes (via comments, or ancillary code such as unit tests)
  • Very close to the code (could be in the same files, in fact, right next to the implementation)
  • ... but it would still have some non-trivial overhead to keep in sync with the actual production code

But wait... we can solve that last problem fairly trivially, by eliding the actual copy or translation of the code into nominal documentation form, and just relying on the code itself! Now we have gained:

  • Produced instantly (once the implementation is done, the documentation is also implicitly done)
  • Zero overhead to keep in sync with the actual implementation (since they are the same)

"But hold on", you might object, "what if the code is incomprehensible?" That is a valid question in the abstract, but I would counter with two observations:

  • If the code is incomprehensible, and you can write more comprehensible documentation (ie: the complexity is in the overhead of the implementation, not inherent to the problem space), then you can fix the code to make it more comprehensible (see the brief sketch after this list)
  • If the problem space is inherently complex, then side-by-side documentation will not be less complex, and the code itself is often just as easy for a developer to read and understand as any other form of documentation
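
To make that first point concrete, here is a trivial sketch (the names and logic are invented purely for illustration) of moving the explanation out of prose and into the code itself:

// Before: the intent lives in a comment (or an external doc) that can rot.
double calc(double p, double r, double m)
{
    double d = p * (1 - r); // discounted price, floored at the minimum
    return d < m ? m : d;
}

// After: the code carries the same explanation directly; no prose needed.
double DiscountedPrice(double basePrice, double discountRate, double minimumPrice)
{
    double discounted = basePrice * (1.0 - discountRate);
    return discounted < minimumPrice ? minimumPrice : discounted;
}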

Wait, what did we just conclude?

We just concluded, based on an objective analysis of all the various design considerations, that the best way to document a software implementation is to not do any documentation at all, because every single thing you could do is worse than just allowing the code to be self-documenting. You should improve the structure of the code as applicable and possible, and then tell your management that the task is done, and the complete functional documentation is in the repository, ready to be consumed by any and all future developers. Then maybe find some productive work to do.

Note: Selling this to managers, particularly bad ones, might be the hardest part here, so I'm knowingly being slightly flippant. However, I do think the conclusion above is correct in general: wherever possible within an org, code should be self-documenting, and any other form of documentation for an implementation is strictly worse than this approach.

PS: I'm aware that some people who read this post probably have already internalized this, as this is fairly common knowledge in the industry, but hopefully it was at least a somewhat entertaining post if you made it this far and already were well-aware of what the "right" answer here was. For everyone else, hopefully this was informative. :)

Wednesday, October 16, 2024

Just say "no" to code freezes

One of the more insightful conclusions I've reached in my career, if perhaps also one of the more controversial opinions, is that you should always say "no" to code freezes (at least in an optimal development methodology). This isn't always possible, of course; depending on where you are in influence and decision making, you may have only some, or effectively no, input into this decision. However, to the extent that you are prompted with this question and have some input, my advice is to always push back, and I'll elaborate on this below.

The case for code freezes

I've heard a number of different justifications for why people want code freezes, and these desires come in a few forms. Some examples:

  • We need a code freeze
  • We need to defer some code changes for a bit, just until...
  • We need to hold off on this going in because...
  • We have a release schedule where no changes go in during this period
  • etc.

Typically, the justification for the ask is achieving some desired amount of code stability at the expense of velocity, while some "critical" process goes on. The most common case for this is QA for a release, but there are also cases where critical people might be out, before holidays where support might be lacking, etc. In my experience, these asks are also almost always via management, not development, under the pretense that the operational change is necessary to coordinate with other teams and such.

Note that this is, de facto, antithetical to Agile; if you're practicing Agile software development, you're not doing the above, and conversely if you're doing the above, you're not doing Agile. I mention this as an aside, because this is one area where teams and orgs fail at Agile quite regularly.

The reasons this is bad

Any time you're implementing a code freeze, you are impacting velocity. You are also increasing the risk of code conflicts, discouraging continuous improvement of code, and likely increasing overhead (eg: resolving merge conflicts in ongoing work). Furthermore, this can create a strong incentive to circumvent normal workflows and methodologies, by introducing side-band processes for changes during a "code freeze", which can be even worse (eg: "we need to push this change now, we can't follow the normal QA methodology, because we're in a code freeze").

Side anecdote: at a previous company, the manager insisted on a three-month code freeze before each release. During this time, QA was "testing", but since QA overlapped with sales support, this was also where all the sales support enhancement requests were injected into the dev queues, as "critical fixes for this release". In essence, the code freeze would have allowed this part of the business to entirely hijack and bypass the normal PM-driven prioritization mechanism for enhancements, and divert the entire dev effort to their own whims, if not for some push back from dev on the freeze itself (see suggestions below).

Note that this ask is very common; in particular, short-sighted managers (and/or people with higher priority localized goals than overall business success) ask for these fairly frequently, in my experience. It's often the knee-jerk reaction to wanting more code stability, from those who have a myopic view of the overall cost/benefit analysis for process methodologies.

Alternatives and suggestions

To the extent that you have influence on your process, I'd suggest one of the following alternatives when a code freeze is suggested.

Just say "no"

The most preferable outcome is to be able to convince management that this is not necessary, and continue development as normal. This is unlikely in most cases, but I'll list it anyway, because it is still the best option generally, where possible. In this case, I'd suggest emphasizing that better code stability is best achieved by incremental improvements and quick turnaround fixes, as well as better continuous integration testing, and not code divergence. This argument is often not convincing, though, particularly to people with less overall development experience, and/or higher priority myopic goals. It may also not be feasible, given overall org requirements.

Create a branch for the "freeze"

The easiest and cleanest workaround, generally, is to appease the ask by creating a one-off branch for the freeze, and allowing testing (or whatever else) to be done on the branch, while normal development continues on the mainline. This is the closest to the Agile methodology, and can allow the branch to become a release branch as necessary. Note that this approach can often require ancillary process updates; for example, pipelines which are implicitly associated with the mainline may need to be adjusted to the branch. But generally, this approach is the most preferable when a freeze is deemed necessary.

Note that the typical drawback/complication with this approach is that developers will frequently be asked to make some changes in parallel in this scenario (ie: on the mainline and the freeze branch). In this case, I suggest mandating that changes happen on the mainline first, then are ported on-demand to the branch. Ideally, this porting would be done by a group with limited resources (to discourage demands for numerous changes to be ported to a "frozen" branch). For extended QA testing, this might encourage re-branching from the mainline rather than porting extensively, which is generally preferable if many changes are asked for during the "freeze".

Create a parallel mainline

This is functionally identical to creating a branch for the "frozen" code, but can be more palatable for management, and/or compatible with ancillary processes. In essence, in this scenario, dev would create a "mainline_vNext" (or equivalent) branch when a code freeze for the mainline is mandated, and shift active development to this branch. When the code freeze is lifted, this would then become the mainline again (via branch rename or large merge, whichever is easier).

This approach, as with the above, also induces overhead of parallel development and merging changes across branches. But it satisfies the typical ask of "no active development on the mainline".

Exceptions, or when a real freeze might be necessary

I haven't seen many examples of this, but the one example I have seen is where a company has a truly real-time CI/CD pipeline, where any changes flow directly to production if all tests pass, there is no mechanism to freeze just the production output, and disruptions to production operations would be catastrophic. In this specific scenario, it might be net positive to have a short code freeze during this period, if the risks cannot be mitigated any other way. In this case, the cost to the org (in productivity) might be justified by the risk analysis, and as long as the time period and process are carefully controlled, this seems like a reasonable trade-off.

The ideal

Included just for reference: what I would do if I were dictating this process within a larger org.

  • Allow branches for QA testing and/or stability wants
  • Limit resources for merging into branches (QA or historical)
    • Ideally, have a separate support dev team for all historical branches
    • Encourage re-branching from mainline if the merging asks exceed allocated resources
  • Configure pipelines to be flexible (ie: allow release candidate testing on branches, production deployment from release branch, etc.)
  • Mandate no code freeze ever on the mainline (always incremental changes)
    • Solve any asks for this via alternative methods
  • Encourage regular and ancillary integration testing on the mainline (ie: dogfooding)

Anyway, those are my [current] opinions on the matter, fwiw.


Wednesday, October 9, 2024

Real talk about career trajectories

Almost every time I scroll through LinkedIn, I run into one or more posts with some variation of the following:

  • You're wasting your life working for someone else, start your own business...
  • Invest in yourself, not traditional investments, so you can be super successful...
  • [literally today] Don't save for retirement, spend that money on household help so you can spend your time learning new skills...
  • etc.

These all follow the same general themes: investing in yourself allows you to start your own business, which allows you to get wealthy, and if you're working a 9-5 and doing the normal, recommended, financially responsible things, you're doing it wrong and will always be poor. I'm going to lay out my opinion on this advice, relate it to the career/life path I'm on, and explain what I would personally recommend.

Let's talk risk

The main factor that most people gloss over, when recommending this approach to a career, is the element of risk. When last I read up on this, at least in the tech sector, roughly 1/20 startups get to an initial (non-seed) funding round, and roughly 1/20 of those get to the point where founders can "cash out" (ie: sold, public, profitable, etc.). That's a huge number of dead bodies along the way, and the success stories in the world are the outliers. When you hear about anyone's success with investing in themselves and becoming wealthy, there's a very heavy selection bias there.

This is where personal wealth comes into play (eg: family wealth, personal resources, etc.), and why people who succeed at starting businesses usually come from money. For someone without financial resources, not working a steady job is a large financial risk: you usually won't make much money, and if your endeavor doesn't pan out, you might be left homeless (for example). People with resources already can take that risk, repeatedly, and still be fine if it doesn't work out; normal people cannot. Starting a business is expensive, often in outlay costs in addition to opportunity costs. Personal resources mitigate that risk.

There's also an element of luck in starting a business: even if you have a good idea, good skills, good execution, etc., some of the chance of overall success will still be luck. This is something where you can do everything "right", and still fail. This risk is random, and cannot really be mitigated.

The other risk factor in terms of owning and running a business comes later, if/when the business becomes viable. As a business owner, your fortunes go up or down based on the value of the business, and often you'll be personally liable for business debts when the company is smaller. In contrast, an employee is hired for a specific wage and/or compensation agreement, and that is broadly dependent on just performing job functions, not how successful that makes the business overall. Moreover, employees can generally ply their skills for any business, so if the business becomes bad to work at, they have mobility, whereas the business owner does not. Again, this is the risk factor.

But you still want to be wealthy...

Okay, so here's my advice to put yourself in the best position to be wealthy:

  1. Get really good at networking
  2. Buy rental property

#1 is the most important factor in overall wealth potential, in my opinion. In addition to being the primary factor in many types of jobs (eg: sales, RE agent), networking will get you opportunities. Also, being good at networking will make you good at talking to people, which is the primary job skill for managers. Since managers are typically compensated better than skilled workers, and are almost always the people who get elevated to C-level roles, this is the optimal career path in general for becoming wealthy, even if you don't start your own business.

#2 is the best possible investment type, at least in the US, because the government has made hoarding property the single most tax-advantaged investment possible. It is an absolutely awful social policy in general, as it concentrates wealth among the very rich while creating durable wealth inequality, and makes housing unaffordable for much of the middle class. The tax policy is the absolute pinnacle of corruption and/or stupidity in government policy in the US... but rental property is unequivocally the best investment class as a result of that policy, and unless you're in a position to compromise your own wealth for idealism, this is what you should invest in.

Parting thoughts

Most people will be employees (vs self-employed or business owners). If you are in a position and of the mindset to start a business, and are comfortable with the risk, that is a path you can choose. If that's not for you (as it's not for most people), but you still want the best chance of being reasonably wealthy, get really good at talking to people and maintaining social connections, and go into management: it's significantly easier than skilled or manual labor, and happens to pay significantly better also. But at the end of the day, keep in mind that you don't have to luck into the 0.001% who are positioned to start, and succeed at, their own business in order to make enough money for a comfortable life, and there is more to life than just making lots of money.


Saturday, October 5, 2024

The value of knowing the value

Something I've been musing about recently: there's a pretty high value, for managers, in knowing the value of people within a business. I'm kinda curious how many companies (and managers within said companies) actually know the value of their reports, beyond just the superficial stuff (like job level).

Motivating experience: people leave companies. Depending on how open the company is with departures, motivations, internal communications, etc., this may be somewhat sudden and/or go unnoticed for some time. Sometimes the people who leave have specific historical knowledge which is not replicated anywhere else in the company, or general area expertise which cannot easily be replaced, or are fulfilling roles which would otherwise be more costly to the organization if that work were not done, etc. Sometimes these departures can have significant costs for a company, beyond just the lost nominal productivity.

Note that departures are entirely normal: people move on, seek other opportunities, get attractive offers, etc. As a company, you cannot generally prevent people from leaving, and indeed many companies implicitly force groups out the door sometimes (see, for example, Amazon's RTO mandates and managing people out via underhanded internal processes). However, in concept a company would usually want to offer to pay what a person is worth to the company to retain them, and would not want someone to walk away whom they would have been willing to pay enough to entice to stay. That's where knowing value comes into play: in order to do that calculation, you would need the data, inclusive of all the little things which do not necessarily make it into a high-level role description.

Not having been a manager (in any sort of serious sense), I don't really have a perspective on how well this is generally understood within companies, but my anecdotal experience would suggest that it is generally not tracked very well. Granted, everyone is replaceable in some sense, and companies do not want to be in a position where they feel extorted, but the converse is that effective and optimal people management means "staying ahead" of people's value in terms of proactive rewards and incentives. I'd imagine that even a company which treats their employees as cattle would be displeased if someone they would have been happy to retain at a higher rate walked, just because their management didn't accurately perceive their aggregate value to the organization.

All of this is to say: there's a high value for an organization in having an accurate sense of the value that people bring to it, in order to optimally manage the business relationship with those people. If people leave your company, and you figure out later that they had more value than you were accounting for, that's a failure of management within the organization, and might be a costly one.

Addendum: Lest anyone think this is in relation to myself and my current position, it is not. However, I have been valued both more and less than I perceived my value as at various points in my career, so I know there can be a disconnect there, and I've seen organizations lose valuable expertise and not realize it until the people were gone. I would surmise that this might be more of a general blind spot than many companies realize.

Sunday, September 8, 2024

Zero cost abstractions are cool

There is a language design principle in C++ that you should only pay for what you use; that is, that the language should not be doing "extra work" which is not needed for what you are trying to do. Often this is extrapolated by developers to imply building "simple" code, and only using relatively primitive data structures, to avoid possible runtime penalties from using more advanced structures and methodologies. However, in many cases, you can get a lot of benefits by using so-called "zero cost abstractions", which are designed to be "free" at runtime (they are not entirely zero cost; I'll cover the nuance in an addendum). These are cool, imho, and I'm going to give an example of why I think so.

Consider a very simple function result paradigm, somewhat ubiquitous in code from less experienced developers: returning a boolean to indicate success or failure:

bool DoSomething(); 

This works, obviously, and fairly unambiguously represents the logical result of the operation (by convention, technically). However, it also has some limitations: what if I want to represent more nuanced results, for example, or pass back some indication of why the function failed? These are often trade-offs made for the ubiquity of a standard result type.

Processors pass result codes by way of a register, and the function/result paradigm is ubiquitous enough that this is well-supported by all modern processors. Processor register sizes are dependent on the architecture, but should be at least 32 bits for any modern processor (and 64 bits for almost all new processors). So, when passing back any result code, passing 64 bits of data costs the same as passing the one bit in the example above, runtime-performance wise. So we can rewrite our result paradigm as this, without inducing any additional runtime overhead:

uint64_t DoSomething();

Now we have an issue, though: we have broken the ubiquity of the original example. Is zero success now (as would be more common in C++ for integer result types)? What do other values mean? Does each function define this differently (eg: a custom enum of potential result values per-function)? While we have added flexibility, we have impacted ubiquity, and potentially introduced complexity which might negate any other value gained. This is clearly not an unambiguous win.
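
To make the ambiguity concrete, consider a small contrived sketch (the function names here are invented purely for illustration):

#include <cstdint>

// Two hypothetical functions, each with its own (undocumented) convention:
uint64_t OpenWidget();   // 0 on success, nonzero error code on failure?
uint64_t CountWidgets(); // the count of widgets; is 0 a failure, or just "none"?

void Caller()
{
    if (OpenWidget() == 0)
    {
        // Success... probably, if this function follows the common convention.
    }
    if (CountWidgets() == 0)
    {
        // Failure, or simply an empty result? The type alone can't say.
    }
}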

However, we can do better. We can, for example, not use a numeric type, but instead define a class type which encapsulates the numeric type (and is the same size, so it can still be passed via a single register). Eg:

#include <cstdint>

class Result
{
public:
    explicit Result(int64_t nValue) : m_nValue(nValue) {}
    bool isSuccess() const { return m_nValue >= 0; }
    bool isFailure() const { return m_nValue < 0; }
private:
    int64_t m_nValue; // negative values indicate failure, by convention
};

Now we can restore ubiquity in usage: callers can use ".isSuccess()" and/or ".isFailure()" to determine if the result was success or failure, without needing to know the implementation details. Even better: this also removes any lingering ambiguity from the first example, as we now have methods which clearly spell out intent in readable language. Also, importantly, this has zero runtime overhead: an optimizing compiler will inline these methods to be assembly-equivalent to manual checks.

Result DoSomething();

//...
auto result = DoSomething();
if (result.isFailure())
{
    return result;
}
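
As an aside, the "same size as a register" property is easy to verify at compile time; a quick sanity check along these lines is cheap insurance:

// Compile-time check that wrapping the integer in a class added no size
// overhead, so a Result can still be returned in a single register on
// typical ABIs.
static_assert(sizeof(Result) == sizeof(int64_t),
              "Result should remain register-sized");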

This paradigm can be extended as well, of course. Now that we have a well-defined type for result values, we could (for example) define some of the bits as holding an indicative value for why an operation failed, and then add inline methods to extract and return those codes. For example, one common paradigm from Microsoft uses the lower 16 bits to encapsulate the Win32 error code, while the high bits carry information about the error disposition and the component area which generated the error. This can also be used to express nuance in success values; for example, an operation which "succeeded", but which had no effect, because preconditions were not satisfied.
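
As a sketch of what such an extension might look like, here is the Result class from above with a couple of accessors added; the bit layout is invented for the example, and is not Microsoft's actual encoding:

#include <cstdint>

class Result
{
public:
    explicit Result(int64_t nValue) : m_nValue(nValue) {}
    bool isSuccess() const { return m_nValue >= 0; }
    bool isFailure() const { return m_nValue < 0; }

    // Illustrative layout (not a real standard): the low 16 bits carry a
    // component-specific error code, and the next 16 bits identify the
    // component area which produced the result.
    uint16_t errorCode() const
    {
        return static_cast<uint16_t>(static_cast<uint64_t>(m_nValue) & 0xFFFF);
    }
    uint16_t component() const
    {
        return static_cast<uint16_t>((static_cast<uint64_t>(m_nValue) >> 16) & 0xFFFF);
    }

private:
    int64_t m_nValue;
};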

Moreover, if used fairly ubiquitously, this can be used to easily propagate unexpected error results up a call stack as well, as suggested above. One could, if inclined, add macro-based handling to establish a standard paradigm of checking for and propagating unknown errors, and with the addition of logging in the macro, the code could also generate a call stack on those cases. That compares fairly favorably to typical exception usage, for example, both in utility and runtime overhead.
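
For illustration, a minimal sketch of what such a macro might look like, assuming the Result class from above (the macro name and the bare fprintf logging are placeholders for whatever an org would actually standardize on):

#include <cstdio>

// Hypothetical helper: evaluate an expression producing a Result; on failure,
// log the location and propagate the result to the caller. Each frame that
// uses the macro adds a line to the log, yielding a rough call-stack trace.
#define RETURN_IF_FAILED(expr)                                  \
    do                                                          \
    {                                                           \
        Result _res = (expr);                                   \
        if (_res.isFailure())                                   \
        {                                                       \
            std::fprintf(stderr, "Failure at %s:%d: %s\n",      \
                         __FILE__, __LINE__, #expr);            \
            return _res;                                        \
        }                                                       \
    } while (0)

Result DoSomethingElse(); // hypothetical callee

Result DoLargerOperation()
{
    RETURN_IF_FAILED(DoSomethingElse());
    return Result(0); // success
}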

So, in summary, zero cost abstractions are cool, and hopefully the above has convinced you to consider that standard result paradigms are cool too. I am a fan of both, personally.

Addendum: zero cost at runtime

There is an important qualification to add here: "zero cost" applies to runtime, but not necessarily to compile time. Adding structures implies some amount of compilation overhead, and with some paradigms this can be non-trivial (eg: heavy template usage). While the standard result paradigm above is basically free and highly recommended, it's always important to also consider the compile-time overhead, particularly when using templates which may have a large number of instantiations, because there is a small but non-zero cost there. The more you know.



Wednesday, July 17, 2024

Why innovation is rare in big companies

I have worked for small and large tech companies, and as I was sitting through the latest training course this week on the importance of obtaining patents for business purposes (a distasteful but necessary thing in the modern litigious world), I was reflecting on how much more innovative smaller companies tend to be. This is, of course, the general perception as well, but it's really the realities of how companies operate which motivate this outcome, and this is fairly easy to see when you've worked in both environments. So, I'm going to write about this a bit.

As a bit of background, I have two patents to my name, both from when I worked in a small company. Both are marginally interesting (more so than the standard big company patents, anyway), and both are based on work I did in the course of my job. I didn't strive to get either; the company just obtained them after the fact for business value. I have no idea if either has ever been tested or leveraged, aside from asset value on a spreadsheet.

Let's reflect a bit on what it takes to do projects/code at a typical big company. First, you need the project to be on the roadmap, which usually requires some specification, some amount of meetings, convincing one or more managers that the project has tangible business value, constraining the scope to only provable value, and getting it into the process system. Then you have the actual process, which may consist of ticket tracking, documents, narrowing down scope to reduce delivery risk, getting various approvals, doing dev work (which may be delegated depending on who has time), getting a PR done, getting various PR approvals (which usually strip out anything which isn't essential to the narrow approved scope), and getting code merged. Then there is usually some amount of post-coding work, like customer docs, managing merging into releases, support work, etc. That's the typical process in large companies, and is very reflective of my current work environment, as an example.

In this environment, it's very uncommon for developers to "paint outside the lines", so to speak. To survive, mentally and practically, you need to adapt to the process of being a cog who is given menial work to accomplish within a very heavyweight process-oriented system, and is strongly discouraged from trying to rock the boat. Work gets handed down, usually defined by PM's and approved by managers in planning meetings, and anything you do outside of that scope is basically treated as waste by the organization, to be trimmed by all the various processes and barriers along the way. This is the way big companies operate, and given enough resources and inherent inefficiencies, it works reasonably well to maintain and gradually evolve products in entirely safe ways.

It is not surprising that this environment produces little to no real innovation. How could it? It's actively discouraged by every process step and impediment which is there by design.

Now let's consider a small company, where there are typically limited resources, and a strong driving force to build something which has differentiated value. In this environment, developers (and particularly more senior developers) are trusted to build whatever they think has the most value, often by necessity. Some people will struggle a lot in this environment (particularly those who are very uncomfortable being self-directed); others will waste some effort, but good and proactive developers will produce a lot of stuff, in a plethora of directions. They will also explore, optimize, experiment with different approaches which may not pan out, etc., all virtually unconstrained by process and overhead. In this environment, it's not uncommon to get 10x productivity from each person, with only half of that work actually ending up being used in production (compared to the carefully controlled work product in larger companies).

But, small companies get some side-benefits from that environment also, in addition to the overall increases in per-person effective productivity. Because developers are experimenting and less constrained by processes and other people's priorities, they will often build things which would never have been conceived of within meetings among managers and PM's. Often these are "hidden" things (code optimizations, refactoring to reduce maintenance costs, process optimizations for developer workflows, "fun" feature adds, etc.), but sometimes they are "interesting" things, of the sort which could be construed as innovations. It is these developments which will typically give rise to actual advances in product areas, and ultimately lead to business value through patents which have meaning in the markets.

Now, I'd be remiss to not mention that a number of companies are aware of this fact, and have done things to try to mitigate these effects. Google's famous "20% time", for example, was almost certainly an attempt to address this head-on, by creating an internal environment where innovation was still possible even as the company grew (note: they eventually got too large to sustain this in the face of profit desires from the market). Some companies use hackathons for this, some have specific groups or positions which are explicitly given this trust and freedom, etc. But by and large, they are all just trying to replicate what tends to happen organically at smaller companies, which do not have the pressure or resources to build all the systems and overhead that get in the way of their own would-be success.

Anyway, hopefully that's somewhat insightful as to why most real innovation happens in smaller companies, at least in the software industry.


Friday, July 5, 2024

How not to do Agile

Note: This post is intended to be a tongue-in-cheek take, based on an amalgam of experiences and anecdotes, and is not necessarily representative of any specific organization. That said, if your org does one or more of these things, it might be beneficial to examine if those practices are really beneficial to the org or not.

The following is a collection of things you should not do when trying to do Agile, imho; these practices either run counter to the spirit of the methodology, will likely impede realizing the value of adopting it, and/or demonstrate a fundamental misunderstanding of the concept(s).

Mandating practices top-down

One of the core precepts of Agile is that it is a "bottom-up" organization system, which is intended to allow developers to tweak the process over time to optimize their own results. Moreover, it is very important in terms of buy-in for developers to feel like the process is serving the needs of development first and foremost. When mandated from outside of development, even an otherwise optimal process might not get enough support over time to be optimally adopted and followed.

It is very often a sign of "Agile in name only" within organizations when this is mandated by management, rather than adopted organically (and/or with buy-in across the development teams). This is one of the clearest signals that an organization either has not bought into Agile, and/or the management has a fundamental misunderstanding of what Agile is.

Making your process the basis of work

One of the tenets of Agile is that process is there in service of the work product, and should never be the focus of efforts. As such, if tasks tend to revolve around process, and/or are dependent on specific process actions, this should be considered a red flag. Moreover, if developers are spending a non-trivial amount of time on process-related tasks, this is another red flag: in Agile, developers should be spending almost all their time doing productive work, not dealing with process overhead.

One sign that this might be the case is if/when workflows are heavily dependent on the specifics of the process and/or tooling, as opposed to the logical steps involved in getting a change done. For example, if a workflow starts with "first, create a ticket...", this is not Agile (at least in spirit, and probably in fact). If the workflow is not expressed in terminology which is process and tooling independent, the org probably isn't doing Agile.

Tediously planning future release schedules

Many organizations with a Waterfall mindset always plan out future releases, inclusive of which changes will be included, what is approved, what is deferred, etc. This (of course) misses the point of Agile entirely, since (as encapsulated in the concept of Agile) you cannot predict the timeline for changes of substance, and this mentality makes you unable to adapt to changing circumstances and/or opportunities (ie: be "agile"). If your organization is planning releases with any more specificity than a general idea of what's planned for the next release, and/or it would be a non-trivial effort to include changes of opportunity in a release, then the org isn't doing Agile.

Gating every software change

The Agile methodology is inherently associated with the concept of Continuous Improvement, and although the two can be separated conceptually, it's hard to imagine an Agile environment which did not also emphasize CI. Consequently, in an Agile environment, small incremental improvements are virtually always encouraged, both explicitly via process ideals, and implicitly via low barriers. Low barriers are, in fact, a hallmark of organizations with high code velocity, and of effectively all highly productive Agile dev teams.

Conversely, if an organization has high barriers in practice to code changes (process-wise or otherwise), and/or requires tedious approvals for any changes, this is a fairly obvious failure in terms of being Agile. Moreover, it's probably a sign that the organization is on the decline in general, as projects and teams where this is the prevailing mentality and/or process tend to be fairly slow and stagnant, and usually in "maintenance mode". If this doesn't align with management's expectations for the development of a project, then the management might be poor.

Creating large deferred aggregate changes

One of the precepts of Agile is biasing toward small, incremental changes, which can be integrated and tested early as self-contained units. Obviously, large deferred aggregate changes are the antithesis of this. If your organization has a process which encourages or forces changes to be deferred and/or grow in isolation, you're certainly not doing Agile, and might be creating an excessive amount of wasteful overhead also.

Adding overhead in the name of "perfection"

No software is perfect, but that doesn't stop bad/ignorant managers from deluding themselves with the belief that by adding enough process overhead, they can impose perfection upon a team. Well functioning Agile teams buck this trend through self-organization and control of their own process, where more intelligent developers can veto these counter-productive initiatives from corporate management. If you find that a team is regularly adding more process to try to "eliminate problems", that's not only not Agile, but you're probably dealing with some bad management as well.

Having bad management

As alluded to in a number of the points above, the management for a project/team has a huge impact on the overall effectiveness of the strategies and processes. Often managers are the only individuals who are effectively empowered to change a process, unless that ability is clearly delegated to the team itself (as it would be in a real Agile environment). In addition to this, though, managers shape the overall processes and mindsets, in terms of how they manage teams, what behaviors are rewarded or punished, how proactively and clearly they articulate plans and areas of responsibility, etc. Managers cannot unilaterally make a team and/or process function well, but they can absolutely make a team and/or process function poorly.

Additionally, in most organizations, managers end up being ultimately responsible for the overall success of a project and/or team, particularly when in a decision-making role, because they are the only individuals empowered to make (or override) critical decisions. A good manager will understand this responsibility, and work diligently to delegate decisions to the people most capable of making them well, while being proactively vigilant for intrusive productivity killers (such as heavy process and additional overhead). Conversely, a bad manager either makes bad decisions themselves, or effectively abdicates this responsibility through inaction and/or ignorance, and allows bad things to happen within the project/team without acknowledging responsibility for those events. If the project manager doesn't feel that they are personally responsible for the success of the product (perhaps in addition to others who also feel that way), then that manager is probably incompetent in their role, and that project is likely doomed to failure in the long run unless they are replaced.

Take home

Agile is just one methodology for software development; there are others, and anything can work to various degrees. However, if you find yourself in a position where the organization claims to be "agile", but exhibits one or more of the above tendencies, know that you're not really in an organization which is practicing Agile, and their self-delusion might be a point of concern. On the other hand, if you're in a position to influence and/or dictate the development methodology, and you want to do Agile, make sure you're not adding or preserving the above at the same time, lest you be the one propagating the self-delusion. Pick something that works best for you and your team, but make an informed choice, and be aware of the trade-offs.