Codeslinger — Nick

<h3>The problem with process (2024-03-25)</h3>
<p>Note: This post might more accurately be titled "one problem with process", but I thought the singular had more impact, so there's a little literary license taken. Also, while this post is somewhat inspired by some of my work experiences, it does not reflect any particular person or company, but rather a hypothetical generalized amalgamation.</p>
<p>There's an adage within management which states that process is a tool which makes results repeatable. The unpacking of that sentiment is that if you achieve success once, it might be a fluke: dependent on timing or the environment, dependent on specific people, etc. But if you have a process which works, you can repeat it mechanically, and achieve success repeatedly and predictably. This is the mental framework within which managers add process to every facet of business over time, hoping to "automate success".</p>
<p>Sometimes it works, sometimes it doesn't. Process is often used to automate against failure as well, by codifying workflows which avoid perceived and/or historical breakdown points. This, more often than not, is where there be landmines.</p>
<p>Imagine a hypothetical: you're a manager, grappling with a typical problem of quality and execution efficiency. You want to increase the former without sacrificing the latter (and ideally, while increasing the latter as well). Quality problems, as you <i>know</i>, come from rushing things into production without enough checks and sign-offs; process can fill that gap easily. But you also <i>know</i> that with enough well-defined process, people become more interchangeable in their work product, and can seamlessly transition between projects, allowing you to optimally allocate resources (in man-months), and increase overall execution efficiency.</p>
<p>So you add process: standard workflows for processing bugs, fields in the tracking system for all the metrics you want to measure, a detailed workflow that captures every state of every work item that is being worked, a formalized review process for every change, sign-offs at multiple levels, etc. You ensure that there is enough information populated in your systems such that any person can take over any issue at any time, and you'll have full visibility into the state of your org's work at all times. Then you measure your metrics, but something is wrong: efficiency hasn't increased (which was expected; it will take time for people to adjust to the new workflows and input all the required data into the systems), but quality hasn't increased either. Clearly something is still amiss.</p>
<p>So you add more process: more stringent and comprehensive testing requirements, automated and manual; at least two developers and one manager reviewing every change which goes into the code repository; formalized test plans which must be submitted and attested to along with change requests; more fields to indicate responsible parties at each stage; more automated static analysis tools; etc. To ensure that the processes are followed, you demand accountability, tying sign-off for various stages to performance metrics for responsible employees.
Then you sit back and watch, sure that this new process is sufficient to guarantee positive results.</p>
<p>And yet... still no measurable improvement in overall perceived product quality. Worse, morale is declining: many employees feel stifled by the new requirements (as they should; those employees were probably writing the bugs before), they are spending large amounts of time populating the process data, and it's taking longer to get fixes out. This, in turn, is affecting customer satisfaction; you try to assure customers that the increased quality will compensate for the longer lead times, but privately your metrics do not actually support this either. The increased execution efficiency is still elusive as well: all the data is there to move people between projects seamlessly, but for some reason people still suffer a productivity hit when transitioned.</p>
<p>Clearly what you need is more training and expertise, so you hire a Scrum master, and contract for some Scrum training classes. Unsure where everyone's time is actually going, you insist that people document their work time down to 10-minute intervals, associating each block of time with the applicable ticket, so that time can be tracked and optimized in the metrics. You create tickets for everything: breaks, docs, context switches, the works. You tell your underling managers to scrutinize the time records, and find out where you are losing efficiency, and where you need more process. You scour the metrics, hoping that the next required field will be the one which identifies the elusive missing link between the process and the still-lacking quality improvements.</p>
<p>This cycle continues, until something breaks: the people, the company, or the process. Usually it's one of the first two.</p>
<p>In the aftermath, someone asks what happened. Process, metrics, KPI's: these were the panaceas which were supposed to lead to the nirvana of efficient execution and high quality, but paradoxically, the more that were added, the more those goals seemed to suffer. Why?</p>
<p>Aside: If you know the answer, you're probably smarter than almost all managers in most large companies, as the above pattern is what I've seen (to some degree) everywhere. Below I'll give my take, but it is by no means "the answer", just an opinion.</p>
<p>The core problem with the above, imho, is a misunderstanding of what leads to quality and efficiency. Quality, as it turns out, comes from good patterns and practices, not gating and process. Good patterns and practices can come from socializing that information (from people who have the knowledge), but more often than not they come from practice and learned lessons. The quantity of practice and learned lessons comes from velocity, which is the missing link above.</p>
<p>Process is overhead: it slows velocity, and decreases your ability to improve. Some process can be good, but only when the value <i>to the implementers</i> exceeds the cost. This is the second major problem in the above hypothetical: adding process for the benefit of the overseers is rarely if ever beneficial. If the people doing the work don't think the process has value <i>to them</i>, then it almost certainly has net negative value to the organization. Overseers are overhead; their value is only realized if they can increase the velocity of the people doing the work, and adding process rarely does this.</p>
<p>Velocity has another benefit: it also increases perceived quality and efficiency.
The former happens because all software has bugs, but what customers perceive is how many bugs escape to production, and how quickly they are fixed. By increasing velocity, you can achieve pattern improvement (aka: continuous improvement) in the code quality itself. This decreases the number of overall issues as a side-effect of the continuous improvement process (both in code and in culture), with a net benefit which generally exceeds any level of gating, without any related overhead. If you have enough velocity, you can even increase automated test coverage, for "free".</p>
<p>You're also creating an environment of learning and improvement, lower overhead, fewer restrictions, and more drive to build good products among your employees who build things. That tends to increase morale and retention, so when you have an issue, you are more likely to still have the requisite tribal knowledge to quickly address it. This is, of course, a facet of the well-documented problem with considering skill/knowledge workers in terms of interchangeable resource units.</p>
<p>Velocity is the missing link: being quick, with low overhead, and easily pivoting to what is important, without trying to formalize and/or add process to everything. There was even a movement a while ago which captured at least some of the ideals fairly well, I thought: it was called Agile Development. It seems like a forgotten ideal in the environments of KPI's, metrics, and top-heavy process, but it's still around, at least in some corners of the professional world. If only it didn't virtually always get lost with "scale", formalization, and adding "required" process on top of it.</p>
<p>Anyway, all that is a bit of rambling, with which I hope to leave the reader with this: if you find yourself in a position where you have an issue with quality and/or efficiency, and you feel inclined to add more process to improve those outcomes, consider carefully whether that will be the likely actual outcome (and as necessary, phone a friend). Your org might thank you eventually.</p>

<h3>Some thoughts on budget product development, outsourcing (2024-03-17)</h3>
<p>I've been thinking a bit about the pros and cons of budget/outsourcing product development in general. By this I mean two things, broadly: either literally outsourcing to another org/group, or conducting development in regions where labor is cheaper than where your main development would be conducted (the latter being, presumably, where your main talent and expertise resides). These are largely equivalent in my mind and experience, so I'm lumping them together for purposes of this topic.</p>
<p>The discussion has been top-of-mind recently, for a few reasons. One of the main "headline" reasons is all the issues that Boeing is having with their airplanes; Last Week Tonight had a good episode about how aggressive cost-cutting efforts have led to the current situation there, where inevitable quality control issues are hurting the company now (see: https://www.youtube.com/watch?v=Q8oCilY4szc). The other side of this same coin, which is perhaps more pertinent to me professionally, is the proliferation of LLM's to generate code (aka: "AI agents"), which many people think will displace traditional, more highly-compensated human software developers.
I don't know how much of a disruption to the industry this will eventually be, but I do have some thoughts on the trade-offs of employing cheaper labor in an organization's product development.</p>
<p>Generally, companies can "outsource" any aspect of product development, and this has been an accessible practice for some time. It is very common in various industries, especially for so-called "commoditized" components; for example, the automobile industry has an entire sub-industry for producing all the various components which are assembled into automobiles, usually acquired from the cheapest vendors. This is generally possible for any components which are not bespoke, across any industry with components which are standardized and can be assembled into larger products.</p>
<p>Note that this is broadly true in the software context as well: vendors sell libraries with functionality, open source libraries are commonly aggregated into products, and component re-use is fairly common in many aspects of development. This can even be a best practice in many cases, if the component library is considered near the highest quality and most robust implementation of the functionality (see: the standard library in C++, for example). Using a robust library which is well-tested across various usage instances can be a very good strategy.</p>
<p>Unfortunately, this is less true in the hardware component industries, since high-quality hardware typically costs more (in materials and production costs), so it's generally less feasible to use the highest quality components from a cost perspective. There is a parallel in first-party product development, where your expected highest quality components will usually cost more (due to the higher costs for the people who produce the highest quality components). Thus, most businesses make trade-offs between quality and costs, and where quality is not a priority, they tend to outsource.</p>
<p>The danger arises when companies start to lose track of this trade-off, and/or misunderstand the trade-offs they are making, and/or sacrifice longer-term product viability for short-term gains. Each of these can be problematic for a company, and each is an inherent danger in outsourcing parts of development. I'll expand on each.</p>
<p>Losing track of the trade-offs is when management is aware of the trade-offs when starting to outsource, but over time these become lost in the details and the constant pressure to improve profit margins, etc. For example, a company might outsource a quick prototype, then be under market pressure to keep iterating on it, while losing track of (and not accounting for) the inherent tech debt associated with the lower quality component. This can also happen when the people tracking products and components leave, and new people are hired without knowledge of the previous trade-offs. This is dangerous, but generally manageable.</p>
<p>Worse than the above is when management doesn't understand the trade-offs they are making. This is obviously indicative of poor and incompetent management, yet time and time again companies outsource components without properly accounting for the higher long-term costs of maintaining and enhancing those components, and suffer as a result.
Boeing falls into this category: by all accounts their management thought they could save costs and increase profits by outsourcing component production, without accounting for the increased costs of integration and QA (which would normally imply higher overall costs for any shipping and/or supported product). That's almost always just egregious incompetence on the part of the company's management, of course.</p>
<p>The last point is also on display at Boeing: sacrificing long-term viability for short-term gains. While it's unlikely this was the motivation in Boeing's case, it's certainly a common MO with private equity ownership (for example) to squeeze out as much money as possible in the short term, while leaving the next owners "holding the bag" for the tech debt and such from those actions. Again, this is not inherently bad, not every company does this, etc.; this is just one way companies can get into trouble by using cheaper labor for their product development.</p>
<p>This brings me, in a roundabout way, to the topic of using LLM's to generate code, and "outsourcing" software product development to these agents. I think, in the short term, this will pose a substantial risk to the industry in general: just as executives in large companies fell in love with offshoring software development in the early 2000's, I think many of the same executives will look to reduce costs by outsourcing their expensive software development to LLM's as well. This will inevitably have the same outcomes over the long run: companies which do this, and do not properly account for the costs and trade-offs (as per above), will suffer, and some may fail as a result (it's unlikely blame will be properly assigned in these cases, but when companies fail, it's almost always due to bad executive management decisions).</p>
<p>That said, there's certainly also a place for LLM code generation in a workflow. Generally, any task which you would trust to an intern, for example, could probably be completed by an LLM, with the same quality of results. There are some advantages to using interns (eg: training someone who might get better, lateral thinking, the ability to ask clarifying questions, etc.), but LLM's may be more cost effective. However, if companies largely stop doing on-the-job training at scale, this could pose some challenges for the industry longer-term, and ultimately drive costs higher. Keep in mind: generally, LLM's are only as "good" as the sum total of average information online (aka: the training data), and this will also decline over time as LLM output pollutes the training data set.</p>
<p>One could argue that outsourcing is almost always bad (in the above context), but I don't think that's accurate. In particular, outsourcing, and the pursuit of short-term profits over quality, does serve at least two valuable purposes in the broader industry: it helps new companies get to market with prototypes quickly (even if these ultimately need to be replaced with quality alternatives), and it helps older top-heavy companies die out, so they can be replaced by newer companies with better products, as their fundamentally stupid executives make dumb decisions in the name of chasing profit margins (falling into one or more of the traps detailed above).
These are both necessary market factors, which help industries evolve and improve over time.</p>
<p>So the next time some executive talks about outsourcing some aspect of product development, either to somewhere with cheaper labor or to an LLM (for example), you can take some solace in the fact that they are probably helping contribute to the corporate circle of life (through self-inflicted harm), and that for each stupid executive making stupid decisions, there's probably another entrepreneur at a smaller company who better understands the trade-offs of cheaper labor, is looking to make the larger company obsolete, and will be looking for quality product development. I don't think that overall need is going to vanish any time soon, even if various players shuffle around.</p>
<p>My 2c, anyway.</p>

<h3>Mobile devices and security (2024-02-19)</h3>
<p>Generally, passwords are a better form of security than biometrics. There are a few well-known reasons for this: passwords can be changed, cannot be clandestinely observed, are harder to fake, and cannot be taken from someone unwillingly (eg: via government force, although one could quibble about extortion as a viable mechanism for such). A good password, used for access to a well-designed secure system, is probably the best known single factor for secure access in the world at present (with multi-factor including a password as the "gold standard").</p>
<p>Unfortunately, entering complex passwords is generally arduous and tedious, and doubly so on mobile devices. And yet, I tend to prefer using a mobile device for accessing most secure sites and systems, with that preference generally only increasing as the nominal security requirements increase. That seems counter-intuitive at first glance, but in this case the devil is in the details.</p>
<p>I value "smart security"; that is, security which is deployed in such a way as to increase protection while minimizing the negative impact on the user experience, and where the additional friction from the security is proportional to the value of the data being protected. For example, I use complex and unique passwords for sites which store data which I consider valuable (financial institutions, sensitive PII aggregation sites, etc.), and I tend to re-use passwords on sites which either don't have valuable information, or where I believe the security practices to be suspect (eg: if they do something to demonstrate a fundamental ignorance and/or stupidity with respect to security, such as requiring secondary passwords based on easily knowable data, aka "security questions"). I don't mind entering my complex passwords when the entry is used judiciously, to guard against sensitive actions, and the app/site is otherwise respectful of the potential annoyance factor.</p>
<p>Conversely, I get aggravated with apps and sites which do stupid things which do nothing to raise the bar for security, but constantly annoy users with security checks and policies. Things like time-based password expiration, time-based authentication expiration (especially with short timeouts), repeated password entry (which trains users to type in passwords without thinking about the context), authentication workflows where the data flow is not easily discernible (looking at most OAuth implementations here), etc.
demonstrate either an ignorance of what constitutes "net good" security, or a contempt for the user experience, or both. These types of apps and sites are degrading the security experience, and ultimately negatively impacting security for everyone.</p>
<p>Mobile OS's help mitigate this, somewhat, by providing built-in mechanisms to downgrade the authentication from passwords to biometrics in many cases, and thus help compensate for the often otherwise miserable user experience being propagated by the "security stupid" apps and sites. By caching passwords on the devices, and allowing biometric authentication to populate them into forms, the mobile devices are "downgrading" the app/site security to a single factor (ie: the device), but generally upgrading the user experience (because although biometrics are not as secure, they are generally "easy"). Thus, by using a mobile device to access an app/site with poor fundamental security design, the downsides can largely be mitigated, at the expense of nominal security in general. This is a trade-off I'm generally willing to make, and I suspect I'm not alone in this regard.</p>
<p>The ideal, of course, would be to raise the bar for security design for apps and sites in general, such that security was based on risk criteria and heuristics, and not (for example) on arbitrary time-based re-auth checks. Unfortunately, though, there are many dumb organizations in the world, and lots of these types of decisions are ultimately motivated or made by people who are unable or unwilling to consider the net security impact of their bad policies, and/or blocked from making better systems. Most organizations today are "dumb" in this respect, and this is compounded by standards which mandate a level of nominal security (eg: time-based authentication expiration) which makes "good" security effectively impossible, even for otherwise knowledgeable organizations. Thus, people will continue to downgrade the nominal security in the world to mitigate these bad policy decisions, with the tacit acceptance from the industry that this is the best we can do, within the limitations imposed by the business realities in decision making.</p>
<p>It's a messy world; we just do the best we can within it.</p>

<h3>The Genius of FB's Motto (2024-02-18)</h3>
<h4>Why "Move Fast and Break Things" is insightful, and how many companies still don't get it</h4>
<p>Note: I have never worked for FB/Meta (got an offer once, but ended up going to Amazon instead), so I don't have any specific insight. I'm sure there are books, interviews, etc., but the following is my take. I like to think I might have some indirect insight, since the mantra was purportedly based on observing what made startups successful, and I've had some experience with that. See: https://en.wikipedia.org/wiki/Meta_Platforms#History</p>
<p>If you look inside a lot of larger companies, you'll find a lot of process, a lot of meetings, substantial overhead with getting anything off the ground, and a general top-down organizational directive to "not break anything", and "do everything possible to make sure nothing has bugs".
I think this stems from how typical management addresses problems in general: if something breaks, it's seen as a failure or deficiency in the process [of producing products and services], and it can and should be addressed by improving the "process". This philosophy leads to the above, but that's not the only factor. For example, over time critical people move on, and that can lead to systems which everyone is afraid to touch, for fear of "breaking something" (which, per the organizational directives, is the worst thing you can do). These factors create an environment of fear, where your protection is carefully following "the process", which is an individual's shield against blame when something goes wrong. After all, deficiencies in the process are not anyone's fault, and as long as the process is continually improved, the products will continue to get better and have fewer deficiencies over time. That aggregate way of thinking is really what leads to the state described.</p>
<p>I describe that not to be overly critical: for many people in those organizations, this is an unequivocal good thing. Managers love process: it's measurable, it has metrics and dashboards, you can do schedule-based product planning with regular releases, you can objectively measure success against KPI's, etc. It can also be good for IC's, especially those who aspire to have a steady and predictable job, where they follow and optimize their work product for the process (which is usually much easier than optimizing for actual product success in a market, for example). Executives love metrics and predictable schedules, managers love process, and it's far easier to hire and retain "line workers" than creatives, especially passionate ones. As long as the theory holds (ie: that optimal process leads to optimal business results), this strategy is perceived as optimal for many larger organizations.</p>
<p>It's also, incidentally, why smaller companies can crush larger established companies in markets. The tech boom proved this out, and some people noticed. Hence, Facebook's so-called hacker mentality was enshrined.</p>
<p>"Move fast" is generally more straightforward for people to grasp: the idea is to bias to action, rather than talking about something, particularly when the cost of trying and failing is low (this is related to the "fail fast" mantra). For software development, this tends to mean there's significantly less value in doing a complex design than a prototype: the former takes a lot of work and can diverge significantly from the finished product, while the latter provides real knowledge and lessons, with less overall inefficiency. "Move fast" also encapsulates the idea that you want engineers to be empowered to fix things directly, and not go through layers of approvals and process (eg: Jira), to get to a better incremental product state sooner. Most companies have some corporate value which aligns with this concept.</p>
<p>"Break things" is more controversial; here's my take. This is a direct rebuke of the "put process and gating in place to prevent bugs" philosophy, which otherwise negates the ability to "move fast". Moreover, though, this is also an open invitation to risk product instability in the name of general improvement. It is an acknowledgement that development velocity is fundamentally more valuable to an organization than the pursuit of "perfection".
It is also an acknowledgement of the fundamental business risk of having product infrastructure which nobody is willing to touch (for fear of breaking it), and "cover" to try to make it better, even at the expense of stability. It is the knowing acceptance that to create something better, it can be necessary to rebuild that thing, and in the process new bugs might be introduced, and that's okay.</p>
<p>It's genius to put that in writing, even though it might be obvious in terms of the end goal: it's basically an insight and acknowledgement that developer velocity wins, and then a codification of the principles which are fundamentally necessary to optimize for developer velocity. It's hard to overstate how valuable that insight was, and continues to be, in the industry.</p>
<h4>Why the mantra evolved to add "with stable infrastructure"</h4>
<p>I think this evolution makes sense, as an acknowledgement of a few additional things in particular, which are all very relevant to a larger company (ie: one which has grown past the "build to survive" phase, and into the "also maintain your products" phase):</p>
<ul><li>You need your products to continue to function in the market, at least in terms of "core" functionality</li><li>You need your internal platforms to function, otherwise you cannot maintain internal velocity</li><li>You want stable foundations upon which to build, to sustain (or continue to increase) velocity as you expand product scope</li></ul>
<p>I think the first two are obvious, so let me just focus on the third point, as it pertains to development. Scaling development resources linearly with code size doesn't work well, because there is overhead in product maintenance, and in communication between people. Generally you want to raise the level of abstraction involved in producing and maintaining functionality, such that you can "do more with less". However, this is not generally possible unless you have reliable "infrastructure" (at the code level) which you can build on top of, with high confidence that the resulting product code will be robust (at least insofar as it relies on the infrastructure). This, fundamentally, allows scaling the development resources linearly with product functionality (not code size), which is a much more attainable goal.</p>
<p>Most successful companies get to this point in their evolution (ie: where they would otherwise get diminishing returns from internal resource scaling, based on overhead). The smart ones recognize the problem, and shift to building stable infrastructure as a priority (while still moving fast and breaking things, generally), so as to be able to continue to scale product value efficiently. The ones with less insightful leadership end up churning with rewrites and/or lack of code reusability, scramble to fix compounding bugs, struggle with code duplication and legacy tech debt, etc.
This is something which continues to be a challenge for even many otherwise good companies, and the genius of FB/Meta (imho) is recognizing this and trying to enshrine the right approach into their culture.</p>
<p>That's my take, anyway, fwiw.</p>

<h3>"Toxic" Answers (2023-08-26)</h3>
<p>Preface: This observation is not intended to call out any specific people.</p>
<p>Something I've observed in the work environment: a tendency from some types of people to provide what I would term "toxic answers". This is when, broadly speaking, someone on a team asks a question (re tech, process, how to do something, etc.), and someone else provides an "answer" which is not really helpful. This can take several forms:</p>
<ul><li>Reference to existing documentation which is out of date, incomplete, or inaccurate</li><li>Reference to process which is surface-level related, but not germane to the actual question</li><li>Reference to something which someone else has stated to be the answer, but is not actually the answer, and which the person echoing it has not personally verified</li><li>Some related commentary which expresses opinions on the topic, and pretends to answer the question, but isn't actually actionable</li><li>Commentary which expands the scope of the question to include more questions/work, without answering the original question</li><li>etc.</li></ul>
<p>Obviously the above could be deemed "unhelpful", but why do I think of these responses as "toxic"? I will explain.</p>
<p>In a work context, you have various levels of understanding of the topics discussed, ranging from your subject matter experts (with in-depth knowledge) to your high-level managers (with usually just buzzword familiarity), and levels in between.
When someone on a team asks a question, and someone (especially a more senior person) provides a "toxic" answer, this typically has a few effects:</p>
<ul><li>The manager(s) believe the question has been addressed by the person providing the response, even though it has not</li><li>The asking person might be disinclined to pursue the topic further, and thus (at best) waste time working on it solo, because they feel they cannot inquire further</li><li>This can create more work for the person asking (in the case of a response which expands the scope), which creates a negative motivation to seek help</li><li>In the case of false/misleading or out of date information, this can waste lots of time going down paths which are ultimately not fruitful</li><li>If the information is known by the person asking to be unhelpful, it can strain the working relationships</li><li>It generally "shuts down" the discussion, with the question effectively unanswered</li><li>Worse, it propagates an inaccurate/damaging perception of value to the team:<ul><li>The person asking the question should (possibly) get credit for reaching out on something difficult/nuanced, but instead they are likely perceived as less capable of independently solving problems</li><li>The person providing the response should (probably) be viewed negatively for damaging the team dynamics and time management, but instead will likely get credit from their management for providing timely and helpful answers</li></ul></li></ul>
<p>In addition to the damage above, it can be challenging even for a "good" employee to navigate the process of trying to improve this behavior, depending on the perceptions of the employees. The secondary harm of someone providing toxic answers is that over time, they are perceived as a more valuable team member by their management, so negative feedback about their answers or behavior is typically seen as more of a negative for the reporters than for the subject. This is an observable effect within teams, of course: you don't want to criticize the person who management views as a "star employee". This compounds the effects, ultimately driving the actually-more-productive employees to seek roles elsewhere, away from the toxic influences which they cannot modify.</p>
<p>My advice to companies and managers, with respect to the above, would be this: do proactive follow-ups for inquiries where the outcome is not obvious, and ask the team members if the answers provided led to actual resolutions. Assume people on the team are not going to proactively raise concerns about people viewed as "untouchable" or senior within the org, and factor that into your information gathering. Be on the lookout for people who just provide links for answers, without checking if the information referenced actually solved the issue presented. And understand that your best employees are not the ones providing the most "this might be related" type answers, but the ones providing the most actionable and accurate answers.
If you don't identify and curtail people providing toxic answers within a team, you're going to have problems over the long run.</p>

<h3>An amusing employment opportunity interaction (2023-08-22)</h3>
<p>So I was recently doing some casual employment opportunity exploration (as one should do periodically, even if the situation is not pressing; if nothing else, just to see what else might be out there, and to keep one's interviewing skills updated), and a funny thing happened.</p>
<p>I was being screened by a developer as part of a normal process, and got a typical "test" problem to write an implementation for. In this case, it was something which would be real-world applicable, but still small enough to be feasible for an interview time slot. Germane to the story is that, for this interview, the other party was not using an online shared text editor for sample code, but rather just having me share my favorite (or handy) IDE/editor from my local system to write the code in.</p>
<p>Now, for this instance the position I was being evaluated for was primarily Windows-based, so naturally I opened Visual Studio, and switched from my most recent personal project (open by default) to a new blank file, where I took down the problem description as it was described. As I was doing this, though, I realized that what the counterparty was describing was something I had already written for my own open-source library, which was the same project I already had open in Visual Studio.</p>
<p>So I asked if I could just show him the solution I had already written for my open source library, and explain it to him, rather than writing it again. He said that was okay, since I already had it open, and I did so. The total explanation took about a minute, he was satisfied that I fully grasped the solution (I'm sure the working and unit-tested code helped with that), and we were done with that section of the examination.</p>
<p>Now obviously all interviews don't go like this (or mostly any), but it was pretty funny to happen to have code readily available which solved the exact problem being asked about, including being computationally optimal and templated already, which I could just point at. I feel like, notwithstanding efforts to make interviews objective and separable from any previous work, it would really save a lot of time if one could just point to working code one had previously written (say, open source utility libraries), and assert that you can write code based on those previous efforts. I wouldn't expect that to be the norm (especially since it's comparatively easy to fake), but I can attest that it's pretty cool when it does happen like that, to an expedient and positive outcome. :)</p>

<h3>On Company Review/Info Sites (2023-08-18)</h3>
<p>For various reasons, I find myself once again at a point in my career where I am considering both searching out unbiased feedback about organizations, and potentially providing such myself. I perceive a considerable amount of value in the general existence of such information, in a few different ways.
From the employee perspective, obviously, it can bias where you take a job, help align the working conditions with your expectations and desires, and consequently increase overall job satisfaction and the potential for a good fit. From the general economy perspective, transparent information can be a powerful force in motivating companies to create better working conditions, by aligning that goal with economic incentives (vis-a-vis the ability to recruit better people). It's a general, objective good thing.</p>
<p>It's also a hard thing, mainly because there are a lot of bad actors in the corporate space, and companies can be very punitive and litigious when it comes to negative feedback about them. Just ask Yelp.</p>
<p>Conceptually, it would be great if there were open, pseudo-anonymous forums where this could be done, with some semblance of accountability (ie: to prevent outright falsehoods and misrepresentation, as opposed to just subjective opinions). There are some current efforts (eg: GlassDoor, Blind, etc.), but all of these include some amount of peril for contributors, and suffer from some amount of selection bias. To the former point, it can be somewhat perilous for current or future employment prospects if comments on working conditions are associated with a current or potential employee's identity, and even the willingness to provide feedback on an employer might be seen as a risk when employing someone. On the latter point, the current sites tend to have a large amount of selection bias: in general, disgruntled employees are far more likely to post information to the sites than content employees. Both of these tend to skew the overall data, and impede the ability to have comprehensive and unbiased information available.</p>
<p>Personally, I am hesitant to provide feedback on any current or former employer which could be construed as negative, because I don't want to imperil any future employment potential. This is actively bad for the market and other potential employees, though, as the lack of insight and perspective can not only lead to bad choices by others, but also prevents market forces from helping to improve working conditions overall (ie: by financially impacting "bad" employers). My self-interest is unfortunately somewhat opposed to what might be considered an optimal scenario for the market in general.</p>
<p>Hypothesizing, I think two things would improve the current situation quite a bit here (and note that both are unlikely to transpire, but we've got our idealized-world hat on):</p>
<ul><li>Government protection for expression of opinions about one's employers (in the same sense as other labor protections and rights, which are often not very protective in practice, but far better than nothing)</li><li>A general pseudo-anonymous authentication system which could validate information (eg: uniqueness, employment, etc.), but did not expose the actual person's identity</li></ul>
<p>This would allow, in theory, employees to be encouraged to provide regular information about employers (working conditions, policies, etc.), and have that data be validated and aggregated. It would allow much better accessibility to information about employers, and leverage the market to improve working conditions overall.
It would be, in short, a "good thing".</p>
<p>Now that just leaves the near-insurmountable hurdle of getting something done in practice which involves both the government acting for the benefit of the people, and the government respecting privacy protections. But we can dream, I suppose.</p>

<h3>On Working Relationships (2023-08-12)</h3>
<p>I posted an answer recently on a reddit post, where someone was asking if they should stay at a company which fired them, but then offered to rescind the firing after discussing the situation with the employee. Thread is here (at time of post): https://www.reddit.com/r/jobs/comments/15or3du/got_fired_from_job_and_then_rehired_within_an/</p>
<p>I figured I'd copy my response here, though, since it's general career advice (not developer specific, but within the sphere of development as a profession). My advice is/was as follows.</p>
<blockquote><p>You will have a general "working relationship" with every employer, which is a combination of how you are treated, how your input is valued, how you are evaluated, how well your perception of "good work" aligns with that of your management, etc. In a job where you have a good working relationship, those things are broadly positive, and of course there's a spectrum.</p>
<p>A company taking significant actions to undermine/damage the working relationship (bad performance review, different alignment on value or working expectations, firing then retracting, etc.) is signaling that you probably should not be there, if there are better options available. When this happens, you should start exploring what other options might be available. The urgency of that exploration will likely be relative to the damage to the working relationship, but at the point where the relationship is damaged, it's almost always prudent to explore. This gives rise to sentiments like "one foot out the door", etc., and is why, as an employer, you never want to damage that relationship if you are not prepared for the employee to depart.</p>
<p>How quickly you depart (or if at all) will depend on the other options available, and to a lesser extent on any effort from the employer to repair the relationship (which may be new internal directions, comp adjustments, new management, etc., but usually is nothing). You can certainly stay in the short term if you have no better options, but you should internalize (as the company certainly will) that the relationship is damaged, and you are more likely to depart at some point. That is, generally, a bell which is very hard to un-ring, as it should be.</p>
<p>Anyway, that's my general advice, fwiw.</p></blockquote>
<p>As an addendum, in case any employers ever read this blog, and you're not already far more familiar with managing employee psychology than I am, I'd encourage you to consider the implications of the above advice in the context of the employer. The way you manage employees (alignment with values for their productive work, management, reviews, environment, how you communicate changes, company actions, etc.) all affects this working relationship, and the effect is cumulative and "sticky". I recently had a manager tell me that what was being done which was affecting me was not personal, it was "just business".
That may be accurate, but it does not in any way diminish the effect on the working relationship, and it would be foolish (and borderline idiotic) to not fully anticipate the downstream effects of such actions. That is not to say that such actions and effects are not a normal part of operating an organization (they are), but it is to say that if, as management within an org, you do not properly account for this when deciding on your actions, that could be damaging enough to the organization as to constitute a termination-level failure on the part of the management involved.</p>
<p>In a sense, effective companies need to take a page from back-propagation training algorithms in ML networks, and ensure that feedback is propagated far enough upstream to affect the higher-level inputs to the downstream outcomes. Moreover, they also need to understand the outsized impact of some seemingly less impactful decisions.</p>
<p>Anyway, that's my general advice, fwiw.</p>

<h3>Moving Conversion from Caller to Callee (2021-01-11)</h3>
<p>Consider this: you are maintaining a C++ codebase which uses strings. You have some lower-level methods which take/use C-style strings, and you have some higher-level methods which use a string wrapper type (we'll use std::string for example purposes). Let's say you're going to convert the lower-level code from direct pointers to something like std::string_view.</p>
<p>In the above case, there is an automatic conversion, given both types are in the std library, so everything just works. But if you have a large codebase, you have other string types (MFC, other), which may not automatically convert. You may end up writing some inline conversion to call the methods, or you may end up writing some wrapper methods to do the conversion at the call site. Both of these, though, are sub-optimal: the lower-level method is taking a logical string, and there shouldn't need to be additional intellectual or code burden at the call site to convert from one logical string type to another.</p>
<p>Enter "as_[type]" (eg: "as_string_view", or "as_zstring_view"). These are conceptual types, derived from the base desired types, which simply implement conversion methods (eg: constructors) from the other logically equivalent types in your codebase. Used appropriately, these can simplify the usage of the methods, while allowing seamless conversion of lower-level code to use new (but logically equivalent) data types.</p>
<p>(As a potentially more real-world example, consider "as_span<const BYTE>", which can accept a std::vector, or a std::string, and interpret both as a logical span of BYTEs. This can eliminate some extraneous overrides, and simplify code usage, in places where it is applicable.)</p>
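<p>To make that concrete, here's a minimal sketch of what such a type could look like. Note this is my illustration of the idea, not code from my library; "CLegacyString" is a hypothetical stand-in for whatever non-std string type (MFC's CString, etc.) a codebase carries around.</p>
<pre><code>#include <string>
#include <string_view>

// hypothetical legacy string type, standing in for CString and friends
class CLegacyString
{
public:
	const char* GetData() const { return m_sData.c_str(); }
	size_t GetLength() const { return m_sData.size(); }
private:
	std::string m_sData{ "legacy" };
};

// as_string_view: a std::string_view, plus single-hop conversions from the
// codebase's other logically equivalent string types
struct as_string_view : public std::string_view
{
	using std::string_view::string_view;  // const char*, etc.
	as_string_view( const std::string& s ) : std::string_view( s ) {}
	as_string_view( const CLegacyString& s )
		: std::string_view( s.GetData(), s.GetLength() ) {}
};

// the lower-level method takes one logical string parameter...
size_t CountChars( as_string_view sText ) { return sText.size(); }

int main()
{
	std::string sStd = "hello";
	CLegacyString sLegacy;
	// ...and call sites pass whatever string type they already have,
	// with no conversion burden at the call site
	return (int)( CountChars( sStd ) + CountChars( "literal" ) + CountChars( sLegacy ) );
}</code></pre>
<p>The direct constructor from std::string matters here: an implicit path would otherwise require two user-defined conversions (std::string to std::string_view to as_string_view), which the language won't do, so each logically equivalent type gets its own one-hop constructor.</p>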
<h3>Member Variable Initialization via With[...] Methods (2020-07-17)</h3>
<p>There are a number of ways to initialize member variables in C++. One of the "recommended" methods is via a constructor and its arguments. While this is functional, I've personally been gravitating away from this paradigm in my own code of late, especially when there are multiple parameters involved, or multiple constructor overloads for different initialization scenarios. As the use-cases get more complex, it becomes harder to reason about the parameters passed, and the code becomes more prone to maintenance-related errors.</p>
<p>The alternative method I've been moving toward is this:</p>
<pre><code>class CFoo
{
	int nSomeValue = 0;
public:
	inline auto& WithSomeValue( int nValue )
	{
		nSomeValue = nValue;
		return *this;
	}
};</code></pre>
<div>Why is this "better" (for some subjective measurement of better)?</div><div><ul style="text-align: left;"><li>Members are named and self-explanatory, rather than positional</li><li>You do not have to initialize all members (only what you want to set)</li><li>Order is not important (aside from internal constraints)</li></ul><div>Some downsides:</div><div><ul style="text-align: left;"><li>More verbose than positional arguments</li><li>No built-in language support for the paradigm*</li><li>Object must support "partial" initialized state, where some members may be not set yet</li><li>Efficiency is dependent on inlining, which the compiler may not do for some build types and usage scenarios</li><li>If not inline, may copy, which might be bad, especially if copy isn't "clean"<br /></li></ul><div>One of the significant benefits in my mind is the way this paradigm lends itself to code consistency and ease of refactoring. Consider this hypothetical use-case:</div>
<pre><code>auto oObject = CSomeObject{}
	.WithOneVariable( 42 )
	.WithAnotherVariable( "blah" )
	.WithSomeClassAlso( oInstance )
	;</code></pre>
<p>What's nice about the above is that it's trivial to move around or comment out individual initialization elements, depending on the use-case. It's also easy to add overrides for custom types and/or other initialization paradigms; no messy constructor overloading, these are all just normal methods. Yes, the class coding is somewhat more verbose, but arguably the endpoint usage is much cleaner, which is a worthwhile trade-off in my mind.</p>
<p>* A bit about the lack of language support...</p>
<p>It would be really nice if C++ supported this concept as a language-level thing (as some other languages do, and/or are adding). Aside from the possibility that the call may not be inlined (which could cause the object to be copied), another downside is that it doesn't work cleanly with class hierarchies; that is, base class With-style methods return a base class reference, rather than the child class. Thus, you need to be aware of the ordering of initialization in some cases, and this can be problematic (as the order is implicitly the reverse of typical constructor initialization order).</p>
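<p>As an aside (my sketch, not from the original post): the conventional workaround for the hierarchy issue is CRTP, where the base class is templated on the derived type so its With-methods can return a derived reference:</p>
<pre><code>// CRTP: base With-methods return TDerived&, so chaining order doesn't matter
template <typename TDerived>
class CBase
{
public:
	inline TDerived& WithSomeValue( int nValue )
	{
		nSomeValue = nValue;
		return static_cast<TDerived&>( *this );
	}
protected:
	int nSomeValue = 0;
};

class CChild : public CBase<CChild>
{
public:
	inline CChild& WithOtherValue( int nValue )
	{
		nOtherValue = nValue;
		return *this;
	}
private:
	int nOtherValue = 0;
};

// usage: the base-class method no longer "locks" the chain to CBase&
auto oChild = CChild{}.WithSomeValue( 1 ).WithOtherValue( 2 );</code></pre>
<p>The cost is boilerplate and a more convoluted hierarchy, which is part of why language-level support would be nice.</p>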
<p>It would be really nice if C++ supported something like:</p>
<pre><code>class CFoo
{
	int nSomeValue = 0;
public:
	inline auto(*this) WithSomeValue( int nValue )
	{
		nSomeValue = nValue;
	}
};</code></pre>
<p>In my hypothetical above, the language recognizes the extended "auto(*this)" syntax, and by definition the method returns a reference to the "most specialized" type of the object upon which the method is called (as known at the point of instantiation, with ambiguity resulting in a compilation error). This would not only eliminate the need to specify the return itself (it being now implicit), but would also eliminate the possible issues with copying the object, and allow the compiler to make additional inferences for optimization. That's the dream, anyway.</p>

<h3>std::optional<> is Useful (2020-07-07)</h3>
<p>Consider this: How many times have you written code where a specific value of a type represents an implicit "empty" value? For example, a string value where an empty string was implicitly "no value"? A number where zero was the "default", "no value" case?</p>
<p>How many times has that bitten you later, where you needed to add secondary variables to indicate if the first value was valid or not? Alternatively, how many data structures have values which may or may not be populated by a caller, where the callee needs to rely on additional or implicit information to discern if the values are valid or not?</p>
<p>Enter std::optional<> (or previously boost::optional<>). This type can be used to express the "null" state for any type, without using secondary variables, implicit value meanings, or other mechanisms (eg: using pointers which might be null).</p>
<p>Usage is easy:</p>
<pre><code>std::optional<std::string> osValue;
osValue.has_value();             // yields false
osValue.value_or( "something" ); // yields "something"
osValue = "else";
osValue.has_value();             // yields true
osValue.value_or( "something" ); // yields "else"</code></pre>
<p>Here's the official reference: <a href="https://en.cppreference.com/w/cpp/utility/optional">https://en.cppreference.com/w/cpp/utility/optional</a></p>
<p>Next time you think of a case where something might have a value or "nothing", consider using std::optional<>.</p>
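<p>As a concrete illustration of the "empty string means no value" case (my sketch; the names here are illustrative): an optional return value lets a lookup distinguish "key missing" from "key present with an empty value", with no sentinel convention required.</p>
<pre><code>#include <map>
#include <optional>
#include <string>

std::optional<std::string> FindSetting(
	const std::map<std::string, std::string>& mSettings, const std::string& sKey )
{
	auto it = mSettings.find( sKey );
	if ( it == mSettings.end() )
		return std::nullopt;  // explicitly "no value"
	return it->second;        // may legitimately be an empty string
}</code></pre>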
<h3>C++0x, using decltype to implement property getters/setters (2017-08-05)</h3>
<p>I realize this is old-hat/obvious to many people, but I found it interesting...</p>
<p>With the addition of decltype to C++, it's now possible to write totally generic getter/setter methods to expose member variables, which do not need to be updated if the type of the member is later updated. To wit:</p>
<script src="https://gist.github.com/anonymous/7dd8f8242bafd8ba5e1277a54a5838ce.js"></script>
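<p>The embedded gist doesn't render here, so below is a reconstruction of the general idea (a sketch in the same spirit, not necessarily the gist's exact code): the accessors are written purely in terms of decltype(member), so changing the member's type never requires touching them.</p>
<pre><code>#include <string>

#define MEMBER_ACCESSORS( member, Name ) \
	decltype(member)& Name() { return member; } \
	const decltype(member)& Name() const { return member; } \
	void Name( const decltype(member)& value ) { member = value; }

class CFoo
{
	// change this member to std::string (or anything else) and the
	// generated accessors still compile unchanged
	int m_nValue;
public:
	CFoo() : m_nValue() {}
	MEMBER_ACCESSORS( m_nValue, Value )
};

int main()
{
	CFoo oFoo;
	oFoo.Value( 42 );     // setter
	return oFoo.Value();  // non-const getter
}</code></pre>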
<p>Obviously, just remove the non-const members to create a read-only accessor, etc. It doesn't really save on typing or anything, but it might be nice as a "standard format" accessor template and such.</p>

<h3>Variable types and sizes (2008-12-08)</h3>
<p>Just a random thought on variable sizes:</p>
<p>There's an old design decision for programming languages as to whether the variable sizes for built-in numeric types should be static or dynamic. For example, the .NET framework has static sizes: every type in the System namespace (which contains all the built-in types) has a size associated with it, eg: Int32. Conversely, in C/C++, int is of dynamic size, dependent on the compiler and the target environment.</p>
<p>There are arguments for both. On the static side, you have deterministic size, so you can predict exactly what values will/won't fit. On the dynamic side, you can automatically use the size which is appropriate for the architecture, which gives you automatic adaptability to architectures with new intrinsic data sizes (eg: 32bit -> 64bit), without speed degradation from extra operations to adapt non-standard variable sizes.</p>
<p>With those trade-offs in mind, I'd propose a (possibly new) thought for variable declarations: a concept of "at least" x bits. This would give you the best of both worlds: you could say with certainty that values within a target range would fit in your variable, while allowing the compiler to allocate a larger type if that was more optimal for the target architecture. You would sacrifice predictability of variable size, but you could still use fixed-size constructs as a fallback if you needed them.</p>
<p>With that in mind, variable declarations might look like:</p>
<blockquote>int32p i32bitOrLargerValue;<br />int64p i64bitOrLargerValue;</blockquote>
<p>... where the 'p' is for 'plus'.</p>
<p>Just a thought.</p>
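<p>Worth noting (an addition to the original post): C99's stdint.h, and later C++11's <cstdint>, standardize essentially this idea, along with a "fastest type with at least x bits" variant:</p>
<pre><code>#include <cstdint>

int main()
{
	int_least32_t iAtLeast32 = 100000;  // smallest type with >= 32 bits
	int_fast32_t  iFast32 = iAtLeast32; // "fastest" type with >= 32 bits,
	                                    // which may be wider on some targets
	return (int)( iFast32 - iAtLeast32 );
}</code></pre>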
Just a thought.

The enormous problem with highly dynamic languages (2008-11-13)

So you're moving from an "old-school" language like C/C++ into a "new hotness" language like C#, and life is so much easier. Memory management is automatic, type information is part of the runtime, everything is dynamic. You can create very simple expressions to perform very complex operations, all auto-magically, and rapidly prototype applications like never before. Life is great, right?

Well, there's a small problem, and it's the 800 lb gorilla in the ideology. There are two parts to a language enabling you to write code that does what you want: letting you express what you want to do, and helping you not express what you didn't want to do. The former is aided by higher-level abstractions, patterns, powerful expression syntax, API libraries, etc. The latter is aided by strong compile-time checking, API/structure transparency, and clear, predictable behavior. A good language balances both.

The problem with low-level languages is that they have a lot of the latter without much of the former. The problem with highly dynamic languages is that they have a lot of the former at the expense of the latter. The crucial difference: lacking the former just slows you down, whereas lacking the latter makes your applications fundamentally less reliable and more prone to subtle systemic problems. Worse, there's no way around that problem: no matter how clean your structure and methodology, you're always forced to fall back on runtime verification, and it's nearly impossible to eliminate systemic errors because the runtime implementation is so convoluted and opaque.

I would not be surprised if we see a resurgence of "native" code development because of these issues, since they are so fundamentally intractable in dynamic languages. I know I shudder to think of trying to build a reliable .NET application of any meaningful complexity. We shall see.

Fun with COM interop (2008-10-22)

So I have a C# object which exposes a COM interface through interop, and it was working. Then I did something, and when I went to reload it, it said the component was not registered. I confirmed that the ProgID was in the registry, and everything appeared to be good.

... It turns out that if the constructor of an object being instantiated as a COM object throws an exception, the CoCreateInstance call will report that the component is not registered. This is not normally a problem with native C++ COM objects, since they run the constructor code as part of the registration process, so you'd catch the error earlier. However, C# COM objects apparently do not, and the error coming from COM is very misleading.

Just an interesting tidbit for COM interop.
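To see how misleading it is, here's a minimal sketch of the native side of the call; CLSID_MyManagedThing is a placeholder for the interop class's real CLSID, and COM is assumed to already be initialized:

    #include <windows.h>

    // Placeholder GUID -- substitute the CLSID registered for the C# class.
    static const CLSID CLSID_MyManagedThing =
        { 0x11111111, 0x2222, 0x3333,
          { 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xAA, 0xBB } };

    HRESULT CreateManagedObject(IUnknown** ppUnknown)
    {
        HRESULT hr = CoCreateInstance(CLSID_MyManagedThing, NULL,
                                      CLSCTX_INPROC_SERVER,
                                      IID_IUnknown, (void**)ppUnknown);
        // If the managed constructor throws, hr comes back as
        // REGDB_E_CLASSNOTREG ("Class not registered"), even though every
        // registry entry for the ProgID/CLSID is present and correct.
        return hr;
    }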
Bizarre error of the day (2008-10-06)

So I'm playing with compiling something which is C++/CLI using /clr, and ran across an error while trying to run: 'Could not load file or assembly'... of my exe itself!

To make a long story short, after some research, it turns out that:

- The .NET framework cannot load assemblies which have more than 65k global symbols defined.
- Every static string in the code apparently gets its own global symbol when compiling with /clr.

The solution, equally bizarre, is to enable string pooling (the /GF compiler switch) for the Debug build of the exe. This reduces the number of static string symbols dramatically, which allows the assembly to load and the program to run. Talk about a random issue.

Oh, and obligatory "yeah, C++/CLI is ready for real world apps...".

COM attributes, C++, and MS's fail (2007-10-15)

Attributes... simplified programming model, hides the complexity of COM, allows easy specification without obscure syntax, must be a good idea, right? Nope, FAIL.

The problem lies in the failure of documentation and of ubiquitous support in the other MS libraries. Specifically, are you using MFC (and really, most native C++ Windows programmers probably are)? Sorry, attributing breaks your application. Not that you could easily tell, since converting to attributed leaves all the other files in the project alone and creates new hidden files containing the current interfaces. Um... UTTER FAIL.

Not to mention the attributes themselves. How do they work? Oh, don't worry about that. Where can I see the source? You can't; it's compiled into DLLs which inject it. How can I see what it's generating? Compile with a special flag, and look for the hidden, undocumented files it might generate. How can I debug issues? Um... online support forums, maybe? UTTER FAIL.

Nothing pisses off native developers like not being able to see what's going on. Half the reason people stay native developers is a lack of faith in their ability to diagnose and fix problems when the lower-level code is too hidden to figure out what's going on. What could be better to convince these developers to go managed than introducing some managed-like syntax and showing how easy things could be? Except the syntax (attributes) exemplifies the exact reasons why those programmers haven't adopted managed code, and solidifies their decision to stay native as the correct one. Two words for MS: UTTER FAIL.

I'm done with attributes for COM objects. Yes, IDL and ATL are hideous, but gobs of hard-to-read code is better than not being able to fix problems or understand what's going on. You can take that as a free mantra for the next pass at trying to convert native developers to the next big thing.
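For anyone who hasn't seen the syntax in question, here's a rough sketch of an attributed COM interface and class (GUIDs and names are placeholders); the whole complaint is that the attribute provider injects the real ATL/IDL plumbing invisibly at compile time:

    // Attributed C++ COM, illustrative only: compare the brevity here with
    // classic IDL + ATL, then remember that none of the injected code is
    // visible to you when something breaks.
    [ object, uuid("00000000-0000-0000-0000-000000000001"), dual ]
    __interface IMyThing : IDispatch
    {
        [id(1)] HRESULT DoSomething([in] long lValue);
    };

    [ coclass, uuid("00000000-0000-0000-0000-000000000002") ]
    class CMyThing : public IMyThing
    {
    public:
        STDMETHOD(DoSomething)(long lValue) { return S_OK; }
    };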
Xtreme Toolkit's TaskPanel (2007-07-24)

So Codejock (http://www.codejock.com) has done what I thought someone should do: made a generic version of the Task Dialog that works on all Windows versions. It's not perfect (points off for putting their copy of the structure definitions in, effectively, the global namespace), but it's pretty good.

Bootstrapper for installs is cool (2007-07-09)

This is cool:

http://msdn.microsoft.com/msdnmag/issues/04/10/bootstrapper/
http://blogs.msdn.com/chrsmith/rss_tag_Bootstrapper+awesomeness.xml

Basically, it's an installer builder for Visual Studio with which you can declare dependencies on other modules packaged for the bootstrapper (e.g., versions of the .NET framework). When your installer runs, the bootstrapper verifies the installation of the component (including version) you need, and if it's not present, automatically downloads and installs it before your installer even runs.

What's cool about it (versus just packaging in the MSIs for the dependent components) is that you don't bloat your installer much (the bootstrapper is about 500k), but you get virtually the same effect, thanks to auto-downloading. Very cool.

Some people are dumb (2007-07-06)

So the /. FotD is bashing on the MS proclamation that their vouchers for Novell-distributed Linux do not entitle the redeemer to any software licensed under GPLv3. For reference, see:

http://www.microsoft.com/presspass/misc/07-05statement.mspx
http://linux.slashdot.org/article.pl?sid=07/07/06/1333257

Now, for their part, MS is being fairly smart. They recognize that GPLv3 is crafted to screw them, and are trying to preemptively disclaim any distribution of GPLv3 code. Basically, if you get Linux with an MS voucher, you're not getting the right to use anything licensed under GPLv3; a perfectly valid condition to impose.

Enter the dumb zealots, claiming that this is tantamount to MS declaring the law itself invalid by fiat. It's kinda like the fanatics who don't understand that their religion is opposed to killing people... have you guys even read the GPL, or is it just the magic anti-Microsoft golden idol in your minds?

MS doesn't want to distribute GPLv3 code, because GPLv3's patent provisions are counter to everything MS wants to preserve with their IP (on purpose). They cannot be forced to distribute GPLv3 code; the law just doesn't work that way, as much as the zealots at the FSF would like to change it by fiat.

Breaking polymorphism with templates (2007-07-05)

Intriguing title, huh?

So here's the issue. Say I want a collection of objects, each of which is an instance of a template class (with various template types). Not going to work, you say, because each template-typed version of the class is a different type, and you can't have a collection of different types unless they derive from a common base type. Fine, no problem: I can use a non-templated base type as the pointer type for the list, something like this simplified example. Now the fun starts...

    class CBaseClass
    {
    public:
        virtual ~CBaseClass();
    };

    template< typename TYPE >
    class CSubClass : public CBaseClass
    {
    public:
        TYPE m_tValue;
    };

Say I want to get the value from an element in the list, where the type of the value is the template type of the subclass of the actual object instance. Simple enough conceptually, but wait... there's an issue. Iterating the list gives me pointers to the base class, and I need to call a method which is explicitly or implicitly aware of the subclass type. And here we come to the quintessential example for polymorphism: CShape, CSquare, virtual void Draw(), etc.

So I just add a virtual method, specify the type I want to get out as a template parameter, override it in the subclass, return m_tValue, and we're done, right? Something like this, for example:

    class CBaseClass
    {
    public:
        virtual ~CBaseClass();

        // error: member function templates cannot be virtual
        template< typename TYPE >
        virtual TYPE GetValue() = 0;
    };

    bool bHappy = pBaseClass->GetValue< bool >();

Um... see, here's where C++ is kinda broken. You can't have a virtual template method; it's not allowed. (There's a reason: virtual calls are dispatched through a fixed-size vtable, while a member template can be instantiated with arbitrarily many types, so there's no way to lay out a vtable slot for every instantiation.)

So how can we get m_tValue, when all we have is a CBaseClass*? Coding gymnastics, using RTTI, explicit type lists, a whole crap-load of ugly, runtime-check-only template code, and very limited extensibility/flexibility. Basically, breaking the whole point of templates and polymorphism.
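To make "gymnastics" concrete, here's a sketch of the least-awful RTTI route, a hypothetical TryGetValue helper; note that it's runtime-checked only, and the caller has to already know (or guess) the stored type:

    // RTTI workaround sketch: works only if the caller guesses TYPE right,
    // which is exactly the compile-time guarantee we wanted and lost.
    template< typename TYPE >
    bool TryGetValue(CBaseClass* pBase, TYPE& tOut)
    {
        // dynamic_cast succeeds only if the object really is CSubClass<TYPE>
        if (CSubClass< TYPE >* pSub = dynamic_cast< CSubClass< TYPE >* >(pBase))
        {
            tOut = pSub->m_tValue;
            return true;
        }
        return false;  // wrong guess: silent runtime failure
    }

    // Usage:
    // bool bHappy = false;
    // if (TryGetValue(pBaseClass, bHappy)) { /* got it */ }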
Seriously, the code is simpler if you have one class with a void*, a size, and an enum for the type in it, and you forget about virtual functions, templates, or anything fancy designed to eliminate the need for void*'s, sizes, and explicit runtime type storage.

Somebody on the C++ committee should seriously look at this, figure out a good solution, and fix the standard, because it's broken.

Microsoft API's blow sometimes (2007-07-05)

So I'm calling UpdateLayeredWindow to do transparency effects, and it's failing with no error code set. It turns out to be an issue with running the app through Terminal Services. It would have been nice for Microsoft to produce a useful error code, like, say, I dunno, "this API is broken under Terminal Services because our coders didn't finish it" or something. Kinda like the multi-threaded apartment version of the IShell interface in Windows XP (oops, we ran out of time, sorry).

Seriously, people... documentation, it's what makes a platform usable for developers.
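For reference, a minimal sketch of the call pattern that was failing, assuming a window created with WS_EX_LAYERED and a memory DC holding a premultiplied 32-bit bitmap (requires _WIN32_WINNT >= 0x0500):

    #include <windows.h>

    BOOL UpdateWithAlpha(HWND hwnd, HDC hdcMem, SIZE size)
    {
        POINT ptSrc = { 0, 0 };
        BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };

        SetLastError(0);
        BOOL bOk = UpdateLayeredWindow(hwnd, NULL, NULL, &size,
                                       hdcMem, &ptSrc, 0, &blend, ULW_ALPHA);
        // Under Terminal Services this can return FALSE while
        // GetLastError() still reports 0: failure with no diagnostic.
        return bOk;
    }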