Career Stages of A Typical Software Engineer

This is a repost of a long-form response to the Quora question "What are the typical stages in the career of a software engineer?", which was itself derived from an earlier answer – Matthew Tippett’s answer to "What does this mean for a software engineer: 'You are not at a senior level'?". It has been adapted for the new question and tweaked again for the blog.

Each company will have its own leveling guide; ask your HR department or search your intranet. It should set out a clear set of expectations for each grade and the attributes an engineer at that grade should possess. Your manager may not even be aware of it – but it should provide a basis for you to understand the career progression.

Flat organizations will have only three or four levels (Jr/Mid/Senior/Exec); other organizations will have many (Assoc/Eng/Snr Eng/Staff Eng/Snr Staff Eng/Princ Eng/Snr Princ Eng/Dist Eng/Fellow). Apply the following to your own company, and don’t expect direct portability between companies.

Grade Levels

First, we’ll go over a hypothetical set of grades – generally representative of a lot of companies. Some will use different titles, but each grade will generally have a common set of attributes.

The career level is arbitrary, but reflects what you’d expect people in the middle of the curve to be operating at. Individuals will peak at a particular point and then progress more slowly. Realistically, most good people will peak at what I am calling Staff Engineer. Some will get frustrated with the leadership aspects of the senior grades and peak at Senior Engineer. The management-ladder equivalence is also arbitrary, but should serve as a guide.

  • Junior/Associate Engineer/New College Grad – Assumed to know nothing: can code, but has minimal understanding of how businesses work and what a professional life entails. Hand-held or teamed with a more senior engineer to help build that understanding. Career level 0–2 years.
  • Engineer – Assumed to be able to work through tasks with minimal supervision. Will come back and ask for more work. Not expected to identify and fix secondary problems, nor to drive generalized improvements or be a strong advocate for best practices. Quite simply a “Doer”. Scope is typically at a sub-component level. Career Level 2–5 years.
  • Senior Engineer – Beginning to be self-directed. Expected to be able to work through small projects and foresee issues that may come up. Likely expected to mentor or lead sub-teams or development efforts. Scope is typically at a component or subsystem level. Career Level 5–10 years – equivalent to a team lead.
  • Staff Engineer/Architect – Runs the technical side of projects, leader and mentor for a team. Holder of a high bar for best practices, quality and engineering workmanship. Scope is across a system, or across multiple subsystems. Career Level 10–20 years – equivalent to a manager.
  • Fellow/Distinguished Engineer – Runs the technical side of an organization. Gets involved across many projects and defines the strategic direction for the technology. Career Level 15–30 years – equivalent to a director or VP.

It’s not about the code

Hopefully it becomes clear from the descriptions that, from Senior Engineer up, the technical role includes an increasing amount of leadership. This is distinct from management. The leadership traits are about having your peers trust and understand your direction, and being able to convince peers, managers and other teams of that direction – in short, delivering on the “soft” skills needed to deliver code.

Amazon’s Leadership Principles actually give a pretty good indication of some of the leadership needs for engineers.

There is a tendency for organizations to promote based on seniority or time in role, or even worse, based on salary bands.

Applying this to Yourself

  1. Ground yourself in what your level means to you, the organization and your team. There may be three different answers.
  2. Introspect and ask yourself whether you are demonstrating the non-management leadership aspects of a team leader or junior manager. Do you show confidence? Do you help lead and define? Do you demonstrate an interest in bringing in best practices? Do you see problems before they occur and take steps to manage them?
  3. Consider where you are in your career.

Your Career is a Marathon

A final thought. Although the original questioner indicated only a few years in the industry, I’ve seen engineers gunning for “Senior Engineer” three years out of college and Staff Engineer three years after that. My big question for them: what are you going to do when you are six years into a 40- or 50-year career and realize that you’ve peaked, or that you face some serious slow grinding for the next 20 years? I’m concerned about good engineers who become fixated on the sprint to the next title rather than the marathon of their career.


ng-Whatever

We’ve all done it: sat around a table dissing the previous generation of our product. The previous set of engineers had no idea, made stupid fundamental mistakes that we obviously wouldn’t have made. They suck, we’re awesome. You know what? In 3 or 5 years’ time, the next generation of stewards of the system you are creating or replacing now will be saying the same thing – of you and the awesome system you are slaving over now.

So what changes? Is the previous generation always wrong? Are they always buffoons who had no idea how to write software? Unlikely. They were just like you at a different time, with a different context and a different set of immediate requirements and priorities.

Understanding Context

The context in which a system was created is the first critical ingredient to understand. Look to understand the priorities, the tradeoffs and the decisions that had to be made when the system was first created. Were there constraints that you no longer have – were they restricted by infrastructure, memory, performance? Were there other criteria driving success at that stage: shipping the product, managing technical debt, or making up for gaps in the organization? What was the preferred style of system back then?

Understanding these items allows you to empathize with the system’s creators and understand some of the shortcuts they may have taken. Most engineers will attempt to do their best based on their understanding of the requirements, their competing priorities and their understanding of the best system that can be implemented in the time given. Almost every one of these constraints forces some level of shortcut in delivering a system.

Seek first to understand the context before deciding that the previous team made mistakes. When you hear yourself commenting that a previous team, a peer team or another group didn’t do things the way you would like, look for the possible reasons. I’ve seen junior teams making rookie mistakes, teams focused on backend architectures making front-end mistakes, and device teams making simple mistakes in back-end systems. In each of these contexts, it is fairly obvious why the mistakes were made. Usually, it will be within your power to identify the shortcoming, determine a possible root cause by understanding the context, and shore up the effort or the team to smooth things over and reach a better outcome.

Constraining Your ng-Whatever

When faced with frustration at a previous system, consider carefully whether to do a full re-write into an ng-whatever system, or to make incremental changes – with some fundamental breakpoints – that evolve, refactor and replace parts of the system.

It is almost guaranteed that the moment a system gets an “ng-Whatever” moniker attached to it, it becomes a panacea for all things wrong with the old system. It begins to accrete not only the glorious fixes for the old system but also a persona of its own. This persona will appear as “When we get the ng-whatever done, we won’t have this problem…”.

These oversized expectations add more and more implicit requirements to the system. Very few of these expectations will actually be fulfilled, leaving a perception of a less valuable ng-Whatever.

Common Defect Density

I’m going to come out and say that most engineering teams, no matter how much of an “Illusory Superiority” bias they may have, are going to be at best incrementally better than the previous team. With that said, their likelihood of introducing defects in their requirements, design or implementation will be more or less the same (depending on how the software is being written this time around).

The impact is that the business will typically be trading a piece of battle-hardened software with known intractable deficiencies for a new piece of software whose bugs will only be ironed out in the face of production. Even worse, there will always be a set of intractable deficiencies that are as yet unknown – only to be discovered once the new software is in production.

When the original system was created, it is highly unlikely that the engineering team knowingly baked in a set of annoying deficiencies. Likewise, the new system will, to the best of your team’s understanding, not bake any deficiencies in either. You need to make a conscious decision to take the risk that the new issues will be less painful than the old ones. If you can’t make that call, then refactoring and re-working parts of the system may be a better solution.


What have your experiences been with ng-Whatevers? Have you found that your team can reliably replace an older system with a new one, and see that in a few years’ time the new system is held in higher esteem than the original? Follow this blog for more posts, or leave comments below on this topic.


Code and the Written Word

Code history can be read as a narrative. The ability of git rebase to reorder, rework and polish commits allows a developer (and code reviewers) to curate the code history so that it tells a well-structured story. This post wanders through how far the analogy can be taken.

TL;DR version in the slides.  Read on for the long form.

Continue reading “Code and the Written Word”

Estimating for Software is Like Estimating for Skinning a Cat

As I’ve mentioned a few times, estimation is an imprecise art. There are ways to increase the accuracy of estimates, through consensus-based estimation or other methods. This post explores why estimation is hard and why the software world struggles to find tools, techniques and methods that provide consistently accurate estimates.

I’ve recently been playing with Codewars (connect with me there) and have been intrigued by the variance in the solutions that are submitted. In particular, you have the “smart” developers who come up with one-liners that need a PhD to decode, tight code, maintainable code, and then clearly hacked-until-it-works code. This variance is likely the underlying reason it is so difficult to get consistently accurate estimates.

Read on for some of the examples I have pulled from the Convert Hex String to RGB kata. The variance is quite astonishing. When you dig deeper into the differences, you can begin to see the vast differences in approach. I’m not going to dig into my personal views on the pros and cons of each one, but it did provide me a lightbulb moment as to why software in particular is always going to be difficult to estimate accurately.

I came up with an interesting analogy when pushed by a colleague on systematic ways of estimating. His assertion was that other industries (building, in his example) have systematic ways of producing time and cost estimates. While not diminishing the trades, there are considerably fewer degrees of freedom for, say, a framer putting up a house frame. The fundamental degrees of freedom are:

  • experience – apprentice or master
  • tools – nails or nail gun
  • quality of lumber – (not sure how to build this in)

For a master carpenter, framing will be very quick; for an apprentice, it will be slow. The actual person doing the framing will likely be somewhere in between. Likewise, using a nail and hammer will be slow, and a nail gun will be fast. The combination of those two factors will be the prime determinant of how long a piece of framing takes to complete.

In code, however, we bring in other factors that need to be included in estimates but are typically not considered until the code is being written. Looking at the examples below, we see the tools that are available:

  • array utility operators (e.g. slice)
  • string operators (e.g. substring)
  • direct array index manipulation (array[index])
  • regular expressions (.match)
  • lookup tables (.indexOf)

Each of these tools has a general impact on the speed at which the code can be written, the speed at which it can be debugged, and the general maintainability of the code.

With this simple example, I feel that the “order of” practice popular in agile is sufficient for the kind of accuracy we can realistically get. Fibonacci points, t-shirt sizes, hours/days/weeks/months/quarters really are among the best classes of estimates available. The five solutions below are labeled with the tool each one leans on.

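// Variant 1: slice() out each two-character channel, parsed with an explicit radix of 16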
function hexStringToRGB(h) {
  return {
    r: parseInt(h.slice(1,3),16),
    g: parseInt(h.slice(3,5),16),
    b: parseInt(h.slice(5,7),16)
  };
}
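
// Variant 2: a helper computes each channel's substring offset; parseInt infers base 16 from the "0x" prefix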
function hexStringToRGB(hexString) {
  return {
    r : getColor(hexString),
    g : getColor(hexString, 'g'),
    b : getColor(hexString, 'b')
  };
}

function getColor(string, color) {
  var num = 1;
  if(color == "g"){ num += 2; }
  else if(color == "b"){ num += 4; }
  return parseInt("0x" + string.substring(num, num+2));
}
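
// Variant 3: direct character indexing, concatenating digit pairs before parsing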
function hexStringToRGB(hs) {
  return {r: parseInt(hs[1]+hs[2],16), g : parseInt(hs[3]+hs[4],16), b : parseInt(hs[5]+hs[6],16)}
}
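
// Variant 4: a regular expression splits the string into two-character groups, then map() parses each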
function hexStringToRGB(s) {
  var foo=s.substring(1).match(/.{1,2}/g).map(function(val){return parseInt(val,16)})
  return {r:foo[0],g:foo[1],b:foo[2]}
}
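
// Variant 5: a lookup table, using indexOf over an array of hex digits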
function hexStringToRGB(hexString) {
  var hex = '0123456789ABCDEF'.split('');
   hexString = hexString.toUpperCase().split('');

  function hexToRGB(f,r){
    return hex.indexOf(f)*16 + hex.indexOf(r);
  }

  return {
    r: hexToRGB(hexString[1],hexString[2]),
    g: hexToRGB(hexString[3],hexString[4]),
    b: hexToRGB(hexString[5],hexString[6])
  }
}

About Skinning the Cats

“There are many ways to skin a cat” has its earliest print etymology in the “Money Diggers” article in an 1840s issue of the Gentleman’s Magazine, and Monthly American Review. Even that reference implies the phrase was already in use before then. Generally the phrase means there are many ways to achieve something. In the context of this article, the analogy is that even a simple task like writing a hex-to-RGB converter can be achieved in many different ways.


As always, vehemently opposed positions are encouraged in the comments. You can also connect with me on Twitter @tippettm, on LinkedIn via matthewtippett, and on Google+ as +Matthew Tippett.

ROI for Engineers

A short-form presentation of how engineers can easily make judgements on Return on Investment. Also on SlideShare.

High Confidence/Low Information vs High Accuracy/Low Information Estimates

Quite often estimates are needed when there is low information but a high-confidence answer is required. For a lot of engineers, this presents a paradox.

How can I present a high confidence estimate, when I don’t have all the information?

Ironically, this issue is solved fairly easily by noting the difference between a high confidence and a high accuracy estimate. A high confidence estimate is defined by the likelihood that a task will be completed within a given timeframe, while a high accuracy estimate provides a prescribed level of effort to complete the task. This article presents a method of producing a high confidence estimate while balancing analysis effort against accuracy.

This is a refinement on the “Getting Good Estimates” posting from 2011.

The Estimation Model

The basis for this method is captured in the diagram below. The key measures on the diagram are:

  • Confidence, the likelihood that the task will be completed by a given date (Task will be complete in 15 days at 90% confidence)
  • Accuracy, the range of effort for an estimate (Task will be complete in 10-12 days)
  • No Earlier Than, absolute minimum effort for a task.

[Figure: Estimate Confidence and Accuracy]

In general, I never accept a naked estimate of a number of days. An estimate of a range will usually imply a confidence level. An estimate at a confidence level may or may not need an indication of accuracy, depending on the context for the estimate.
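To make those three measures concrete, the sketch below captures them in a single record. The shape and field names are my own illustration, not a prescribed format, and the numbers foreshadow the worked example further down.

// Hypothetical record carrying all three measures of one estimate.
var estimate = {
  noEarlierThan: 2,                           // weeks: absolute minimum effort
  accuracy: { low: 4, high: 8 },              // weeks: the 50-90% confidence range
  confidence: { weeks: 8, probability: 0.9 }  // "complete within 8 weeks at 90%"
};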

Gaming out the Estimate

As a refinement to the method outlined in Getting Good Estimates, the same technique of calling out numbers can be used to pull an estimation curve out of an engineer. It follows the same iterative approach: by asking the question “What is the confidence that the task will be complete by date xxx?”, you will end up with results similar to:

  • What’s the lowest effort for this task? – 2 weeks
  • What’s the likelihood it will take 20 weeks? – 100% (usually said very quickly and confidently)
  • What’s the likelihood it will take 10 weeks? – 95% (usually accompanied by a small pause for contemplation)
  • What’s the likelihood it will take 5 weeks? – 70% (usually with a rocking head indicating reasonable confidence)
  • What’s the likelihood it will take 4 weeks? – 60%
  • What’s the likelihood it will take 3 weeks? – 30%
  • What’s the likelihood it will take 2 weeks? – 5%

That line of questioning would yield the following graph.

[Figure: Worked Estimate]
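Reading a value off such a curve can also be done programmatically. Below is a minimal sketch: the data points are transcribed from the questioning above, while the function name and the straight-line interpolation are my own simplification of the hand-drawn curve.

// (weeks, confidence) points gathered from the questioning above.
var points = [
  { weeks: 2,  confidence: 0.05 },
  { weeks: 3,  confidence: 0.30 },
  { weeks: 4,  confidence: 0.60 },
  { weeks: 5,  confidence: 0.70 },
  { weeks: 10, confidence: 0.95 },
  { weeks: 20, confidence: 1.00 }
];

// Walk the curve to the first point at or above the target confidence,
// then linearly interpolate between it and the previous point.
function weeksAtConfidence(points, target) {
  for (var i = 1; i < points.length; i++) {
    var lo = points[i - 1], hi = points[i];
    if (target <= hi.confidence) {
      var t = (target - lo.confidence) / (hi.confidence - lo.confidence);
      return lo.weeks + t * (hi.weeks - lo.weeks);
    }
  }
  return points[points.length - 1].weeks;
}

weeksAtConfidence(points, 0.9); // ~9 weeks with straight lines; the drawn curve reads closer to 8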

I could then make the following statements based on that graph.

  • The task is unlikely to take less than 2 weeks. (No earlier than)
  • The task will likely take between 4 and 8 weeks (50-90% confidence)
  • We can be confident that the task will be complete within 8 weeks. (90% confidence)
  • Within a project plan, you could apply PERT (O=2, M=4 [50%], P=8 [90%]) and put in 4.3 weeks (see the sketch below)
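For reference, a minimal sketch of the PERT weighted average used in that last bullet (the function name is mine):

// PERT three-point estimate: (optimistic + 4 * mostLikely + pessimistic) / 6.
function pertEstimate(optimistic, mostLikely, pessimistic) {
  return (optimistic + 4 * mostLikely + pessimistic) / 6;
}

pertEstimate(2, 4, 8); // 26 / 6 = ~4.33, the 4.3 weeks above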

Based on the estimate, I would probably dive into the delta between the 4 and 8 weeks. More succinctly, I would ask the engineer, “What could go wrong that would cause the 4 weeks to blow out to 8 weeks?”. Most engineers will have a small list of items they are concerned about, from code/design quality and familiarity with the subsystem to potentially violating performance or memory constraints. This information is critically important because it kick-starts your Risks and Issues list (see a previous post on RAID) for the project. A quick and simple analysis of the likelihood and impact of the risks may highlight an explicit risk mitigation or issue corrective-action task that should be added to the project.

I usually do this sort of process on the whiteboard rather than formalizing it in a spreadsheet.

Shaping the Estimate

Within the context of a single estimate I will usually ask some probing questions in an effort to get more items into the RAID information. After asking the questions, I’ll typically re-shape the curve by walking the estimate confidences again. The typical questions are:

  • What could happen that would shift the entire curve to the right? (effectively moving the No Earlier Than point)
  • What could we do to make the curve more vertical? (effectively mitigating risks, challenging assumptions or correcting issues)

RAID and High Accuracy Estimates

The number of days on the curve from 50% to 90% confidence is what I use as my measure of accuracy. So how can we improve accuracy? In general, by working the RAID information to Mitigate Risks, Challenge Assumptions, Correct Issues, and Manage Dependencies. Engineers may use terms like “proof of concept”, “research the issue”, or “look at the code” to help drive the RAID. I find it is more enlightening for the engineer to actually call out their unknowns, thereby making them a shared problem that other experts can help resolve.

The return on investment for working the RAID information needs to be carefully managed. After a certain point the return on deeper analysis begins to diminish and you just need to call the estimate complete. An analogy I use is getting an electrician to quote on adding a couple of outlets, and having the electrician check the breaker box and trace each circuit through the house. Sure, it may make the estimate much more accurate, but you quickly find that the estimate refinement is eating seriously into the time the task will take anyway.

The level of accuracy needed for most tasks is a range of 50-100% of the base value. In real terms, I am comfortable with estimates with an accuracy of 4-6 weeks, 5-10 days and so on. Throw PERT over those and you have a realistic estimate that will usually be reasonably accurate.

RAID and High Confidence Estimates

The other side of the estimation game deals with high confidence estimates. This is a slightly different kind of estimate, used in roadmaps where there is insufficient time to determine an estimate with a high level of accuracy. The RAID information is used heavily in this type of estimate, albeit in a different way.

In a high confidence estimate, you are looking for something closer to “No Later Than” rather than “Typical”. A lot of engineers struggle with this sort of estimate since it goes against the natural urge to ‘pull rabbits out of a hat’ with optimistic estimates. Instead you are playing a pessimistic game, in which an unusually high number of risks become realized as issues that need to be dealt with. By baking those realized risks into the estimate you can provide high confidence estimates without a deep level of analysis.
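As an illustration of baking realized risks into the estimate, here is a minimal sketch; the names, the 50% likelihood cutoff and the additive impact model are my own assumptions rather than a prescribed formula.

// Hypothetical: pad a base estimate with the impact of risks judged
// more likely than not to realize, yielding a no-later-than figure.
function noLaterThan(baseWeeks, risks) {
  var padding = 0;
  for (var i = 0; i < risks.length; i++) {
    if (risks[i].probability >= 0.5) {
      padding += risks[i].impactWeeks;
    }
  }
  return baseWeeks + padding;
}

noLaterThan(4, [
  { name: "unfamiliar subsystem", probability: 0.7, impactWeeks: 2 },
  { name: "performance regression", probability: 0.3, impactWeeks: 3 }
]); // 6 weeks: only the likely risk is baked in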

In the context of the Cone of Uncertainty, the high confidence estimate will always sit slightly on the pessimistic side. This allows a sufficient hedge against something going wrong.

[Figure: High Confidence Estimate]

If there is a high likelihood that a risk will become realized or an assumption is incorrect, it is well worth investing a balanced amount of effort to remove those unknowns.  It tightens the cone of uncertainty earlier and allows you to converge faster.

Timeboxing and Prototypical Estimates

I usually place a timebox around initial estimates. This forces quick thinking on the engineer’s side. I try to give them the opportunity to blurt out a series of RAID items, to balance the intrinsic urge to give a short estimate against the reality that there are unknowns that will make a short estimate wrong. This timebox will typically be measured in minutes, not hours. Even under the duress of a very small timebox, I find these estimates are usually reasonably accurate, particularly when they carry the caveats of risks and assumptions that are ultimately challenged.

There are a few prototypical estimates that I’ve seen engineers give out time and again. Below are my general interpretation of each estimate and the refinement steps I usually take; these steps fit into the timebox described above.

  • “The task will take between 2 days and 2 months” – Low accuracy, low information – Start with the 2 day estimate and identify RAID items that push to 2 months.
  • “The task will take up to 3 weeks” – Unknown accuracy, no lower bound – Ask for a no-earlier-than estimate, and identify RAID items.
  • “The task is about 2 weeks” – Likely lower bound, optimistic – Identify RAID items: what could go wrong.

Agree? Disagree? Have an alternative view or opinion?  Leave comments below.

If you are interested in articles on Management, Software Engineering or any other topic of interest, you can contact Matthew at tippettm_@_gmail.com via email,  @tippettm on twitter, Matthew Tippett on LinkedIn, +MatthewTippettGplus on Google+ or this blog at https://use-cases.org/.

Desk-Checks, Control Flow Graphs and Unit Testing

Recently, during a discussion on unit testing, I made an offhand comment about how unit testing is like desk-checking a function. That comment was met with a set of blank stares from the room. It seems desk-checking is no longer something that is taught in comp-sci education these days. After explaining what it was, I felt like the engineers in the room were having the same moment I had, just after entering the field, when a senior engineer would talk about their early days with punch cards. I guess times have changed…

Anyway…

What followed was a very interesting discussion on what Unit Testing is, why it is important, and how Mocking fills in one of the last gaps in function-oriented testing. Through this discussion, I had my final Unit Testing light bulb moment: it all came together and went from an abstract best practice to an absolutely sane and necessary one. This article puts out a unified view on what Unit Testing is and is not, and how one can conceptualize unit tests.

Continue reading “Desk-Checks, Control Flow Graphs and Unit Testing”