I’m on IPv6, are you?

After reconfiguring the lounge room to move the TV to another corner, I had to do a little bit of rewiring.  As luck would have it, when I reset the Residential Gateway (RG) it didn’t come back.  As part of replacing the RG, I thought I’d dig a bit deeper into getting IPv6.

So after:

  • a call out to ATT,
  • one missed appointment,
  • a visit from an ATT tech,
  • a new RG,
  • another visit from a tech,
  • a fixed outside line,
  • another call to ATT,
  • an email to customer service,
  • seven emails with customer service,
  • a new visit from a tech, and
  • a new RG,

I am finally on the other internet.
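
If you want to run a quick check of your own, the simplest test I know of is whether the machine can open a TCP connection over IPv6 only. A minimal Python sketch (the host name is just an illustrative dual-stack choice):

    import socket

    def has_ipv6_connectivity(host="ipv6.google.com", port=80, timeout=5):
        """Return True if a TCP connection can be made over IPv6 only."""
        try:
            # AF_INET6 forces IPv6; failure here means no AAAA record or no IPv6 DNS.
            infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False
        for family, socktype, proto, _, sockaddr in infos:
            try:
                s = socket.socket(family, socktype, proto)
                s.settimeout(timeout)
                s.connect(sockaddr)
                s.close()
                return True
            except OSError:
                continue
        return False

    print("IPv6 works" if has_ipv6_connectivity() else "still IPv4 only")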


Continue reading “I’m on IPv6, are you?”

Getting a Remote Running with Kodi under Ubuntu

[UPDATE] I’ve aborted this effort after a rework of the home network.  I’ve ultimately opted for a Raspberry Pi Complete Starter Kit, with NOOBS and Raspbmc.  It worked with the remote out of the box, and now I’m up and running.

I use Kodi (formerly XBMC) as a home media system.  It was set up and working nicely with a Harmony One remote control.  For a few reasons, I updated some packages on the Ubuntu 12.04 system and ended up on Ubuntu 14.10.  As part of this upgrade, XBMC became Kodi.

Unfortunately, Kodi now recognizes only some of the remote commands.  Left, right, up, down and play work well.  Others (OK, Back, etc.) don’t.  Debugging and resolving this has proven to be a much bigger challenge than it has been in the past, where I have worked things out without too many hoops.  Unfortunately, both Linux and Kodi have changed quite a bit over the years, and a lot of the online documentation, blog posts and so on are out of date.

My modus operandi is less about hacking things heavily and more about finding the simplest path to get things working.  I’d rather remove a package than heavily configure and customize a set of files.   This is my way of seeing whether I can get things to “Just Work”.

This blog post is intended both as a narrative on how I got things going and as a current reference for people who run into this sort of problem.  Read on for more details on what I have done.  This post will carry a lot of open questions as I resolve issues, and will be updated over time.

Out of the box, we have a remote where up/down/left/right and pause work.   However, the OK, Back and other buttons don’t.
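
Before touching Kodi keymaps, the first thing I want to know is whether the missing buttons ever reach userspace at all. A rough sketch of that check (assuming the third-party python-evdev package, and a made-up device path you would replace with your receiver’s):

    # pip install evdev  (third-party package; needs read access to /dev/input)
    import evdev

    # List input devices so you can spot the IR/RF receiver.
    for path in evdev.list_devices():
        print(path, evdev.InputDevice(path).name)

    # Replace with the path printed above for your receiver.
    receiver = evdev.InputDevice("/dev/input/event3")

    # Print every key-down event. A button that shows nothing here is being
    # dropped before it reaches userspace, so it is not a Kodi keymap problem.
    for event in receiver.read_loop():
        if event.type == evdev.ecodes.EV_KEY and event.value == 1:
            print(evdev.categorize(event))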

Continue reading “Getting a Remote Running with Kodi under Ubuntu”

Finally – a Granted Patent – US 8838868

After many months (and many years), I finally have a patent credited to my name.  The patent is for an extension of magnetically coupled connectors (such as the MagSafe connectors on Macs) to carry diagnostic information through the magnets themselves.   Of course, watching how the sausage is made has been eye-opening on a number of fronts.  Read on for a quick review of the inspiration behind the patent and some observations on how things went down.


Continue reading “Finally – a Granted Patent – US 8838868”

New #bymoonlight images

New images, with a supermoon sunrise.

Photos by Moonlight (#bymoonlight)

Recently I have become fascinated by photos #bymoonlight.  All of these images are captured directly from the camera, with no pushing, processing or anything else.  The moon provides sufficient light, and the combination of ambient light, stars and moonlight makes for a very ethereal feel.  Each full moon, I will be out taking photos.

Here are the first few that I am posting.

ROI for Engineers

A short-form presentation on how engineers can easily make judgements on Return on Investment. Also on SlideShare.

Gamifying the Workplace: Badges IRL with 3D Printing

[Image: Heartbleed badge]

We’ve seen badges and gamification appear in everything from a core business plan (Foursquare & Gowalla) to navigation apps (Waze).  I’ve seen them on user homepages at at least two companies.  Badges help get people engaged by bringing together groups with common interests and driving involvement in tasks they might not otherwise take on.  You look up a colleague and find they’ve done something similar to you.

The problem with the virtual badges is that they are too cheap to make (effectively free to create a new one) and only appear when you go to the employee’s homepage.  Having played with 3D printing, I realized that you could make these badges in real life and bring a bit of physical interest to the workplace, applying the same rules.    With a few minutes on an online 3D modeling tool, an online 3D printing service, and finally a magnet and some super glue, you can easily end up with a full-color sandstone badge.

Continue reading “Gamifying the Workplace: Badges IRL with 3D Printing”

3D Printing Board Game Pieces

For Christmas I bought Robot Turtles for the family. It’s a great game, and very easily customizable for different skill levels.

Although the cards are nice, I thought it would be fun to go from the cardboard playing pieces to 3D prints from ShapeWays.  I came across a turtle model on ShapeWays and contacted the designer asking if it could be extended to support the game’s color schemes (and possibly the addition of lasers on the back).  A few weeks later, a TMNT variant of the turtles appeared in his shop.  On my next ShapeWays purchase, I included the little turtles in the order. Read on for pictures of the turtles.

Continue reading “3D Printing Board Game Pieces”

High Confidence/Low Information vs High Accuracy/Low Information Estimates

Quite often an estimate is needed when there is low information but high confidence is required.  For a lot of engineers, this presents a paradox.

How can I present a high confidence estimate, when I don’t have all the information?

Ironically, this issue is solved fairly easily by noting the difference between a high confidence and a high accuracy estimate.  A high confidence estimate is defined by the likelihood that a task will be completed within a given timeframe, while a high accuracy estimate provides a narrowly prescribed level of effort to complete the task.  This article presents a method for producing a high confidence estimate while balancing analysis effort against accuracy.

This is a refinement on the “Getting Good Estimates” posting from 2011.

The Estimation Model

The basis for this method is captured in the diagram below. The key measures on the diagram are:

  • Confidence, the likelihood that the task will be completed by a given date (Task will be complete in 15 days at 90% confidence)
  • Accuracy, the range of effort for an estimate (Task will be complete in 10-12 days)
  • No Earlier Than, the absolute minimum effort for a task.

[Diagram: estimate confidence and accuracy]

In general, I never accept a naked estimate of a number of days. An estimate given as a range will usually imply a confidence level. An estimate given at a confidence level may or may not need an indication of accuracy, depending on the context for the estimate.

Gaming out the Estimate

As a refinement to the method outlined in Getting Good Estimates, the same technique of calling out numbers can be used to pull an estimation curve out of an engineer.  The method follows the same iterative approach: by repeatedly asking, “What is the confidence that the task will be complete by date xxx?”, you will end up with results similar to the following.

  • What’s the lowest effort for this task?  2 weeks.
  • What’s the likelihood it will take 20 weeks?  100% (usually said very quickly and confidently).
  • What’s the likelihood it will take 10 weeks?  95% (usually accompanied by a small pause for contemplation).
  • What’s the likelihood it will take 5 weeks?  70% (usually with a rocking head indicating reasonable confidence).
  • What’s the likelihood it will take 4 weeks?  60%.
  • What’s the likelihood it will take 3 weeks?  30%.
  • What’s the likelihood it will take 2 weeks?  5%.

That line of questioning would yield the following graph.

[Graph: worked estimate]

I could then make the following statements based on that graph.

  • The task is unlikely to take less than 2 weeks (the no-earlier-than point).
  • The task will likely take between 4 and 8 weeks (50-90% confidence).
  • We can be confident that the task will be complete within 8 weeks (90% confidence).
  • Within a project plan, you could apply PERT (O=2, M=4 [50%], P=8 [90%]) and put in 4.3 weeks; the arithmetic is worked through in the sketch below.
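
To make that last bullet concrete, here is a minimal sketch of the arithmetic in Python (my own illustration; the helper names and the shape of the data are assumptions, not part of the method itself):

    # Answers elicited from the engineer: weeks -> confidence the task is done by then.
    answers = {2: 0.05, 3: 0.30, 4: 0.60, 5: 0.70, 10: 0.95, 20: 1.00}

    no_earlier_than = min(answers)  # 2 weeks: the "no earlier than" point

    # 50% point: the first answer at or above 50% confidence (4 weeks here).
    fifty_pct = min(w for w, c in answers.items() if c >= 0.50)

    # The 90% point quoted above (8 weeks) is read off the fitted curve,
    # which sits between the raw 5-week (70%) and 10-week (95%) answers.
    ninety_pct = 8

    # Accuracy, as measured later in the post: the spread from 50% to 90%.
    accuracy_spread = ninety_pct - fifty_pct  # 4 weeks

    # Classic PERT weighting: (optimistic + 4 * most likely + pessimistic) / 6
    o, m, p = no_earlier_than, fifty_pct, ninety_pct
    pert = (o + 4 * m + p) / 6
    print(f"PERT estimate: {pert:.1f} weeks")  # 4.3 weeks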

Based on the estimate, I would probably dive into the delta between the 4 and 8 weeks. More succinctly, I would ask the engineer, “What could go wrong that would cause the 4 weeks to blow out to 8 weeks?”   Most engineers will have a small list of items that they are concerned about, from code/design quality and familiarity with the subsystem to potentially violating performance or memory constraints.  This information is critically important because it kick-starts your Risks and Issues list (see a previous post on RAID) for the project.  A quick and simple analysis of the likelihood and impact of the risks may highlight an explicit risk mitigation or issue corrective action task that should be added to the project.

I usually do this sort of process on the whiteboard rather than formalizing it in a spreadsheet.

Shaping the Estimate

Within the context of a single estimate, I will usually ask some probing questions in an effort to get more items into the RAID information. After asking the questions, I’ll typically re-shape the curve by walking the estimate confidences again.   The typical questions are:

  • What could happen that would shift the entire curve to the right (effectively moving the No Earlier Than point)?
  • What could we do to make the curve more vertical (effectively mitigating risks, challenging assumptions or correcting issues)?

RAID and High Accuracy Estimates

The number of days on the curve from 50% to 90% is what I am using as my measure of accuracy.  So how can we improve accuracy?  In general, by working the RAID information to Mitigate Risks, Challenge Assumptions, Correct Issues, and Manage Dependencies. Engineers may use terms like “proof of concept”, “research the issue”, or “look at the code” to help drive the RAID.  I find it is more enlightening for the engineer to actually call out their unknowns, thereby making it a shared problem that other experts can help resolve.

Now, the return on investment for working the RAID information needs to be carefully managed.  After a certain point the return on deeper analysis begins to diminish and you just need to call the estimate complete.  An analogy I use is getting an electrician to quote on adding a couple of outlets, then having the electrician check the breaker box and trace each circuit through the house.   Sure, it may make the estimate much more accurate, but you quickly find that the estimate refinement is eating seriously into the time the task would take anyway.

The level of accuracy needed for most tasks is a range of 50-100% of the base value.  In real terms, I am comfortable with estimates with an accuracy of 4-6 weeks, 5-10 days and so on.  Throw PERT over those and you have a realistic estimate that will usually be reasonably accurate.
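
For example, treating a 4-6 week estimate as O=4, M=5, P=6 gives a PERT value of (4 + 4×5 + 6) / 6 = 5 weeks; how the middle value is chosen for a range is my own reading, since the post doesn’t prescribe it.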

RAID and High Confidence Estimates

The other side of the estimation game deals with high confidence estimates.  This is a slightly different kind of estimate that is used in roadmaps, where there is insufficient time to determine an estimate with a high level of accuracy.  The RAID information is used heavily in this type of estimate, albeit in a different way.

In a high confidence estimate, you are looking for something closer to “No Later Than” rather than “Typical”.  A lot of engineers struggle with this sort of estimate since it goes against the natural urge to ‘pull rabbits out of a hat’ with optimistic estimates.  Instead, you are playing a pessimistic game where an unusually high number of risks become realized into issues that need to be dealt with.  By baking those realized risks into the estimate, you can provide high confidence estimates without a deep level of analysis.
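
One way to picture that baking-in, as a rough sketch rather than a formula from the post (the risk list, impacts and weighting are all illustrative assumptions):

    # Hypothetical RAID-style risk list: (description, impact in weeks if realized).
    risks = [
        ("unfamiliar subsystem", 2),
        ("performance constraints violated", 1),
        ("design rework after review", 1),
    ]

    typical_weeks = 4

    # Pessimistic stance: assume most of these risks turn into real issues.
    # The 0.75 weighting is an illustrative choice, not a rule from the post.
    no_later_than = typical_weeks + 0.75 * sum(impact for _, impact in risks)

    print(f"High confidence (no later than): about {no_later_than:.0f} weeks")  # about 7 weeks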

In the context of the Cone of Uncertainty, the high confidence estimate will always be slightly on the pessimistic side.   This leaves a sufficient hedge against something going wrong.

[Diagram: high confidence estimate within the cone of uncertainty]

If there is a high likelihood that a risk will become realized or an assumption is incorrect, it is well worth investing a balanced amount of effort to remove those unknowns.  It tightens the cone of uncertainty earlier and allows you to converge faster.

Timeboxing and Prototypical Estimates

I usually place a timebox around initial estimates.  This forces quick thinking on the engineer’s side.  I try to give them the opportunity to blurt out a series of RAID items to help balance the intrinsic need to give a short estimate against the reality that there are unknowns that will make that short estimate wrong.  This timebox will typically be measured in minutes, not hours.  Even under the duress of a very small timebox, I find these estimates are usually reasonably accurate, particularly when the estimates carry caveats for the risks and assumptions that are ultimately challenged.

There are a few prototypical estimates that I’ve seen engineers give out many times.  Below is my general interpretation of each, and the refinement steps I usually take.  These steps fit into the timebox I describe above.

  • “The task will take between 2 days and 2 months” (low accuracy, low information): start with the 2-day estimate and identify the RAID items that push it out to 2 months.
  • “The task will take up to 3 weeks” (unknown accuracy, no lower bound): ask for a no-earlier-than estimate, and identify RAID items.
  • “The task is about 2 weeks” (likely a lower bound, optimistic): identify RAID items, i.e. what could go wrong.

Agree? Disagree? Have an alternative view or opinion?  Leave comments below.

If you are interested in articles on Management, Software Engineering or any other topic of interest, you can contact Matthew at tippettm_@_gmail.com via email,  @tippettm on twitter, Matthew Tippett on LinkedIn, +MatthewTippettGplus on Google+ or this blog at https://use-cases.org/.

Desk-Checks, Control Flow Graphs and Unit Testing

Recently, during a discussion on unit testing, I made an inadvertent comment about how unit testing is like desk-checking a function.  That comment was met with a set of blank stares from the room.    It looks like desk-checking is no longer something that is taught in comp-sci education these days.  After explaining what it was, I felt like the engineers in the room were having the same moment I had when a senior engineer would talk about their early days with punch cards, just after I entered the field. I guess times have changed…

Anyway…

What followed was a very interesting discussion on what Unit Testing is, why it is important, and how Mocking fills in one of the last gaps in function-oriented testing.  Through this discussion, I had my final Unit Testing light-bulb moment: it all came together and went from an abstract best practice to an absolutely sane and necessary one.  This article puts out a unified view of what Unit Testing is, what it is not, and how one can conceptualize unit tests.
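
As a small illustration of that view (my own toy example, not code from the discussion): a desk-check walks each path of a function’s control flow graph on paper, a unit test walks the same paths executably, and a mock stands in for the collaborator you can’t run at your desk.

    import unittest
    from unittest import mock

    def retry_fetch(fetch, retries=3):
        """Call fetch() until it succeeds or the retries are exhausted."""
        for attempt in range(retries):
            try:
                return fetch()
            except IOError:
                if attempt == retries - 1:
                    raise

    class RetryFetchDeskCheck(unittest.TestCase):
        # Path 1 through the control flow graph: success on the first call.
        def test_succeeds_first_time(self):
            fetch = mock.Mock(return_value="data")
            self.assertEqual(retry_fetch(fetch), "data")
            self.assertEqual(fetch.call_count, 1)

        # Path 2: one failure, then success on the retry.
        def test_recovers_after_failure(self):
            fetch = mock.Mock(side_effect=[IOError("boom"), "data"])
            self.assertEqual(retry_fetch(fetch), "data")
            self.assertEqual(fetch.call_count, 2)

        # Path 3: every attempt fails, so the final error propagates.
        def test_raises_when_exhausted(self):
            fetch = mock.Mock(side_effect=IOError("boom"))
            with self.assertRaises(IOError):
                retry_fetch(fetch, retries=3)
            self.assertEqual(fetch.call_count, 3)

    if __name__ == "__main__":
        unittest.main()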

Continue reading “Desk-Checks, Control Flow Graphs and Unit Testing”