As a sequel to my previous post, I have been collecting some more myths about portfolio management. These myths, or pitfalls, provide powerful but false guidance in many portfolio management implementations, so recognizing and avoiding them will help improve your portfolio decisions.
Here are myths number 4, 5, and 6:
- “Portfolio management is rank-ordering and then selecting projects.”
- “Which portfolio management: project or product portfolio management?”
- “Since the devil is in the details, we need project details.”
Let me go over this list…
Rank ordering: the simplest project selection?
A lot of basic reading material about portfolio management oversimplifies this decision-making process into just project selection. The myth goes along these lines: if we combine all aspects of project value or “goodness” into a score and then sort the projects according to this overall score, we just have to pick as many projects from the top of this list as we can fit in our budget.
This notion that a single “value” or total score indicator can be used to capture the variety of requirements on an innovation portfolio makes no practical sense. The combination of strategic and financial value, the feasibility aspects, and balance over different criteria and time lines cannot be captured in a meaningful single score. At the very least, project selection should take into account optimization over these different dimensions independently.
If this myth prevails in your organization, you may be facing endless discussions about the relative weights of financial value creation, strategic market access, technical feasibility, and so on. The key to portfolio management is to discuss the trade-offs across these dimensions for a real set of options, not to define a (theoretical) universal trade-off upfront.
Now, even if a scoring mechanism is agreed upon and implemented (let’s say scoring on a scale of 1=low to 100=high), in most cases I have reviewed, the force-ranked list of projects tends to cluster most projects between 60 and 70. This is the automatic consequence of adding a number of scores from different sources on a preselected set of projects. Scores tend not to be much lower, since projects that are bad in multiple dimensions do not make it onto the list in the first place. Nor do projects score really high in all dimensions, since that is considered suspicious…
This usually means that the cutoff between selected and rejected projects sits at, say, 64. That implicitly assumes the scoring model is accurate enough to guarantee that any project A with score 65 is superior to any project B with score 63, no matter how those scores were built up.
The real way out of this myth is to admit that portfolio “goodness” is inherently multidimensional, and that trade-offs across these dimensions are a key ingredient in each portfolio decision-making round. Tools like bubble plots support this concept way better than force-ranked lists.
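To make the multidimensional view concrete, here is a minimal sketch (the project names and scores are illustrative, not from any real portfolio) of selecting the Pareto front of projects across several dimensions instead of force-ranking on one total score. A project only drops out if another project beats it on every dimension at once; everything else stays on the table for the trade-off discussion.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    financial_value: float   # e.g. expected-NPV score
    strategic_value: float   # e.g. market-access score
    feasibility: float       # e.g. technical-feasibility score

def dominates(a: Project, b: Project) -> bool:
    """True if a is at least as good as b on every dimension and strictly better on one."""
    dims = lambda p: (p.financial_value, p.strategic_value, p.feasibility)
    return all(x >= y for x, y in zip(dims(a), dims(b))) and dims(a) != dims(b)

def pareto_front(projects: list[Project]) -> list[Project]:
    """Projects not dominated by any other -- the candidates worth debating."""
    return [p for p in projects if not any(dominates(q, p) for q in projects)]

portfolio = [
    Project("A", 80, 40, 70),
    Project("B", 60, 75, 65),
    Project("C", 55, 35, 60),  # dominated by A on all three dimensions
]
print([p.name for p in pareto_front(portfolio)])  # -> ['A', 'B']
```

Note that a single weighted score would have ranked B below A or above it depending on arbitrary weights; the Pareto view keeps both and makes the trade-off explicit.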
Separating project and product portfolio management is dangerous
When I talk about strategic innovation portfolio management as a decision-making process, I need to explain that this covers both the project life-cycle and the product life-cycle of innovations. While it is obviously useful to separate the operational management of a project from that of a product over its life-cycle, it is inherently dangerous to separate the two in the decision flow. Put differently, both aspects of an innovation contribute to the essential management information needed in the portfolio process.
The project aspects cover lead times, effort, budgets, critical resources, and feasibility risks; the product aspects cover benefits (revenue or profit growth, cost reductions, new customers and installed base) and commercial risks. Separating these aspects into independent decision cycles unlinks the cost-focused decisions from the value decisions. Losing the cost-benefit trade-off is a major pitfall I have seen in innovation management.
In a well-run portfolio management process, the individual cases (whether they are called opportunities, programs, options, projects, or products or any other name) are holistically considered from a cost, benefits and risk perspective. This in turn drives the need to organize portfolio management as a multidisciplinary process where R&D, marketing, supply chain, finance and other functions jointly provide the “pieces of the puzzle” for the project and product life-cycle.
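As a toy illustration (the case record and its fields are my own sketch, not a prescribed data model), keeping project-side costs and product-side benefits in one case record makes the cost-benefit trade-off impossible to lose:

```python
from dataclasses import dataclass

@dataclass
class InnovationCase:
    name: str
    # project life-cycle aspects (the cost side)
    effort_cost: float        # total project budget
    lead_time_years: float
    feasibility_risk: float   # 0..1 probability of technical failure
    # product life-cycle aspects (the benefit side)
    lifetime_benefit: float   # expected value over the product life-cycle
    commercial_risk: float    # 0..1 probability of market failure

    def risk_adjusted_ratio(self) -> float:
        """Benefit-to-cost ratio, discounted by both risks --
        exactly the trade-off that disappears when the two cycles are split."""
        p_success = (1 - self.feasibility_risk) * (1 - self.commercial_risk)
        return (self.lifetime_benefit * p_success) / self.effort_cost

case = InnovationCase("Example case", effort_cost=2.0, lead_time_years=1.5,
                      feasibility_risk=0.2, lifetime_benefit=10.0, commercial_risk=0.25)
print(round(case.risk_adjusted_ratio(), 2))  # -> 3.0
```

In a split process, `effort_cost` and `feasibility_risk` would live in the project cycle and `lifetime_benefit` and `commercial_risk` in the product cycle, and no one could compute this ratio.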
The devil is in the details…
Building on the previous point, it is tempting to believe that portfolio decisions can only ever be made if we have detailed information on all aspects of project and product life-cycle. This myth is the main source of indecisive portfolio management processes, for two reasons:
- gathering more details takes more time (and requires more people with more overhead taking even more time…)
- since we deal with plans, details will change more frequently
This does not mean that all details are irrelevant (there is some truth in the saying). Looking at it from a decision quality perspective, a good process focuses on the details that are relevant. The question needs to be which project or product properties have a big influence on cost, value, or risk. Sensitivity analysis is the technique to reveal these properties. For these properties or details, a solid check should be run to see if they can be assessed with more precision (if not, why bother…). Then, a sanity check should be performed to see if the cost of gathering more detail is worth it. In decision analysis terms, the Value of Information should be assessed and weighed against the cost of collecting it.
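A minimal one-at-a-time sensitivity sketch shows the idea (the value model and input figures are purely illustrative): swing each input in isolation and rank inputs by how much they move the outcome. Only the details at the top of the list deserve further data gathering.

```python
def npv(price: float, volume: float, unit_cost: float, dev_cost: float) -> float:
    """Toy one-period value model: unit margin times volume, minus development cost."""
    return (price - unit_cost) * volume - dev_cost

base = dict(price=100.0, volume=50_000, unit_cost=70.0, dev_cost=800_000)

def sensitivity(model, base: dict, swing: float = 0.2) -> list[tuple[str, float]]:
    """Swing each input +/-20% while holding the others at base values.
    The widest output ranges mark the details actually worth refining."""
    ranges = {}
    for key in base:
        lo = model(**{**base, key: base[key] * (1 - swing)})
        hi = model(**{**base, key: base[key] * (1 + swing)})
        ranges[key] = abs(hi - lo)
    return sorted(ranges.items(), key=lambda kv: -kv[1])

for name, width in sensitivity(npv, base):
    print(f"{name:10s} {width:>12,.0f}")
# price dominates here: refining the price estimate is worth time;
# refining dev_cost barely moves the answer.
```

This is the cheap first pass; a full Value of Information analysis would then price the reduction in uncertainty against the cost (and delay) of collecting it.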
Remember that the biggest cost of gathering more details may not be in the data gathering itself, but in the opportunity cost of the delayed decision. Maybe that is why the saying is about the devil being in the details: it is tempting to get trapped in a cycle of requiring more detail before deciding.
Do you have any more portfolio management myths to add to this list? Let me know.