Debunking 3 myths in portfolio management
Just before my summer holiday, we ran a very successful expert meeting with German portfolio managers in Düsseldorf. The audience was really enthusiastic about my presentation on myths in portfolio management, so I decided to blog about them right after my return.
Here they are (in my favourite order):
- “Portfolio management is a well-defined top-down process.”
- “A good portfolio review is a detailed project-by-project assessment.”
- “A standardized portfolio report is the main ingredient of the portfolio management process.”
Let’s go over them one by one and see if you agree these are myths indeed.
A well-defined, top-down process?
Reading about portfolio management, it looks like a simple process:
- top management defines the strategic goals and turns them into selection criteria for value maximization and portfolio balance
- all running and proposed projects regularly submit their plans and actuals and properly match them against these criteria
- the best match at the portfolio level is selected and resource allocations are made accordingly
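As a toy illustration (not from the original post), this textbook version of the process could be sketched in a few lines of Python. All project names, criteria, weights, and the budget below are made up for the sake of the example:

```python
# Toy sketch of the "textbook" top-down selection process.
# All project data, criteria, and weights are illustrative.

projects = [
    # (name, cost, scores per criterion)
    ("Project A", 40, {"fit": 8, "value": 9, "balance": 5}),
    ("Project B", 30, {"fit": 6, "value": 7, "balance": 8}),
    ("Project C", 50, {"fit": 9, "value": 6, "balance": 6}),
    ("Project D", 20, {"fit": 4, "value": 8, "balance": 9}),
]

# Step 1: top management fixes the criteria weights upfront.
weights = {"fit": 0.5, "value": 0.3, "balance": 0.2}

# Step 2: every project is scored against the same fixed criteria.
def total_score(scores):
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(projects, key=lambda p: total_score(p[2]), reverse=True)

# Step 3: allocate the budget greedily to the best-scoring projects.
budget = 100
selected = []
for name, cost, scores in ranked:
    if cost <= budget:
        selected.append(name)
        budget -= cost

print(selected)  # ['Project A', 'Project C']
```

Note that this rigid, score-and-rank selection is exactly what the next paragraph criticizes: the weights are locked in upfront, interactions between projects are ignored, and anyone who knows the formula can tune their proposal to it.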
In reality, the first step is already flawed, because strategy is not defined in isolation or as a purely top-down process. It also includes a bottom-up component (read about this topic, known as “emergent strategy”, in the work of Mintzberg, or about dynamic strategy). In addition, executives may not want to (or even be able to) provide black-and-white priority rules, as such rules bring about a myopic view of the portfolio. They also invite so-called “gaming of the system”, where project proposals are optimized to meet the criteria rather than to create value.
The relevance of different criteria for value maximization and for balance depends on the specific set of projects at hand. They cannot, in reality, be assessed and agreed upon completely upfront. Even when the same executives insist on standard portfolio reports (as per myth number 3…).
The reality I see is that portfolio management is a mix of top-down and bottom-up information flows, with an interactive and iterative character, where the content of the discussion is highly situational. Some might call this a “chaotic” process.
Detailed project reviews or alternative scenarios?
In my previous post about boring portfolio reviews, I already elaborated on this myth. Portfolio reviews should be organized to get the best portfolio decisions. This means they should be about comparing alternative portfolios, with backup details about the underlying projects.
Standardized portfolio reports?
The most practical myth I encounter is the suggestion that a standard portfolio report, or a standard bubble plot, or a standard set of KPIs, is the main ingredient of the process. Of course, standardized KPIs are needed where projects and portfolios are compared to each other. They also serve as a completeness check (so we do not overlook any of the main dimensions of the portfolio). However, with only standard KPIs, the story behind any specific portfolio decision is usually not well covered. The specific context as well as the content of the portfolio options at the moment of the decision need to be included.
In short, standard reports are necessary but not sufficient for good portfolio management. Situational (or “ad-hoc”) analysis is at least as important. Collecting and presenting just the standard KPIs is not good enough for good portfolio decisions. When deciding between alternative portfolios, one can never be sure upfront which dimensions (or KPIs) best show the meaningful differences between the options. This requires processes and tools that support such situational analysis as well, offering rich context and detail, not just abstractions.
Myths, yes or no?
I encounter each of these myths in portfolio management initiatives at client organizations. Because they can misdirect improvement efforts, I think they are dangerous and should be debunked to make real progress. Would you agree?