Twenty years ago, well-known researchers identified unreliable data as the main problem of portfolio management in new product development. Yet hardly any thorough solution has been found since to improve the quality of data in portfolio management. Meanwhile, the need for research on data quality has only increased, given the cost of low-quality data to society: unreliable data cost the US economy around $3.1 trillion in 2016. This post contributes to a solution by providing organisations with a six-step guide for measuring the accuracy of data and for taking the level of accuracy into account in portfolio decisions.
Organisations typically develop products via a product innovation process that includes three stages: ideation, development, and launch. This post focusses on the ideation phase because portfolio management is known to be most important in this first phase of the process. In the ideation phase, ideas are generated and selected for the subsequent development phase. The main goal of the ideation phase is to reduce fuzziness, which is linked to data quality because low data quality results in uncertainty and, accordingly, fuzziness.
The construct of data quality is divided into many dimensions; the most commonly used are timeliness, completeness, accuracy, accessibility, and relevancy. The accuracy dimension is described as an all-encompassing dimension, since early studies on data quality took only this dimension into account. It is therefore common practice to use the term accuracy when referring to whether data is correct. Because the overall construct of data quality is too comprehensive, this post focusses on the accuracy dimension, referred to in the remainder of this post as data accuracy.
Data Accuracy Defined
Data accuracy is defined as the extent to which the value of the data represents the true value of the attribute in the real world. This definition shows that assessing accuracy requires a test against the real-life object, also known as an objective approach to measuring data accuracy. However, at the beginning of the new product development process, most data are forecasts of future conditions. For such data, a test against the real-life object is not possible, so an alternative approach is needed for data in the ideation phase.
This alternative is a subjective approach, which measures accuracy via a survey among the data collectors or data consumers. Because the level of accuracy is determined by the people who fill in the survey, it depends heavily on their knowledge and skills. To make the measurement as reliable as possible, sub-dimensions can be used. The following five sub-dimensions of accuracy are derived from earlier research on data accuracy and from interviews with large Dutch high-tech organisations.
- Believability. The extent to which data is regarded as true, correct, and credible.
- Coherence. The extent to which data is focused on one topic or one real-world object.
- Consistency. The extent to which data is consistent with related data.
- Objectivity. The extent to which data is objectively collected, based on facts, and presents an impartial view.
- Reputation. The extent to which data are trusted or highly regarded in terms of their content or source.
New Product Portfolio Management
As discussed, one of the main purposes of the ideation phase is the selection of ideas for further development. This selection belongs to the construct of portfolio management. Although portfolio management is performed during all phases of the product innovation process, it is most important in the ideation phase: good portfolio management there prevents resources in subsequent phases from being wasted on ideas that will not succeed on the market. Ideas are selected on evaluation criteria (e.g. expected sales, technical feasibility, or time-to-market) that are based on data. Portfolio decisions are thus based on these evaluation criteria, and so rely on the data underlying them: evaluation data. Figure 1 shows the process from evaluation data to portfolio decisions. The data is collected by different people in the organisation. Prior to the portfolio meeting, the data is stored in the information system, which can be either advanced portfolio management software or a simple spreadsheet. Evaluation criteria are then calculated from the data and included in graphs. Lastly, the decision makers interpret the graphs of evaluation criteria and decide which projects to transfer to the development portfolio.
Because many organisations struggle with the accuracy of their evaluation data, some already measure the accuracy of this data. Although this provides insight into the accuracy of the evaluation data, the methods used suffer from two problems: the measured accuracy level is unreliable because it depends heavily on the employee making the estimation, and there is a lack of insight into which data exactly causes a low accuracy. Both problems are addressed by the guide presented below.
Six Steps to Measure Data Accuracy
Because every organisation has a different product innovation process and approach to portfolio management, there is no single method that every organisation can use to measure data accuracy. Instead, this research has developed a six-step guide that supports organisations in implementing data accuracy measurements in the process from data collection to decision making. The six activities are depicted in figure 2 and explained below.
First, the data whose accuracy is to be measured has to be defined. This is done by making an inventory of the data to be assessed, which can be both qualitative and quantitative.
Second, the moment of assessment has to be determined. An appropriate moment is when the data is entered into the information system: at that moment the data is formalised and information about its source is expected to still be available. Moreover, this makes it possible to measure data accuracy again when data is added or updated, leading to a better estimation.
Third, sub-dimensions of accuracy have to be selected from the list discussed earlier, so that data accuracy is measured as reliably as possible.
Fourth, it has to be defined how the data is actually measured. A recommended method is a three- or five-point Likert scale, after which the sub-dimension scores are averaged into an overall accuracy level per evaluation criterion. Figure 3 shows a sketch of the screen in which the level of data accuracy can be measured.
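As an illustration, the averaging in this step could be sketched as follows. The five sub-dimensions and the five-point scale come from the lists above; the function name and data layout are hypothetical, not part of any prescribed tooling.

```python
# Sketch: averaging sub-dimension scores on a five-point Likert scale
# into an overall accuracy level for one evaluation criterion.

SUB_DIMENSIONS = ["believability", "coherence", "consistency",
                  "objectivity", "reputation"]

def overall_accuracy(scores: dict) -> float:
    """Average the Likert scores (1-5) of the assessed sub-dimensions."""
    for dim, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{dim}: score {score} outside the 1-5 scale")
    return sum(scores.values()) / len(scores)

# Example survey answers for one criterion, e.g. "expected market size":
scores = {"believability": 4, "coherence": 3, "consistency": 5,
          "objectivity": 2, "reputation": 4}
print(overall_accuracy(scores))  # 3.6
```

A three-point scale would work identically; only the range check changes.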
Fifth, after the measurement, the level of accuracy has to be stored in the information system together with the evaluation data. Since the assessment is done for every evaluation criterion (e.g. expected market size), the accuracy level has to be linked to the corresponding evaluation criterion so that data with a low level of accuracy can be traced back.
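A minimal sketch of such storage, assuming a simple record layout rather than any particular portfolio system's schema: each accuracy level is linked to a project and a criterion, so low-accuracy data can be traced back.

```python
# Sketch: storing accuracy levels alongside evaluation data so that
# low-accuracy values can be traced back per evaluation criterion.
# The record layout is an assumption, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class EvaluationRecord:
    project: str
    criterion: str      # e.g. "expected market size"
    value: float        # the evaluation data itself
    accuracy: float     # averaged sub-dimension score (1-5)

store = [
    EvaluationRecord("Project A", "expected market size", 2.5e6, 2.4),
    EvaluationRecord("Project A", "technical feasibility", 0.8, 4.2),
    EvaluationRecord("Project B", "expected sales", 1.1e6, 3.8),
]

# Trace back data with a low level of accuracy:
low = [(r.project, r.criterion) for r in store if r.accuracy < 3.0]
print(low)  # [('Project A', 'expected market size')]
```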
Sixth and last, the presentation of data accuracy has to be defined. Organisations can decide to present data accuracy in the graphs next to the evaluation criteria at a portfolio meeting, consider data accuracy as an additional evaluation criterion, or show, for example, warning messages in case of low data accuracy. During the interviews, organisations mentioned that in the preparation of a portfolio meeting, a low level of data accuracy gets more attention than a high level.
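The warning-message variant could be sketched like this; the threshold of 3.0 on a five-point scale is an assumed cut-off for illustration, not a value recommended by the research.

```python
# Sketch of the presentation step: flag low-accuracy criteria with a
# warning when preparing a portfolio meeting. The 3.0 threshold is an
# assumed cut-off, not a prescribed value.

def accuracy_label(criterion: str, accuracy: float,
                   threshold: float = 3.0) -> str:
    if accuracy < threshold:
        return (f"WARNING: '{criterion}' has low data accuracy "
                f"({accuracy:.1f}/5)")
    return f"'{criterion}': accuracy {accuracy:.1f}/5"

print(accuracy_label("expected sales", 2.4))
print(accuracy_label("time-to-market", 4.1))
```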
Implementing the Steps in the Process
Implementing the six steps causes two small additions to the process from data to decision making (figure 1), at the second and fourth action. At the moment the portfolio manager (or data collector) stores the evaluation data in the information system, the system asks to assess the data via several sub-dimensions of accuracy. A sketch of a potential input screen is shown in figure 3. After the data is assessed, the system calculates the overall level of accuracy. This overall level is presented when the corresponding evaluation criteria are retrieved from the system, for example at the portfolio meeting. The portfolio manager and other decision-makers can then take the level of accuracy into account when interpreting the evaluation criteria and making the portfolio decision.
How an organisation can assess the accuracy of data in the ideation phase depends on the organisation. As a result, every organisation should develop and implement its own approach to assess data accuracy. Although these approaches will differ per organisation, four main aspects of data accuracy assessment should be the same for every organisation. These are discussed below.
First, organisations should use a subjective approach to measure evaluation data in the ideation phase. A subjective approach measures data accuracy via a survey based on sub-dimensions of accuracy. Five sub-dimensions extracted from the literature can be used: believability, coherence, consistency, objectivity, and reputation. To arrive at an overall level of data accuracy, the scores on the sub-dimensions are averaged.
Secondly, organisations should assess the data at the moment it is saved in the information system. Empirical research showed that the process prior to this moment is unstructured, making earlier assessment unreliable or inefficient.
Thirdly, the level of data accuracy should be stored in the same information system as the evaluation data, making it easy to retrieve. Moreover, it should be linked not only to a particular project but also to the evaluation data itself, making low levels of data accuracy easy to trace back.
Fourthly, during the preparation of portfolio decisions, the presentation of data accuracy should focus on low levels of accuracy, giving portfolio managers an incentive to react to a low level of data accuracy. The empirical research showed that during the portfolio decisions themselves, decision makers want to be informed of both low and high levels of accuracy, presented next to the evaluation criteria.
To support organisations in developing and implementing accuracy assessment in the ideation phase, a guide has been developed, consisting of six actions and recommendations for these actions. The four aspects discussed above are incorporated in this guide. Several studies have already shown that taking the quality of data, including accuracy, into account leads to better decision outcomes. The six-step guide supports organisations in exploiting this opportunity, leading to more accurate evaluation criteria and better decision making.