If you’re like most Treasurers, your cash forecast is probably wrong. This shouldn’t be a surprise (it is, after all, only a forecast), and if you are happy with the current level of accuracy, then it’s not an issue. However, if you want increased accuracy to make better decisions, you need a robust process for monitoring and understanding forecast variances – only with this discipline can you identify how to improve your forecasts going forward.
In this article in the ‘cash forecast basics’ series, we’ll look at the challenges involved in tackling variance analysis. If you’re experiencing significant differences between forecast and actual, have low confidence in explanations given, and little faith that anything will improve, then read on!
For a comprehensive variance analysis framework, core questions you should consider include:
- Measurement Horizon – over what horizon(s) should you monitor and report?
- Accountability Model – how should you classify variances, in order to create insight and drive behaviours?
- Enabling Insight – does your reporting structure facilitate the analysis of variances?
- Reporting Clarity – is your commentary clear, concise and meaningful, so that problem areas and solutions can be identified?
These are examined below.
Many organisations only regularly monitor immediate horizon variances (e.g. actual vs. previous day). This provides no assurance over the remaining forecast horizon, and can also give a misleading impression of forecast performance, since the near-term should be relatively predictable anyway.
In theory, any forecast should extend out to the horizon required for effective business decision-making – and logically this should also be the horizon for variance reporting, to validate the reliability of the data being used. If the forecast data is not actively used, why use precious resources to produce it? Conversely, where companies hold large cash reserves, real benefit can be gained from extending your investment horizon, even in a low interest environment.
Understanding the ‘accuracy horizon’ of your forecast is therefore vital, to establish confidence limits in the data, and identify areas for remedial focus. This can be assessed by tracking a forecast flow from original forecast through to final outcome. Typically the profile will show the following characteristics (see Figure 1):
- ‘Estimate’ Period – the initial forecast is often estimated (e.g. based on historical patterns). This estimate may remain unchanged during the next few forecast updates, as there is no new information available, and therefore the forecast vs. forecast variance will be zero
- ‘Uncertainty’ Period – as the forecast horizon reduces, there may be new information, e.g. sales orders, which allow the original estimate to be updated – although it is not yet certain, and there may be forecast vs. forecast volatility
- Accuracy Horizon – at some point (this will vary by line item), there is usually sufficient data, e.g. AP invoices, for the forecast to be predicted with high accuracy – forecast vs. forecast variances will again reduce. It is this horizon which enables reliable decision-making.
Figure 1 – Source & Copyright © 2018 Staples Consultancy
The analysis can be time-consuming, although some forecast applications provide this functionality. You will obtain more insight by performing it at line item level, as this allows focus on flows that show more short-term forecast volatility.
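For those without an application that provides this analysis, the underlying logic is straightforward to sketch. The snippet below is a minimal, hypothetical illustration (the figures and the 5% tolerance are invented for the example, not taken from this article): it walks one line item’s forecast snapshots from the furthest horizon to the nearest, and returns the horizon from which the forecast settled within tolerance of the final actual.

```python
# Hypothetical example: forecast values for the SAME target week, taken at
# 8, 7, ... 1 weeks out, plus the eventual actual. Figures are illustrative.
snapshots = {8: 500, 7: 500, 6: 500, 5: 460, 4: 530, 3: 480, 2: 495, 1: 498}
actual = 500.0

def accuracy_horizon(snapshots, actual, tolerance=0.05):
    """Return the furthest-out horizon (weeks before the target date) from
    which every subsequent forecast revision stayed within `tolerance` of
    the final actual, or None if the forecast never settled.
    Assumes `actual` is non-zero."""
    horizon = None
    for weeks_out in sorted(snapshots, reverse=True):  # furthest out first
        error = abs(snapshots[weeks_out] - actual) / abs(actual)
        if error <= tolerance:
            if horizon is None:
                horizon = weeks_out  # start of a within-tolerance run
        else:
            horizon = None  # run broken: the forecast drifted out again
    return horizon
```

Note how the early ‘estimate’ period can look accurate by coincidence; the function deliberately discards any within-tolerance run that is later broken, so only the stable run nearest the target date counts. In practice this would be run per line item, since the accuracy horizon of, say, supplier payments will differ from that of customer receipts.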
The way variances are classified has real influence on behaviours – incentivising better forecasting, and focussing attention on areas of importance. Two contrasting approaches, which provide different perspectives on the underlying causes, and support different accountability models, are outlined below.
Acceptable vs. Unacceptable (see Figure 2) – this approach puts the accountability focus on ‘unacceptable’ (or ‘controllable’) variances. ‘Acceptable’ (or ‘uncontrollable’) variances are still reported, but excluded for KPI (and reward) purposes. This is inherently ‘fair’ and easy to understand, and should therefore increase business engagement, provided the framework addresses some key ‘human element’ challenges:
- Is there clarity and consensus on the definition of ‘acceptable’ variances?
- Where systemic errors are identified, is accountability correctly assigned? For example, a problem with system data in relation to supplier payments may be an IT issue, or poor invoice processing in Finance
- Are there controls to prevent misclassification or ‘gaming’?
- How is the accountability integrated into overall incentivisation? For example, is there a risk that individuals could be double-penalised for an unacceptable cash variance that overlaps with a P&L item they are also measured on?

Because of the potential complexity, this accountability framework is more commonly used in relation to shorter-term horizons.
Figure 2 – Source & Copyright © 2018 Staples Consultancy
Permanent vs. Timing – unlike the above, this approach inherently supports integrated accountability across cash and P&L. By defining ‘permanent’ cash variances as those resulting from changes to the P&L forecast (and capital expenditure where significant), the remaining variances must be ‘timing’, i.e. arising from changes in cash drivers such as DSO. Since P&L/capex variances are included in operational performance measurement, ‘permanent’ cash forecast variances should be excluded from incentivisation, to avoid double-counting. Instead, cash-related accountability should focus on the timing variances, as a measure of how well cash flows are managed.
The main practical issue is ensuring there is a mechanism that allows permanent variances to be accurately quantified – a spreadsheet that uses P&L and cash driver inputs to model the various cash outcomes will help. This approach is generally used for medium-term and longer forecast horizons.
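As a minimal illustration of the quantification mechanism (the figures and the single-period receipts model are invented for the example, not taken from this article), the split can be expressed as: total cash variance, less the element explained by the revised P&L, leaves the timing element.

```python
# Hypothetical sketch of a permanent vs. timing split for a receipts line.
# Assumption: for a single period, the cash impact of a P&L change flows
# through one-for-one; all residual variance is attributed to timing (DSO etc.).

def split_variance(forecast_sales, actual_sales,
                   forecast_receipts, actual_receipts):
    """Split the total cash variance into a 'permanent' element driven by
    the P&L (sales) change and a residual 'timing' element."""
    total = actual_receipts - forecast_receipts
    # Permanent: the part of the cash variance explained by the revised P&L.
    permanent = actual_sales - forecast_sales
    # Timing: whatever remains is attributed to collection timing.
    timing = total - permanent
    return {"total": total, "permanent": permanent, "timing": timing}

result = split_variance(
    forecast_sales=1_000, actual_sales=950,       # P&L came in 50 lower
    forecast_receipts=1_000, actual_receipts=920  # cash came in 80 lower
)
# permanent = -50 (P&L shortfall), timing = -30 (slower collections)
```

Under this model, only the -30 timing element would count towards cash-related incentivisation; the -50 permanent element is already captured in P&L performance measurement.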
Whatever the accountability model, the related KPIs/targets must be seen as fair, or it will reduce buy-in and forecast performance. At the same time, there should be a clear ownership framework, i.e. each variance should have a single owner, to ensure accountability and avoid dilution of responsibility.
This dimension covers a number of important design considerations, which will impact on the supporting process infrastructure, including:
- Materiality – what should be the limit, and how should it be defined?
- Too low a threshold may impose a reporting burden, without real benefit
- Absolute variance limits are simpler, but may need to be tailored for different business units – percentages allow a standard accountability framework, but can be more complex: should they be calculated relative to each flow, or against a common denominator such as sales?
- Technology can automate variance calculation – but take care with the settings, to avoid creating unhelpful ‘noise’ which obscures the real issues
- Reporting Line Level – should variances be monitored at line item level, or on a more consolidated basis? This should take into account materiality, ownership of the data, and the degree of insight that will be obtained – e.g. variances at total receipts level may mask significant off-setting items. Larger line items may require extra focus in variance commentary; in some cases it may be necessary to split the item into component lines for greater understanding
- Reporting Unit – at what level of the business should variances be reported, e.g. by bank account, currency, functional unit etc.? The considerations are similar to reporting line level above
- Availability/Granularity of Actual Data – there are specific issues for both direct and indirect forecast methods:
- Direct (Receipts and Payments) Format – most finance systems have not been designed to support detailed reporting of actual cash flows in a receipts and payments format. Creating or approximating this data may require considerable effort – a compromise on the level of detail may be needed, reducing visibility of underlying variance components
- Indirect (Net Assets) Format – since this is an MI-aligned format, data is usually available – but it may not be granular enough for effective understanding. For example, a movement in trade creditors could result from any (or all) of the following:
- Change in supplier invoicing behaviour (earlier/later)
- Change in supplier payment terms/timing (earlier/later)
- Change in business forecast (increase/decrease)
- Change in inventory management strategy (ramping up/de-stocking)
Since management’s response may be different in each case, additional variance granularity will be needed, but this requires further effort and an ability to understand the underlying business.
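To illustrate how the materiality tests discussed above might be combined, here is a hypothetical sketch (the threshold values, line items and figures are invented): an absolute limit catches large flows with modest percentage swings, while a percentage limit catches smaller flows with proportionally large swings.

```python
# Hypothetical materiality screen: flag line-item variances for commentary
# when they breach EITHER an absolute limit or a percentage-of-forecast limit.
# Threshold values below are illustrative, not recommendations.

ABS_LIMIT = 100_000  # flag anything over 100k, however large the flow
PCT_LIMIT = 0.10     # flag anything over 10% of the forecast flow

def flag_variances(lines, abs_limit=ABS_LIMIT, pct_limit=PCT_LIMIT):
    """Return (name, variance) for each line whose |actual - forecast|
    breaches either limit. A zero forecast always triggers the % test."""
    flagged = []
    for name, forecast, actual in lines:
        variance = actual - forecast
        pct = abs(variance) / abs(forecast) if forecast else float("inf")
        if abs(variance) >= abs_limit or pct >= pct_limit:
            flagged.append((name, variance))
    return flagged

lines = [
    ("Customer receipts", 2_000_000, 1_950_000),  # -50k,  2.5%: not flagged
    ("Supplier payments", 1_200_000, 1_350_000),  # +150k, 12.5%: absolute
    ("Tax",                 300_000,   360_000),  # +60k,  20.0%: percentage
]
```

Tuning these limits is exactly the ‘noise’ trade-off described above: set them too low and the commentary burden swamps the real issues; set them too high and off-setting or emerging problems go unexplained.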
Even the best variance reporting framework will be compromised if the supporting analysis and interpretation is below par. Typical indicators of poor reporting include:
- Stating the obvious – e.g. “the negative £150k variance is the result of making more supplier payments than forecast” – yes, but why? The explanation gives no insight into whether the forecast was reasonable, or how it could be improved
- Excessive detail – e.g. “the positive £50k variance in collections was made up of the following customer elements” followed by a long list of small amounts. Such detail may look comprehensive, but does not compensate for lack of analytical interpretation
- Partial analysis – e.g. “the largest component of the £150k negative variance was an unforecast ‘top-up’ payment made to HMRC of £60k” – but what about the other £90k?
Key factors that contribute to poor reporting include a low sense of business ownership for forecast variances, underdeveloped reporting skills, and a perceived lack of interest at group level. The first two can be addressed relatively easily through aligned accountability and incentives, and central guidance on expected reporting content and quality. This will require some initial time and resource, but if it results in meaningful, actionable analysis, the investment is worth it.
The last of these – a perceived lack of interest at group level – is probably the most damaging if not managed. To demonstrate that the output matters, implement robust follow-up and challenge on material or unexplained variances, and maintain an action list of planned improvements (with owners) so that progress can be tracked. After all, there is no value in reporting and monitoring variances if they are not used to drive remedial action.
To sum up, there’s only one certainty in cash forecasting – that your forecast will be wrong. And there’s only one sin – being unable to explain why, and therefore unable to forecast better. If you can’t learn from your mistakes, it’s very likely you’ll repeat them.