So, we have established the principle

There is an old joke that runs like this. An ugly, grizzled economist accosts a beautiful young courtesan: “Will you sleep with me for a million dollars?” he asks. “Yes”, she hesitantly replies after a very long pause. “Excellent”, he retorts, “we have established the principle; let’s negotiate the price.” Has the Australian Energy Regulator (‘AER’), with its use of economic benchmarking in its latest round of decisions, taken on the role of the ugly old economist? My own view is that they might have. If so, the courtesans, in this case the electricity distributors (perhaps not so beautiful and young), should beware.

Let us wind back a bit to review how we got here. There is a strong case that the disparate regulatory agents overseeing the prices and revenues of Australia’s distribution businesses struggled to strike the right balance in the past. Tough reliability standards (pushed as much by political pressures as economic ones), expectations of rapid growth in demand that did not materialise, and expensive environmental imposts, among other factors, drove huge network investment and rapid increases in network charges, way ahead of inflation or wage growth.  The community as a whole asked whether this was fair; the economists amongst us asked whether it was efficient. The regulator, mediated by the good offices of policy makers and the Australian Energy Market Commission (‘AEMC’), sought new ways to address both.

You could argue that the fairness aspect was managed through more extensive consultation and an obligation to take account of customers’ views. You could argue that the efficiency aspect was managed by encouraging the AER to place a greater reliance on economic benchmarking (although it was doubtful that the AEMC anticipated benchmarking immediately becoming the central tool for setting the operating cost component of allowable revenues).

The premise behind economic benchmarking, which is a good one, is that if we can reliably measure the costs of an efficient regulated firm, we can limit the revenues of an inefficient firm to that same level. We reward the best performers by allowing them to keep some or all of the gains from their superior efficiency. Customers of the inefficient firm then don’t have to pay for that inefficiency, and the firm itself, if it is motivated by profit, has strong incentives to improve.

The AER, to its credit, commissioned some of Australia’s foremost experts to benchmark the economic efficiency of Australia’s transmission and distribution businesses. Those experts conducted a comprehensive consultation and education program, canvassing a range of different efficiency measurement techniques and a range of different ways of representing the businesses in their stylised economic models.

I first thought that the AER might rely on an efficiency measurement technique called multilateral total factor productivity (‘MTFP’) to estimate the efficiency of each Australian firm. In summary, MTFP is the ratio of the total outputs produced by a firm to the total inputs consumed, but calculated in a way that allows comparison between firms and across time. To some extent it takes account of the different characteristics of the firm, for example, the geographical density of the customers they serve. MTFP is not without its challenges, but has a number of attractive features, not least of which is that it is a total measure. It takes account of all inputs and all outputs. By way of illustration, on the input side it takes account of both capital (e.g. poles, wires, transformers) and operating costs (e.g. inspections, repairs, vegetation management), not just operating costs. On the output side it takes account of number of customers, peak demand, energy, distance and, importantly, supply reliability.
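The ‘total outputs over total inputs’ idea can be shown with a toy calculation. The sketch below is illustrative only: the firms, quantities and weights are invented, and the AER’s actual models use formal index-number methods. It simply shows how a share-weighted aggregate of outputs is divided by a share-weighted aggregate of inputs, with quantities normalised to a common base so that scores are comparable across firms (the ‘multilateral’ aspect).

```python
# Illustrative MTFP-style calculation. All data and weights are
# invented for illustration; they are not the AER's specification.

def aggregate(quantities, weights):
    """Share-weighted geometric aggregate of a set of quantities."""
    agg = 1.0
    for name, q in quantities.items():
        agg *= q ** weights[name]
    return agg

def mtfp(outputs, inputs, out_weights, in_weights):
    """Ratio of aggregate outputs to aggregate inputs."""
    return aggregate(outputs, out_weights) / aggregate(inputs, in_weights)

# Hypothetical output and input cost shares, common to both firms.
OUT_W = {"customers": 0.5, "energy": 0.3, "reliability": 0.2}
IN_W = {"capital": 0.6, "opex": 0.4}

# Firm A is the normalisation base (all quantities = 1), so its score is 1.0.
firm_a = mtfp({"customers": 1.00, "energy": 1.00, "reliability": 1.00},
              {"capital": 1.00, "opex": 1.00}, OUT_W, IN_W)

# Firm B produces slightly less output from noticeably more input.
firm_b = mtfp({"customers": 0.95, "energy": 1.05, "reliability": 0.90},
              {"capital": 1.10, "opex": 1.20}, OUT_W, IN_W)
```

On these made-up numbers firm B scores below firm A, reflecting that it consumes more capital and opex to deliver slightly less output.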

So far so good.

But as it turned out the AER, on the advice of their experts, did not use MTFP in a formal sense in any total expenditure forecast assessments for the electricity distributors or transmission service providers (they do publish MTFP scores). Rather, they pretty much abandoned this type of benchmarking as a means of assessing transmission revenues because of the vast differences among the Australian transmission providers and the paucity of international data which could be used as an additional point of comparison. And for the distributors, they relied on an approach called ‘stochastic frontier analysis’ (‘SFA’) to assess allowable opex alone.

I will not elaborate on all of the pros and cons of SFA but will highlight a few of its interesting features, as applied by the AER:

  • it seeks to compare the efficiency of a target firm with the most efficient or ‘frontier’ firm in the sample;
  • in this instance, it was used to measure whether the operating costs of Australian distributors were efficient, not whether their total costs were efficient. Where that leaves us in terms of past and future longer-term capital versus operating cost substitution decisions is unclear; and
  • the database that was used included New Zealand and Canadian distributors, without which the approach would have been econometrically inapt (as some commentators still contend it is), but that necessitated the omission of some drivers of costs which might well be important, such as supply reliability, and inevitably gave rise to issues over whether the data sources were consistent and comparable.

There were many criticisms levelled against the approach, followed by a spirited defence by the AER’s learned experts. They have taken account of, they say, the litany of data definition issues, data inconsistencies and operational environment factors which may make Canada incomparable with Australia, or Ergon Energy incomparable with CitiPower, and have placed them all on an equal footing, at least from the perspective of measurement. So it is clear that the AER believes that its ‘benchmarking is robust, reliable and reasonable’, while it is equally clear that some others disagree. We must leave it to the auspices of the market rules to resolve which view is the better.

So where do the economist and the courtesan fit in? The AER is seeking to seduce us all into agreeing that benchmarking should become the sole or at least predominant basis for determining operating cost allowances for distributors.

The principle that they seek to establish, then, is the central use of SFA to set allowable operating costs. However, if we are to use SFA as our benchmarking approach then the principle should surely mean the following: a score of 100% represents fully efficient operation; the inefficient firm has a lesser score, say 80%. If that firm could reduce inputs by 20% or increase outputs by 25% it would become efficient.
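The arithmetic behind that principle, under the usual input-oriented reading of an efficiency score (my assumption for illustration, not a statement of the AER’s method: the score is taken to be efficient cost divided by actual cost), is simply:

```python
# A firm's efficiency score read as efficient_cost / actual_cost
# (an assumed input-oriented interpretation, for illustration only).
score = 0.80

# To reach the frontier it can cut inputs by (1 - score)...
input_cut = 1 - score        # 0.20, i.e. a 20% reduction in inputs

# ...or, with the same inputs, lift outputs by (1/score - 1).
output_lift = 1 / score - 1  # 0.25, i.e. a 25% increase in outputs
```

The asymmetry (20% down on inputs versus 25% up on outputs) is just the reciprocal at work: cutting a quantity by a fifth and scaling its denominator up by a quarter are the same ratio.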

But that is not what the AER has done. Even though the international sample is apparently valid, the frontier firm (which scored 97%) is not used as the target, because no firm in the sample achieves a score of 100% and the best-performing firm is foreign. That makes sense, surely? The best Australian firm should be the target, because the adjustments made to place firms from all three countries on the same plane may be imperfect.

Apparently that is not enough. In the NSW/ACT draft decisions, the AER decided that the best Australian firm should not be the target. Instead, the target should be the average score of the Australian distributors in the top quartile (i.e. with scores above 75%). It turns out that this new target is 86%. That makes sense, surely? The adjustments made to place firms from all three countries on the same plane, and the adjustments made to account for differences in the Australian distributors that affect costs, are imperfect.

Apparently that is still not enough. We now need to ask how the new frontier firm (you know, the one with the score of 86%) would perform if it faced the same challenges as the firm being benchmarked. Tougher OH&S rules, higher network voltages, nasty weather, old assets, termites, bush fires, etc. There are some factors we can individually quantify. According to the draft decisions, NSW has lower bush fire costs than Victoria (-2.4%), NSW has tougher OH&S regulations (0.5%), and NSW has more sub-transmission than most (3.5%-5.5%). However, there are lots of factors that we cannot individually measure. So let’s assume all these affect costs by 10% (the figures for the NSW distributors adopted in the Final Decisions ranged between 10.7% and 12.9%); the target frontier moves from 86% to 78% (i.e. 0.86/1.1). That makes sense, surely? We need to adjust the frontier or target score to account for the factors that, unlike, say, customer density, are not automatically accounted for in the SFA.

Apparently that is still not enough. In the NSW and ACT Final Decisions, the AER has decided to set the target at the lowest score of the distributors in the top quartile (i.e. of those with scores above 75%). This is equivalent to using the average score of Australian distributors in the top third (i.e. with scores above 67%). There are five such firms, whose average score is 77%. So the new target is reduced further, before any adjustments, to 77% (from 86%). Then, if you make the same 10% adjustment for firm-specific factors, the target becomes 70% (0.77/1.1). That makes sense, surely? We need to adjust the frontier to include an ‘appropriately wide margin [to account] for potential modelling and data errors for the purposes of forecasting’. It is interesting to note that this new target is very close to the average SFA score of all the firms in the overall sample, Australia, New Zealand and Canada.
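The successive relaxations of the benchmark described above reduce to a few lines of arithmetic. The scores are those quoted in the draft and final decisions; the flat 10% firm-specific adjustment is the round figure assumed in the text (the actual NSW figures ranged from 10.7% to 12.9%):

```python
# Each step of the AER's cascade of targets, as described in the text.
frontier = 0.97                  # best firm in the full sample (foreign)
top_quartile_average = 0.86      # draft decisions: average of Australian top quartile
top_third_average = 0.77         # final decisions: equivalent to top-third average
firm_specific_adjustment = 0.10  # assumed round 10% uplift for unmeasured factors

# Dividing by (1 + adjustment) shifts the target down for firm-specific costs.
draft_target = top_quartile_average / (1 + firm_specific_adjustment)  # ~0.78
final_target = top_third_average / (1 + firm_specific_adjustment)     # 0.70
```

From a 97% frontier to a 70% pass mark, in four steps, each of which “makes sense, surely”.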

Ah, sense at last? Perhaps not.

We are left with the confident claim of the AER ‘that a firm that performs below this level [77% before other adjustments, 70% after adjustments] is therefore spending in a manner that does not reasonably reflect the opex criteria.’

Even if we assume this claim is sound, it tells us nothing about the firms scoring above the target: we still cannot determine whether a quarter of the 13 regulated DNSPs are spending in a manner that does not reasonably reflect the opex criteria. A benchmarking approach that cannot resolve the inefficiency of a quarter of the sample is hardly robust.

Or are we really saying that the AER should let Powercor and CitiPower increase their operating costs by 20% next time around? Or better still, do we allow Powercor and CitiPower to set prices based on the view that they actually incur a 77% ‘efficient’ level of operating cost, and pocket the surplus as profit?

If it is the latter, then the AER’s decisions might hark back to the original theory of yardstick competition, where firms better than the average earn extra profits, and those worse than the average make lower profits or losses. If so, it would be half-hearted: not all the firms in the efficiency model are regulated using the yardstick (the New Zealand and Canadian firms are not), which is a central feature of yardstick competition, and capital costs are excluded from the yardstick.

Or perhaps the AER will opt for the worst of all asymmetric worlds: no extra profit to reward superior performance, but losses for inferior performance. Whichever it is, firms and their customers would be better off if the AER made it clear.

But suppose the AER’s confidence in their approach is misplaced, such that a firm with a score of less than 77% may actually be spending in a manner that reasonably reflects the opex criteria. Is this really possible? Could a firm with an adjusted score of, say, 65% really be efficient?

Here is the problem. I don’t share the AER’s confidence that it could not be. Ergon Energy, for example, is so different from the other Australian distributors (weather, geographical extent, technology choice, termites etc.) that I doubt econometrics (which typically measures the average effect of these factors) can show just how these factors affect efficient costs in Ergon’s specific case. CitiPower is similarly markedly different from the rest. This applies a fortiori when Australian distributors as a group are also likely to be considerably different from New Zealand distributors as a group and Canadian distributors as a group. And there are, of course, other measures of efficiency, of which the AER is well aware, that do not quite accord with the measures the AER has chosen to rely on. If we cannot tell with confidence, on what basis are the measures robust?

So this is where the distributors stand, as the courtesans, to the question posed by the AER. “Will you accept revenue based on my SFA analysis, with all its shortcomings, with wide error bands so it is not too painful at the outset?” If they answer “Yes,” what does the AER say for the next couple of decades? “Excellent, we have established the principle, let’s negotiate the size of the error bands.”

Distributors should say “No”.

Using this logic, what should my original courtesan have said to the economist? Also “No”: “Come back to me again in five years’ time, less dismal, with a six-pack, hair, and a new Ferrari. Then ask me again.”

The AER’s benchmarking may well be a good start, but the distributors should say the same. “Come back in five years’ time with benchmarking that works, which is robust and which looks at total costs. Then ask me again.”

In the meantime, why not use benchmarking as the AEMC appears to have originally intended, namely as a tool to assist the AER, such as to raise questions about the efficiency of the businesses and to guide its investigation as to where inefficiencies may arise? Or to place distributors in bands with different X factors in their allowed revenues, as the Canadians have done? But not as if it were the “silver bullet” for definitively gauging the ‘right’ level of operating costs.

7 May 2015