AUSTRALIAN COMPETITION TRIBUNAL

Applications by Public Interest Advocacy Centre Ltd and Ausgrid [2016] ACompT 1

Citation:

Applications by Public Interest Advocacy Centre Ltd and Ausgrid [2016] ACompT 1

Review from:

Australian Energy Regulator

Applicant:

PUBLIC INTEREST ADVOCACY CENTRE LTD

AUSGRID

File number:

ACT 1 of 2015

ACT 4 of 2015

Tribunal:

MANSFIELD J, PRESIDENT

MR R DAVEY, MEMBER

DR D ABRAHAM, MEMBER

Interveners in

ACT 1 of 2015

ACT 4 of 2015:

AusNet Services (Distribution) Pty Ltd

AusNet Services (Transmission) Ltd

Australian Gas Networks Ltd

Citipower Pty Ltd

Powercor Australia Ltd

SA Power Networks

United Energy Distribution Pty Ltd

Ergon Energy Corporation Ltd

Minister for Resources, Energy and Northern Australia

Interveners in

ACT 1 of 2015:

Ausgrid

Interveners in

ACT 4 of 2015:

Public Interest Advocacy Centre Ltd

Date of Determination:

26 February 2016

Catchwords:

ENERGY AND RESOURCES – applications under s 71B of the National Electricity Law (NEL) for review of a distribution determination by the AER – consideration of the legislative background to the NEL and the National Electricity Rules (NER) – amendments to the NER made in 2012 by the Australian Energy Market Commission (AEMC) – amendments to the NEL made in 2013 – role of the AER – national electricity objective (NEO) – consultation and notification obligations under s 16(1)(b) – interrelationship of constituent components and preferable reviewable regulatory decision by the AER under s 16(1)(c) and (d) – role of the Tribunal on review – interrelationship of constituent components and materially preferable NEO decision under s 71P(2a) and (2b) – conduct of consultation process under s 71R(1) – topics for review – operating expenditure (opex) – X-factor – efficiency benefit sharing scheme (EBSS) – return on equity – return on debt – gamma – metering services

ENERGY AND RESOURCES – operating expenditure (opex) – use of econometric benchmarking – comparability of RIN data – comparability of overseas data – adjustments the AER made to the outcomes of the econometric benchmarking – operating environment factors – lowering of the efficiency target comparison points – efficiency of vegetation management costs – efficiency of labour costs – transition path

ENERGY AND RESOURCES – efficiency benefit sharing scheme (EBSS) – whether the AER was correct in adjusting the provisions expense reported – exclusion of allowances for efficiency gains as a result of movements in provisioning – whether there was retrospective exclusion of particular cost categories

ENERGY AND RESOURCES – return on equity – achievement of the rate of return objective – efficient financing costs of a benchmark efficient entity with a similar degree of risk as that which applies to a service provider – requirement that regard be had to relevant estimation methods, financial models, market data and other evidence – foundation model approach – adjustment of parameters to reflect other models

ENERGY AND RESOURCES – return on debt – transition between methods of deciding the return on debt – transition to trailing average approach – efficient financing costs of a benchmark efficient entity with a similar degree of risk as that which applies to a service provider – features of a benchmark efficient entity – whether the NER provide for more than one benchmark efficient entity – actual historical debt financing practices of network service providers – features of the individual regulated service provider

ENERGY AND RESOURCES – gamma – significance of the substitution of “the value of imputation credits” for “the assumed utilisation of imputation credits” in the definition of gamma – interpretation of “the value of imputation credits” – interpretation consistent with the NER – relevance of “market value” – proper use of equity ownership data, tax statistics and market studies – calculation of distribution rate

ENERGY AND RESOURCES – metering services – use of multi-year averaging to calculate metering opex

Legislation:

National Electricity Law

National Gas Law

National Electricity Rules

National Gas Rules

Energy Services Corporations Act 1995 (NSW)

National Electricity (South Australia) Act 1996 (SA)

National Electricity (South Australia) (New National Electricity Law) Amendment Act 2005 (SA)

Australian Energy Market Commission Establishment Act 2004 (Cth)

Trade Practices Act 1974 (Cth)

Competition and Consumer Act 2010 (Cth)

National Electricity (South Australia) National Electricity Law (Miscellaneous Amendments) Amendment Act 2007 (SA)

National Electricity (South Australia) (National Electricity Law – Australian Energy Market Operator) Amendment Act 2009 (SA)

Statutes Amendment (National Electricity and Gas Laws – Limited Merits Review) Act 2013 (SA)

National Electricity (South Australia) National Electricity Law (Miscellaneous Amendments) Act 2007 (SA)

Fair Work Act 2009 (Cth)

Cases cited:

Applications by Public Interest Advocacy Centre Ltd, Ausgrid, Endeavour Energy and Essential Energy [2015] ACompT 2

Application by ActewAGL Distribution [2015] ACompT 3

Application by Jemena Gas Networks (NSW) Limited [2015] ACompT 4

Application by ElectraNet Pty Limited (No 3) [2008] ACompT 3

Application by Envestra Limited (No 2) [2012] ACompT 3

East Australian Pipeline Pty Ltd v Australian Competition and Consumer Commission (2007) 233 CLR 229

Tillmanns Butcheries Pty Ltd v Australasian Meat Industry Employees Union (1979) 27 ALR 367

Monroe Topple & Associates Pty Ltd v Institute of Chartered Accountants (2002) 122 FCR 110

Seven Network Limited v News Limited (2009) 182 FCR 160

Australian Gas Light Co v ACCC (2003) 137 FCR 317

Application by DBNGP (WA) Transmission Pty Ltd (No 3) [2012] ACompT 14

Application by EnergyAustralia [2009] ACompT 8

Application by WA Gas Networks (No 3) [2012] ACompT 12

Commissioner of Stamps (SA) v Telegraph Investment Co Pty Ltd (1995) 184 CLR 453

Applications by Public Interest Advocacy Centre Ltd and Endeavour Energy [2016] ACompT 2

Applications by Public Interest Advocacy Centre Ltd and Essential Energy [2016] ACompT 3

Application by ActewAGL Distribution [2016] ACompT 4

Application by Jemena Gas Networks (NSW) Ltd [2016] ACompT 5

Toyota Motor Corporation Australia Limited v Marmara (2014) 222 FCR 152

Teys Australia Beenleigh Pty Ltd v Australasian Meat Industry Employees Union (2015) 317 ALR 636

Re East Australian Pipeline Limited [2004] ACompT 8

Wellington International Airport Limited & Ors v Commerce Commission [2013] NZHC 3289

Paciocco v Australia and New Zealand Banking Group Limited [2015] FCAFC 50

SPI Electricity Pty Ltd v Australian Competition Tribunal (2012) 208 FCR 151

Rathbone v Abel (1964) 38 ALJR 293

R v Hunt; Ex parte Sean Investment Pty Ltd (1979) 180 CLR 322

Turner v Minister for Immigration and Ethnic Affairs (1981) 35 ALR 388

R v Australian Broadcasting Tribunal; Ex parte 2HD Pty Ltd (1979) 144 CLR 45

Application by Jemena Gas Networks (NSW) Ltd (No 3) [2011] ACompT 6

Application by APT Allgas Energy Ltd (No 2) [2012] ACompT 5

Application by Energex Limited (Gamma) (No 5) [2011] ACompT 9

Dates of hearing:

21-25 September 2015; 28-30 September 2015

1-2 October 2015; 6-9 October 2015

Place:

Darwin (via video link to Sydney, Melbourne, Brisbane and Adelaide)

Number of paragraphs:

1230

Counsel for the Public Interest Advocacy Centre Ltd:

S Horgan QC with T Clarke

Solicitor for the Public Interest Advocacy Centre Ltd:

Public Interest Advocacy Centre Ltd

Counsel for Ausgrid:

C Moore SC with K Morgan, A Hochroth and C Dermody

Solicitor for Ausgrid:

Herbert Smith Freehills

Counsel for the Australian Energy Regulator:

S Lloyd SC and M O’Bryan QC with S Balafoutis, A Mitchelmore, J Arnott, T Phillips, D Tucker and F St John

Solicitor for the Australian Energy Regulator:

Corrs Chambers Westgarth

Counsel for the Commonwealth Minister for Resources, Energy and Northern Australia:

T Howe QC and B Lim

Solicitor for the Commonwealth Minister for Resources, Energy and Northern Australia:

Australian Government Solicitor

Counsel for AusNet Services (Distribution) Pty Ltd, AusNet Services (Transmission) Ltd, Australian Gas Networks Ltd, Citipower Pty Ltd, Powercor Australia Ltd, SA Power Networks and United Energy Distribution Pty Ltd:

P Brereton SC with R Higgins

Solicitor for AusNet Services (Distribution) Pty Ltd, AusNet Services (Transmission) Ltd, Australian Gas Networks Ltd, Citipower Pty Ltd, Powercor Australia Ltd, SA Power Networks and United Energy Distribution Pty Ltd:

Jones Day

Counsel for Ergon Energy Corporation Ltd:

T Bradley QC with A Coulthard and E Hoiberg

Solicitor for Ergon Energy Corporation Ltd:

Minter Ellison

IN THE AUSTRALIAN COMPETITION TRIBUNAL

ACT 4 of 2015

RE:

APPLICATION UNDER S 71B OF THE NATIONAL ELECTRICITY LAW FOR A REVIEW OF A DISTRIBUTION DETERMINATION MADE BY THE AUSTRALIAN ENERGY REGULATOR IN RELATION TO AUSGRID PURSUANT TO RULE 6.11.1 OF THE NATIONAL ELECTRICITY RULES

BY:

AUSGRID

TRIBUNAL:

MANSFIELD J, PRESIDENT

MR R DAVEY, MEMBER

DR D ABRAHAM, MEMBER

DATE OF DETERMINATION:

26 February 2016

WHERE MADE:

DARWIN (VIA VIDEO LINK TO SYDNEY, MELBOURNE, BRISBANE AND ADELAIDE)

THE TRIBUNAL DETERMINES THAT:

1.    Pursuant to s 71P(2)(c) of the National Electricity Law, the Final Decision Ausgrid distribution determination 2015-16 to 2018-19, April 2015, including attachments, (the Final Decision) is set aside and remitted to the Australian Energy Regulator (AER) to make the decision again in accordance with the following directions:

(a)    the AER is to make the constituent decision on opex under r 6.12.1(4) of the National Electricity Rules in accordance with these reasons for decision, including assessing whether the forecast opex proposed by the applicant reasonably reflects each of the operating expenditure criteria in r 6.5.6(c) of the National Electricity Rules, including using a broader range of modelling, and benchmarking against Australian businesses, and including a “bottom up” review of Ausgrid’s forecast operating expenditure;

(b)    the AER is to make the constituent decision on return on debt in relation to the introduction of the trailing average approach in accordance with these reasons for decision;

(c)    the AER is to make the constituent decision on estimated cost of corporate income tax (gamma) in accordance with these reasons for decision, including by reference to an estimated cost of corporate income tax based on a gamma of 0.25; and

(d)    the AER is to consider, and to the extent to which it considers appropriate, to vary the Final Decision in such other respects as the Australian Energy Regulator considers appropriate having regard to s 16(1)(d) of the National Electricity Law, in the light of such variations as are made to the Final Decision by reason of (a)-(c) hereof.

IN THE AUSTRALIAN COMPETITION TRIBUNAL

ACT 1 of 2015

RE:

APPLICATION UNDER S 71B OF THE NATIONAL ELECTRICITY LAW FOR A REVIEW OF A DISTRIBUTION DETERMINATION MADE BY THE AUSTRALIAN ENERGY REGULATOR IN RELATION TO AUSGRID PURSUANT TO RULE 6.11.1 OF THE NATIONAL ELECTRICITY RULES

BY:

PUBLIC INTEREST ADVOCACY CENTRE LTD

ACT 4 of 2015

Re:

APPLICATION UNDER S 71B OF THE NATIONAL ELECTRICITY LAW FOR A REVIEW OF A DISTRIBUTION DETERMINATION MADE BY THE AUSTRALIAN ENERGY REGULATOR IN RELATION TO AUSGRID PURSUANT TO RULE 6.11.1 OF THE NATIONAL ELECTRICITY RULES

BY:

AUSGRID

TRIBUNAL:

MANSFIELD J, PRESIDENT

MR R DAVEY, MEMBER

DR D ABRAHAM, MEMBER

DATE:

26 February 2016

PLACE

DARWIN (VIA VIDEO LINK TO SYDNEY, MELBOURNE, BRISBANE AND ADELAIDE)

REASONS FOR DETERMINATION

INTRODUCTION

[1]

The Distribution Determinations

[5]

The Access Arrangement Decision

[9]

The Review Applications

[12]

The Legislative Background

[18]

The 2012 Rule Amendments

[29]

The 2013 Legislative Amendments

[31]

The Consultation Process

[50]

The Materially Preferable NEO/NGO Decision

[65]

The Tribunal’s Role on Review

[87]

The Grounds of Review

[102]

The Structure of the Decision

[109]

OPERATING EXPENDITURE (OPEX)

[115]

The Opex Issues

[123]

The principal issue

[123]

Overview of the parties’ challenges

[125]

Background

[138]

Opex in the context of the NEL and the NER

[138]

Rule 6.5.6 and the 2012 Rule Amendments

[142]

The EI model

[145]

The First EI Report

[146]

The use of overseas data in the EI model

[156]

Country dummy variables

[160]

EI’s outputs specification criteria

[163]

The EI model’s specifications

[167]

The Second EI Report

[170]

The AER’s lowering of the EI model’s comparison point

[175]

The AER’s operating environment factors (OEFs) adjustments

[178]

The AER’s application of the benchmarking opex factor (rule 6.5.6(d)(4))

[198]

The AER’s application of the other rule 6.5.6(d) opex factors

[225]

The Parties’ Submissions on the Principal Issue

[227]

Inadequacies in the EI model’s data set and comparability issues

[230]

The AER’s lowering of the EI model’s comparison point

[309]

Other OEF issues

[354]

The efficiency of the DNSPs’ vegetation management costs

[355]

Labour costs – Networks NSW’s challenge

[409]

The AER’s use of the EI model as the sole determinative of opex

[443]

Consideration of the Principal Opex Issue

[463]

Transition Path

[486]

Conclusion on Opex (subject to s 71P(2a) and (2b))

[495]

X FACTOR

[498]

Background

[498]

The X Factor Decision

[507]

The Grounds of Review

[520]

Consideration

[522]

EFFICIENCY BENEFIT SHARING SCHEME (EBSS)

[539]

Background

[553]

The AER Decision

[567]

EBSS Issues

[578]

The Grounds of Review

[586]

Consideration

[593]

Conclusion

[628]

RETURN ON EQUITY

[632]

The Regulatory Background

[640]

The AER’s Final Decisions

[655]

PIAC’s Contention

[681]

The Grounds of Review: Network Applicants

[701]

Consideration

[709]

The relevant Rules

[709]

The application of the Rules

[712]

The use of the SL CAPM model

[719]

The challenged findings of fact

[736]

The Unreasonableness of the Final Decisions

[805]

RETURN ON DEBT

[815]

The AER’s Final Decisions

[834]

The Transition: The AER Approach

[858]

Consideration

[870]

The Benchmark Efficient Entity

[877]

Was this issue raised and maintained by Networks NSW and ActewAGL?

[877]

Is the Benchmark Efficient Entity a Regulated Entity?

[891]

Is the Benchmark Efficient Entity a common entity for all DNSPs?

[891]

The Transition

[923]

PIAC’s contentions

[944]

Separate issues of Networks NSW

[964]

Other General Issues

[996]

Ergon’s issue

[998]

JGN’s separate issues

[1004]

GAMMA

[1006]

Historical and Legislative Context

[1019]

The AER’s approach to setting Gamma

[1030]

Interpretation of “The Value of Imputation Credits”

[1059]

Consideration

[1083]

AER’s CAPM framework

[1083]

AER’s conceptual approach to and estimation of theta

[1090]

Adjustment of SFG theta estimate for personal costs

[1101]

Estimation of the distribution rate

[1104]

Conclusion

[1110]

METERING SERVICES OPEX

[1121]

The Decision

[1129]

Grounds of Review

[1139]

Costs of Type 5 and Type 6 meters

[1143]

Averaging from 2008-09 to 2012-13

[1150]

Consideration

[1153]

Conclusion

[1163]

FINAL CONCLUSIONS

[1166]

General

[1166]

A Materially Preferable NEO Decision?

[1176]

The AER’s Approach

[1184]

Consideration

[1205]

PIAC’s contentions

[1207]

Application of the prescribed test

[1216]

Determination

[1227]

A final observation

[1228]

INTRODUCTION

1    These reasons relate to two of seven applications made to review decisions of the Australian Energy Regulator (AER) made on 30 April 2015 under the National Electricity Law (NEL). The issues relating to those decisions, and to a further decision of the AER made on 3 June 2015 under the National Gas Law (NGL), have been heard together.

2    That is because a number of the issues arising in relation to these two applications are common to issues in relation to the six other review applications referred to below, although there are a few issues particular to one or more of the applications, and in some respects the matters raised on certain issues differed slightly. It was common ground that the substantial commonality of issues raised in the eight applications made it preferable for them to be heard together.

3    These reasons deal only with the applications to review the AER decision concerning Ausgrid. They will serve as the “lead” reasons, insofar as the Tribunal’s general considerations on the significant matters of common concern, and its consideration of aspects of particular topics, may not need to be repeated in full by the Tribunal in its consideration of the other applications.

4    All the applications were made in respect of a regulatory decision-making process by the AER and a regulatory review process by the Tribunal which were each significantly different from the regulatory decision-making process and the review process previously existing. The parties understandably spent considerable time addressing those differences and their significance. These reasons contain the Tribunal’s consideration of those submissions. Where appropriate that consideration will be incorporated by reference into its determinations in relation to the other applications, rather than be repeated.

The Distribution Determinations

5    On 30 April 2015, the AER published a distribution determination final decision under r 6.11.1 of Chapter 6 of the National Electricity Rules (NER) in relation to each of Ausgrid, Endeavour Energy (Endeavour) and Essential Energy (Essential). Each of the final decisions includes an overview and attachments, the overviews being entitled: Final Decision Ausgrid distribution determination 2015-16 to 2018-19, Overview, April 2015; Final Decision Endeavour Energy distribution determination 2015-16 to 2018-19, Overview, April 2015; and Final Decision Essential Energy distribution determination 2015-16 to 2018-19, Overview, April 2015. Ausgrid, Endeavour and Essential are State-owned corporations incorporated under the Energy Services Corporations Act 1995 (NSW) and are referred to collectively as Networks NSW. Each is the owner and operator of a monopoly electricity distribution network in New South Wales.

6    On the same day, the AER published a distribution determination final decision including an overview and attachments under the same provision in relation to ActewAGL Distribution (ActewAGL): Final Decision ActewAGL distribution determination 2015-16 to 2018-19 Overview, April 2015. ActewAGL is the owner of the electricity distribution network in the Australian Capital Territory.

7    The AER determined as follows:

(1)    Ausgrid can recover $6576.4m ($ nominal) from its customers over the 2015-2019 regulatory control period.

(2)    Endeavour can recover $3182.8m ($ nominal) from its customers over the 2015-2019 regulatory control period.

(3)    Essential can recover $3826.1m ($ nominal) from its customers over the 2015-2019 regulatory control period.

(4)    ActewAGL can recover $590.9m ($ nominal) from its customers over the 2015-2019 regulatory control period.

Each of those entities may be referred to as a Distribution Network Service Provider (DNSP).

8    Distribution charges represent a significant component of the annual bill for customers of the DNSPs and so for consumers of electricity in their respective distribution areas. The AER estimated that its decisions would (noting that it was providing estimates only and that there are other factors that will affect a consumer’s electricity bill, such as the wholesale price of electricity) have the following impact:

(a)    Ausgrid: For residential customers, a reduction in their average annual electricity bills of $165 (or 8 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period. For small business customers, a reduction in their average annual electricity bills of $264 (or 8 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period.

(b)    Endeavour: For residential customers, a reduction in their average annual electricity bills of $106 (or 5.3 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period. For small business customers, a reduction in their average annual electricity bills of $152 (or 5.3 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period.

(c)    Essential: For residential customers, a reduction in their average annual electricity bills of $313 (or 11.9 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period. For small business customers, a reduction in their average annual electricity bills of $528 (or 11.9 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period.

(d)    ActewAGL: For residential customers, a reduction in their average annual electricity bills of $112 (or 5.8 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period. For small business customers, a reduction in their average annual electricity bills of $168 (or 5.8 percent) in 2015-16, with bills remaining relatively stable over the rest of the regulatory control period.

The Access Arrangement Decision

9    On 3 June 2015, the AER published a full access arrangement final decision in relation to Jemena Gas Networks (NSW) Ltd (JGN), pursuant to rr 62 and 64 of the National Gas Rules (NGR): Final Decision – Jemena Gas Networks (NSW) Ltd Access Arrangement 2015-20 Overview, June 2015, which, like the final distribution decisions mentioned above, includes a number of attachments.

10    By that decision, the AER determined that JGN could recover $2,229.0m ($ nominal) from its customers over five years commencing 1 July 2015.

11    As with the DNSPs, the revenue that the AER determined affects the distribution component of a consumer’s final gas bill. For residential and small business customers, the AER estimated that:

(a)    average annual gas bills would fall by around 9.2 percent in 2015-16, translating into a $96 reduction in bills for residential customers and a $462 reduction for small business customers;

(b)    bills would continue to fall over the following three years; and

(c)    there would be a small increase of 1 percent in 2019-20.

The Review Applications

12    Each of the distribution determinations that the AER made on 30 April 2015 is a “reviewable regulatory decision” within the meaning of subcl (a) of the definition of that term in s 71A of the NEL. The AER’s access arrangement decision in relation to JGN is also a “reviewable regulatory decision” within the meaning of subcl (d) of the definition in s 244 of the NGL. It is convenient to refer to each of those five decisions as a Final Decision.

13    The term “Final Decision” is used to distinguish between the separate decision-making stages specified in the Rules. Under the NEL and NER a network service provider is required to submit to the AER a regulatory proposal that the AER must consider. The AER must then publish a draft decision, allowing the network service provider the opportunity to submit a revised regulatory proposal that the AER must consider prior to a Final Decision being made. A like process must be followed under the NGL and NGR by a service provider submitting, and the AER considering, an access arrangement revision proposal prior to a Final Decision being made. In respect of each of Networks NSW, ActewAGL and JGN, it is convenient to refer to each of those proposals and draft decisions as a “Regulatory Proposal”, “Draft Decision” or “Revised Regulatory Proposal” respectively.

14    By applications made on 21 May 2015 pursuant to s 71B of the NEL, ActewAGL and Networks NSW applied to the Tribunal for leave to review the AER’s respective Final Decisions concerning them. The Public Interest Advocacy Centre Ltd (PIAC) also applied for leave to review each of the AER’s three Final Decisions relating to Networks NSW. On 24 June 2015, JGN filed an application for leave to review the AER’s Final Decision concerning it.

15    It is convenient hereafter to refer collectively to Networks NSW, ActewAGL and JGN as the Network Applicants from time to time.

16    On 17 July 2015, the Tribunal granted leave in respect of each of the applications relating to the four Final Decisions made on 30 April 2015: Applications by Public Interest Advocacy Centre Ltd, Ausgrid, Endeavour Energy and Essential Energy [2015] ACompT 2 and Application by ActewAGL Distribution [2015] ACompT 3. On 4 August 2015, the Tribunal granted leave in relation to the Final Decision relating to JGN: Application by Jemena Gas Networks (NSW) Limited [2015] ACompT 4.

17    In addition to their applications for review, PIAC and Networks NSW sought and were each granted leave to intervene in the other respective review applications concerning the AER’s Final Decisions relating to Networks NSW. Other interveners were granted leave to appear before the Tribunal in the review applications generally, in the case of the other DNSPs clearly because regulatory determinations concerning them were in the process of being made by the AER. The interveners were:

    PIAC (in relation to the Networks NSW applications);

    Networks NSW (in relation to the PIAC applications);

    AusNet Services (Distribution) Pty Ltd, AusNet Services (Transmission) Ltd, Australian Gas Networks Ltd, Citipower Pty Ltd, Powercor Australia Ltd, SA Power Networks and United Energy Distribution Pty Ltd (Vic/SA Network Interveners);

    Ergon Energy Corporation Ltd (Ergon); and

    the Commonwealth Minister for Industry and Science. On 21 September 2015, the title of the Commonwealth Minister with responsibility for energy matters changed from the Minister for Industry and Science to the Minister for Resources, Energy and Northern Australia. For consistency, hereafter he is referred to as the Minister.

The Minister’s intervention was confined to making submissions on the proper construction and application of the relevant provisions of the NEL, the NER, the NGL and the NGR. Although the Ministers of participating jurisdictions are entitled to intervene, and were notified by the Tribunal of the several applications for review of the Final Decisions (once leave to apply for review had been given), none of the relevant Ministers did intervene.

The Legislative Background

18    The national electricity market was established following the enactment of the National Electricity (South Australia) Act 1996 (SA), in broad terms providing for the competitive trading and regulation of the generation, transmission, distribution and supply of electricity (then) in south-eastern Australia. The national electricity market was to be a competitive wholesale market comprising a comprehensive set of trading arrangements. The initial version of the NEL was a schedule to that Act.

19    It is clear enough, from that legislation, its context, and the events preceding it, that it was a consequence of the competition policy reforms and the pro-competitive policy mindset following the Hilmer reforms in the early 1990s. It was designed, wherever possible, to introduce competition into the provision of electricity to consumers through structural reform, by ensuring that government enterprises competed in an appropriate form, and, in the case of monopoly infrastructure, to provide for regulated access with independent authorities to oversee prices. That Act was amended by the National Electricity (South Australia) (New National Electricity Law) Amendment Act 2005 (SA), substituting the current version (since amended) of the NEL, and providing for the conferral of functions and powers in respect of national electricity governance on the Australian Energy Market Commission (AEMC) (established under the Australian Energy Market Commission Establishment Act 2004 (Cth)) and upon the AER (established under the Trade Practices Act 1974 (Cth), now the Competition and Consumer Act 2010 (Cth)). It was at the time contemplated, and has now been effected, that the AER would operate as the economic regulator of both electricity and gas transmission and distribution networks for all jurisdictions other than Western Australia. The previous State-based regulatory structure was removed.

20    The National Electricity (South Australia) National Electricity Law (Miscellaneous Amendments) Amendment Act 2007 (SA) in substance introduced the present structure under which, subject to amendments to the NER and to the NGR by the AEMC in 2012 and to legislative amendments to the NEL and the NGL in 2013, the AER and the Tribunal presently perform their functions.

21    The Act referred to in the preceding paragraph set out to establish a single national regulatory framework for electricity networks, and introduced important changes to the AER’s powers, including the prescription of the national electricity objective (the NEO), and the revenue and pricing principles (RPP), to guide the AER in making regulatory decisions, and in other respects. It also introduced new merits review provisions under which the Tribunal (also a creation of the then Trade Practices Act 1974) was given the responsibility of reviewing certain decisions of the AER.

22    It was said in the Second Reading Speech by the Minister for Mineral Resources and Development (South Australia) that the proposed Act would lower the barriers to competition: Legislative Council, South Australia, 16 October 2007 Hansard p 883. It was specifically noted (Hansard p 886) that the NEO did not extend to “broader social and environmental objectives”. The Minister’s Second Reading Speech said:

The purpose of the National Electricity Law is to establish a framework to ensure the efficient operation of the national electricity market, efficient investment in, and the effective regulation of electricity networks. As previously noted, the national electricity objective also guides the Australian Energy Market Commission and the Australian Energy Regulator in performing their functions. This should be guided by an objective of efficiency that is in the long term interests of consumers. Environmental and social objectives are better dealt with in other legislative instruments and policies which sit outside the National Electricity Law.

23    It was the same legislation which introduced the RPP, which, it was said (p 887), are fundamental to ensuring achievement of the intention of enhancing efficiency in the national electricity market. The principles were said to maintain a framework for efficient network investment irrespective of the evolution of the regulatory regime (by changes to the NER). The same was proposed for the new NGL and the national gas objective (NGO).

24    The AEMC from July 2005 was responsible for developing the NER. Similarly, it later became responsible for developing the NGR.

25    As noted, that legislation also introduced the limited merits review by the Tribunal of certain regulatory decisions under the NEL (and contemplated under the NGL). It was intended that network service providers, users and consumer associations could seek review of primary transmission and distribution determinations made by the AER. The review was confined to specified grounds of review, but more importantly there were two elements limiting its scope: first, the review could only address issues which had been raised and addressed before the AER; and secondly, the review could only address those issues on the material presented to the AER. There was to be no review of an AER decision on arguments not made to, or on material not presented to, the AER.

26    At about the same time, extensive amendments to Chapter 6 of the NER were introduced by the AEMC to guide the AER.

27    The National Electricity (South Australia) (National Electricity Law – Australian Energy Market Operator) Amendment Act 2009 (SA) then effected further amendments by which the Australian Energy Market Operator (AEMO) came to be the national energy market operator for both the electricity and gas markets, so as to reinforce the national character of energy market governance. It is not necessary to discuss in detail the effect of those amendments.

28    It is within that structure that, for present purposes, it can fairly be said that the first regulatory cycle (2008-13) of decision-making by the AER produced determinations for transmission and distribution networks under the NEL and under the NGL. Given the complexity of the task, it is hardly surprising that the Standing Council on Energy and Resources (SCER) and the AEMC determined that significant reforms to the provisions of the NEL and the NER (and in turn the NGL and the NGR) should be introduced in the light of that initial experience.

The 2012 Rule Amendments

29    On 15 November 2012, the AEMC published its Final Position Paper National Electricity Amendment (Economic Regulation of Network Service Providers) Rule 2012 National Gas Amendment (Price and Revenue Regulation of Gas Services) Rule 2012 (the Final Position Paper). On 29 November 2012, the AEMC published its Rule Determination National Electricity Amendment (Economic Regulation of Network Service Providers) Rule 2012 National Gas Amendment (Price and Revenue Regulation of Gas Services) Rule 2012 (the 2012 Rule Amendments). It will be necessary to refer to those particular amendments in the course of considering particular matters raised on this and the related applications. It should be noted, however, that the 2012 Rule Amendments required the AER to publish guidelines specifying the approach that the AER proposed to use to assess forecasts of operating expenditure (opex) and capital expenditure (capex), and to set out the methodologies that it proposed to use in estimating the allowed rate of return, and the methods, financial models, market data and other evidence that it proposed to take into account in estimating the return on equity and the return on debt. This requirement resulted in the AER’s Better Regulation Rate of Return Guideline, December 2013 (the RoR Guideline) and its Better Regulation Rate of Return Guideline Explanatory Statement, December 2013 (the RoR Explanatory Statement). Significant amendments were made to r 6.5.2 of the NER, which are discussed in relation to the topics of return on debt and return on equity, each of which was the subject of debate in the course of this application and the related applications. Similarly, significant amendments were made to r 6.5.6 in relation to how the AER should address the opex factors and capex factors.

30    To allow for a transition to the new rules, the Savings and Transitional Rules in Division 2 of Part ZW of Chapter 11 of the NER provided for a two-stage process for the regulation of ACT and NSW DNSPs over the five year period commencing on 1 July 2014 (the 2014-19 period), comprising:

(a)    the period commencing on 1 July 2014 and ending on 30 June 2015 (the transitional regulatory control period); and

(b)    the period commencing on 1 July 2015 and ending on 30 June 2019 (the subsequent regulatory control period).

The 2013 Legislative Amendments

31    In addition, the Statutes Amendment (National Electricity and Gas Laws – Limited Merits Review) Act 2013 (SA) (the 2013 Legislative Amendments) refined the responsibility of the AER under s 16(1)(b) of the NEL and amended the Limited Merits Review Regime in Pt 6, Div 3A, of the NEL by facilitating the participation of “reviewable regulatory decision process participants”. It also, in a complementary way, added s 16(1)(c) and s 16(1)(d) to the NEL to ensure the AER had a focus on the NEO and the NGO, and amended s 71P (and related provisions) so as to confine the Tribunal’s power to vary or set aside a determination to circumstances where a substituted decision would, or would be likely to, better serve the NEO or the NGO. The detailed nature of those amendments is set out below so far as they are relevant.

32    As observed above, during 2013, the AER in response to those changes adopted its RoR Guideline, which it described in some detail in its Final Decision in relation to Ausgrid (see the Overview at pp 55-56) and in its other Final Decisions. It consulted widely with stakeholders to develop a number of guidelines as required. The guidelines included the Expenditure Forecast Assessment Guideline (EFA Guideline), concerning a network’s forecast opex proposal, and the RoR Guideline referred to above. The process of producing the Better Regulation guidelines involved active consultation with interested entities, including PIAC, which (as noted above) has itself sought to review the three Networks NSW decisions of the AER.

33    The NEL was substantially amended by the 2013 Legislative Amendments. Relevant to the role of the AER, apart from the insertion of a definition of “constituent components” in s 2(1) of the NEL, the significant alterations for present purposes were the substitution of s 16(1)(b), the addition of s 16(1)(c) and, perhaps more importantly, the addition of s 16(1)(d).

34    Section 16(1)(b) imposes extensive consultation and notification obligations on the AER so that interested persons (including network service users or prospective users of the relevant services, and user or consumer associations that have an interest in the determination) are given an opportunity to address the issues being considered by the AER, and s 16(1)(c) requires the AER to specify in its decision the manner in which the constituent components of the decision relate to each other and how that inter-relationship has been taken into account.

35    Section 16(1)(d) provides that the AER must:

if the AER is making a reviewable regulatory decision and there are 2 or more possible reviewable regulatory decisions that will or are likely to contribute to the achievement of the national electricity objective –

(i)    make the decision that the AER is satisfied will or is likely to contribute to the achievement of the national electricity objective to the greatest degree (the preferable reviewable regulatory decision); and

(ii)    specify reasons as to the basis on which the AER is satisfied that the decision is the preferable reviewable regulatory decision.

36    That obligation on the AER in exercising its economic regulatory power was of particular relevance to the three applications by PIAC.

37    Effectively, like obligations were prescribed under the NGL by the deletion and substitution of s 28(1) of the NGL.

38    In relation to the Tribunal, and merits review under Div 3A of the NEL, there were also extensive and presently relevant amendments. Section 71A was amended by inserting some additional definitions (to which it will be appropriate to refer as necessary having regard to the substantive amendments).

39    Section 71C, specifying the grounds for review, did not change.

40    However, the amendments reinforced the need for the regulatory decision, whether made by the AER or on review by the Tribunal, to reflect the “materially preferable NEO decision” (as defined in s 71P(2a)), and the desirability of ensuring that the long term interests of consumers are properly identified and addressed through consumer groups. It is worth repeating s 71C(1), setting out the (unchanged) grounds of review, and reciting s 71C(1a). They provide:

(1)    An application under s 71B(1) may be made only on 1 or more of the following grounds:

(a)    the AER made an error of fact in its finding of facts, and that error of fact was material to the making of the decision;

(b)    the AER made more than 1 error of fact in its findings of facts, and that those errors of fact, in combination, were material to the making of the decision;

(c)    the exercise of the AER’s discretion was incorrect, having regard to all the circumstances;

(d)    the AER’s decision was unreasonable, having regard to all the circumstances.

(1a)    An application under section 71B(1) must also specify the manner in which a determination made by the Tribunal varying the reviewable regulatory decision, or setting aside the reviewable regulatory decision and a fresh decision being made by the AER following remission of the matter to the AER by the Tribunal, on the basis of 1 or more grounds raised in the application, either separately or collectively, would, or would be likely to, result in a materially preferable NEO decision.

41    Section 71E prescribes the circumstances in which the Tribunal must not grant leave to apply for review. Those circumstances were extended by adding the additional criterion that it must appear to the Tribunal that the applicant for review has established a prima facie case that any decision or determination by the Tribunal varying the reviewable regulatory decision or setting it aside and remitting the matter back to the AER to make the decision again, on the basis of one or more grounds of review raised in the application:

... either separately or collectively, would, or would be likely to, result in a materially preferable NEO decision.

42    Section 71M was amended by adding s 71M(1a) requiring an intervener who raises a new ground of review to provide appropriate particulars of that ground, including how, if accepted or made out, it would, or would be likely to, result in a materially preferable NEO decision.

43    Section 71O was deleted and substituted in its entirety. It deals with the matters that may or may not be raised in a review. It firstly ensures that the AER is not confined in a review application so as to prevent it from raising issues which might be considered under s 71P(2a) and (2b). It provides that the applicant, if it is a regulated network service provider, may only raise for review matters that have been raised and maintained in submissions to the AER. The same restriction applies to a regulated network service provider whose commercial interests might be materially affected by that decision. Any other affected or interested person or body may not raise in relation to an issue a matter that was not raised by that body in submissions to the AER (the requirement of having been raised and maintained is not precisely adopted).

44    Section 71O(2)(d) provides that, subject to those restrictions, the applicant or an intervener who has raised a new ground of review under s 71M is also entitled to raise any matter relevant to the issues to be considered under s 71P(2a) and (2b) (set out below); otherwise, any person or body may not raise any matter relevant to those issues unless it is in response to a matter raised by the AER, the applicant or an intervener.

45    The obligation of the Tribunal to make a determination under s 71P has also been substantially refined. It is necessary to set out ss 71P(2a) and (2b) in full to understand their significance. Those provisions in broad terms mirror the obligations imposed upon the AER by s 16(1)(d), and s 71P(2c) imposes upon the Tribunal obligations parallel to those imposed upon the AER, requiring the Tribunal to explain how it has taken into account the inter-relationship between the constituent components of the reviewable regulatory decision and why it has proceeded to make the order which it has determined to make in that light.

46    Section 71P(1)-(2b) provides:

(1)    If, following an application, the Tribunal grants leave in accordance with section 71B(1), the Tribunal must make a determination in respect of the application.

Note –

See section 71Q for the time limit within which the Tribunal must make its determination.

(2)    Subject to subsection (2a), a determination under this section may –

(a)    affirm the reviewable regulatory decision; or

(b)    vary the reviewable regulatory decision; or

(c)    set aside the reviewable regulatory decision and remit the matter back to the AER to make the decision again in accordance with any direction or recommendation of the Tribunal.

(2a)    Despite subsection (2), the Tribunal may only make a determination –

(a)    to vary the reviewable regulatory decision under subsection (2)(b); or

(b)    to set aside the reviewable regulatory decision and remit the matter back to the AER under subsection (2)(c).

if –

(c)    the Tribunal is satisfied that to do so will, or is likely to, result in a decision that is materially preferable to the reviewable regulatory decision in making a contribution to the achievement of the national electricity objective (a materially preferable NEO decision) (and if the Tribunal is not so satisfied the Tribunal must affirm the decision); and

(d)    in the case of a determination to vary the reviewable regulatory decision – the Tribunal is satisfied that to do so will not require the Tribunal to undertake an assessment of such complexity that the preferable course of action would be to set aside the reviewable regulatory decision and remit the matter to the AER to make the decision again.

(2b)    In connection with the operation of subsection (2a) (and without limiting any other matter that may be relevant under this Law) –

(a)    the Tribunal must consider how the constituent components of the reviewable regulatory decision interrelate with each other and with the matters raised as a ground for review; and

(b)    without limiting paragraph (a), the Tribunal must take into account the revenue and pricing principles (in the same manner in which the AER is to take into account these principles under section 16); and

(c)    the Tribunal must, in assessing the extent of contribution to the achievement of the national electricity objective, consider the reviewable regulatory decision as a whole; and

(d)    the following matters must not, in themselves, determine the question about whether a materially preferable NEO decision exists:

(i)    the establishment of a ground for review under section 71C(1);

(ii)    consequences for, or impacts on, the average annual regulated revenue of a regulated network service provider;

(iii)    that the amount that is specified in or derived from the reviewable regulatory decision exceeds the amount specified in section 71F(2).

47    As noted above, the review by the Tribunal of any decision of the AER was, and in broad terms still is, confined in the following ways:

    the requirement of the Tribunal to be satisfied of certain pre-conditions before leave to apply for review is given by it;

    the limitation of the material to which the Tribunal may have regard, in determining whether a ground of review has been made out, to the material that was before the AER; and

    the requirement that the issue before the Tribunal upon which a ground of review is sought to be established was properly advanced before the AER.

Those pre-conditions have been maintained, and to a degree refined.

48    In relation to the second of those matters, s 71R defined and still defines “review related matter” in the same way. In one significant respect, the material relevant to a review has been extended to include matters arising as a result of the consultation required by s 71R(1)(b). That obliges the Tribunal, before making a determination, to take reasonable steps to consult with network service users of the relevant services, and any user or consumer associations or user or consumer interest groups that the Tribunal considers to have an interest in the determination (excluding a user or consumer association or interest group that is a party to the review). In addition, the opportunity for the Tribunal to seek additional information relevant to the relief which it might otherwise contemplate granting has been extended by amendments to s 71R(3) and by the addition of ss 71R(5a) and (5b).

49    In substance, parallel amendments were made to the merits review of AER determinations under the NGL under Part 5 of the NGL, including the extension of the required circumstances in which leave to review may be given under s 248, and by the insertion of s 259(4a) and (4b) equating to s 71P(2a) and (2b) of the NEL, and the deletion and substitution of s 261(1) and insertion of s 261(3a) and (3b) equating to s 71R(1) and s 71R(5a) and (5b) of the NEL.

The Consultation Process

50    The consultation process referred to in s 71R(1)(b) of the NEL and s 261(1)(b) of the NGL is an additional procedural step which the Tribunal must take and which, ideally, is to be accommodated within the target time prescribed by s 71Q of the NEL and s 260 of the NGL. The Tribunal, having given leave to apply for review in these and the related matters on 17 July 2015 (other than the JGN application, where leave to apply for review was given on 30 July 2015), sought information from the AER as to all of the interest groups or persons who might have an interest in the review by the Tribunal under s 71R(1)(b) of the NEL and s 261(1)(b) of the NGL.

51    The Tribunal then conducted an extensive communication process directly with each of those entities or persons to invite them to indicate whether they wished to consult with the Tribunal in relation to any of the Final Decisions, as to the nature of their proposed participation, and as to how the consultation might best be carried out. In the light of that material, the Tribunal consulted with all of those persons on 6 and 7 August 2015. To ensure a satisfactory process, the Tribunal issued a Consultation Agenda under which it provided for those who wished to speak to the Tribunal on that occasion, either personally or on behalf of an organisation, to do so. It arranged for the speakers to be listed randomly, so that there was no bias in the sequence of presenting particular perspectives, other than (of course) endeavouring to accommodate the personal circumstances and convenience of each of the proposed participants. During those consultations, members of the Tribunal sought clarification, and sometimes supplementation of comments or submissions or further development of the views expressed, so that they were better understood or appreciated by the Tribunal. The transcript of that consultation process has been included by the Tribunal on its website relating to each of these applications. Those persons or entities who chose to make submissions, whether during the consultation process, in writing as a complement or supplement to oral submissions, or only by making written submissions or comments, are also listed on the Tribunal’s website.

52    In the course of the consultation process, a number of significant issues of concern to consumers and consumer interests were identified. It is fair to say that price was a significant concern. It is also fair to say that there were a number of persons who participated, and whose concern was to ensure the quality, safety, reliability and security of the supply of electricity either because of their particular circumstances or their particular geographical location, or for other reasons. The balance, as the submissions exposed, is a very difficult one.

53    The applicants to the several applications (including PIAC), the AER and the interveners did not participate in the consultation process. That was appropriate, of course, because they each participated in the hearing before the Tribunal. The matters which emerged in the course of the consultation process, apart from informing the Tribunal about the concerns or views expressed, also provided the foundation for matters the Tribunal raised with the applicants, the AER and the interveners during the hearing. They also served as the focus for questions of the Minister, during the hearing, as to how the various concerns or matters raised were to be taken into account by the Tribunal in reaching its decisions on the several applications. Those matters are addressed later in these reasons for decision.

54    It is a mark of the sophistication of the participants in the consultation process that the range of matters discussed was not extensive. Of course, many focused on the price of electricity and the impact of the present and potential price and the ability of the less well-off in the community either to afford access to the electricity network at all, or at least to do so only at considerable personal cost. There was material showing the number of disconnections over time. The issue of price was not simply raised by consumers or representatives of consumers in a lower socio-economic setting, but by some smaller commercial enterprises, primary producers and others.

55    The Tribunal does not intend to do injustice to the process by listing, without setting out in detail, the views presented.

56    It is, in the view of the Tribunal, helpful to note the broad themes presented during the consultation process as they were identified by the Tribunal. That list was then circulated to the participants for comment. As it was not then suggested that it was inaccurate or incomplete, it is set out below:

    Consumer engagement and understanding

o    The Tribunal’s approach to engagement

o    Consumer education and access to information

o    Participation in the AER processes

    Impact of electricity prices on consumers

o    General consumer impact

o    Vulnerable customers

o    Rural and regional customers

o    Price stability (including pace of any change)

o    Long-term interests of consumers relating to price

    The regulatory framework

o    Success of previous regimes

o    Policy observations regarding the framework

    Balancing the NEO and NGO for the long-term interests of consumers

o    The meaning of “long term interests of consumers”

o    The price and reliability trade off

o    The disconnection 'death spiral'

    Operating expenditure

o    AER benchmarking approach

o    Adherence to operating expenditure guidelines

o    Impact of industrial agreements

o    Vegetation management and bushfire risks

    Rate of return

o    Adherence to the Guideline

o    Estimation of return on debt

o    Estimation of return on equity

o    Ultimate pricing impact of the rate of return

    Demand and energy forecasts

o    Inflation of demand and energy forecasts

    Demand management and innovation

o    Expenditure on innovation

    Materially preferable NEO/NGO decision

57    As noted, the price of electricity was the most significant issue raised during the consultation, but there were also significant numbers of consumers whose emphasis was more on the reliability of the supply of electricity in their particular circumstances.

58    In the matters concerning the Final Decisions of the AER regarding the Networks NSW businesses, in large measure (and as set out later in these reasons) the role of PIAC was a particularly helpful one. Its three applications sought to have the relevant AER Final Decisions set aside, and to have substituted determinations through the Tribunal which would substantially lessen the amounts recoverable by the Networks NSW businesses over the regulatory period 2015-19, as well as presenting the viewpoint that the matters raised by Networks NSW to have the recoverable amounts increased were erroneous. Many of the views put forward by participants in the consultation process were therefore reflected or represented by the contentions of PIAC during the hearing.

59    There is one particular feature of the consultation process views which it is appropriate to comment on at this point.

60    The Tribunal was assisted in the course of the hearing by the intervention of the Minister. It was appropriate, of course, for the Minister to intervene, given the significant changes to the legislation so far as they relate to the Tribunal’s role. The Tribunal is appreciative of those submissions.

61    The Minister, consistently with the submissions of the AER and of the interveners, took the view that the consumers referred to in the NEO and the NGO must be treated as a generic group, so that the Tribunal could not and should not address the particular circumstances of particular consumers or consumer interests. The Tribunal adopts that approach. When the National Electricity (South Australia) National Electricity Law (Miscellaneous Amendments) Act 2007 (SA) was first introduced, the point was then made that the objective of the national electricity market generally was to achieve an efficient and, so far as possible, competitive market for the supply and consumption of electricity, and that, where that could not be achieved, the regulatory structure for access to monopoly services should endeavour to reflect that structure. It was noted (see [22] above) that social and environmental objectives should be a matter of separate policy of the legislature reflected in different ways. The Tribunal, as the Minister submitted, should take, and does take, that view.

62    Nevertheless, the consultation process did identify and inform the Tribunal significantly as to the matters of concern to significant sections of the consumer community, and how some consumer representatives considered the long-term interests of consumers would best be served.

63    As also noted, in relation to this decision (and those concerning Essential and Endeavour), the Tribunal has had the benefit of the applications by PIAC and its helpful submissions. That has enabled the Tribunal to be acutely aware of its obligation ultimately to ensure that its decisions in relation to these applications are those which, in its view, best serve the long-term interests of consumers in terms of the NEO. It has also, by the grounds of review raised by PIAC alleging error on the part of the AER, led to a focus on those particular matters where, it is said, the AER itself has failed to respond appropriately to s 16(1)(d) of the NEL or has otherwise fallen into error to the detriment of consumers. The particular errors asserted by PIAC are addressed in the course of this decision, and to the extent to which they require separate consideration, in the course of considering the reviews of the Final Decisions of the AER in relation to Essential and Endeavour.

64    Given the role of PIAC, and the relevance of its submissions to the Tribunal’s functions and responsibilities under the legislation, the Tribunal has not needed in these matters to address separately the matters which emerged in the course of the consultation process. The role and submissions of PIAC have encompassed those matters.

The Materially Preferable NEO/NGO Decision

65    At this point in the reasons of the Tribunal, it is appropriate to discuss only in a conceptual way how, in the event of finding error on the part of the AER (that is, that a ground of review or grounds of review are made out), it should address the requirements of s 71P(2a) and (2b) of the NEL or s 259(4a) and (4b) of the NGL.

66    It is clear enough from those legislative provisions that, if it is established that in one or more respects the AER has fallen into reviewable error, that is that a ground or grounds of review have been made out, it will not be a materially preferable decision simply to provide for the correction of that ground of review. Section 71P(2b)(d)(i) and s 259(4b)(d)(i) respectively make that plain.

67    It will be necessary to address how the constituent components of the reviewable regulatory decision inter-relate. To the extent to which they inter-relate, and in the light of the revenue and pricing principles, it would be necessary to determine whether, then, the materially preferable decision is to allow the decision of the AER to stand, or to vary it, or to remit the matter to the AER to address a particular aspect or aspects further.

68    To identify how constituent components of a reviewable regulatory decision do inter-relate, the Tribunal will routinely have the benefit of submissions from the AER, and in this instance has had the benefit also of the submissions of PIAC, the Minister as well as the DNSPs. It is clear that several of the issues raised by the parties giving rise to grounds of review do inter-relate, so that the Tribunal, if it finds for instance that significant grounds of review are made out in relation to the allowance for opex, would have to consider the nature and extent to which, if at all, there are consequences for other elements of the AER decision concerning the Service Target Performance Incentive Scheme (STPIS), the Efficiency Benefit Sharing Scheme (EBSS) and the X-factor (those terms are described below), and possibly metering services. Those factors are, to some extent (as acknowledged by all parties) inter-related. There is also some scope for an asserted inter-relationship between the allowance for capex and the allowance for opex, and perhaps more obviously some scope for inter-relationship between the allowance for return on equity and the allowance for return on debt, as they combine to estimate the rate of return. Those inter-relationships were not significantly developed in the course of submissions, except in a general way.

69    The Tribunal would expect that, routinely, if there were direct and measurable relationships between elements making up the AER determination, the AER would indicate how the adjustment of one would or should lead to the adjustment of the other. The AER in each of its decisions has not explained, and did not need to explain, those inter-relationships in detail, although it has made in its general “Overview” section some comments about what might be seen to have been allowances made with some generosity towards the particular DNSP (and about which PIAC has complained) and which in a general sense might be set off against any error or a ground of review which might be exposed by a DNSP.

70    The Tribunal, when it has come to consider this aspect, has primarily looked to the inter-relationships as they have been identified and quantified by the AER, PIAC and in other submissions. It has, in addition, sought to step back and to look at the wider picture where there is no such obvious inter-relationship and no direct evidence quantifying the respective way in which one element of the building blocks as prescribed by the NER or NGR affects other elements. It is in that light, as discussed at the end of these reasons, that the Tribunal has proceeded.

71    In that context, it is also appropriate to observe that the task imposed on the AER is a protean one.

72    It has referred to that task in its written submissions. Following the 2013 Legislative Amendments, the AER in exercising its regulatory function under the NER or the NGR is required to “perform or exercise that function or power in a manner that will or is likely to contribute to the achievement of” the NEO or the NGO: see s 16(1) of the NEL and s 28(1) of the NGL.

73    In relation to a reviewable regulatory decision under the NEL or the NGL, the AER is required, inter alia (see s 16(1)(b) to (d) of the NEL and s 28(1)(b) of the NGL):

(a)    to specify the manner in which the constituent components of the decision relate to each other, and the manner in which that interrelationship has been taken into account in the making of the decision; and

(b)    if there are 2 or more possible reviewable regulatory decisions that will or are likely to contribute to the achievement of the NEO or NGO:

(i)    make the decision that the AER is satisfied will or is likely to contribute to the achievement of the NEO or NGO “to the greatest degree” (the preferable reviewable regulatory decision (in the case of the NEL) or the preferable designated reviewable regulatory decision (in the case of the NGL)); and

(ii)    specify reasons as to the basis on which the AER is satisfied that the decision is the preferable reviewable regulatory decision or the preferable designated reviewable regulatory decision.

74    The NEO is set out in s 7 of the NEL:

The objective of this Law is to promote efficient investment in, and efficient operation and use of, electricity services for the long term interests of consumers of electricity with respect to –

(a)    price, quality, safety, reliability and security of supply of electricity; and

(b)    the reliability, safety and security of the national electricity system.

75    The NGO is set out in s 23 of the NGL:

The objective of this Law is to promote efficient investment in, and efficient operation and use of, natural gas services for the long term interests of consumers of natural gas with respect to price, quality, safety, reliability and security of supply of natural gas.

76    Apart from the NEO and the NGO, the AER is required to take into account, in the prescribed circumstances, the revenue and pricing principles (RPP) which are set out in s 7A of the NEL and s 24 of the NGL respectively: s 16(2) of the NEL and s 28(2) of the NGL. None of those provisions, nor the NEO or the NGO, were amended by the 2013 Legislative Amendments.

77    The ultimate objective reflected in the NEO and NGO is to direct the manner in which the national electricity market and the national natural gas market are regulated, that is, in the long term interests of consumers of electricity and natural gas respectively with respect to the matters specified. The provisions proceed on the legislative premise that their long term interests are served through the promotion of efficient investment in, and efficient operation and use of, electricity and natural gas services. This promotion is to be done “for the long term interests of consumers”. It does not involve a balance as between efficient investment, operation and use on the one hand and the long term interest of consumers on the other. Rather, the necessary legislative premise is that the long term interests of consumers will be served by regulation that advances economic efficiency.

78    In broad terms, it can be said that the economic foundations of the regulatory regime are well understood. In Application by ElectraNet Pty Limited (No 3) [2008] ACompT 3 at [15], the Tribunal said:

The national electricity objective provides the overarching economic objective for regulation under the Law: the promotion of efficient investment in the long term interests of consumers. Consumers will benefit in the long run if resources are used efficiently, i.e. resources are allocated to the delivery of goods and services in accordance with consumer preferences at least cost. As reflected in the revenue and pricing principles, this in turn requires prices to reflect the long run cost of supply and to support efficient investment, providing investors with a return which covers the opportunity cost of capital required to deliver the services.

79    As noted above, the Second Reading Speech on the introduction of the National Electricity (South Australia) (New National Electricity Law) Amendment Bill (South Australian House of Assembly Hansard, 9 February 2005, p 1451) states (at p 1452):

The national electricity market objective in the new National Electricity Law is to promote efficient investment in, and efficient use of, electricity services for the long term interests of consumers of electricity with respect to price, quality, reliability and security of supply of electricity, and the safety, reliability and security of the national electricity system. The market objective is an economic concept and should be interpreted as such. For example, investment in and use of electricity services will be efficient when services are supplied in the long run at least cost, resources including infrastructure are used to deliver the greatest possible benefit and there is innovation and investment in response to changes in consumer needs and productive opportunities. The long term interests of consumers of electricity requires the economic welfare of consumers, over the long term, to be maximised. If the National Electricity Market is efficient in an economic sense the long term economic interests of consumers in respect of price, quality, reliability, safety and security of electricity services will be maximised. ... Applying an objective of economic efficiency recognises that, in a general sense, the national electricity market should be competitive, that any person wishing to enter the market should not be treated more or less favourably than persons already participating in the market, and that particular energy sources or technologies should not be treated more or less favourably than other energy technologies.

80    Thus, in Application by Envestra Limited (No 2) [2012] ACompT 3 (Envestra (No 2)), the Tribunal summarised with approval certain submissions by the AER, which were not challenged in that case (at [183]):

The AER submitted that rule 91 requires the AER to permit service providers a reasonable opportunity to recover what the AER considers "legitimate costs". Legitimacy, according to the AER is informed by the NGO and, in particular, means costs that would be incurred in a "workably competitive market". The requirement for replication of a workably competitive market outcome is said to be derived from the intent of the regulatory framework. This phrase appears to come from the Australian Energy Market Commission, Rule Determination, National Electricity Amendment (the Economic Regulation of Transmission Services) Rule 2006 No. 18, published on 16 November 2006. In this determination, the Australian Energy Market Commission, at page 93, describes the fundamental objective of regulation as being:

to reproduce, to the extent possible, the production and pricing outcomes that would occur in a workably competitive market in circumstances where the development of a competitive market is not economically feasible...

81    A similar point was made in the High Court with respect to a very similar progenitor under the then applicable gas regime, in East Australian Pipeline Pty Ltd v Australian Competition and Consumer Commission (2007) 233 CLR 229 at [18] (East Australian Pipeline):

The context and purpose of the Code is well understood, not least because the objectives of the legislation are articulated in the legislation itself in considerable detail. The Code as a whole provides for a regulatory regime of a kind which is "a surrogate for the rewards and disciplines normally provided by a competitive market". Competitive pressures in a market stimulate efficiency of production and resource allocation, they stimulate efficient investment decisions and they minimise costs. No party disputed the fact that the regulatory process set out in the legislation was directed to eliminating monopoly pricing whilst nevertheless providing a rate of return to pipeline owners, commensurate with a competitive market.

82    Those references have a particular significance in this matter as appears in the Tribunal’s consideration of the issues concerning rate of return on debt.

83    It is convenient, at this point, to note how the AER, in the Overview section of its Final Decision, explained how it had sought to fulfil its obligation under s 16(1)(d) of the NEL and s 28(1)(b)(iii) of the NGL to make the decision it is satisfied will or is likely to contribute to the achievement of the NEO/NGO to the greatest degree, that is to make the preferable designated reviewable regulatory decision, and to specify its reasons for doing so. Of course, there is further relevant discussion in the various Attachments to its Final Decisions.

84    In the Overview to the Ausgrid Final Decision, Section 1.2 “Contribution to the achievement of the NEO”, the following first appears (at p 10):

We are satisfied that the total revenue approved in our final decision contributes to the achievement of the NEO to the greatest degree. This is because our total revenue reflects the efficient, sustainable costs of providing network services in Ausgrid’s operating environment and the key drivers of efficient costs facing Ausgrid. Our decision will promote the efficient investment in, and efficient operation and use of, electricity services for the long term interests of consumers, as required by the NEO. We set out our reasons below and in our attachments.

85    There follows a description of the key drivers of costs facing a network service provider, acceptance that the key drivers may change from one regulatory period to the next, the most important factors impacting on Ausgrid’s costs in the 2015-2019 regulatory control period, and its overall conclusion. The AER says that the two constituent components of its decision which drive most of the differences between Ausgrid’s proposed revenue and its Final Decision are rate of return (equity and debt), and opex. Those two differences are then explained in some detail.

86    Section 1.4 “Assessment of options under the NEO” (at pp 19-20) returns to the legislative requirement. It attracted considerable attention, particularly by PIAC in its submissions. It is desirable to set it out in full:

The NER recognises that there may be several decisions that contribute to the achievement of the NEO. Our role is to make a decision that we are satisfied contributes to the achievement of the NEO to the greatest degree.

For at least two reasons, we consider that there will almost always be several decisions that contribute to the achievement of the NEO. First, the NER requires us to make forecasts, which are predictions about unknown future circumstances. As a result, there will likely always be more than one plausible forecast. Second, there is substantial debate amongst stakeholders about the costs we must forecast, with both sides often supported by expert opinion. As a result, for several components of our decision there may be several plausible answers or several point estimates within a range. This has the potential to create a multitude of potential overall decisions. In this decision we have approached this from a practical perspective, accepting that it is not possible to consider every possible permutation specifically. Where there are several plausible answers, we have selected what we are satisfied is the best outcome, under the NEL and NER.

In many cases, our approach results in an outcome towards the end of the range of options materially favourable to Ausgrid (for example, our choice of equity beta). While it can be difficult to quantify the exact revenue impact of these individual decisions, we have identified where we have done so in our attachments. Some of these decisions include:

•    selecting at the top of the range for the equity beta

•    setting the return on debt by reference to data for a BBB broad band credit rating, when the benchmark is BBB+

•    the cash flow timing assumptions in the post-tax revenue model

•    the point at which we have set the benchmark for opex

•    the allowances we have made for operating environment factors in our benchmarking analysis.

We set out our detailed reasons in the attachments. They demonstrate that the constituent components of our decision comply with the NER’s requirements. At an overall level our decision reflects the key reasons set out above, which indicate that Ausgrid should recover less revenue than it has proposed or recovered in recent years. Our decision reflects these at both the constituent component and overall revenue levels.

Given our approach, we are satisfied that our decision will or is likely to contribute to the achievement of the NEO to the greatest degree.

The Tribunal’s Role on Review

87    The legislative and regulatory background referred to above highlights the complex task of the Tribunal since the 2013 Legislative Amendments.

88    Although the Minister submitted that the 2013 Legislative Amendments may have required a reconstruction of certain provisions of the NEL and the NGL, even unamended provisions, the Tribunal did not discern from the Minister’s submissions that it was necessary to refine or re-define the way the earlier and unamended provisions of the NEL or the NGL have been applied or construed.

89    Rather, it can be accepted, the 2013 Legislative Amendments:

(a)    clarify or emphasise the alignment of both the decision making role of the AER by s 16(1)(d)(i) of the NEL and s 28(1)(b)(iii)(A) of the NGL and the decision making role of the Tribunal by s 71P(2a)(c) of the NEL and s 259(4a)(c) of the NGL, with the achievement of the NEO and the NGO; and

(b)    require the Tribunal, when deciding whether to vary the AER decision under review or to remit it to the AER for further consideration, to do so only if it is satisfied that that will result in an improved decision better fulfilling the NEO or the NGO in the long term interests of consumers. The Minister’s submission, which the Tribunal accepts, is that a regulatory regime such as the NEL and NGL serves as a “surrogate for the rewards and disciplines normally provided by a competitive market” (as originally cited at [81] above in East Australian Pipeline (2007) 233 CLR 229 at [18]). As a surrogate, of course, it can only approximate those rewards and disciplines.

90    As the AER and the Minister point out, this feature of the regulatory regimes acknowledges the existence of a range of possible decisions which equally, or approximately equally, promote economic efficiency. But that is not to say that such decisions will equally, or even approximately equally, promote the long term interests of consumers. As was said in introducing the 2013 Legislative Amendments:

The national electricity objective and national gas objective explicitly target economically efficient outcomes that are in the long term interests of consumers, but the nature of decisions in the energy sector are such that there may be several possible economically efficient decisions, with different implications for the long term interests of consumers. (emphasis added)

See: South Australia, House of Assembly, Hansard, 26 September 2013 at 7172 (The Hon J R Rau).

91    Consequently, the correction of error or errors in a decision under review will not necessarily lead to a materially preferable decision. Whether there is a preferable decision to the decision made by the AER depends upon an assessment of the decision as a whole, and a comparison of that decision with a putative alternative decision; it does not depend simply on an assessment of errors in individual components of the decision under review. That reflects the Minister’s comments that the 2013 Legislative Amendments:

Require the [Tribunal] to undertake a holistic assessment of whether the setting aside or varying of the reviewable regulatory decision, or remission of the matter back to the original decision maker, will or is likely to deliver a materially preferable outcome in the long term interests of consumers.

See: South Australia, House of Assembly, Hansard, 26 September 2013 at 7173 (The Hon J R Rau).

92    The 2013 Legislative Amendments reflect a deliberate policy decision to change the NEL and NGL and, in particular, to change the scope of the Tribunal’s limited merits review function. They introduce a series of steps which require the Tribunal, even if it is satisfied of one or more grounds of review arising from one particular aspect of the AER’s decision, to consider whether and how the potential consequences of that ground being established may be reduced, counterbalanced or rendered immaterial following the processes mandated by ss 71P(2a), 71P(2b)(a) and 71P(2b)(c) of the NEL and ss 259(4a), 259(4b)(a) and 259(4b)(c) of the NGL.

93    Nevertheless, as the Minister said, it is axiomatic in the principles of regulatory economics that promoting allocative, productive and dynamic efficiency generally serves the long term interests of consumers. However, the 2013 Legislative Amendments contemplate that there can be more than one available decision that is economically efficient – and certainly more than one available decision that is roughly so, having regard to the unavoidable approximations involved.

94    The role of the AER and the Tribunal in giving effect to the NEO and NGO is to promote the “long term interests of consumers” with respect to the matters stipulated. This will always involve an attempt to promote efficient investment in, and operation and use of, services, but will also require taking into account other factors as appropriate. The Minister gave emphasis to taking into account the appropriate “long term” character of the consumer interests that are to be promoted. He also said that it may, in an appropriate case, require taking into account the distribution to consumers of the benefits of efficiencies, even though this may not bear on the economic efficiency of the decision. Of course, the benefits of efficiencies are to be awarded to consumers – that is, to use the Minister’s word, “axiomatic”. The Tribunal has previously acknowledged that “in some circumstances” it may, in effect, be preferable for the benefits of economic efficiencies to be passed to consumers in their long term interests rather than wholly retained or captured by the regulated entity: Envestra (No 2) [2012] ACompT 3 at [265].

95    It is nevertheless accepted that the decision which is to be made must comply with, or meet, the requirements of the relevant Rules – whether the NER or the NGR. The AEMC has specified the means by which the AER (and the Tribunal) is to reach its decision. An “holistic” assessment of the AER decision by the Tribunal cannot entitle it to ignore the relevant Rules or the RPP in s 7A of the NEL or s 24 of the NGL which inform proper application of the relevant Rules made by the AEMC. They are made precisely because, in both the national electricity network and the national gas network, the underlying objective is the establishment and maintenance of competition in the long term interests of consumers. Where there is monopoly infrastructure (such as the transmission and distribution networks), the regulatory process is to reach an outcome through the relevant Rules set down by the AEMC, and then through the AER applying those Rules, so far as can be done under them, so that its reviewable regulatory decisions produce for the DNSPs an outcome which would reflect the outcome of a competitive market.

96    The AER made that point in its submissions. It pointed out that, in a workably competitive market, an inefficient business with higher costs than its efficient competitors still receives the market price which is set by the efficient competitors. As all businesses in workably competitive markets receive the market price, an inefficient business will only be able to obtain a price which is lower than its costs. It could not ask its customers to pay higher prices (generally, even for a transitional period) to fund the costs of it moving away from inefficient practices and, accordingly, will be unable to achieve the same returns to shareholders as an efficient business.

97    It is desirable to say something about the expression “will, or is likely to, result in ....” in s 71P(2a)(c).

98    The submissions acknowledge that the word “likely” may be given a range of, or shades of, meaning: see eg Tillmans Butcheries Pty Ltd v Australian Meat Industry Employees Union (1979) 27 ALR 367 per Deane J at 380.

99    As the Minister has noted, if the words “is likely to” are used in contradistinction to the preceding word “will”, they connote a lower standard of satisfaction than “will”. Several provisions of the Competition and Consumer Act 2010 (Cth), directed to proscribing restrictive trade practices, are enlivened in relation to conduct that “has, or is likely to have, the effect of substantially lessening competition” (ss 45 and 47) or “have the effect, or be likely to have the effect, of substantially lessening competition” (s 50). A number of cases suggest that “likely”, in this context, connotes a “real chance” – something more than a mere possibility, but less than a likelihood on the balance of probabilities: see especially Monroe Topple & Associates Pty Ltd v Institute of Chartered Accountants (2002) 122 FCR 110; Seven Network Limited v News Limited (2009) 182 FCR 160 at [750]; Australian Gas Light Co v ACCC (2003) 137 FCR 317 at [348].

100    In the present context, the legislature has identified a particular goal – the regulatory decision that advances to the greatest degree the NEO or NGO (s 16(1)(d) of the NEL and s 28(1)(b)(iii)(A) of the NGL). It has charged the AER with responsibility to achieve that goal, and empowered the Tribunal to set aside or vary the AER’s attempt only if “satisfied” that to do so will or is likely to result in a materially preferable decision.

101    In that context, the Tribunal considers it appropriate to proceed on the basis that the phrase “will, or is likely to” should be construed as a compendious expression of a standard of likelihood that is equivalent to “more likely than not”. It considered that that approach best achieves the purpose or object of the NEL and NGL. It reflects consistency in the intended standard of satisfaction on the part of the Tribunal with the standard which the AER was required to apply. It is also consistent with the Second Reading Speech referred to above, in which the amendments were explained as being directed towards “ensur[ing] that the limited merits review only results in changes to decisions under review where the [Tribunal] concludes that there is a materially preferable decision in the long term interests of consumers”.

The Grounds of Review

102    Understandably, both at the stage of considering whether leave to apply for review should be granted and on the reviews themselves, there was considerable debate about the character of the available grounds of review and whether, in the particular circumstances, a ground of review had been made out. Concern was expressed by the AER about the breadth of expression adopted by the various DNSPs to invoke the grounds of review.

103    With one qualification, the Tribunal considers that it is preferable to defer addressing that debate until the point of considering separately the particular issues which the various applicants raised, and how the asserted error was then described. It did not discern, in the course of the three intense weeks of competing submissions, that ultimately the respective applicants had expressed their contentions in a way which caused any surprise or unfairness to the AER. That is so, even though in some instances, the relevant applicant sought to engage one or more of the available grounds of review as applicable to a particular contention of error. It is preferable to deal with those matters as they come to arise in their context.

104    As will be seen, the Tribunal is well alive to the terms of s 71C of the NEL and s 246 of the NGL. The onus is on a particular applicant to satisfy the Tribunal that the AER determination, in the respect being debated, is in error within one or more of the available grounds: Application by DBNGP (WA) Transmission Pty Ltd (No 3) [2012] ACompT 14 at [483]-[486]. See also the Tribunal’s remarks in Application by EnergyAustralia [2009] ACompT 8 at [70]; Application by WA Gas Networks (No 3) [2012] ACompT 12 at [22] (WA Gas Networks).

105    The one possible reservation to those comments arises from PIAC’s submission that s 16(1)(d) of the NEL as introduced by the 2013 Legislative Amendments has widened the grounds of review available under s 71C(1) of the NEL.

106    It may be accepted that the amendment of one provision in an Act may impliedly alter the meaning of an unamended provision of that Act: Commissioner of Stamps (SA) v Telegraph Investment Co Pty Ltd (1995) 184 CLR 453 at 463.

107    However, the Tribunal does not consider that s 16(1)(d) expands in any way the grounds of review in s 71C(1) of the NEL. There is no textual or contextual reason why it should do so, and the extraneous material to which it is permissible to refer does not suggest that that was intended.

108    The PIAC submissions focused upon particular steps in the AER decision making process to demonstrate, in the terms of s 71C, the ground of review being made out. Whilst there is clearly a heavy responsibility on the AER to make the decision which contributes to the NEO “to the greatest degree”, the PIAC contention that it had not done so was not at large, but in the context of the particular asserted grounds of review. A ground or grounds of review, if made out, enlivens the further steps to be addressed by the Tribunal under s 71P of the NEL. The Tribunal has not found it either necessary or helpful, in considering whether PIAC has made out the grounds of review for which it contends, to give any expanded meaning to the available grounds of review. It has taken them as they appear, and as they have been explained in other decisions. On the other hand, the Tribunal has not taken the view that the satisfaction of the AER that it has met the requirement of s 16(1)(d) precludes the Tribunal from itself having to address those steps in s 71P once it has been satisfied that a ground or grounds of review have been made out. If, as a result of that process, the Tribunal determines (as it has done in some respects) to set aside the AER determination and to remit the matter to the AER, it has not needed to conduct separately a qualitative assessment of the AER’s satisfaction under s 16(1)(d), because it has been satisfied that that assessment was in part the product or consequence of the error now exposed by the established ground of review.

The Structure of the Decision

109    The review applicants (variously supported by the interveners) take issue with aspects of the following building blocks in the AER’s decisions:

•    Opex: PIAC; Networks NSW; ActewAGL

•    X-factor: Networks NSW

•    Efficiency Benefit Sharing Scheme (EBSS): Networks NSW

•    Service Target Performance Incentive Scheme (STPIS): ActewAGL

•    Return on equity: PIAC; Networks NSW; ActewAGL; JGN

•    Return on debt: PIAC; Networks NSW; ActewAGL; JGN

•    Gamma: Networks NSW; ActewAGL; JGN

•    Metering services - opex: ActewAGL

•    Metering classification: ActewAGL

•    Metering services: Ausgrid

•    Market Expansion Capital Expenditure (ME Capex): JGN

110    The Tribunal has approached each review application in that sequence (where it applies to a particular applicant or applicants for review).

111    Under the transitional arrangements, the AER was required to make placeholder distribution determinations for the ACT and NSW DNSPs for the transitional regulatory control period, which would apply for one year (2014-15). The AER was then required to carry out a full regulatory determination process and make distribution determinations for the ACT and NSW DNSPs for the subsequent regulatory control period, from 2015-16 to 2018-19.

112    Due to the substantial commonality of issues raised, it was common ground that it would be appropriate for the applicants to prepare common written submissions in relation to those issues or topics which they had substantially in common with other applicants. On this basis, and pursuant to the Tribunal’s directions of 5 August 2015:

(a)    the Network Applicants prepared common written submissions on the issues of return on equity and the value of imputation credits;

(b)    Networks NSW and ActewAGL prepared common written submissions on return on debt; and

(c)    Networks NSW prepared common written submissions on framework, opex, X-factor, EBSS, the application of s 71O of the NEL and materially preferable NEO decision.

113    The applicants were also represented during the hearing by common counsel in respect of those issues or topics which they had in common with the other applicants. Relevantly, common counsel appeared on behalf of each of the Network Applicants in relation to return on equity and gamma, on behalf of Networks NSW and ActewAGL in respect of return on debt, and on behalf of each of the Networks NSW DNSPs in relation to framework, opex, X-factor, EBSS, s 71O of the NEL and the materially preferable NEO decision. In addition, during the course of the hearing the Network Applicants and the interveners adopted the submissions of other parties where it was appropriate to do so.

114    As noted above, where there is a “shared” issue or topic, the Tribunal has endeavoured to incorporate by reference the general or common consideration or matters addressed above. The particular aspects of the application are of course separately addressed. The Tribunal’s reasons so far as they relate to the particular aspects of the six applications not specifically addressed here are outlined in Applications by Public Interest Advocacy Centre Ltd and Endeavour Energy [2016] ACompT 2; Applications by Public Interest Advocacy Centre Ltd and Essential Energy [2016] ACompT 3; Application by ActewAGL Distribution [2016] ACompT 4 and Application by Jemena Gas Networks (NSW) Ltd [2016] ACompT 5.

OPERATING EXPENDITURE (OPEX)

INTRODUCTION

115    This topic occupied a substantial part of the hearing, and a considerable volume of the very extensive review-related material: s 71R of the NEL (and s 261 of the NGL).

116    Before turning to the particular opex issues, it is helpful to note more broadly the structure of Ch 6 of the NER, relating to the economic regulation of distribution services. Section 6.1 deals with introductory matters. Rule 6.1.1 affirms that the AER is responsible for the economic regulation of distribution services.

117    Of general relevance is that the structure of Ch 6 set out in r 6.1.2 includes that Part C sets out the building block approach to the regulation of services of the character provided by Networks NSW and ActewAGL. Part B, amongst other things, obliges the AER to make a distribution determination for each DNSP: r 6.2.4. Rule 6.2.8 requires the AER to make and publish, amongst others, the RoR Guideline and the EFA Guidelines. It does not oblige the AER to adhere to those guidelines, but the AER must explain in its relevant decision why it has departed from them: r 6.2.8(c).

118    Part C of Ch 6 is of immediate and direct relevance. Rule 6.3 defines a building block determination as a component of a distribution determination. The procedure to get to that point is contained in Part E of Ch 6, including for each DNSP to submit a building block proposal as prescribed.

119    Rule 6.3.2 prescribes the contents of a building block determination. Rule 6.4.3 says that the building blocks generally are, for each regulatory year of a regulatory control period, to provide (as relevant to the present applications) for:

(1)    indexation of the regulatory asset base: see r 6.5.1;

(2)    a return on capital for that year: see r 6.5.2 – within r 6.5.2 both the return on equity and the return on debt, as well as the RoR Guideline, are addressed;

(3)    the estimated cost of corporate income tax of the DNSP for that year: r 6.5.3;

(4)    the revenue increments or decrements for that year (relevantly) from the EBSS and the STPIS: rr 6.5.8, 6.5.8A and 6.6.2;

(5)    the forecast opex for that year: r 6.5.6.

120    Those, and the other elements of the building blocks, including the forecast capex for the regulatory control period (r 6.5.7), are then detailed and cross-referenced to the relevant rules, as referred to in r 6.5. A simplified sketch of the resulting annual building block revenue requirement is set out below.
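By way of illustration only, and not as a statement of the precise formulation in the NER (which is contained in r 6.4.3 and the rules cross-referenced there), the annual revenue requirement produced by the building block approach can be sketched as:

$$ARR_t \;=\; \underbrace{r_t \times RAB_t}_{\text{return on capital}} \;+\; D_t \;+\; OPEX_t \;+\; TAX_t \;+\; I_t$$

where $RAB_t$ is the indexed regulatory asset base, $r_t$ the allowed rate of return (combining the return on equity and the return on debt), $D_t$ regulatory depreciation (one of the other elements referred to above), $OPEX_t$ the forecast operating expenditure, $TAX_t$ the estimated cost of corporate income tax (net of the value of imputation credits, gamma), and $I_t$ the revenue increments or decrements from incentive schemes such as the EBSS and the STPIS.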

121    As indicated, these reasons generally deal only with the application to review the AER’s Final Decision relating to Ausgrid. The following focuses on the attachment to that decision (Attachment 7 – Operating expenses) which provides an overview of the AER’s assessment of Ausgrid’s opex and, in appendixes to the attachment, an analysis of its assessment of that opex.

122    However, because much of the AER’s reasoning for not accepting Ausgrid’s forecast opex is common to its decisions in relation to the opex forecasts of the other two Networks NSW businesses, namely Essential and Endeavour, and to ActewAGL, it is convenient to consider their challenges along with those of PIAC and Ergon which also address the principal issue under this heading. It is noted that, as well as the three Networks NSW applications (and the ActewAGL application), PIAC separately challenged the opex allowance for the Networks NSW businesses in its three applications. JGN did not raise this issue.

The Opex Issues

The principal issue

123    The principal issue that may be drawn from the following overview of the parties’ challenges is whether the AER’s application of what the parties referred to as the EI model discharged its obligations under rr 6.5.6 and 6.12.1(4).

124    That issue was expanded by Mr O’Bryan QC for the AER as follows:

It’s said that the AER has not complied with the regulatory rules; and if made out, that would involve an incorrect exercise of discretion, and it’s also said principally that the EI model that was used by the AER, both in assessing the DNSPs proposed opex and also in estimating required opex was flawed. And that argument is properly characterised as either the AER incorrectly exercising its discretion or making an unreasonable decision. Broadly, it will be necessary for the Tribunal to determine whether the AER’s approach is compliant with the regulatory framework and whether the decision had a reasonable basis.

… … …

…all econometric models are an approximation, they’re a simplification of the real world. They can never reflect absolutely all the on-the-ground features of the real world and that must be recognised. That being recognised, a regulator will take steps, or ought to take steps, acting reasonably, to make allowances for what’s not revealed by the model, and so the question then becomes as what the AER has done in its estimation, has it taken sufficient reasonable steps to make those allowances.

Overview of the parties’ challenges

Networks NSW

125    The three Networks NSW DNSPs challenge the AER’s estimate (made under r 6.12.1(4)) of their required opex because, in their contention, the AER’s estimates were too low. The figures in this part of the reasons are as presented by the parties, some of whom did not specify whether they are nominal or real figures. Networks NSW estimates the negative impact on each of its DNSPs to be: Ausgrid $731m, Endeavour $264m and Essential $737m.

126    It is Networks NSW’s view that the opex actually incurred by a DNSP is the best source of information for the AER of its required opex. Networks NSW submits that the AER has, however, ignored that information and instead relied on an unsound and untested econometric model (developed by Economic Insights Pty Ltd (EI) and which the parties refer to as the EI model) to estimate opex for each of the three Networks NSW DNSPs by reference to other businesses against which they were benchmarked. The Tribunal will adopt the term EI model as used by the parties.

127    Networks NSW also submits that the issue is not whether the EI model is better than alternative models but whether, having regard to the data limitations and other matters, any of the models are fit to be given 100 percent weight in assessing an appropriate level of opex.

ActewAGL

128    ActewAGL also challenges the AER’s estimate of its required opex because, in its view, the AER’s estimate is too low. The AER’s decision to not accept ActewAGL’s opex forecast resulted in a $130.6m ($2013-14) reduction in the forecast.

129    ActewAGL identifies what it perceives as three broad areas of deficiency in the AER’s decision.

130    First, the AER’s methodology is inconsistent with the methodology prescribed by rr 6.5.6(c) and 6.12.1(4) of the NER, and thus contrary to law.

131    Secondly, the AER’s benchmarking has such serious technical deficiencies that it has no value as a means of assessing ActewAGL’s efficient costs.

132    Thirdly, the AER is in error in confining its consideration of the question whether ActewAGL’s forecast opex reasonably reflects the efficient costs of a prudent operator to exogenous considerations (ie matters that are beyond the control of a DNSP, such as the weather and geography). That is to say, in making its decision the AER has assumed that the NER prohibits it from taking into account the real world consequences of its decision on consumers and others to the extent that those consequences arise from endogenous considerations (ie matters within the control of ActewAGL, such as its previous business decisions). This, ActewAGL submits, is:

(a)    wrong as a matter of law; and

(b)    having regard to what it describes as the serious impact of the AER’s decision on ActewAGL’s ability to deliver safe and reliable supplies of electricity, is a matter that the AER should have considered.

PIAC

133    PIAC challenges the AER’s estimates of the three Networks NSW DNSPs’ required opex because, in its view, the AER’s estimate of opex for each of them is too high: by $365m for Ausgrid, $196m for Endeavour and $291m for Essential.

134    PIAC generally endorses the AER’s benchmarking methodology but takes issue with the AER’s adjustments to the EI model that are, in the words of PIAC, “arbitrary and illogical”. It submits that the adjustments “disguised a very substantial further relaxation from a position that, at the draft decision stage, the AER had already described as ‘cautious’ and ‘conservative’”.

135    It is PIAC’s submission that the “relaxation” results in the Networks NSW DNSPs receiving opex allowances:

(a)    well in excess of the efficient opex requirements of a prudent operator; and

(b)    substantially higher,

than if the AER had applied the results of its benchmarking techniques in an internally logical manner and without a quantitative basis in favour of the DNSPs.

Ergon

136    Ergon (as an intervener in these matters) challenges the decisions because the AER used a flawed model to arrive at its estimates of opex.

137    Further particulars of what the parties perceive as deficiencies in the EI model, the AER’s application of it and the AER’s benchmarking methodology generally appear below.

Background

Opex in the context of the NEL and the NER

138    As observed, the NEL and the NER regulate the revenue that a DNSP may derive from the provision of electricity distribution services and the NEL provides that the AER is responsible for the economic regulation of electricity distribution services, including determination of the DNSP’s annual revenue requirements.

139    The annual revenue requirement for a DNSP for each year of a regulatory control period must be determined using a building block approach. One of those building blocks is the forecast opex for that year (r 6.4.3(a)(7)).

140    Rule 6.4.3(b)(7) provides that a DNSP’s opex for the year is its forecast opex as accepted or substituted by the AER in accordance with r 6.5.6.

141    The parties’ submissions focused on:

(a)    whether the AER’s substitution of forecast opex was in accordance with the requirements of r 6.5.6 which, as a result of the 2012 Rule Amendments, is in a form different from that previously applied by the AER in determining a DNSP’s opex allowance; and

(b)    the significance of the 2012 Rule Amendments vis-à-vis r 6.5.6.

Rule 6.5.6 and the 2012 Rule Amendments

Rule 6.5.6

142    Briefly, rule 6.5.6 requires that:

(a)    a DNSP’s building block proposal must include the total forecast opex it considers is required to achieve each of four opex objectives in r 6.5.6(a)(1)-(4);

(b)    the AER must accept the DNSP’s forecast if it is satisfied that it reasonably reflects each of three opex criteria in r 6.5.6(c)(1)-(3);

(c)    if the AER is not so satisfied, it must not accept the forecast: r 6.5.6(d);

(d)    in deciding whether it is satisfied that a DNSP’s forecast reasonably reflects each of three opex criteria, the AER must have regard to eleven opex factors in r 6.5.6(e): r 6.5.6(c).

143    Rule 6.12.1 provides that a distribution determination is predicated on a number of “constituent decisions” by the AER that are specified in that rule. Rule 6.12.1(4) specifies that one of those constituent decisions is a decision in which the AER either:

(a)    acting in accordance with r 6.5.6(c), accepts the DNSP’s forecast opex (ie is satisfied that it reasonably reflects each of the three opex criteria): r 6.12.1(4)(i); or

(b)    acting in accordance with r 6.5.6(d), does not accept the DNSP’s opex forecast, in which case the AER must set out its reasons for its decision and an estimate of the total of the DNSP’s required opex that the AER is satisfied reasonably reflects the three opex criteria in r 6.5.6(c), taking into account the eleven opex factors in r 6.5.6(e): r 6.12.1(4)(ii).

The 2012 Rule Amendments vis-à-vis rule 6.5.6

144    The relevant 2012 Rule Amendments vis-à-vis r 6.5.6 as identified in the parties’ respective submissions may be summarised as follows:

(a)    a change to r 6.5.6(c)(2) which, in effect, changed the focus of the rule from the costs that would be incurred by a prudent operator in the circumstances of the relevant DNSP (ie the DNSP whose opex forecast is being assessed by the AER) to the costs of a prudent operator per se;

(b)    the deletion of what was opex factor 6.5.6(e)(4) (benchmark opex that would be incurred by an efficient DNSP) and the insertion of a new opex factor 6.5.6(e)(4) (the most recent annual benchmarking report published by the AER under r 6.27 and the benchmark opex that would be incurred by an efficient DNSP);

(c)    the insertion of opex factor 6.5.6(e)(7) (the substitution possibilities between opex and capital expenditure);

(d)    the insertion of opex factor 6.5.6(e)(12) (any other factor that the AER considers relevant and which it has notified the DNSP prior to the DNSP submitting its revised regulatory proposal);

(e)    the deletion of r 6.12.3(f) which constrained the AER’s discretion in developing a substitute estimate under r 6.12.1(4)(ii) by providing, in effect, that if the AER refused to approve an estimate, the substitute estimate must be:

(i)    determined on the basis of the DNSP’s regulatory proposal; and

(ii)    amended from that basis only to the extent necessary to enable it to be approved in accordance with the NER; and

(f)    the amendment of r 6.2.8(a)(1) and the introduction of r 6.4.5 requiring the AER to develop and publish EFA Guidelines specifying the approach that the AER proposes to use to assess a DNSP’s opex and capex forecasts.

The EI model

145    The substance of what the parties referred to as the EI model as it was applied by the AER appears in two reports that EI prepared for the AER, namely:

(a)    the First EI Report: Economic Benchmarking Assessment of Operating Expenses for NSW and ACT Electricity DNSPs, 17 November 2014; and

(b)    the Second EI Report: Economic Insights, Response to Consultants’ Reports on Economic Benchmarking of Electricity DNSPs, April 2015, prepared by EI in response to the DNSPs’ criticism of the AER’s reliance on the First EI Report in its draft decisions.

The First EI Report

146    The release of the First EI Report:

(a)    on 18 November 2014 pre-dated the AER’s publication of its draft decisions for the Networks NSW DNSPs and ActewAGL on 27 November 2014 by but nine days; and

(b)    constituted the first signal of EI’s reliance on overseas data in its model and the AER’s acceptance of such data.

147    It appears from the First EI Report that the AER engaged EI to assist it with the application of economic benchmarking and to advise it on:

(a)    whether the AER should make adjustments to base opex for Networks NSW and ActewAGL based on the results from economic benchmarking models; and

(b)    the productivity change to be applied to forecast opex for these DNSPs.

148    To that end, EI developed:

(a)    three econometric benchmarking models; and

(b)    two Productivity Index Number (PIN) benchmarking models.

149    An econometric model seeks to:

(a)    estimate a relationship between opex and output or explanatory variables, such as those used in the EI model (as explained below, customer numbers, circuit length, maximum ratcheted demand and the proportion of underground circuits); and

(b)    use the variation in costs not explained by the output or explanatory variables to derive an estimate of inefficiency for each benchmarked DNSP.

150    An econometric model will produce only an estimate of the relationships between opex and the specified output variables. Thus, taking Ausgrid as an example and using its customer numbers, circuit length, maximum ratcheted demand and share of undergrounding for a particular year, the output of an econometric model would not exactly equal Ausgrid’s actual opex for that year.
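In very general terms, and only to illustrate that point (the precise specification and variable definitions used by EI are those set out in its reports), an opex cost function of this kind can be written as:

$$\ln(opex_i) \;=\; \beta_0 + \beta_1 \ln(\text{customer numbers}_i) + \beta_2 \ln(\text{circuit length}_i) + \beta_3 \ln(\text{ratcheted maximum demand}_i) + \beta_4 \,(\text{share underground}_i) + \varepsilon_i$$

where the residual term $\varepsilon_i$ captures whatever the specified variables do not explain. In a stochastic frontier specification that residual is decomposed as $\varepsilon_i = v_i + u_i$, with $v_i$ representing statistical noise and $u_i \geq 0$ interpreted as inefficiency. It is because the fitted value omits $\varepsilon_i$ that the model’s output will not exactly equal a DNSP’s actual opex for a given year.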

151    As observed by the Productivity Commission: Electricity Network Regulatory Frameworks, Vol 1, 9 April 2013, p.182:

Benchmarking models do not actually estimate inefficiency, although this is how they are generally interpreted. The results of any benchmarking model show the extent to which the model fails to explain performance … That is, the inefficiency of any business is the difference between the business’s observed performance and that predicted by a set of cost drivers. This can reflect missing cost drivers, data errors, incorrect estimation methods, and invalid assumptions about the functional form and error distributions.

152    Reasons why an econometric model’s estimate of a DNSP’s inefficiency may differ from its actual inefficiency include:

(a)    the data used are not accurate (eg suffer from some measurement error or are not comparable);

(b)    the sample used may be too small to produce accurate estimates;

(c)    the variables included in the model may be either inappropriate or incomplete (in the sense they do not reflect all the relevant drivers of opex); or

(d)    the assumptions underpinning the econometric model are inappropriate.

153    As noted in a report prepared for Networks NSW by Pacific Economics Group Research LLC (PEG), Statistical Benchmarking for NSW Distributors, 19 January 2015 (the Second PEG Report), at p 18:

Some of these sources of error may not be detectable based on the model results alone, and therefore must be guarded against through the careful application of economic theory and sector-specific knowledge.

(This report is referred to as “the Second PEG Report” because, prior to the draft determinations, the AER retained PEG to compile a US data set for the AER which, in the event, it did not use.)

154    The three econometric benchmarking models developed by EI are:

(a)    a Cobb Douglas stochastic frontier analysis (CD SFA) opex cost function model (EI selected this CD SFA model as its preferred econometric model. It is, as noted by the AER at p 7-26 of Attachment 7 to the Ausgrid Final Decision and elsewhere, its “preferred model” and is the model the parties refer to as the EI model);

(b)    a Cobb Douglas least squares econometric (LSE) model: an econometric opex cost function using the Cobb Douglas functional form; and

(c)    a translog LSE model: an econometric opex cost function using the translog functional form (the difference between the two functional forms is sketched below).
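By way of a simplified sketch only (the precise specifications are those in the EI reports), the two functional forms differ in the terms they include. A Cobb Douglas cost function is linear in the logarithms of the explanatory variables, while a translog cost function adds second-order (squared and interaction) terms:

$$\text{Cobb Douglas:}\quad \ln C = \beta_0 + \sum_k \beta_k \ln x_k$$

$$\text{Translog:}\quad \ln C = \beta_0 + \sum_k \beta_k \ln x_k + \tfrac{1}{2}\sum_k \sum_l \beta_{kl} \ln x_k \ln x_l$$

The SFA and LSE variants then differ in how the residual is treated: the stochastic frontier approach decomposes it into statistical noise and a one-sided inefficiency term, whereas least squares estimation does not.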

155    The PIN benchmarking models developed by EI are:

(a)    a multilateral total factor productivity (MTFP) model (which assesses the productivity of all inputs, opex and capital, relative to total output); and

(b)    a multilateral partial factor productivity (MPFP) model (which assesses the productivity of opex as an input relative to total output).

The use of overseas data in the EI model

156    While EI derived its MTFP and MPFP scores using only the Australian DNSPs’ responses to the AER’s regulatory information notices (RINs), each of EI’s econometric models used data derived from 68 DNSPs as follows:

(a)    all 13 Australian DNSPs’ responses to the AER’s RINs;

(b)    18 New Zealand DNSPs; and

(c)    37 Ontario DNSPs.

As a result, 19 percent of the data for the EI model was derived from Australian DNSPs, 26 percent from New Zealand and 54 percent from Ontario DNSPs.
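Those proportions correspond to the DNSP counts (on the assumption that the shares are calculated on the number of DNSPs rather than on the number of individual observations):

$$\frac{13}{68} \approx 19\%, \qquad \frac{18}{68} \approx 26\%, \qquad \frac{37}{68} \approx 54\%.$$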

157    EI’s explanation for its use of what it described as: “… comparable regulators’ data from New Zealand and Ontario” appears in the First EI Report at pp 28-29 as follows:

After a careful analysis of the economic benchmarking RIN data we concluded that there was insufficient variation in the data set to allow us to reliably estimate even a simple version of an opex cost function model.

… … …

We thus concluded that to obtain robust and reliable results from an econometric opex cost function analysis we needed to look to add additional cross sectional observations which meant drawing on overseas data, provided largely comparable DNSP data were available.

158    EI’s First Report emphasised that the reason for its inclusion of the overseas data was to increase the sample size to obtain what it described as:

(a)    more robust estimates of the slope coefficients in the cost function; and

(b)    more robust opex efficiency comparisons among the Australian DNSPs.

159    Benchmarking the Australian DNSPs against their international counterparts was not, it said, one of its objectives. Thus, it explained at p 31, it included country-level dummy variables (for New Zealand and Ontario) in its cost functions to:

… control for possible cross-country differences/inconsistencies in accounting definitions, price measures, regulatory and physical operating environments, etc. As a consequence, all cost efficiency scores obtained are relative to Australian best practice and NOT relative to international best practice.

Country dummy variables

160    As EI could not be certain it had exactly the same opex coverage across the three countries it included country dummy variables for New Zealand and Ontario to pick up differences in opex coverage (as well as systematic differences in operating environment factors such as the impact of harsher winter conditions in Ontario). As explained by EI in the First EI Report at p 31, the country dummy variables also pick up differences in conversion factors not adequately captured by its use of the Organisation for Economic Cooperation and Development’s (OECD) gross domestic product (GDP) purchasing power parities to convert financial variables to Australian dollars.

161    Thus, in very simplified terms, drawing on EI’s explanation:

(a)    a country dummy variable for Ontario may be implemented by, say, assigning a value of 1 to a DNSP based in Ontario and 0 otherwise; the estimated coefficient on that dummy variable then represents the percentage by which a DNSP’s opex is higher or lower if it is located in Ontario, all other things being equal;

(b)    while the inclusion of a dummy variable in an econometric model will change the ‘intercept’, it will not change the ‘slope’ of the model; that is, where a dummy variable is used (here to indicate the different countries in which a DNSP may be located), the slope coefficient of the relationship between opex and each output (customer numbers, ratcheted maximum demand etc) remains the same for every DNSP. So in the case of the EI model, the addition of dummy variables for Ontario and New Zealand assumes that the underlying relationship between, say, customer numbers and opex is the same for a DNSP regardless of the country in which it is located (see the illustrative sketch following this list).
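
For illustration only, the following Python sketch shows the mechanics described in paragraph 161. The coefficient values are hypothetical and this is not EI’s actual estimated cost function; the point is that the country dummy variables shift only the intercept of a log-linear Cobb Douglas opex cost function, while the slope coefficients on the outputs are common to all DNSPs.

```python
import math

# Illustrative only: a log-linear Cobb Douglas opex cost function with country
# dummy variables for New Zealand and Ontario. Coefficient values are
# hypothetical, not EI's estimates.
COEF = {"alpha": 1.0,                                                  # common intercept
        "b_customers": 0.6, "b_circuit_km": 0.2, "b_max_demand": 0.2,  # common slopes
        "g_nz": -0.1, "g_ontario": 0.3}                                # country dummy coefficients

def predicted_ln_opex(customers, circuit_km, ratcheted_max_demand, country):
    d_nz = 1.0 if country == "New Zealand" else 0.0
    d_on = 1.0 if country == "Ontario" else 0.0
    return (COEF["alpha"]
            + COEF["b_customers"] * math.log(customers)
            + COEF["b_circuit_km"] * math.log(circuit_km)
            + COEF["b_max_demand"] * math.log(ratcheted_max_demand)
            + COEF["g_nz"] * d_nz
            + COEF["g_ontario"] * d_on)

# Two DNSPs with identical outputs but different countries differ only by the
# country dummy coefficient: a pure intercept shift, with the slopes unchanged.
australia = predicted_ln_opex(800_000, 40_000, 5_000, "Australia")
ontario = predicted_ln_opex(800_000, 40_000, 5_000, "Ontario")
print(round(ontario - australia, 3))  # equals g_ontario (0.3)
```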

162    Relying on the Second EI Report, the AER submissions provide the following examples of what the country dummy variables correct for:

(a)    differences in opex coverage (eg the inclusion or exclusion of opex associated with very high voltage assets);

(b)    differences in DNSP activity coverage (eg whether the DNSP performs meter reading);

(c)    operating environment differences, such as the impact of harsher winter conditions in Ontario;

(d)    regulatory environment differences;

(e)    accounting differences, such as different definitions of reporting categories; and

(f)    differences in currency conversion factors not adequately captured by the use of OECD GDP purchasing power parities.

EI’s outputs specification criteria

163    EI’s selection of outputs is based on three criteria referred to in the First EI Report at p 10. First, that the output aligns with the rule 6.5.6 opex objectives.

164    Secondly, that the output reflects a service provided to a customer rather than an activity undertaken by a DNSP which does not directly affect what the customer receives. As explained by EI, if an activity undertaken by a DNSP which does not directly affect what its customers receive is included as an output, there is a risk that the DNSP would have an incentive to over-engage in the activity and not concentrate sufficiently on meeting its customers’ needs at an efficient cost.

165    Thirdly, that the output is significant. That is, as explained by EI, while a DNSP has a wide range of outputs, its costs are dominated by a few key outputs and only those key outputs should be included to keep the analysis manageable and to be consistent with the high level nature of economic benchmarking (eg a call centre’s operations are not normally a large part of a DNSP’s costs and so the centre’s performance is not normally included as an output in a DNSP economic benchmarking study).

The MTFP and MPFP outputs

166    Attachment 7 to the Ausgrid Draft Decision provides (at pp 7-56 and 7-59) the following explanation (without footnotes) for EI’s selection of output specifications for its MTFP and MPFP models, to which the AER had regard in its Final Decision:

Economic Insights' preferred output specification for the MTFP and MPFP includes:

    Customer numbers

    Ratcheted maximum demand

    Circuit line length

    Energy throughput

    Reliability (measured as total customer minutes off supply).

… this specification takes into account the operating environment variable of customer density by including both customers and line length as outputs. It similarly includes some allowance for differences in energy density and demand density by including energy delivered and a measure of maximum demand as outputs. Further this specification includes reliability as an output.

The MTFP analysis uses opex and capital as inputs. In this analysis capital is split into five distinct components – subtransmission overhead lines, distribution overhead lines, subtransmission underground cables, distribution underground cables and transformers and other. Each input is measured in terms of its physical quantity. This measure of inputs aligns with Economic Insights' preferred input specification which is justified in our explanatory statement to our Guideline.

Several submissions on our draft benchmarking report said that we did not allocate an appropriate weight to line length. Economic Insights consider that the weighting for overhead lines is appropriate because it has been developed through a Leontief estimation of the cost function.

Some submissions also noted that Economic Insights' lines and cables input index for MTFP analysis might be multiplicative in nature placing a greater weighting on high voltage lines than is warranted. Economic Insights addressed this concern by creating separate input indexes for subtransmission and distribution lines. The weighting given to high voltage lines will not influence our alternative assessment techniques that examine the productivity of opex. These techniques, unlike MTFP, are not sensitive to the weighting given to individual capital inputs.

… … …

In addition to accounting for these factors in the model specification, Economic Insights tested the effect of the following operating environment factors on the MPFP scores in a second-stage regression analysis:

    customer numbers (to check whether additional scale effects are significant)

    customer, energy and demand network densities

    the share of underground cable length in total circuit kilometres

    the share of single stage transformation capacity in single stage plus the second stage of two stage transformation capacity at the zone substation level, and

    system average interruption duration index (SAIDI).

Economic Insights found, using these tests, that none of these variables are statistically significant in their effect on the MPFP scores. This indicates that the MPFP results have appropriately captured the effects of these variables.

The EI model’s specifications

167    Having regard to EI’s three output specification criteria outlined above and to output specifications used by PEG’s research in work for the Ontario Energy Board, EI selected the following outputs for the EI model:

(a)    customer numbers;

(b)    circuit length; and

(c)    ratcheted maximum demand (as explained by EI in the First EI Report at p 11, this variable is simply the highest value of peak demand observed in the period up to the year in question for each DNSP; it recognises the capacity that has actually been used to satisfy demand and gives the DNSP credit for this capacity in subsequent years, even though annual peak demand may be lower in subsequent years; see the illustrative sketch below).
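
By way of illustration only (the peak demand figures below are hypothetical), ratcheted maximum demand can be computed as a running maximum of annual peak demand:

```python
# Illustrative only (hypothetical peak demand figures): ratcheted maximum
# demand is the running maximum of annual peak demand, so capacity already
# used to meet demand continues to be credited in later years.
annual_peak_demand_mw = [4200, 4500, 4400, 4700, 4600, 4300]

ratcheted = []
highest_so_far = float("-inf")
for peak in annual_peak_demand_mw:
    highest_so_far = max(highest_so_far, peak)
    ratcheted.append(highest_so_far)

print(ratcheted)  # [4200, 4500, 4500, 4700, 4700, 4700]
```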

168    The EI model also specifies the proportion of the DNSP circuits that are underground as opposed to aboveground, not as an output variable, but as an operating environment variable. As EI explained it:

Undergrounding: by including an operating environment variable for the proportion of underground cables in total line and cable length in our cost functions, we explicitly allow for the impact of this factor.

169    The following explanation (without footnotes) of the EI model’s specifications appears at 7-26 of Attachment 7 to the Ausgrid Draft Decision:

Model specifications

The opex cost functions incorporate the significant output variables of customer numbers, circuit length, and ratcheted maximum demand. Unlike the MTFP model the opex cost function models do not include energy delivered and reliability. Economic Insights excluded energy delivered because it was highly correlated with ratcheted maximum demand. The estimated coefficients of either energy delivered or ratcheted maximum demand were generally insignificant in these models. Economic Insights found that the correlation coefficient between these two variables was larger than 0.99 and the behaviour of their coefficients was almost certainly a consequence of multicollinearity problems.

Hence Economic Insights excluded energy delivered. As energy delivered is highly correlated with ratcheted maximum demand the model will pick up the effect of energy delivered.

Reliability was not included because consistent reliability data is not available for the international distributors. We are comfortable with Economic Insights not including reliability in the econometric models. A primary driver of reliability performance is capital expenditure. Expenditure on maintenance may prevent outages. However, individual network outages lead to opex associated with rectifying the outages.

The opex cost function models also include the proportion of underground circuits as an operating environment factor. This is consistent with the MTFP analysis which has separate input indexes for overhead and underground lines. As expected the coefficient of this variable is negative. Underground cables will require less ongoing maintenance than overhead cables. Further, underground cables do not incur vegetation management costs.
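
The multicollinearity point made in the passage above (a correlation coefficient between energy delivered and ratcheted maximum demand larger than 0.99) can be illustrated with synthetic data. The following sketch is no part of EI’s analysis; it simply shows how a near-1 Pearson correlation arises when one series tracks another almost exactly.

```python
# Synthetic illustration (not EI's data) of the multicollinearity point: when
# energy delivered tracks ratcheted maximum demand almost exactly, their
# Pearson correlation approaches 1 and a regression cannot reliably separate
# their individual effects on opex.
import random

random.seed(0)
ratcheted_demand = [1000 + 50 * i for i in range(20)]                        # hypothetical MW figures
energy_delivered = [8.0 * d + random.gauss(0, 5) for d in ratcheted_demand]  # near-proportional GWh

def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print(round(pearson(ratcheted_demand, energy_delivered), 4))  # well above 0.99
```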

The Second EI Report

170    The Second EI Report was commissioned by the AER to assist it with its application of economic benchmarking and to advise:

(a)    whether the AER should make adjustments to base year opex for New South Wales, Australian Capital Territory and Queensland DNSPs based on the results from economic benchmarking models; and

(b)    the productivity change to be applied to forecast opex for the NSW, ACT and Queensland DNSPs.

171    Referring to the First EI Report, the Second EI Report notes at p iv:

After choosing a conservative efficiency target based on the weighted average performance of the five top performing DNSPs and making additional allowance for factors not included in the econometric models, downwards adjustments were recommended for the base year opex of each of the NSW and ACT DNSPs.

The NSW and ACT DNSPs’ revised regulatory proposals included a number of supporting consultants’ reports critiquing the analysis in … [the First EI Report] …. These included reports by Pacific Economics Group Research, Frontier Economics, Cambridge Economic Policy Associates (CEPA), Advisian and Huegin. A number of consultants’ reports were also submitted by the Queensland DNSPs including ones by Frontier Economics, Huegin and Synergies.

We have reviewed both the critiques presented by the consultants and the alternative models presented in detail and have found no reason to change the approach adopted in … [the First EI Report] … benchmarking analysis. We do, however, consider there is a case for revising the opex efficiency target. And updated and more detailed information on the impact of operating environment factors not explicitly included in the opex cost function model is now available.

172    Expanding on the observation that there is a case for revising the opex efficiency target, the Second EI Report stated at p x:

… we are of the view there may be a case for setting an even more conservative target than that used in …[the First EI Report ] … . This is particularly the case given that this is the first time economic benchmarking is being used as the primary basis for an Australian regulatory decision.

… … …

Incorporating the more conservative efficiency target and updated information on the impact of operating environment factors not included in the econometric models produces the base year opex reductions listed in table A for the NSW, ACT and Queensland DNSPs. Since Endeavour Energy is already exceeding its (conservatively set) target, no adjustment to its base year opex is required.

173    Table A, as it appeared in the Second EI Report, is reproduced below:

Table A    NSW, ACT and Queensland DNSP opex efficiency scores, adjusted efficiency targets and base year opex adjustments to reach the target

DNSP                 Efficiency score    Target allowing for additional OEFs    Reduction to base year opex
Ausgrid                   44.7%                       68.7%                               24.0%
Endeavour                 59.3%                       68.0%                                0.0%
Essential Energy          54.9%                       69.4%                               26.4%
ActewAGL                  39.9%                       62.4%                               32.8%
Energex                   61.8%                       65.6%                               15.5%
Ergon Energy              48.2%                       61.7%                               10.7%

174    In a nutshell, the Second EI Report resulted in no change to the AER’s application of the EI model and benchmarking methodology other than the AER’s (contentious):

(a)    lowering of the comparison point for determining the AER’s alternative estimate of base opex in the EI model from CitiPower to AusNet; and

(b)    further adjustments to operating environment factors (OEFs) as outlined below.

The lowering of the comparison point and the further OEF adjustments did result in the AER increasing its opex allowances for the DNSPs. While the increases were not enough to satisfy the DNSPs, they were, in PIAC’s submission, overly generous.

The AER’s lowering of the EI model’s comparison point

175    In its Draft Decisions, the AER used as the benchmark comparison point (ie each DNSP’s efficiency target) the weighted average of the top quartile of opex efficiency scores generated by the EI model for all DNSPs. This was the average of the efficiency scores of the five most efficient DNSPs, each of which was over 0.75.

176    In its Final Decisions, based on advice from EI, the AER used as a benchmark comparison point the opex efficiency score of the DNSP whose score was at the bottom of the upper third of the scores of all DNSPs (ie the DNSP with the lowest opex efficiency score above 0.75). This, as may be seen by reference to Table 7.4 that is reproduced below, is AusNet. It reduced the benchmark comparison point from 0.86 to 0.77.
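
The effect of the change can be sketched as follows. In the sketch below, only the 0.75 threshold and AusNet’s score of 0.768 are drawn from the Final Decisions; the other scores are hypothetical, and the Draft Decisions’ weighted average is simplified to an unweighted average.

```python
# A stylised sketch of the change in benchmark comparison point (paragraphs
# 175-176). Only the 0.75 threshold and AusNet's 0.768 score are taken from the
# decisions; the other scores are hypothetical and the draft decision's
# weighted average is simplified here to an unweighted average.
scores = {"DNSP A": 0.92, "DNSP B": 0.88, "DNSP C": 0.84, "DNSP D": 0.80,
          "AusNet": 0.768, "DNSP F": 0.70, "DNSP G": 0.55}

top_band = {name: s for name, s in scores.items() if s > 0.75}

draft_target = sum(top_band.values()) / len(top_band)  # draft: average of the five scores above 0.75
final_target = min(top_band.values())                  # final: the lowest score above 0.75 (AusNet)

print(round(draft_target, 3), round(final_target, 3))  # 0.842 0.768
```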

177    The AER’s lowering of the comparison point is a matter of some controversy from PIAC’s perspective and is also criticised by the DNSPs as pointing to weaknesses in the AER’s approach to benchmarking. The controversy and criticism are canvassed below.

The AER’s operating environment factors (OEFs) adjustments

178    At page 7-180 of Attachment 7 to the Ausgrid Final Decision, the AER notes (without footnotes):

It is important to recognise that service providers do not operate under exactly the same operating environment factors (OEFs). OEFs may have a significant impact on measured efficiency through their impact on a service provider's opex. It is desirable to adjust for material OEF differences to ensure that when comparisons are made across service providers, we are comparing like with like to the greatest extent possible. By identifying the effect of OEFs on costs one can determine the extent to which cost differences are exogenous or due to inefficiency.

Some key OEFs are directly accounted for in Economic Insights’ SFA model. Where this has not been possible, we have considered the quantum of the impact of the OEF on the NSW service providers’ opex relative to the comparison firms. We have then adjusted the SFA efficiency scores based on our findings on the effects of OEFs.

Like paragraphs also appear in Attachment 7 to the other Final Decisions in issue in the DNSPs’ applications.

179    In its post-modelling adjustment process, the AER assessed the effects of 65 OEFs, being potential differences between DNSPs that were not directly accounted for in the model specification. This included factors nominated by the AER during the decision process and factors put forward by DNSPs following the issue of the Draft Decisions.

180    The assessment was undertaken in two stages. First, the AER assessed each OEF against three OEF criteria, namely, exogeneity, materiality, and duplication as explained by the AER in the following paragraphs.

181    The first, exogeneity, is that an OEF should be outside the control of a DNSP because adjusting for an OEF that a DNSP can control itself may mask inefficient investment or expenditure.

182    Although, as explained further below, a collective adjustment was made for individually immaterial factors, the second criterion, materiality, is that an OEF should create a material difference in a particular DNSP’s opex. An OEF was considered to be material where it would affect a DNSP’s opex by 0.5 percent or more.

183    To avoid double-counting the effects of an OEF, the third criterion, duplication, is that the OEF should not have been accounted for elsewhere.

184    Where an OEF satisfied all three criteria in relation to a particular DNSP, the AER made an adjustment to the DNSP’s target opex to allow for its effects.

185    The second stage involved the AER providing an additional single adjustment for each DNSP to account for factors that satisfied the exogeneity and duplication criteria but did not independently have a material effect on opex.

186    In order to determine whether an OEF was likely to have a material effect on a DNSP’s opex, the AER assessed all available information, including information provided by the DNSPs.

187    The AER identified four OEFs that fulfilled all three of the material OEF criteria for the Networks NSW DNSPs: subtransmission configurations, licence conditions, occupational health and safety regulations and termite exposure.

188    For ActewAGL, the OEFs that satisfied the materiality criteria were backyard reticulation, capitalisation practices, occupational health and safety regulations and standard control services connections.

189    The AER made adjustments to each DNSP’s target efficiency scores to take account of the above mentioned material OEFs.

190    The AER then estimated the collective effect of the OEFs that had been found to be exogenous and non-duplicative, but not individually material (immaterial OEFs).

191    Where the AER considered that an immaterial OEF was likely to disadvantage a DNSP, or where it was uncertain whether the OEF would advantage or disadvantage a DNSP (directionally ambiguous OEFs), the AER allowed 0.5 percent in the DNSP’s favour. Where an immaterial OEF was likely to advantage the DNSP, the AER subtracted 0.5 percent. There was one exception to this procedure: where the AER was able to quantify the effect of an immaterial OEF, it made an adjustment only for that amount.

192    The AER then made a further adjustment to each DNSP’s target efficiency scores to account for the collective effects of the immaterial OEFs.
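
The two-stage procedure described in paragraphs 180 to 192 can be paraphrased in code. The sketch below is illustrative only: the field names and example figures are hypothetical, and the AER’s actual adjustments are those set out in Tables A.6 and A.8 reproduced below.

```python
# A paraphrase in code of the two-stage OEF adjustment procedure described in
# paragraphs 180-192. Field names and example figures are hypothetical; the
# AER's actual adjustments are those in Tables A.6 and A.8 reproduced below.
MATERIALITY_THRESHOLD = 0.005   # an OEF is material if it affects opex by 0.5 per cent or more
DEFAULT_ALLOWANCE = 0.005       # standard allowance for immaterial or directionally ambiguous OEFs

def oef_adjustments(factors):
    """factors: dicts with keys 'exogenous', 'duplicated', 'quantifiable' and
    'estimated_effect' (signed share of opex; positive = disadvantages the DNSP,
    used only for its sign where the effect could not be quantified)."""
    material_total = 0.0
    immaterial_total = 0.0
    for f in factors:
        if not f["exogenous"] or f["duplicated"]:
            continue                       # fails the stage-one screening; no adjustment
        effect = f["estimated_effect"]
        if abs(effect) >= MATERIALITY_THRESHOLD:
            material_total += effect       # individually material OEF: adjust by the estimate
        elif f["quantifiable"]:
            immaterial_total += effect     # immaterial but quantified: adjust only by that amount
        else:
            # otherwise 0.5 per cent in the DNSP's favour where the factor disadvantages it
            # or is directionally ambiguous, and minus 0.5 per cent where it advantages it
            immaterial_total += DEFAULT_ALLOWANCE if effect >= 0 else -DEFAULT_ALLOWANCE
    return material_total, immaterial_total

example = [
    {"exogenous": True,  "duplicated": False, "quantifiable": True,  "estimated_effect": 0.052},
    {"exogenous": True,  "duplicated": False, "quantifiable": False, "estimated_effect": 0.002},
    {"exogenous": True,  "duplicated": False, "quantifiable": True,  "estimated_effect": -0.001},
    {"exogenous": False, "duplicated": False, "quantifiable": True,  "estimated_effect": 0.030},
]
material, immaterial = oef_adjustments(example)
print(round(material, 3), round(immaterial, 3))  # 0.052 0.004
```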

193    Table A 6 Summary of final decision on OEF adjustments in Attachment 7 to the Ausgrid Final Decision details the OEF adjustments made to each of the Networks NSW DNSPs’ target opex and is reproduced below:

Table A.6    Summary of final decision on OEF adjustments

Subtransmission: Ausgrid 5.2%; Endeavour 4.9%; Essential 3.1%

Reasons against OEF criteria:

    The boundary between distribution and transmission is not determined by service providers.

    Data from Ausgrid's regulatory accounts suggest that subtransmission assets are up to twice as costly to operate as distribution assets.

    Economic Insights' SFA model does not include a variable that accounts for subtransmission assets.

Licence conditions: Ausgrid 1.2%; Endeavour 0.7%; Essential 1.2%

Reasons against OEF criteria:

    The network planning requirements in the NSW service providers licence conditions are not determined by service providers.

    Category analysis and economic benchmarking RIN data suggest that the increased transformer capacity to meet the 2005 and 2007 change in licence conditions may lead to a material increase in maintenance expenditure.

    Economic Insights' SFA model does not include a variable that accounts for changes in licence conditions.

OH&S regulations: Ausgrid 0.5%; Endeavour 0.5%; Essential 0.5%

Reasons against OEF criteria:

    OH&S regulations are not set by service providers.

    Data from the ABS and a PwC report commissioned by the Victorian Government suggest that differences in OH&S regulations may materially affect service provider's opex.

    Economic Insights' SFA model does not include a variable that accounts for differences in OH&S legislation.

Termite Exposure: Ausgrid 0.0%; Endeavour 0.2%; Essential 0.6%

Reasons against OEF criteria:

    The prevalence of termites in a geographic area is beyond service providers’ control.

    Data on Powercor’s termite management costs and data from the CSIRO on the range of termites suggest that Essential Energy may have a material cost disadvantage due to termite exposure.

    Economic Insights’ SFA model does not include a variable that accounts for differences in termite exposure.

Immaterial factors: Ausgrid 4.7%; Endeavour 6.7%; Essential 5.4%

There are various exogenous, individually immaterial factors not accounted for in Economic Insights' SFA model that may affect service providers' costs relative to the comparison firms. While individually these costs may not lead to material differences in opex, collectively they may.

Total: Ausgrid 11.7%; Endeavour 12.9%; Essential 10.7%

The AER said in its footnote to that Table that its OEF criteria (exogeneity, materiality and duplication) are explained in detail in its section on its approach to OEFs.

194    Table A 8 Summary of individually immaterial OEF adjustment in Attachment 7 to the Ausgrid Final Decision provides a summary of the quantification of the effect of immaterial factors on each of the three Networks NSW DNSPs. It is also reproduced below:

Table A.8    Summary of individually immaterial OEF adjustment

Factor                                                Ausgrid    Endeavour    Essential
Asset lives                                             0.5%        0.5%        -0.5%
Building regulations                                    0.5%        0.5%         0.5%
Bushfires                                              -0.5%       -0.5%        -0.5%
Capitalisation Practices                               -0.5%        0.5%        -0.5%
Corrosive environments                                  0.5%        0.5%         0.5%
Cultural heritage obligations                           0.5%        0.5%         0.5%
Environmental Regulations                               0.5%        0.5%         0.5%
Environmental variability                              -0.5%       -0.5%         0.5%
Extreme weather events                                  0.5%        0.5%         0.5%
Grounding conditions                                    0.5%        0.5%         0.5%
Network access                                         -0.1%        0.5%         0.4%
Planning regulations                                    0.5%        0.5%         0.5%
Proportion of 11kV and 12kV lines                       0.5%        0.5%         0.5%
Rainfall and humidity                                   0.5%        0.5%         0.5%
Specialised skills                                      0.5%        0.5%         0.5%
Solar uptake                                           -0.5%       -0.5%        -0.5%
Topography                                              0.5%        0.5%         0.5%
Traffic management                                      0.5%        0.5%         0.5%
Transformer capacity owned by customers                -0.2%        0.1%         0.0%
Division of vegetation management responsibility        0.5%        0.5%         0.5%
Total                                                   4.7%        6.7%         5.4%

Source:    AER analysis

Note:    The totals do not reconcile entirely due to rounding.

195    Table A.6 Summary of final decision on OEF adjustments in Attachment 7 to the ActewAGL Final Decision details the OEF adjustments made to its target opex and is reproduced below.

Table A.6    Summary of final decision on OEF adjustments

Capitalisation Practices: 8.5%

Reasons against OEF criteria:

    Although capitalisation practices are the result of management decisions, differences in capitalisation practices can lead to material differences that are unrelated to efficiency.

    ActewAGL's capitalisation practices, with regard to vehicle and IT costs, provide it with a material cost disadvantage relative to the comparison firms.

    Economic Insights' SFA model does not include variables that account differences in capitalisation practices between the NEM service providers.

Backyard reticulation: 5.6%

Reasons against OEF criteria:

    Backyard reticulation has been required by ACT planning approaches.

    ActewAGL has provided evidence that backyard reticulation materially increases its costs.

    Economic Insights' SFA model does not include variables that account for backyard reticulation between the NEM service providers.

Standard control services connections: 4.0%

Reasons against OEF criteria:

    The AER determines service providers' service classifications.

    Standard control services connections opex accounts for a material amount of ActewAGL's standard control services opex.

    Economic Insights' SFA uses network services data. Connection services are not included in network services.

OH&S regulations: 0.5%

Reasons against OEF criteria:

    OH&S regulations are not set by service providers.

    Data from the ABS and a PwC report commissioned by the Victorian Government suggest that differences in OH&S regulations may materially affect service provider's opex.

    Economic Insights' SFA model does not include a variable that accounts for differences in OH&S legislation.

Individually immaterial factors: 4.4%

There are various exogenous, individually immaterial factors not accounted for in Economic Insights' SFA model that may affect service providers' costs relative to the comparison firms. While individually these costs may not lead to material differences in opex, collectively they may.

Total: 23.0%

Source:    AER analysis

The AER points out that the OEF criteria (exogeneity, materiality and duplication) are explained in detail in its section on its approach to OEFs.

196    Table A.8 Summary of individually immaterial OEF adjustments in Attachment 7 to the ActewAGL Final Decision details the quantified effect of immaterial factors and is reproduced below.

Table A.8    Summary of individually immaterial OEF adjustment

Factor                                      Adjustment
Asset lives                                    -0.5%
Bushfires                                       0.5%
Building regulations                            0.5%
Corrosive environments                          0.5%
Cultural heritage obligations                   0.5%
Environmental regulations                       0.5%
Environmental variability                      -0.5%
Extreme weather events                         -0.5%
Grounding conditions                            0.5%
Humidity and rainfall                           0.5%
Network access                                 -0.1%
Planning regulations                            0.5%
Proportion of 11kV and 12kV lines               0.5%
Solar uptake                                   -0.5%
Specialised skills                              0.5%
Termites                                        0.0%
Traffic management                              0.5%
Transformer capacity owned by customer          0.1%
Topography                                      0.5%
Underground services                            0.4%
Total                                           4.4%

Source:    AER analysis

197    The AER’s OEF adjustments are also a matter of controversy. PIAC challenges them as being too generous. The DNSPs challenge them as subjective and arbitrary. The challenges are canvassed below.

The AER’s application of the benchmarking opex factor (rule 6.5.6(e)(4))

198    Figure 7.2: Our assessment approach in Attachment 7 to the Ausgrid Final Decision outlines the AER’s five step approach to forming its alternative estimate of opex as follows:

Step 1 – Start with service provider’s opex

We typically use the service provider’s actual opex in a single year as the starting point for our assessment. We call this the base year. While categories of opex can vary year to year, total opex is relatively recurrent. We typically choose a recent year for our assessment.

Step 2 – Assess base year opex

We assess whether opex the service provider incurred in the base year reasonably reflects the opex criteria. We have a number of techniques including economic benchmarking by which we can test the efficiency of opex in the base year.

Step 3 – Add a rate of change to base opex

As the opex of an efficient service provider tends to change over time due to price changes, output and productivity we trend our estimate of base opex forward over the regulatory control period to take account of these changes. We refer to this as the rate of change.

Step 4 – Add or subtract any step changes

We then adjust base year expenditure to account for any forecast cost changes over the regulatory control period that would meet the opex criteria that are not otherwise captured in base opex or rate of change. This may be due to new regulatory obligations in the forecast period and efficient capex/opex trade-offs. We call these step changes.

Step 5 – Other opex

Finally we add any additional opex components which have not been forecast using this approach. For instance, we forecast debt raising costs based on the costs incurred by a benchmark efficient service provider.

Having established our estimate of total forecast opex we can compare our alternative opex forecast with the service provider’s total forecast opex. If we are not satisfied there is an adequate explanation for the difference between our opex forecast and the service provider's opex forecast, we will use our opex forecast.

The AER refers to this approach as the “revealed cost method” in its EFA Guideline (and has sometimes referred to it as the “base-step-trend method” in its past regulatory decisions).
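
A simplified sketch of the base-step-trend arithmetic described in Figure 7.2 is set out below. The functional form and figures are illustrative assumptions only; in particular, treating the rate of change as a single annual growth rate is a simplification of the AER’s opex models.

```python
# A simplified, illustrative sketch of the "base-step-trend" (revealed cost)
# approach summarised in Figure 7.2; the functional form and figures are
# assumptions, not the AER's actual opex model.
def forecast_opex(base_opex, rates_of_change, step_changes, other_opex):
    """base_opex: efficient base year opex ($m).
    rates_of_change: assumed per-year rate of change (price growth plus output
    growth less productivity growth), one entry per forecast year.
    step_changes, other_opex: $m per forecast year."""
    forecasts = []
    trended = base_opex
    for year, rate in enumerate(rates_of_change):
        trended *= (1 + rate)                                              # Step 3: trend base opex
        forecasts.append(trended + step_changes[year] + other_opex[year])  # Steps 4 and 5
    return forecasts

# Hypothetical five-year regulatory control period.
print([round(x, 1) for x in forecast_opex(
    base_opex=374.2,                         # AER's substitute base opex for Ausgrid ($m, $2013-14)
    rates_of_change=[0.02] * 5,              # assumed 2 per cent per year
    step_changes=[0.0, 5.0, 0.0, 0.0, 0.0],  # assumed one-off step change
    other_opex=[3.0] * 5)))                  # assumed debt raising costs etc
```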

199    Table 7.3: Assessment of Ausgrid’s base opex in Attachment 7 to the Ausgrid Final Decision outlines the main techniques used by the AER to test the efficiency of Ausgrid’s base opex.

Table 7.3 Assessment of Ausgrid’s base opex

Technique

Description of technique

Findings

Economic benchmarking

Economic benchmarking measures the efficiency of a service provider in the use of its inputs to produce outputs.

The economic benchmarking techniques we used to test Ausgrid's efficiency included Multilateral Total Factor Productivity, Multilateral Partial Factor Productivity and opex cost function modelling. We compared Ausgrid's efficiency to other service providers in the NEM.

Despite differences in the techniques we used, all benchmarking techniques show Ausgrid does not perform as efficiently as most other service providers in the NEM.

We consider that differences in Ausgrid's operating environment not captured in the benchmarking models do not adequately explain the different benchmarking results between Ausgrid and other service providers.

Review of labour and workforce practices

Labour costs represent a large proportion of all NSW service providers' opex. We engaged Deloitte Access Economics (Deloitte) to review the NSW service providers' labour and workforce practices.

Deloitte found that because of labour and workforce management issues, Ausgrid's base year would not likely represent efficient costs.

Deloitte concludes that:

    the NSW service providers have high labour costs because they have too many employees. They all engaged permanent staff in preference to contractors over the 2009–14 period for transitory capex work. Now, due to EBA restrictions on redundancies, they have stranded labour

    because the NSW service providers employ a high proportion of their employees through EBAs (more than 75 percent) restrictive EBA clauses have a significant impact on workforce flexibility

    the optimum level of outsourcing is likely to be higher than the level the NSW service providers outsourced at over the 2009–14 period; this is a key distinguishing factor from the Victorian service providers

    while the NSW service provider have been implementing efficiency improvements, many efficiencies have not been realised until after the 2012–13 base year.

Source: AER analysis

The reference to Deloitte is to Deloitte Access Economics (Deloitte). A footnote to Table 7.3 cited NSW distribution network service providers labour analysis: addendum to 2014 report, April 2015, pp. ii–vii; and Deloitte NSW Distribution Network Service Providers Labour Analysis, November 2014, p. iv.

200    A like Table 7.3 appeared in each of the Attachments 7 to the AER’s Endeavour, Essential and ActewAGL Final Decisions. Subject only to the name of the relevant DNSP, the “Economic benchmarking” paragraphs in each of those tables are the same. Subject to an additional bullet point in the case of Endeavour and an additional paragraph in the case of Essential, so too are the “Review of labour and workforce practices” paragraphs in each of the Endeavour and Essential tables.

201    The additional bullet point in the case of Endeavour is as follows:

    Deloitte considered that Endeavour Energy’s base year opex was likely more efficient than Ausgrid’s and Essential Energy’s because it had commenced implementing efficiency improvements earlier. However, all NSW service providers (including Endeavour Energy) had efficiencies they were yet to realise because the reforms they had implemented to date did not consider potential opportunities to improve efficiency outside of the three NSW businesses. That is, they compared efficiency among themselves, but not to businesses in other jurisdictions.

202    The additional paragraph in the case of Essential is as follows (without footnote):

Further, in response to submissions in its revised proposal about the adverse impact of the dispersed nature of its network on labour costs, Deloitte found that Essential Energy could potentially achieve significant cost savings by implementing a local service agent (LSA) model. Powercor achieved significant efficiencies from implementing an LSA model following privatisation.

203    Table 7.3 in Attachment 7 to the Essential Final Decision also had the additional following entry relating to “Vegetation management”:

Technique

Description of technique

Findings

Vegetation management

Essential Energy's vegetation management costs have increased significantly over the 2009–14 period. Category analysis showed Essential Energy has very high costs compared to most of its peers and Essential Energy’s regulatory proposal included a step down in vegetation management for the forecast period acknowledging that its 2009–14 practices required reform. Therefore, we decided to review Essential Energy’ vegetation management practices in detail.

Our overall findings for vegetation management remain the same as those from our draft decision. That is, Essential Energy's own documentation, including a report it commissioned from Select Solutions, provide evidence that its vegetation management practices in the base year (2012–13) were inefficient.41

Select Solutions' review found that Essential Energy must move to a "significantly more efficient" vegetation management model to reduce the impact of its expenditure on customer prices.42 Select Solutions found several causes of inefficiency, including:

    attributing too much vegetation management effort to reactive spot clearing rather than proactive cyclic maintenance

    primarily engaging contractors for cutting on a demonstrably less efficient hourly rate basis

    less than optimal outsourcing.

We discuss our vegetation management findings in more detail in Appendix A.5.

Source: AER analysis

The footnoted references are to Essential, Regulatory Proposal, 2014, p. 73 and its paper Essential Energy, Vegetation Management Strategy and Implementation Plan for Additional Expenditure – FY 2013 to 14, February 2013; and to Select Solutions, Review of Essential Energy Vegetation Management Strategy–Final Report, 22 March 2013.

204    While the “Economic benchmarking” paragraphs in Table 7.3 in the ActewAGL Final Decision are the same as those in the Networks NSW DNSPs’ equivalent tables, the “Review of labour and workforce practices” and “Review of vegetation management” paragraphs in the ActewAGL Table 7.3 are different, as reproduced below:

Technique

Description of technique

Findings

Review of labour and workforce practices

Labour costs represent a large proportion of ActewAGL’s opex (approximately 80 percent). Category analysis showed ActewAGL had high labour costs relative to most of its peers and ActewAGL’s regulatory proposal suggested labour costs were a reason ActewAGL overspent its opex allowance in 2012–13.

Therefore, we decided, with the assistance of EMCa, to conduct a detailed review of ActewAGL’s labour and workforce practices.

EMCa considered that there is evidence that ActewAGL’s work practices, processes and systems in 2012–13 were ineffective. EMCa considered that this led to inefficient use of labour in the office and field. This inefficiency is characterised by duplication of effort in work planning and scheduling, loss of field productivity through ineffective works management and through ineffective data and information management.

EMCa also considered that ActewAGL’s labour levels were not reasonably efficient in 2012–13, noting that ActewAGL has steadily increased its ASL based on assumed future growth scenarios and adopting an internal resourcing strategy.

EMCa considered that if ActewAGL had outsourced more of its work, it would likely have benefited from increased labour flexibility and reduced operating costs.

EMCa found a lack of compelling evidence to demonstrate that ActewAGL’s labour costs in 2012–13 were reflective of an efficient service provider. EMCa consider this was evident by the relatively high level of internal resources used and the extent to which work was outsourced on an hourly rate basis for the urgent clearance of vegetation.

Review of vegetation management

ActewAGL's vegetation management costs have increased significantly over the 2009–14 period. Category analysis showed ActewAGL has very high costs compared to most of its peers and ActewAGL's regulatory proposal suggested vegetation management was a reason ActewAGL overspent its opex allowance in 2012–13. Therefore, we decided, with the assistance of EMCa, to review ActewAGL's vegetation management practices in detail.

EMCa found that ActewAGL did not act prudently and efficiently to manage costs associated with increased vegetation growth that occurred prior to 2012–13 because its vegetation management practices and its strategic and tactical responses were inadequate.

EMCa also found evidence of inefficient vegetation management costs in 2012–13 due to the manual processes between the office and field and the extent of clearance work that was deemed to be urgent, and which was therefore undertaken with a resultant higher cost. It is EMCa’s view that a service provider acting to efficiently minimise costs would have incurred a lower level of urgent clearance work.

Source: AER analysis.

205    It appears from Attachment 7 to the final decisions that:

(a)    the AER used the EI model to adjust Ausgrid, Essential and ActewAGL’s base opex to determine a starting point for a forecast that it considered would reasonably reflect the criteria; and

(b)    while the AER considered that it was unable to use Ausgrid, Essential or ActewAGL’s historical opex to arrive at an alternative forecast, because doing so would not result in a forecast that reasonably reflects the opex criteria, it was not satisfied that a forecast based on Endeavour’s actual opex in 2012–13 could be regarded as materially inefficient.

206    While in relation to Endeavour, the AER concluded it was not satisfied that a forecast based on Endeavour’s actual opex in 2012–13 could be regarded as materially inefficient, it formed the view that Endeavour’s forecast opex involved two additional items which it analysed (but rejected) as “step changes” (ie additional expenditure not incurred by Endeavour during the base year), namely:

(a)    $240.7m ($2013-14) in respect of increased vegetation management; and

(b)    $17.3m ($2013-14) in redundancy costs.

207    As is illustrated by the following passage (without footnotes) from Attachment 7 to the Endeavour Final Decision at pp 7-268 to 7-269, in rejecting what it described as Endeavour’s “step changes”, the AER relied on the EI model:

For Endeavour Energy's base year opex, because we are not satisfied that it contains material inefficiency it does not require an adjustment. We, therefore, consider it appropriate to use Endeavour Energy's base opex when developing our alternative forecast. This is a departure from our draft decision. Our benchmarking analysis is nevertheless relevant in our assessment of other components of Endeavour Energy's alternative total opex forecast, such as a consideration of its proposed step changes.

We disagree with the service providers’ submissions that advocate we should abandon our benchmarking techniques and the extent to which we rely upon our benchmarking results. Therefore, we continue to place significant weight on the results of Economic Insights’ preferred [EI] model (Cobb Douglas SFA) in estimating necessary reductions in base opex.

208    More particularly, the AER rejected the increased vegetation management opex which Endeavour proposed in order to meet increased contract prices from outsourced providers, such opex to be incurred in targeting improvements to conform with the minimum risk standards it is obliged to meet regarding the clearance distance between its mains and vegetation across its network area.

209    In rejecting the increase the AER observed (at p 7-288 of Attachment 7 to the Endeavour Final Decision) that:

(a)    Endeavour had stated that it did not face any change to the minimum risk standards with which it must comply;

(b)    without persuasive evidence that a DNSP’s total historical opex was too low to achieve the opex objectives, it did not consider increased contract costs to be a reason to increase the total opex forecast to meet what are unchanged regulatory obligations;

(c)    Endeavour does not benchmark well when compared to other DNSPs in the national electricity market; and

(d)    it would be inconsistent with the application of Endeavour’s EBSS to include the vegetation management opex in the opex forecast (Endeavour proposed that it retain gains from its EBSS rather than share them with its customers).

210    The AER rejected the proposed redundancy expenditure because it considered that it was needed only because Endeavour was not currently operating as efficiently as it could. In this respect it relied on the April 2015 Deloitte report referred to above (Deloitte Access Economics, NSW distribution network service providers labour analysis: addendum to 2014 report, April 2015) concerning labour and workforce management issues affecting Networks NSW. That Deloitte report in turn relied on the EI Model.

211    Table 7.4: Arriving at our alternative estimate of base opex in the Ausgrid Final Decision outlines the steps that the AER took to arrive at Ausgrid’s base opex.

Table 7.4    Arriving at our alternative estimate of base opex

Step 1 – Start with Ausgrid's average opex over the 2006 to 2013 period

Description:    Ausgrid's network services opex was, on average, $509.3 million ($2013) over the 2006 to 2013 period.

Output (calculation):    $509.3 million ($2013)

Step 2 – Calculate the raw efficiency scores using our preferred economic benchmarking model

Description:    Our preferred economic benchmarking model is Economic Insights’ Cobb Douglas SFA model. We use it to determine all service providers' raw efficiency scores. Based on Ausgrid's customer numbers, line length, and ratcheted maximum demand over the 2006 to 2013 period, Ausgrid's raw efficiency score is 44.7 percent.

Output (calculation):    44.7 percent

Step 3 – Choose the comparison point

Description:    For the purposes of determining our alternative estimate of base opex, we did not base our estimate on the efficient opex estimated by the model. The comparison point we used was the lowest performing service provider in the top quartile of possible scores, AusNet Services. According to this model AusNet Services' opex is 76.8 percent efficient based on its performance over the 2006 to 2013 period. Therefore to determine our substitute base we have assumed a prudent and efficient Ausgrid would be operating at an equivalent level of efficiency to AusNet Services.

Output (calculation):    76.8 percent

Step 3 – Adjust Ausgrid's raw efficiency score for operating environment factors

Description:    The economic benchmarking model does not capture all operating environment factors likely to affect opex incurred by a prudent and efficient Ausgrid. We have estimated the effect of these factors and made a further reduction to our estimate where required. We have determined an 11.7 percent reduction to Ausgrid's comparison point based on our assessment of these factors. A material operating environment factor we considered was not accounted for in the model is the different subtransmission configurations in NSW.

Output (calculation):    68.7 percent = 0.768 / (1 + 0.117)

Step 4 – Calculate the percentage reduction in opex

Description:    We then calculate the opex reduction by comparing Ausgrid's efficiency score with the adjusted comparison point score.

Output (calculation):    35.0 percent = 1 – (0.447 / 0.687)

Step 5 – Calculate the midpoint efficient opex

Description:    We estimate efficient opex at the midpoint of the 2006 to 2013 period by applying the percentage reduction in opex to Ausgrid's average opex over the period. This represents our estimate of efficient opex at the midpoint of the 2006 to 2013 period.

Output (calculation):    $330.9 million ($2013) = (1 – 0.350) x 509.3 million

Step 6 – Trend midpoint efficient opex forward to 2012–13

Description:    Our forecasting approach is to use a 2012–13 base year. We have trended the midpoint efficient opex forward to a 2012–13 base year based on Economic Insights’ opex partial factor productivity growth model. It estimates the growth in efficient opex based on growth in customer numbers, line length, ratcheted maximum demand and share of undergrounding. It estimated the growth in efficient opex based on Ausgrid’s growth in these inputs in this period to be 8.48 percent.

Output (calculation):    $359.0 million ($2013) = 330.9 x (1 + 0.0848)

Step 7 – Adjust our estimate of 2012–13 base year opex for CPI

Description:    The output in step 6 is in real 2013 dollars. We need to convert it to real 2013–14 dollars for the purposes of forming our substitute estimate of base opex. This reflects one and a half years of inflation. This is our estimate of base opex.

Output (calculation):    $374.2 million ($2013-14) = 359.0 x (1 + 0.042)

Source: AER analysis

212    As may be seen from Table 7.4 reproduced above, the AER started with Ausgrid’s average opex over the period 2006 to 2013, namely, $509.3m.

213    It then, in Step 2, used the EI model to calculate a raw efficiency score of 44.7 percent.

214    The third step involved the choice of a comparison point of 76.8 percent, based on AusNet, which is at the bottom of the top quartile of possible scores using the EI model.

215    The fourth step (described as Step 3 in Table 7.4) involved an adjustment to the raw efficiency score for OEFs – the adjustment being made to the comparison point on the assumption that the comparator (AusNet) would have to contend with the same environmental factors as Ausgrid. This adjustment, made by the AER once the model had been run, lowers the comparison point to 68.7 percent.

216    The mathematical consequence of this “after-the-event” adjustment is that Ausgrid’s efficiency score of 44.7 percent is, in what is described as Step 4 in Table 7.4, calculated to be, in effect, 35 percent below the comparison point.

217    Then, in what is described as Step 5 in Table 7.4, the AER calculated a mid-point efficient opex (ie the average efficient opex over the 2006-13 period based on the EI model). In Step 6 it adjusted that mid-point forward to 2012-13, following that adjustment, in Step 7, with an adjustment of its estimate of Ausgrid’s 2012-13 base year opex for the consumer price index (CPI) to arrive at a figure of $374.2m ($2013-14).
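
The arithmetic in Table 7.4 can be reproduced from the figures quoted in the table itself. The short Python sketch below is a check only; minor differences from the published figures (for example, approximately $331 million rather than $330.9 million at Step 5) arise because the published inputs are rounded.

```python
# Reproducing the Table 7.4 arithmetic for Ausgrid from the AER's published
# (rounded) figures; an illustrative check only, not the AER's opex model.
average_opex_2006_13 = 509.3    # $m ($2013), Step 1
raw_efficiency_score = 0.447    # Cobb Douglas SFA score, Step 2
comparison_point = 0.768        # AusNet's efficiency score, Step 3
oef_allowance = 0.117           # combined OEF adjustment, second Step 3
pfp_growth_to_2012_13 = 0.0848  # opex partial factor productivity trend, Step 6
inflation_to_2013_14 = 0.042    # one and a half years of CPI, Step 7

adjusted_target = comparison_point / (1 + oef_allowance)    # ~0.688 (AER: 68.7 per cent)
reduction = 1 - raw_efficiency_score / adjusted_target      # ~0.350
midpoint_opex = (1 - reduction) * average_opex_2006_13      # ~331.1 (AER: 330.9)
base_2012_13 = midpoint_opex * (1 + pfp_growth_to_2012_13)  # ~359.2 (AER: 359.0)
base_2013_14 = base_2012_13 * (1 + inflation_to_2013_14)    # ~374.3 (AER: 374.2)

print(round(adjusted_target, 3), round(reduction, 3),
      round(midpoint_opex, 1), round(base_2012_13, 1), round(base_2013_14, 1))
```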

218    A table similar to the Table 7.4: Arriving at our alternative estimate of base opex which appeared in Attachment 7 to the Ausgrid Final Decision as reproduced above also appeared in each of the Attachments 7 to the AER’s Essential and ActewAGL Final Decisions. Those tables also included two steps designated as Step 3. The table in the Essential Final Decision also included an additional Step 8 as reproduced below:

Step 8 – Convert to final year estimate

Description:    The guideline specifies that we will convert our estimate of base year opex into a final year estimate. We used the formula in the guideline to determine our unadjusted estimate of opex for 2013–14. We used the 2012–13 efficient opex value in step 7 to determine what the efficiency adjustment would be for 2012–13 (–26.3%), taking into account changes to Essential Energy's service classification. To arrive at our adjusted final year estimate, we applied the efficiency adjustment to our unadjusted estimate of 2013–14 opex.

Output (calculation):    $311.9 million ($2013-14); see AER opex model for Essential Energy

Source: AER analysis

219    It may be seen from Table 7.4 in Attachment 7 to the Essential Final Decision that:

(a)    the AER started with Essential’s average opex over the period 2006 to 2013, namely, $352.5m ($2013);

(b)    it then, in Step 2, used the EI model to calculate a raw efficiency score of 54.9 percent;

(c)    the third step involved the choice of a comparison point of 76.8 percent, based on AusNet which is at the bottom of the top quartile of possible scores using the EI model;

(d)    the fourth step (described as Step 3 in the table) involved an adjustment to the raw efficiency score for OEFs – the adjustment being made to the comparison point on the assumption that the comparator (AusNet) would have to contend with the same environmental factors as Essential. This adjustment, made by the AER once the model had been run, lowers the comparison point to 69.4 percent;

(e)    the mathematical consequence of this “after-the-event” adjustment is that the efficiency score of Essential is calculated, in what is described as Step 4 in the table, at 20.9 percent below the comparison point;

(f)    then, in what is described as Step 5 in the table, the AER calculated a mid-point efficient opex (ie the average efficient opex over the 2006-13 period based on the EI model). It then, in Step 6, adjusted that mid-point forward to 2012-13, following that adjustment, in Step 7, with an adjustment of its estimate of Essential’s 2012-13 base year opex for CPI to arrive at a figure of $308.2m ($2013-14); and

(g)    finally, in what is described as Step 8 in the table, the AER converts its estimate of Essential’s base year opex into a final year estimate of $311.9m ($2013-14).

220    As observed above, because the AER was not satisfied that a forecast based on Endeavour’s actual opex could be regarded as materially inefficient, the AER did not undertake the steps outlined in Table 7.4 in each of the Ausgrid and Essential Final Decisions to arrive at a base year opex for Endeavour.

221    The AER did, however, undertake those steps to arrive at ActewAGL’s base year opex. It may be seen by reference to Table 7.4 in Attachment 7 to the ActewAGL Final Decision that:

(a)    the AER started with ActewAGL’s average opex over the period 2006 to 2013, namely, $59.9m ($2013);

(b)    it then, in Step 2, used the EI model to calculate a raw efficiency score of 39.9 percent;

(c)    the third step involved the choice of a comparison point of 76.8 percent, based on AusNet which is at the bottom of the top quartile of possible scores using the EI model;

(d)    the fourth step (described as Step 3 in the table) involved an adjustment to the raw efficiency score for OEFs – the adjustment being made to the comparison point on the assumption that the comparator (AusNet) would have to contend with the same environmental factors as ActewAGL. This adjustment, made by the AER once the model had been run, lowers the comparison point to 62.4 percent;

(e)    the mathematical consequence of this “after-the-event” adjustment is that the efficiency score of ActewAGL is calculated, in what is described as Step 4 in the table, at 36.2 percent below the comparison point; and

(f)    then, in what is described as Step 5 in the table, the AER calculated a mid-point efficient opex (ie the average efficient opex over the 2006-13 period based on the EI model). It then, in Step 6, adjusted that mid-point forward to 2012-13, following that adjustment, in Step 7, with an adjustment of its estimate of ActewAGL’s 2012-13 base year opex for CPI to arrive at a figure of $45.1m ($2013-14).

222    Table A.1: Final determination estimates of efficient base year opex ($million 2013–14) in Attachment 7 to the Ausgrid Final Decision sets out the AER’s final determination estimates of base year opex for each Networks NSW DNSP as follows.

Table A.1    Final determination estimates of efficient base year opex ($million 2013-14)

                                      Ausgrid    Endeavour    Essential
Revealed base opex (adjusted) (a)       492.2        225.7        418.0
AER base opex                           374.2        233.3        308.2
Difference                              118.0     -7.6 (b)        109.8
Percentage base opex reduction          24.0%          N/A        26.3%

Note:    (a)    This number is the revealed 2012–13 opex, so it differs from the starting number in Table 7.4, which is average opex over 2006–13. We have adjusted the service providers’ revealed opex for debt raising costs, new CAM [cost allocation method] (if applicable) and new service classifications.

    (b)    Our estimate of base opex for Endeavour is slightly higher than Endeavour's because the reduced benchmark comparison point means its revealed costs are lower than the efficiency target.

Source:    AER analysis.

223    In a paragraph following Table A.1 in the Ausgrid Final Decision, the AER observed at pp 7-52 to 7-53 that:

Our reduction to Endeavour Energy's revealed opex was lower than that for Essential Energy and Ausgrid. Our analysis showed that Endeavour Energy had implemented efficiency programs earlier and to a greater extent than its two peers. However, we considered that as at 2012–13 (the base year), Endeavour Energy had further efficiency improvements to realise.

224    A like table in the ActewAGL Final Decision is reproduced below:

Table A.1    Final determination estimates of efficient base year opex ($million 2013-14)

                                      ActewAGL
Revealed base opex (adjusted) (a)         67.2
AER base opex                             45.1
Difference                                22.1
Percentage base opex reduction           32.8%

Note:    (a)    This number is the revealed 2012–13 opex, so it differs from the starting number in Table 7.4, which is average opex over 2006–13. We have adjusted ActewAGL’s revealed opex for debt raising costs, new CAM (if applicable) and new service classifications.

Source:    AER analysis.

The AER’s application of the other rule 6.5.6(d) opex factors

225    Table 7.7: Our consideration of opex factors, which appears in Attachment 7 to the Ausgrid Final Decision and is reproduced below, provides a convenient summary of the AER’s application of the benchmarking opex factor in r 6.5.6(e)(4) and the opex factors in rr 6.5.6(e)(5), (5A), (6), (7), (8), (9A) and (10):

Table 7.7    Our consideration of opex factors

Opex factor

Consideration

The most recent annual benchmarking report that has been published under rule 6.27 and the benchmark operating expenditure that would be incurred by an efficient Distribution Network Service Provider over the relevant regulatory control period.

There are two elements to this factor. First, we must have regard to the most recent annual benchmarking report. Second, we must have regard to the benchmark operating expenditure that would be incurred by an efficient distribution network service provider over the period. The annual benchmarking report is intended to provide an annual snapshot of the relative efficiency of each service provider.

The second element, that is, the benchmark operating expenditure that would be incurred by an efficient provider during the forecast period, necessarily provides a different focus. This is because this second element requires us to construct the benchmark opex that would be incurred by a hypothetically efficient provider for that particular network over the relevant period.

We have used several assessment techniques that enable us to estimate the benchmark opex that an efficient service provider would require over the forecast period. These techniques include economic benchmarking, opex cost function modelling, category analysis and a detailed review of Ausgrid's labour and workforce practices. We have used our judgment based on the results from all of these techniques to holistically form a view on the efficiency of Ausgrid's proposed total forecast opex compared to the benchmark efficient opex that would be incurred over the relevant regulatory control period.

The actual and expected operating expenditure of the distribution network service provider during any preceding regulatory control periods.

Our forecasting approach uses the service provider's actual opex as the starting point. We have compared several years of Ausgrid's actual past opex with that of other service providers to form a view about whether or not its revealed expenditure is sufficiently efficient to rely on it as the basis for forecasting required opex in the forthcoming period.

The extent to which the operating expenditure forecast includes expenditure to address the concerns of electricity consumers as identified by the distribution network service provider in the course of its engagement with electricity consumers.

We understand the intention of this particular factor is to require us to have regard to the extent to which service providers have engaged with consumers in preparing their regulatory proposals, such that they factor in the needs of consumers. We have considered the concerns of electricity consumers as identified by Ausgrid in assessing its proposal – particularly those expressed in the consumer-focussed overview provided as an attachment to its regulatory proposal. For example, a clear theme present in this document is that customers consider electricity prices are too high.

The relative prices of capital and operating inputs

We have considered capex/opex trade-offs in considering step changes for Ausgrid's head office building and for demand management expenditure. We considered the relative expense of capex and opex solutions in considering these step changes.

We have had regard to multilateral total factor productivity benchmarking when deciding whether or not forecast opex reflects the opex criteria. Our multilateral total factor productivity analysis considers the overall efficiency of networks in the use of both capital and operating inputs with respect to the prices of capital and operating inputs.

The substitution possibilities between operating and capital expenditure.

As noted above we considered capex/opex trade-offs in considering step changes for Ausgrid's head office building and for demand management expenditure. We considered the substitution possibilities in considering these step changes.

Some of our assessment techniques examine opex in isolation – either at the total level or by category. Other techniques consider service providers' overall efficiency, including their capital efficiency. We have relied on several metrics when assessing efficiency to ensure we appropriately capture capex and opex substitutability.

In developing our benchmarking models we have had regard to the relationship between capital, opex and outputs.

We also had regard to multilateral total factor productivity benchmarking when deciding whether or not forecast opex reflects the opex criteria. Our multilateral total factor productivity analysis considers the overall efficiency of networks in the use of both capital and operating inputs.

Further, we considered the different capitalisation policies of the service providers and how this may affect opex performance under benchmarking.

Whether the operating expenditure forecast is consistent with any incentive scheme or schemes that apply to the distribution network service provider under clauses 6.5.8 or 6.6.2 to 6.6.4.

The incentive scheme that applied to Ausgrid's opex in the 2009–14 regulatory control period, the EBSS, was intended to work in conjunction with a revealed cost forecasting approach.

In this instance, we have forecast efficient opex based on a benchmark efficient service provider. We have considered this in deciding how the EBSS should apply to Ausgrid in the 2009–14 regulatory control period and the 2014–19 period.

The extent the operating expenditure forecast is referable to arrangements with a person other than the distribution network service provider that, in our opinion, do not reflect arm's length terms.

Some of our techniques assess the total expenditure efficiency of service providers and some assess the total opex efficiency.

Given this, we are not necessarily concerned whether arrangements do or do not reflect arm's length terms. A service provider which uses related party providers could be efficient or it could be inefficient. Likewise, for a service provider who does not use related party providers. If a service provider is inefficient, we adjust their total forecast opex proposal, regardless of their arrangements with related providers.

Whether the operating expenditure forecast includes an amount relating to a project that should more appropriately be included as a contingent project under clause 6.6A.1(b).

This factor is only relevant in the context of assessing proposed step changes (which may be explicit projects or programs). We did not identify any contingent projects in reaching our final decision.

The extent the distribution network service provider has considered, and made provision for, efficient and prudent non-network alternatives.

We have not found this factor to be significant in reaching our final decision.

Source: AER analysis

226    Table 7.8: Other factors we have had regard to in the Ausgrid Final Decision (reproduced below) summarises other factors that the AER considered relevant and, pursuant to r 6.5.6(e)(12), notified each DNSP of prior to the DNSP submitting its revised regulatory proposal.

Table 7.8     Other factors we have had regard to

Opex factor

Consideration

Our benchmarking data sets, including, but not necessarily limited to:

1. data contained in any economic benchmarking RIN, category analysis RIN, reset RIN or annual reporting RIN

2. any relevant data from international sources

3. data sets that support econometric modelling and other assessment techniques consistent with the approach set out in the Guideline

as updated from time to time.

This information may potentially fall within opex factor (4). However, for absolute clarity, we are using data we gather from NEM service providers, and data from service providers in other countries to provide insight into the benchmark operating expenditure that would be incurred by an efficient and prudent distribution network service provider over the relevant regulatory period.

Economic benchmarking techniques for assessing benchmark efficient expenditure including stochastic frontier analysis and regressions utilising functional forms such as Cobb Douglas and Translog.

This information may potentially fall within opex factor (4). For clarity, and consistent with our approach to assessment set out in the Guideline, we have regard to a range of assessment techniques to provide insight into the benchmark operating expenditure that an efficient and prudent service provider would incur over the relevant regulatory control period.

Source: AER Analysis

The Parties’ Submissions on the Principal Issue

227    Networks NSW’s and ActewAGL’s main submissions on the principal issue, namely whether the AER’s application of the EI model discharged its obligations under rr 6.5.6 and 6.12.1(4), are addressed below under the following headings:

(a)    inadequacies in the EI model’s data set and comparability issues;

(b)    the AER’s lowering of the EI model’s comparison point;

(c)    the AER’s OEF adjustments;

(d)    the efficiency of the DNSPs’ vegetation management costs; and

(e)    the AER’s use of the EI model as the sole determinative of opex.

228    PIAC also addressed the principal issue, contending in particular that the AER had erred in its application of the EI model by its lowering of the EI model’s comparison point and in its OEF adjustments. The DNSP interveners broadly speaking supported the position taken by Networks NSW and ActewAGL, but it is noted that Ergon made a more refined criticism of the AER’s use of historic costs.

229    Other opex issues identified in the AER’s written opex submissions, addressed to the extent necessary having regard to the Tribunal’s conclusion on the principal issue, are:

(a)    did the AER fail to corroborate the EI model’s results;

(b)    did the AER have proper regard to endogenous circumstances;

(c)    did the AER have proper regard to the consequences;

(d)    alleged errors with respect to partial performance indicators (PPIs);

(e)    alleged errors with respect to labour costs; and

(f)    alleged errors relating to average or current efficiency.

Inadequacies in the EI model’s data set and comparability issues

230    The DNSPs’ submissions under this heading address what they see as a twofold weakness in the EI model’s data set: first, the reliance on the Australian RIN benchmarking data; and secondly, the augmentation of the Australian RIN data with overseas data.

The RIN data

231    Networks NSW contends that the first weakness stems from the AER’s failure to collect data pursuant to its RINs in a manner sufficiently rigorous to be useful for benchmarking. It submits that the RINs were unclear and the requests for eight years of data resulted in:

(a)    different DNSPs recording data differently; and

(b)    some DNSPs estimating and backcasting some data.

232    Responding to the submission of a lack of clarity in the RINs resulting in DNSPs recording data differently, the AER:

(a)    points to the extensive consultation between it and the DNSPs to ensure that the DNSPs understood what was required of them; and

(b)    contends that the process of consulting with stakeholders and making necessary clarifications was part of the AER’s planned procedure for ensuring that the RINs were clear and the instructions and definitions were sufficient.

233    Support for the AER’s contention is to be found in Table A.2 Full process of the development of benchmarking data set which appears at pp 7-101 to 7-102 of Attachment 7 to the Ausgrid Final Decision. Table A.2 summarises the development of the AER’s benchmarking methodology, including the RINs, but stops short of the circulation of the EI model. The origins of the EI model do, however, appear as the final item in a three page chronology submitted by Mr O’Bryan on behalf of the AER.

234    The chronology shows that the AER’s development of its benchmarking methodology commenced in November 2011 with a joint AER / Australian Competition and Consumer Commission examination of benchmarking opex and capex in energy networks. The chronology then outlines some 41 steps (including seven workshops with DNSPs, consultants and consumer groups between December 2012 and May 2013). Those steps included:

(a)    papers developed by EI and issued by the AER;

(b)    workshops and public forums hosted by the AER;

(c)    input from the applicant DNSPs;

(d)    papers reviewing different benchmarking methods and an examination of the benchmarking practices of overseas regulators;

(e)    consultations between the AER and stakeholders on the development of the RIN pursuant to s 28D of the NEL for the purposes of obtaining information from each DNSP relevant to the AER’s obligation under r 6.27 to prepare and publish an annual benchmarking report (the first of which was required to be published on 30 September 2013 (r 6.27(d));

(f)    the AER publishing its economic benchmarking RINs (Economic benchmarking RIN-For distribution network service providers-Instructions and Definitions -NSP Name (ACN xxx xxx xxx), November 2013) together with an explanatory statement and instructions and definition documents (Better regulation - Explanatory statement - Regulatory information notices to collect information for economic benchmarking, November 2013);

(g)    the DNSPs submitting to the AER audited and certified responses to the economic benchmarking RINs on 30 April 2014;

(h)    the AER publishing its first annual benchmarking report AER, Electricity distribution network service providers–annual benchmarking report, November 2014, (six weeks after the date specified in r 6.27(d)); and

(i)    the circulation on 27 November 2014 of the First EI Report.

235    The chronology also shows that the DNSPs were given four opportunities to comment on the RIN templates before submitting their unaudited responses and then their audited RIN responses.

236    Networks NSW provides the following examples of the disparate reporting of opex in response to the RINs:

(a)    some Victorian DNSPs reporting nil vegetation management;

(b)    DNSPs adopting different allocations of their respective regulatory asset bases (RABs);

(c)    DNSPs applying different policies in classifying their opex and capex; and

(d)    DNSPs applying different policies in classifying expenditure as opex or provisions.

237    Networks NSW’s submissions attribute the disparity in vegetation management reporting to an unclear definition of “vegetation management activities” in the RINs. A report by its consultant PricewaterhouseCoopers (PwC), Ausgrid, Essential Energy and Endeavour Energy – Appropriateness of RIN data for benchmarking, 9 January 2015 (PWC January 2015 Report), states at p 38 that:

Most businesses found that the definitions of ‘vegetation management activities’ provided by the AER were unclear, deeming them unworkable.

238    The AER dismisses Networks NSW’s submission on the disparity in vegetation management reporting by noting that vegetation management expenditure data was not used separately in its econometric benchmarking – the relevant opex measure being the aggregated network services opex figure.

239    It also dismisses Networks NSW’s submission relating to the disparity in classification of RAB values as irrelevant because such values were not included in its econometric models. Insofar as RAB data was used to weight the volume of inputs and outputs in the MTFP model, the AER contends that, because MTFP is an index-based benchmarking method, the outcomes of that model are less sensitive to the weighting of inputs than to the quantum of the inputs, so that any comparability issues in the RAB data have only a minimal impact on the MTFP results.

240    The AER also summarily dismisses Networks NSW’s concerns about the DNSPs classifying their opex and capex differently by observing that this was adjusted, where necessary, through an OEF adjustment. This summary dismissal does not address the concern expressed by Networks NSW’s consultant PwC in its January 2015 Report that:

The capex / opex split between the businesses differs, ranging from 62% capex / 38% opex at SA Power Networks compared to 74% capex / 26% opex at CitiPower. This could be due to a range of factors including the relative age of the networks, capitalisation policies and cost allocation approaches. If there is more capitalisation, the operating expenditure reported by the business will be lower. Cost allocation methodologies and capitalisation policies affect the data provided by the DNSPs in the RIN, in particular the allocation of labour costs and overheads. This affected the AER’s calculation of the opex efficiency score and the level of reductions to opex for each of the three NSW DNSPs.

241    Nor does it address the conclusion in the PwC January 2015 Report at p 37 that the AER did not meet the AEMC’s necessary preconditions for benchmarking because (with emphasis as in the original):

    the benchmarking data is not long term reliable information as it was not provided on a like-for-like basis due to differences in capitalisation policies and approaches;

    the benchmarking data is not high quality due to the different cost allocation approaches undertaken by the DNSPs which impact the cost structures and expenditure incurred;

    the benchmarking data is not consistent time series data due to the differences in allocation of indirect costs over the last decade;

    the benchmarking data is not based on consistent definitions for the purpose of benchmarking.
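
The practical effect of differing capitalisation policies on an opex-only comparison, referred to both in the PwC passage quoted at paragraph 240 above and in the first of the bullet points above, may be illustrated by the following simplified sketch (expressed in Python for convenience; the 62/38 and 74/26 splits are taken from the PwC passage, while the total expenditure figure is hypothetical and is not drawn from any DNSP’s data):

    # Two hypothetical networks incurring identical total expenditure but applying
    # different capitalisation policies (cf the PwC 62/38 and 74/26 capex/opex splits).
    total_expenditure = 1000.0   # $m, assumed figure for illustration only

    opex_a = total_expenditure * 0.38   # business capitalising 62 per cent: reported opex 380
    opex_b = total_expenditure * 0.26   # business capitalising 74 per cent: reported opex 260

    # An opex-only benchmark would treat B as roughly 32 per cent "more efficient" than A,
    # even though the underlying total expenditure is identical.
    apparent_gap = (opex_a - opex_b) / opex_a
    print(f"Reported opex A: {opex_a:.0f}m, B: {opex_b:.0f}m, apparent gap: {apparent_gap:.0%}")

On these assumed figures, the whole of the apparent efficiency difference is an artefact of capitalisation policy rather than of any difference in underlying cost.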

242    The AER notes that Networks NSW’s final example of the disparate reporting of opex in response to the RINs (differing policies in classifying expenditure as opex or provisions) was incorrect in implying that opex and provisions are separate cost categories. It says provisions simply affect when a DNSP records a cost (ie costs which have been incurred but will be paid in the future) and each of the examples in Networks NSW’s submissions is an opex provision meaning that the cost of the provision is reflected in network services opex.

243    Expanding on its submission to the effect that weaknesses in the RINs resulted in some DNSPs estimating and backcasting some data, Networks NSW describes backcasting as involving the DNSPs creating estimates, based on a set of assumptions, for past data points where actual results were not classified or categorised in the way required by the RIN.

244    Expanding on its description of backcasting, Networks NSW notes that:

(a)    the RINs were issued in November 2013 and required the DNSPs to supply data on the values and quantities of outputs, inputs and OEFs for an eight year period (from 2005-06 to 2012-13) within a short space of time (three months for an initial draft, and two further months for the final); and

(b)    the data requested was not all kept in the ordinary course of the DNSPs’ business. Accordingly, the DNSPs were required to “backcast” the data required.

245    Networks NSW provided the following examples of backcasted data:

(a)    Ausgrid and Endeavour were required to backcast data for all current opex categories for FY2006 to FY2010, as a result of a material change in the Annual Reporting Requirements by the AER in 2011; and

(b)    Endeavour was required to backcast data for 2009 to 2012 for certain OEFs, including the total number of spans and the average urban and rural maintenance span cycles.

246    Networks NSW also submitted that in a significant number of instances the DNSPs did not have accurate records of the required data and had to provide an estimate in lieu thereof – in support of its submission it provided details of Ausgrid, Endeavour and AusNet estimating route line length, CitiPower estimating its overhead conductors and underground cables and referred to a number of DNSPs estimating the number of urban/rural maintenance spans.

247    Networks NSW’s consultant, Frontier Economics Pty Ltd (Frontier), recognised that some backcasting is, perhaps, inevitable when compiling a new dataset such as that for the purposes of the AER’s annual benchmarking report. It did, however, make the point that compiling several years of RIN data at once “carries major risks.” In its report Review of AER’s econometric models and their application in draft determinations for Networks NSW, January 2015 (the Frontier Report), it illustrated (at p 78) the risks as follows:

… if a network misinterprets how it ought to report certain data (which is very possible for the first time it reports RIN data), that mistake may be propagated through the full eight years of information reported. That, in turn, would distort comparisons with other networks not just for a single year but for all years that the data are reported. Such misreporting over the entire period would impact directly on the measures of ‘inefficiency’ derived by EI’s modelling.

248    Having identified how RIN data errors and inconsistencies may arise, the Frontier Report noted (at p 82) that:

Given the very real scope for data errors …, owing to the newness of the RIN data collection process and the lack of opportunity for learning and refinement, it is surprising to us that EI and the AER apparently have such confidence in the reliability of the modelling results.

249    Responding to Networks NSW’s submissions that the AER failed to collect the RIN data in a manner sufficiently rigorous to be useful for benchmarking, the AER submits that those submissions “create a misleading picture” for three reasons:

(a)    first, many of the illustrations of estimated data referred to concern data that was not used in the benchmarking models, eg the backcast data for rural spans as described by Networks NSW was not included in any of the AER models on which its decision was based;

(b)    secondly, contrary to Networks NSW’s submission, estimates were not used in “a significant number of cases”: most of the Australian data in the AER benchmarking models is actual data, not estimates; and

(c)    thirdly, the general assertion that estimated or backcast data used in the AER benchmarking models is unreliable is not supported by evidence.

250    The AER sought to support its second reason by reference to the results of an examination of each of the six categories of data used in its models, namely, network services opex, customer numbers, ratcheted maximum demand, circuit line length, proportion of undergrounding and system average interruption duration index (SAIDI) reliability data (the SAIDI data being used only in the MTFP and MPFP models).

251    As presented by the AER, the examination shows:

(a)    network services opex: 8 of the 13 DNSPs provided actual data;

(b)    customer numbers: since 2006 the Australian Energy Market Operator has required the DNSPs to maintain a unique National Metering Identifier (NMI) for each customer and information for reporting data in this category was available to each DNSP – estimates only having to be made where the NMI could not identify a “de-energised” customer (ie a property not currently receiving electricity) or in respect of a very small portion of customers who are unmetered;

(c)    ratcheted maximum demand: 10 of the 13 DNSPs provided actual data – the remainder estimated data in some years and actual data in the others;

(d)    circuit length: 7 of the 13 DNSPs provided actual data – only one, Essential, provided an estimate for all years and the other five provided an estimate for some years and actual data in others;

(e)    proportion of undergrounding: 7 of the 13 DNSPs provided actual data in every year – only one, Essential, provided an estimate for all years and the other five provided an estimate for some years and actual data in others; and

(f)    SAIDI reliability data: 11 of the 13 DNSPs provided actual SAIDI data in every year – only one DNSP estimated SAIDI data in all years and the other estimated data in some years and actual data in the others.

252    While, as presented by the AER, the results of its examination of the six categories of data do show that the majority of the 13 DNSPs may have responded to the RINs with actual data, the following alternative reading of the results shows:

(a)    5 of the 13 DNSPs estimated network services opex data;

(b)    3 of the 13 DNSPs estimated ratcheted maximum demand data in some years and provided actual data in the others;

(c)    6 of the 13 DNSPs estimated circuit length – one, Essential, provided an estimate for all years and the other five provided an estimate for some years and actual data in others;

(d)    6 of the 13 DNSPs estimated their proportion of undergrounding – one, Essential, provided an estimate for all years and the other five provided an estimate for some years and actual data in others; and

(e)    2 of the 13 DNSPs estimated SAIDI reliability data – one in all years and the other in some years with actual data being provided in the others.

253    This alternative reading of the results supports Networks NSW’s submission that a significant number of DNSPs estimated three categories of data used by the AER in its models. It also shows that estimates were used in another two categories (by three DNSPs in one category and by two in another).

254    Such an alternative reading should have put the AER on notice that it may, at this point in the evolution of the RIN data, have to treat the RIN data with greater caution than it did and not rely on it to the extent that it did, particularly as observed by its consultant, EI in the Second EI Report at pp x and 25:

… this is the first time economic benchmarking is being used as the primary basis for an Australian regulatory decision.

… it is important to recognise that the characteristics of the Australian RIN data make any econometric model estimated using only the RIN data insufficiently robust to support regulatory decisions.

255    Also, as ActewAGL submits, it is, in light of the second of the quotations from the Second EI Report, “somewhat surprising” that EI ran a model using only RIN data to corroborate its EI model.

256    Expanding on the third of its reasons for submitting that Networks NSW’s submissions on the robustness of the RIN data capture create a misleading picture (namely, that the assertion of unreliability is not supported by evidence), the AER submits that:

(a)    where DNSPs provided estimates, they were not generally required to create a completely new series of data;

(b)    in most cases, the DNSPs produced estimates by drawing together actual information from a number of different sources in their business records, eg although Essential estimated data for network services opex, circuit length and proportion of undergrounding, the estimates were based directly on information that Essential had been collecting for the whole benchmarking period; and

(c)    in each case where data was estimated by a DNSP, “… the DNSP’s CEO signed a statutory declaration attesting to the robustness of the data”.

257    As Networks NSW points out, the statement that “… the DNSP’s CEO signed a statutory declaration attesting to the robustness of the data” (which appears at p 7-128 of Attachment 7 of the Ausgrid Final Decision) is “wrong, and quite misleading.” According to Networks NSW, the form of statutory declaration which CEOs were required to sign did not attest to the “robustness” of the data. Rather, it submits, each CEO attested that the actual information provided was true and accurate and, where it was not possible to provide actual information, that a best estimate had been provided along with the basis of the estimate. In fact, Networks NSW submits, many businesses pointed out that the data was not robust and could not be relied upon for the purposes for which the AER apparently sought it.

258    Likewise, ActewAGL submits in reply “[A]t no point did ActewAGL’s CEO attest to the “robustness” of the data or as to its fitness for purpose in benchmarking. Rather, he attested to the fact that the data was ActewAGL's ‘best estimate’.”

259    The DNSPs’ submissions to the effect that the AER’s reliance on the RIN data points to a weakness in its benchmarking are persuasive. Support for those submissions is found in the above quoted passages from the DNSPs’ consultants’ reports. That support is reinforced by the passages from their consultants’ reports quoted below under the heading “The AER’s use of the EI model as the sole determinative of opex”.

260    Having regard to the above paragraphs under the heading “RIN data” and to paragraphs appearing below following the heading “The AER’s use of the EI model as the sole determinative of opex”, it is the view of the Tribunal that at this point in its evolution the RIN data is not data upon which the AER might rely to the extent that it did:

(a)    in its application of r 6.5.6(e)(4) to determine the benchmark opex that would be incurred by an efficient DNSP; or

(b)    to corroborate the AER’s use of the EI model.

Overseas data

261    The Tribunal turns now to outline what the DNSPs perceive as the second weakness in the EI model’s data, namely, the augmentation of the Australian RIN data with overseas data.

262    As may be observed from the following extracts from the First EI Report at p 29, ActewAGL rightly submits that it appears that EI chose to incorporate data from New Zealand and Ontario, not because the DNSPs in those countries were comparable, but because data was available that appeared to be in a similar form to the data in the RIN database.

Given that the New Zealand database has been constructed in a largely similar fashion to the AER’s economic benchmarking RIN database in terms of variable coverage, it is a prime candidate for use in supplementing the number of observations available from the RIN database.

… … …

The other jurisdiction that has a relatively long and consistent history of electricity DNSP productivity measurement is Ontario. Pacific Economic Group Research (PERG 2013) recently undertook … benchmarking work for the Ontario Energy Board (OEB) using a similar output specification to that used in section 3 above [ie ratcheted maximum demand, customer numbers and circuit length]. The OEB has put the database used in the public domain. While the Ontario database has similar coverage of outputs (other than reliability) to that used above and has good detail on opex, it is much more limited with regard to capital input and operating environment factor variables. Asset values are based on historic cost, for example, and, while there is data on the number of transformers, there is no data on transformer capacity. While Ontario’s climate is somewhat different to Australia’s, a significant attraction of the OEB database is the number of observations it offers with data for 73 DNSPs over 11 years from 2002 to 2012.

263    ActewAGL also highlights the significant disparities in size of the Australian DNSPs, on the one hand, and the overseas DNSPs, on the other, by reference to the following table which appeared in the Frontier Report at p 26:

                                   Australia     Ontario     New Zealand     Australian value     Australian value
                                                                             as multiple of       as multiple of
                                                                             Ontarian value       New Zealand value

Energy (GWh)                          11,038       3,073           1,441                    4                    8
Maximum Demand (MW)                    2,346         603             287                    4                    8
Ratcheted Maximum Demand (MW)          2,516         651             313                    4                    8
Customer Numbers                     731,308     124,270          96,577                    6                    8
Circuit Length (kms)                  56,561       5,045           6,771                   11                    8

264    It may be concluded from the Frontier table reproduced above that the Australian DNSPs are, on average:

(a)    four times larger than the companies in Ontario when compared using energy delivered and demand, six times larger when compared using customer numbers and eleven times larger when compared using circuit length; and

(b)    eight times larger than the DNSPs in New Zealand when compared against all these measures of scale.
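
The multiples referred to in the preceding paragraph can be reproduced arithmetically from the averages in the Frontier table reproduced above (a minimal check in Python, rounding to whole numbers as in the table):

    # Average values per DNSP taken from the Frontier table reproduced above.
    averages = {
        #                                Australia   Ontario   New Zealand
        "Energy (GWh)":                  (11_038,      3_073,      1_441),
        "Maximum Demand (MW)":           (2_346,         603,        287),
        "Ratcheted Maximum Demand (MW)": (2_516,         651,        313),
        "Customer Numbers":              (731_308,   124_270,     96_577),
        "Circuit Length (kms)":          (56_561,      5_045,      6_771),
    }

    for measure, (aus, ont, nz) in averages.items():
        # Australian value expressed as a multiple of the Ontario and New Zealand values.
        print(f"{measure}: {aus / ont:.0f} x Ontario, {aus / nz:.0f} x New Zealand")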

265    Also, as may be observed from the following extract from the First EI Report at p 11, ActewAGL rightly submits that it appears that EI’s choice of output variables in the EI model (ratcheted maximum demand, customer numbers and circuit length) has been largely determined, in a somewhat circular manner, by the data that was available across all three countries.

… we agree with PEGR [PEG] that the four output specification covering energy throughput, ratcheted maximum demand, customer numbers and circuit length represents a useful way forward as it captures the key elements of DNSP functional output in a linear fashion and introduces an important demand side element to the measurement of system capacity outputs. Because we have reliable data on all four output variables, all four are included in our analysis.

266    ActewAGL rightly points out that:

(a)    in essence both the construction of the EI models, and the data used within those models, were dictated by what was available to EI, not what was likely to give the most reliable indicator of efficient costs; and

(b)    this ought to cause the Tribunal significant concern about the reliability of the output of EI’s models.

267    Each of the following issues that the AER identifies as having been raised by the parties in relation to the EI model’s use of overseas data is addressed seriatim:

(a)    whether the New Zealand and Ontario regulators used the overseas data for benchmarking opex;

(b)    whether the parties had an adequate opportunity to verify the accuracy of the overseas data;

(c)    whether the overseas data contains errors;

(d)    whether there are errors in vegetation management costs which are not reported separately in New Zealand or Ontario;

(e)    whether the overseas entities adopted different definitions for the purpose of collecting data – particularly in relation to maximum demand;

(f)    whether there was a failure by either the AER or EI to conduct sensitivity testing in relation to potential errors in the overseas data, or inconsistencies between the overseas data and the Australian data, in order to quantify their potential impact; and

(g)    whether the AER and EI were correct to ignore other potential sources of overseas data which yielded different results – particularly US data which is preferable to the Ontario data.

268    Addressing the first of those overseas data issues identified by the AER (whether the New Zealand and Ontario regulators used the overseas data for benchmarking opex) both ActewAGL and Networks NSW point to s 53P of the Commerce Act 1986 (NZ). Part 4 of that Act establishes a regulatory regime with a number of similarities to the NEL and s 53P provides, in effect, that before the end of a regulatory period the NZ Commerce Commission must set out the starting prices, rates of change and quality standards that are to apply for the following regulatory period. However, s 53P(10) precludes the Commission from using:

… comparative benchmarking on efficiency in order to set starting prices, rates of change, quality standards, or incentives to improve quality of supply.

269    Thus, ActewAGL submits, it cannot be inferred that the Commission has made any attempt to ensure that the data has been recorded in a manner that makes it fit for the purpose of benchmarking.

270    As to the Ontario data, Networks NSW submit that:

(a)    the regulator (the Ontario Energy Board (OEB)) has not adopted a similar model to the EI model and, perhaps, in recognition of the difficulties in classification of expenditure between opex and capex and of the legitimate trade-offs which electricity distribution entities must make, benchmarks on total expenditure (totex, ie opex plus capex); and

(b)    there is therefore no particular reason to assume that the overseas data is suitable for benchmarking opex.

271    The AER rejects the implications that might be drawn from the ActewAGL and Networks NSW submissions addressing the issue whether the New Zealand and Ontario regulators use the overseas data for benchmarking opex.

272    In relation to the New Zealand data, relying on an earlier October 2014 EI report, the AER observes that the Commerce Commission:

(a)    uses a total factor productivity measurement (a form of economic benchmarking), in forming a view about long-term productivity growth for New Zealand DNSPs to set their X factors in their default price paths; and

(b)    measures opex productivity growth to form a view about the opex productivity growth rate to include in its DNSP opex forecasts, which are used to set starting prices in its DNSP default price paths, and that these productivity growth rates are industry average rates formed from the same database used in the EI model.

273    As to the claim that the OEB does not use the Ontario data for benchmarking opex, the AER makes the points that:

(a)    it does use the data as part of benchmarking total cost performance (ie opex, plus the return of and on capital);

(b)    the fact that the data is combined with other data to estimate total cost is irrelevant; and

(c)    the OEB is conducting benchmarking using the same opex data used in the AER econometric models.

274    While the Final Decisions do not fully address issues relating to New Zealand and Ontario regulators’ respective use of the data and application of benchmarking, the AER’s submissions are persuasive in answering the DNSPs’ submissions on those particular issues. That is not to take the step of saying that the benchmarking by the AER is itself free of error of the character under consideration by the Tribunal.

275    The Tribunal now turns to consider the second of the issues identified by the AER as being raised against the EI model’s use of overseas data, namely, whether the parties had an adequate opportunity to verify the accuracy of the data.

276    While ActewAGL does not raise what it describes as “the relative speed” by which the AER developed its benchmarking methodology as a ground of review, it rightly submits that the circumstances in which the AER developed and implemented its methodology are:

… an important contextual matter, and one which informs the extent to which the Tribunal can have confidence in the robustness of the AER’s approach.

277    The steps taken by the AER in the development of its benchmarking methodology are outlined above.

278    The thrust of r 6.5.6(e)(4) is that the AER must have regard to the most recent annual benchmarking report that has been published under r 6.27 and the benchmark opex that would be incurred by an efficient DNSP.

279    Rule 6.27 enlivens r 8.74 in relation to the steps that the AER must take before preparing and publishing an annual benchmarking report. Those steps include consultation with the DNSP and a 30 day period to make submissions before the report is published including an opportunity to comment on material of a factual nature to be included in the report.

280    It appears from the above mentioned chronology submitted on behalf of the AER and from ActewAGL’s leave application and its submissions that:

(a)    on 5 August 2014, the AER provided the DNSPs with a copy of its Draft Annual Benchmarking Report;

(b)    on 18 November 2014 (nine days before the Draft Decisions were published on 27 November 2014) the AER provided the DNSPs with:

(i)    a copy of its Annual Benchmarking Report; and

(ii)    a copy of the First EI Report;

(c)    the Annual Benchmarking Report was directed towards the use of a MTFP benchmarking model; and

(d)    the DNSPs’ submissions in the course of the consultation process leading to the publication of the Draft and Final Annual Benchmarking Report were directed to the use of a MTFP model, not the econometric EI model which was (with adjustments) adopted by the AER in its Draft Decisions and, with further adjustments, in its Final Decisions when it did not accept the DNSPs’ opex forecast and made its own estimates of their required opex.

281    It is ActewAGL’s view that the manner in which the AER prepared its Draft Decisions in reliance on the EI model deprived the AER of a robust exchange of views with the DNSPs and their experts. Such an exchange, it rightly submits, was:

… particularly important in a context where the AER proposed to rely on an ambitious modelling technique, for the first time, as the sole basis on which to reach an assessment of the quantum of the DNSPs’ opex allowances.

282    ActewAGL supported its submission with the following extracts from pp 104 and 105 of the Frontier Report:

A key flaw of the analysis undertaken by the AER is the application of very ambitious modelling techniques, such as … [the EI model]…, to very imperfect data. Indeed, it appears that the main reason the AER has felt the need to employ overseas data, without appropriate checks for robustness and consistency, is its desire to employ sophisticated techniques such as … [the EI model].

We recognise that the AER is obliged to undertake benchmarking under the National Electricity Rules (NER). However, the NER also provide the AER with considerable flexibility to choose the most appropriate benchmarking techniques and methodologies. The AER should not feel constrained to restrict itself to benchmarking using formal statistical techniques alone.

Given the limitations of the Australian RIN data, and the lack of time for learning and iterative improvement of the data, we recommend that the AER rely on much simpler benchmarking techniques. We reiterate that regulators in Europe, who have had considerably more experience, and time to compile consistent data, than has the AER, typically use much simpler, and more pragmatic benchmarking techniques.

The AER has applied a very narrow interpretation of benchmarking. Its Expenditure Forecast Assessment Guideline sets out a very long list of potential benchmarking techniques, all of which would be recognised in Europe and many of which are used by regulators overseas. Whilst it canvassed in its Guideline the potential use of many alternative techniques, its assessment of relative efficiency seems to drive off only one technique, … [the EI model]…, and that too in a very mechanistic fashion. Given the sensitivity of such techniques to the quality of the data, and the fact that the RIN data are very new and relatively untested, the AER should not have, in our view, placed so much reliance on statistical techniques such as … [the EI model]. Rather, in our view, the AER should have initially tried much simpler, less ambitious techniques and then aimed to build up to more complex techniques once it, and networks and customers, have greater confidence in the data and in the AER’s approach to benchmarking.

283    ActewAGL is correct in its submission that, having regard to the limited time between the date of publication of the Draft Decisions and the date by which the Final Decisions had to be made, the AER had little opportunity to develop an alternative methodology to estimate the required opex in accordance with r 6.12.1(4)(ii). A fortiori, it says, as the AER would need to discharge its obligations to accord procedural fairness under administrative law and s 16(1)(b) of the NEL in respect of any new methodology.

284    This, ActewAGL submits, is not to suggest that the AER did not conscientiously examine submissions received after the Draft Decisions. It is:

… simply to recognise that the AER’s decision making approach could hardly have been more unfortunate, in that it did not ensure that its methodology was exposed to an unhurried and carefully considered dialogue with all interested stakeholders, without the imminent deadline of the date by which the Final Determination had to be made.

285    ActewAGL’s submission is reinforced when regard is had to the numerous DNSPs’ consultants’ reports critical of the AER’s application of the EI model, many of which disparaged the model’s use of overseas data.

286    Networks NSW submitted six such reports. Each raised issues which, in circumstances where the AER is employing a sophisticated and complex benchmarking model for the first time, would have benefited from a wider critical exposure and response through the AER’s consultation procedures than just the responses by the AER and EI. The consultants (each a highly qualified and well-recognised expert), their reports and the issues they raised are:

(a)    The Frontier Report. As fairly summarised by Networks NSW, the Frontier report is critical of the inadequacy of the RIN data, comparability and other issues associated with the use of overseas data, the failure of the EI model to consider alternative explanations of heterogeneity and the AER’s deterministic application of the EI model’s results.

(b)    Huegin Consulting, Huegin’s response to Draft Determination on behalf of NNSW and ActewAGL - Technical response to the application of benchmarking by the AER, 16 January 2015 (the Huegin Report). Huegin highlights the inadequacies of the AER’s approach by comparing it to best regulatory practice and identifying what it perceives as errors associated with the cost drivers chosen by EI, inadequate consideration of environmental variables and the use of average results over an eight year period. Huegin also criticises EI’s sensitivity analysis and questions the corroborative value of the alternative methods presented by EI.

(c)    Cambridge Economic Policy Associates Ltd (CEPA), Networks NSW - AER draft determination, 16 January 2015 (the CEPA Report). The CEPA report observes that the author’s:

“… investigations, initially of ActewAGL, but applying more generally to all DNSPs, … cast doubt on the claim that the AER has correctly carried out the opex benchmarking, and at least has not given sufficient consideration to the limits of its opex benchmarking.”

(d)    The Second PEG Report. This Report provides a detailed introductory background to benchmarking, the salient considerations in the benchmarking of network services opex, reviews the practice of international regulators and critiques EI’s work on behalf of the AER. It particularly identifies issues with their use of the RIN and overseas data, and the use of the results by the AER.

(e)    Advisian Pty Ltd (Advisian), Review of AER Benchmarking Networks NSW, 16 January 2015. Advisian found that there were significant differences between the DNSPs used for benchmarking purposes, with limited meaningful consideration by the AER or EI to ensure that the benchmark data has appropriately been normalised. Advisian also identified issues with, amongst other things, the AER’s benchmarking approach relating to:

(i)    comparability of the DNSPs used for benchmarking purposes;

(ii)    the failure to appropriately consider the effect of spatial density (customers/km2) in addition to linear density (customers/km) on efficient opex;

(iii)    the need for each DNSP to operate and maintain, in a safe and reliable manner, the assets it actually has, rather than the assets it might have had;

(iv)    a failure to account for exogenous factors (such as the nature of the assets and the development history of the network) that have influenced the development of the existing asset base; and

(v)    the AER’s assessment of Essential’s vegetation management expenditure to support its conclusion that the NSW DNSPs are inefficient.

(f)    The PwC January 2015 Report reviewed the RIN data for the NSW DNSPs and five other DNSPs (CitiPower, Powercor, AusNet, United Energy and SA Power). The author identified seven issues with the RIN data in respect of which he considered a correction should be made and taken into account when assessing the efficiency of the DNSPs. In the author’s opinion the issues are a central part of the AER’s MTFP and PPI analysis and directly impact the AER’s benchmarking results.

287    ActewAGL also submitted reports by CEPA, Huegin and Advisian raising similar issues to those raised in their reports on behalf of Networks NSW. In addition to those reports, ActewAGL submitted the report by AECOM Australia Pty Ltd (The impact of the AER’s Draft Decision on ActewAGL’s Service and Safety performance, 15 January 2015) which, like its other reports and those submitted by Networks NSW, would have benefited from a wider critical exposure and response through the AER’s consultation procedures than just the responses by the AER and EI. The report concluded, amongst other things:

(a)    the benchmarks used by the AER have little or no relevance to a DNSP, and judgments made based on them are of limited value;

(b)    that a forced reduction in ActewAGL’s replacement / renewal expenditure (repex) and opex of the scale suggested by the AER would have a significant impact on the level of service it is able to provide including a potential impact on safety levels associated with its assets; and

(c)    ActewAGL has experienced engineers who use all the information available to them and sophisticated analysis tools to optimise total cost of ownership for critical assets and therefore determine the optimal timing for replacement / renewal (and therefore for repex) whereas the AER relied on generic econometric models and distantly related data from other sources to over-rule ActewAGL’s experienced projections, in many cases using ‘average’ asset lives that are almost double ActewAGL’s experience-based estimates.

288    The limited opportunity afforded the DNSPs and the lack of opportunity afforded to PIAC and other interested parties to comment on the AER’s application of a benchmarking methodology reliant on overseas data does not of itself give rise to a relevant ground of review. However, as the DNSPs submitted, it tends to tell strongly against the acceptance of that methodology and the resulting estimates of the DNSPs’ required opex that the AER derived from it.

289    In response to the third issue identified by the AER as being raised against the EI model’s use of overseas data (whether it contains errors) the AER notes that there is no evidence that suggests the data collected by the NZ Commerce Commission and the OEB is unreliable and that, contrary to the DNSPs’ submissions, the Commission and the OEB do rely on the data – data, which the AER asserts, is collected and verified in a manner similar to the AER’s collection and verification process.

290    In responding to Networks NSW’s concerns that sudden, unexpected and unexplained changes in the overseas data appear to be the product of human error, the AER relies on the Second EI Report’s conclusion that, with one exception, they are unlikely to be errors, but simply movements attributable to the small size of some of the DNSPs.

291    That response, however, gives credence to a submission by Networks NSW that the relationship between cost drivers and opex is not consistent across Australian, New Zealand and Ontario DNSPs and that it makes no sense to compare small New Zealand and Canadian DNSPs, which may experience volatile changes, to large DNSPs in Australia which do not.

292    In relation to the fourth issue on overseas data (vegetation management costs) it is Networks NSW’s contention that because such costs are not reported separately in New Zealand or Ontario for the period of the RIN data, it is impossible to verify whether there are errors in relation to this category of opex. The AER dismisses this contention as irrelevant to the AER models because the input measure used in its models is total opex and vegetation management costs are not a separate input measure. Thus, in the AER’s view, as long as total opex is consistent within each country, it is not necessary to review the individual components of that figure across countries.

293    Networks NSW relies on the Second PEG Report to develop its submissions on the fifth overseas data issue, whether the overseas entities adopted different definitions for the purpose of collecting data – particularly in relation to the definitions of:

(a)    opex – opex in Ontario includes the costs of customer care services such as metering and billing but excludes costs of maintaining substations with incoming voltage exceeding 50kV; and

(b)    ratcheted maximum demand – Australian and New Zealand DNSPs report “coincident ratcheted maximum demand” representing the peak level of demand across a DNSP’s entire network at any one time whereas Ontario DNSPs report ‘non-coincident ratcheted maximum demand’, representing the aggregate of the peak levels of demand for the individual constituent parts of a DNSP’s network (the constituent parts usually being aggregated at either the subtransmission substation level or the zone substation level).

294    It is Networks NSW’s submission that non-coincident ratcheted maximum demand is generally higher than coincident ratcheted demand, because different parts of a DNSP’s network may experience peak levels of demand at different times and the difference can be substantial – for the 13 Australian DNSPs during the RIN data period, in 5 cases the difference was over 10 percent and in one case over 25 percent.
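
The definitional difference may be illustrated by a simple sketch (in Python, with hypothetical demand figures which are not drawn from any DNSP’s data): coincident ratcheted maximum demand takes the peak of the whole-of-network total, whereas non-coincident ratcheted maximum demand sums each constituent part’s own peak, whenever it occurs.

    # Hypothetical peak-period demand (MW) for two zone substations on one network.
    demand = {
        "zone_1": {"13:00": 60, "18:00": 100},
        "zone_2": {"13:00": 80, "18:00": 50},
    }
    times = ["13:00", "18:00"]

    # Coincident maximum demand: the peak of the network-wide total at any one time.
    coincident = max(sum(zone[t] for zone in demand.values()) for t in times)      # 150 MW (18:00)

    # Non-coincident maximum demand: the sum of each zone's individual peak.
    non_coincident = sum(max(zone.values()) for zone in demand.values())           # 100 + 80 = 180 MW

    print(coincident, non_coincident)   # 180 MW exceeds 150 MW by 20 per cent

On these assumed figures the non-coincident measure exceeds the coincident measure by 20 per cent, which is of the same order as the differences Networks NSW identifies for the Australian DNSPs.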

295    The AER’s answer to Networks NSW’s overseas definition issues is that the differences do not have any material effect on the results of the EI model because the country dummy variables are explicitly designed to correct for systematic reporting differences. Networks NSW’s reply to the AER’s answer is a blunt, on target, “It will not.”

296    The Tribunal is not of the view that the country dummy variables, in the present circumstances, correct for systematic reporting differences. As Networks NSW rightly submits, the type of country dummy used by EI assumes that the relevant relationships between cost drivers and opex are the same across the three jurisdictions and cannot control for the situation where one of the relevant cost drivers has been defined differently in one jurisdiction, thereby altering the relationship between it and opex for that jurisdiction. Thus, for the same reason, the country dummy variables do not “correct” for the differences in the examples provided by the AER relying on the Second EI Report, as outlined above.

297    As observed by the following Networks NSW consultants:

(a)    the CEPA Report at p 17:

Including a dummy variable in the model specification does not necessarily control for these within and across country differences. A dummy variable only controls for level differences between datasets not cost relationship differences.

(b)    the Frontier Report at p ix:

The consequences of the significant differences in operating environment across the sample is that the business models applied by the businesses are likely to be very different – for example, an Ontarian business operating in a harsh wintry environment will have a completely different business model to achieve a given level of security of supply than a rural Australian network operating over an enormous service region. In turn, this will mean that the relationship between costs and cost drivers is quite different across the two jurisdictions, and is not amenable to being captured by a relatively small number of high level explanatory factors combined with country dummy variables (as per EI’s approach).

298    Also, relying on the following extract from the CEPA Report at p 16, ActewAGL submitted that a dummy variable is entirely inapposite to adjust for differing cost relationships between jurisdictions:

The introduction of the dummy variable takes a fixed amount of Country A’s opex per network length to bring its average in line with Country B's. However, the slope of the line (the relationship between opex and network length) is not impacted by the introduction of the country dummy variable. A proper econometric analysis is more complex than this and should take account of country-specific slopes, which will require more variables to take this into account. For example, if the relative prices of labour and capital differ, then one would expect a different relationship between cost and customer numbers (e.g. higher labour costs should lead to more capex and lower maintenance costs, but higher costs of dealing with customers).

299    As seen by ActewAGL, the issue is of “significant concern”. That is because most of the data used within EI’s models, including the EI model, comes from overseas. Australia only accounts for 19 percent of the data points used. Accordingly, even with the use of a “dummy variable”, the slopes (coefficients) estimated by the regression models will closely follow the overseas DNSPs, rather than the Australian DNSPs, because of the sheer volume of data that comes from overseas. That is, the model will reflect cost relationships between opex and drivers of opex that exist in the overseas DNSPs, rather than modelling relationships that exist in Australia.
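
The point made in the CEPA extract and by ActewAGL may be expressed in stylised terms. Assuming, for illustration only, a log-log specification with a single cost driver, a pooled model with a country dummy variable takes the form:

    \ln(\mathit{opex}_{it}) = \alpha + \gamma D_i + \beta \ln(\mathit{length}_{it}) + \varepsilon_{it}

where the dummy D_i equals one for an overseas DNSP and zero otherwise. The coefficient \gamma shifts only the intercept (the level of opex), while the slope \beta, which embodies the relationship between opex and the cost driver, is constrained to be common to all three jurisdictions; jurisdiction-specific cost relationships would require interaction terms of the form \beta_c (D_i \times \ln(\mathit{length}_{it})), which, on the submissions summarised above, the EI model does not include. Where, as noted in the preceding paragraph, roughly 81 per cent of the observations are from overseas, the common slope estimate will largely reflect those observations.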

300    The AER’s response to CEPA, Frontier and PEG’s concerns about the use of country dummy variables appears in Attachment 7 to the Ausgrid Final Decision at p 7-121. Noting that:

(a)    while CEPA agrees that the dummy variables control for level differences between databases, it considers they do not account for cost relationship differences; and

(b)    similarly, Frontier and PEG submit that each service provider's costs are influenced by factors not captured by the explanatory variables in the EI model,

the AER observes that:

In response to this, Economic Insights considers for such differences to have a material impact on the model results, significant differences in the technology to distribute electricity would need to exist. Economic Insights notes the international service providers deliver the same services using poles, wires and transformers so it does not agree that such a fundamental difference exists.

301    It is the view of the Tribunal that that observation:

(a)    glosses over the multitude of differences that no doubt exist between the poles, wires and transformers of the Australian DNSPs and those of their overseas counterparts – differences due to such matters as topography, climate and regulations relating to the standards of the poles, wires and transformers; and

(b)    fails to address the fact that the EI model’s use of country dummy variables impugns the robustness claimed for the model by the AER.

302    Indeed, the EI model’s use of country dummy variables reveals a significant weakness in the model.

303    In respect of the sixth issue, raised by Networks NSW, relating to the use of overseas data (an alleged failure by either the AER or EI to conduct sensitivity testing in relation to potential errors in the overseas data, or inconsistencies between the overseas data and the Australian data, in order to quantify their potential impact) the AER says that the allegation is not correct because EI undertook extensive sensitivity testing by creating four different economic models and PPI analysis, some of which incorporated the overseas data and some of which did not. Moreover, EI calculated each of the AER econometric models using different combinations of overseas data (the “full”, “large”, “medium” and “small” datasets), as well as different combinations of included outputs, different functional forms and different estimation methods. The AER submits that:

(a)    as a result EI found that the modelling results were relatively insensitive to the dataset used and to small changes in specification; and

(b)    EI’s finding indicates the results of the AER models are robust and insensitive to possible errors of the type alleged.

304    In reply, Networks NSW asserts that the AER’s submission should be rejected because the sensitivity testing referred to by the AER involved the following:

(a)    the econometric models estimated originally by EI which contained five variables (energy throughput, customer numbers, circuit length, ratcheted maximum demand and share of undergrounding);

(b)    EI subsequently dropping the energy throughput variable as it was correlated with ratcheted maximum demand, causing the variables to lack statistical significance;

(c)    EI running four econometric models (an LSE model with Cobb-Douglas and translog functional forms and an SFA model with Cobb-Douglas and translog functional forms); however, EI did not present the results of the translog SFA model;

(d)    concerns by EI that the inclusion of very small DNSPs in the Ontario and New Zealand data sets might exert undue influence on the results, causing EI to run each of the models on different data sets with different cut-offs of customer numbers (see the sketch set out after this list). These were the Full (all DNSPs), Large (DNSPs > 10,000 customers), Medium (DNSPs > 20,000 customers) and Small (DNSPs > 50,000 customers) data sets. The preferred data set was the medium data set, containing 68 DNSPs, of which 37 were from Ontario and 18 from New Zealand, and, as submitted by Networks NSW, this data set still includes a large number of DNSPs that are very small compared to the Australian businesses;

(e)    EI has not presented the results from any data set aside from the medium data set.
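The dataset cut-off re-estimation described at (d) may be sketched as follows (illustrative only: the data frame and column names are assumptions for the purpose of the example and are not EI’s actual code or data).

    # Sketch of the dataset cut-off re-estimation described at (d) above
    # (illustrative only; "df" and its column names are assumed for the example).
    import statsmodels.formula.api as smf

    CUTOFFS = {"full": 0, "large": 10_000, "medium": 20_000, "small": 50_000}
    SPEC = ("log_opex ~ log_customers + log_circuit_km + "
            "log_ratcheted_demand + underground_share + C(country)")

    def refit_by_cutoff(df):
        """Re-fit the same specification on each customer-number cut-off sample."""
        results = {}
        for name, min_customers in CUTOFFS.items():
            sample = df[df["customer_numbers"] > min_customers]
            results[name] = smf.ols(SPEC, data=sample).fit().params
        return results   # compare coefficient stability across the four samples

As Networks NSW points out in the paragraph that follows, each such re-estimation uses the same variables and, subject only to the cut-off, the same data.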

305    Thus Networks NSW claims that the “sensitivity testing” done by EI was not sensitivity testing at all because all econometric models presented and relied upon use the same data and the same variables. Networks NSW went on to submit that the following options for genuine sensitivity testing might have been adopted by EI:

(a)    using any of the data from the United States which PEG had prepared at the request of the AER (even as a cross-check);

(b)    seeing what would be the effect of “normalising” the data prior to modelling by removing DNSP-specific unusual expenditure (which was done by its consultant CEPA, producing substantially different results);

(c)    using data envelopment analysis (DEA) – a methodology that the AER had said in its EFA Guidelines it would use;

(d)    using fixed effects or random effects SFA models, which allow for heterogeneity at the DNSP level (as opposed to EI’s SFA model which essentially assumes no latent heterogeneity);

(e)    inclusion of any additional variables (as some of Networks NSW’s consultants have done).

306    There is merit in Networks NSW’s suggested options for sensitivity testing to identify potential errors in the overseas data, or inconsistencies between the overseas data and the Australian data, in order to quantify their potential impact. While it might be understandable that time constraint pressures on the AER precluded it from pursuing all of the options, the fact that they were not pursued does not increase confidence in the AER’s reliance on overseas data to arrive at its estimate of a DNSP’s required opex.

307    The seventh and final issue relating to the use of overseas data advanced by Networks NSW (whether the AER and EI were correct to ignore other potential sources of overseas data – particularly US data compiled by PEG for the AER which, Networks NSW contends, is preferable to the Ontario data) is rejected by the AER because:

(a)    the data was assembled from a disparate range of sources, most of which were for the purpose of financial reporting, rather than economic benchmarking, meaning that the data mainly focused on financial variables;

(b)    much of the US data was derived from vertically integrated businesses, which resulted in numerous cost-allocation issues; and

(c)    many quantity measures are either not reported at all or not reported consistently in the US, including fundamental variables such as line length, maximum demand and reliability.

The AER notes that:

(a)    having regard to those limitations, PEG was able to assemble data for only 15 United States DNSPs, comprising only 170 observations;

(b)    the sample was highly unbalanced, eg three DNSPs had 19 years of data and three DNSPs had only two years of data;

(c)    the data was not generally comparable with the New Zealand and Ontario data in terms of coverage and definition, nor did it support EI’s preferred specification using Australian data;

(d)    the database did not have consistent coverage of the variables used in the AER econometric models and the AER’s productivity models; and

(e)    for these reasons, EI concluded that the US database was not fit for the purpose of being included in the AER’s economic benchmarking.

308    While the AER addresses Networks NSW’s submission on the seventh and final issue relating to the use of overseas data, there is significant substance to other overseas data issues raised by the DNSPs – in particular the second of those issues, namely, the limited opportunity to enhance the AER’s reliance on overseas data by subjecting that data to greater consultation through the mandated consultation processes, an opportunity that would have existed had the reliance been flagged earlier in the chronology leading to the Draft Decisions. As the Tribunal has noted, the fact of a lack of opportunity to consult more fully is not itself a ground of review relied upon by the DNSPs.

The AER’s lowering of the EI model’s comparison point

309    As observed above, the AER lowered the efficiency target comparison point from 0.86 in its Draft Decisions to 0.77 in its Final Decisions.

310    Attachment 7 to the Ausgrid Final Decision details (at p 7-276) the AER’s reasons for lowering the comparison point as follows (without footnotes):

We have decided, on balance, for this decision, that the appropriate benchmark comparison point is the lowest of the efficiency scores in the top quartile of possible scores rather than the average approach we used in our draft decision. This is equivalent to the efficiency score for the business at the bottom of the upper third (top 33 percent) of companies in the benchmark sample (represented by AusNet Services). Our revised comparison point is appropriate for the following reasons.

First, our draft decision averaging approach produced an unusual result for service providers ranked in the top quartile of efficiency scores, but below the average of that top quartile. These service providers would require an efficiency adjustment to reach the average benchmark comparison point (because their scores are below the average) despite being efficient enough to be ranked in the top quartile and, hence, included in the average.

Second, given it is our first application of benchmarking, it is appropriate to adopt a cautious approach. We have decided to increase the margin for error for modelling and data issues provided for in the draft decision (which reduced the benchmark comparison point from 0.95 to 0.86).

Third, we consider this approach better achieves the NEO and RPPs. In particular we have considered:

    the principle that we should provide service providers with an opportunity to recover at least their efficient costs

    we wish to create a high-powered efficiency incentive (which supports making an adjustment when it is clear there is material inefficiency in revealed costs) but we are mindful of providing sufficient stability to promote efficient investment

    our decision should allow a return that is commensurate with both regulatory and commercial risks.

A number of service providers, representing more than a third of the NEM [national electricity market], and operating in varied environments, are able to perform at or above our benchmark comparison point. We are confident that a firm that performs below this level is, therefore, spending in a manner that does not reasonably reflect the opex criteria. An adjustment back to an appropriate threshold is sufficient to remove the material over-expenditure in the revealed costs while still incorporating an appropriately wide margin for potential modelling and data errors and other uncertainties. Economic Insights agrees that this approach is appropriate.

Our approach of using benchmarking as a basis for making adjustments to opex is also consistent with Ofgem's approach.

311    There is merit in the following submissions that PIAC advances against the AER’s above quoted reasons for lowering the comparison point:

(a)    The AER’s first reason is irrelevant to its decision to reduce the benchmark comparison point. The AER’s “unusual result” is simply the consequence of deriving the benchmark comparison point as the average of any grouping of networks, rather than the score achieved by a single network. It would arise if the average were taken of the top five networks as it would if the average were taken of the top eight or nine networks.

(b)    The AER’s second reason is a repetition of its original explanation of why it set the benchmark comparison point below the efficiency frontier, merely reciting that it has decided to increase that downward allowance in order to take a “cautious” approach. No explanation is provided as to why the AER concluded that “modelling and data issues” justify an initial adjustment more than twice the magnitude of the reduction in the Draft Decisions.

(c)    The three points comprising the AER’s third reason amount to no more than unreasoned box-ticking of some, but not all, of the RPP in s 7A of the NEL.

(i)    Invoking the principle of providing service providers with an opportunity to recover “at least their efficient costs” is circular: the purpose of the benchmarking exercise is to ascertain what level of performance may be said to “reasonably reflect” the DNSPs’ efficient costs.

(ii)    No explanation is given of what “regulatory and commercial risks” the doubling of the initial downward adjustment is supposed to reflect.

(iii)    The AER’s reference to “providing sufficient stability to promote efficient investment” is wholly out of place in its consideration of whether a DNSP’s proposed opex requirements reflect the efficient operating costs of a prudent network operator.

(iv)    If the AER is suggesting that it had decided to reduce the benchmark comparison point to insulate the DNSPs’ shareholders from the effect of an immediate reduction in their opex allowance, that suggestion is at odds with the AER’s observation in Attachment 7 to the Ausgrid Final Decision that:

If a transition is a “premium” above the efficient costs that a prudent operator would require, we cannot include that premium in our estimate of total forecast opex that we are satisfied reasonably reflects these opex criteria. Conversely, if a transition is included as part of a forecast that does reasonably reflect the opex criteria, no further premium is required or possible.

(d)    Insofar as the AER relies on EI’s advice to justify its further reduction, EI’s own reasoning is shown to be informed by nothing more than the adoption of an even more conservative benchmark than the adjustment it described as “conservative” and “generous” at the Draft Decision stage – see the Second EI Report at p 65:

We have previously noted that it is prudent to adopt a conservative approach to choosing an appropriate benchmark for efficiency comparisons. Adopting a conservative approach allows for general limitations of the models with respect to the specification of outputs and inputs, data imperfections and other uncertainties. While we have not found any of the criticisms made of our 2014 economic benchmarking study warrant changes to be made to our underlying approach, we are of the view there may be a case for setting a more conservative target than that used in Economic Insights (2014). This is particularly the case given that this is the first time economic benchmarking is being used as the primary basis for an Australian regulatory decision.

We are of the view that instead of using the customer weighted average of efficiency scores in the top quartile of possible scores, a more conservative approach of using the lowest of the efficiency scores in the top quartile of possible scores is appropriate. This would make the average efficiency score of 0.77 achieved by AusNet Distribution the appropriate opex efficiency target, before allowance for additional operating environment factors not included in the econometric modelling. Being a predominantly rural DNSP also makes choosing AusNet Distribution’s score a relatively conservative choice for the efficiency target. This change represents a 9 percentage point reduction in the opex efficiency target (from 0.86 to 0.77) and so is a generous additional allowance for any remaining modelling limitations, data imperfections and other uncertainties. It also represents a lower target of around the bottom of the top third of DNSPs compared to Ofgem’s target of the 75th percentile DNSP. Allowance for the additional operating environment factors not included in the econometric modelling then further reduces this target again (to between 0.62 and 0.69, depending on the DNSP).

(e)    Finally, PIAC contends that the AER’s assertion that its approach is consistent with the approach of the UK regulator, the Office of Gas and Electricity Markets (Ofgem), is, for the following reasons advanced by PIAC, inapt:

(i)    Ofgem’s totex benchmarking uses the upper quartile level of performance, under a weighted average of three benchmarking models. That is explained by Ofgem in the following extract from its RIIO-ED1: [Revenue using Incentives to deliver Innovation and Outputs-Electricity Determination 1] Final determinations for the slow-track electricity distribution companies - overview, 28 November 2014 set out in PIAC’s opex submissions at [79]:

4.12.    We benchmark the efficient level of totex for each DNO [distribution network operator] using the upper quartile (UQ) of the combined outputs from the three models. This addresses the risk that the combination of three separate UQ benchmarks might result in a benchmark that is tougher than any of the DNO forecasts. We use UQ rather than the frontier to allow for other factors that may influence the DNOs’ costs. The UQ level of efficiency (lower quartile level of costs) is the 25th percentile in the distribution of efficiency scores.

(ii)    Properly understood, the AER’s approach is not a “top quartile” approach. Rather, the AER applied “the lowest of the efficiency scores in the top quartile of possible scores (represented by AusNet Services)” ie the DNSP whose EI model efficiency score was the lowest above 0.75 or the fifth most efficient of the 13 DNSPs (Attachment 7 to the Ausgrid Final Decision at p 7-721).

(iii)    The AER described this as “equivalent to the efficiency score for the business at the bottom of the upper third (top 33 percent) of companies in the benchmark sample” (Attachment 7 to the Ausgrid Final Decision at p 7-721).

(iv)    Yet even that understates how far the benchmark had been lowered at this initial step, as AusNet lay outside the upper third of the sample. In distributional terms, AusNet sat at about the 62nd percentile, which most closely approximates the cut-off for the top three octiles, implying a reduction roughly 50 percent larger than if the AER had adhered to Ofgem’s “upper quartile” approach (see the arithmetic sketch set out after this list).

(v)    Moreover, an important facet of Ofgem’s totex benchmarking method is that, having set the benchmark at that upper quartile level, it makes only very limited allowances for what may be described as OEFs. In particular, Ofgem granted “company specific factors” allowances to only three out of 14 of its DNOs – see Ofgem’s RIIO-ED1: Final determinations for the slow-track electricity distribution companies Business plan expenditure assessment, 28 November 2015, at [4.1] ff which explains Ofgem’s approach as follows:

4.1.    We consider whether DNO submitted data require adjustments prior to carrying out our comparative benchmarking. This is to ensure that the comparisons are on a like-for-like basis. Where we decide adjustments are appropriate, we adjust the DNO submitted costs before our totex and disaggregated assessments. These adjustments fall into four broad categories:

1.    Regional labour costs. These adjustments are made as operating in certain parts of the country attracts significantly higher labour costs. These apply to the two totex models and the disaggregated model in the same way.

2.    Company specific factors. These are additional costs associated with operating a particular DNO network. The size of the adjustments differs in the disaggregated model compared to the two totex models. For some activities the disaggregated analysis already factors in the special case and to apply these adjustments again would be a double count. For example, if the special case is based on the need to do more volumes of work and our disaggregated model allows all the submitted volumes, we would not make a further company specific adjustment.

3.    Exclusions from totex models. These are costs that are inappropriate for comparative benchmarking because they are not adequately explained by cost drivers that are being used in the totex models or because there is a substantial change in the nature of the activity between DPCR5 and RIIO-ED1. These exclusions only apply to the totex models. This does not apply to the disaggregated analysis. At the disaggregated level each cost activity is assessed by a bespoke model which uses the most intuitive cost driver and accounts for any changes in historical and/or forecast costs.

4.    Other adjustments. Three other adjustments we make are to remove costs outside the price control, to remove non-controllable costs and to account for indirect cost allocation. These apply to the two totex models and the disaggregated model in the same way.

4.2.    Once we estimate the modelled costs for each activity and for totex, we reverse the regional labour adjustments and company specific adjustments and add back an efficient view of those cost items excluded from our benchmarking analysis.

(vi)    Thus, it is PIAC’s submission that, properly understood, Ofgem’s “upper quartile” method involved both a smaller lowering of the benchmark comparison point from the efficiency frontier, and a considerably more selective approach to making any additional ad hoc, network-specific adjustments that would further lower the efficiency benchmark comparison point. Unlike the AER’s method, Ofgem’s “upper quartile” benchmarking method encompasses allowances for all but the most material variances in operating environment.
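The percentile arithmetic underlying (iv) may be reconstructed as follows (a back-of-envelope sketch derived from the ranking described above; it is not a calculation appearing in the Final Decisions).

    # Back-of-envelope reconstruction of the percentile comparison in (iv) above,
    # using the decision's ranking of 13 DNSPs with AusNet fifth most efficient.
    n_dnsps = 13
    ausnet_rank_from_top = 5

    # Share of the sample ranked below AusNet, expressed as a percentile.
    ausnet_percentile = (n_dnsps - ausnet_rank_from_top) / n_dnsps * 100
    print(round(ausnet_percentile, 1))              # ~61.5, ie the "62nd percentile"

    # The cut-off for the top three octiles is the 62.5th percentile.
    print(100 - 3 * 12.5)                           # 62.5

    # Distance of each benchmark below the most efficient firm, in percentile points.
    distance_aer = 100 - ausnet_percentile          # ~38.5
    distance_ofgem_uq = 100 - 75                    # 25
    print(round(distance_aer / distance_ofgem_uq - 1, 2))   # ~0.54, ie roughly 50 percent larger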

312    There is also merit in ActewAGL’s submission that the AER’s decision to lower the EI model’s comparison point was arbitrary and that it indicates that the AER was not confident in the results of the EI model – a submission made notwithstanding that ActewAGL benefits from the lowering of the comparison point.

313    ActewAGL questions why, if EI and the AER regard the criticisms of the benchmarking methodology as being without foundation, the shift was made at all. What was the underlying rationale for the decision to make the shift? If the AER was confident in the EI model and its OEF adjustments, why not set ActewAGL’s opex by reference to the most efficient Australian firm, as assessed by the EI model? Those questions are, ActewAGL submits, incapable of definitive answer, because the decision about where the frontier should sit does not have an analytical premise.

314    That decision, it submits, is based on nothing more than unease about the reliability of the EI model and the OEF adjustments. Setting ActewAGL’s opex allowance by reference to the bottom firm of the top third of efficient firms (as per the Final Decisions) is no more defensible than setting it by reference to the average of the top third of DNSPs (as per the Draft Decisions). ActewAGL concludes its submissions on the AER’s lowering of the comparison point by observing that the fact that such an arbitrary choice is apparently necessary, because of deficiencies in the EI model, suggests that no reliance should be placed on the results of that model.

315    Networks NSW’s submissions in relation to the AER’s lowering of the comparison point are mainly directed at rebutting PIAC’s submissions on the issue. To that end, Networks NSW makes two submissions:

(a)    first, PIAC’s contentions rest on the assumption that the EI model and the AER’s application of it is valid and if the Tribunal were to uphold Networks NSW’s submissions to the contrary, PIAC’s submissions would fall away; and

(b)    second, in support of its suggestion that the AER return to the average of the upper quartile, PIAC has put on no evidence, nor suggested any reason, why the average of the upper quartile is a more appropriate benchmark than the bottom of the upper quartile.

316    The above quoted reasons advanced by the AER for lowering the comparison point:

(a)    make a significant acknowledgment of general limitations in its models with respect to the specification of outputs and inputs, data imperfections and other uncertainties; and

(b)    do nothing to assuage concerns about the use of such a sophisticated econometric model as the EI model where economic benchmarking is being used for the first time in Australia.

317    The submissions put by PIAC against those reasons are cogent. So too are the unanswered questions raised by ActewAGL.

The AER’s OEF adjustments

318    Each of Networks NSW, ActewAGL, Ergon and PIAC submits that the AER erred in making its post modelling OEF adjustments described above.

319    As observed above, recognising that the EI model does not account for many OEFs relevant to estimating opex, the AER made ex post adjustments for them. For ease of reference the adjustment process detailed above may be summarised as follows:

(a)    first, the AER decided whether the OEF was material or immaterial;

(b)    if in the AER’s opinion the OEF was material, the AER assessed its quantum vis-à-vis the DNSP under consideration;

(c)    if, however, the AER was of the opinion that the OEF was individually immaterial and:

(i)    it was likely to provide a cost disadvantage, it made a uniform +0.5 percent adjustment;

(ii)    was, in PIAC’s terms, “directionally ambiguous”, ie there was doubt whether it was likely to provide a cost advantage or disadvantage, it made a uniform +0.5 percent adjustment; and

(iii)    it was likely to provide a cost advantage, it made a uniform −0.5 percent adjustment.
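In schematic terms, the adjustment process summarised above may be restated as follows (a sketch based only on the Tribunal’s summary; it is not the AER’s own workings, and the threshold and function names are illustrative).

    # Schematic restatement of the OEF adjustment process summarised above
    # (a sketch of the Tribunal's summary, not the AER's own workings).
    MATERIALITY_THRESHOLD = 0.005   # 0.5 percent of base year opex

    def oef_adjustment(estimated_impact, direction):
        """estimated_impact: the OEF's impact as a fraction of base year opex, if quantified.
        direction: 'disadvantage', 'ambiguous' or 'advantage' for immaterial OEFs."""
        if estimated_impact is not None and abs(estimated_impact) > MATERIALITY_THRESHOLD:
            return estimated_impact          # material: the assessed quantum is applied
        if direction in ("disadvantage", "ambiguous"):
            return +MATERIALITY_THRESHOLD    # uniform +0.5 percent adjustment
        return -MATERIALITY_THRESHOLD        # cost advantage: uniform -0.5 percent adjustment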

320    The results of each adjustment for the Networks NSW DNSPs and for ActewAGL appear in two tables in Attachment 7 to the relevant Final Decisions (Table A.6 Summary of final decision on OEF adjustments; and Table A.8 Summary of individually immaterial OEF adjustments, respectively) which are reproduced above.

321    PIAC notes that the seemingly inconsequential 0.5 percent adjustments in Table A.6 have a very substantial overall effect on the Networks NSW DNSPs’ aggregate opex allowances over the five-year regulatory period. PIAC estimates that each individual +0.5 percent adjustment results in an increase in the base-year opex of approximately $1.7m for Ausgrid, $1.1m for Endeavour, and $1.4m for Essential. Applied across the regulatory period, PIAC estimates that each individual +0.5 percent adjustment is in turn equivalent to an increase in the nominal aggregate opex allowance of $9.8m for Ausgrid, $6.1m for Endeavour and $8.1m for Essential. When the effect of a single +0.5 percent adjustment is multiplied by the number of directionally ambiguous OEF allowances, PIAC estimates that the overall impact of the directionally ambiguous OEF allowances is approximately $108m for Ausgrid, $73m for Endeavour and $89m for Essential. There was no submission directly controverting those arithmetical estimates.
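The internal arithmetic of those estimates can be checked in a few lines (an illustrative reconstruction using the Ausgrid figures; the implied count of directionally ambiguous OEFs in the last line is an inference from the quoted totals, not a figure stated in the submissions).

    # Illustrative check of PIAC's arithmetic (Ausgrid figures, $m).
    base_year_effect = 1.7           # effect of one +0.5% adjustment on base-year opex
    five_year_effect = 9.8           # the same adjustment rolled through the 2014-19 period
    total_ambiguous_effect = 108.0   # PIAC's estimate for all directionally ambiguous OEFs

    print(round(five_year_effect / base_year_effect, 1))        # ~5.8: five years plus nominal growth
    print(round(total_ambiguous_effect / five_year_effect, 1))  # ~11: implied number of ambiguous OEFs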

322    ActewAGL and Networks NSW submit that the AER’s OEF adjustment process was subjective and arbitrary.

323    Networks NSW also submits that the AER’s OEF adjustments underscore the novelty of the AER’s approach to econometric benchmarking and that the approach changed in the course of its application. Moreover, the adjustments were, in Networks NSW’s submissions:

(a)    an acknowledgement by the AER that the four EI model parameters do not adequately account for differences between the DNSPs; and

(b)    an “attempted bandaid on a flawed process”.

324    Further submissions by Networks NSW are to the effect that the AER did not:

(a)    have regard to data supplied by Essential relating to extreme weather events or corrosive environments and by Networks NSW relating to the difficulties and cost-inefficiencies associated with trying to enforce vegetation management on landowners; and

(b)    make an adjustment for Networks NSW DNSPs’ proportion of wooden poles.

325    Both PIAC and Networks NSW complain that they were not given the opportunity to be heard on the AER’s post-draft decisions to make the OEF adjustments. As earlier noted, that is not a complaint which they translate on these applications to a complaint of failing to accord procedural fairness in a way which directly enlivens a ground of review.

326    PIAC also submits that:

(a)    it was erroneous and inconsistent with the selection of a lower comparison point for the AER to ascertain each OEF’s effect on opex by comparing its effect on the opex of each of the Networks NSW DNSPs to the weighted average of the effect it had on the opex of the DNSPs whose efficiency scores were equal to or above the benchmark comparison point; and

(b)    the AER erred by allowing a +0.5 percent adjustment for directionally ambiguous (and immaterial) OEFs.

327    Ergon submits that the AER’s OEF adjustments do not overcome the underlying error that the AER made in considering the EI model as sufficiently robust and credible to produce a forecast opex allowance for the applicant DNSPs and otherwise adopts the submission made by Networks NSW.

Were the OEF adjustments arbitrary?

328    The AER responds to the assertions that it was arbitrary in its selection of a materiality threshold for its OEF adjustments by way of a submission stating that it exercised its discretion guided by EI’s and its own expertise and that the decision to use 0.5 percent was reasonable for the following reasons:

(a)    the 0.5 percent threshold is substantially lower than the level (5 percent) required by accounting principles and the selection of a low materiality threshold increased the detail in which it examined the effects of OEFs, which ultimately resulted in a more accurate determination; and

(b)    the same threshold was chosen for the RIN process in order to engender a high degree of confidence in the cost data applied in the RINs (DNSPs calculated the difference that their previous and current cost allocation methods would produce in reported opex and, if the figure was above 0.5 percent, they were required to backcast their costs using the current method).

329    The AER’s response is more an explanation of why it chose the figure of 0.5 percent than a justification of its reasonableness. As Networks NSW rightly submits:

No reasonable regulator could accept, as the AER does, that its benchmark model does not properly account for many OEFs without then seeking to take those into account.

330    ActewAGL makes a similar pertinent submission – the 23 percent OEF adjustments that the AER made to ActewAGL’s efficiency score generated by the EI model demonstrate that even the AER considers there are serious comparability issues that cannot be addressed by the specification of the EI model. In that context, it submits, it is difficult to understand how the AER considers it to be reasonable to place reliance on the outcome of that model as the sole determinant of ActewAGL’s opex allowance.

331    In support of its submission that the OEF adjustments are arbitrary, ActewAGL cites the following passage from the Huegin Report (at pp 51 and 52):

[t]here is ... no detailed analysis or explanation of the justification for deeming other variables insignificant in the draft decision to support the AER’s claim that only a few of the factors have a material effect on total opex

… … …

For all the consideration of environmental variables individually in the draft decision, the adjustment amount allowed for the collective influence of these variables is merely a subjective estimate.

332    ActewAGL rightly submits that Huegin’s analysis applies equally to the approach taken by the AER in the Final Decisions.

333    Moreover, even if the OEF adjustments were properly quantified, the manner in which they have been applied is flawed. As ActewAGL submits, adjustments, where required, should be made before modelling, by normalising the data set, rather than after modelling. In support of its submission it cites the following passage from the CEPA Report Benchmarking and setting efficiency targets for the Australian DNSPs: ActewAGL Distribution, January 2015 (CEPA ActewAGL Report) (at pp 10-11), which concludes that it would be more appropriate to make the adjustments before the modelling as inconsistent data may be affecting the modelling:

Economic Insights has taken account of these adjustments and proposed that the frontier for AAD [ActewAGL] could be adjusted by 30% as a result. While I do not disagree that adjustments should be made where data are inconsistent, given the magnitude of the adjustments proposed by Economic Insights I consider that it would be more appropriate to make these adjustments before modelling … as the inconsistent data are likely to affect the modelling.

334    The conclusion is apt. If adopted by the AER, it would bring its approach more into line with Ofgem’s as outlined in the above quoted passage from its RIIO-ED1: Final determinations for the slow-track electricity distribution companies Business plan expenditure assessment, 28 November 2015.

335    As submitted by ActewAGL, the difficulty with the AER’s approach is that, despite making post modelling OEF adjustments, the efficiency scores of the EI model have been affected by the inclusion of non-comparable data. Post modelling adjustments do not address the fact that the cost relationships within the model, including those for DNSPs for which no OEF adjustments have been made, have been affected by the non-comparable data. Thus, those cost relationships are skewed by heterogeneous differences between the DNSPs. The output of the model is therefore skewed by flawed data. That skewed cost relationship cannot be corrected by post modelling OEF adjustments made to some only of the DNSPs (ie the three Networks NSW DNSPs and ActewAGL).

336    ActewAGL concluded its submission on this point by noting that, while EI has recognised the importance of making adjustments to data before modelling to create a comparable data set, EI failed to do so in its work for the AER leading to the Final Decisions. In ActewAGL’s view, the reason that EI did not do so appears to be that it was not possible to normalise the data used within the EI model. There is, therefore, in ActewAGL’s opinion, a recurring theme in EI’s approach: its methodology for assessing the DNSPs’ opex was driven by data (and, in particular, the use of international data), rather than by an a priori decision about what would provide the most robust means of assessing each DNSP’s opex. While ActewAGL acknowledges that is understandable at a practical level, it submits that it does not alter the fact that those practical limitations have diminished the probative value of the model to the point of non-existence.

Should directionally ambiguous OEF adjustments have been made?

337    Expanding on its submission that the AER erred by allowing a +0.5 percent adjustment for directionally ambiguous OEFs, PIAC submits that the only approach reasonably open to the AER in relation to such OEFs was to make no adjustment at all.

338    It is PIAC’s submission that:

(a)    the possibility that a DNSP’s base-year opex might be affected indiscernibly – in one direction or the other, if at all – by such an OEF is something for which the AER’s lowering of the comparison point provided more than an adequate allowance; and

(b)    the AER exercised its discretion incorrectly and made an unreasonable decision which does not contribute to the NEO.

339    In support of its submission PIAC cites the following extract from the transcript of the consultations held pursuant to s 71R of the NEL at p 36 which records the view of the Energy Users Association of Australia’s representative, Mr Hugh Grant as follows:

In essence, the AER’s approach to those adjustments … in my view … are arbitrary, unprincipled and in many cases illogical. And it’s very important to note that the NSW distributors are also challenging the logic of those adjustments, labelling them as fundamentally flawed, unreliable, unreasonable and arbitrary.

340    The AER’s response to PIAC’s criticism of its allowance of a +0.5 percent adjustment for directionally ambiguous OEFs is that it was an appropriate conservative approach, consistent with the RPP in s 7A of the NEL, that would ensure that the DNSPs would have a reasonable opportunity to recoup at least the efficient costs incurred as a result of those OEFs. That response is drawn from Attachment 7 to the Ausgrid Final Decision at p 7-182 where the AER observes:

In future, as our information set improves we may reconsider our approach to immaterial OEFs.

341    There is merit in PIAC’s reply to the AER’s submission, namely, that the AER has not endeavoured to defend the adjustment for directionally ambiguous OEFs as an adjustment that was justified or necessary in order to ensure that it would make the decision that it was satisfied would advance the NEO to the greatest degree.

342    It is also to be noted that the AER’s response is an acknowledgement by it of the immaturity of its data – an acknowledgement that should have alerted it to the vagaries of relying on the data to the extent that it did.

343    While, perhaps, the AER’s citation of the Second EI Report’s observation that “… more detailed and improved estimates are now incorporated for factors with small impacts” provides a partial answer to PIAC’s submission that the AER’s approach on directionally ambiguous OEFs was not based on a recommendation by EI, it does not address PIAC’s submissions that EI’s advice in the Second EI Report at p 98 and in Table 5.1 was that:

(a)    any adjustment should be “positive for factors that disadvantage the DNSP being reviewed and negative for those that advantage it”; and

(b)    in EI’s tabulation of the OEF adjustments, the directionally ambiguous OEFs were swept up into a category titled “accumulated other factors”, about which EI offered no further explanation.

344    PIAC’s submission that the directionally ambiguous OEFs have an “unobservably small” impact elicits a response from the AER that: “… the direction of the advantage of these OEFs could not be ascertained not necessarily because their effects were unobservably small, but because sufficient data about their effects had not been provided to the AER” – a response that confirms the applicants’ concerns about the adequacy of the data upon which the AER founded its benchmarking. Indeed, it is another acknowledgement by the AER of the immaturity of its data that should have put it on notice of the consequences of relying on such data.

345    That confirmation of the applicants’ concerns about the adequacy of the data is reinforced by the AER’s response to PIAC’s submission that the AER erred by applying an adjustment where detailed cost data was unavailable – again, the response advances “deficiencies in the information provided by DNSPs” to explain why exact quantification of the OEFs was not always possible.

346    A further flaw advanced by PIAC in its challenge to the AER’s adjustments for immaterial OEFs is that:

(a)    it defines the materiality threshold so as to categorise an OEF as immaterial if it produces a cost advantage or disadvantage of no more than 0.5 percent of base year opex;

(b)    but in making a quantitative adjustment for each immaterial OEF, the AER applies an adjustment of ±0.5 percent of base year opex, ie at the outer bounds of the range of immaterial opex impacts.

347    Thus, as PIAC correctly submits, by the AER’s own definition, it is to be expected that:

(a)    insofar as their direction can be ascertained, the immaterial OEFs will, on average, have an absolute opex impact of less than ±0.5 percent;

(b)    the directionally ambiguous OEFs will, on average, have a zero opex impact;

(c)    but the AER gives no explanation of why it applied adjustments at the outer bounds of the immaterial range for immaterial OEFs that would, by definition, have absolute magnitudes no greater than those outer bounds.
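The force of (a) and (b) can be illustrated with a simple expectation calculation (a sketch resting on an assumption made only for illustration, namely that an immaterial OEF’s true impact is equally likely to fall anywhere within ±0.5 percent of base year opex).

    # Sketch under an illustrative assumption (not one made by any party): if an
    # immaterial OEF's true impact is uniformly distributed within +/-0.5 percent
    # of base year opex, its expected impact is zero and its expected absolute
    # impact is 0.25 percent, ie half the +/-0.5 percent adjustment applied.
    import numpy as np

    rng = np.random.default_rng(1)
    impacts = rng.uniform(-0.005, 0.005, 1_000_000)
    print(round(impacts.mean(), 4))           # ~0: ambiguous OEFs average out
    print(round(np.abs(impacts).mean(), 4))   # ~0.0025: well inside the 0.005 outer bound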

The OEF adjustments vis-à-vis the lower comparison point

348    Citing the following passage from submissions to the consultations held pursuant to s 71R of the NEL by the Major Energy Users’ representative, Mr David Headberry, at transcript p 116, PIAC submits that it was reasonable for the AER to make some minor adjustment of the comparison point below the efficiency frontier, in order to make allowance for inherent limitations in the benchmarking model.

The AER initially used an average of the highest quintile [sic] of the benchmark outcomes to set that efficient allowance … and, again, when you look at the highest quintile the firms that are in that highest quintile include South Australia Power Networks, Powercor, CitiPower, United Energy; all of which have got similar characteristics to those in the NSW networks. So if you accept that there’s a problem that the NSW networks’ opex is too high and they’re not efficient, how do you go about setting the right number? And my view is that the approach that was used initially by the AER of using the average of the highest quintile is probably about right because it has a mix of the CBD, urban, regional cities and also rural and remoter areas. And when you look at the mix of the networks that are included in that top quintile they are not too bad a surrogate for what we actually see in NSW.

349    But, PIAC went on to submit, having made an adjustment to the comparison point, the AER was wrong to compare the effect an OEF had on the opex of a DNSP under consideration to the weighted average of the effect it had on the opex of the DNSPs whose efficiency scores were equal to or above the benchmark. This, it submitted, is inconsistent with the AER’s decision to use the single efficiency score of AusNet, the DNSP at the bottom of the upper third of DNSPs, as the comparison point.

350    PIAC expanded on its submissions under this heading by outlining what it saw as the following effects of lowering the comparison point:

(a)    the initial reduction from the efficiency frontier (CitiPower, at 0.950) to the benchmark comparison point was more than doubled from the Draft Decision to the Final Decision (the percentages in the table below are checked in the sketch set out after this list):

                  Efficiency frontier    Benchmark comparison point    Reduction from frontier

Draft decision    0.950                  0.862                         9.3%

Final decision    0.950                  0.767                         19.3%

(b)    the new OEF unadjusted benchmark comparison point is lower than the post-OEF-adjustment comparison point used for each of the Networks NSW DNSPs in the Draft Decision, namely, 0.786.

(c)    thus, when the AER made the final decision OEF adjustments, it started from a lower comparison point than when making the OEF adjustments in the Draft Decisions.
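The “reduction from frontier” percentages in the table at (a) can be verified directly (a simple check; the figures appear to be calculated relative to the frontier score of 0.950).

    # Check of the "reduction from frontier" percentages in the table at (a) above
    # (they appear to be computed relative to the frontier score of 0.950).
    frontier = 0.950
    for label, comparison_point in (("draft", 0.862), ("final", 0.767)):
        reduction = (frontier - comparison_point) / frontier
        print(label, round(reduction * 100, 1))   # draft ~9.3, final ~19.3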

351    Responding to PIAC’s submission under this heading the AER drew on Attachment 7 to the Ausgrid Final Decision (at p 7-183) to explain that:

(a)    it used the average of the top five DNSPs for the OEF process because in that process it is necessary to estimate an OEF’s effect on the opex of an efficient DNSP and such effects may vary between DNSPs – it cannot be assumed that the effect of an OEF on the opex of the DNSP at the benchmark comparison point would be replicated in other efficient DNSPs;

(b)    if it were to be so assumed, it could lead to OEF adjustments that unfairly advantage or disadvantage DNSPs; and

(c)    thus, an average is more accurate than a comparison to a single firm.

352    Rather than challenging the premise of PIAC’s submission that the AER’s averaging approach artificially inflates the impact of the OEFs and improves Networks NSW DNSPs’ apparent relative efficiency, the AER submitted that there is no evidence that its approach affects their efficiency scores positively rather than negatively.

353    Finally, in response to PIAC’s submission that the lowering of the benchmark comparison point was sufficient to allow for any potential modelling and data uncertainties, including the immaterial OEFs, the AER repeated that it adopted its approach to immaterial OEFs in order to provide DNSPs with the opportunity to recoup at least their efficient costs, consistently with the RPP – a submission which, for reasons outlined above, was rightly exposed as tenuous by PIAC.

Other OEF issues

354    Having regard to the conclusions that may be drawn from the above considerations of the parties’ submissions (particularly those of PIAC) challenging the AER’s approach to determining the OEFs, it is not necessary to consider the following challenges to particular OEFs, other than to note them:

(a)    PIAC’s claim that the AER erred in reducing its estimate of the quantum of the advantage that the Networks NSW DNSPs enjoyed vis-à-vis the Victorian DNSPs (from 2.4 percent in the Draft Decisions to 0.5 percent in the Final Decisions);

(b)    PIAC’s claim that the AER erred in applying information provided by Essential in relation to fungal decay in wooden poles to Ausgrid and Essential;

(c)    claims by PIAC and by Networks NSW that they were not given an opportunity to be heard on the AER’s OEF adjustments; and

(d)    Networks NSW’s claim that the AER erred by not making an adjustment for a DNSP’s proportion of wooden poles.

The efficiency of the DNSPs’ vegetation management costs

355    Each of Endeavour, Essential and ActewAGL challenges the AER’s assessment that the vegetation management costs in their respective 2012-13 base year opex were not efficient.

356    The gravamen of Endeavour’s challenge is the AER’s refusal to provide an allowance for increased vegetation management costs occasioned by its retendering of outsourced contracts for that purpose. Endeavour claims that the AER’s refusal results in a reduction of some $240.7m in Endeavour’s forecast of its vegetation management opex.

357    The essence of Essential’s challenge is that the AER failed to examine whether the forecast reasonably reflects the opex criteria because it took the view that Essential’s forecast of a reduction in 2014-19 vegetation management costs was an admission of past vegetation management inefficiencies. In its regulatory proposal, Essential forecast a $151m reduction based on its implementing efficiencies identified in a report by Select Solutions, Review of Essential Energy Vegetation Management Strategy, 22 March 2013 (the Select Solutions Report). While Essential maintained the reduction in its revised regulatory proposal, it increased its opex forecast by $67m for rectification of non-compliant clearance levels and other deficiencies identified as a result of the introduction of new technology.

358    In summary, ActewAGL’s challenge is directed at the AER’s conclusion that a report by Energy Market Consulting associates (EMCa), Review of ActewAGL Distribution’s Labour Resourcing and Vegetation Management Practices at 2012/13, April 2015, (the EMCa Report) confirmed a finding in the ActewAGL Draft Decision that its labour and vegetation management costs are likely drivers of its poor benchmarking performance.

Endeavour’s challenge

359    It is Endeavour’s contention that its vegetation management opex forecast is:

(a)    required to enable it to comply with a NSW industry standard, Industry Safety Steering Committee 3 Standard – Guidelines for Managing Vegetation Near Powerlines, December 2005 (ISSC 3); and

(b)    directed at meeting the opex objectives in r 6.5.6(a)(2), (3) and (4).

360    Endeavour claims that it has been seeking to improve its compliance with ISSC 3 since 2009 but that by 2011-12 its compliance level was only 76 percent because, contrary to their contracts, some contractors were trimming vegetation only to the minimum clearances and were not making any allowance for regrowth as required by ISSC 3. Thus, commencing in 2011-12, it sought new contracts by way of competitive tender. It is Networks NSW’s contention that the new contracts resulting from that tender process increased contractor opex from $21.3m in 2012-13 to $37.4m in 2013-14. The new contracts (together with increased opex attributable to internal vegetation management work, internal processes and overheads associated with those contracts) resulted in a compliance level of 92 percent in 2013-14.

361    Endeavour considers the increase in opex required to comply with the safety and reliability standards in ISSC 3 is within r 6.5.6(c)(3), ie a realistic expectation of the demand forecast and cost inputs required to achieve the opex objectives.

362    Relevantly, the opex objectives speak of the forecast opex which the relevant DNSP considers is required to achieve each of the opex objectives including:

(1)    comply with all applicable regulatory obligations or requirements associated with the provision of standard control services;

(2)    to the extent that there is no applicable regulatory obligation or requirement in relation to:

(i)    the quality, reliability or security of supply of standard control services; or

(ii)    the reliability or security of the distribution system through the supply of standard control services,

to the relevant extent:

(iii)    maintain the quality, reliability and security of supply of standard control services; and

(iv)    maintain the reliability and security of the distribution system through the supply of standard control services;

363    In support of its vegetation management opex forecasts, Networks NSW:

(a)    relied on its competitive tendering process to demonstrate efficiency in contractor costs; and

(b)    provided to the AER a copy of an internal analysis (an attachment to a Networks NSW Executive Leadership Group Meeting, dated 17 October 2013) which, Networks NSW submits, confirms that there had been an increase in Endeavour’s vegetation management costs due to a need to manage contractors more closely and enforce clearance standards.

364    In Attachment 7 to the Endeavour Final Decision, the AER dismissed the competitive tendering process at p 7-297 as follows:

competitive process only helps in demonstrating the efficiency of the contracts that were tendered. Whether this amount reasonably reflects the prudent and efficient costs of complying with regulatory obligations will depend on whether the scope of works listed in the contracts reflects a prudent scope of works.

365    Attachment 7 to the Endeavour Final Decision at p 7-297 also dismissed the Networks NSW analysis as suggesting that Endeavour’s:

… vegetation management practices are too risk averse, … [and] … are contributing to relatively high vegetation management costs.

366    As observed above, the AER was not prepared to classify Endeavour’s vegetation management opex forecast as a “step change” in terms of its EFA Guideline – see Attachment 7 to the Endeavour Final Decision at p 7-288ff where it sets out its reasons why it was not prepared to accept that opex forecast as a step change.

367    The 2013 EFA Guideline at pp 11 and 27 describes step changes and its approach to them in two places as follows:

Step changes

Our approach is to separately assess the prudence and efficiency of forecast cost increases or decreases associated with new regulatory obligations and capex/opex trade-offs. For capex/opex trade-off step changes, we will assess whether it is prudent and efficient to substitute capex for opex or vice versa.

For step changes arising from new regulatory obligations, we will assess (among other things):

    whether there is a binding (that is, uncontrollable) change in regulatory obligations that affects their efficient forecast expenditure

    when this change event occurs and when it is efficient to incur expenditure to comply with the changed obligation

    what options were considered to meet the change in regulatory obligations

    whether the option selected was an efficient option––that is, whether the DNSP took appropriate steps to minimise its expected cost of compliance from the time there was sufficient certainty that the obligation would become binding

    when the DNSP can be expected to make the changes to meet the changed regulatory obligations, including whether it can be completed over the regulatory period

    the efficient costs associated with making the step change

    whether the costs can be met from existing regulatory allowances or from other elements of the expenditure forecasts.

We will assess changes in regulatory obligations in the context of the core category they affect, which will ensure consistency across DNSPs. Accordingly, DNSPs must allocate step changes arising from regulatory obligations to our expenditure categories (for example, augmentation, replacement, vegetation management).

We will not allow step changes for any short-term cost to the DNSP of implementing efficiency improvements in expectation of being rewarded through expenditure incentive mechanisms such as the EBSS. We expect DNSPs to bear such costs and thereby make efficient trade-offs between bearing these costs and achieving future efficiencies.

4.3 Step changes

Step changes may be added (or subtracted) for any other costs not captured in base opex or the rate of change that are required for forecast opex to meet the opex criteria.

We will assess step changes in accordance with section 2.2 above. Step changes should not double count costs included in other elements of the opex forecast:

    Step changes should not double count the costs of increased volume or scale compensated through the output measure in the rate of change.

    Step changes should not double count the cost of increased regulatory burden over time, which forecast productivity growth may already account for. We will only approve step changes in costs if they demonstrably do not reflect the historic 'average' change in costs associated with regulatory obligations. We will consider what might constitute a compensable step change at resets, but our starting position is that only exceptional events are likely to require explicit compensation as step changes. Similarly, forecast productivity growth may also account for the cost increases associated with good industry practice.

    Step changes should not double count the costs of discretionary changes in inputs. Efficient discretionary changes in inputs (not required to increase output) should normally have a net negative impact on expenditure.

If it is efficient to substitute capex with opex, a step change may be included for these costs (capex/opex trade-offs).

368    In rejecting the increase, the AER observed in Attachment 7 to the Endeavour Final Decision (at p 7-288) that:

(a)    Endeavour had stated that it did not face any change to the minimum risk standards with which it must comply;

(b)    without persuasive evidence that a DNSP’s total historical opex was too low to achieve the opex objectives, it did not consider increased contract costs to be a reason to increase the total opex forecast to meet what are unchanged regulatory obligations;

(c)    Endeavour does not benchmark well when compared to other DNSPs in the national electricity market; and

(d)    it would be inconsistent with the application of Endeavour’s EBSS to include the vegetation management opex in the opex forecast (Endeavour proposed that it retain gains from its EBSS rather than share them with its customers).

369    It is Networks NSW’s submission that the AER was wrong to analyse the proposed vegetation management opex in terms of it being a “step change” because that led it to an ex ante view that the opex might only be accepted if it were the result of an opex / capex trade-off or a new regulatory obligation, whereas the correct approach was first to consider whether that opex reasonably reflected the r 6.5.6(c) opex criteria.

370    The AER’s response to that submission notes that its approach to assessing step changes, as set out in the Endeavour Final Decision, adopts the approach in the 2013 EFA Guideline and cites the following passage from Attachment 7 to the Endeavour Final Decision at p 7-291:

We only include a step change in our alternative opex forecast if we are satisfied a prudent and efficient service provider would need an increase in its opex to reasonably reflect the opex criteria.

That question, the AER submits, is precisely directed to the AER’s task under r 6.5.6(c).

371    While that question may be so directed, a reading of Attachment 7 to the Endeavour Final Decision reveals that what may be described as a step change analysis was very much at the forefront of the AER’s consideration of Endeavour’s vegetation management opex forecast in applying its benchmarking methodology.

372    Thus, the AER did not consider that the fact that Endeavour outsourced its vegetation management contracts by way of competitive tender was sufficient to demonstrate that Endeavour’s vegetation management opex forecast reasonably reflects the r 6.5.6(c) opex criteria or the opex objectives. Thus too, it dismissed the abovementioned internal analysis.

373    In circumstances where Endeavour:

(a)    provided its abovementioned analysis to the AER; and

(b)    is committed to paying contractors retained through a competitive tender process so that it may comply with an applicable regulatory obligation,

the AER’s reasons for rejecting Endeavour’s vegetation management opex forecast are tenuous. A fortiori, having regard to the significant adverse consequences that may flow from a failure to comply with regulatory vegetation management requirements as demonstrated by the Victorian bushfires.

374    The AER also submitted that its assessment was influenced in part by a Deloitte report, NSW Distribution Network Service Providers Labour Analysis, Final Addendum to 2014 Report, 28 April 2015 (the 2015 Deloitte Labour Report). The AER previously engaged Deloitte to conduct an analysis of the Networks NSW DNSPs’ labour costs in the 2009-2014 regulatory period. Its report, NSW Distribution Network Service Providers Labour Analysis, 17 November 2014 (the 2014 Deloitte Labour Report), informed the AER’s assessment of the DNSPs’ 2015-2019 capex and opex forecasts, and was referenced in the Draft Decisions for each of the DNSPs.

375    The 2015 Deloitte Labour Report concluded:

Our view remains that the NSW DNSPs have higher labour costs than their peers (driven by the number of employees rather than costs per employee) due to in [sic] restrictive EBA provisions, a high degree of unionisation and inefficient labour practices, which means that their base year opex was not efficient.

376    Insofar as the 2014 Deloitte Labour Report may have informed the AER’s Draft Decision, Networks NSW had an opportunity to respond to it. It is, however, Networks NSW’s unchallenged submission that it was not given the opportunity to be heard in relation to the 2015 Deloitte Labour Report.

377    As noted elsewhere, of itself s 71C of the NEL does not provide as a ground of review that procedural fairness was not accorded. It is understandable that, with the range of reviewable regulatory decisions being made by the AER and the extensive refined obligations imposed on it by the 2012 Rule Amendments, the regulatory period 2015-19 presented significant administrative challenges. The timeframe in which the DNSPs and the AER must operate in relation to the commencement of the relevant regulatory control period is set out in rr 6.8 to 6.11 of the NER. Recognising those time constraints, r 6.11.1(c) imposes a “best endeavour” obligation on the AER to publish any post Draft Decision analysis for the purposes of its Final Decision.

378    There is presently no basis for thinking that the AER did not properly comply with r 6.11.1(c). It was not the focus of submissions. The DNSPs’ focus was to rely on the timing and quality of the analysis upon which they did not have an opportunity to comment, as part of the picture on which a ground or grounds of review under s 71C(1) of the NEL are made out. Certainly, it is a matter of common sense that a report such as the 2015 Deloitte Labour Report might carry greater weight if it had been the subject of any response from Endeavour, depending of course on the terms of that response.

Essential’s challenge

379    In rejecting Essential’s forecast of its vegetation management opex, Attachment 7 to the Essential Final Decision states (at pp 7-160 and 7-162):

… we consider the Select Solutions report (and Essential Energy documentation discussing it) submitted by Essential Energy with its regulatory proposal demonstrates there are inefficiencies in its vegetation management practices in the 2012–13 base year. While Essential may have since improved its practices, the evidence suggests it had not done so in 2012–13. Therefore, the costs in 2012–13 are overstated.

… … …

We placed most weight on the findings of the Select Solutions review, noting that Essential Energy had proposed a step down in its vegetation management opex in the forecast period (rather than in the base year). Therefore, we maintain our draft decision view that Essential Energy's performance on our economic benchmarking techniques is likely to be partly driven by its vegetation management opex.

380    It is Essential’s submission that the AER:

(a)    contrary to r 6.5.6, performed no analysis of, and gave no consideration to, whether Essential’s forecast vegetation management opex over the 2014-19 period, including the proposed cost reductions, was efficient or prudent;

(b)    made no attempt to quantify any inefficiencies in terms of the effect on opex of the vegetation management issues identified, and thus its analysis provided no probative corroboration of inefficiencies of the scale identified by the AER in reliance upon the EI model; and

(c)    did not ascribe a figure quantifying inefficiencies in Essential’s vegetation management.

381    Thus, in Essential’s submission:

(a)    the AER’s findings regarding vegetation management did not, and could not, justify a quantitative assessment of efficient opex; and

(b)    the AER has conducted an abstract and superficial assessment of Essential’s vegetation management practices, focused on only one year, without actually undertaking the task of identifying the amount of inefficient expenditure.

382    It is Essential’s contention that any inefficiency that may be implied from Essential’s forecast reductions does not amount to the inefficiency identified by the AER’s benchmarking exercise. In support of its contention, Essential claims that:

(a)    its proposed reduction in vegetation management costs in its regulatory proposal was approximately 16.5 percent, which drops to 9.2 percent with the increases in its revised regulatory proposal; and

(b)    by comparison, the EI model identified Essential as having an efficiency score of 54.9 percent compared to a frontier (after making the OEF adjustments) of 69.4 percent, implying a 26.4 percent reduction to opex.

383    It is also Essential’s contention that the AER ignored evidence before it that Essential compares favourably to most other Australian DNSPs in terms of vegetation management opex per vegetation management span (per kilometre of overhead circuit which passes through an area requiring vegetation management) because a “service provider’s estimation assumptions seem to influence the data on maintenance spans” (Attachment 7 to the Essential Draft Decision at page 7-84). Essential notes in this regard that its consultant Advisian observed that: “it is not logical to simply ignore it in a detailed assessment of vegetation management”.

384    Essential’s challenge to the AER’s vegetation management findings concludes with a submission that what data is available on vegetation management spending per vegetation management span is wholly at odds with the conclusions the AER sought to draw from the Select Solutions Report and Essential’s proposed reductions.

385    Insofar as Essential’s challenge is premised on a supposition that the NER require the AER to conduct a line-by-line, bottom-up review of each category of forecast expenditure, it is rejected by the AER. It is the AER’s contention that:

(a)    rule 6.5.6(c) requires the AER to assess whether the total of the DNSP’s forecast opex reasonably reflects the opex criteria;

(b)    as described in its 2013 EFA Guideline, the AER undertakes that assessment using the base-step-trend approach, the first step being an assessment of whether the DNSP’s base year opex reflects the opex criteria;

(c)    the AER assessed Essential’s base year opex using a number of assessment methods, including a review of vegetation management costs as a key category of expenditure; and

(d)    the purpose of the review was to investigate whether vegetation management in the base year indicated inefficiency and supported and explained Essential’s poor performance on other assessment techniques – quantification of the inefficiency was not the purpose of the review, and was not required by the AER’s assessment process.

386    It appears from Attachment 7 to the Essential Final Decision that the AER’s focus was on the Select Solutions Report and its 16 recommendations to improve Essential’s vegetation management. At p 7-162 of the Attachment, the AER observes that Essential had noted that recommendations from the report were in the process of being implemented and concluded that:

… they could not have been implemented in the 2012–13 base year. This year is the relevant year for determining the appropriateness of Essential Energy's revealed costs as the starting point for determining an estimate of efficient and prudent total forecast opex.

387    The AER dismisses Essential’s contention based on the inconsistency between the extent of the reduction to vegetation management costs proposed by Essential (9.2 percent) and the AER’s comparison to the efficient frontier (26.4 percent) as a non-sequitur.

388    The AER also dismisses Essential’s submissions that it ignored evidence that Essential performs comparatively well in terms of vegetation management opex per vegetation management span by submitting that it did not ignore the evidence; it simply did not find it persuasive for the reason stated in Attachment 7 to the Essential Draft Decision.

389    While, as submitted by the AER, neither the NEL nor the NER mandate a line-by-line, bottom-up review of each category of forecast opex, in circumstances where benchmarking in Australia is in its infancy, sensible administration dictates that the AER should not have cast aside its previous practice of conducting bottom-up reviews in favour of the emphasis it placed on benchmarking. A fortiori, in circumstances where its preferred EI model’s reliance on overseas data and the AER’s final OEF adjustments could not have the benefit of full exposure to the consultation processes mandated by the NEL and the NER.

390    Viewed in that context, the AER’s apparently untested conclusion that the recommendations of the Select Solutions Report could not have been implemented in the 2012–13 base year, and its preference for its assessment of Essential’s overall opex based on the EI model, are unconvincing. Likewise, the Tribunal is not convinced by the AER’s dismissal of Essential’s submissions that the AER should have quantified the vegetation management inefficiencies and should not have ignored Essential’s comparative performance in terms of vegetation management opex per vegetation management span.

ActewAGL’s challenge

391    Attachment 7 to the ActewAGL Final Decision notes (at p 7-54) that:

(a)    the AER’s analysis of ActewAGL’s opex categories showed it had “very high costs on labour and vegetation management metrics compared to most of its peers”;

(b)    because those categories account for a significant proportion of ActewAGL's opex (labour is approximately 80 percent), the AER conducted detailed reviews of labour and vegetation management opex; and

(c)    the “detailed review”, referred to in these reasons as the EMCa Review, found significant issues in those categories of ActewAGL’s opex, which the AER considered evidence of base year inefficiency, supporting its benchmarking results.

392    ActewAGL submits that the “detailed review” does not support the AER’s benchmarking, is “fundamentally flawed”, and “as a ‘qualitative assessment’ has no utility in determining the accuracy and reliability of a quantitative assessment of ActewAGL’s opex … .”

393    Expanding on its submission that the EMCa review does not support the AER’s benchmarking, ActewAGL challenges the following findings by the AER in Attachment 7 to the ActewAGL Final Decision (at p 7-146 and p 7-153):

(a)    ActewAGL’s labour costs are “… driven by having too many employees rather than by cost per employee”; and

(b)    “ActewAGL could potentially achieve efficiencies by outsourcing more”

on the basis that:

(c)    the AER did not provide evidence to establish that a higher level of outsourcing would deliver more efficient expenditure; and

(d)    the evidence of ActewAGL’s expert, Advisian, which was submitted to the AER, establishes that: “… the question of whether opex or capex tasks are carried out by internal or external labour is largely irrelevant to the efficiency of the outcome.” (Advisian, Opex cost drivers: ActewAGL, January 2015, at p 96).

394    ActewAGL responds to the AER’s finding in Attachment 7 to the ActewAGL Final Decision (at p 7-153) that due to restrictions on outsourcing in ActewAGL’s Enterprise Bargaining Agreement, it does not “appear to be adopting the lowest cost option” and its conclusion (at p 7-156 based on the EMCa review) that ActewAGL’s “lack of outsourcing is a key reason why its labour costs in 2012-13 are not reflective of those of a prudent and efficient service provider”. It does so by citing the findings in a report by Australian Business Lawyers & Advisors Pty Ltd (Review and comparison of ActewAGL’s enterprise agreement provisions against other electricity network service providers, 13 January 2015) that:

(a)    many of the criticisms made about the operation of the ActewAGL EA [Enterprise Bargaining Agreement] are unfounded; and

(b)    contrary to the conclusions reached in the AER Draft Decision, the ActewAGL EA is equivalent to and in many respects demonstrably more flexible than the norm in the electricity sector by comparison to other major electricity providers’ enterprise agreements.

395    ActewAGL challenges EMCa’s claim (at p 11 of the EMCa Review) that two reports generated by ActewAGL’s consultants support evidence of systemic issues in ActewAGL’s work practices, processes and systems that existed in 2012-2013 and have translated into material operational cost inefficiency.

396    Attachment 7 to the ActewAGL Final Decision records (at p 7-156) EMCa’s findings based on the first of those reports (Marchment Hill Consulting (MHC) report Organisation Review-ActewAGL Energy Networks, February 2011 (the MHC Report)) as follows (without footnotes):

EMCa’s review included an examination of how ActewAGL runs its business. EMCa examined the MHC report, which ActewAGL commissioned in 2011. MHC found that problems exist in all areas of ActewAGL’s operations, from the way ActewAGL plans its work, through to delivery, and how it monitors and controls its performance operationally and strategically. ActewAGL advised that it had ‘rolled-out’ the majority of its 34 initiatives in response to the 26 issues identified by MHC in 2011.

EMCa disagrees with ActewAGL's view that it could have implemented the recommendations from the MHC report, which ActewAGL consider are implicit in its forecast productivity growth. In EMCa's opinion, a service provider would require 3 to 5 years to extract the full net benefits from the recommendations of the MHC report. However, ActewAGL has indicated the time period to implement these recommendations was only 6-9 months.

EMCa accept that some of the initiatives could be implemented in twelve months or less but the substantial net benefits are typically achieved over a longer time period, particularly given MHC observed that improvements were needed to all elements of ActewAGL’s Operating Model – changing the organisational structure alone would not address all of the issues sustainably.

EMCa considers that in the absence of compelling evidence – ActewAGL has not provided evidence of quantified efficiency gains – ActewAGL has not made significant efficiency gains quickly enough to offset the implementation costs by 2012–13.

397    It is, however, ActewAGL’s submission that the primary objective of the MHC Report was to understand and address performance issues identified by ActewAGL’s management over the longer term and that, except for small direct salary savings, the report does not quantify any cost savings that would flow from an implementation of its recommendations. In support of that submission it points to a letter, dated 4 March 2015, from MHC which states (at p 3):

A close examination of the 2011 MHC report finds no explicit references to inefficiency or poor productivity associated with that Reviews’ [sic] findings.

Given that the scope of MHC’s review was intentionally wide ranging, if such concerns had been identified, they would have been noted.

MHC considered all efficiency opportunities which might flow from the recommendations made in our earlier 2011 report. We did not state an explicit efficiency benefit as these were not directly apparent from our work.

398    Based on the MHC letter, ActewAGL rightly submits that because EMCa’s analysis of the MHC review proceeds on an incorrect basis (ie that the MHC review relates to opex efficiency), there is no basis for EMCa’s conclusions with respect to the MHC review or for EMCa’s conclusion that ActewAGL’s 2012-13 labour costs were inefficient.

399    EMCa relied on the following sentence in the second of the reports (Sinclair Knight Merz (SKM) report Resource Planning to deliver ActewAGL’s Program of Works for the FY 2012/13, Final Report, 27 March 2012 at p 5 (the SKM Report)) to point to “a systemic issue with projects and program delivery”:

However, the business does not have a consolidated works management system making resource scheduling and forecasting on an ongoing basis, difficult.

400    That sentence, however, as ActewAGL submits (also rightly), appears in a context referring to its capex, not its opex: see the SKM Report at p 5. And, as ActewAGL goes on to point out, SKM’s conclusion with respect to ActewAGL’s opex is to the contrary, as the following extract from the SKM Report at p 7 shows:

Finally a very high level review of the operational expenditure under the AMSP [Asset Management Strategy Plan] was conducted. This revealed that most programs involving inspection and maintenance work were being achieved within reasonable tolerances for quantity and budget.

401    Thus, ActewAGL submits (again rightly) that no significant probative value should be attached to the EMCa Report and that it does not provide any support for the AER’s conclusion of inefficiencies in ActewAGL’s labour practices.

402    Nor, in its submission, does the EMCa Report support the AER’s finding at p 7-146 of Attachment 7 to the ActewAGL Final Decision that:

ActewAGL’s labour costs are driven by having too many employees rather than by cost per employee.

403    In support of that submission ActewAGL points to a statement in the EMCa Report that:

staffing levels should be determined as part of comprehensive resourcing analysis

and notes that neither the AER nor EMCa conducted such an analysis. In ActewAGL’s submission, absent such an analysis, the AER erred in its conclusions regarding ActewAGL’s staffing levels.

404    Noting that:

(a)    in the ActewAGL Draft Decision the AER relied on analysis in the 2014 Deloitte Labour Report which, despite its requests, was not provided to ActewAGL; and

(b)    while the AER excised any reference to that report in the ActewAGL Final Decision, the overall conclusions reached in the Final Decision are largely identical to those expressed in the Draft Decision, from which it may be inferred that the AER continued to rely on, or at least take into account, the contents of the report in the Final Decision,

ActewAGL submits that because it was deprived of an opportunity to review and make submissions in relation to the 2014 Deloitte Labour Report, ActewAGL was denied procedural fairness and the conclusions reached by the AER about ActewAGL’s workforce practices cannot be considered to be reliable. As to procedural fairness, the Tribunal refers to its observations above. Having regard to the following paragraphs, it is not necessary for the Tribunal to decide whether the second part of that submission has merit.

405    In response to criticism from ActewAGL that the Draft Decision analysis of its vegetation management expenditure did not corroborate the AER’s benchmarking results because it did not identify at least 40 percent of ActewAGL’s vegetation management expenditure as inefficient, Attachment 7 to the ActewAGL Final Decision states (at p 7-158) that the AER was not applying the detailed review in the manner suggested by ActewAGL and that:

The evidence we present in the detailed review will not necessarily explain the entire performance gap quantified in the economic benchmarking because our intention is not to examine all of opex. Economic benchmarking techniques, on the other hand, do assess opex in totality. The detailed review helps us to identify if the benchmarking results are consistent with our more detailed examinations of ActewAGL’s opex.

406    The AER’s submissions fail in their endeavour to defend the EMCa Report by:

(a)    addressing some detail of ActewAGL’s criticisms of the review; and

(b)    stating that the purpose of the review was not to quantify inefficiency.

407    The EMCa Report is, in its own words at p i, no more than a “limited scope review”. To put it another way, it is but a “desk top” qualitative review which, as rightly submitted by ActewAGL, relied on, but misconstrued, earlier reports commissioned by ActewAGL.

408    Where, as here, a new and untested benchmarking model is applied to arrive at a total opex figure, sensible administration suggests that the regulator responsible for its application would apply some form of quantitative “reasonableness check” bottom-up analysis to at least some, if not all, of the opex components. That is, however, not the case here.

Labour costs – Networks NSW’s challenge

409    Networks NSW challenge the AER’s findings that inefficiencies in the NSW DNSPs’ labour management practices are, in part, responsible for the gap between them and the frontier DNSPs identified in the AER’s economic benchmarking analysis.

410    As may be seen by reference to Table 7.3: Assessment of Ausgrid’s base opex reproduced above and the paragraphs that follow that table, the findings in respect of the Networks NSW DNSPs are based on the 2014 and 2015 Deloitte Labour Reports – being the reports referenced above in canvassing the DNSPs’ challenges to the AER’s finding on their vegetation management practices.

411    The 2015 Deloitte Labour Report concluded (at p 20):

… the NSW DNSPs have a relatively high number of employees compared to private DNSPs in the NEM [national electricity market]. Considering that labour costs represent the vast majority of opex and given that their unit labour costs do not appear to be greater than their peers’, the higher number of employees in Ausgrid, Endeavour and Essential is likely the primary factor driving high opex costs per customer in NSW. Although the number of employees in NSW DNSPs is high due to historical workforce decisions, the current high number of employees is likely being sustained by restrictive EBA provisions relating to no forced redundancies and a relatively high proportion of employees employed under EBAs.

412    A footnote to the above quoted passage from the 2015 Deloitte Labour Report noted that while the “no forced redundancies” provisions are not unique to the Networks NSW DNSPs, the fact that they currently have large workforces makes the provisions more relevant as they impede any large reduction in workforce size.

413    It is Networks NSW’s submission that because the vast majority of the Networks NSW DNSPs’ workforce are engaged pursuant to Enterprise Bargaining Agreements (EBAs) and compliance with the EBAs is a “regulatory obligation or requirement” within the meaning of r 6.5.6(a)(2), the AER’s analysis of Networks NSW’s labour costs does not establish that the DNSPs’ revised regulatory proposals include any labour costs which do not reasonably reflect the operating expenditure criteria.

414    Rule 6.5.6(a)(2) provides:

(a)    A building block proposal must include the total forecast operating expenditure for the relevant regulatory control period which the Distribution Network Service Provider considers is required in order to achieve each of the following (the operating expenditure objectives):

… … …

(2)    comply with all applicable regulatory obligations or requirements associated with the provision of standard control services;

415    The phrase “regulatory obligations or requirements” is relevantly defined in s 2D(b)(v) of the NEL as follows:

an Act of a participating jurisdiction, or any instrument made or issued under or for the purposes of that Act … that materially affects the provision, by a regulated network service provider, of electricity network services that are the subject of a distribution determination or transmission determination.

416    Networks NSW submits that the NSW DNSPs’ obligations to comply with the EBAs constitute a regulatory obligation or requirement in terms of s 2D(b)(v) of the NEL as the EBAs are made pursuant to Part 2-4 of the Fair Work Act 2009 (Cth) – that Act being an Act of a “participating jurisdiction” which obliges the Networks NSW DNSPs to comply with the EBAs. Networks NSW advances its submission by citing Toyota Motor Corporation Australia Limited v Marmara (2014) 222 FCR 152 at [97] and Teys Australia Beenleigh Pty Ltd v Australasian Meat Industry Employees Union (2015) 317 ALR 636 at [92] in support of its contention that the EBAs are not mere contractual agreements; they are specific instruments made under a detailed regime and enforceable only as provided by the Fair Work Act 2009 (Cth).

417    Attachment 7 to the Ausgrid Final Decision shows that the thrust of the AER’s decision to reject the EBAs as a regulatory obligation or requirement is that:

(a)    they are a creature of Commonwealth law; and

(b)    the Commonwealth is not a “participating jurisdiction”.

418    Noting that in its Final Decisions the AER maintained the view that the Commonwealth is not a “participating jurisdiction”, Networks NSW draws on the Minister’s intervention in these proceedings to submit that the AER now acknowledges that status. It is, however, unnecessary to delve further into whether the Minister’s intervention amounts to a concession on the part of the AER. That is because the EBAs may be reasonably regarded as:

(a)    otherwise required to achieve an opex objective, namely, the r 6.5.6(a)(4) objective to: “maintain the safety of the distribution system through the supply of standard control services”; and

(b)    reasonably reflecting the opex criteria in r 6.5.6(c)(3): “a realistic expectation of the demand forecast and cost inputs required to achieve the operating expenditure objectives.”

419    That the EBAs may be so regarded may be seen in the following paragraphs (without footnotes) from Attachment 7 to the Ausgrid Final Decision (at p 7-86) in which the AER, while rejecting the EBAs as a r 6.5.6(a)(2) “regulatory obligation or requirement”, recognised that the EBAs may affect the Networks NSW DNSPs’ provision of standard control services:

We also disagree with the service providers’ submissions that compliance with the terms of their own EBAs is a ‘regulatory obligation or requirement’. For example, service providers have referred to redundancy costs ‘required to be paid as a regulatory obligation’.

… of the six possible (and exhaustive) categories of obligations or requirements … , EBAs could conceivably only fall with an Act or instrument made or issued that ‘materially affects a service provider's provision of electricity network services’. This is because the terms of an EBA could plausibly materially affect a service provider’s provision of standard control services. However, that Act or instrument must be made by a ‘participating jurisdiction’. Given a participating jurisdiction must have passed a version of the NEL, an EBA made under the Commonwealth’s Fair Work Act 2009 appears to be imposed by a law other than of a participating jurisdiction. Further, the terms of an EBA itself are not contained in the Fair Work Act 2009.

420    Consistent with the above quoted extract from Attachment 7 to the Ausgrid Final Decision, the AER’s submissions do not contend that the EBAs, and labour costs more generally, are irrelevant to its assessment of required forecast opex. Indeed, the AER submits that the opex criteria include a realistic expectation of cost inputs and labour costs are one such input.

421    The AER’s recognition that the terms of an EBA “… could plausibly materially affect a service provider’s provision of standard control services” enlivens Networks NSW’s submission that its opex allowances must be such as to permit them to comply with their obligations under the EBAs. This, Networks NSW submits, includes (but is not limited to) making sufficient allowance for redundancy payments that Networks NSW will be required to pay under the EBAs in relation to the forecast reductions in employee numbers expected over the 2014-19 period.

422    However, as the following extract (without footnotes) from pp 7-41ff of Attachment 7 to the Ausgrid Final Decision illustrates, the AER rejects such submissions:

Consistent with our approach in our draft decision, we do not agree with these submissions. We are not denying the service providers the ability to transform their businesses and pay staff their entitlements. Recruitment and removal of staff are both ‘legitimate costs’ that the service providers would need to incur. However, we do not ‘fund’ the service providers for these (or any specific) activities. We assess a service provider's revealed opex in order to form a view on whether it reasonably reflects the opex a prudent and efficient (objective) service provider would require in the future to comply with its obligations. Service providers have broad discretion about all contractual arrangements and the manner in which they carry out those obligations.

423    Notwithstanding the AER’s statement that: “Recruitment and removal of staff are both ‘legitimate costs’ that the service providers would need to incur.” and its submission that labour costs are relevant to its assessment of required opex, its focus on benchmarking (in particular the EI model and its total opex outcomes) has led it to treat the EBAs as endogenous (rather than exogenous) – an endogenous factor to be ignored in the AER’s estimate of the total required opex made pursuant to r 6.12.1(4)(ii).

424    The AER’s approach to endogenous factors is illustrated at p 7-184ff of Attachment 7 to the Ausgrid Final Decision where the AER observed (without footnote):

Differences in work practices and operating techniques are endogenous. The AEMC provides guidance on what it considers to be an endogenous factor that should not be taken into account when benchmarking. It stated:

Endogenous factors not to be taken into account may include:

•    the nature of ownership of the NSP;

•    quality of management; and

•    financial decisions.

Differences in opex due to work practices and operating techniques are a direct outcome of management decisions. Therefore we do not provide an OEF adjustment for them. In general we consider that any OEFs that are a result of the quality of management do not meet the exogeneity OEF criterion. [Emphasis added]

425    While the extract from p 113 of the 2012 Rule Amendments determination quoted by the AER in the above extract provides some support for the AER’s reasoning, the AER’s transformation of the AEMC’s “may” to “should not” somewhat overextends the AEMC’s guidance. As a paragraph preceding the above extract at p 113 of the 2012 Rule Amendments determination shows, the AEMC’s view on when endogenous factors may or may not be taken into account is not an inflexible rule:

The final rule gives the AER discretion as to how and when it undertakes benchmarking in its decision-making. However, when undertaking a benchmarking exercise, circumstances exogenous to a NSP should generally be taken into account, and endogenous circumstances should generally not be considered. In respect of each NSP, the AER must exercise its judgement as to the circumstances which should or should not be included. [emphasis added]

426    While the AER’s submissions recognise that it is not an absolute rule, that is not how it was applied vis-à-vis the EBAs.

427    Thus, although the EBAs may lack either the NEL’s s 2D jurisdictional foundation or the genus of a safety or reliability standard etc of a r 6.5.6(a)(3) “regulatory requirement or obligation”, the Networks NSW DNSPs are bound by their EBAs as a matter of law. Unlike a contract, which according to its terms may be terminated, an EBA continues in force until its nominal expiry date after which it may, with the approval of the Fair Work Commission, be terminated by agreement between an employer and the employees it covers (ss 219-224 of the Fair Work Act 2009 (Cth)). Absent agreement, an application must be made to the Fair Work Commission to terminate an EBA. Termination may only occur if the Commission is satisfied that to do so is not contrary to the public interest and is appropriate in all the circumstances (ss 225-227 of the Fair Work Act 2009 (Cth)).

428    After reviewing the DNSPs’ revised regulatory proposals, the 2015 Deloitte Labour Report found (at p 16) that the primary driver of the NSW DNSPs’ labour costs being higher than their peers is the number of employees rather than cost per employee.

429    It appears from the 2015 Deloitte Labour Report that the higher number of employees is attributable to changes to Ministerial licence conditions in 2005 and in 2007 which placed considerable pressure on the NSW DNSPs during the 2009-14 regulatory period. In particular, clause 14.2 of the 1 December 2007 Design, Reliability and Performance Licence Conditions For Distribution Network Service Providers required that the NSW DNSPs be:

…as compliant as reasonably practicable with the applicable design planning criteria in Schedule 1 in relation to all network elements by 1 July 2014; and fully compliant with the applicable design planning criteria in Schedule 1 in relation to all network elements by 1 July 2019.

430    The 2015 Deloitte Labour Report noted at p 2ff that:

The 2014 Report set out our view that, given the licence requirement to be ‘as compliant as reasonably practicable’, the DNSPs acted in a manner consistent with a prudent and efficient DNSP by aiming to be largely compliant by 2014. Had they not strived to do so, and particularly had a major network incident occurred that could have been avoided had compliance with the new standards been achieved, the DNSPs would rightly have been criticised.

431    The report also agreed with the Networks NSW DNSPs’ view that the EBAs as a whole are no more generous in terms of base level wages and other employee conditions than those of their peers – that agreement being qualified by a note that the EBAs contain a range of generous terms and by a citation of the following passage from Essential’s Revised Regulatory Proposal:

In general we agree with the observations made in the [2014] Deloitte Report that high levels of unionisation in the electricity supply sector can result in more restrictive work practices which are difficult to remove once negotiated in enterprise agreements. This can lead to relatively inflexible, high cost and unproductive work practices once labour costs become entrenched in EBAs.

432    The 2015 Deloitte Labour Report also found (p 18) that the majority of distributors are not allowed to carry out forced redundancies as a result of provisions in their respective EBAs and that this is an important impediment to any program of reductions in workforce size, outside of natural attrition.

433    Networks NSW rightly submit that insofar as the 2014 and 2015 Deloitte Labour Reports suggested inefficiencies in the NSW DNSPs’ labour practices, the reports do not quantify those inefficiencies and provide no corroboration of inefficiencies of the scale identified by the AER in reliance on the EI model.

434    As Networks NSW submit, Ausgrid, Essential and Endeavour are, and remain, bound by the EBAs, and the EBAs should not be viewed as an endogenous managerial choice. At least not in circumstances where the AER has quite radically shifted from an itemised bottom-up approach to assessing opex to benchmarking total opex per se – particularly where that benchmarking has not been exposed to the rigours of the consultation the NEL and NER envisage for such a radical change.

435    The AER, having flagged its approach to EBAs, may be better placed to defend that approach when its approach to benchmarking is on a firmer footing and where there is hard information to support a finding that a DNSP’s labour practices are inefficient vis-à-vis its peers. But, having regard to the 2015 Deloitte Labour Report, that is not the case here. Here the Networks NSW DNSPs are shackled with EBAs that effectively restrict their ability to efficiently reduce their workforce in the regulatory period – that restriction being attributable to an exogenous factor, namely, the Fair Work Act 2009 (Cth).

436    It may be said that, in the view of the Tribunal, it is the policy of the legislative arm of government that, to the extent that the EBAs are (if they are) an inefficient imposition on the DNSPs, they are nevertheless a cost to be borne by the consumers of electricity. The AER may, of course, assess the extent of inefficiency reflected by the number of employees. It may review the terms upon which the number of employees may be reduced under the EBAs. It may consider the timing for the expiration of the EBAs. But, having regard to the regulatory prescriptions, the Tribunal does not accept that it may, by the use of the EI model, simply select the measurement of efficiency which it did in this respect without regard to the obligations under the EBAs as they presently exist. Over time, and probably during the current regulatory period, any such inefficiencies as the AER considers to exist may progressively be reduced by the reduction in employee numbers to what the AER considers to be the efficient number, and any allowances under the EBAs (as they expire) which the AER considers to be inefficient may, over the same period, also be reduced to an efficient level.

437    It is not necessary to canvass Networks NSW’s other grounds for challenging the AER’s labour costs decisions as set out in [494] of its submissions, namely:

(a)    there is no proper basis for Deloitte’s conclusions that:

(i)    the Networks NSW DNSPs employ too many staff; or

(ii)    that their past practice of hiring permanent labour left them with too many staff;

(c)    the AER failed to take into account the efficiency programs implemented by the Networks NSW DNSPs; and

(d)    in relation to Essential, the AER incorrectly weighed Deloitte’s view that there is a possibility Essential could realise significant cost savings by using a Local Service Agent model.

438    In that regard, however, it is proper to note that Networks NSW’s submissions assert approximately $3 billion in efficiencies in capex and opex over the 2009-14 regulatory control period and that within these savings were reductions in the number of employees.

439    In Attachment 7 to the Ausgrid Final Decision at p 7-158, and correspondingly in Attachment 7 to the Endeavour Energy and Essential Energy Final Decisions, the AER concluded that the service providers had managed to achieve significant reductions in labour costs through reducing the number of staff, and were forecasting further savings, but that most of the reductions took place after the 2012/13 base year. The AER adopted its position based on the 2015 Deloitte Labour Report which stated that the scale and speed of the reductions in staff suggested there were still cost efficiencies to be realised.

440    The AER at p 7-286 of Attachment 7 to the Ausgrid Final Decision stated that the efficiency programs represented “catch up in productivity”, and that efficient distributors would not be implementing the same productivity improvements.

441    The AER at p 7-52 of Attachment 7 to its Ausgrid Final Decision noted that Endeavour’s revealed opex was lower than that for Essential and Ausgrid because it had implemented its efficiency programs earlier and to a greater extent than its two peers. The AER still considered that there were efficiency gains to be realised in the 2012-13 base year.

442    As the 2015 Deloitte Labour Report contended that the NSW DNSPs did not have an efficient workforce in the base year and compared employee numbers across the regulatory control period with those of other DNSPs, the AER will have to consider the extent to which the efficiency programs implemented by the NSW DNSPs into the 2014-19 regulatory control period have been effective.

The AER’s use of the EI model as the sole determinative of opex

443    It is Networks NSW’s submission that the AER has used an experimental model as the sole determinant of opex, contrary to sensible regulatory practice including significant experience of modelling in other jurisdictions.

444    Networks NSW’s submission is cogent. There are lessons to be learnt from overseas regulators, particularly the UK regulator, Ofgem, which the AER cites in support of its approach – see for example Attachment 7 to the Ausgrid Final Decision, at p 7-60, where, noting that Ofgem assesses totex rather than capex and opex separately, the AER states:

Our approach of using benchmarking as a basis for making adjustments to opex is consistent with Ofgem’s approach.

445    Ofgem is a regulator with over a decade’s experience in benchmarking and, because of that long history, is the primary point of reference when it comes to assessing the soundness of another regulator’s approach to benchmarking and its benchmarking models. As the Huegin Report observes, at p 20:

The Productivity Commission report and the AER’s Guideline and the associated documents that fed into both rely heavily on the experiences of regulators such as OFGEM.

446    It is, however, Networks NSW’s submission that the AER’s approach is nothing like Ofgem’s. That submission is supported by the following observation by CEPA’s Chairman (Professor David Newbery, who has led numerous CEPA assignments for Ofgem) (see CEPA Report at p 30):

Using a top-down model to assess opex (or totex) is consistent with best practice in the UK as it does not enforce choices on the companies as to which activities to undertake, however, using only a single model with few explanatory variables and no bottom-up assessment is not best practice. For instance, Ofgem in its RIIO-ED1 decision stated:

Our use of three models [two top-down and one bottom-up] acknowledges that there is no definitive answer for assessing comparative efficiency and we expect the models to give different results. There are advantages and disadvantages to each approach. Totex models internalize operational expenditure (opex) and capital expenditure (capex) trade-offs and are relatively immune to cost categorisation issues. They give an aggregate view of efficiency. The bottom-up, activity-level analysis has activity drivers that can more closely match the costs being considered.

His reference is to Ofgem (2014), RIIO-ED1: Final determinations for the slow-track electricity distribution companies: Business plan expenditure assessment.

447    In contrast to the AER’s post-modelling OEF adjustments, Ofgem adjusts the data supplied by the Distribution Network Operators (DNOs) (the UK equivalent of DNSPs) it regulates prior to undertaking its modelling – see Ofgem’s RIIO-ED1 Final determinations for the slow-track electricity distribution companies, 28 November 2014, at p 41:

We consider whether DNO submitted data require adjustments prior to carrying out our comparative benchmarking. This is to ensure the comparisons are on a like for like basis. Where we decide adjustments are appropriate, we adjust the DNO submitted costs before our totex and disaggregated assessments. These adjustments fall into four broad categories: regional labour costs; company specific factors; exclusions from totex models; and other adjustments.

448    As the Huegin Report notes at p 23:

The OFGEM approach is … based on many years of regulatory reporting to a consistent format and common reporting timeframes which are more favourable conditions for data accuracy … [than the Australian staggered reporting and/or regulatory determination cycle]. Yet OFGEM still recognise the need to normalise the data prior to modelling. Regional and company specific factor adjustments recognise that particular locations and particular networks incur costs beyond the control of the operating business and these costs should not be included in efficiency models.

449    Frontier also comments favourably on Ofgem’s approach, which results in its final allowances comprising 25 percent of the DNOs’ submitted costs and 75 percent of the outcomes of its benchmarking models, and notes in the Frontier Report (at p 96) that this is despite the fact that:

•    Ofgem uses a ‘toolkit’ of approaches to determine its benchmarking target, including top-down econometric models, bottom-up unit cost analysis, bottom-up engineering assessments, assessments of historic costs and assessments of forecast costs, in order to provide the scope to cross check and sense check the efficiency estimates derived by any single approach.

•    The quality of data available to Ofgem is significantly better than the data available to the AER, owing to the prodigious effort that has been invested in improving the underlying data, in particular the cost data.

•    There has been a significant amount of engagement with the …[DNOs] … to develop the Ofgem models in the first place, allowing them to comment on Ofgem’s technique, cost driver choice, the quality of their own and other’s data, cost drivers that are not adequately captured by the models, differences in business model that may be picked up as inefficiency and any circumstances otherwise unique to the company that should be adjusted for or at least understood when interpreting the results.

450    Noting that Ofgem has undertaken a decade or more of development work in respect of its data collection, Frontier also observes, at p 103, that the AER should anticipate the need to undertake a similar programme of work and that:

We recognise that the AER has gone through a process to develop RIN templates but, set against Ofgem’s experience, it would be naïve for the AER to think that the RIN data obtained to date is sufficiently free from errors and inconsistencies as to warrant the degree of confidence the AER has placed in its modelling.

451    Based on its review of the Australian data and its experience of applying benchmarking techniques across Europe, Frontier further observes that the AER is regulating a sector with an unprecedented degree of heterogeneity. It notes in that regard that one of the largest DNSPs, Essential, serves an area significantly greater than France and another, Ergon, an area significantly greater than France, the UK and Spain combined. It is Frontier’s opinion at p 104 that:

These statistics alone ought give the AER pause to consider whether it is sensible to treat networks of such scale the same as networks that serve much smaller geographies. Yet, the AER appears to have given no particular consideration to the unique circumstances faced by these networks. Instead, the AER has relied on very crude modelling tools to capture the effects of extreme scale, rurality, and sparsity. As a result, the AER’s modelling identifies these two networks as among the least efficient DNSPs in Australia. This is very surprising to us because European regulators, such as Ofgem, engage closely with networks with much less extreme characteristics than Essential Energy and Ergon Energy to understand any important factors that their modelling may have failed to capture.

452    Again drawing on its experience of practice in Europe, Frontier observes that:

it is common for regulators to seek to triangulate “top down” benchmarking, of the kind produced by EI, with other sources of information, e.g. review by expert engineering consultants of unit costs, volumes of work, policies and practices in order to gain a more holistic view of network performance.

453    While ActewAGL accepts that r 6.5.6(e)(4) requires the AER to have regard to the benchmark opex that would be incurred by an efficient DNSP, it submits that the NER do not require the AER to give benchmarked opex any particular weight or mandate that the AER must give benchmarking a weight that is disproportionate to its probative value. In ActewAGL’s submission, the AER’s benchmarking methodology is incapable of providing much, if any, guidance whether ActewAGL’s forecast opex reasonably reflects the opex criteria and it is unreasonable to use it as the principal basis for the AER’s decisions.

454    It is also ActewAGL’s submission that:

(a)    although its Final Decision contains many lengthy descriptions and diagrammatic representations of the decision-making processes adopted by the AER (eg Step 1 in Table 7.4 Arriving at our alternative estimate of base opex, Attachment 7 to the ActewAGL Final Decision at p 7-26), the sole basis on which the AER estimated ActewAGL’s opex was the EI model and the post-modelling OEF adjustments; and

(b)    because that estimate was lower than ActewAGL’s forecast, and the AER was not satisfied by ActewAGL’s explanation of the difference, the AER adopted its own estimate based on the EI model.

455    That the AER’s decision making process is as submitted by ActewAGL may be seen by reference to p 16 of the Overview to the ActewAGL Final Decision where it is stated:

In this final decision we used our preferred benchmarking model [the EI model] as the starting point to arrive at an alternative estimate of opex that reasonably reflects an efficient base level.

456    This is confirmed by reference to pp 15-16 of the Overview to the Ausgrid Final Decision:

In its revised proposal, Ausgrid based its opex forecast on its historical costs. … we are not satisfied that those forecasts are the appropriate starting point for forecasting its opex for 2015–19.

Instead, we have used our benchmarking analysis as the starting point for assessing Ausgrid's base level of opex. We are satisfied that our resulting opex forecast reasonably reflects the opex criteria.

… … …

In this final decision we used our preferred benchmarking model [the EI model] as the starting point to arrive at an alternative estimate of opex that reasonably reflects an efficient base level.

457    Thus, ActewAGL rightly submits that considerable caution ought be exercised about the manner in which the Final Decisions describe the AER’s decision-making processes, in particular when claiming that the DNSPs’ opex forecasts were the starting point for its estimate of the required opex pursuant to r 6.12.1(4)(ii). It is noted that ActewAGL’s submission goes further by saying that the AER’s statement that it “started” with ActewAGL’s forecast is “window dressing” and “meaningless” when viewed in light of the above quoted passages from the Overviews to the Final Decisions.

458    Attachment 7 to each of the Final Decisions in issue makes it plain that the AER arrived at its own estimate of opex based on the EI model that was lower than the DNSP’s and, as it was not satisfied that there was an explanation for the difference, rejected the DNSP’s forecast and deemed its own as the appropriate estimate. The DNSP’s forecast did not otherwise play a role in the AER's decision-making process, whether as a “starting point” or otherwise.

459    In response to the submissions to the effect that it is in error in not using the DNSPs’ forecasts as a starting point, the AER submits that:

(a)    the NER do not stipulate that it must undertake a bottom-up engineering approach;

(b)    it exposed the DNSPs’ forecasts to a multitude of assessment techniques using the information contained in the DNSPs’ regulatory proposals, eg

(i)    category analysis disaggregated the costs in the opex forecasts and compared those costs to the DNSPs’ peers; and

(ii)    the reviews undertaken by the AER’s consultants, Deloitte and EMCa, relied upon the information contained in the regulatory proposals.

460    Responding to the applicants’ submissions that it applied the EI model in a deterministic manner, the AER submits that:

(a)    it considered a range of analytical methods (not limited to the EI model) before concluding that it did not accept the DNSPs’ opex forecasts;

(b)    once it decided not to accept the DNSPs’ opex forecasts, its task was to make its own estimate of forecast opex;

(c)    it checked the results of the EI model against two other econometric models and the index-based opex MPFP technique (noting that the last technique does not use overseas data and uses a different output specification); and

(d)    all of the techniques produced similar results for each DNSP.

461    Viewed in light of the following acute observation by Networks NSW’s expert, Frontier, in the Frontier Report at p 105, the AER’s submissions are tenuous:

… the AER appears to have put undue faith in the ability of it, and its advisers, to develop a single benchmarking model (or suite of very closely related models, all derived from the same data and missing the same wider review of factors and sense checks) that can capture very well relative inefficiency.

462    Having regard to the conclusions that may be drawn from the above consideration of the applicants’ submissions challenging the AER’s approach to determining the DNSPs’ opex, it is not necessary to consider the following challenges to the AER’s benchmarking methodology, other than to note them:

(a)    the AER’s failure to corroborate the results of the EI model;

(b)    the AER’s failure to have proper regard to the DNSPs’ endogenous circumstances; and

(c)    whether the AER had proper regard to the consequences of its estimates of the DNSPs’ opex.

Consideration of the Principal Opex Issue

463    Conceptually, the parties’ submissions address the principal issue (whether the AER’s application of the EI model discharged its obligations under rr 6.5.6 and 6.12.1(4)) at two levels. The first involves the effect of the 2012 Rule Amendments, particularly the changes to r 6.5.6 and other rules relevant to its interpretation and application. The second, contingent on a DNSP establishing that the AER’s application of the EI model failed to discharge its obligations, involves the effect of the 2013 Legislative Amendments, particularly the introduction of s 71P(2a) and (2b).

464    While the DNSPs’ submissions in support of their opex forecasts tend to an interpretation and application of the 2012 Rule Amendments favouring the RPP, the AER and PIAC’s submissions in support of lower opex allowances tend to an interpretation and application of the 2012 Rule Amendments favouring the NEO.

465    Insofar as there is such a tendency in a party’s submission, it is rejected. The 2012 Rule Amendments simply do not contemplate that the NEO and the RPP are at cross purposes, or that their meaning has changed. They do not lead to a fresh policy subsidy to consumers by way of an artificially low opex figure or a bonus to a DNSP by way of an artificially high figure. Indeed, the AER (and the Tribunal on review) has a delicate task. Both must be conscious of the interests of consumers and the AER is bound to carefully scrutinise the information provided to it in support of a DNSP’s opex allowance. It must also have regard to the legitimate business interests of a DNSP and should not put itself in an adversarial position in relation to the DNSP such that it may be perceived as a champion of consumers – cf Re East Australian Pipeline Limited [2004] ACompT 8 at [16] and [33].

466    The 2012 Rule Amendments together with the 2013 Legislative Amendments give rise to a multifaceted regulatory regime calling for a balance between the interests of consumers on the one hand and the interests of DNSPs on the other. The observations of the High Court (per Gleeson CJ, Heydon and Crennan JJ) in East Australian Pipeline at [39] in relation to an earlier gas access regulatory regime are most apt in the Tribunal’s consideration of the regime now before it:

Stripped to essentials, such a regime is at least intended to allow efficient costs recovery to a service provider and at the same time ensure pricing arrangements for the consuming public which reflect the benefits of competition, despite the provision of such services by monopolies. The balancing of those objectives properly has a natural flow-on effect for future investment in infrastructure in Australia.

467    As noted above, there are a number of issues with the EI model and the AER’s application of it:

•    inadequacies in the EI model’s data set and comparability issues:

o    the RIN data;

o    the overseas data;

o    the country dummy variables;

•    the lowering of the EI model’s comparison point;

•    the OEF adjustments;

•    in circumstances where economic benchmarking is in its infancy in Australia, the reliance on qualitative analysis rather than bottom-up quantitative assessment to test issues such as those raised by the DNSPs regarding their vegetation management opex; and

•    the AER’s use of the EI model as the sole or principal determinative of opex.

468    As a first step in its consideration, the AER was required to decide whether it was satisfied that the total of the forecast opex in the Revised Regulatory Proposals of each of the DNSPs reasonably reflected each of the operating expenditure criteria set out in r 6.5.6(c). The AER’s analysis of the Networks NSW and ActewAGL Revised Regulatory Proposals led to it expressing concerns about a number of components or elements of those proposals. The Tribunal is not persuaded, having regard to those concerns, that the AER’s lack of satisfaction on that question exposes a ground of review. There was material upon which it could have reached that conclusion. There is no demonstrated ground of review made out in that step, even though (not surprisingly) there is considerable debate in the submissions about a number of the matters considered by the AER.

469    Consequently, the Tribunal does not consider that the step taken by the AER under r 6.5.6(d) involved error on its part so as to enliven any grounds of review under s 71C of the NEL.

470    Rule 6.12.1(4)(ii) then obliges the AER, on making its Final Decisions in relation to each of the DNSPs, to include with its reasons for the lack of satisfaction under r 6.5.6(d) an estimate of the forecast opex for each of the DNSPs for the 2015-19 regulatory control period that the AER:

… is satisfied reasonably reflects the operating expenditure criteria, taking into account the operating expenditure factors (see rule 6.5.6(e)).

471    As is apparent from the above, there are a number of respects in which, and reasons for which, the Tribunal on these applications is of the view that one or more of the grounds of review under s 71C(1) are made out. At a general level, that is because the AER placed too much weight on the outcome of the EI model. That, in the Tribunal’s view, represents an incorrect exercise of the AER’s discretion about the use to which the EI model should have been put.

472    Underlying that view are a series of concerns about the inputs to the EI model and the OEF adjustments (including those of concern to PIAC), including the AER’s treatment of the vegetation management costs of Essential, Endeavour and ActewAGL, and further including the AER’s treatment of the labour costs of the Networks NSW DNSPs. Those concerns can generally be described as errors by the AER in its findings of fact, as discussed in detail above. Those errors do not simply reflect the AER’s choice between competing expert views. There are underlying elements of the EI model which mean that the AER at this point (accepting that the available Australian data is not sufficiently extensive for appropriate modelling) should not have placed the weight it did on the output of the EI model. As the earlier Introduction to these reasons discusses, there may be room for debate about whether a particular step shows an error of fact in a finding of fact, or is an incorrect exercise of a discretion. It would be possible, in a number of the specific instances (in particular in relation to the OEFs), to use either description; the difference is largely one of semantics. The line between the two is often hard to draw. The Tribunal, having regard to its conclusion in the preceding paragraph, does not think it is helpful to embark on that exercise.

The 2012 Rule Amendments

473    It is desirable to add some further comments on this topic. On the one hand, the AER and PIAC perceive the 2012 Rule Amendments as shifting the emphasis in r 6.5.6 away from the individual actual circumstances of the DNSP whose opex forecast is subject to assessment, towards the assessment of each DNSP’s opex forecast against a benchmark entity.

474    On the other hand, the DNSPs emphasise that:

(a)    each of the r 6.5.6 opex objectives, the opex criteria and the opex factors; and

(b)    in particular, each of the three r 6.5.6(c) opex criteria,

should not be conflated to arrive at a one size fits all benchmark assessment of a DNSP’s opex.

475    Also, the DNSPs perceive that the AER’s focus on the new r 6.5.6(e)(4) (benchmark opex that would be incurred by an efficient DNSP) led it to ignore, or give insufficient weight to, other opex factors, in particular:

(a)    the actual and expected opex of the DNSP during any preceding regulatory control periods (r 6.5.6(e)(5));

(b)    the substitution possibilities between opex and capex (r 6.5.6(e)(7)); and

(c)    whether the DNSP’s opex forecast is consistent with the r 6.5.8 EBSS or the r 6.6.2 STPIS (r 6.5.6(e)(8)).

476    Furthermore, to the extent that the DNSPs acknowledge that the 2012 Rule Amendments gave some greater standing to benchmarking, the DNSPs are critical of the AER’s benchmarking methodology inputs and outcomes.

477    The parties seek to advance their respective perceptions of the impact of the 2012 Rule Amendments by reference to extrinsic material within the ambit of Schedule 2 of the NEL, namely, the AEMC’s Final Position Paper. Reliance was also placed, as review related material, on the Productivity Commission’s report: Productivity Commission, Electricity network regulatory frameworks, inquiry report, 9 April 2013.

478    As often is the outcome when one party or another goes from the words of a section or a rule to the words of another document to seek to bolster their interpretation of the section or rule, apt words of comfort for the position of either party may be found in the extrinsic and review related material.

479    However, having regard to the Tribunal’s conclusion (as to which see below) that the AER was (because of inherent weaknesses in the EI model and the ex post adjustments to its outcomes) wrong to rely on the EI model to estimate the DNSPs’ required opex, it is not necessary to explore and rule on:

(a)    the way that the parties advance the extrinsic and the review related material in support of their interpretation of r 6.5.6; or

(b)    the minutiae of the parties’ lengthy submissions on how r 6.5.6 should be interpreted.

480    Suffice to say at this point that in a context where it is applying benchmarking for the first time, the AER’s application of the EI model gave a discordant weight to r 6.5.6(e)(4) (benchmark opex that would be incurred by an efficient DNSP) vis-à-vis the other r 6.5.6(e) opex factors.

481    It is nevertheless appropriate to record that, as the Tribunal observed above, it does not consider that the 2013 Legislative Amendments changed the meaning of the NEO or the RPP, or their relationship. Clearly, the introduction of a “materially preferable NEO decision” and “preferable reviewable regulatory decision” and ss 16(1)(d) and 71P(2a) and (2b) in the NEL (and the complementary changes in the NGL) refined the focus of the respective decisions of the AER and the Tribunal, but if there were an intention to change ss 7 and 7A of the NEL (or ss 23 and 24 of the NGL), that would have been clearly expressed. The same general comment may be made about the 2012 Rule Amendments themselves, at least in relation to r 6.5.6 of the NER. The changes clearly refine the focus of both the AER and on review the Tribunal. But the concepts of “operating expenditure criteria” and “operating expenditure factors” pre-existed the 2012 Rule Amendments. Apart from the particular changes, some of which are clearly to accommodate the new and more sophisticated processes expected of the AER (eg in r 6.8 of the NER), and noting the amendments made to r 6.5.6(c) and (e), there is no reason to think that one or other of the opex criteria or the opex factors is intended to have a pre-eminence over that of the others.

Section 71P(2a) and (2b) of the NEL

482    The Tribunal, having been satisfied that there are grounds of review made out in relation to the AER’s opex allowance for each of the Networks NSW DNSPs, and for ActewAGL, has the powers under s 71P(2) available to it. They are expressly subject to s 71P(2a) as explained in s 71P(2b). In the case of the Networks NSW DNSPs, the appropriateness of granting particular relief, or indeed of granting any relief at all, is a more complex question because of the findings about PIAC’s application and in particular how the OEFs were addressed.

483    It is premature to deal with those questions at this point, because of the need to consider the inter-relationship of the constituent elements of the relevant Final Decisions. Where there are other elements of the four Final Decisions which are also the subject of challenge, it is appropriate at first to address those issues.

484    However, as is almost self-evident, if the Tribunal were to be satisfied in terms of s 71P(2a)(c), the Tribunal would not decide to vary the Final Decisions under s 71P(2)(b), because it is not satisfied that doing so would avoid an assessment of such complexity that the preferable course is to set aside the Final Decisions and remit the matters to the AER, having regard to these reasons. In short, the Tribunal is left with the options of affirming the four Final Decisions or setting them aside and remitting them to the AER under s 71P(2)(c).

485    That really follows from a reading of the submissions addressing the principal issue under this heading: for every competing argument there is a supporting expert or experts. Given that context, the use of the phrase “materially preferable” requires the Tribunal to look through the inevitable conflict and difference of views between experts, all advocating positions which they regard as being preferable, and to determine whether an advocated materially preferable NEO decision is, indeed, materially preferable: ie a decision which, notwithstanding that divergence of views, is sufficiently compelling to be seen by the Tribunal as “materially preferable” to that advocated by the AER: cf: Wellington International Airport Limited & Ors v Commerce Commission [2013] NZHC 3289 at [164].

Transition Path

486    One further matter should be mentioned. It was raised by each of Networks NSW, ActewAGL and Ergon.

487    They contended that, if the AER was correct in its conclusion that the distributors’ opex forecasts did not reflect efficient costs, the AER’s decision involved an incorrect exercise of discretion, or an unreasonable decision, in failing to provide a transition path for the DNSPs to reduce their opex to efficient levels (as decided by the AER) or any allowance for the costs involved in transitioning. Ergon also alleges that the AER made an error of fact in finding that a transition path was not required under r 6.5.5(c)(3).

488    Similar grounds are also raised by Networks NSW, ActewAGL and Ergon in relation to the X factor. The Tribunal’s reasons in relation to those grounds are outlined in the X factor section of these reasons.

489    It is submitted that, if there is a step decrease in opex between two regulatory periods, the AER should provide an allowance over and above the AER’s opex allowance, because of the time the particular DNSP would have to take, and the costs that it would have to incur, to transition its business to one that can operate at the AER’s proposed opex allowance.

490    The justification for such an allowance was summarised by Networks NSW as follows. First, paying redundancy expenses constitutes the efficient, prudent and realistic costs of compliance with a “regulatory obligation or requirement” within the meaning of r 6.5.6(a)(2) of the NER – namely the provisions for redundancy pay contained in the EBAs which bind the Networks NSW businesses. The reasons why Networks NSW says compliance with EBA provisions constitutes a “regulatory obligation or requirement” are set out in relation to opex above.

491    Secondly, the AER’s view that the Networks NSW businesses should bear redundancy costs is predicated to a substantial extent on the proposition that the Networks NSW businesses acted inefficiently or imprudently in hiring a permanent workforce on EBAs in order to meet increased licence conditions. It is said that this is not correct and that, if those costs were prudent and efficient, the costs of reducing that workforce now that the need for labour is reduced should similarly be seen as prudent and efficient.

492    Thirdly, it is submitted that an immediate transition to a materially lower level of opex is neither prudent nor realistic, and that a smoother transition path with respect to any required opex reductions is in the long term interests of consumers. That is because, amongst other things, it would provide the relevant DNSP with a reasonable opportunity to recover at least their efficient costs and would provide sufficient incentive for an entity to invest in a manner that will best achieve the NEO: s 7A(2) of the NEL.

493    Ergon also notes that these considerations are equally important in the context of regulatory decisions which the AER is required to make in relation to capex, as, similarly, capex forecasts must reasonably reflect a DNSP’s efficient costs, the costs that would be incurred by a prudent service provider and a realistic expectation of the DNSP’s demand forecasts and cost inputs.

494    Due to the Tribunal’s findings on opex, the Tribunal does not, in the circumstances, need to determine whether these contentions by Networks NSW, ActewAGL and Ergon are correct. When the AER revisits and redetermines the opex allowance, it will have to consider the costs involved in transitioning. It will do so at a time, and in relation to revenue streams, which will require it to make a fresh decision. The Tribunal is anxious not to inhibit the AER at this point in exercising its discretion in that regard.

Conclusion on Opex (subject to s 71P(2a) and (2b))

495    Having regard to the DNSPs’ and PIAC’s submissions as a whole, the Tribunal concludes that, in its reliance on the EI model, the AER failed to discharge its obligations under rr 6.5.6 and 6.12.1(4).

496    In reaching that conclusion, the Tribunal has the following matters in mind.

(a)    The AER’s undue reliance on the EI model as a determinative factor in the AER’s estimation of each DNSP’s required opex pursuant to r 6.12.1(4)(ii). That reliance was placed on the model notwithstanding that the AER recognised it had limitations with respect to the specification of outputs and inputs, data imperfections, and other uncertainties, in a context where economic benchmarking is being used for the first time to set opex allowances – see eg: Attachment 7 to the Endeavour Final Decision, at pp 7-268 and 7-269, Attachment 7 to the Ausgrid Final Decision, at p 7-64 and Attachment 7 to the ActewAGL Final Decision at p 7-250.

(b)    The restricted opportunity afforded to the parties (and denied to third parties, as the AER’s obligation to consult had passed) to test the veracity of the EI model. That is not to cast an adverse reflection on the AER. Nor is it to suggest that the AER did not conscientiously examine submissions it received after its draft decisions. It is simply to recognise that the AER has a large and most difficult task to perform within a limited timeframe – a timeframe that did not permit it to conduct the consultation required to:

(i)    adequately test the data in the EI model and the other models to which it had regard; and

(ii)    expose the DNSPs’ consultants’ reports to the rigours of examination that the AER’s consultation obligations are designed to foster.

497    The Tribunal will address later in these reasons, under the heading “The Tribunal’s Determination” the application of s 71P(2a) and (2b).

X FACTOR

Background

498    This issue is confined to the Networks NSW DNSPs. For reasons which will shortly be apparent, it is an element of the Final Decisions of the AER in relation to the Networks NSW DNSPs which will need to be revisited by the AER. In short, it will follow from any adjustment to be made to the opex allowances of the Networks NSW DNSPs that the X-factor will have to be re-applied.

499    It is not a matter upon which PIAC made submissions, either as an applicant in relation to those Final Decisions or as an intervener. Nor did any other intervener address submissions in relation to it.

500    It is nevertheless appropriate for the Tribunal to address it as a separate issue, because Networks NSW assert that grounds of review may be made out in relation to its application in any event.

501    One of the constituent decisions of the relevant Final Decisions concerns the control mechanism for standard control services (r 6.12.1(11)). One part of these mechanisms is the “X factor”. The X factor determined for a particular year represents the real rate of change in revenues for that year that has been approved by the AER (before any annual adjustments). In effect, it operates as a “smoothing” factor for revenue over consecutive years.

502    The X factor to be used in the control mechanism is to be determined by reference to the requirements set out in r 6.5.9(b) of the NER. Rule 6.5.9(b) relevantly provides that the X factor:

(a)    must be set by the AER with regard to the DNSP’s total revenue requirement for the regulatory control period (6.5.9(b)(1));

(b)    must be such as to minimise, as far as reasonably possible, variance between expected revenue for the last regulatory year of that regulatory control period and the annual revenue requirement for that last regulatory year (6.5.9(b)(2)); and

(c)    must conform with the requirement that (relevantly for standard control services) the X factor be designed to equalise (in terms of net present value) the revenue to be earned by the DNSP from the provision of standard control services over the regulatory control period with the provider’s total revenue requirement for the regulatory control period (6.5.9(b)(3)(i)).

503    The X factor is generally calculated so as to allow for smoothing of revenues subject to the requirement that both smoothed and unsmoothed revenues are equal in net present value (NPV) terms over a 5 year regulatory control period. Due to the transitional rules, the regulatory years 2014-19 were split over two regulatory control periods, with a “transitional regulatory control period” for 2014-15 and a “subsequent regulatory control period” for 2015-19: see r 11.55.1 of the NER. For the transitional regulatory control period, the AER determined a “placeholder revenue” separately from the annual revenue requirement for 2014-15 established through the determination process.
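The following is an illustrative sketch only, using hypothetical figures and an assumed discount rate; it is not the AER’s post-tax revenue model, and inflation, tax and annual adjustments are ignored. It shows the constraint just described: the X factors must produce a smoothed revenue path that equals the unsmoothed annual revenue requirements in net present value terms over the period. In the sketch, the first year’s revenue is locked in and the second-year X factor is solved so that the NPV condition holds, producing a large first-step reduction followed by small annual changes.

# Hypothetical sketch only: invented revenue figures and discount rate.
rate = 0.07                                         # assumed discount rate
unsmoothed = [1000.0, 900.0, 880.0, 870.0, 860.0]   # hypothetical annual revenue requirements ($m)
first_year_revenue = 1000.0                         # locked-in first-year (e.g. placeholder) revenue
later_x = [0.01, 0.01, 0.01]                        # assumed small X factors for years 3-5

def npv(cash_flows, r):
    # present value of a series of year-end cash flows
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

# The smoothed path is R1, R1*(1-X2), R1*(1-X2)*(1-X3), ... so its NPV is linear
# in (1 - X2); solve for the year-2 X factor that makes the two NPVs equal.
target = npv(unsmoothed, rate)
tail, factor = 0.0, 1.0
for t, x in enumerate([0.0] + later_x, start=2):
    factor *= (1 - x)
    tail += first_year_revenue * factor / (1 + rate) ** t
x2 = 1 - (target - first_year_revenue / (1 + rate)) / tail

smoothed = [first_year_revenue]
for x in [x2] + later_x:
    smoothed.append(smoothed[-1] * (1 - x))

print(f"year-2 X factor: {x2:.3f}")
print(f"NPV unsmoothed: {npv(unsmoothed, rate):.1f}; NPV smoothed: {npv(smoothed, rate):.1f}")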

504    In relation to each of the Networks NSW DNSPs, the “placeholder revenue” amount was higher than the annual revenue requirement which was finally determined.

505    In effect, after making a determination about the annual revenue requirement for each year in the regulatory period and the total revenue requirement for the regulatory control period, the AER is required to adjust and “smooth” the revenue in accordance with the NER.

506    The dispute in relation to the control mechanism for standard control services concerns the AER’s relative allocation of revenue for each year of the regulatory period 2014-19 and how the difference between the “placeholder revenue” amount and the annual revenue requirement, and the differences between annual revenue for each year, should be minimised or “smoothed”.

The X Factor Decision

507    The following description is taken from the AER’s submissions, and to a degree incorporates its explanations for the steps it took.

508    In 2014, the AER determined placeholder revenue allowances for the transitional regulatory control period. In the Draft and the Final Decisions for the years 2015-19, the AER made a full regulatory determination for each year, and accounted for any adjustment amount related to the transitional regulatory control period. As part of this process, it was required to determine annual revenue requirements for each year of the five year period (2014-19) and use a NPV neutral true-up mechanism to account for any difference between:

(a)    the placeholder revenue for the transitional regulatory control period; and

(b)    the annual revenue requirement for 2014-15 established through the full determination process.

509    To give effect to the true-up, the AER set each of the Networks NSW DNSP’s first year expected revenue in the post-tax revenue model equal to the AER approved placeholder revenue for 2014-15: see eg Ausgrid Final Decision, Attachment 1 at p 1-14. The AER considered that this was the only practical option, as distribution and transmission prices were set for 2014-15 based on these approved placeholder amounts. This meant that the difference in revenues for 2014-15 between the transitional and Final Decisions needed to be accounted for in the 2015-19 regulatory control period. That is, the placeholder revenue for 2014-15 from the transitional determination provided a base from which the expected revenues (smoothed) for the remaining four years of the 2014-19 period were calculated, giving effect to the true-up and returning the difference to customers over the 2015-19 period.

510    In determining the X factors for the remaining four years of the regulatory control period, the AER was constrained in smoothing by the transitional year X factor: that factor was locked in as it had been used to determine 2014-15 prices that were approved. Further, as the AER determined (in its final decision) that the actual revenue requirement for 2014-15 was lower than that approved for the transitional determination, the NER’s transitional requirement for a true-up in relation to 2014-15 revenues (see r 11.56.4(h)-(i)) meant that there were revenues received in 2014-15 that had to be returned to customers and therefore reflected in future years’ X factors.

511    By reason of these circumstances, it was difficult to apply the AER’s usual smoothing approach, which aimed not only to smooth within the regulatory control period, but to minimise any step change in revenues from the end of the regulatory control period (2018-19) to the start of the next regulatory control period (2019-20). The NER’s transitional requirements removed the usual requirement to avoid such step changes, but the AER considered that as a matter of policy and consistency with the NEO it should still avoid too big a potential step change in revenues across regulatory control periods. Accordingly, the AER widened its usual tolerance limit of a +3 percent step change to as much as 10 percent for each of the Networks NSW DNSPs (Ausgrid: 10 percent for distribution and transmission; Endeavour: 10 percent; Essential: 10 percent): see Ausgrid Final Decision, Attachment 1 at p 1-15; Essential Final Decision, Attachment 1 at p 1-12; and Endeavour Final Decision, Attachment 1 at p 1-11.

512    Within these constraints (ensuring NPV neutrality, dealing with transitional year issues, and avoiding large revenue step changes at the end of the period) the AER smoothed revenues as much as it could by determining X factors that would not result in revenue falling and then rising again in subsequent years. The best profile that was achievable within the constraints required a significant revenue reduction in 2015-16 (as determined in the AER Final Decisions) which would then allow revenues to remain relatively flat for the rest of the period.

513    Between the Draft Decisions and Final Decisions, the AER made modest adjustments to the way smoothing occurred in response to concerns from the distributors, the effect of which was to allow the difference in end of period unsmoothed/smoothed revenues to be increased up to 10 percent. Although this facilitated a slightly smaller X factor for 2015-16 by increasing the 2016-17 X factor, the AER could not shift revenue reductions further into the future. A reduction to revenue in 2017-18 would require an increase in 2018-19 or, alternatively, a step change in revenue larger than 10 percent in 2019-20.

514    Hence, in the placeholder determination for the transitional regulatory control period applying to Ausgrid, Essential and Endeavour, the AER determined the annual revenue requirement for the Networks NSW DNSPs for the 2014-15 year as follows:

(a)    Ausgrid (distribution): $1,956.45m;

(b)    Ausgrid (transmission): $252.31m;

(c)    Endeavour: $949.45m; and

(d)    Essential: $1,291.72m.

515    However, in the Final Decisions in respect of the 2014-15 period, the AER determined the notional annual revenue requirements for the 2014-15 regulatory control year to be:

(a)    Ausgrid (distribution): $1,546.00m;

(b)    Ausgrid (transmission): $192.76m;

(c)    Endeavour: $858.58m; and

(d)    Essential: $976.15m.

516    As the AER says, because r 11.56.4(h) of the NER requires the AER, in the Final Decisions, to adjust the total revenue requirement for the subsequent regulatory control period to account for the difference between the placeholder annual revenue requirement determined for the transitional regulatory control period and the annual revenue requirement for the transitional regulatory control period as determined in the Final Decision, there were revenues received in 2014-15 that had to be returned to customers during 2015-19 and therefore reflected in future years’ X factors.
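By way of illustration only, the arithmetic of that difference can be set out using the placeholder and final 2014-15 figures above (this is not the AER’s model, and the NPV-neutral return of these amounts over 2015-19 is not shown):

# Differences between placeholder and final 2014-15 annual revenue requirements ($m),
# using the figures set out above; these are the over-recoveries to be returned to
# customers (in NPV-neutral terms) through the 2015-19 X factors.
placeholder = {
    "Ausgrid (distribution)": 1956.45,
    "Ausgrid (transmission)": 252.31,
    "Endeavour": 949.45,
    "Essential": 1291.72,
}
final_arr = {
    "Ausgrid (distribution)": 1546.00,
    "Ausgrid (transmission)": 192.76,
    "Endeavour": 858.58,
    "Essential": 976.15,
}
for dnsp in placeholder:
    print(f"{dnsp}: 2014-15 over-recovery of ${placeholder[dnsp] - final_arr[dnsp]:.2f}m")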

517    The result was that the AER determined the X factor for the 2015-16 regulatory year in a manner that gave immediate effect to a substantial amount of the difference between the annual revenue requirement approved in the AER’s placeholder determination for the transitional regulatory control period (the 2014-15 year) and the annual revenue requirement determined for the 2015-16 regulatory year. The effect of the X factor decision is to substantially reduce revenues in the 2015-16 regulatory year, followed by modest reductions (or in the case of Essential modest increases) year on year.

518    In order to offset the over recovery of revenue in 2014-15, in all of the 2015-19 regulatory years, the annual expected revenues are below the AER’s determined annual revenue requirement (except for Endeavour for the 2015-16 year, where the smoothed expected revenue is above the annual revenue requirement).

519    The expected annual revenue for the final year of the regulatory control period (2018-19) conforms with the AER’s +10 percent tolerance for revenue changes between regulatory control periods and equates to a 13.5 percent nominal increase in price in the first year of the next regulatory control period.

The Grounds of Review

520    Networks NSW says that the X factor decisions involve an incorrect exercise of discretion and/or an unreasonable decision: s 71C(1) of the NEL. They say that the magnitude of the reduction in revenue between 2014-15 and 2015-16:

(a)    does not promote efficient investment in, and operation of, electricity services for the long term interests of consumers with respect to safety, quality and reliability, and is directly contrary to incentive regulation and, accordingly, contrary to the NEO;

(b)    is inconsistent with s 7A(2) of the revenue and pricing principles as the decision does not provide Networks NSW with a reasonable opportunity to recover at least its (AER determined) efficient costs;

(c)    is inconsistent with s 7A(3) of the RPP as it does not promote efficient investment because it requires the Networks NSW DNSPs to incur significant debt and equity investment to continue to operate given their actual costs; and

(d)    gives rise to price shocks and pricing volatility which could be ameliorated if a more graduated reduction in revenue was implemented.

It is fair to observe that (a) and (b) are really formulaic.

521    Networks NSW also says that the AER’s Final Decisions place the Networks NSW businesses under immediate financial strain, and amplify the other errors by the AER which relate to the reduction of revenue and place the businesses at financial risk.

Consideration

522    There is little debate about the immediate effect of the X factor decisions of the AER.

523    They give rise to significant price decreases in 2015-16, potentially followed by nominal price increases from 2016-19. Networks NSW says this leads to pricing volatility which is not in the long term interests of consumers with respect to price and not in accordance with the NEO. They also give rise to annual expected revenues in all regulatory years of the 2015-19 regulatory control period below the AER’s determined annual revenue requirement for each of those years. Again, Networks NSW says that is not in the long term interests of consumers of electricity with respect to safety, quality, reliability and security of supply of electricity. They say that a more graduated reduction in annual expected revenues that permitted recovery of revenues closer to the annual revenue requirements in the earlier years of the regulatory control period would assist in providing the Networks NSW DNSPs with an opportunity to adjust to significant revenue reductions, and to improve tariff efficiency and equity without imposing unacceptable price shocks. It is argued that the decision to impose a single real reduction in revenue requirements for 2015-16 limits their ability to deliver the long term benefit of tariff reform to customers because it is not possible to develop and implement a full tariff reform agenda for 1 July 2015, particularly given the need to engage with customers and other stakeholders prior to any decision on tariff reform.

524    It is evident that there are significant qualitative assessments underlying the respective positions of Networks NSW and the AER.

525    As the Tables in the Networks NSW submission indicate, because of the significant over-recovery of revenue in the transitional year, the smoothing imposed by the AER will impact largely in the 2015-16 year, by $449m (distribution) and $197m (transmission) in the case of Ausgrid, being in excess of 24 percent and 27 percent respectively of the unsmoothed revenue requirement for that year, and then by amounts ranging between 6 percent and 3 percent for distribution, and 2 percent for transmission, in the following years.

526    In the case of Endeavour, the reduction is over 17 percent in the 2015-16 year, and then 3 percent for each of the following years. In the case of Essential, the reduction is over 31 percent in the 2015-16 year and then insignificant (a minor increment) in the following years.

527    In each instance, obviously there is a significantly reduced cash flow in 2015-16, following the revenue allowed in 2014-15 which on the other hand was excessive because of the assessed cost levels in the current regulatory period. Such a dramatic change, not smoothed over a period of years, is said to contravene the NEO by not promoting efficient investment in, and provision of, electricity services in the long term interests of consumers. It is also said to be directly contrary to the incentive regulation structure under the NEL (as discussed in the Introduction section of these reasons).

528    As noted, they are qualitative assertions, readily understood. The AER, for its part, says it was constrained to that “smoothing” decision by the effectively shortened regulatory period; by the “true-up” requirement for the 2014-15 revenues (which was supported by Networks NSW): see Consultation Paper on savings and transitional arrangements draft national electricity amendment (economic regulation of network service providers) rule 2012, 25 October 2014 at pp 4-5; by achieving net present value neutrality between smoothed and unsmoothed revenues; and by avoiding large revenue step changes at the end of the regulatory control period.

529    The proposition that the AER’s decision would not best contribute to the achievement of the NEO is a complex one. The AER selected smoothed later years as best contributing to the achievement of the NEO, including (as it considered) the avoidance of significant step changes in revenues at the end of the regulatory control period.

530    The Tribunal does not need to resolve that dispute because “the AER accepts that, in the event that the Tribunal finds reviewable error in relation to any of the AER’s other constituent decisions affecting revenue, the X factor will have to be reapplied by the AER on remittal”. As appears later in these reasons, the Tribunal does intend to remit the Final Decisions concerning the DNSPs for reconsideration by the AER.

531    It should be noted that the parties agree on the desirability of avoiding price shocks and volatility. When raised during the consumer consultation period, the strong message from consumers, both personally and through representative bodies, was that price shocks should be avoided where possible – eg Mr G Brody representing the Consumer Action Law Centre submitted:

… it’s better for consumers to have a smooth cost of the bill. If bills go up and down and create price shocks then that can cause as much problem for an individual household as, you know, overall high costs.

532    The AER says that if the 2015-16 revenue was to be higher than was set in the Final Decisions, the subsequent three years would need to be lower than was set in the Final Decisions and that this would lead to a final year difference greater than 10 percent, in circumstances where the costs beyond the 2015-19 regulatory period were uncertain. The AER says that this would lead to a price shock at the beginning of the next regulatory period (2019-20). The Tribunal notes that the AER had considerable flexibility to spread the reduction in revenue over a longer period (without disadvantaging consumers in present value terms) and that, if such flexibility remains once the AER has re-worked the opex allowance, the Tribunal has not determined that the AER should necessarily default to making almost the whole of any reduction in the first year of the period.

533    That also means that the Tribunal does not have to form concluded views on Networks NSW’s submissions that the way the AER implemented the X factor imposed significant and inappropriate financial strains on each of them, even to the point of their respective financial viability being at risk.

534    There is one aspect of the AER’s submission on this topic which the Tribunal does, however, need to address.

535    The Networks NSW submission is that the X factor decision is inconsistent with s 7A(2) of the RPP because it would mean that each of them did not have a reasonable opportunity to recover at least the efficient costs of that operator. For the reasons given, the factual proposition does not need to be resolved.

536    The AER says, in addition, that it need not act consistently with the RPP at all times. It points out that s 16(2) provides:

In addition, the AER –

(a)    must take into account the revenue and pricing principles –

(i)    when exercising a discretion in making those parts of a distribution determination or transmission determination relating to direct control network services; or

(ii)    when making an access determination relating to a rate or charge for an electricity network service; and

(b)    may take into account the revenue and pricing principles when performing or exercising any other AER economic regulatory function or power, if the AER considers it appropriate to do so.

Hence, it says, it “may take into account” the RPP (relevantly s 7A(2)), but it is not bound to do so.

537    It may be that the difference in views between Networks NSW and the AER is semantic. The Tribunal, of course, accepts that there are matters of judgment about how the RPP (or a particular element of one of the principles) should be taken into account. It does not accept that, as perhaps the AER is saying, the NEO in its application may give rise to a result which means that a DNSP is not given a reasonable opportunity to recover at least its efficient costs in providing the direct control network services. As the Tribunal has sought to express in its Introductory remarks, it does not regard ss 7 and 7A as being other than complementary, or as permitting the NEO to give rise to a reviewable regulatory decision which is in fact inconsistent with the RPP or one of the elements of the RPP.

538    The Tribunal does not, in the circumstances, need to determine whether the basic assertion by Networks NSW is correct. When the AER revisits and re-determines the opex allowances, it will then have to apply the X factor. It will do so at a time, and in relation to revenue streams, which will require it to make a fresh decision on the X factor. The Tribunal is anxious not to inhibit the AER at this point in exercising its discretion in that regard.

EFFICIENCY BENEFIT SHARING SCHEME (EBSS)

INTRODUCTION

539    In the earlier part of these reasons for decision, there is extensive reference to the NEO and to the RPP in ss 7 and 7A of the NEL respectively. Section 7A(3) provides that:

A regulated network service provider should be provided with effective incentives in order to promote economic efficiency with respect to direct control network services the operator provides. The economic efficiency that should be promoted includes –

(a)    efficient investment in a distribution system or transmission system with which the operator provides direct control network services; and

(b)    the efficient provision of electricity network services; and

(c)    the efficient use of the distribution system or transmission system with which the operator provides direct control network services.

540    Specifically, s 7A(3) provides that the DNSPs should be provided with effective incentives in order to promote economic efficiency in the provision of their network services. As discussed above, and as contended for by Networks NSW (and ActewAGL), it is clear that the structure of the RPP under the NEL reflects the concept of “incentive regulation”.

541    As part of that incentive regulation, an EBSS makes provision for sharing between a DNSP and its customers the efficiency gains or losses derived from the difference between a DNSP’s actual opex and the forecast opex allowance for a regulatory control period. The EBSS is a forward-looking scheme. A DNSP is told at the commencement of the regulatory period what to aim for and at the conclusion of the regulatory period, it is told how well it did in relation to the efficiencies. The EBSS incentivises a DNSP by allowing it to keep any yearly gain derived from the difference between its actual opex and its forecast opex, not just in that year, but until the conclusion of the regulatory period. Thus, as the AER’s Final Decision New South Wales distribution determination 2009-10 to 2013-14, 29 April 2009, observed (at p 245):

The scheme will not have a direct financial impact on the NSW DNSPs until the 2014–19 regulatory control period, when the DNSPs will receive carryover benefits/penalties for efficiency gains/losses made during the next regulatory control period.

542    The AER further explained the role of the EBSS in the following paragraphs commencing on p 1 of the AER’s Efficiency benefit sharing scheme for the ACT and NSW 2009 distribution determinations, 29 February 2008 (the 2008 EBSS), the relevant EBSS for the Tribunal’s review:

The purpose of the EBSS is to share efficiency gains and losses between DNSPs and distribution network users. In the absence of an EBSS, the share of efficiency gains and losses received by a DNSP declines as the regulatory control period progresses and, consequently, the incentive for the DNSP to improve the efficiency of its operating expenditure (opex) declines also.

The EBSS allows a DNSP to retain the benefits of an efficiency gain for the length of the carryover period regardless of the year of the regulatory control period in which the gain was initiated. After the length of the carryover period the benefits of an efficiency gain are ‘shared’ with distribution network users. By doing so the EBSS provides a DNSP with a constant incentive to improve the efficiency of its opex and thus reveal their efficient level of opex.

543    Section B.2.1 of the written submission of Networks NSW refers to a range of extrinsic materials which confirm and explain incentive regulation. It is not necessary to refer to them in detail. For the immediate purpose of addressing the EBSS allowance, s 7A(3) is clear.

544    That reflects the requirement in r 6.3.2(a)(3) of the NER that the building block determination for a DNSP must specify, for a regulatory control period, amongst other things, how any applicable EBSS is to apply to the DNSP.

545    Rule 6.4.3 then explains the building block approach. Firstly, it specifies that the annual revenue requirement of a DNSP must be determined using a building block approach under which the building blocks include the revenue increments or decrements (if any) for each regulatory year of the period arising from the application of any EBSS: see r 6.4.3(a)(5). Secondly, for that purpose, it cross refers to r 6.5.8: see r 6.4.3(b)(5).

546    Rule 6.5.8 then addresses in detail the EBSS. The AER emphasises that r 6.3.2(a) says that it is for the AER, in its relevant Final Decisions, to specify how the applicable EBSS is to apply, and that r 6.12.1(9) indicates that its relevant Final Decisions are predicated on its decision as to how any applicable EBSS is to apply to a DNSP.

547    It pithily asserts that those rules, including r 6.5.8(c), mean that the EBSS must only reward “real efficiency gains”.

548    This topic also concerns only the three Networks NSW DNSPs. It was not the subject of submissions by PIAC, nor by any of the other interveners.

549    The Final Decisions of the AER in relation to each of the Networks NSW entities are reflected in dollar terms in the following table, comparing the Revised Regulatory Proposal of each of those entities and the Final Decision:

                                Ausgrid             Endeavour             Essential
Revised Regulatory Proposal     $426.3m reward      $197m reward          -$74.2m penalty
Final Decision                  $260.3m reward      $93.4m reward         $0
Difference                      $166m worse off     $103.6m worse off     $74.2m better off

The expressions “reward” and “worse off” or “better off” are those used by the AER. The position of Essential is partly addressed separately in the Tribunal’s reasons dealing with its application. As can be seen, the Final Decisions for both Ausgrid and Endeavour resulted in a significantly smaller allowance (or potential allowance) for EBSS than was claimed in their respective Revised Regulatory Proposals. In the case of Essential, the AER in the Final Decision in effect waived retrospectively the imposition of that penalty.

550    It is also convenient to recall that an underlying theme of the Networks NSW submission is that the AER’s approach in its Final Decisions concerning those three entities was flawed because it did not allow appropriately for opex, as well as the EBSS, to be consistent with incentive regulation. That is, at a higher level of reasoning, merely a qualitative complaint. Moreover, by inviting the substitution of the Tribunal’s assessment of what is appropriate to achieve effective incentive regulation, that approach tends to distract attention from the NEO in s 7 and the RPP in s 7A, and from the manner in which the NER (as prescribed by the AEMC) provide for and describe the way in which the NEO is to be achieved. However, it must be borne in mind that it is necessary to pay close attention to the relevant provisions to address a particular complaint or complaints.

551    That is what the Tribunal has sought to do in relation to opex (above) and in relation to each of the elements of the Final Decisions about which Networks NSW, and ActewAGL and JGN complain.

552    In relation to the EBSS, Ausgrid and Essential say that the AER suspended the operation of the EBSS for the 2015-19 regulatory control period so that it no longer has a “functional role” in the long term regulatory structure of DNSPs as contemplated by r 6.5.8 of the NER. In their general submissions, they say:

However, as the AER observed, Ausgrid and Essential “will already bear any costs in transitioning to efficient levels … there does not seem to be a strong reason to provide it with additional incentive to become more efficient”. In other words, looking forward there is no role for incentives anymore because the AER is forcing (what it considers to be) an optimum efficiency, by the use of benchmarking, on Ausgrid and Essential, in the short term (indeed, immediately).

Background

553    As noted, it is common ground that one of the building blocks for the annual revenue requirement of a DNSP is the revenue increment or decrement (if any) for a particular year arising from the application of any EBSS. So much is required by r 6.4.3(a)(5) of the NER. The relevant increment or decrement is prescribed or anticipated by r 6.4.3(b)(5).

554    In the development and implementation of the EBSS, r 6.5.8(c) requires the AER to have regard to:

(1)    the need to ensure that benefits to electricity consumers likely to result from the scheme are sufficient to warrant any reward or penalty under the scheme for DNSPs;

(2)    the need to provide DNSPs with a continuous incentive, so far as is consistent with economic efficiency, to reduce opex;

(3)    the desirability of both rewarding DNSPs for efficiency gains and penalising DNSPs for efficiency losses;

(4)    any incentive that DNSPs may have to capitalise expenditure; and

(5)    the possible effects of the scheme on incentives for the implementation of non-network alternatives.

555    Thus, the EBSS is to create a continuous incentive for a DNSP to find efficiency gains by permitting the DNSP to retain the benefit of the gain for 5 years regardless of the year in which the gain is realised, with consumers having the benefit thereafter. The converse is that the EBSS provides a disincentive for efficiency losses by providing that the DNSP is penalised for five years for any inappropriate increase in expenditure. These matters are achieved by providing for carryover gains or losses into the next period.

556    The 2008 EBSS describes the EBSS at section 2.1 as rewarding “sustained efficiency gains through the operation of a symmetrical carryover mechanism”. Hence, a DNSP is either rewarded for opex reductions against forecasts or penalised for opex that exceeds forecast expenditure.

557    The example of the EBSS in Appendix A to the 2008 EBSS shows that this approach would lead to a sharing ratio of 70 percent of the efficiency gain to be returned to consumers over a 15 year period, with 30 percent of the gain being retained by the DNSP (and the same pattern with efficiency losses).

558    Consequently, the 2008 EBSS provides for the calculation of carryover amounts (either gains or losses) to be applied as a building block element in the calculation of allowed revenue for the regulatory control period commencing on 1 July 2014. The 2008 EBSS contains a formula for calculation of the carryover amounts, where:

(a)    the efficiency gain or loss for the first year (2009-10) is the forecast opex minus actual opex for that year;

(b)    the efficiency gain or loss for each subsequent year is: (forecast opex minus actual opex for that year) – (forecast opex minus actual opex for the previous year).
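The following is a hypothetical worked sketch of the formula in (a) and (b) above (the figures are invented, and the carryover period, exclusions and adjustments in the 2008 EBSS are not modelled):

# Hypothetical figures only; this is not the 2008 EBSS model itself.
forecast = [500.0, 505.0, 510.0, 515.0, 520.0]   # forecast opex by year ($m)
actual   = [500.0, 495.0, 492.0, 494.0, 490.0]   # actual opex by year ($m)

underspend = [f - a for f, a in zip(forecast, actual)]   # forecast opex minus actual opex

gains = [underspend[0]]                 # first-year gain: forecast minus actual
for t in range(1, len(underspend)):
    # subsequent years: this year's underspend less the previous year's underspend
    gains.append(underspend[t] - underspend[t - 1])

print(gains)   # [0.0, 10.0, 8.0, 3.0, 9.0]

On these invented figures the incremental gains telescope so that they sum to the final year’s underspend, which reflects the scheme’s focus, described below, on rewarding sustained efficiency gains.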

559    The 2008 EBSS provided that a DNSP could propose a range of additional cost categories to be excluded from the EBSS. That was to ensure that efficiency gains would be measured as the difference between forecast and actual expenditure, subject to adjustments designed to remove the impacts of agreed uncontrollable costs, non-network alternative opex and recognised pass-through events, and changes in capitalisation policies, demand growth and regulatory responsibilities: see at p 12.

560    For the 2009-14 revenue determination, the AER excluded five specific cost categories of opex from the operation of the EBSS for the next regulatory control period on that basis. They were debt raising costs, self-insurance costs, insurance costs, superannuation costs relating to defined benefit and retirement schemes, and non-network alternative costs. The 2009-14 revenue determination also specified forecast total opex amounts for each year for the purposes of the EBSS (that is, total forecast opex minus the excluded costs).

561    It should also be noted that Networks NSW’s forecast, and approved, opex for that determination period included provisions, calculated on an accruals basis, which included provisions for employee benefits.

562    It is not suggested that the provisions so made did not accord with Australian Accounting Standards Board standard AASB 119, which requires an entity to recognise a liability when an employee has provided service in exchange for employee benefits to be paid in the future and, on the other side of the ledger, an expense when the entity receives that economic benefit. Hence, cash payments in relation to employee benefits, such as long service leave payouts either in service or when an employee exits the employment, reduce the provisions as they are made.

563    Networks NSW says that, in accordance with the 2008 EBSS, they included in their revenue proposals a calculation of EBSS carry-over amounts flowing from the differences between forecast opex and actual opex in the 2009-14 period. However, a not insignificant component of forecast and actual opex comprised movements in provisions for employee benefits and other miscellaneous matters, which were excluded by the AER from its allowance for the EBSS in the current regulatory period.

564    That is because, the AER says, the Networks NSW entities changed the assumptions used to calculate their estimates, and so their provisioning for future payments for existing liabilities, resulting in artificial efficiency gains or losses which were claimed as “real” efficiency gains and losses but which did not represent genuine business outcomes.

565    The AER points out that the 2008 EBSS included the following statements:

The measurement of gains and losses should not be affected by artificial means such as the shifting of costs between years, but should represent genuine business outcomes that have arisen in the ordinary course of conducting the business in a prudent and diligent manner.

Adjustments will be made where necessary to correct for variances in costs categories and methodologies, and errors.

In calculating carryover gains or losses, the AER must be satisfied that the actual and forecast opex accurately reflects the costs faced by the DNSP in the regulatory control period.

566    The question whether the AER, in proceeding on the basis set out above, acted without exposing a ground of review is at the heart of this issue.

The AER Decision

567    By way of introduction to the AER’s decision, the AER says that it has a considerable degree of discretion in how it applies the EBSS. As noted, that discretion, it argues, is apparent inter alia from rr 6.3.2(a)(3) and 6.12.1(9), which provide that its building block determination for a DNSP must specify how the EBSS is to apply to the DNSP, thus recognising its discretionary decision-making, and secondly from r 6.5.8(c), which provides that it must have regard to a number of factors when implementing the EBSS, including “the need to ensure that benefits to consumers likely to result from the scheme are sufficient to warrant any reward or penalty” for the DNSPs. It is not, therefore, simply a mechanical exercise of adopting, relevantly, the provisioning of the DNSP for its liabilities from time to time.

568    Consequently, the AER’s approach to the EBSS for the current regulatory period in relation to each of the Networks NSW entities is driven, at least to a significant extent, by its understanding that the claim for reward under the EBSS is unrelated to any real efficiency gains.

569    As noted above, the Networks NSW entities reduced their opex in the last regulatory period by changing their estimates for provisioning of future payments to employees of entitlements such as provision for long service leave. The AER considered that the change in estimates was not driven by real efficiency gains, but by a substantial change in the assumptions underlying those estimates.

570    The Tribunal accepts that position was available to the AER. It is not a position taken by the AER which the Tribunal regards as involving factual or other error on its part.

571    The AER drew attention to the fact that Cumpston Sarjeant in its “Response to Queries on Essential Energy Entitlements Valuation” 19 July 2012 (the Cumpston Sarjeant 2012 Essential Report) described the changed assumptions as “outside the range of realistic long term outcomes” and “unprecedented”. It says that its decision was consistent with the NEO, and with the encouraging of Networks NSW to pursue efficiency gains.

572    The AER’s reasoning involved a number of steps. Having identified the change in provisioning, resulting in a change in opex, it considered whether the change in expenses represented by the change in provisioning (largely based upon a change in the assumptions as to the quantification of provisioning for employee entitlement expenses or liabilities) should be rewarded or penalised under the EBSS.

573    It proceeded by considering whether, in those circumstances, the claim represented a fair sharing of efficiency gains and losses between DNSPs and network users, reflecting r 6.5.8(a). It considered whether, having regard to the need to ensure that benefits to electricity consumers likely to result from the scheme are sufficient to warrant any reward or penalty under the scheme for service providers (r 6.5.8(c)(1)), and more generally the desirability of both rewarding the service provider for efficiency gains or penalising it for its efficiency losses (r 6.5.8(c)(3)), it was appropriate to adopt the claim of Networks NSW. It considered whether the claim as so made accorded with the requirement of the 2008 EBSS, so as to satisfy it that the actual and forecast opex accurately reflected the costs faced by Networks NSW in the regulatory control period.

574    Having regard to the analysis of the reasons for those changes referred to above, the AER decided that the change in expenses attributable to those provisions did not represent real business outcomes, but were attributable to changes in underlying assumptions. Bluntly, it says, the expenses did not reflect costs actually faced by each of the Networks NSW entities in the regulatory control period. As there were no actual efficiency gains, and the changes in opex were as a result of different assumptions which might or might not prove to be correct, it did not consider it was appropriate to reward Networks NSW for those changes as efficiency gains.

575    The AER then took the additional step of saying that it considered the more appropriate way to reflect the costs faced by each Networks NSW entity was to use a cash accounting methodology. Under that methodology the AER would account only for expenses actually recorded in respect of payments actually made. It says it did not thereby alter the reported opex, affected (as it accepted) by the change in assumptions underlying the value of the provisions.

576    It reached that view notwithstanding that, as Networks NSW pointed out, the incentive to move to efficient costs is consistent over an entire regulatory period. That is because historical opex towards the end of a then current regulatory period is a key input for forecasting opex allowances for the new regulatory period so any incentive to reduce opex below the regulatory allowance diminishes towards the end of the then current regulatory period. It is, therefore, a fair observation (as Networks NSW made) that the EBSS represents a consistent incentive across regulatory periods. In that light, Networks NSW says that, in reality, the AER in the relevant Final Decisions, simply decided to abandon the EBSS in the next regulatory period for Ausgrid and Essential, and in relation to all the Networks NSW businesses it excluded retrospectively an additional cost category so that none of those businesses were deemed to be entitled to include, as part of efficiency gains or losses, what appears as movements in provisions.

577    Networks NSW identifies two relevant decisions in relation to the EBSS made by the AER, which it says require reconsideration by the Tribunal as they will make out the grounds of review asserted:

(1)    the determination that differences between forecast opex and actual opex arising from what the AER identified as changes in provisions, largely in provisions for employees’ entitlements, would be excluded from the calculation of the EBSS carry-over amounts to be included as a building block in the annual revenue requirements for Networks NSW (the EBSS Decision); and

(2)    secondly, in relation to both Ausgrid and Essential, the AER’s decision that, having regard to its benchmarking analysis, the EBSS for 2013 should not apply in the 2014-19 regulatory period, so there should be no calculation of carry-over amounts arising from actual opex in 2015-19 then to be applied in the 2020-24 regulatory period (the EBSS Suspension Decision).

Networks NSW described the basis of the EBSS Suspension Decision as being that the operation of an incentive scheme in the form of the EBSS was inappropriate where the business was not being given an incentive to itself move to an efficient level of expenditure, but rather was being moved directly to a benchmark level of efficient expenditure by the AER. Consideration of the EBSS Suspension Decision is addressed later in these reasons.

EBSS Issues

The principal issue

578    The main debate is whether the AER was correct in adjusting the provisions expense included in Networks NSW’s reported opex, so that it reflected liabilities actually settled and paid during a particular year, rather than changes in provisions for liabilities for accounting purposes made during that year. The AER, by making that adjustment, deducted an amount from the reported opex equivalent to the movement in provisions, as illustrated in the course of submissions.

579    It should be noted that the AER did not, as suggested by Networks NSW at one point, simply and coarsely exclude a category of expense. That is, of course, to draw a distinction between an expense actually paid, on the one hand, and a liability incurred, to be quantified in the future but the subject of a present estimate, on the other.
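A minimal hypothetical sketch of that adjustment (the figures are invented and merely echo the order of magnitude of the actuarial adjustments described below; it is not the AER’s EBSS model) is as follows: the movement in the provisions balance is removed from reported (accrual) opex, leaving only the liabilities actually settled in the year.

# Invented figures only. An increase in the provisions balance is an accrual expense
# involving no cash payment; removing the movement from reported opex leaves the
# cash-basis costs actually faced in the year.
reported_opex = {2012: 520.0, 2013: 470.0}                     # accrual basis ($m)
provision_balance = {2011: 200.0, 2012: 258.0, 2013: 218.0}    # e.g. discount-rate driven changes

for year in (2012, 2013):
    movement = provision_balance[year] - provision_balance[year - 1]
    adjusted_opex = reported_opex[year] - movement
    print(year, "movement in provisions:", movement, "adjusted opex:", adjusted_opex)

On these invented figures, the apparent year-on-year fall in reported opex (520 to 470) reverses once the movements in provisions are stripped out, illustrating why a change in provisioning assumptions, of itself, need not reflect a genuine reduction in costs.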

580    The AER referred to, and was probably prompted towards its analysis by, the apparent volatility in Ausgrid’s provisioning for employee benefits. In the 2009-10 and 2010-11 financial years, that provisioning was constant. It was increased by a very significant sum for 2011-12 and then somewhat reduced, but by a materially significant sum, for 2012-13. The accounting justification for those adjustments for 2011-12 is found, for example, in the Ernst & Young Report, June 2012, “Ausgrid – Actuarial assessment for specified employee entitlements as at 31 December 2011” (the EY 2012 Ausgrid Report). For 2012-13, the reduction is explained by the Cumpston Sarjeant Report “Ausgrid – Actuarial assessment of long service leave and other employee entitlements as at 31 December 2012” (the Cumpston Sarjeant 2012 Ausgrid Report), including different assumptions to assess the salary promotional scale.

581    The AER submission points out that, as a result of those reports being given effect to, for the 2012 financial year Ausgrid recorded an actuarial adjustment expense for long service leave of about $58m in nominal terms, and for the succeeding financial year recorded a negative actuarial adjustment expense for long service leave of about $40m in nominal terms. That fall in expenses is, in essence, claimed by Ausgrid to be properly treated as an efficiency gain under the EBSS. Similar analyses were made in relation to Essential’s provisioning over that period and Endeavour’s provisioning over that period.

582    In the case of Essential, the adjusted present value of the provisioning for long service leave and other employee benefits very significantly increased between 2010-11 and 2011-12, and then significantly reduced (but well above the 2010-11 level) for the 2012-13 year. The reasons for those changes can be seen as attributable to the use of different discount rates in the latter two years, and the long term growth assumption being higher for the latter two years compared to earlier years. The significance of those changes is also illustrated by the change in the relationship between the “discounted” present value of those provisional liabilities and their nominal value.

583    Similar observations can be made about Endeavour’s treatment of provisions during the 2009-14 regulatory period. The “matched pairs” link under which it was assumed that the discount rate was always higher than the wages growth rate by a more or less constant percentage was broken.

584    The AER, in its Final Decisions, excluded the allowances claimed by the Networks NSW businesses for efficiency gains for that final year arising from movements in provisioning, on the basis that those movements should not be treated as actual opex for EBSS calculations. It removed the movement in provisioning from each Networks NSW DNSP’s reported actual opex when calculating the EBSS carry-over amounts because, it considered, the changes in provisioning were driven largely by changes in the discount rate and, at least in the case of Endeavour and Essential, by changes in the salary growth assumptions used to value the provisions for long service leave. Those changes were, it considered, the result of accounting methodology and/or of assumptions made by the service provider or its actuary, and changes of that character should have minimal effect on the rewards or penalties a service provider receives under the EBSS. That is because, as the AER said:

The fundamental requirement for the EBSS under the NER is to derive efficiency gains and losses from the comparison of forecast and actual opex over the period, not merely accounting gains or losses.

See generally: Final Decision Ausgrid distribution determination 2015-16 to 2018-19, Attachment 9 – Efficiency Benefit Sharing Scheme at p 9-17; Final Decision Endeavour Energy distribution determination 2015-16 to 2018-19, Attachment 9 – Efficiency Benefit Sharing Scheme at p 9-18; and Final Decision Essential Energy distribution determination 2015-16 to 2018-19, Attachment 9 – Efficiency Benefit Sharing Scheme at p 9-16.
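The practical effect of the adjustment may be illustrated by a simple hypothetical, using figures which are illustrative only and are not drawn from the material before the Tribunal. Suppose forecast opex for a year is F = $500m and reported (accrual) opex is A = $470m, where $40m of the apparent saving reflects nothing more than a reversal of a prior provision, that is, a movement in provisions of ΔP = −$40m. Then:

    gain measured on reported opex: F − A = $500m − $470m = $30m

    gain measured on the AER’s adjusted basis: F − (A − ΔP) = $500m − ($470m + $40m) = −$10m

On the AER’s approach the reported “efficiency gain” of $30m disappears once the movement in provisions is removed, because nothing in the underlying pattern of payments has changed.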

585    Having taken that approach, the AER also then rejected the submission of Networks NSW that, even if it was appropriate to exclude the changes in provisioning from actual opex, the forecast opex (from the earlier 2009-14 Determination) against which actual opex is compared for the purpose of the EBSS should likewise be adjusted to remove any movement in provisions embedded in the forecast. That contention was included in the Revised Regulatory Proposals of each of Networks NSW. The AER declined to do so, on the basis that its approval of the forecast opex for 2009-14 was not an approach made with reference particularly to provisioning of that character, so any attempt to identify what was “implicitly forecast at the time for provisions” would not be “robust given the hypothetical nature of this exercise”: see eg Final Decision Ausgrid distribution determination 2015-16 to 2018-19, Attachment 9 – Efficiency Benefit Sharing Scheme at pp 9-17 to 9-18.

The grounds of review

586    As with other grounds of review, there is some debate about how properly to characterise the errors which Networks NSW asserted in terms of s 71C of the NEL. The Networks NSW submission says that the AER made a number of material errors of fact in its findings of fact, namely:

.2    in concluding that the changes to provisions excluded by the AER from the application of the EBSS were produced predominantly by changes in assumptions;

.3    in concluding that changes to provisions were not actual costs incurred in delivering network services;

.4    as to the level of efficiency gains or losses, including by assessing such gains or losses:

.4.1    without taking into account changes in provisions;

.4.2    by excluding changes in provisions from actual expenditure but not excluding changes in provisions from forecast expenditure; and/or

.4.3    by assessing certain costs on a cash basis and other costs on an accruals basis.

587    Alternatively, it says that the AER’s decision involved an incorrect exercise of a discretion, or was an unreasonable decision because it was irrational, illogical and arbitrary, and inconsistent with the requirements of r 6.5.8(c) because:

(a)    by excluding changes in provisions, the AER arbitrarily, irrationally and illogically excluded actual costs (including the incurring of liabilities for holiday pay and long service leave) from the assessment of efficiency gains or losses;

(b)    the AER’s approach was internally inconsistent by:

(i)    excluding changes in provisions from actual expenditure but not excluding changes in provisions from forecast expenditure; and/or

(ii)    assessing certain costs on a cash basis and other costs on an accruals basis.

588    Finally, as a further alternative, it says that the decision to exclude opex in respect of changes in provisions constitutes an amendment to the operation of the 2008 EBSS, by identifying an additional cost category for exclusion from the calculation of the carry-over amounts, when the 2008 EBSS itself, and the Rules, do not permit the operation of the 2008 EBSS to be amended retrospectively in the Final Decisions. Consequently, it is said, the Final Decisions on this topic involve a misconstruction and misapplication of the NER, and therefore an incorrect exercise of discretion and, alternatively, an unreasonable decision.

589    The AER’s broad position is that its approach involved no relevant “finding of fact”, but an exercise of judgment as to what methodology best gives effect to the aims of the EBSS. Then it says that its decision to adopt its approach was a reasonable decision, without demonstrable error which could enliven any of the asserted available grounds of review. That invites on the part of Networks NSW the proposition that the AER Final Decisions each rested on a fundamental step which was illogical, irrational or arbitrary so that the Final Decisions themselves had that character and therefore were unreasonable in relation to each of them. It also provokes a response on the part of Networks NSW that, to the extent that the AER’s decision involved an exercise of discretion, the discretion was incorrectly exercised.

590    It is accepted that, if the Tribunal accepts that the AER erred in its decision on opex, the challenge to the EBSS Suspension Decision in relation to Ausgrid and Essential will necessarily fall away as that part of their Final Decisions will be required to be varied pursuant to s 71P of the NEL. On the other hand, it is implicit from the absence of any detailed submissions separately addressing the EBSS Suspension Decision, that if the AER decision in relation to the EBSS is not shown to be in error, it will not be necessary separately to address the EBSS Suspension Decision.

591    The Tribunal does not comment on the extent to which either the EBSS Decision or the EBSS Suspension Decision would require reconsideration or amendment by reason of its decision in relation to opex generally.

592    However, as the fundamental approach to the EBSS allowance is obviously critical to any further revised final decision on the part of the AER, the Tribunal proposes to address the contentions concerning the EBSS Decision.

Consideration

593    It is convenient to address the contentions of Networks NSW in the sequence in which they appear in its written submissions.

594    The first is to address the alleged error that the AER retrospectively excluded the particular category of costs in any event.

595    Much of the Tribunal’s reasons for its conclusion on this series of alleged errors emerges from the discussion above.

596    The Tribunal does not accept that the AER’s process of reasoning was retrospectively to exclude a category of costs in implementing the EBSS for the 2014-19 regulatory period.

597    As the AER says, it accepted that the costs which the provisioning allowed for – the payment of employee benefits such as holiday pay and long service leave entitlements – were to be accounted for. It departed from the previous practice of allowing for those costs on the basis of the provisions for them in the accounts, and instead used the measure of the actual payments to the employee creditors as and when they were paid.

598    It would seem that that change in the means of measuring those costs was prompted (at least in part) by the changes in the Networks NSW provisioning referred to above. Where there is such a dramatic change in provisioning, it is hardly surprising that the AER should consider whether the method of measurement of those costs by provisioning was the most appropriate way to measure them for the purposes of the NEL and in accordance with the incentivising of efficiency gains built into the EBSS. As a matter of practical commonsense, it is hard to see (for example) how Ausgrid’s adjusted provisioning between 2011-12 and 2012-13 in fact represented any actual efficiency gain, so that the reduction in provisioning in the latter 12 month period should be taken into account in the application of any EBSS designed to serve r 6.5.8. Of course, that is a simplistic view, but it is nevertheless one which is not demonstrably fallacious.

599    If the AER were to question the appropriateness of measuring the cost of payments to employees for holiday pay and long service leave simply by the provisioning of each of the Networks NSW DNSPs, where those costs can vary dramatically with assumptions made from time to time by the provider (as they did), it was reasonable to investigate whether the measurement of those costs by provisioning represented (as the 2008 EBSS expressed it at p 3) “genuine business outcomes that have arisen in the ordinary course of conducting the business in a prudent and diligent manner”. Indeed, the 2008 EBSS at p 5 refers to the AER making adjustments to correct for variances in methodologies.

600    Networks NSW says that the consequence of the change in methodology, by excluding an additional category of costs at the point of the relevant Final Decisions, is not to incentivise (or disincentivise) Networks NSW by the “promise of rewards or penalties”, because the costs which are reflected in the provisioning had already been incurred.

601    However, the Tribunal does not consider that that is a correct way to characterise what the AER has done. The AER does not resile from the proposition that the costs incurred by Networks NSW for labour on account of holiday pay and long service leave, once approved and payable, are to be taken into account in determining the appropriate payments or allowances going forward for the 2014-19 regulatory period. It is not obliged to accept the provisioning estimates of Networks NSW for that purpose. It has chosen to make the appropriate allowance prospectively, based upon the actual payments made from year to year for those costs in the previous regulatory period.

602    In taking that step, at least in principle, the Tribunal does not consider that the AER has made any error of fact in its findings of fact.

603    The detailed grounds of review of Networks NSW are set out above. It appears that, at this point in the AER’s reasoning, the relevant asserted error of fact is that the AER concluded that changes to provisions were not actual costs incurred in delivering network services. However, whatever may be the available accounting methods and assumptions from time to time for the making of provisions, or the appropriateness of the reported provisioning in accordance with accounting standards, it is not the case that the provisioning of those costs meant that they were “actual costs” incurred in delivering network services. They were estimates of liabilities incurred and to be paid in due course, determined in accordance with accounting standards and based upon assumptions made by each of the Networks NSW entities about the various elements going to make up the amount which, ultimately, would be paid to meet those liabilities.

604    The next step in Networks NSW’s contentions is that the AER erred in concluding that the changes to provisioning by each of the Networks NSW entities were produced predominantly by changes in assumptions. That is the second in the sequence of four alleged errors.

605    It is clear that the liability to employees during a financial year may increase, or accrue, by reason of the employees having worked during that year. If the number of employees was unchanged, and none took leave, clearly the accrued or accumulated liability to those employees would increase over that year. The following confidential Figure 1 in the Networks NSW submissions on the EBSS forcefully makes that point.

[CIC] Figure 1: AER adjustment of Ausgrid opex for provisions for employee benefits [CIC]

[TABLE REDACTED]

606    The Networks NSW submission also points out (as the AER recognised) that changes in provisions may be induced by a change in the value of existing provisions, such as a change in the discount rate used to determine the present value of the accrued liabilities when they are anticipated to be paid; by changes in integers, such as the assessed employee attrition rate and so the amount and timing of long service leave payments; and by changes in external conditions (such as changes in the discount rate) affecting the net present value of those accrued liabilities.

607    Then it is said that there was no material before the AER which suggested that relevant assumptions were being manipulated or that the calculations as to provisioning were otherwise than the best assessment of the liabilities of each of the Networks NSW entities, based upon independent third party actuarial assessment.

608    It is not a necessary part of the AER’s role to determine that the way in which the Networks NSW provisioning was made involved any improper manipulation of data, although (as it pointed out) there was some material in the Cumpston Sarjeant 2012 Essential Report at pp 2 and 9 which did not clearly support certain integers used in the provisioning calculation of Essential. It should be noted that that report (as its title says) is confined to the circumstances of Essential. The assumptions in fact selected represented a significant departure from that expert’s advice, as they meant that for 2011-12 the selected discount rate and selected salary growth assumptions resulted in a discounted present value of employee entitlements of 128.8 percent of their nominal value, compared to 98.4 percent of their nominal value on the assumptions suggested by that expert.
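The arithmetic by which the choice of those assumptions can produce ratios of that order may be illustrated with figures which are purely illustrative and are not taken from the report. For a nominal entitlement of $100 expected to be paid in ten years, escalated at an assumed salary growth rate g and discounted at a rate d, the present value is approximately:

    PV ≈ $100 × ((1 + g) / (1 + d))^10

If g = 4 percent and d = 4.2 percent, PV is about $98, or roughly 98 percent of the nominal amount; if g is held at 4 percent but d is reduced to 1.5 percent, PV rises to about $127, or roughly 127 percent of the nominal amount. The provision, and with it the recorded expense, moves very substantially although the underlying entitlement is unchanged.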

609    As noted above, the 2012-13 provisioning estimate for Endeavour, with an altered discount rate on the same salary growth assumption, resulted in a reduced figure for provisioning which (at least in significant measure) constituted the efficiency gain claimed under the EBSS for that year.

610    The Tribunal has concluded above that the AER did not err by looking at, and behind, the provisioning by Networks NSW entities to decide whether their provisioning was an appropriate basis for measuring efficiency gains by those businesses.

611    In the Tribunal’s view, the AER was not in error in deciding to look more closely at the alternatives to accepting the provisioning for such liabilities. As an accounting exercise, of course, double entry accounting necessarily means that an increase in provisioning requires the complementary recording of an expense, even though the payment of that expense is not then made and may ultimately vary from the provisional liability. That is influenced by the timing of the employee in taking the leave, the applicable salary rate, and the like. Hence, the provisioning for such liabilities is generally based upon actuarial assessment, in turn based upon – amongst other things – assumptions as to the applicable discount rate to determine net present value of future liabilities and the future salary growth, and affected at least by age and years of service.
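The double entry point may be put concretely by a generic illustration (not drawn from the accounts in evidence). If a provision for long service leave is increased by $10m at year end, the corresponding entries are a $10m debit to employee benefits expense and a $10m credit to the provision as a liability. Reported accrual opex therefore rises by $10m in that year although no cash has been paid; and if the provision is later written back, reported opex falls by the corresponding amount, again without any cash movement.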

612    It is not unreasonable to expect, in the absence of particular circumstances, a stable relationship in the long term between the discount rate (as it varies from time to time) and the rate of future salary growth. That relationship was changed by changes in those elements, but not in a complementary way, so that it led to the significant variations in provisioning between 2011-12 and 2012-13 as referred to. It should be noted, as counsel for the AER acknowledged, that the precise critical assumptions by Ausgrid, Endeavour and Essential in their respective provisioning processes were not routinely the same and that at least in the case of Ausgrid, there was no consistent direct “matching” relationship between the two significant assumptions referred to.

613    Nevertheless, it follows from that discussion that the AER did not, in the view of the Tribunal, err by concluding that the changes to provisions were largely the consequence of changes in critical assumptions.

614    The material before the Tribunal includes the correspondence, commencing in mid-2014, between the AER and the respective Networks NSW entities as to whether the changes in certain assumptions (and the consequential loss of complementarity between the two assumptions particularly discussed above, or between their consequences) were matters which were in fact reflective of real efficiency gains or losses. That material was not shown to have been overlooked by the AER; indeed the strong inference is that it was carefully considered by the AER.

615    In the circumstances, the Tribunal is not persuaded that the AER erred by concluding that the changes to provisions by the Networks NSW entities were produced predominantly by changes in assumptions of the character referred to. Insofar as there is said to be an error of fact in that conclusion of the AER, that is not made out.

616    The third contention of Networks NSW is that the AER erred in concluding that changes in provisions were not “actual costs incurred in delivering network services”. They say that it is an unconventional position for the AER to determine that a business entity which has adopted an approach in accordance with standard and universally recognised accounting practice, and in a conventional way, should nonetheless be excluded from having such properly recorded costs accounted for in an assessment of “actual opex”.

617    Reliance was placed upon the decision of the Full Court of the Federal Court in Paciocco v Australia and New Zealand Banking Group Limited [2015] FCAFC 50 in support of the position taken by Networks NSW. It is convenient at this point to remark upon that decision. The Tribunal does not regard it as directly relevant to the question which the AER (and the Tribunal) have to address: whether the changes to provisioning should be the subject of rewards under the EBSS, where those changes occur as a consequence largely of differences in assumptions made from time to time about critical elements in determining the net present value of future liabilities. The context for that question, of course, is whether such changes in provisioning (where the changes are the consequence of different assumptions and the relativity of the critical assumptions was not maintained as had previously been the case) should attract the benefit of efficiency gains under the EBSS.

618    The AER decided to reject the provisioning approach previously adopted, the limitations of which were exposed by the adjustments made by Networks NSW (in particular by the Essential accounts and methodology for provisioning), and it decided not to treat the accounting provisioning for those future liabilities as directly reflecting actual efficiency gains or losses. Its use of the term “actual costs” is in that context. It does not mean that the AER simply chose to ignore the costs incurred by Networks NSW, as reflected by the payments made by the Networks NSW businesses for those liabilities, or to ignore the fact that there are future liabilities which are reflected in the provisioning estimates.

619    The AER presented a graph prepared from figures in the Ausgrid Revised Regulatory Proposal and then presented in Figures 3 and 5 of the written Networks NSW submissions on the EBSS, showing the Ausgrid proposal for opex to be used for the EBSS proposal and the AER’s allowance of opex to be used for EBSS purposes. One line is calculated using the cash accounting methodology instead of Ausgrid’s accrual accounting methodology. It shows the extent to which the AER has in fact allowed for opex for that purpose. It cannot be said that the AER did not allow for those costs by the method which it adopted.

620    There was some debate about whether the “actual costs” allowed by the AER were in fact correctly measured. It cannot be said that the AER did not accept that liabilities for accrued leave and other employee entitlements were actual liabilities. Its approach was to measure those liabilities, to be discharged at some time in the future and to be quantified ultimately by the movement in earnings, promotion and other variables, at “actual”, that is, paid amounts year by year. To the extent that the two critical projected elements of discount rates and wage movements (for the purposes of the provisioning) varied from time to time, they would then be reflected in the amounts actually paid to meet those accrued liabilities from time to time. It is the Tribunal’s view that that was an approach reasonably open to the AER. That is not to gainsay the observations in the Ernst & Young report Advice on movement in provisions, January 2015, expressing at p 3 and elsewhere a view as to the proper accounting method for those liabilities, and its view that the “true economic cost” to Networks NSW was or included the provisioning for entitlements to be paid in the future.

621    But it has not been shown that the AER, for the purposes of the EBSS, was in error in looking behind the provisioning. It did that. As a result of looking behind or into the reasons for the changes in provisioning, it did not accept that the provisioning by Networks NSW, and more specifically the movements in provisioning, represented efficiency gains (or losses). The Networks NSW contentions did not set out to show that the changes in the critical assumptions for provisioning (which the Tribunal has accepted, as identified by the AER) were themselves correct, so that the AER, upon analysis, could not have concluded that they were the only, or virtually the only, assumptions which could properly have been made by Networks NSW. Indeed, the Tribunal in the course of submissions was taken to material which would suggest that the assumptions made by Networks NSW from time to time were not the only reasonable assumptions on those matters.

622    Finally, in relation to this contention, the Tribunal notes that the comparison of the allowance for opex for EBSS purposes between the Ausgrid proposal for opex for that purpose and the Ausgrid Final Decision (as based on Figures 3 and 5 in the Networks NSW principal submissions) indicates that the cash accounting methodology of the AER does not mean that Ausgrid (or Endeavour or Essential) was deprived of the benefits of the EBSS incentives, save for those which might have flowed from the changes in provisioning for the 2011-12 and 2012-13 years, largely influenced by the changed assumptions referred to above.

623    The fourth general category of attack adopted by Networks NSW is based upon the assertion of a change in methodology for calculating opex (in relation to, or for the purposes of, the EBSS). It is said that the AER should have excluded any movement in provisioning embedded in the forecast opex for the 2009-14 regulatory control period, so that looking forward there would have been a proper comparison of “apples with apples”.

624    In support of that contention, counsel for Networks NSW took the Tribunal to the AER’s Final Decision Efficiency Benefit Sharing Scheme for the ACT and NSW 2009 distribution determinations, February 2009, particularly Section 5.2: Measuring efficiency and the EBSS, and Section 5.5: Adjustment of actual and forecast opex. It is not necessary to refer in detail to all the passages referred to. The conclusions in sections 5.2.3 and 5.5.3 are as follows:

5.2.3    AER conclusions

The AER considers it appropriate to utilise a rule of thumb in assessing efficiency gains under the EBSS. However, the AER also considers that the EBSS should, as far as possible, reflect efficiency gains and losses by DNSPs. To this end the AER will allow forecast opex to be adjusted for actual demand growth for the purpose of calculating carryover amounts. The AER will also consider for exclusion from the EBSS cost categories proposed by DNSPs in their regulatory proposal before the commencement of the regulatory control period and must be determined as uncontrollable by the AER in its final determination.

5.5.3    AER conclusions

The AER will make adjustments to forecast and actual opex for the purposes of calculating carryover amounts where it has been explicitly stated in the final determination at the beginning of the regulatory control period that those specific adjustments will be applied to the EBSS for that period.

Any cost categories that a DNSP considers to be uncontrollable and that should be excluded from the operation of the EBSS must be proposed in the DNSP’s regulatory proposal prior to the commencement of the regulatory control period. These cost categories will only be excluded if the AER considers them to be uncontrollable and their exclusion prudent.

The AER retains the right to exclude further cost categories from the operation of the EBSS. These cost categories must be outlined in the final determination at the beginning of the regulatory control period.

625    The Tribunal does not regard that material as supporting this contention of Networks NSW. Nor does it regard that material as demonstrating, or assisting to demonstrate, a ground of review as claimed by Networks NSW. The form of the discussions preceding those conclusions is to identify and give credit for real efficiency gains. Hence, the exclusion of uncontrollable costs. Indeed, at p 5 of the 2008 EBSS, the AER observed that it would react to changes in the methodology adopted by a DNSP to calculate opex. It is only a semantic debate to say that the changes in assumptions by Networks NSW did not, in the present circumstances, amount to a change in methodology.

626    It is also important to note that the AER for the 2009-14 regulatory control period did not simply adopt the Networks NSW proposed opex allowances, but substituted its own assessment.

627    The Tribunal does not, therefore, conclude that this contention demonstrates that a ground of review has been made out by Networks NSW.

Conclusion

628    For those reasons, Networks NSW has not demonstrated any ground for review, or more accurately the Tribunal is not satisfied that any ground of review exists, in relation to the AER’s Final Decisions concerning the EBSS.

629    In addition, having regard to the reasons why the claimed EBSS efficiency gains should not have been accepted (as found by the AER and accepted by the Tribunal), the Tribunal in any event would not have been satisfied that the restoration of the EBSS gains or rewards as claimed by Networks NSW would, or would be likely to, result in a materially preferable NEO decision.

630    The changes in provisioning do not represent in fact any real efficiency gain in the long term interests of consumers. They represent an accounting provision based upon changed assumptions, the reasons for which are not obvious or obviously correct, and which would introduce into the EBSS calculations a volatility which – as the material presently stands – is not necessarily a realistic reflection of the extent of the liabilities incurred. The changes between the three years specifically discussed above are sufficient to make that point.

631    In a sense, it is premature to make such a conclusion (because it is at the ultimate step of the Tribunal’s determination that s 71P(2a) and (2b) really come into the Tribunal’s consideration). In this instance, however, it can be said that if the EBSS were the only matter in respect of which a ground of review were made out, the Tribunal would not, for the reason given, be satisfied in terms of s 71P(2a)(c) so that it would not vary the AER Final Decisions. And, it can also be said that, even if this and other grounds of review were made out, as a constituent component of the Final Decisions, the AER’s decision on the EBSS would not weigh in the scales towards a favourable ultimate decision about whether to vary or set aside the Final Decisions.

RETURN ON EQUITY

INTRODUCTION

632    The return on equity topic gives rise to common interests between the Network Applicants under the NEL and under the NGL in the case of JGN. They sensibly presented their collective submissions through the same counsel.

633    While, in broad terms, their contentions were supported by the Vic/SA Interveners and by Ergon, it is on occasions necessary to separately address particular aspects of the submissions of the Vic/SA Interveners and Ergon.

634    As an applicant, PIAC did not apply to have the Network NSW Final Decisions in respect of this topic varied or set aside. As an intervener, however, PIAC made submissions on particular matters in respect of the topic confined to the Networks NSW Final Decisions.

635    As noted, the annual revenue requirement for a DNSP for each regulatory year of a regulatory control period must be determined using a building block approach, which includes a building block for return on capital for that year: r 6.4.3(a)(2) of the NER. Rule 6.4.3(b)(2) of the NER provides that the return on capital is calculated in accordance with r 6.5.2. Rule 76 of the NGR is to the same effect.

636    Rule 6.5.2 of the NER relevantly provides:

6.5.2    Return on capital

Calculation of return on capital

(a)    The return on capital for each regulatory year must be calculated by applying a rate of return for the relevant Distribution Network Service Provider for that regulatory year that is determined in accordance with this clause 6.5.2 (the allowed rate of return) to the value of the regulatory asset base for the relevant distribution system as at the beginning of that regulatory year (as established in accordance with clause 6.5.1 and schedule 6.2).

Allowed rate of return

(b)    The allowed rate of return is to be determined such that it achieves the allowed rate of return objective.

(c)    The allowed rate of return objective is that the rate of return for a Distribution Network Service Provider is to be commensurate with the efficient financing costs of a benchmark efficient entity with a similar degree of risk as that which applies to the Distribution Network Service Provider in respect of the provision of standard control services (the allowed rate of return objective).

(d)    Subject to paragraph (b), the allowed rate of return for a regulatory year must be:

(1)    a weighted average of the return on equity for the regulatory control period in which that regulatory year occurs (as estimated under paragraph (f)) and the return on debt for that regulatory year (as estimated under paragraph (h)); and

(2)    determined on a nominal vanilla basis that is consistent with the estimate of the value of imputation credits referred to in clause 6.5.3.

(e)    In determining the allowed rate of return, regard must be had to:

(1)    the relevant estimation methods, financial models, market data and other evidence;

(2)    the desirability of using an approach that leads to the consistent application of any estimates of financial parameters that are relevant to the estimates of, and that are common to, the return on equity and the return on debt; and

(3)    any interrelationships between estimates of financial parameters that are relevant to the estimates of the return on equity and the return on debt.

Return on equity

(f)    The return on equity for a regulatory control period must be estimated such that it contributes to the achievement of the allowed rate of return objective.

(g)    In estimating the return on equity under paragraph (f), regard must be had to the prevailing conditions in the market for equity funds.

Return on debt

[Subparagraphs (h)-(l) are under the subheading “Return on debt” and are relevant to that topic. They are separately the subject of consideration in the next principal section of these reasons for decision.]

Rate of Return Guidelines

(m)    The AER must, in accordance with the distribution consultation procedures, make and publish guidelines (the Rate of Return Guidelines).

The Rate of Return Guidelines must set out:

(1)    the methodologies that the AER proposes to use in estimating the allowed rate of return, including how those methodologies are proposed to result in the determination of a return on equity and a return on debt in a way that is consistent with the allowed rate of return objective, and

(2)    the estimation methods, financial models, market data and other evidence the AER proposes to take into account in estimating the return on equity, the return on debt and the value of imputation credits referred to in clause 6.5.3.

(n)    There must be Rate of Return Guidelines in force at all times after the date on which the AER first publishes the Rate of Return Guidelines under these Rules.

(o)    The AER must, in accordance with the distribution consultation procedures, review the Rate of Return Guidelines:

(3)    at intervals not exceeding three years ...

637    Rule 87 of the NGR is relevantly in much the same terms.

638    In its application of r 6.5.2 of the NER and r 87 of the NGR, the AER followed a six-step methodology outlined in Chapter 5 of its Better Regulation Rate of Return Guideline, December 2013 (the RoR 2013 Guideline) to arrive at a return on equity of 7.1 percent.

639    The AER’s return on equity of 7.1 percent is to be contrasted with:

(a)    Ausgrid, Endeavour and Essential’s proposed 10.11 percent;

(b)    ActewAGL’s proposed 10.71 percent; and

(c)    JGN’s proposed 9.83 percent.

The Regulatory Background

640    The 2012 Rule Amendments significantly altered the process previously prescribed for determining the return on capital.

641    Prior to 2012, r 6.5.2 of the NER required the return on equity to be determined using the Sharpe Lintner Capital Asset Pricing Model (SL CAPM), and r 87 of the NGR required the return on equity to be determined using “a well accepted financial model, such as the Capital Asset Pricing Model”. The 2012 Rule Amendments removed these requirements and instead required that regard must be had to relevant estimation methods, financial models, market data and other evidence: NER r 6.5.2(e)(1); NGR r 87(5)(a).

642    A further significant change is the insertion of the allowed rate of return objective (RoR Objective), being the rate of return for a regulated service provider which is commensurate with the efficient financing costs of a BEE with a similar degree of risk as that which applies to the regulated service provider in respect of the provision of standard control services/reference services: NER r 6.5.2(b) and (c); NGR r 87(2) and (3). The relevant Rules now require that the return on equity is to be estimated such that it contributes to the achievement of the RoR Objective and that, in estimating the return on equity, regard must be had to the prevailing conditions in the market for equity funds: NER r 6.5.2(f) and (g); NGR r 87(6) and (7).

643    The 2012 Rule Amendments include a requirement that the AER must publish Rate of Return Guidelines in accordance with its consultation procedures. These guidelines must set out the methodologies that the AER proposes to use in estimating the allowed rate of return, and the estimation methods, financial models, market data and other evidence the AER proposes to take into account: NER r 6.5.2(m) and (n); NGR r 87(13) and (14).

644    The RoR 2013 Guideline referred to above, published by the AER in December 2013, resulted from that requirement of the 2012 Rule Amendments. It is accepted that the RoR 2013 Guideline are not binding on the AER when it comes to make an individual determination and that there is no requirement for the AER to provide persuasive evidence in order to depart from the RoR 2013 Guideline. The AER is permitted to make a decision that is not in accordance with the RoR 2013 Guideline, but if it does so, then it must state its reasons for departing from its guidelines: NER r 6.2.8(c); NGR r 87(18).

645    As the AER pointed out, one benefit of the changes was to provide a common framework for the determination of the rate of return under the NER and the NGR.

646    References by the parties to the AEMC’s 2012 Rule Amendments accurately reflect that. In principle, the AEMC sought to establish a process by which the AER would arrive at the best estimate of the rate of return that can be obtained, one which reflects the efficient financing costs of the service provider at the time of the regulatory determination.

647    The AEMC said at p 43:

A rate of return that reflects efficient financing costs will allow a service provider to attract the necessary investment capital to maintain a reliable energy supply while minimising the cost to consumers.

648    The AEMC noted that its regulatory approach could best achieve the NEO, the NGO and the RPP. The broadening of available methods, to be adopted by the AER at its election, was intended to fulfil that objective. It made the point at p 68 that “achieving the overall objective has primacy”. Thus, it saw the RoR 2013 Guideline, and the process by which they were to come into existence, as representing the correct balance between flexibility and regulatory certainty following the consultation required of and by the AER before fixing upon the Guidelines.

649    The AER has appropriately extracted from the 2012 Rule Amendments the following propositions summarising how the AEMC intended the 2012 Rule Amendments, in particular r 6.5.2 of the NER and r 87(2) of the NGR, to operate:

(a)    the RoR Objective has primacy in any estimation of the rate of return on equity (pp 18, 36 and 38-39);

(b)    the AER’s obligation to “have regard to” the material referred to in NER 6.5.2(e) when determining the allowed rate of return is subject to its obligation under NER 6.5.2(b) to determine the allowed rate of return such that it achieves the RoR Objective (and equally under NGR r 87(3) and 87(2)) (pp 36-37);

(c)    the AER must actively turn its mind to the factors listed, but it is up to the regulator to determine whether and, if so, how the factors should influence its decision (if at all) (pp 36-37);

(d)    it is important that the AER be given flexibility to adopt an approach to determining the rate of return that is appropriate to market conditions (p 44);

(e)    it is important for the AER to be transparent in its approach to determining the rate of return in order to maintain the confidence of service providers, investors and consumers in the process (pp 23 and 24);

(f)    it is important that all stakeholders (including consumers) have the opportunity to contribute to the development of the RoR 2013 Guideline and its evolution through periodic review every three years (pp 45-46);

(g)    the RoR 2013 Guideline should include details as to the financial models that the AER would take into account in making a determination, and why it has chosen those models over other models (p 70);

(h)    the RoR 2013 Guideline should provide a service provider with a reasonably predictable, transparent guide as to how the AER will assess the various estimation methods, financial models, market data and other evidence in meeting the overall RoR objective. The Guideline should allow a service provider to make a reasonably good estimate of the rate of return that would be determined by the AER if the Guidelines were applied (p 71); and

(i)    while the RoR 2013 Guideline are not determinative, these should “provide a meaningful signal as to the regulator’s intended methodologies for estimating return on equity” and be capable of being given “some weight” to narrow the debate about preferred methodologies and models. They should be used as a starting point in making a regulatory determination (p 71).

650    It is apparent also that the AEMC did not consider that the rate of return estimates should be driven by a single financial model, whether the SL CAPM or another model, or by one estimation method. The available relevant evidence should be considered. As the DNSPs and JGN pointed out, the AEMC recognised that, in any event, other models may be useful as all have weaknesses to some degree, including that they are all based on certain theoretical assumptions, so that no one model can be said to provide the right answer.

651    Indeed, it is commonly accepted that the AEMC’s view (see the AEMC’s Economic Regulation of Network Service Providers, and Price and Revenue Regulation of Gas Services, Draft Rule Determinations, 23 August 2012, at p 48) that “estimates are more robust and reliable if they are based on a range of estimation methods, financial models, market data and other evidence” is a sensible one.

652    Following the 2012 Rule Amendments, the AER from December 2012 undertook a careful consultation process, including through its Issues Paper of December 2012, then its Consultation Paper of May 2013 and its draft RoR 2013 Guideline of August 2013, before publishing the RoR 2013 Guideline on 17 December 2013.

653    In the RoR 2013 Guideline, the AER indicated that it proposed to apply a six-step methodology in order to determine the estimated rate of return on equity. Those six steps are:

(1)    identify relevant material: in this step the AER identifies relevant methods, models, data and evidence;

(2)    determine role: the AER then assesses each piece of material against a set of criteria that it set out in the RoR 2013 Guideline (discussed below). These criteria are then used to determine what role each piece of material would play in the determination of the return on equity. Under the AER’s approach, each piece of material could either be:

(i)    used as the foundation model (noting there could be only one); or

(ii)    used to inform the foundation model; or

(iii)    used to inform the overall return on equity; or

(iv)    not used in any way;

(3)    implement foundation model: in this step the AER determines a range and point estimate for the foundation model return on equity, based on the information from step two;

(4)    other information: the AER uses other information to inform the overall return on equity estimate;

(5)    evaluate information set: the AER evaluates outputs from steps three and four above, identifying patterns and investigating conflicting information; and

(6)    distil return on equity point estimate: the AER uses the foundation model point estimate to inform a starting point. Based on the information from steps four and five, it then selects as the final return on equity value either the foundation model point estimate or a value within the foundation model range differing from it by a multiple of 25 basis points.

654    The RoR 2013 Guideline identified that the SL CAPM was to be used as the foundation model, that the Black CAPM was to be used to inform the parameter estimate of the equity beta for use in the SL CAPM, and that dividend growth models (DGMs) were to be used to inform the parameter estimate of the market risk premium (MRP) for use in the SL CAPM, with no role for the Fama-French three factor model (the Fama-French model). Table 5.2 identified a broad range of other information which was relevant or potentially relevant to the AER’s task and is reproduced below:

Table 5.2    Role of other information

Material (step one)    Role (step two)

Commonwealth government securities    Inform foundation model parameter estimates (risk free rate)
Observed equity beta estimates    Inform foundation model parameter estimates (equity beta)
Historical excess returns    Inform foundation model parameter estimates (MRP)
Survey evidence of the MRP    Inform foundation model parameter estimates (MRP)
Implied volatility    Inform foundation model parameter estimates (MRP)
Other regulators’ MRP estimates    Inform foundation model parameter estimates (MRP)
Debt spreads    Inform foundation model parameter estimates (MRP)
Dividend yields    Inform foundation model parameter estimates (MRP)
Wright approach    Inform the overall return on equity
Takeover and valuation reports    Inform the overall return on equity
Brokers’ return on equity estimates    Inform the overall return on equity
Other regulators’ return on equity estimates    Inform the overall return on equity
Comparison with return on debt    Inform the overall return on equity
Trading multiples    No role
Asset sales    No role
Brokers’ WACC estimates    No role
Other regulators’ WACC estimates    No role
Finance metrics    No role

The AER’s Final Decisions

655    The AER made Final Decisions in relation to the return on equity for each of the Network Applicants in substantially the same terms. The AER’s Final Decisions on the return on equity for each of the Network Applicants are set out in Attachment 3 to each of the AER’s Final Decisions for the 2015-16 to 2018-19 period, each dated 30 April 2015, except for the JGN Final Decision dated 3 June 2015.

656    The AER Final Decisions rejected the Network Applicants’ proposal that the return on equity be calculated by reference to four models, being the SL CAPM, the Fama-French model, the Black CAPM and a CAPM informed by the dividend growth models (DGM).

The AER’s Foundation Model approach

657    Instead, the AER adopted the “foundation model” approach to estimating the return on equity, as it regarded that approach as consistent with the RoR 2013 Guideline. The AER used the SL CAPM as the foundation model, as it considered it to be superior to all other models for estimating the expected return on equity by reference to the BEE.

658    The AER considered that the SL CAPM:

(a)    was the current standard asset pricing model of modern finance, both in theory and practice;

(b)    has been in use for a long period to estimate expected equity returns and transparently represents the key risk and reward trade-off at the heart of the AER’s task;

(c)    is widely accepted; and

(d)    is consistent with the approach employed by financial market practitioners.

659    The AER did consider other models, including the Black CAPM, the DGM and the Fama-French model. The AER said it used the theory behind the Black CAPM to inform the equity beta to be used in the foundation model, and used the DGM to inform the MRP. It is noted that the Network Applicants contend that the AER erred at this point, because:

(i)    it “disregarded” estimates from other models, provided through other experts’ reports, and did not itself use other models with its own estimated input data; and

(ii)    gave “entirely subsidiary roles” to the other models, rather than using them to determine the return on equity.

660    It is therefore appropriate to note how the Final Decisions record the AER’s regard to other models.

661    The AER considered that the Black CAPM relaxes one of the key assumptions of the SL CAPM, namely the assumption that investors can borrow and lend unlimited amounts at the risk free rate. It is accepted that this assumption leads the SL CAPM to underestimate the return required for low-risk investments. In place of that assumption, the AER said, the Black CAPM assumes that investors can engage in unlimited short selling. It regarded this assumption as not reflecting how the stock lending markets work, because short sellers are required to post collateral in the form of cash or equity when borrowing stock. It also noted that, in place of the risk free asset in the SL CAPM, the Black CAPM substitutes the minimum variance zero beta portfolio, which requires estimating an additional parameter (the zero beta expected return) in order to use the Black CAPM to empirically estimate the point estimate for the return on equity.
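The structural difference between the two models may be stated compactly, using the standard textbook formulations and setting them out for illustration only:

    SL CAPM:    E[r_i] = r_f + β_i × (E[r_m] − r_f)

    Black CAPM:    E[r_i] = E[r_z] + β_i × (E[r_m] − E[r_z])

where r_f is the risk free rate, E[r_m] the expected return on the market, β_i the equity beta and E[r_z] the expected return on the zero beta portfolio. To the extent that E[r_z] exceeds r_f, as empirical studies generally suggest, the Black CAPM produces a higher expected return than the SL CAPM for assets with an equity beta below one. It is the estimation of E[r_z], the additional parameter referred to, which the AER regarded as problematic.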

662    However, consistent with the RoR 2013 Guideline, the AER said it did use the theory underlying the Black CAPM to inform the estimate of the equity beta. As the Network Applicants contend, all that the AER did was use the weakness of the SL CAPM identified by the Black CAPM as a rationale for selecting a beta at the top of the range suggested by the SL CAPM. As considered below, this approach may be justifiable, but it is hardly “using the theory”. It did not use the Black CAPM empirically to estimate the return on equity for the BEE, thereby not accepting the arguments advanced by the service providers to the contrary. It gave reasons for that conclusion.

663    DGMs use dividends forecasted by market analysts to derive the return on equity by assuming that the market value of the equity in the business is equal to the present value of future dividends. In the RoR 2013 Guideline, the AER determined that it would limit the use of the DGMs to the function of informing the MRP in the SL CAPM.
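In its simplest constant growth (or Gordon) form, set out here only to illustrate the mechanics described above, a DGM solves for the return on equity r_e from the current price of the equity P_0, the forecast dividend for the next period D_1 and an assumed constant long run dividend growth rate g:

    P_0 = D_1 / (r_e − g), so that r_e = D_1 / P_0 + g

The sensitivity to input assumptions to which the AER referred is apparent from the rearranged form: holding the dividend yield constant, any change in the assumed growth rate g flows one-for-one into the estimated return on equity.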

664    It did not use a DGM return on equity for the BEE. The AER’s reasons for that were:

(a)    there was not sufficiently robust data of dividend yields for Australian energy network service providers;

(b)    it was also unclear whether there was a sufficiently robust method for estimating the dividend growth rate for Australian energy network service providers; and

(c)    the sensitivity of a DGM to its input assumptions limits its usefulness as a foundation model.

The AER noted that simple DGMs generated returns on equity for energy infrastructure businesses which significantly exceeded the average return on equity for the market. This, it said, did not make sense because a regulated natural monopoly was much less risky than the overall market, and should therefore have a lower return on equity.

665    The AER also considered that the Fama-French model was not appropriate to use. The risk factors used by the model are the return on the market, firm size (measured by market capitalisation) and the ratio of book value to market value. By reference to the RoR 2013 Guideline, the AER considered that:

(a)    there was little evidence of companies or regulators using the model;

(b)    empirical implementation of the model is relatively complex and opaque;

(c)    its estimates are sensitive to the chosen estimation period and methodological assumptions;

(d)    there is a lack of theoretical foundation for the factors; and

(e)    the instability of parameter estimates, and the fact that the risk factors are observed on a backward looking basis, mean that there is no assurance those factors will apply on a forward looking basis.

666    The AER, then, needed to address the risk free rate, the equity beta, and the MRP for the purposes of its modelling.

667    In relation to the risk free rate, the AER was satisfied that the yields on Commonwealth government securities with a 10 year term to maturity represented a widely accepted proxy for the risk free rate. That is not contentious.

668    In relation to the MRP, after observing that the MRP cannot be directly observed, the AER considered a range of conceptual and empirical evidence to enable it to determine a point estimate that had regard to prevailing conditions in the market for equity funds.

669    The evidence that the AER had regard to in estimating the MRP was historical excess returns, DGM estimates (from its preferred construction of the DGM), survey evidence of the expectations of investors and market practitioners, conditioning variables (dividend yields, credit spreads and implied volatility) and recent decisions by Australian regulators.

670    The AER noted that there was no consensus among experts on which method produces the best estimate of the MRP and that estimates of it are diverse and can vary over time. As noted, the AER used DGM estimates (from its preferred construction of the DGM) to inform the estimate of the MRP, having regard to evidence that the output from the models is very sensitive to input assumptions and likely to show an upward bias in current market conditions. In that, it was supported by advice from McKenzie and Partington, in their report to the AER: Part A: Return on Equity, October 2014, at p 9 (the 2014 McKenzie Partington report).

671    In relation to the equity beta, the AER noted that it too cannot be directly observed, and considered a broad range of information in order to inform its estimate. The evidence that the AER had regard to in estimating the equity beta included empirical estimates based on Australian energy network firms, conceptual analysis of a BEE’s systematic risks relative to the market average, international empirical estimates and the theory of the Black CAPM. Consistent with the RoR 2013 Guideline, it appears to the Tribunal that the Black CAPM was not “used” to produce a beta estimate; only the theoretical insight that the SL CAPM tends to understate returns on low-beta assets was used to justify selecting a point estimate at the top of the empirical range.

672    The AER adopted an equity beta point estimate of 0.7 from a range of 0.4 to 0.7. It was satisfied that an equity beta of 0.7 is reflective of the systematic risk a BEE is exposed to in providing regulated services, and was likely to contribute to the achievement of the RoR Objective. The position adopted in the Final Decisions is the same as that set out in the RoR 2013 Guideline; that is, the AER did not depart from the RoR 2013 Guideline in its determination of the equity beta of the BEE.

673    Through the above assessments in the Final Decisions, the AER adopted a risk free rate of 2.55 percent, an equity beta of 0.7 and a MRP of 6.50 percent as the input parameter values for the SL CAPM. The return on equity estimated by the SL CAPM using these parameter values was 7.1 percent.
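The derivation of the 7.1 percent figure from those inputs is a straightforward application of the SL CAPM formula:

    r_e = r_f + β × MRP = 2.55% + (0.7 × 6.50%) = 2.55% + 4.55% = 7.10%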

The principal concerns of the Network Applicants

674    The Network Applicants are critical of the outcome of the AER approach: a return on equity of 7.1 percent.

675    They submit, first, that the AER’s approach to the assessment of the equity beta is flawed because the AER wrongly ring-fenced the range of the equity beta before considering other evidence to select a point within that range. Secondly, they submit that the adjustment to the SL CAPM equity beta was in error because the AER’s adjustment in the light of the Black CAPM was arbitrary and not based on any empirical evidence.

676    They are also critical of the AER’s conclusion on the MRP because, they say, it unduly weighted historical average excess returns and disregarded or discounted other relevant evidence, including the “Wright approach” (in very simple terms, an alternative means of calculating the MRP in the SL CAPM recommended by Professor Wright: see, for example, Wright, Review of risk free rate and cost of equity estimates: A comparison of UK approaches with the AER, October 2012). In this regard, they also say the AER erred in adjusting estimates of the MRP for the value of imputation credits by applying an incorrect formula in making that adjustment.
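In very broad terms, and only by way of explanation of the competing positions, the Wright approach proceeds on the footing that the expected overall (real) return on the market is relatively stable over time, so that the MRP is derived residually from the prevailing risk free rate rather than being estimated directly and held relatively stable:

    MRP_t ≈ E[r_m] − r_f,t, with E[r_m] treated as approximately constant in real terms

On that approach a fall in the risk free rate implies a broadly offsetting rise in the MRP, whereas on the AER’s approach the MRP estimate is comparatively stable, so that a fall in the risk free rate flows more directly into a lower return on equity.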

677    Finally, at a more general level, they contend that the estimate fixed for return on equity was not reasonable, and can be shown to have failed the appropriate cross-checks.

The AER’s consideration of other material

678    That complaint flows from the next step in the AER’s methodology. That was to undertake an examination of other information that could inform an overall estimate of the return on equity, as outlined in Figure 5.3 from Attachment 3 to the Ausgrid Final Decision. The AER considered the spread between debt and equity risk premiums, and return on equity estimates from the Wright approach, valuation reports, broker reports, and other regulators’ decisions. The AER considered that the range of MRPs derived from this other information supported its use of the foundation model and its foundation model return on equity estimate. Although investors’ required return on equity is unobservable, the AER noted that the observable risk premiums on debt did not materially change in the face of declines in the risk free rate.

679    At its conclusion, the AER considered that an expected return on equity derived from the SL CAPM should be the starting point for estimating the return on equity, and that the other information did not indicate that the equity MRP estimate should be uplifted or downshifted to better contribute to the achievement of the RoR Objective.

680    The AER was satisfied that an expected return on equity estimate of 7.1 percent derived from its implementation of the SL CAPM would contribute to the achievement of the RoR Objective and was commensurate with the prevailing market conditions (as required by NER r 6.5.2(f) and (g) and NGR r 87(6) and (7)).

PIAC’s Contention

681    PIAC, as might have been expected, carefully addressed the AER’s methodology and conclusions and the contentions of the Network Applicants critical of those methods and conclusions.

682    Equally of present significance, PIAC contended that the AER erred in selecting a point value of the equity beta of 0.7 from the range 0.4-0.7.

683    In preparing the RoR 2013 Guideline, the AER drew upon, and reflected in it, the work of Professor Henry for the AER: Estimating β, 2009 (Henry 2009 Report). It is referred to in the RoR Explanatory Statement. Professor Henry noted that the empirical estimates then available justified a point estimate of about 0.55 for equity beta. The AER nevertheless then, as it did in the Final Decisions, adopted a range of 0.4-0.7 as a reasonable range for the equity beta of a BEE.

684    PIAC notes that after the RoR 2013 Guideline was published, Professor Henry provided a paper Estimating β: an update, April 2014 to the AER (Henry 2014 Report). The Henry 2014 Report presented equity beta estimates consistently falling within the range of 0.4-0.7, but added that “most estimates clustered around 0.5”. That was expressly noted in the Final Decisions.

685    Based on the Henry 2014 Report, PIAC contends that the correct equity beta should have been 0.5, and so the AER erred by selecting a value of 0.7 at the top of the range which the RoR 2013 Guideline contemplated. It then says that, having regard to the obligations imposed on the AER by s 16(1)(d) of the NEL (as its applications concerned only the Final Decisions of the AER in relation to Networks NSW, it did not address the NGL), the AER should have proceeded on the basis of the equity beta of 0.5 to reach the materially preferable NEO decision.

Does s 71O of the NEL preclude PIAC from taking this position?

686    Before considering the issue whether PIAC is correct in its contention that the AER should have used an equity beta of 0.5, it is necessary to address the AER’s submission that PIAC is precluded from taking the point it has raised by reason of s 71O of the NEL. That section prescribes matters that may and may not be raised in a review.

687    Section 71O(2)(c) and (d) are the relevant provisions. They provide:

(2)    In a review under this Subdivision, the following provisions apply in relation to a person or body, other than the AER (and so apply at all stages of the proceedings before the Tribunal):

...

(c)    an affected or interested person or body (other than a provider under paragraph (a) or (b)) may not raise in relation to the issue of whether a ground for review exists or has been made out any matter that was not raised by the person or body in a submission to the AER before the reviewable regulatory decision was made;

(d)    subject to paragraphs (a), (b) and (c) –

(i)    the applicant, or an intervener who has raised a new ground for review under section 71M, may raise any matter relevant to the issues to be considered under section 71P(2a) and (2b); and

(ii)    any person or body, other than the applicant or an intervener who has raised a new ground for review under section 71M, may not raise any matter relevant to the issues to be considered under section 71P(2a) and (2b) unless it is in response to a matter raised by –

(A)    the AER under subsection (1)(b)(iii); or

(B)    the applicant under subparagraph (i); or

(C)    an intervener under subparagraph (i).

688    As observed, PIAC is an applicant in respect of the NSW DNSP Final Decisions and an intervener in the NSW DNSPs’ applications. Also as observed, while PIAC as an applicant did not seek to have the Final Decisions on the return on equity varied or set aside, it has as an intervener made submissions on the appropriate return on equity. In terms of s 71O(3), PIAC is an applicant (limited by the terms of its application) and an intervener for the purposes of s 71O(2)(d).

689    The AER acknowledges that PIAC made the same points at a level of generality during the process of consultation leading to the RoR 2013 Guideline, well before the Networks NSW Final Decisions. It says that PIAC did not raise the same issues in any submission to the AER specifically in relation to the making of its Draft Decisions, or between its Draft Decisions and its Final Decisions, in relation to Networks NSW.

690    It is clear that PIAC made its points in a submission leading up to the RoR 2013 Guideline: see its submission of 28 October 2013 at 28-29, including that the point estimate for equity beta should not be at the higher end of the range.

691    However, for the reasons in the following paragraphs, the Tribunal is of the view that submissions made to the AER not specifically concerning a reviewable regulatory decision which the AER is in the process of making cannot qualify as raising a matter relevant to “… the issues to be considered under section 71P(2a) and (2b)” when the reviewable regulatory decision is under review. Section 71O was repealed and replaced by the 2013 Legislative Amendments. In part that would appear to have been prompted by the decision of the Full Court in SPI Electricity Pty Ltd v Australian Competition Tribunal (2012) 208 FCR 151 (the SPI Case), where the Full Court held that the Tribunal had been in error to deny SPI relief in relation to a ground of review that it had raised in its initial regulatory revenue proposal, but about which SPI had remained silent in its revised revenue proposal after the AER had “decided” the point adversely to SPI in its draft determination. Section 71O(2)(a) and (b) require a DNSP to “raise and maintain” the submission before the AER.

692    Clearly, s 71O(2) envisages a less stringent standard is to be applied to non-network applicants and interveners as regards the nexus between the matters addressed in the submissions of a non-network applicant or an intervener to the AER during the regulatory decision-making process and the matters which that applicant is permitted to raise in its application for review. The requirement of having maintained the submission does not appear in s 71O(2)(c). That may reflect a primary purpose to improve the practical accessibility of the merits review process for organisations representing consumers whose long-term interests are the touchstone of the NEO and the NGO.

693    However, it is clear that s 71O is focused upon ensuring that a matter is properly raised in the course of, and in relation to, the particular regulatory review decision or decisions being made. The end words of each of subs (2)(a), (b) and (c), “before the reviewable regulatory decision was made”, are not merely temporal. If that were so, an applicant (subject to the “maintained” requirement) could go back to submissions made in relation to a previous regulatory review period. The matter must have been raised during, and in relation to, the process of making the reviewable regulatory decision. The addition of the “maintained” requirement is readily explained by the SPI Case. That construction also is dictated by s 71R confining the “review related matter” relevantly to “decision related matter” under s 28ZJ: see s 71R(1) and (6), and in turn the nature of the decision related matter. It is noted that s 71R(1)(a) extends the available review related matter to matter raised during the consultation under s 71R(1)(b).

694    The Tribunal observes that the word “matter” in s 71O may have a different meaning from its use in s 71R. In the former, it seems broadly to refer to a contention or issue; in the latter it seems broadly to refer to the documentary records. For present purposes, it is not necessary to explore any difference. The Tribunal takes s 71O as relating to a contention or issue raised before the AER in the course of its reviewable regulatory decision-making process. It would not be sufficient for a document provided to the AER to obliquely refer to a topic. The topic (contention or issue) should have been raised before the AER so that it was a matter which the AER had to consider.

695    In any event, in response PIAC has referred to its submission to the AER following the Networks NSW Draft Decisions, made on 13 February 2015. In particular, it refers to that submission at [36.6], [43.2] and [44.3]. The Tribunal notes that at [36.6] PIAC wrote:

PIAC was not comfortable with all the components of the AER’s rate of return approach in the Guideline. For instance, PIAC previously advised the AER that the equity beta set out in the Guideline (0.7) was overly conservative and did not recognise the extent to which the economic risks sat with consumers rather than the networks.

The submission at [43.3] expresses general concerns with a number of the constituent decisions that form part of the RoR 2013 Guideline, and suggests that the AER’s Draft Decisions may not best achieve the RoR Objective. It is not specific enough to say which particular “matter” (in the sense used in s 71O(2)(c)) is of concern to it. The submission at [44.3] expresses disagreement with the “sampling approach” used to calculate the equity beta, as US data was overweighted and was not properly reflective of the BEE. It also repeated the assertion that, given a range of equity beta between 0.4 and 0.7, the AER’s choice at the top of that range was “overly conservative” having regard to the “most recent updates to the empirical studies on Australian network companies”.

696    It is the Tribunal’s view that those references indicated clearly enough to the AER that PIAC did not consider the estimation of the equity beta at 0.7 as appropriate, and so it raised in its reply submission to the Draft Decisions relating to Network NSW the matter of whether the equity beta should be significantly lower than 0.7. It also raised the matter of whether the AER should have placed as much weight on the US data as it did. Its submissions to the AER, not surprisingly, also referred generally to the AER’s obligation under s 16(1)(d) of the NEL.

697    As the application of s 71O has been raised also in relation to certain other submissions by Networks NSW (having regard to certain submissions of the Vic/SA Interveners), it is convenient at this point to briefly note some other non-controversial points concerning the application of s 71O, which do not appear to have been altered by the 2013 Legislative Amendments.

698    For the purposes of s 71O, a “matter” means a controversy or thing in dispute that was raised in submissions to the AER: Re DBNGP (WA) Transmission Pty Ltd (No 3) [2012] ACompT 14 at [299] (DBNGP (WA) Transmission (No 3)). Consequently, a party will be permitted to raise a matter by way of argument before the Tribunal if it can be identified as broadly arising out of a matter fairly raised by that party before the final determination was made: Re Energy Australia [2009] ACompT 8 at [316(f)]. Whether the matter can be identified as broadly arising out of a matter fairly raised is a matter for the Tribunal to assess in practical terms, in the particular circumstances of the case: DBNGP (WA) Transmission (No 3) at [305].

699    The limited merits review provided for under both the NEL and the NGL, as explained in the Introduction section of these reasons, is premised upon the AER addressing a particular matter or topic (being mindful of the caution appropriate when using surrogate words) in the light of that matter or topic having been raised in submissions made to the AER by the party wishing to raise it before the Tribunal in relation to that reviewable regulatory decision then being made. The Tribunal is not to entertain a matter which has not been so raised before the AER. That is, as the preceding paragraph notes, a practical assessment to be made in the particular circumstances.

700    PIAC’s ground of review in relation to return on equity will be addressed in conjunction with considering the grounds of review of the Network Applicants. It is not a ground of review which is “alive” for the purposes of considering the return on equity allowed by the AER in the JGN Final Decision.

The Grounds of Review: Network Applicants

701    A summary of the reviewable errors asserted by the Network Applicants appears in [241] of their joint submissions. It is convenient to record it in detail, as it provides a structure for the further consideration of this topic.

702    The joint submissions at [241] assert the following:

(a)    The AER’s Final Decisions were based on an incorrect construction and application of the Rules, in that the AER chose not to have regard to relevant models, contrary to clause 6.5.2(e) of the NER / rule 87(5) of the NGR, and thus the exercise of the AER’s discretion was incorrect.

(b)    The AER made errors of fact in its findings of fact, each of which was material to the making of its Final Decisions:

(i)    in finding that the required return on equity for a benchmark efficient entity with a similar degree of risk as each of the Network Applicants was 7.1%;

(ii)    in concluding that the SL CAPM was superior to all other relevant models (and all possible combinations of these relevant models), and in concluding that other relevant models relied on by the Network Applicants were not suitable, including because they were “empirically unreliable” or “lacked theoretical foundation”, to estimate the return on equity;

(iii)    in concluding that an MRP of 6.5% was reflective of prevailing market conditions;

(iv)    in concluding that the equity beta for a benchmark efficient entity with a similar degree of risk as each of the Network Applicants was 0.7; and

(v)    in concluding that its “cross-checks” supported or confirmed its return on equity estimate of 7.1%.

(c)    The errors of fact identified in (b) above were relied upon by the AER in the exercise of its discretion, and thus led to the incorrect exercise of the AER’s discretion.

(d)    The AER’s decision was unreasonable in that it failed to take into account relevant considerations, including:

(i)    not having regard to estimates of the return on equity from all relevant financial models as is required by the Rules as a result of the errors of fact in (b) above, and in particular the errors of fact in relation to the assessment of SL CAPM and the other models relied upon by the Network Applicants;

(ii)    not having regard to relevant information in relation to the MRP, including independent valuation reports and estimates from the Wright approach; and

(iii)    the material before it and on which it purported to rely, but correctly interpreted.

(e)    The AER’s exercise of discretion was incorrect, and/or the Final Decisions were unreasonable in all the circumstances, in that the AER:

(i)    adopted an inconsistent (and therefore irrational or illogical) approach to assessing the merits of the SL CAPM and the merits of other relevant models or approaches; and/or

(ii)    irrationally and illogically gave sole weight to the SL CAPM notwithstanding the recognised deficiencies of the model.

(f)    The AER’s exercise of discretion was incorrect, and/or the AER’s Final Decisions were unreasonable, in that the AER constrained the use of relevant evidence in relation to the equity beta from international data samples by using it only to inform the selection of a point estimate from a range based on a different (and very limited) data sample.

(g)    The AER’s exercise of discretion was incorrect, and/or the AER’s Final Decisions were unreasonable, in that the AER purported to use the theory of the Black CAPM to select an equity beta of 0.7 when in fact the Black CAPM is not a model for calculating the equity beta, the AER did not calculate any adjustment that had to be made, and the AER’s beta range was not a correct range for beta in any event.

(h)    The AER’s exercise of discretion was incorrect, and/or the AER’s Final Decisions were unreasonable, in that the AER failed to have proper regard to evidence of the prevailing MRP from its own DGM analysis, and instead constrained the role of this evidence to indicating whether an estimate above or below the “baseline” estimate (based on historical evidence) should be adopted.

703    The Vic/SA Interveners made separate submissions, largely directed to the same categories of error.

704    They each have a very real interest in the correctness or otherwise of the AER’s Final Decisions on the rate of return on equity. That is not in issue, so it is not necessary separately to record the stage of each of the regulatory review decisions being made by the AER in relation to their respective regulatory proposals for 2015-19.

705    Each of the Vic/SA Interveners is a privatised business raising capital and borrowing in the equity and debt markets. Obviously, the AER Final Decisions in relation to the Network Applicants on rate of return for equity, rate of return for debt, and the value of imputation credits are significant to them. Any discordance from prevailing market conditions will, in the long run, potentially affect their respective businesses to a material extent.

706    On this topic, they contend in broad terms that the AER’s “foundation model” approach to determining the allowed return on equity is based on the erroneous proposition that since the global financial crisis, prevailing rates of return for equity have moved downwards, one-for-one, in line with falls in base interest rates and can be best estimated by exclusively using the SL CAPM, with flawed AER ranges for beta and the MRP.

707    Ergon also broadly supported the joint submissions of the Network Applicants. Its submission on this topic is encapsulated in [1] of its submission where it said that the AER made an error of fact in finding that applying the SL CAPM as the foundation model would lead to a rate of return that meets the rate of return objective, when the evidence before the AER was that the SL CAPM underestimates the rate of return required by businesses with less than average risk.

708    Those submissions will be considered in conjunction with those of the Network Applicants. As noted, PIAC (apart from its own “ground of review”) supported the decision making process and analysis by the AER.

Consideration

The relevant Rules

709    On this topic, there was emphasis on the relevant rules, introduced by the 2012 Rule Amendments. It is necessary to note certain features of them. In particular, the allowed rate of return (r 6.5.2 NER, r 87 NGR) is to be determined such that it achieves the RoR Objective: r 6.5.2(b) NER; r 87(2) NGR.

710    NER r 6.5.2(c) and NGR r 87(3) then state the RoR Objective: that the rate of return is to be commensurate with the efficient financing costs of a BEE with a similar degree of risk as that which applies to the DNSP/service provider in respect of the provision of standard control services/reference services. In this context (unlike the issues debated in relation to the return on debt, as to which see below), the expression “benchmark efficient entity” (BEE) was not the subject of particular submissions.

711    The AER was required by r 6.5.2(e) NER and r 87(5)(a) NGR to have regard to “relevant estimation methods, financial models, market data and other evidence” in determining the allowed rate of return. As already noted, that change removed from the NER the provision requiring the return on equity to be determined using the SL CAPM, and from the NGR the provision that it be determined using a well-accepted financial model such as “the Capital Asset Pricing Model”, and substituted the above formulation. The 2012 Rule Amendments also introduced the RoR Objective as defined, and obliged the AER periodically to publish a RoR Guideline.

The application of the Rules

712    The AEMC made it clear, as the rules referred to in the preceding paragraph indicate, that the AER is to consider a range of sources of evidence and analysis to estimate the rate of return: see eg sections 6.2.4 and 6.5 of the AEMC’s 2012 Rule Amendments (at pp 48-49 and 56-57).

713    In the context of the first submission of the Network Applicants (and the submission of the Vic/SA Interveners), the obligation of the AER “to have regard to” the matters prescribed was itself the subject of submissions. The Network Applicants say that the AER did not have regard to models other than the SL CAPM, and to use their word it “chose” not to do so. The Tribunal takes the obligation on the AER so expressed as requiring it to give consideration to the range of sources of evidence and analysis to estimate the rate of return. It need not give particular weight to any one source of evidence, and indeed it might treat particular evidence as having little or no weight in the circumstances. It is for the AER to make that assessment. It may also have regard to other factors. See generally Rathbone v Abel (1964) 38 ALJR 293 at 295 and 301; R v Hunt; Ex parte Sean Investment Pty Ltd (1979) 180 CLR 322 at 329; Turner v Minister for Immigration and Ethnic Affairs (1981) 35 ALR 388 at 392; R v Australian Broadcasting Tribunal; Ex p 2HD Pty Ltd (1979) 144 CLR 45 at 49-50.

714    The AER accepted that it did not itself “run” models other than the SL CAPM. The outcomes of other models were presented to it through various expert reports. It considered, but did not adopt, those outcomes. It is said by the Network Applicants that the AER’s approach was based upon an incorrect step – both non-compliant with the Rules and in fact – that the SL CAPM was a superior model and so an appropriate “foundation model” for the purposes of the RoR 2013 Guideline.

715    The relevant textual features, in the view of the Tribunal, are the breadth and generality of the words “relevant estimation methods, financial models, market data and other evidence”. They do not suggest a prescriptive obligation to consider particular methods, models or data. If that were intended, one would expect it to be more precisely prescribed. Rather, it is left to the AER to decide what is “relevant”, and a dispute about relevance is not itself a basis for asserting error of the character now asserted. In fact, the AER did have regard – in the sense of giving consideration – to the material put forward by the Network Applicants. The same reasoning suggests that the obligation to “have regard to” certain material is to consider it and to give it such weight as the AER decides. Again, if a more sophisticated obligation were intended, it is likely it would have been differently expressed. The main contextual matter indicating the nature of the obligation is the regulatory framework where the RoR Objective is as set out above. It, too, indicates that the requirement to have regard to certain material is not prescriptive in the sense argued for by the Network Applicants. The RoR Objective is the general umbrella concept which the prescribed process is to serve; it would not serve it by requiring particular weight to be given to particular materials. That conclusion is also supported by the AEMC’s views referred to, which indicate that it is left to the AER as the regulator to decide within the relevant Rules how it arrives at a rate of return which is robust and sensible and best achieves the RoR Objective.

716    That, the AER pointed out, is consistent with the RoR 2013 Guideline, which evolved in the manner described earlier in these reasons. In the AER Consultation Paper, Rate of Return Guidelines, May 2013, the AER had put forward four broad approaches. That paper was followed by extensive consultation, the Draft RoR 2013 Guideline, the Issues Paper on equity beta and, finally, the RoR 2013 Guideline.

717    The Tribunal does not regard the AER’s approach as reflecting any misunderstanding of the relevant Rules or any fundamental misapplication of them. It does not consider that the RoR 2013 Guideline is itself a departure from their prescription. Consequently, the AER’s decision to follow the process set out in the RoR 2013 Guideline was not itself necessarily erroneous.

718    The starting point for that conclusion is the NEL and the NGL themselves, principally the NEO and the NGO respectively and the complementary RPP. Their collective significance is explained in Re Application by ElectraNet Pty Limited (No 3) [2008] ACompT 3 at [18] and in Application by Energy Australia and Others [2009] ACompT 8 at [79]-[82].

The use of the SL CAPM model

719    The AER’s approach is supported by the expert advice it received in the McKenzie Partington 2014 Report and by Professor Handley, Advice on the Return on Equity, October 2014 (the Handley 2014 Report) (as well as the material sourced in the RoR 2013 Guideline and the earlier Consultation Paper and Draft RoR 2013 Guideline referred to above). As its Final Decisions disclose, it was well alive to the SL CAPM providing a starting point only. Whilst it used the SL CAPM as its foundation model, the AER did not then adopt its outcome without careful consideration of other sources of information. As noted, expert advice supported that as a starting point.

720    The AER’s approach in this regard does not lead to the view that it ignored the strengths and weaknesses of the SL CAPM, or the strengths and weaknesses of other models. Its subsequent analysis shows that it was not “locked in” to one model, relied on to the exclusion of all others.

721    The Tribunal notes the material referred to by the Network Applicants to support their proposition that the SL CAPM was an inappropriate starting point. That material is divided into four categories:

(a)    the empirical literature, which shows that the SL CAPM in fact performs poorly against the empirical data, relative to other relevant financial models;

(b)    a recent study by NERA: Empirical Performance of Sharpe-Lintner and Black CAPM: A report for Jemena Gas Networks and others, February 2015 (NERA 2015 Report), which confirms the findings of the earlier literature that documents the poor empirical performance of the SL CAPM, and shows that the Black CAPM is in fact superior to the SL CAPM in terms of its performance against the empirical data (NERA also separately provided a review of the literature in March 2015);

(c)    the fact that alternative models have been developed specifically to overcome observed difficulties with the SL CAPM; and

(d)    expert evidence, which demonstrated that all models have strengths and weaknesses and that no one model is superior. No expert advised that the SL CAPM is superior to all other models and to all possible combinations of models (including combinations that include the SL CAPM).

722    The Network Applicants expand upon their proposition in their joint submissions at [45]-[68] and Appendix A. In their oral submissions, the Tribunal was taken extensively and appropriately to that material. The largely historical empirical literature indicated a bias in the SL CAPM, in that it systematically underestimates returns for low-beta stocks and for stocks with high book-to-market ratios (such as network businesses). That was confirmed, at least on the first-mentioned point, by the NERA 2015 Report.

723    The AER refers in turn to expert opinion supporting its characterisation of, and as a starting point its reliance on, the SL CAPM, together with its analysis of broker and valuation reports, and evidence of its use by market practitioners. It also referred to its analysis of the strengths and weaknesses of alternative models, both in the course of settling upon the SL CAPM as the “foundation model” in the RoR 2013 Guideline and in its Final Decisions. It is not necessary to refer in detail to that analysis. In Attachment 3 to the Ausgrid Final Decision, it appears at pp 3-244 to 3-271.

724    The difficulty of categorising the applicable ground of review for an asserted error on the part of the AER was raised in the course of the submissions as to whether leave to apply for review should be given. That difficulty persists. It is clear enough what the contentions are of the Network Applicants (supported by the Vic/SA Interveners and by Ergon), but their correct categorisation is a little more elusive.

725    The Tribunal has recorded the Network Applicants’ contention that the use of a foundation model was not in accordance with r 6.5.2(e) of the NER or r 87(5) of the NGR. For the reasons given, no asserted error of discretion in that respect is made out.

726    Nor does the Tribunal consider that the Network Applicants have shown that the AER made a factual error in deciding to use the SL CAPM as its foundation model in preference to other material. That other material exposed the risk of bias where the entity concerned has an equity beta of less than 1. The AER was alert to that. It considered a range of expert and market material on the low beta issue, partly noted above. Some of the expert opinion was apparently more forceful than other parts of it. It is not shown to have misunderstood that material.

727    The Tribunal would, of course, substitute its own finding of fact as to the relative suitability of the models if it had reached the view that the AER’s assessment of their relative suitability was incorrect. It would do so where complex “opinion” about the respective quality of particular models was shown to be incorrect. If it were satisfied of such a matter, that assessment would itself involve a finding of fact different from that of the AER, and on merits review would mean a ground of review was made out. Alternatively, the Tribunal would substitute its own exercise of any discretion about the use to which particular information (including expert opinion) should be put, or how it should feed into the critical conclusion on this aspect, if it were satisfied that the AER’s discretion should have been otherwise exercised.

728    The Tribunal, like the AER, has access only to the materials before it. The opinions of the experts have not been tested by any process of joint exchange of views, whether before the AER or before the Tribunal. The sequential exchange of written opinions, and the variety of views expressed, suggests that there is no clearly correct view among the genuinely held opinions of the experts, but that matters of fine judgment are involved. The end point for the Tribunal on this aspect is that it is not satisfied that the adoption by the AER of the SL CAPM as the foundation model is incorrect.

729    Ergon’s submission noted above asserts an error of fact of a different character. Correctly, in the view of the Tribunal, it starts by emphasising that the return on equity for a regulatory control period must be estimated in such a manner that it contributes to achievement of the RoR Objective (NER, r 6.5.2(b)) and, in turn, that the rate of return for a DNSP (Ergon was concerned as an electricity network provider) is to be commensurate with the efficient financing costs of “a benchmark efficient entity with a similar degree of risk as that which applies to the Distribution Network Service Provider” in respect of the provision of standard control services (NER, r 6.5.2(c)). It is accepted by the AER that a BEE has a “low risk profile”, is “not average risk” and has a “very low” business risk.

730    Ergon is then critical of the AER for not adopting estimation methods and financial models that would most accurately estimate the return on equity for a business with a lower than average degree of risk. It says that the AER, by using assessment criteria set out in the RoR 2013 Guideline to select a model to use as a “foundation model”, fell into error. That is firstly because the NER and the NGR do not require or authorise the determination of the allowed rate of return by the use of a foundation model (a contention the Tribunal has not accepted), and secondly because the SL CAPM was not a model which could or would accurately estimate the return on equity for businesses with lower than average risk. It is apparent that the AER, by using the RoR 2013 Guideline process, selected a foundation model by criteria that were general in nature, and not by reference to a model that would in itself produce a rate of return commensurate with the efficient financing costs of a BEE with a lower than average degree of risk.

731    It is, as the AER noted, correct that the SL CAPM, implemented with its three parameters – equity beta, risk free rate, and MRP – is recorded as giving a low beta bias for businesses with a beta (that is, the risk of the asset relative to the average asset) of less than 1.0, and that the Network Applicants are all within that group. There was also evidence that the low beta bias is exacerbated when it is combined with conditions of low government bond rates and a high MRP. Those conditions were applicable at the time of the AER Final Decisions. The AER at p 3-240 of Attachment 3 to the Ausgrid Final Decision concluded that “notwithstanding potential limitations with the model, we consider that our implementation of the model recognises any potential empirical limitations”.

732    Ergon says this conclusion was erroneous, and the AER then erred by considering whether the SL CAPM could be adjusted so as to mitigate the effects that made SL CAPM unsuitable for use as a foundation model, when it should have considered whether the SL CAPM was suitable for selection as the foundation model.

733    Ergon, as with the Network Applicants, was also critical of the way the AER used the SL CAPM.

734    The contention requires the Tribunal to focus on s 71C(1)(a) of the NEL (and s 246(1)(a) of the NGL). It is necessary to consider whether there was an error of fact as asserted, and whether it was material to the making of the decision.

735    The Tribunal does not consider that the AER, by selecting the SL CAPM as its foundation model, made an error of fact. It was aware of the shortcomings of the SL CAPM, and in broad terms of the shortcomings of other models. It analysed their respective qualities, including as assessed or reported on by a range of expert commentators. Whilst it is possible to argue for an alternative model as the more suitable (Ergon argues for the Black CAPM), the Tribunal is faced with a range of competing views, and that does not take the Tribunal to the conclusion that the AER’s selection of the SL CAPM involved an incorrect finding of fact. To get to that point would require reaching a firm view that a different model should have been chosen. The conflicting expert opinions, and supporting contentions based on other material, do not – in the Tribunal’s assessment – get beyond showing that there are reasonable arguments for an alternative foundation model.

The challenged findings of fact

736    The Tribunal turns then to the implementation of the six-step methodology in the RoR 2013 Guideline, using the SL CAPM as the foundation model. It is mindful of the RoR Objective and of the submission that the six-step methodology does not estimate the cost of equity in a manner that contributes to the RoR Objective, but rather adds an unnecessary layer of complexity to the estimation process that is not mandated or authorised by the NER, and diverts attention away from the AER’s regulatory task.

737    That last mentioned submission is a useful reminder of the AER’s task, and re-affirms the view the Tribunal has taken – having regard to the ultimately qualitative assessment to be made to select a rate of return which achieves the agreed RoR Objective – as to the proper consideration of the factors prescribed.

738    As Appendix D to Attachment 3 to its Final Decisions discloses, the AER went through several steps before finally adopting its estimated equity beta of 0.7. They are, briefly, conceptual analysis; Australian empirical analysis; international empirical estimates; the theory of the Black CAPM; and then the selection of the range and the point estimate.

739    It used its conceptual analysis to ascertain an expectation of the systematic risk of the BEE relative to the market average firm. That is as a cross-check against the empirically derived range. The AER considered two types of systematic risk were relevant to this analysis: business risk and financial risk. The AER concluded that the intrinsic business risk of a firm is the primary driver of its systematic risk, and that this intrinsic risk is low for the BEE (relative to the market average firm). The AER accepted that, while the BEE had a relatively high financial gearing of 60 percent (as was commonly accepted in the submissions), compared to the market average firm (30 to 35 percent), this did not imply that it had an equivalently high exposure to financial risk. This is because the exact relationship between financial risk and financial leverage is not straightforward. For example, the likelihood of bankruptcy as leverage increases is low (to the extent that the business is able to pass on borrowing costs to consumers). This conclusion was supported by advice in the McKenzie and Partington 2014 Report.

740    The AER then reviewed the empirical evidence on equity betas produced by Professor Henry on the instructions of the AER in the Henry 2009 and Henry 2014 Reports. Henry performed regression analysis on data comprising the weekly returns of nine Australian energy firms. He concluded that the majority of the evidence suggested that the point estimate for the equity beta lay in the range 0.3 to 0.8. He considered that it was difficult, given the differences in sample periods and sizes underlying the various individual estimates, to pin down a value for the beta of a “typical firm”. However, within the range 0.3 to 0.8 the average of the ordinary least squares estimates for the nine firms was 0.5223. Table 2 of the Henry 2014 Report, reflecting that data, presents beta estimates for the individual firms over the longest estimation period, using a weekly return interval.

741    The AER reviewed various regression permutations from the Henry 2014 Report. It concluded that Henry used credible econometric techniques and incorporated robustness checks for data outliers, thin trading and parameter instability in his analysis. The AER considered that the Henry 2014 Report indicated a best empirical estimate of approximately 0.5 for the BEE. This was because most of his estimates were clustered around 0.5, as shown in the following graph appearing at p 3-413 of Attachment 3 to the Ausgrid Final Decision:

Figure 3.27    Equity beta estimates from Henry's 2014 report (average of individual firm estimates and fixed weight portfolio estimates)

742    This graph featured significantly in the submissions of PIAC.

743    The AER also considered a range of other empirical evidence on the equity beta, including a report from Grant Samuel & Associates, Envestra: Financial services guide and independent expert’s report, March 2014, which, based on various data, estimated equity betas for the sector of 0.42 to 0.64. That material is tabulated in eg the Ausgrid Final Decision, Attachment 3 – Rate of Return at pp 3-417 to 3-418.

744    Upon careful review of the AER Final Decisions, it does not appear that, when it considered evidence of equity beta estimates of energy companies operating in foreign countries, that material was used as the primary determinant of the equity beta range or point estimate. The AER considered this evidence provided some limited support for an equity beta point estimate towards the upper end of the AER’s empirical range.

745    When it came to consider the Black CAPM, the AER recognised that the theory underlying the Black CAPM implies that it may predict a higher return on equity than the SL CAPM for firms with a beta less than 1.0. It was, at least in part, prompted by that implication that the AER had regard to this theory in selecting an equity beta above the empirical estimates implied by the Henry 2014 Report.
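In very simple terms, and using the standard textbook forms of the two models (set out only by way of illustration, and not drawn from the Final Decisions), the point may be expressed as follows:

    % SL CAPM: expected return is built up from the risk free rate r_f
    E[r_i] = r_f + \beta_i \left( E[r_m] - r_f \right)
    % Black CAPM: the risk free rate is replaced by the expected return on a "zero-beta" portfolio, E[r_z]
    E[r_i] = E[r_z] + \beta_i \left( E[r_m] - E[r_z] \right)

Because the expected zero-beta return is generally estimated to exceed the risk free rate, the Black CAPM produces a higher expected return on equity than the SL CAPM whenever the equity beta is less than 1.0, with the difference disappearing at a beta of 1.0.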

746    The AER then stepped back broadly to weigh (in a practical way) the various factors, turning first to the empirical studies. It considered there was a consistent pattern of support for an empirical equity beta range of 0.4 to 0.7, despite Henry in 2014 reporting a range of 0.3 to 0.8, because averages of individual firm estimates and fixed weight portfolio estimates were more likely to be reflective of the BEE.

747    Then the AER, noting that the Henry 2014 Report suggested a best empirical equity beta estimate of approximately 0.5, considered that other information pointed towards a higher estimate. It looked again at the empirical estimates of international energy networks, which ranged from 0.3 to 1.0, giving some limited support for an equity beta point estimate towards the upper end of its range. It noted the “theory”, or at least the likely outcome, of the Black CAPM was consistent with an equity beta point estimate towards the upper end of the range. So it reached an equity beta of 0.7. That was then reviewed by the broader steps 4 to 5 of the RoR 2013 Guideline methodology referred to above.

748    The Network Applicants and the Vic/SA Interveners mounted a sustained attack on that conclusion, including on the AER’s observation (for example in the Ausgrid Final Decision, Attachment 3 – Return on Equity at p 3-59) that:

Contrary to what some submissions indicated, there is no compelling evidence that the return on equity estimate from the SLCAPM will be downward biased given our selection of input parameters.

749    The Tribunal is not persuaded that the AER’s estimate of an equity beta of 0.7 is understated and a wrong finding of fact, despite those forceful submissions. As an overall approach, the AER appears to have examined and considered relevant material, including that presented by the Network Applicants. It is clear enough that other minds might have treated the negative bias in the equity beta in the SL CAPM differently. It is clear that the Black CAPM would, depending on the input data, have produced a higher return on equity than the SL CAPM; using equity beta of 0.7 and MRP of 6.5 percent, the outcome would be 8.1 percent. It is clear also that other data might have supported a conclusion that the equity beta might have been estimated at a different number (PIAC says at 0.5). It is clear that there is room for debate about the significance of international empirical data. The Tribunal has had careful regard to that material. Much of it was referred to in the course of the submissions of the Vic/SA Interveners, including the analysis in the Henry 2014 Report of different combinations of the available Australian data, over time and with different entities. In the course of those submissions, it was pointed out why it was said the various sources of data might not be indicative of a clear result. Then the question was asked: what should have been done? Various alternatives were canvassed.

750    It is one matter to show reasons why the AER’s analysis might have been undertaken in another way, but it is another matter to show that the other way would produce an outcome which is the correct outcome rather than an alternative and also rational outcome. It is important, in that regard, to note that the various sources of information and analysis, and the alternative models and their inputs, will not routinely present a scientific and inevitably correct outcome. All experts agree with the AER that the various models have flaws, and that the particular data sources do not automatically or mathematically convert to a precise number applicable to the BEE. For instance, the low beta bias in the SL CAPM does not mean that the Black CAPM will necessarily generate a more reliable outcome. Consequently, the more precise adjustments to the SL CAPM suggested by the Network Applicants in their submissions, whilst they are capable of showing one way in which the SL CAPM might be adjusted, do not necessarily represent the best or only way to make the appropriate adjustment. The fact that the AER did not “run” the Black CAPM, the Fama-French Model or the DGM with its own inputs does not demonstrate that it did not have regard to the various expert reports presented by the Network Applicants, or to the outcomes generated by those experts’ use of those models.

751    Similarly, in the Tribunal’s view, the fact that the AER set out – in accordance with the RoR 2013 Guideline – to:

(a)    seek financial models which were “fit for purpose” in the sense that simple approaches should be promoted over complex ones where possible; and

(b)    use estimation methods and financial models that are “consistent with well accepted economic and finance principles and informed by sound empirical analysis and robust data”.

does not support the conclusion that its estimated equity beta of 0.7 was incorrect. It did not assume that its approach would in those respects necessarily result in a reliable estimate of equity beta without reference to a range of material.

752    The Network Applicants also criticised as erroneous the AER’s reasons for not using either the Black CAPM or the Fama-French Model more directly. That criticism does not tend to demonstrate that the AER made the error or errors of fact asserted. As the Network Applicants assert, all models are sensitive to methodological and input choices. Showing that the AER might have adopted other methodological choices (relating to data sources and estimation techniques), which it might have used in other models, does not tend to demonstrate the primary factual error asserted. The fact that the AER did not adopt those choices does not, in the view of the Tribunal, support the necessary step of showing that the estimated equity beta itself was wrong.

753    The Network Applicants produced a figure in their submission showing a range of returns on equity from the SFG Report The required return on equity for the benchmark efficient entity, 13 February 2015 (the SFG 2015 Report). It shows the 7.1 percent selected by the AER (from the SL CAPM with the AER parameter estimates) and a range of 9.3 percent to 10.3 percent (from the other three models and the SL CAPM with other parameter estimates selected by SFG). It is said that the AER could not disregard that sort of outcome simply because it did not reflect the results of its preferred model. But the Tribunal does not consider that the AER simply proceeded on that basis; its reasoning is set out in a little detail above. It shows that the AER’s approach was more complex and careful than simply adopting the SL CAPM output.

754    As to the equity beta estimate itself, the primary criticism of the Network Applicants is that, having identified a range for equity beta of 0.4 to 0.7 as the “primary range”, the AER then only considered other relevant material to select within that range and effectively discounted material which suggested an equity beta outside that range. The Tribunal does not accept that the AER chose to ignore empirical material suggesting an equity beta outside its starting range; the reasons of the AER in its Final Decisions show that it did consider all the empirical evidence. It was alert to the potential problems arising from a data source confined to the small number of publicly listed Australian energy network businesses. It is not shown to have failed to appreciate the terms of the Henry 2009 and 2014 Reports. It is not shown to have ignored the proposition of SFG (in several reports) that the data set for estimating equity beta should not be confined to those businesses, or to have ignored empirical evidence of international energy network businesses.

755    The Network Applicants further contended that the AER erred in concluding that adopting the top of the AER’s range would overcome problems with the SL CAPM indicated by the theory of the Black CAPM. They point out that the theory of the Black CAPM does not say anything about the equity beta to be used in the SL CAPM; rather, the theory of the Black CAPM says that the SL CAPM formula should not be used and that the Black CAPM formula should be used in its place. The low beta bias in the SL CAPM is said to be one of the reasons why the Black CAPM is preferable. That is said to be fortified by the AER saying that it does not know by how much it needs to adjust its equity beta estimate to account for the issues identified and said to be corrected by the Black CAPM theory. The AER noted, at Attachment 3 to the Ausgrid Final Decision at p 3-426, that:

We consider the theoretical principles underpinning the Black CAPM demonstrate that market imperfections could cause the true (unobservable) expected return on equity to vary from the SLCAPM estimate. For firms with an equity beta below 1.0, the Black CAPM may predict a higher expected return on equity than the SLCAPM. We use this theory to inform our equity beta point estimate, and consider it supports an equity beta above the best empirical estimate implied from Henry's 2014 report. However, while the direction of this effect may be known, the magnitude is much more difficult to ascertain. We do not consider this theory can be used to calculate a specific uplift to the equity beta estimate to be used in the SLCAPM. This would require an empirical implementation of the Black CAPM, and we do not give empirical evidence from the Black CAPM a role in determining the equity beta for a benchmark efficient entity (as discussed under step two of our foundation model approach in section 3.4.1).

756    But it does not follow, as the Network Applicants submit, that the AER could not reasonably be satisfied that its equity beta estimate of 0.7, when used in the SL CAPM, will lead to a return on equity that contributes to the RoR Objective, or that the AER’s determination of its point estimate is highly arbitrary and affected by errors in the interpretation of key evidence.

757    The Tribunal has, of course, carefully considered that assertion. As it has already observed, it is not satisfied that the AER’s process of reasoning has led to an error of fact in selecting its foundation model. It has had regard, on the one hand, to the AER’s reasons for selecting the SL CAPM as the foundation model and what the AER has said in response to the submissions of the Network Applicants, the Vic/SA Interveners and Ergon. It has had regard, on the other hand, to the submissions of those parties and the extensive material to which they have referred.

758    To confirm that the Tribunal has not overlooked the assertion, it is desirable to refer to the submissions concerning the use of international empirical data. This data was used by the Network Applicants to argue for a higher equity beta. PIAC argued that it was given too much weight by the AER.

759    The AER clearly identified its use of that data. It was treated with caution by the AER; in the Tribunal’s view, that was appropriate for the reasons given by the AER.

760    The Network Applicants say that rather than simply providing “limited support” for the AER’s estimated equity beta of 0.7, it should have been recognised as unequivocally supporting a higher equity beta.

761    The Tribunal shares the AER’s view. The international data provides limited support for an equity beta higher than the 0.5 which the contemporary analysis of Australian empirical data in the Henry 2014 Report showed. To say it provides “limited support” for the selected figure is not erroneous. It does not reflect a misunderstanding of the data. It is simply to say that it is data, treated with caution, which assists in reaching the figure of 0.7.

762    As the conclusion of the submission of the Network Applicants shows, ultimately, it is necessary on this point for the Tribunal to be persuaded that an equity beta of 0.7 for a BEE with a similar degree of risk to that of the Network Applicants (they did not distinguish between themselves for this purpose) was a wrong finding of fact.

763    For the reasons given, the Tribunal is not satisfied that the finding of fact represented by the estimation of the equity beta of 0.7 was in error, as asserted by the Network Applicants, and the ground of review attacking that finding is not made out. Nor does the Tribunal consider that the exercise of any discretion by the AER involved in the process of decision making to reach that conclusion was incorrect, having regard to all the circumstances. That conclusion encompasses the grounds summarised in para 241(a), (b)(ii) and (iv) and (c) (at least partly), (d)(i) and (ii), (e), (f) and (g) of the Network Applicants’ submissions. The attack on the MRP, and then more broadly the assessment of the return on equity of 7.1 percent in para 241(b)(i), (iii) and (v), are still to be considered, as are the grounds in para 241(d)(iii) and (h).

764    There is no doubt that, if such a factual error were made out, it would be material to the making of the decision. It is noted that, even if such an error of fact were made out, the Tribunal would have to turn to the question prescribed by s 71P(2a)(c) of the NEL, having regard to s 71P(2b)(d)(i), and by s 259(4a)(c) of the NGL having regard to s 259(4b)(d)(i).

765    It is not necessary for the Tribunal therefore to address in detail those provisions in relation to this topic. However, it observes that where the primary attack is upon a factual finding (such as the proper estimate of the equity beta), and the Tribunal is not persuaded that the asserted error of fact is made out, it is not immediately apparent how the criterion specified in s 71P(2a)(c) of the NEL and s 259(4a)(c) of the NGL might be satisfied by attacking an anterior discretionary decision feeding in to that finding of fact. That is not to say that the attack on an anterior factual finding or discretionary decision feeding in to that ultimate finding of fact may not be a reason, or the reason, why the ultimate finding of fact is found to be in error. But that has not been found to be the case on this aspect of the contentions.

766    Before addressing the remaining grounds of attack of the Network Applicants, it is convenient to address PIAC’s contention that the AER erred by not estimating equity beta at 0.5 with a consequential adjustment to the rate of return conclusion.

767    The Tribunal has found that, notwithstanding s 71O(2) of the NEL, PIAC is entitled to maintain its ground of review, in relation to each of the Network NSW Decisions.

768    However, the Tribunal does not consider that the PIAC contentions demonstrate error on the part of the AER, despite the apparent attraction of its position by reason, in part, of its being straightforward. That is, PIAC says that, having regard to s 16(1)(d) of the NEL, the AER’s observation (based mainly upon the Henry 2009 Report) that the empirical estimates justified a point estimate for equity beta of 0.55 (as it recorded in its explanatory statement to the RoR 2013 Guideline), and the AER’s selected range of 0.4-0.7 for equity beta, the AER should have selected an equity beta of 0.5, roughly representing the mid-point of that range.

769    The contention is based in part upon the Henry 2014 Report (available only after the RoR 2013 Guideline was published) with its clustering of estimates around 0.5.

770    It cannot be said that the AER did not have regard to that data; it is referred to in some detail in the Final Decisions. PIAC says more weight should have been given to that conclusion as it is more contemporary. It also says the Henry 2014 Report was based on a larger data set, and has a clearer clustering, than was previously the case, especially if the “outliers” are removed at the extremes.

771    The AER recognised that the Henry 2014 Report produced an empirical estimate of about 0.5 for equity beta. PIAC says that, in the circumstances, should have been adopted.

772    The Tribunal has accepted that, in principle, the AER was entitled to adopt the process as laid out in the RoR 2013 Guideline. Indeed, PIAC’s submissions support that, including the use of the foundation model concept and the selection of the SL CAPM as the foundation model. Once the AER, on that basis (and reasonably in the view of the Tribunal) selected a provisional range of 0.4-0.7 for equity beta, it was also entitled to have regard to the expert advice that the SL CAPM had, in the circumstances, a low equity beta bias. It was entitled to have regard to other models, and a range of other data. Indeed, it was required to do so.

773    However, PIAC says that the AER was in error, even having regard to that material, in adopting the 0.7 estimate rather than the 0.5 estimate.

774    Its starting point is to criticise the way the AER used the theoretical principles of the Black CAPM to assist in its selection of its point estimate. It describes the AER as using the “theoretical principles” of the Black CAPM to inform its selection of the point estimate for beta within its “reasonable range” because the SL CAPM may underestimate the return on equity for firms with equity betas less than one. PIAC criticises the AER’s view that the Black CAPM would be expected to warrant an upward adjustment (of some unspecified magnitude) to the best empirical estimates derived in accordance with the SL CAPM. That, it says, is found in the Final Decisions and in the RoR 2013 Guideline.
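
To make concrete what is meant by a “low equity beta bias”, the following comparison uses only the standard textbook forms of the two models; it is an illustration, not the particular specification adopted by the AER or advanced by the parties:

    \text{SL CAPM:}\quad E[r_i] = r_f + \beta_i\,(E[r_m] - r_f)

    \text{Black CAPM:}\quad E[r_i] = E[r_z] + \beta_i\,(E[r_m] - E[r_z]), \qquad E[r_z] > r_f

On those forms, for an equity beta less than one the Black CAPM yields a higher expected return than the SL CAPM whenever the expected zero-beta return E[r_z] exceeds the risk free rate, which is the sense in which the SL CAPM is said to underestimate the return on equity for low beta firms.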

775    PIAC says the analysis of the AER said to justify that approach is an exercise in econometric reverse-engineering, undertaken to assess whether the AER might be able to justify making an adjustment from any point within the 0.4-0.7 range to the upper bound of that range.

776    The PIAC submission then refers to the AER’s expressed concerns about the reliability of the estimates from the Black CAPM generally. It suggests the AER’s confidence in the validity of the Black CAPM had “waned significantly” between the RoR 2013 Guideline and the Final Decisions.

777    The following step in PIAC’s submission is that, in the circumstances, the AER had no proper basis on which it, acting reasonably and applying its own estimation methods rigorously, objectively and transparently, could justify making any upward adjustment from the best empirical estimate of 0.5 on account of the “theory” of the Black CAPM, even if it accepted that the SL CAPM had a low equity beta bias. It says that having the Henry 2014 Report and the empirical analysis it contained, it was inconsistent with the RoR 2013 Guideline for the AER to adhere to its approach of first affording itself a very sizeable margin of regulatory discretion and then relying on the “theory” of the Black CAPM to make an arbitrary adjustment from the best empirical estimate of 0.5 to the top end of that range.

778    PIAC accepts that the AER could have regard to overseas empirical estimates as a subsidiary consideration to inform its provisional point estimate of equity beta. At the Final Decisions stage, the overseas estimates considered by the AER lay in the range 0.3-1.0. The AER said that provided some limited support for an equity beta estimate towards the upper end of its empirical range. But, PIAC says, the AER does not appear to have taken account of the improved statistical reliability of the Henry 2014 Report estimates and to have drawn the logical conclusion that the overseas empirical estimates should have been given less weight than they had been given at the RoR 2013 Guideline stage.

779    As with the submissions of Networks NSW, supported by the Vic/SA Interveners and Ergon (although differently focused), the Tribunal can readily understand PIAC’s reasons for urging error on the part of the AER. However, for much the same reasons, it has not taken the step of concluding that the AER was in fact in error in finding that the proper point estimate was 0.7 for equity beta. There are reasons why it might have chosen another point estimate. But the Tribunal accepts that the AER was entitled to start with a range. Upon reviewing the whole of the material before the AER, the Tribunal, however, is not satisfied that that material does not support a conclusion that the SL CAPM exhibited a low equity beta bias. When, therefore, it comes to the selection of a point estimate, and having regard to the range of data available to the AER, the Tribunal must consider whether it is satisfied of the correctness of an alternative to that adopted by the AER. The short answer is that it is not so satisfied.

780    In the course of the PIAC submissions, there was some focus on s 16(1)(c) of the NEL and r 6.5.2(e)(3) of the NER requiring the AER to have regard to relevant inter-relationships between the constituent components of a reviewable regulatory decision. In that regard, the Tribunal notes that the AER in determining the rate of return adopted a common 10 year term for estimating the risk-free rate, the MRP, and the return on debt.

781    PIAC says, in that context, that there is no explanation for why the upper range figure of 0.7 for equity beta was selected, and no satisfactory assessment of the comparative elements of the rate of return on equity and the rate of return on debt. Nor, it says, has the AER explained satisfactorily why its selected figure of 0.7 as the value for beta will contribute to the achievement of the NEO to the greatest degree: s 16(1)(d)(i) of the NEL. The AER has addressed those matters in relation to equity beta in Attachment 3 to the Ausgrid Final Decision at pp 3-128 to 3-132; similar passages appear in the equivalent attachments to the other relevant Final Decisions.

782    There were no direct submissions on how an (assumed) failure on the part of the AER of that character would itself demonstrate a ground of review before the Tribunal under s 71C(1) of the NEL. Once the Tribunal has reached the view that there is no error of fact in the AER’s findings (as they are put in issue) or other error of the character identified in those sections, its role is to determine whether to set aside or vary the AER decision having regard to s 71P(2a) and (2b) of the NEL.

783    Having reached the view, at least to the point of considering the contentions thus far, that no ground of review has been made out against the AER’s decision, the Tribunal is not required to consider the issues otherwise raised (relevantly, for PIAC’s contentions) under s 71P(2a) and (2b) of the NEL. In particular, the Tribunal is not of the view that there was an error of fact in the AER’s selection of the equity beta at 0.7.

784    If the alternative in PIAC’s submission is considered, namely that there was error in selecting the equity beta at 0.7 because, as a matter of fact, that equity beta would not produce the preferable regulatory decision (as defined in s 16(1)(d) of the NEL), the proposition attracts different consideration.

785    The consumer submissions considered by the AER (as listed in footnote 1715 of Attachment 3 to the Ausgrid Final Decision at p 3-433 and handed up to the Tribunal in the course of submissions) show very considerable support for a lower equity beta. The AER was alive to those submissions. Many were reflected in the course of the Tribunal’s consultation under s 71R(1)(b) of the NEL (and s 261(1)(b) of the NGL).

786    It is one thing to criticise the reasoning of the AER as being superficial or slight because it did not cogently (at least in some views) explain why such submissions did not lead to a lower equity beta, and another to demonstrate that its conclusion therefore exposed a ground of review. Clearly the decision of the AER under s 16(1)(d) of the NEL is a complex one. It involves the balancing of all elements of the reviewable regulatory decision. Making the preferable regulatory decision does not require every element of the decision itself to be measured in that way.

787    Even if that were not correct, at least in relation to selecting the equity beta, the reference to that material does not show that a ground of review has been made out. As the Tribunal has discussed, the NEO and the RPP operate together. It is not the case that the NEO means that, where the long term interests of consumers are relevant, the RPP must be ignored or suppressed. The assumption in the regulatory scheme is that the long term interests of consumers are served by ensuring that monopoly infrastructure providers are permitted to recover at least the efficient costs of providing those services and, broadly speaking, the AER’s role is to fix those efficient costs by reference to the proxy of the efficient costs of the competitive market. That is, of course, an oversimplification. But, as the AER said (for instance, in Attachment 3 to the Ausgrid Final Decision at p 3-434), it applied a “regulatory judgment” in that context to best satisfy the RoR Objective, and it considered that its conclusion is consistent with the NEO/NGO and the RPP.

788    Despite the material to which the Tribunal was taken by PIAC, and its submissions, the Tribunal is not satisfied that the AER’s regulatory judgment enlivened a ground of review.

789    The next finding of fact of the AER which it is necessary to address is the MRP.

790    Having regard to estimates of historical excess returns, the AER first used a baseline estimate of 6.0 percent. It then considered whether DGM evidence warranted any adjustment to that baseline. The DGM estimates provided a range of 7.5 percent to 8.6 percent. Having regard to that, the AER considered that it should take the top of the range for historical excess returns, namely 6.5 percent. It then considered whether other evidence directed it to some different MRP.
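
In general terms, and as an illustration only (the AER’s own DGM construction is not reproduced here), a single-stage dividend growth model infers the expected market return from current prices and expected dividends, with the MRP then taken as the excess of that return over the risk free rate:

    E[r_m] \approx \frac{D_1}{P_0} + g, \qquad \text{MRP} = E[r_m] - r_f

where D_1 is the expected dividend over the coming period, P_0 the current price (or index level) and g an assumed long run growth rate. On that form, a fall in the risk free rate accompanied by broadly unchanged dividend yields mechanically raises the implied MRP, which is the tension underlying the contentions discussed below.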

791    The Network Applicants attacked each of those steps, both in their principal written submissions (including Appendix F) and of course orally. They complain that the outcome of the DGM “had very little influence” on the MRP estimate and, on the other hand, undue weight was given to historical average excess returns. As well, they say the AER incorrectly analysed the range for the historical average MRP, by regarding it as suggesting the proper MRP would be found in that range rather than treating it merely as showing an average. That, they argue, meant that the AER did not allow for an MRP outside that range, even though prevailing market conditions had changed to justify a significantly higher MRP.

792    The estimate for the MRP of 6.5 percent was adopted in the RoR 2013 Guideline in December 2013, and was maintained in the relevant Draft Decisions and Final Decisions (in April and June 2015).

793    The Network Applicants asserted that there had been a significant change in market conditions over that period. The AER’s DGM estimated range had altered from 6.1 percent-7.5 percent (as exposed in its Better Regulation: Explanatory Statement – Rate of Return Guideline, December 2013, at p 93) to 7.4 percent to 8.6 percent (JGN Final Decision at p 3-325 and the other relevant Final Decisions, at June 2015 and April 2015 respectively). There had also been a significant fall in the risk free rate: the yield on Commonwealth Government Securities fell from about 4.2 percent to about 2.55 percent over the same period. It is the Network Applicants’ contention that, as the DGM analysis indicated the MRP was not falling in lock-step with the risk free rate but was increasing over that period, the return on equity should have been higher.

794    In support of their contentions, the Network Applicants say that:

(1)    by a different DGM model construction and with different input assumptions, the DGM estimate should have been 8.73 / 8.84 percent rather than the range 7.4 to 8.6 percent;

(2)    corrected historical excess returns showed an MRP of 6.56 percent rather than the 5.1 percent to 6.5 percent range;

(3)    the “Wright approach”, which was not used by the AER, showed an MRP estimate of 9.00 / 9.11 percent; and

(4)    the independent expert reports (SFG 2014 Report at 74 and 77, and Incenta Economic Consulting, Update of evidence on the required return on equity from independent expert report, May 2014) provided MRP estimates of 6.93 / 6.91 percent and should also have been used by the AER.

795    It is said that the misapprehension of, or misuse or non-use of that data is a supplement to the primary contentions briefly summarised in the second preceding paragraph.

796    It is further said that the AER wrongly gave some weight to survey evidence when it should not have done so, and wrongly understood the “conditioning variables” so as to perceive (wrongly) that they supported its MRP estimate.

797    As to the “Wright approach”, they assert that Wright, Review of Risk Free Rate and Cost of Equity Estimates: A comparison of U.K. Approaches with the AER, 25 October 2012 contains evidence of the historical average real market return leading to a proper estimate of the MRP, and they argue that it was incorrect to use this analysis only as a cross-check on the estimated overall return on equity.

798    The evidence available to the Tribunal indicates that the DGM as applied by the AER estimates the MRP range for the two months up to February 2015. The discussion in the Final Decisions (eg Attachment 3 to the Ausgrid Final Decision at pp 3-125 to 3-127) explains why the AER considered its outcome at 7.5 percent to 8.6 percent as being too high in the (then) current market. It is a matter of debate whether those reasons are correct, but it is not apparent to the Tribunal that they are incorrect.

799    The AER’s process then was to consider a range of other material: historical excess returns, survey evidence, conditioning variables and recent Australian regulatory decisions. It then made a decision based upon its assessment of the whole of that material: see eg Attachment 3 to the Ausgrid Final Decision at pp 3-115 to 3-120.

800    As the Tribunal said in Re WA Gas Networks Pty Ltd (No 3) [2012] ACompT 12 at [105]-[110], there is no single econometric modelling or other financial technique which can particularly and correctly provide a figure for the forward-looking estimate of MRP. The analysis of the individual source material balanced by the AER will not of itself show that the evaluative decision of the AER was a wrong finding of fact, or reflected the wrong exercise of a discretion.

801    The Tribunal has carefully considered the material to which the AER had regard, including the respects in which (as the Network Applicants contend) some elements of that material might point to a different outcome on the MRP than the AER adopted. It has considered how the AER used that material, as exposed by its reasons in the Final Decisions. The Tribunal is not satisfied that the AER has wrongly taken into account averages from the historical data. Nor is the AER’s use of the DGM outputs inappropriate: there is a difference of experts’ views about that but the difference of views does not demonstrate error of itself. Nor does the Tribunal regard the weight given to the survey evidence as excessive. The AER recognised that there was a fall in the risk free rate, but decided that despite the apparent relative growth in dividend yields, as they were very close to the long term average and had been for some time, the fall in the risk free rate did not support a higher MRP. The AER did refer to the Wright approach: see eg Attachment 3 to the Ausgrid Final Decision at Section C.4.1, and is not shown to have misunderstood it. Similarly, the AER referred to the expert reports to which the Network Applicants referred in their submissions on this topic, and explained why it did not regard those reports as persuasive.

802    As the Tribunal has elsewhere noted, as a merits review process, it may reach a conclusion on a question of fact different from that of the AER. Provided the fact is a material one to the outcome of the decision under review, a ground of review will then be established. Like the AER, the Tribunal is called upon (as here) to assess the respective weighting of pieces of information, and to assess the respective competing views of experts. The mere existence of competing views or of reasons why a particular piece of information might point in one or other direction will not of itself mean that the Tribunal should or will reach a view different from that of the AER. That is particularly so where there are competing expert opinions. In the universe of the NEL and the NGL (as in other areas of decision making) it is a feature of the qualitative decision making process that competing materials, including competing expert opinions, may be available to the AER. It must make its decisions under, and in accordance with, the legislative and regulatory instruments having regard to that material. So too, on review, must the Tribunal.

803    On this topic of the MRP, the Tribunal does not conclude that the AER’s decision was factually erroneous. It selected an available starting point. It addressed the relevant material. It applied its own experience to the qualitative findings to be made, and it sought to cross-check them with other sources of information. By following the same process, but also in the light of the detailed and thorough submissions on behalf of the Network Applicants and PIAC, the Tribunal has not come to a firm but different conclusion. It does not consider that the AER’s selection of the MRP at 6.5 percent was an error of fact. Nor (for the reasons already given) does it consider that the other findings attacked in the submissions, as set out in the above quoted [241(b)] of the Network Applicants’ submissions, were errors of fact.

804    It follows that the grounds of review there specified, and the complaints in [241(c)] of those submissions are also not made out.

The Unreasonableness of the Final Decisions

805    The remaining grounds of review of the Network Applicants are based on the asserted unreasonableness of the decision, as specified in [241(d)-(h)] of that submission.

806    The submission identifies four matters: first, the misunderstanding, and therefore the misuse, of the Wright approach as a cross-check; secondly, the misuse of the Grant Samuel 2014 Report (especially in the light of Grant Samuel, Response to AER Draft Decision, January 2015 (the Grant Samuel 2015 Letter)); thirdly, other independent expert reports, being those discussed in Attachment 3 to the Draft Decisions concerning JGN at pp 3-91 to 3-92 and concerning the DNSPs at the more or less similar pages in Attachments 3 to their respective Draft Decisions; and finally, the misunderstanding of the broker reports considered by the AER.

807    To an extent, these contentions of the Network Applicants stand or fall with their earlier contentions. They are nevertheless a broader and qualitative attack on the correctness of the AER Final Decisions.

808    The Tribunal does not consider that the AER failed to have regard to the available financial models to inform its decisions on the appropriate return on equity. The AER has produced, as part of its contentions, an analysis of the available data sources. That data was considered as part of its cross-checks (steps 4 and 5) in its methodology. The line through the ranges provided by that data, showing the AER’s decision, appears to “fit”; that is, it appears to be a reasonable and sensible one. Moreover, it appears at, or close to, the mid-point of the ranges provided by the source providers, other than the proposals of the service providers (whose range is from and above the line) and the “stakeholders” (whose range is from just above the line and then in large measure below the line).

809    In short, whilst it is possible to use the data sources to which the Tribunal was referred by the Network Applicants (including the outcomes of the DGM) to arrive at a different and somewhat higher figure for the return on equity, that does not persuade the Tribunal that the AER’s decision was unreasonable, or that its process of addressing that data involved any error of a character that would make its outcome unreasonable. Part of the contentions was to criticise how the AER considered particular pieces of information. To the extent that those contentions are critical of the exercise of a discretion by the AER, it is sufficient to say that the Tribunal does not take the step which the Network Applicants invited it to take.

810    It is desirable to add a little more in relation to the Wright approach. As an approach or process, the AER appears to have followed it. That is, it estimated the MRP, the equity beta, and the risk free rate to arrive at its return on equity estimate. It is correct to say that the AER used a range for equity beta in that process, rather than a point estimate. The Tribunal does not regard that as illogical or as having misapplied the Wright approach in a way which renders its decision on the return on equity unreasonable.
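
For completeness, and as an illustration only rather than a statement of the AER’s calculation, the combination of the three parameters under the foundation model takes the familiar form:

    k_e = r_f + \beta \times \text{MRP}

Using figures referred to elsewhere in these reasons (a risk free rate of about 2.55 percent, an equity beta of 0.7 and an MRP of 6.5 percent), that form yields approximately 2.55 + 0.7 \times 6.5 = 7.1 percent. The Wright approach, by contrast, treats the real market return (in effect r_f + MRP) as broadly stable over time, so that a fall in the risk free rate is largely offset by a rise in the implied MRP.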

811    It is also necessary to note that the AER, when considering the return on equity estimates from broker and valuation reports that included both uplifts and adjustments for dividend imputation, was well aware of those differences and took them into account.

812    The AER’s cross-checks included an analysis of the spread between the debt risk premium (the cost of debt less the risk free rate) and the equity risk premium. That was an obvious and appropriate form of cross-check. It supports the conclusion of the AER. It does not tend to suggest that the overall return on equity estimate is too low. The various broker reports and valuation reports do not include any consideration of the appropriateness of the outcomes which they support with the market data on debt premiums.

813    For those reasons, the Tribunal does not find any grounds of review are made out in relation to the AER’s return on equity estimate.

814    There is one final matter to mention. The AER raised the issue of whether, by reason of s 71O of the NEL, the Vic/SA Interveners were entitled to make the attack on the return on equity estimate in relation to the Henry material concerning the equity beta. As the Network Applicants adopted those contentions, and they were made and maintained before the AER by the Network Applicants, the Tribunal has not needed to rule on that contention.

RETURN ON DEBT

INTRODUCTION

815    In making its decisions that are challenged under this heading the AER was required to give effect to the NER and NGR as altered by the 2012 Rule Amendments. They included the introduction of the RoR Objective that informs both the rate of return on equity (as canvassed in the preceding section of these reasons) and the rate of return on debt which consists of two components – a risk free rate (or base rate) component and a risk premium over the base rate. The risk premium is called the debt risk premium (DRP).
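
It is convenient to record, as an illustration only, the decomposition that underlies much of what follows:

    r_d = r_{\text{base}} + \text{DRP}

where r_base is the risk free (base rate) component and the DRP is the margin required over that base rate. As explained below, the base rate component can in principle be hedged with interest rate swaps, whereas the DRP attaching to a tranche of debt is set when that tranche is issued and cannot be hedged.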

816    As observed, the 2012 Rule Amendments now require that the allowed rate of return is to be determined such that it achieves the RoR Objective. The objective is that the rate of return for a regulated service provider is to be commensurate with the efficient financing costs of a BEE with a similar degree of risk as that which applies to the regulated service provider in respect of the provision of standard control services/reference services: NER r 6.5.2(b) and (c); NGR r 87(2) and (3). The relevant rules now require that the return on equity is to be estimated such that it contributes to the achievement of the objective and that, in estimating the return on equity, regard must be had to prevailing conditions in the market for equity funds: NER r 6.5.2(f) and (g); NGR r 87(6) and (7).

817    In reaching its decisions under this heading, the AER introduced, uncontentiously, a new methodology to determine the rate of return on debt: the trailing average approach. As explained further below, it replaced the on-the-day approach applied by the AER in relation to the previous regulatory period.

818    The on-the-day approach estimates the allowed return on debt based upon prevailing interest rates at the start of the regulatory period. It is a forward-looking approach which applies the then prevailing rate across the period. At each regulatory determination, the allowed return on debt is reset based upon prevailing interest rates at the start of the new regulatory period.

819    The “trailing average” approach estimates the allowed return on debt based on a trailing average of interest rates over a historical period. To obtain the trailing average, each year a component of debt at the prevailing interest rate is added, and the component of debt for the oldest year in the trailing average is removed. It too is a forward-looking approach in that each addition to the average occurs at the prevailing interest rates.
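
By way of illustration only, and without suggesting that this is the AER’s model, the updating mechanics just described can be sketched as follows (the 10 year window and the rates used are assumptions made for the purpose of the example):

    # Illustrative sketch only: a simple (equally weighted) trailing average return on debt.
    # Each year the prevailing rate is added and, once the window is full, the oldest rate drops out.
    from collections import deque

    def update_trailing_average(history: deque, prevailing_rate: float, window: int = 10) -> float:
        """Add the newest prevailing rate, drop the oldest beyond the window, return the average."""
        history.append(prevailing_rate)
        if len(history) > window:
            history.popleft()
        return sum(history) / len(history)

    rates = deque([6.5] * 10)                              # an assumed starting portfolio (percent)
    print(round(update_trailing_average(rates, 5.9), 2))   # 6.44: nine years at 6.5, one at 5.9

The point of the sketch is simply that, once the window is full, each year’s allowance responds only gradually to movements in prevailing rates.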

820    The focus on the impact on the BEE, rather than on individual service providers, is intended to incentivise and reward efficient practices, rather than to make allowances for the impact on particular providers, consistently with the incentive based regulation reflected in the regulatory framework.

821    The issue of concern is how the AER’s new approach was to be adopted, or transitioned to. The AER considered that a BEE with efficient financing practices would have staggered borrowing and that it would be likely to have hedging contracts which it would need to unwind in moving to the new approach. Consequently, the AER adopted a 10 year transitional period, adjusted annually, to shift to this approach from the previous on-the-day approach.

822    It is the selection of that transitional option, including calculation and implementation of the transitional period, which is the subject of this dispute.

823    Each of the following applicants challenged the AER’s decisions on return on debt: the three Networks NSW entities, ActewAGL, JGN and PIAC (the latter only in relation to the AER’s Network NSW decisions). The Vic/SA Interveners and Ergon also raise issues in relation to the AER’s return on debt decisions.

Networks NSW’s challenge

824    In their regulatory proposals, Networks NSW proposed the immediate application of the trailing average approach.

825    Networks NSW submit that the AER’s approach involved a series of errors in terms of s 71C(1) of the NEL, because it:

(1)    used as the relevant BEE a regulated efficient entity, rather than an unregulated efficient entity;

(2)    misapplied the concept of a regulated efficient entity in its assessment, in particular that it thereby required transitioning rather than direct application; that it used the prevailing 2014 rate for the future, rather than periodically re-setting it; and because the trailing average approach did not require the transitioning imposed by the AER;

(3)    did not reflect the fact that Networks NSW had debt facilities comprising a staggered portfolio of fixed rate debt without hedging;

(4)    concluded therefore the debt management strategies of Networks NSW were inefficient;

(5)    failed to have regard to the way Networks NSW had managed its debt facilities in the past having regard to the life of its assets;

(6)    concluded the transitioning of the trailing average approach as proposed by Networks NSW did not represent the efficient financing costs or debt management practices of a BEE;

(7)    did not take into account that there were or may be other characteristics of a BEE;

(8)    concluded there was no “windfall” gain to Networks NSW by the immediate introduction of the trailing average approach with the transitioning process it imposed and that reference to any earlier windfall gain could not inform the current AER decision;

(9)    inappropriately selected a simple average of broad BBB rated debt data series published by the Reserve Bank and Bloomberg; and

(10)    departed from its RoR Guideline.

ActewAGL’s challenge

826    While ActewAGL and Networks NSW made joint submissions on the topic of return on debt, there are some differences between them in the detail of their respective challenges to the AER’s approach which are explained as these reasons are developed.

PIAC’s challenge

827    PIAC also complains of the AER’s transitional introduction of the rate of return on debt, insofar as it applies to Networks NSW, but from a different perspective. It says the transition should have commenced from 2015-2016 rather than 2014-2015 to comply, or to better comply, with r 6.5.2 of the NER, especially where the on-the-day rates at the time of, and leading up to, the date of its decision were declining. On PIAC’s analysis, the consequential allowances over the regulatory period would be very much less than the effect of the AER’s decision.

JGN’s challenge

828    It is accepted that while the return on debt is calculated based on the risk free rate and DRP, it is not possible to hedge the DRP component of the return on debt because the level of DRP in respect of a tranche of debt is incurred at the time of issue.

829    JGN says that, as a result, the AER’s decision to apply its transition methodology to both the base rate and the DRP components of the return on debt was inappropriate and that it should only have applied the transition method to the base rate. JGN submits that the AER’s decision to include the DRP in the transition was inconsistent with r 87 and r 76 of the NGR, including because it was not appropriate to undercompensate for the efficient return on debt in order to ‘clawback’ an alleged windfall gain in the last regulatory period when the DRP was high.

830    Additionally, JGN submits that:

(1)    the AER’s measurement of the return on debt for all future measurement periods is inflexible and imprudent as there is uncertainty around when refinancing will be required;

(2)    the AER incorrectly determined that JGN’s credit rating should be BBB+ not BBB;

(3)    the AER should have allowed it to revise its proposal to the AER under r 60 of the NGR.

831    While JGN generally makes submissions similar to those of the Networks NSW and ActewAGL on this topic, there are differences in its approach concerning the AER’s application of the transition to the base rate and its use of the trailing average fixed principle.

The Vic/SA Interveners’ challenge

832    The Vic/SA Interveners’ challenges largely reflect those of the Network Applicants.

Ergon’s challenge

833    Ergon raised as an additional ground of review that the AER made an error of fact in finding that a simple trailing average should be preferred over a Post Tax Revenue Model (PTRM) weighted trailing average in estimating the allowed return on debt.

The AER’s Final Decisions

834    The AER’s Final Decisions estimated the allowed return on debt based on a trailing average of interest rates over a moving historical period. Each year, prevailing interest rates from each new year are added to the trailing average, and interest rates from the oldest year in the trailing average “fall out” of the trailing average.

835    There is no disagreement among the parties that the trailing average approach is an acceptable methodology and is available under the NER and NGR. The disagreement lies with the transition. That is the topic which primarily requires the Tribunal’s attention.

836    In arriving at its Final Decisions, the AER investigated four options for determining the return on debt. Those options appear in Attachment 3 to the Ausgrid Final Decision at p 3-145 as follows:

    Option 1 - Continue the on-the-day approach

    Option 2 - Start with an on-the-day rate for the first regulatory year and gradually transition into a trailing average approach over 10 years

    Option 3 - Hybrid transition. Start with an on-the-day rate for the base rate component and gradually transition into a trailing average approach over 10 years. This would be combined with a backwards looking trailing average DRP (that is, a base rate transition only).

    Option 4 - Adopt a backwards looking trailing average approach (that is, no transition on either the base rate or DRP components of the return on debt).

837    The AER adopted Option 2 and explained its application at p 3-145 of Attachment 3 to the Ausgrid Final Decision as follows (without footnotes):

Applied to Ausgrid's distribution determination, this means our return on debt approach is to:

estimate the return on debt using an on-the-day rate (that is, based on prevailing interest rates) in the first regulatory year (2014-15) of the 2014–19 period, and

gradually transition this rate into a trailing average approach (that is, a moving historical average) over 10 years using a forward looking approach.

This means for the 2014–15 regulatory year, the return on debt is based on prevailing interest rates in 2014 (during Ausgrid's debt averaging period) before the start of the 2014–19 period. For subsequent regulatory years, the gradual transition will occur through updating 10% of the return on debt each year to reflect prevailing interest rates (during Ausgrid's debt averaging period) in each year.

In practical terms, our return on debt approach means that an on-the-day rate shortly before the start of the 2014–19 period is applied to:

    100% of the debt portfolio in the calculation of the allowed return on debt for the 2014–15 regulatory year

    90% of the debt portfolio in the calculation of the allowed return on debt for the 2015–16 regulatory year, with the remaining 10% updated to reflect prevailing interest rates during Ausgrid's averaging period for 2015–16

    80% of the debt portfolio in the calculation of the allowed return on debt for the 2016–17 regulatory year, with 10% based on prevailing interest rates during Ausgrid's averaging period for 2015–16, and 10% updated to reflect prevailing interest rates during Ausgrid's averaging period for 2016–17, and

    so on for the subsequent regulatory years.

After the 10 year transition period is complete, the return on debt is a simple average of prevailing interest rates during Ausgrid's averaging periods over the previous 10 years.

… … …

This debt approach is consistent with the approach we proposed in the Guideline and adopted in the draft decision. In the Guideline, we based our transition on the approach recommended by the Queensland Treasury Corporation (QTC). We refer to this as 'the QTC approach'.

838    The AER considered this approach to be the best way forward because, as it explained in Attachment 3 to the Ausgrid Final Decision (at p 3-148):

We are satisfied our return on debt approach contributes to the achievement of the NEO, the allowed rate of return objective and is consistent with the revenue and pricing principles. This is because it:

    Has regard to the impact on a benchmark efficient entity of changing the method for estimating the return on debt

    Promotes efficient financing practices consistent with the principles of incentive based regulation

    Provides a benchmark efficient entity with a reasonable opportunity to recover at least the efficient financing costs it incurs in financing its assets. And as a result it:

    Promotes efficient investment, and

    Promotes consumers not paying more than necessary for a safe and reliable network

    Avoids a potential bias in regulatory decision making that can arise from choosing an approach that uses historical data after the results of that historical data are already known

    Avoids practical problems with the use of historical data as estimating the return on debt during the global financial crisis is a difficult and contentious exercise.

839    Option 2 was supported broadly by consumer representatives, energy retailers, major energy users and the AER’s consultants: Chairmont, Cost of debt: Transitional analyses, April 2015 (Chairmont Report); Lally, Transitional arrangements for the cost of debt, November 2014 (Lally 2014 Report); and Lally, Review of Submissions on the cost of debt, April 2015 (Lally 2015 Report). Option 2 was also supported by CitiPower, Powercor and SAPN. It was initially also supported by Jemena Electricity Networks, JGN and United Energy, which later changed their support to Option 3.

840    Option 4 was supported by Ausgrid, Essential, Endeavour, TransGrid, ActewAGL and Directlink.

841    In short, the AER adopted Option 2 as the entirely forward-looking approach. As its reasons indicate, that involved setting a debt portfolio for the BEE at the commencement of the regulatory control period, and then replacing 10 percent of the portfolio each year at the interest rate prevailing at that time. It says that the process provides and maintains incentives on service providers to meet or beat the performance of the BEE. It also says that it is an appropriate option because it did not incorporate any historical rates decided on a backward-looking basis, and it created no windfalls for either service providers or consumers.

842    The AER says that its approach involved no error, as the very diversity of the contentions by service providers demonstrates that its approach was reasonably open to it.

843    It has separately addressed the issues concerning credit rating and data series (Networks NSW and JGN), the selection of averaging periods and whether the transition process should be “locked in” (JGN), how the trailing average should be determined (Ergon), and the commencement year for the introduction of the trailing average approach (PIAC).

844    There is also an issue about whether JGN may make its contention in its present terms, having changed its position from an initial support of Option 2.

845    The AEMC made its 2012 Rule Amendments in response to requests submitted by the AER and a group of large energy users (the Energy Users Rule Change Committee) for changes to the economic regulation of electricity and gas distribution services. As noted, the reforms included reference to the rate of return, both the return on debt and the return on equity. The rate of return has a substantial impact on the building block revenue of regulated service providers.

846    A key emphasis in the changes was regulatory transparency and consultation when determining the allowed rate of return. The AER accordingly conducted extensive consultation leading to the RoR Guideline. The 2012 Rule Amendments allowed consideration of alternative ways of determining the efficient debt servicing costs of network service providers. It was broadly agreed by stakeholders, including consumers, network service providers and the AER that the approach to estimating the return on debt could be improved.

847    It is not necessary to refer to the AEMC’s detailed consideration leading to the 2012 Rule Amendments which pointed to, and accommodated, the introduction of the trailing average approach. As noted, that led to the AER publishing the RoR Guideline together with the RoR Explanatory Statement, and then adopting the trailing average approach.

848    The AEMC introduced the 2012 Rule Amendments, in broad terms, having regard to the following:

(1)    the NEO, NGO and RPP are more likely to be met by a non-prescriptive flexible framework that allows the regulator to more accurately match debt conditions in the market for funds and that it should remain open to the regulator to consider how different sectors and businesses have different debt characteristics that lead to efficient debt financing;

(2)    a one size fits all approach to setting a benchmark should not be considered as a default position;

(3)    stakeholders would be engaged with on the development of an appropriate benchmark and with the latest evidence taken into account;

(4)    the actual historical debt financing practices of network service providers could be added to the range of evidence that the AER considers in developing its methodologies, alongside the prevailing cost of funds, or a combination of both;

(5)    the allowance must be consistent with the RoR Objective, which would consider the position of a BEE, rather than the actual cost of debt of the particular network service provider; and

(6)    there should be no distinction between state-owned network service providers and other network service providers.

849    As noted above, the 2012 Rule Amendments included the introduction of r 6.2.8 of the NER, which required the AER to make and publish a number of Guidelines, including the RoR Guideline. It was relevantly in the following terms:

(a)    The AER:

(1)    must make and publish the Shared Asset Guidelines, the Capital Expenditure Incentive Guidelines, the Rate of return Guidelines, the Expenditure Forecast Assessment Guidelines, the Distribution Confidentiality Guidelines and the Cost Allocation Guidelines in accordance with these Rules; and

(2)    may, in accordance with the distribution consultation procedures, make and publish guidelines as to any other matters relevant to this Chapter.

(b)    A guideline may relate to a specified Distribution Network service provider or Distribution Network service providers of a specified class.

(c)    Except as otherwise provided in this Chapter, a guideline is not mandatory (and so does not bind the AER or anyone else) but, if the AER makes a distribution determination that is not in accordance with the guideline, the AER must state, in its reasons for the distribution determination, the reasons for departing from the guideline.

(d)    If a guideline indicates that there may be a change of regulatory approach in future distribution determinations, the guideline should also (if practicable) indicate how transitional issues are to be dealt with.

850    Rule 6.5.2 of the NER has been set out in the section of these reasons dealing with the Return on Equity (excluding subparas (h)-(k) of r 6.5.2). It is partly repeated for ease of reference, as well as adding subparas (h)-(k):

Allowed rate of return

(b)    The allowed rate of return is to be determined such that it achieves the allowed rate of return objective.

(c)    The allowed rate of return objective is that the rate of return for a Distribution Network service provider is to be commensurate with the efficient financing costs of a benchmark efficient entity with a similar degree of risk as that which applies to the Distribution Network service provider in respect of the provision of standard control services (the allowed rate of return objective).

(d)    Subject to paragraph (b), the allowed rate of return for a regulatory year must be:

(1)    a weighted average of the return on equity for the regulatory control period in which that regulatory year occurs (as estimated under paragraph (f)) and the return on debt for that regulatory year (as estimated under paragraph (h)); and

(2)    determined on a nominal vanilla basis that is consistent with the estimate of the value of imputation credits referred to in clause 6.5.3.

(e)    In determining the allowed rate of return, regard must be had to:

(1)    relevant estimation methods, financial models, market data and other evidence;

(2)    the desirability of using an approach that leads to the consistent application of any estimates of financial parameters that are relevant to the estimates of, and that are common to, the return on equity and the return on debt; and

(3)    any interrelationships between estimates of financial parameters that are relevant to the estimates of the return on equity and the return on debt.

Return on debt

(h)    The return on debt for a regulatory year must be estimated such that it contributes to the achievement of the allowed rate of return objective.

(i)    The return on debt may be estimated using a methodology which results in either:

(1)    the return on debt for each regulatory year in the regulatory control period being the same; or

(2)    the return on debt (and consequently the allowed rate of return) being, or potentially being, different for different regulatory years in the regulatory control period.

(j)    Subject to paragraph (h), the methodology adopted to estimate the return on debt may, without limitation, be designed to result in the return on debt reflecting:

(1)    the return that would be required by debt investors in a benchmark efficient entity if it raised debt at the time or shortly before the making of the distribution determination for the regulatory control period;

(2)    the average return that would have been required by debt investors in a benchmark efficient entity if it raised debt over an historical period prior to the commencement of a regulatory year in the regulatory control period; or

(3)    some combination of the returns referred to in subparagraphs (1) and (2).

(k)    In estimating the return on debt under paragraph (h), regard must be had to the following factors:

(1)    the desirability of minimising any difference between the return on debt and the return on debt of a benchmark efficient entity referred to in the allowed rate of return objective;

(2)    the interrelationship between the return on equity and the return on debt;

(3)    the incentives that the return on debt may provide in relation to capital expenditure over the regulatory control period, including as to the timing of any capital expenditure; and

(4)    any impacts (including in relation to the costs of servicing debt across regulatory control periods) on a benchmark efficient entity referred to in the allowed rate of return objective that could arise as a result of changing the methodology that is used to estimate the return on debt from one regulatory control period to the next.

(As noted above in the Return on Equity section of these reasons, r 6.5.2 of the NER corresponds with r 87 of the NGR.)
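
In arithmetical terms, and as an illustration only, r 6.5.2(d)(1) contemplates a weighted average of the form:

    \text{allowed rate of return} = (1 - G)\,k_e + G\,k_d

where k_e is the return on equity estimated under paragraph (f), k_d is the return on debt estimated under paragraph (h), and G is the gearing of the benchmark efficient entity (the assumed proportion of debt in its financing structure). The expression is set out only to show how the return on equity and the return on debt, each considered separately in these reasons, feed into a single allowed rate of return.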

851    As observed, the RoR Guideline proposed the use of the trailing average portfolio approach to estimating the return on debt for the BEE. The RoR Explanatory Statement noted (at p 102) that the AER proposed to use a single definition of a BEE and specify a single approach to estimating the return on debt and considered that:

    that holding a portfolio of debt with staggered maturity dates was likely to be an efficient debt financing practice of the BEE under the trailing average approach;

    the regulatory return on debt allowance under the trailing average portfolio approach is, therefore, commensurate with the efficient debt financing costs of the BEE; and

    that the trailing average portfolio approach is consistent with the Rules, RPP and the NEO and NGO.

852    The RoR Explanatory Statement also states (at p 34) that a BEE is a pure play, regulated, energy network service provider, operating within Australia – one that was regulated under the NER or NGR. By way of support for the AER’s preferred BEE, the RoR Explanatory Statement goes on to observe (without footnotes) that:

… the benchmark efficient entity should be a regulated entity as:

    The rules require that the risks associated with the provision of regulated services are considered in determining the required rate of return [see NER, rr. 6.5.2(c), 6A.6.2.(c); NGR, r.87(2) and (3)]. As regulated services are delivered by regulated entities, it is logically consistent to consider the benchmark efficient entity as a regulated entity.

    Regulated service providers are typically not exposed to competition from other firms (in the case of distribution and some transmission businesses) or exposed to limited competition (in the case of regulated transmission businesses). The limited competition may alter the relevant (systematic) risk profile when compared with an unregulated firm.

    Regulated service providers are able to earn more stable cash flows relative to most unregulated businesses. These cash flows are regularly updated at resets to reflect required revenue (including changes due to shifts in demand and expenditure drivers) and therefore have similar business risks. Regulated service providers are also provided with some protection to their cash flows during regulatory control periods (e.g. pass through provisions and reopeners).

853    For reasons later explained, the views in the above quoted passage from the RoR Explanatory Statement are not views which the Tribunal necessarily regards as correct.

854    The AER recognised that the networks faced refinancing and interest rate risks under the on-the-day approach, which operated on the premise that the entire debt portfolio of a regulated service provider would be refinanced once every regulatory control period. The refinancing risk would arise when debt cannot be refinanced or it is not commercially sensible to do so. The interest rate risk was that the interest rate on debt would exceed the regulatory allowance, resulting in higher costs for the network service provider.

855    It noted that a network service provider could reduce its interest rate and refinancing risk by entering into hedging arrangements aimed at replicating the borrowing cost structure that would arise if the BEE did refinance the entirety of its debt at the commencement of the regulatory control period. The Chairmont Report explained that the BEE would not be able to alleviate all potential mismatches in relation to the debt margin component of its return on debt, unless the entirety of the entity’s debt is reissued during the averaging period.

856    The AER observed that most network service providers held a diversified portfolio of debt with staggered maturity dates to help manage refinancing risk. It observed that small to medium sized and privately owned network service providers were likely to engage in interest rate swaps to reduce their interest rate risk. The AER also recognised that some network service providers may be too large to lock in interest rates during the averaging period, or may choose not to reduce interest rate exposure by hedging.

857    However, as a consequence of the AER’s view that the BEE was of one size and shape only, it approached its task on the basis that a BEE under an on-the-day approach would hold a debt portfolio with staggered maturity dates and use swap transactions to hedge interest rate exposure for the duration of the regulatory control period.

The Transition: The AER Approach

858    As the AER had defined the BEE as a pure play, regulated, energy networks business, it could not observe directly what efficient debt management practices of such an entity would be under the trailing average approach. All pure play, regulated, energy networks businesses up to that time were limited to adhering to the on-the-day approach under the Rules. Consequently, the AER had to rely on “theoretical reasoning” and indirect evidence, including observed financing practices of entities subject to the on-the-day approach and the observed debt financing practices of unregulated businesses: RoR Explanatory Statement at p 108.

859    The AER took the view that the decision to enter into swaps contracts by small to medium network service providers was likely to be a function of the on-the-day approach. The BEE would, it decided, be required to have the characteristics of a small to medium, privately owned entity that was subject to an on-the-day approach and managed its risk through swaps.

860    In its RoR Explanatory Statement, the AER stated that a uniform transition to the trailing average approach would be applied to all network service providers, consistent with the method proposed by the Queensland Treasury Corporation (QTC) in its submission to the AER. The decision to conduct a uniform transition relied on what the BEE as identified by the AER was likely to need to transition from the on-the-day approach to the trailing average approach. The AER was aware that any transitional process would need to contribute to the RoR Objective and the Rules. It would need to provide a steady transition to the trailing average approach “given a possible change in prior expectations regarding the regulatory framework by stakeholders”. It also would need to consider the use of historical information and minimise incentives for potential strategic behaviour by network service providers: RoR Explanatory Statement at p 120.

861    The AER was conscious of r 6.5.2(k)(4) of the NER and r 87(11)(d) of the NGR relating to any transition from one methodology to another. Rule 6.5.2(k)(4) is set out above and r 87(11)(d) is in relatively similar terms. That required the AER to have regard, in estimating the return on debt, to any impacts (including in relation to the costs of servicing debt across regulatory control periods) on a BEE that could arise as a result of changing the methodology that is used to estimate the return on debt from one regulatory control period to the next.

862    That led to a critical step the AER took in transitioning to the trailing average approach. It did not consider that the transitional arrangements should have regard to the specific debt financing practices of individual network service providers. It had defined a BEE in the RoR Guideline, and decided that a singular transitional method should be applied. It maintained that approach in its Final Decisions.

863    Consequently, for the transitional step in introducing the trailing average approach, the AER considered that the efficient debt financing practice of the BEE under the on-the-day approach was to hold a staggered debt portfolio combined with hedging contracts to reduce interest rate risk. As such, an immediate change to a trailing average approach would require the BEE to unwind its hedging contracts, which might be costly, if possible at all. The AER would therefore allow for a gradual transition so that the BEE could adjust to the change.

864    The AER also outlined in the RoR Guideline (and reflected in its Final Decisions) that the transition would occur over a ten year period. It was aware that ActewAGL, Networks NSW and some other network service providers would be affected by that form of transition as they had not entered into swap contracts under the on-the-day approach. Indeed, Networks NSW already used a staggered debt financing approach without hedging, and so in a sense reflected the proposed benchmark efficient portfolio approach to debt management which the AER was seeking ultimately to achieve by the trailing average approach.

865    Underlying the adoption of the trailing average approach, commonly accepted, is the fact that companies usually structure their debt by refinancing a portion of their debt each year at fixed rates prevailing at the time of renewal. At any given time, a company will therefore have a portfolio of fixed rate debt entered into in different years in the past and at interest rates prevailing at the date of entry into each facility, referred to in the submissions as a staggered portfolio of fixed rate debt. As older facilities come up for renewal, they are replaced by new debt at current rates.

866    One consequence of the previous on-the-day methodology was that regulated businesses took steps to partially align one component of their cost of debt (the risk free rate component) with the regulatory allowance for that component under the on-the-day approach. They did so by entering into floating rate debt rather than fixed rate debt (or alternatively by entering into fixed rate debt and converting it to floating rate debt using fixed-to-floating interest rate swaps), and then entering into hedge transactions during the averaging period to fix part of their cost of debt at the prevailing rate. It is common ground that businesses could not hedge the DRP component of their debt. It is also clear that some DNSPs did not enter into such hedge transactions. That included each of the Networks NSW businesses, and (for different reasons) ActewAGL.

867    The AER’s transitional approach, adopted in each of the Final Decisions under review and apparently in all regulatory decisions for the current regulatory period, was based on a regulated entity as the BEE which (it considered) would have a portfolio of floating rate debt that, had the on-the-day approach continued, it would have swapped into fixed rate debt during the relevant averaging period. Consequently, the BEE would have unwound its hedging contracts in moving from the current on-the-day approach to the trailing average portfolio approach.

868    Thus, it allowed for the transition on the following basis:

(a)    the return on debt for the first regulatory year as the prevailing rate, averaged over the relevant averaging period (being set using the on-the-day approach – this allowance corresponds to the expected return on debt of an entity that refinances its entire debt portfolio during the averaging period prior to the first regulatory year);

(b)    in the second regulatory year, the allowed return on debt is a weighted sum of the prevailing rates in the first and second years (with weights of 0.9 and 0.1 respectively) (this allowance corresponds to the expected return on debt of an entity if it refinanced its entire debt portfolio during the averaging period prior to year one and then refinanced 10 percent of its debt portfolio during the averaging period for year two);

(c)    in the third year, the allowed return on debt is a weighted sum of the prevailing rates in the first, second, and third regulatory years (with weights of 0.8, 0.1 and 0.1, respectively);

(d)    and so on until in the tenth year of transition, the allowed return on debt is an equally weighted (with weights of 0.1) sum of the prevailing rates in the ten years of transition – at this stage the transition is complete.

See generally Appendix G to the RoR Explanatory Statement at p 131.
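
The weighting scheme just described can be expressed compactly. The sketch below simply restates, in computational form, the weights set out in the preceding paragraph; it is an illustration and not a reproduction of Appendix G to the RoR Explanatory Statement.

    # Weights applied under the AER's ten year transition: in regulatory year t,
    # the year 1 (on-the-day) rate receives a weight of 1 - 0.1 x (t - 1), and each
    # subsequent year's observed rate is rolled in with a weight of 0.1.
    def transition_weights(year):
        """Weights for regulatory year 'year' (1 to 10), ordered from year 1 onwards."""
        tenths = [10 - (year - 1)] + [1] * (year - 1)   # work in tenths to avoid rounding noise
        return [t / 10 for t in tenths]

    assert transition_weights(1) == [1.0]
    assert transition_weights(2) == [0.9, 0.1]
    assert transition_weights(3) == [0.8, 0.1, 0.1]
    assert transition_weights(10) == [0.1] * 10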

869    The AER’s transition methodology was therefore:

(a)    for Networks NSW, the return on debt applying in the 2015-16 regulatory year is calculated by giving 90 percent weight to the annual return on debt that applied in 2014-15 (being 6.51 percent), and 10 percent weight to the annual return on debt that was calculated using the relevant averaging period prior to the commencement of the 2015-16 regulatory year, being 1 July 2014 to 31 December 2014; the resultant annual return on debt was 6.40 percent;

(b)    for ActewAGL, the return on debt applying in 2015-16 is calculated by reference to 90 percent weight on the annual return on debt that applied in 2014-15 (being 6.07 percent), and 10 percent weight to the return on debt that was calculated using the relevant averaging period prior to the commencement of the 2015-16 regulatory year, and as this averaging period was 20 business days ending 31 January 2015, the resultant annual return on debt was 5.91 percent;

(c)    the return on debt applying in the 2016-17 regulatory year is calculated by giving 80 percent weight to the return on debt that applied in 2014-15; and 10 percent weight to the return on debt measured in the averaging period that applied prior to the commencement of the 2015-16 regulatory year; and 10 percent weight to the return on debt measured in the averaging period that applied prior to the commencement of the 2016-17 regulatory year; and

(d)    the above process is continued until the weight placed on the return on debt that applied in 2014-15 is zero as the transition to the trailing average approach is complete, that is by 30 June 2024.
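
Using the figures just set out, the Networks NSW calculation for 2015-16 in sub-paragraph (a) can be checked arithmetically. The 5.41 percent observed rate for the 1 July to 31 December 2014 averaging period is taken from the table reproduced later in these reasons; the sketch is only a check of the stated weighting.

    # 90 percent weight on the 2014-15 on-the-day rate and 10 percent weight on the
    # rate observed over the averaging period prior to the 2015-16 regulatory year.
    rate_2014_15 = 6.51            # percent (Final Decisions)
    rate_observed_2015_16 = 5.41   # percent, observed 1 July - 31 December 2014
    allowed_2015_16 = 0.9 * rate_2014_15 + 0.1 * rate_observed_2015_16
    print(f"{allowed_2015_16:.2f}")  # 6.40 percent, the resultant annual return on debt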

Consideration

870    Networks NSW and ActewAGL say that the AER’s decision to delay moving to the trailing average approach for up to 10 years by imposing a 10 year transition from the on-the-day approach to a trailing average approach does not reflect the cost of debt of Networks NSW, and does not reflect the cost of debt for a hypothetical benchmark entity in the position of Networks NSW or ActewAGL.

871    It is perhaps a little coarse, but not inaccurate, to describe the primary errors asserted by them on the part of the AER in its transitioning approach as:

(a)    adopting the concept of the BEE as a regulated BEE; and

(b)    adopting a “one size fits all” BEE for the purposes of each of its Final Decisions, and its other similar regulatory decisions made in respect of the current regulatory period.

872    Those steps, they contend, do not reflect the fact that – at least for the transitioning decision to the trailing average approach – efficient debt management practices of DNSPs and other network service providers may differ according to their size and structure. Consequently, they refer to various passages in the 2012 Rule Amendments at pp 84-90, particularly the following passage at pp 84-85:

The inclusion of the factors in the rules is intended to provide direction to the regulator as to what factors it should consider for determining the best approach to estimate the return on debt.

The factors reflect a number of key issues raised by SFG in its analysis of different methodologies for estimating the return on debt, and other stakeholders during the rule change process. These issues can be summarised as follows:

    efficient benchmarking service providers may have different efficient debt management strategies;

    the effect on the cost of equity of different methodologies for estimating the return on debt;

    the effect on incentives for efficient capex during the regulatory period of the methodology used to estimate the return on debt; and

    consideration of whether transition arrangements are required if there is a change in the methodology used to estimate the return on debt.

The purpose of the fourth factor is for the regulator to have regard to impacts of changes in the methodology for estimating the return on debt from one regulatory control period to another. Consideration should be given to the potential for consumers and service providers to face a significant and unexpected change in costs or prices that may have negative effects on confidence in the predictability of the regulatory arrangements.

It may be possible in many circumstances for the method to estimate the return on debt to take such concerns into account in the design of the method. Therefore, this criterion was intended to promote consideration of concerns raised by service providers with regard to transitions from one methodology to another. Its purpose is to allow consideration of transitional strategies so that any significant costs and practical difficulties in moving from one approach to another is taken into account.

873    The reference to SFG in the above quoted passage is a reference to its Rule change proposals relating to the debt component of the regulated rate of return: Report for AEMC, 21 August 2012.

In its submissions the AER referred to what was said by the AEMC as quoted above, and to a passage from the SFG report which concludes with the observation that an appropriate transition arrangement should effectively destroy any incentive for a business to seek to “game” the regulatory allowance by proposing whichever method might result in the highest allowance.

874    Each of Networks NSW and ActewAGL acknowledges that the immediate implementation of the trailing average approach, which they advocated in their respective regulatory proposals and maintained in their revised regulatory proposals, was not consistent with the AER’s RoR Guideline. The relevant AER Final Decisions (as illustrated by the passages from Attachment 3 to the Ausgrid Final Decision set out earlier in this section of these reasons) adhered to the transitional approach in the RoR Guideline.

875    For the purposes of Networks NSW (but not ActewAGL) there is an issue that the AER used a BBB+ credit rating, rather than a BBB credit rating, and that it used the Reserve Bank of Australia (RBA) broad-BBB rating interest rate data extrapolated to 10 years (although it used Bloomberg data for 2004, as RBA data was not available).

876    It is common ground that the risk free rate element of the return on debt, being the unobservable return on risk free assets, is properly represented by the prevailing yields on 10 year Commonwealth Government Securities.

The Benchmark Efficient Entity

Was this issue raised and maintained by Networks NSW and ActewAGL?

877    The AER first contended that s 71O(2)(a) of the NEL precludes the issue of whether the BEE is an unregulated firm operating in a workably competitive market from now being raised.

878    In the Tribunal’s view, the issue was raised by Networks NSW and by ActewAGL in submissions to the AER. Indeed, it is a little curious, on this question, that the respective contentions of Networks NSW and ActewAGL, on the one hand and of the AER, on the other, are based on the same documents.

879    It is sufficient, in that light, to note those parts of the respective submissions which lead the Tribunal to that view.

880    Networks NSW’s submissions to the AER dated 13 February 2015, at pp 3-4, refer to the efficient financing costs that would be expected absent regulation. They say:

The notion of the hypothetical “benchmark efficient entity” is a tool designed to ensure that the relevant service provider only recovers revenue in respect of the efficient conduct of the business in a hypothetical competitive environment, not the inefficient conduct of the business in a monopoly environment.

881    The submissions also made the claim that in moving to the trailing average methodology, the efficient financing costs of a BEE with a similar degree of risk as that which applies to Networks NSW is the cost of issuing debt on a fixed rate staggered portfolio basis, and that Networks NSW already issues its debt on that basis, so that there is no requirement in the interests of efficiency or the avoidance of monopoly pricing for imposing a delay in the movement to the preferred methodology.

882    In short, Networks NSW raised as a matter that the efficient financing costs of a BEE are the financing costs that would be expected in a competitive environment.

883    The Tribunal, having considered the particular passages from the submissions referred to by the AER, does not consider that they argue for a BEE that was regulated.

884    The AER also referred to the report of Frontier Economics entitled Cost of Debt Transition for NSW Distribution Networks, January 2015. There is nothing in that report which says that the BEE is a regulated entity. Indeed, the report at pp 8-9 considers the efficient practices of an unregulated infrastructure service provider as a potential benchmark and notes that such a provider would have adopted a fixed rate staggered maturity approach.

885    The AER also submitted that the Revised Regulatory Proposals submitted by Networks NSW do not contain any reference about the definition of the BEE. However, the Tribunal thinks it is clear that Networks NSW was maintaining its position. In each of the Revised Regulatory Proposals, reference is made to the predominant debt management approach of non-regulated infrastructure firms such as ports, airports, roads and railways as being to issue debt on a staggered portfolio/trailing average basis, and as supporting the approach that Networks NSW has adopted.

886    The contention that the AER should have found that the BEE was an unregulated firm operating in a workably competitive market was also raised and maintained by ActewAGL. ActewAGL submitted in its Revised Regulatory Proposal at pp 473-474 that:

The financing practices of relevance to the term “efficient financing costs” do not encompass practices adopted in response to a pre-existing regulatory approach to the estimation of the return on debt notwithstanding whether one of the characteristics of the benchmark efficient entity that informs the degree of risk for which capital market investors require compensation is that that entity is regulated.

Such a construction of the term “efficient financing costs” is consistent with the objective of the regulatory regime established by the NEL and the Rules, as evinced by the NEO and the RPPs, which is itself concerned with creating incentives for efficiency and mimicking, so far as practicable, the outcomes of a workably competitive market, including in particular by creating incentives for providers to operate and invest in the manner of a firm in a competitive environment.

887    Those submissions assert that the pre-existing regulatory approach is of no relevance to the “efficient financing costs” referred to in the RoR Objective “regardless of the characteristics of the ‘benchmark efficient entity’”. It is said that the financing practices of relevance to the term “efficient financing costs” do not encompass practices adopted in response to the regulatory approach: see its Revised Regulatory Proposal at p 478.

888    The AER also contended that even if the matter was raised by any of the material, it was not raised by Networks NSW and ActewAGL with the precision sufficient to be considered “raised and maintained” within the meaning of s 71O(2) of the NEL because the Networks NSW and ActewAGL proposals did not identify and explain it as a departure from the RoR Guideline in accordance with r S6.1.3(9) of the Rules. The AER relied on Application by Jemena Gas Networks (NSW) Ltd (No 3) [2011] ACompT 6 at [102].

889    That is a qualitative assessment the Tribunal does not make. In both their Regulatory Proposals and Revised Regulatory Proposals, Networks NSW and ActewAGL identified and provided reasons for a departure from the RoR Guideline because they sought the immediate adoption of the AER’s trailing average with no transition. The Regulatory Proposals submitted by Networks NSW and ActewAGL clearly did not apply the transitional arrangements as set out in the RoR Guideline. They explain why the RoR Guideline, in that respect, should not be followed. In the Tribunal’s view, that is sufficient to satisfy r S6.1.3(9) of the NER. It is not, therefore, necessary to address whether the obligation in r S6.1.3(9) applies to Revised Regulatory Proposals. Nor is it necessary to explore whether the expression “raised and maintained” in s 71O(2) of the NEL is somehow informed by r S6.1.3(9) of the NER.

890    The Tribunal also does not consider it necessary to explore the general extent or “sufficient precision” referred to in [102] of the case mentioned above. It was a comment made by the Tribunal in the context of cautioning participants to a regulatory process against providing a mass of material to the AER without an indication of which parts of the material they regard as relevant.

Is the Benchmark Efficient Entity a Regulated Entity?

Is the Benchmark Efficient Entity a common entity for all DNSPs?

891    In the Tribunal’s view, these issues are related and can be addressed together. They concern, for the purposes of r 6.5 of the NER, identifying the matters relevant to the building block determinations required by Part C, in particular for the return on capital under r 6.4.3(a)(2) and 6.4.3(b)(2) to be calculated in accordance with r 6.5.2. The relevant parts of r 6.5.2 are set out above.

892    A preliminary point was raised by the AER. It referred in some detail to the process by which it made the RoR Guideline, including the position taken by the Energy Networks Association (ENA) that proposed the definition of the BEE consistent with that adopted by the AER in the RoR Guideline and in its Final Decisions. It pointed out that each of Networks NSW, ActewAGL, JGN, the Vic/SA Interveners and Ergon supported the ENA submission on that point. Consequently, it says, it is not now open to those entities to adopt a position different from that taken during the making of the RoR Guideline.

893    That preliminary point was not developed further in oral submissions. The Tribunal takes the view, having regard to r 6.2.8(c), that neither the AER nor the regulated service providers generally were bound to comply with the RoR Guideline in relation to the process leading to the relevant Final Decisions. As it has concluded that Networks NSW and ActewAGL raised and maintained this issue in their submissions to the AER, it does not consider that somehow they are now estopped from raising the same issue before the Tribunal.

894    The position of JGN requires separate consideration, addressed in the Tribunal’s decision concerning its application for review.

895    Understandably, the AER adhered to the view on these related issues expressed in its relevant Final Decisions for much the same reasons as it then gave. It is also understandable that, subject to one significant issue, PIAC too adopted that position.

896    In one relevant respect, at least in PIAC’s submissions, the 2012 Rule Amendments have a somewhat different emphasis than previously.

897    Under the earlier r 6.5.4(e)(1) of the NER, the key objective for the return on equity and debt was to be:

… a forward looking rate of return commensurate with prevailing conditions in the market for funds and the risk involved in providing the standard control services.

898    That focus on prevailing conditions is maintained in relation to the return on equity: r 6.5.2(g). The nearest parallel expression now for the return on debt is in r 6.5.2(j)(1) which allows for the continuation of the on-the-day methodology, requiring an estimate of the return required by debt investors if the debt were raised shortly before the determination.

899    Rule 6.5.2(j)(2), which allows for the trailing average methodology, does not have a like expression. However, it provides for the estimate as the average return required by debt investors in a BEE if it raised debt over a historical period prior to the commencement of the regulatory year itself during the regulatory control period.

900    Section B of PIAC’s submission on the return on debt issue is headed: “The Economic Regulation amendments on return on debt: what became of ‘prevailing conditions?’” It is a rhetorical question, as the change is not said (at that point or later in the submissions) to be of particular significance. That is probably because it is not. The annual recalculation under the trailing average methodology, giving effect to the RoR Objective and in turn the efficient financing costs of the BEE, would require that contemporaneous consideration in any event.

901    PIAC made much of the process by which the 2012 Rule Amendments in relation to debt came to be made. As already noted, it is clear that, in the course of that process, significant transitional issues including the potential for windfall gains or losses by reason of the transition process from one methodology to another were addressed. One context was the avoidance of a DNSP “gaming” by selecting its preferred methodology, or its preferred transitional process to a new methodology. The 2012 Rule Amendments do not permit that. The decision is made by the AER. Another was to have regard to the consequences for a particular DNSP by reason of the change in methodology and the transitional consequences.

902    PIAC emphasised QTC’s June 2012 submission to the AEMC on those issues. QTC proposed the moving average approach, or “rolling in” arrangement, as ultimately adopted by the AER in the RoR Guideline and then in the relevant Final Decisions.

903    Also, in relation to r 6.5.2(k)(4), PIAC pointed to what was said by the 2012 Rule Amendments at pp 84-85 as quoted above.

904    The critical step then, notwithstanding the recognition that different network providers may have different efficient financing structures in place under the previous regulatory period because of their responses to the on-the-day approach – dictated by their respective circumstances – was for the AER to adopt one regulated entity as the BEE.

905    Once that step was taken, headed by the AER’s choice to adopt the QTC proposal of a 10-year progressive introduction of the trailing average approach, necessarily some DNSPs and network providers would be materially disadvantaged. That step is one also supported by PIAC.

906    It is the case, commonly accepted, that the Networks NSW entities would each recover a significantly greater sum during the current regulatory period by an immediate transition to the trailing average methodology: see the respective Final Decisions for Ausgrid (Attachment 3 at p 3-151); Endeavour (Attachment 3 at p 3-149); and Essential (Attachment 3 at p 3-148). That would have a significant impact on price, adverse to consumers. Networks NSW says that they are being deprived of those amounts by reason of the erroneous adoption of the “one size fits all” transition process. That is, they submit, their existing debt financing structures are already the efficient structures that the AER seeks to achieve by the introduction of the trailing average approach. By the transition process imposed, they say, they are being given an artificial debt financing structure as a starting point, which depresses their recoverable financing costs below their actual (and, subject to analysis by the AER, efficient) financing costs.

907    It is the Tribunal’s view that the BEE referred to in the RoR Objective is not a regulated entity. It need not necessarily be the one entity for the purpose of all regulatory decision-making in a particular regulatory period for all regulated service providers.

908    The general underlying purpose of economic regulation of regulated service providers under the NEL, the NGL and the Rules is canvassed earlier in these reasons. It is common ground. It is to secure, so far as practicable, the NEO and the NGO in accordance with the RPP. To achieve that, the AER is required to make its regulatory determinations in relation to a regulated service provider, in an environment where there is no competition for the services it provides, but broadly speaking as if the relevant provider were operating in a competitive environment.

909    As the AER said, its decision on this topic (and on other topics) is to be made by reference to the efficient financing costs of a BEE, rather than the actual financing costs of the particular regulated service provider. Once those costs or allowances are fixed, they provide the economic incentive to the provider to operate more efficiently.

910    The relevant rules support that overall approach, rather than (as would be the effect of the AER’s contention) support the measurement of performance and the fixing of the return on capital (including the return on debt) by reference to a regulated efficient entity.

911    The particular features of the NER (and the equivalent provisions in the NGR) which, in the view of the Tribunal, are significant are:

(a)    the definition of the RoR Objective in r 6.5.2(c);

(b)    the reference to the return required by debt investors in r 6.5.2(i);

(c)    the interrelationship between the return on equity and the return on debt under r 6.5.2(j)(2);

(d)    the reference to incentives in r 6.5.2(k)(3); and

(e)    the reference to the impacts on a BEE that could arise as a result of changing the methodology used to estimate the return on debt from one regulatory control period to another in r 6.5.2(k)(4).

912    It is appropriate to address those provisions in turn.

913    The RoR Objective directs the allowed rate of return on capital for the relevant DNSP to be applied to its regulatory asset base: r 6.5.2(a) and (b). When r 6.5.2(c) then defines the RoR Objective, it is directed to determining a rate of return for a DNSP by reference to (what the AER determines as) the relevant BEE. The relevant BEE is to be used to determine efficient financing costs to be allowed for. The BEE is to have a similar degree of risk as that which applies to the relevant DNSP in respect of the provision of standard control services.

914    The BEE, in the view of the Tribunal, is likely to refer to the hypothetical efficient competitor in a competitive market for those services. Such a BEE is not a regulated competitor, because the regulation is imposed as a proxy for the hypothetical unregulated competitor. Otherwise, the starting point would be a regulated competitor in a hypothetically regulated market. That would not be consistent with the policy underlying the purpose of the NEL and the NGL in relation to the fixing of terms on which monopoly providers may operate. Indeed, the concept of a regulated efficient entity as the base comparator would divert the AER from the role of fixing the terms for supply of services on a proxy basis compared to those likely to obtain in a competitive market, and focus its attention on some different and unidentified regulated market.

915    It may be observed that the AER, both in the RoR Guideline and in the relevant Final Decisions, imposed the trailing average methodology as that most likely to represent the proxy for the cost of debt for a supplier of the services in a competitive market.

916    Secondly, it is necessary to focus on the characteristic that the BEE must have: a similar degree of risk to that of the relevant DNSP. The relevant DNSP is the DNSP for which the BEE is being determined by the AER. Once it is accepted that different DNSPs have in fact different degrees of risk (as is recognised in the discussions referred to) and so may have different efficient financing cost structures, it leads to the conclusion that there will not be an identical BEE for all DNSPs.

917    The reference to “debt investors” in r 6.5.2(j)(1) and (2) needs little comment. The allowed return may be fixed having regard to the return required by the debt investors in a BEE. The “debt investors” are likely to be investors in a competitive market, rather than in a regulated service provider, as the measure for comparison would otherwise be unidentified and may not lean towards an efficient entity.

918    The interrelationship in r 6.5.2(k)(2) also points to the same conclusion. The reference to the interrelationship between the return on equity and the return on debt means they have a complementarity. The complementarity is significant and meaningful if they are measured by similar, or similarly conceptual, yardsticks. Otherwise, the comparison would not be meaningful. The return on equity is to be measured by the prevailing conditions in the market for equity funds. It would follow that market conditions for the BEE should be used to measure the return on debt, rather than some undefined regulated conditions.

919    Much the same may be said about r 6.5.2(i).

920    The AER contended that, although economic regulation seeks to achieve certain outcomes consistent with a workably competitive market, if the BEE is assumed to compete in a workably competitive market, then the regulatory framework in which the concept of that entity is employed would be otiose. It emphasises the words of the RoR Objective that the BEE is to be taken to have “a similar degree of risk” to the relevant DNSP. Because of their monopoly position, each of the regulated service providers is insulated from comparative risk and is provided with regulated rates of return for capital and debt. Thus, it is argued, the rates of return of investors for investing in regulated service providers are “commensurately lower”. Moreover, it is said, to adopt the alternative view is to depart from the NEO and the NGO, and is to detract from their achievement, because the regulatory environment alters the risk profile of the relevant regulated service provider. It also means, it is said, that the BEE must be a regulated entity because it is otherwise an entity with a risk profile different from, rather than similar to, the risk profile of the regulated DNSP or network provider.

921    The Tribunal has, of course, carefully considered those contentions in reaching its conclusion. For the reasons given, it considers that textually and contextually there are strong reasons why the AER’s contentions should be rejected. The AER’s analysis of the definition of the RoR Objective involves a degree of circularity. The comparison is provided so that the BEE is not an artificial or contrived comparator. As explained in the next section of these reasons, the Tribunal is not persuaded that the AER erred by adopting a single BEE for the regulated service providers. But, it is not likely that within the structure of the NER and NGR, premised (as the AER acknowledges) on imposing by regulation a pricing structure for monopoly service providers by reference to the hypothesised efficient pricing structure in a workably competitive market, there would be a discrete subset of tests prescribing a comparison with a regulated service provider. There is nothing in the AEMC materials leading to the 2012 Rule Amendments which indicates such an intention.

922    The Tribunal in the next section of its reasons dealing with “The Transition” addresses the proper operation of r 6.5.2(k)(4). It reaches the view that, although the concept of the BEE is a standard one, because the RoR Objective refers to a BEE “with a similar degree of risk as that which applies to the Distribution Network Service Provider” (Tribunal underlining) and “the Distribution Network Service Provider” refers back to the RoR Objective for that particular DNSP, it is necessary to consider how that DNSP should efficiently have structured its financing costs under the former regulatory regime. Relevantly, how it should efficiently have done so in response to the on-the-day methodology of estimating the rate of return. As different DNSPs may have different degrees of risk, there is scope for a range of structures of efficient financing costs to exist at the end of one regulatory period. That range of structures then assumes significance for the purposes of r 6.5.2(k)(4) of the NER.

The Transition

923    This part of the reasons concerns Networks NSW and ActewAGL.

924    For present purposes, once the step has been taken (as the Tribunal has done) of starting with a BEE which has the characteristics of one hypothetical participant in the competitive market – that is, the “efficient financing costs” are determined on that basis – it follows that the AER’s approach to transitioning under r 6.5.2(k)(4) must be reconsidered.

925    Its determination of the BEE required it to determine at the commencement of the current regulatory period, as between the various DNSPs, which (if any) of their debt financing structures adopted in relation to the on-the-day methodology used in the previous regulatory period was the preferable or more representative one. As noted, it selected that applicable to those DNSPs which had a portfolio of floating debt that would have been hedged, and then considered what would have been involved in moving to the trailing average portfolio approach. That starting point meant that the debt financing structures of Networks NSW (which did not hedge) or of ActewAGL (which does not have debt financing) were, by its definition, inefficient, and that the implementation of the trailing average methodology required transitioning in their instance in a manner which was obviously artificial.

926    It is somewhat ironic that, by that process, the BEE at the end of the current regulatory period, under the trailing average approach, would (subject to particular considerations) have the characteristics of the financing cost structure of Networks NSW at the commencement of the current regulatory period. That is because, by its approach, the AER has treated that current financing cost structure as inefficient, even though that structure (subject to particular considerations) underlies the trailing average approach.

927    If a different starting point, that is a different BEE efficient financing cost structure, is adopted, it is then necessary to revisit the AER’s approach to, and consideration of, the factor to which it (or the Tribunal) must have regard under r 6.5.2(k)(4).

928    The Tribunal addresses later in these reasons whether, and if so how, that should be done having regard to s 71P(2a) and (2b) of the NEL.

929    It is desirable to comment, at this point, on one further submission of the AER.

930    Its contention is that the effect of debt transition on a particular service provider is ultimately a largely irrelevant consideration. The relevant matters that the AER must have regard to under r 6.5.2(k)(4) are any impacts on a BEE that could arise from a change in methodology, including in relation to the cost of debt across regulatory control periods. Accordingly, the effect of debt transition on a particular service provider can be relevant only to the extent that it provides some information about how a change in debt methodology would impact a BEE. Therefore, it says, it is not a mandatory relevant consideration whether an immediate transition to the trailing average approach would cause any cost and inconvenience for Networks NSW and ActewAGL because they either have no debt or staggered non-hedged debt. The relevant consideration is the efficient financing costs of a BEE, not the particular DNSPs the subject of a decision.

931    Consequently, it argues that the AER made no reviewable error in adopting a transition option (Option 4 of the four options referred to above) on the basis that it would not have any effect on Networks NSW and ActewAGL. The effect on Networks NSW and ActewAGL was not the matter that the AER was required to turn its mind to. This effect was relevant only to the extent to which their practices reflect efficient financing practices, which is addressed further below.

932    The contention itself is ironical. It takes the regulated BEE (which is chosen by the AER as a standard from the range of individual network providers’ financing costs structures, being their idiosyncratic individual responses to the on-the-day methodology), and then selects a transition option to achieve the financing costs structure reflected by, and in, the trailing average approach. So it is converting the hypothesised regulated BEE from one financing costs structure, which it initially regards as “the efficient costs structure” but which ultimately it regards as inefficient, to another financing costs structure. And in doing so, it does not have to have regard to the fact that Networks NSW already have that financing costs structure (not necessarily in the efficient form), but it deems Networks NSW and ActewAGL to have some other costs structure for the purposes of the transition process.

933    The Tribunal’s view is that is not correct. In its view, the compulsory consideration in r 6.5.2(k)(4) of the NER:

(1)    starts with the efficient financing costs of a BEE as described above (ie not a regulated BEE);

(2)    in the case of a changed methodology to estimate the return on debt, determines whether the BEE would suffer any impacts as a result of the changed methodology; and

(3)    if so, has regard to those impacts in deciding on the transition process to the new methodology.

934    The starting point is not the actual financing costs of the relevant DNSP, but the efficient financing costs having regard to its degree of risk. In the case of Networks NSW, as its financing costs structure was readily applied to the trailing average methodology, the relevant inquiry would start with whether its actual financing costs were efficient as at the commencement of the new regulatory period. If not, those of the BEE would be applied prospectively. In the case of other DNSPs, the relevant inquiry would start with whether each of their actual financing costs (including the hedging costs) were efficient having regard to their particular degree of risk at the start of the new regulatory period. If so (as appears, broadly speaking, to have been accepted), the impacts of the changed methodology would require the sort of transition process which was imposed in the Final Decisions concerning them. If not, then the starting point for that transition process would be some refinement to the efficient financing costs within that structure.

935    It is the expression in r 6.5.2(c) requiring the efficient financing costs of a BEE “with a similar degree of risk” as that applying to the particular DNSP which, in the view of the Tribunal, supports that conclusion. It also has a degree of common sense as a response to a changed methodology, because it represents a means of realistically looking to the actual consequences of that change. It means, contrary to the AER submission, that an actual assessment must be made of the efficient (not just the actual) financing costs of each DNSP as it has responded to the methodology for estimating the return on debt in the prior regulatory period, and an actual assessment must be made of the impacts on those efficient financing costs of that DNSP by the changed methodology.

936    Those conclusions are consistent with the AEMC’s reasons for the relevant new or changed rules introduced by the 2012 Rule Amendments. The AEMC’s comments on new r 6.5.2(k)(4) of the NER and r 87(11)(d) of the NGR are set out above, and of course the underlying theme of the AEMC is that the most appropriate benchmarking is the efficient private sector provider: see the 2012 Rule Amendments as quoted above. The AEMC there also recognises that there may be multiple debt management strategies that are efficient, and that different businesses may adopt different but equally efficient debt management practices:

The Commission intends that the regulator could adopt more than one approach to estimating the return on debt having regard to different risk characteristics of benchmark efficient service providers.

The first factor in the rule requires the regulator to have regard to the characteristics of the benchmark service provider and how this influences assumptions about its efficient debt management strategy … debt management practices tend to differ according to the size of the business, the asset base of the business, and the ownership structure of the business.

And, earlier the AEMC said, at p 49:

The Commission considered that no one method can be relied upon in isolation to estimate an allowed return on capital that best reflects benchmark efficient financing costs.

937    Accordingly, the Tribunal considers that a ground or grounds of review have been made out by Networks NSW and ActewAGL in relation to the estimate of the rate of return on debt.

938    The selection or identification of the BEE as a regulated entity involved the wrong exercise of a discretion about the character of the BEE in all the circumstances, and as a consequence its decision on the topic was unreasonable in all the circumstances. It may have been possible to identify the specific features of the regulated BEE which then, as a matter of fact, might be said to involve errors of fact in its findings of fact, but it is not necessary to go into that detail. Similarly, its exercise of its discretion to apply the characteristics of its selected regulated BEE to the transition process in the case of Networks NSW and ActewAGL is also erroneous, and its decision on the transition process was unreasonable, in all the circumstances.

939    If the changed methodology might produce benefits to a particular DNSP (as, it was suggested, might be the case because of some carry forward windfall arising from the previous methodology), it may be that s 16(1)(d) of the NEL in the case of the AER (or s 71P(2a) and (2b) of the NEL in the case of the Tribunal) would require some alterations to what would otherwise be an appropriate transition process. That is not a matter which was much debated in the course of submissions.

940    As the Tribunal proposes to remit this matter to the AER, for reasons expanded upon later, it is not necessary or appropriate to explore those alterations in detail at present.

941    The Tribunal notes that Networks NSW and ActewAGL argued that the regulatory regime does not permit “true-ups” based on an ex post review of the previous regulatory allowance, in part because it would remove the incentives to efficiency on which the regulation is based. They also extensively responded to the analysis of Lally, Transitional Arrangement for the Cost of Debt, 24 November 2014, which suggested at a general level (that is, not specific to any one DNSP) a significant past benefit under the on-the-day approach because it led to rates of return on debt significantly higher than those actually incurred. Their submission is to the effect that there was no past “windfall” gain. It is not necessary to do other than note those matters.

942    There are a few other matters which may be relevant to that review. The Tribunal has noted that the existing debt financing structures of the Networks NSW DNSPs are not necessarily to be taken as efficient for the purposes of any transition. There are issues as to any correlation between the risk free rate and the DRP. There are issues as to the relevance and significance of the high DRP rates immediately following the Global Financial Crisis (GFC) of 2007-08, and how they should or should not be taken into account. To the extent that the Networks NSW DNSPs have currently locked in those rates, it may not (on the appropriate analysis) have been appropriate to do so. PIAC notes that locking in those rates at about August/September 2008 produces a return on debt of 8.82 percent per annum (a risk free rate of 5.82 percent per annum and a DRP of 3 percent per annum), whereas the rates if taken at about June 2014 would produce a return on debt of 6.51 percent per annum. There may be other relevant considerations. In addition, even if the correct starting point is each Networks NSW DNSP’s current actual financing costs (that is, if they are efficient), s 16(1)(d) of the NEL may entitle the AER to make some adjustment if – as PIAC says – consumers may thereby be paying “a second time” for the consequences of the spike in rates following the GFC.

943    The Tribunal, as it does not propose to itself make the reviewed decision, simply notes those contentions.

PIAC’s contentions

944    Having regard to the Tribunal’s conclusions on the matters discussed above, in theory PIAC’s contention on the estimation of the return on debt does not presently require determination. It is premised upon the Tribunal, in broad terms, adhering to the AER’s Final Decisions concerning the transition for the Networks NSW entities into the trailing average approach.

945    The Tribunal has decided earlier in these reasons, that PIAC’s contention is not precluded by s 71O(2)(c) of the NEL.

946    Before addressing this issue, the Tribunal notes that, as should be apparent, it has not disregarded PIAC’s contentions in support of the AER’s transition approach for Networks NSW. PIAC pointed out that the 2012 Rule Amendments by the AEMC were instigated in part by consumer complaints about both the methodology and outcomes of the AER’s estimation of the return on debt for the previous regulatory period. Those contentions have been carefully considered as part of the earlier section of these reasons. In particular, it is noted that PIAC had the concern that the transition should not produce windfall gains or losses either to networks or consumers, and should not generate any incentive for networks to “game” any change in estimation method in order to maximise their regulatory allowances.

947    This section of the reasons addresses PIAC’s contention that the transitional mechanism adopted by the AER has resulted in return on debt allowances to Networks NSW that substantially exceeded those justified by prevailing conditions in the debt finance market at the time of the relevant Final Decisions in April 2015.

948    PIAC contends that the substantial over-allowance is the result of:

(1)    the AER having misspecified the formula implementing the transitional mechanism for the return on debt, as it applies in the particular circumstances of the 2014-15 and 2015-19 regulatory control periods for Networks NSW; and

(2)    a consequent misalignment of the averaging periods for observing the on-the-day return on debt for the transitional base year, and for the risk-free rate.

949    The AEMC recognised that its guideline development period would overlap with the time period during which networks would be due to submit their regulatory proposals for the first of the “second round” of network revenue determinations. Accordingly, when making the 2012 Rule Amendments, the AEMC inserted transitional provisions to allow for the full revenue determination process to be carried out for each network after publication of the guidelines, and to make arrangements for interim revenue determinations where necessary for a short period following the conclusion of networks’ then current regulatory control periods. PIAC’s Return on Debt submissions describe the process as set out in the following paragraphs.

950    PIAC’s submissions commence by noting that the AEMC’s prescription for the NSW/ACT Networks was to use a placeholder determination for a one-year interim regulatory control period (2014-15), followed by a full determination for the subsequent 4-year regulatory control period (2015-2019), incorporating an NPV-neutral “true-up” mechanism to account for any differences between the revenue allowed for 2014-15 under the placeholder determination, and the revenue requirement for 2014-15, as determined in the full determination process. The submissions then note that that mechanism was relevantly provided for in r 11.56 of the NER:

11.56.4    Subsequent regulatory control period

General

(a)    Except as otherwise specified in this clause 11.56.4, current Chapter 6 governs the making of a distribution determination for the subsequent regulatory control period [2015-2019] of an affected DNSP.

Calculation of an annual revenue requirement and other matters

(b)    

(c)    For the purposes of making a distribution determination for an affected DNSP for the subsequent regulatory control period of that affected DNSP, the AER must determine:

(1)    the annual revenue requirement of the affected DNSP for each regulatory year of its subsequent regulatory control period;

(2)    the total revenue requirement of the affected DNSP for that subsequent regulatory control period;

(3)    the X factor for each control mechanism for each regulatory year of that subsequent regulatory control period; and

(4)    the opening value of the regulatory asset base for the relevant distribution system,

in accordance with current Chapter 6 … and as if:

(5)    the subsequent regulatory control period comprised the transitional regulatory control period (as the first regulatory year of the subsequent regulatory control period) and all of the regulatory years of the subsequent regulatory control period (as the remaining regulatory years of the subsequent regulatory control period); and

(6)    the transitional regulatory control period were not a separate regulatory control period.

For the avoidance of doubt, this paragraph (c) requires the AER to determine a notional annual revenue requirement, a notional X factor or X factors and a notional opening value of the regulatory asset base for the regulatory year that comprises the transitional regulatory control period

(e)    The transitional regulatory control period of an affected DNSP must be treated as if it were the first regulatory year of the subsequent regulatory control period of the affected DNSP, and not a separate regulatory control period, for the purposes of the application of the following clauses of current Chapter 6 in respect of a distribution determination for the affected DNSP for that subsequent regulatory control period: clauses 6.5.2(i) …

Adjustment to annual revenue requirement

(h)    An affected DNSP’s total revenue requirement for its subsequent regulatory control period must be fully adjusted for the adjustment amount determined in accordance with paragraph (i) …

(i)    For the purposes of paragraph (h), the adjustment amount is calculated as:

(1)    the amount of the annual revenue requirement that was approved for the transitional regulatory control period under clause 11.56.3(b) or (d); less

(2)    the amount of the notional annual revenue requirement for the transitional regulatory control period that is determined under paragraph (c).
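
The adjustment amount in paragraph (i) of the quoted clause is a simple difference. The sketch below uses invented figures purely to illustrate the calculation; it takes no position on matters the clause leaves to the AER.

    # Adjustment amount under r 11.56.4(i): the annual revenue requirement approved
    # for the transitional 2014-15 year, less the notional 2014-15 requirement
    # determined in the full 2015-19 determination. Figures are invented.
    approved_transitional_arr = 1650.0   # $m, approved under cl 11.56.3
    notional_transitional_arr = 1500.0   # $m, notional requirement under paragraph (c)

    adjustment_amount = approved_transitional_arr - notional_transitional_arr
    # Under paragraph (h), the total revenue requirement for the subsequent
    # regulatory control period must be fully adjusted for this amount (on PIAC's
    # description, on an NPV-neutral basis).
    print(adjustment_amount)  # 150.0 ($m)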

951    It is PIAC’s submission that the effect of r 11.56.4(e) modifying r 6.5.2(i) (and consequently r 6.5.2(l)) is that if the AER estimates the return on debt:

(a)    using a method that results in a uniform return on debt for each regulatory year, then the AER must apply that uniform return on debt throughout the combined period 2014-19; or

(b)    using a method that results in the return on debt being different for different regulatory years, then the AER must specify a formula to determine the resulting change to a DNSP’s annual revenue requirement across the combined period 2014-19.

952    It is also PIAC’s submission that r 11.56.4 did not make any specific provision affecting the operation of Chapter 6 for estimation of the return on equity. Thus, it submits, notwithstanding that 2014-15 stood alone as a regulatory control period separate from the subsequent 2015-19 regulatory control period, the AER was required by r 11.56.4(c), when making its Final Decisions, to make its determination in accordance with “current Chapter 6”, as if the 2014-15 year were included as the first year of the revenue determination. Accordingly, when making the Final Decisions in April 2015, the AER was required, in determining the return on equity, to have regard to the then prevailing conditions in the market for equity funds (r 6.5.2(g)), unaffected by any considerations pertaining to the notional inclusion of the 2014-15 year as the first year of the revenue determination.

953    It went on to submit that, similarly, in estimating the return on debt (and the overall return on capital), the only modification of r 6.5.2 that was made by r 11.56.4 was that the r 6.5.2(l) formula for annual updating of the return on debt had to cover the 2014-15 year, in addition to the 2015-19 regulatory control period. Otherwise, the AER remained subject to the other requirements of r 6.5.2 which, as PIAC submitted, included:

(a)    if the AER elected to use the “on the day” method (or some combination of that and the trailing average methods), then the “on the day” rate was required to reflect the return that would be required by debt investors in a BEE raising debt “at the time or shortly before” the making of the distribution determination: r 6.5.2(j)(1);

(b)    in estimating the return on debt, the AER was required to have regard to the interrelationship between the return on equity and the return on debt: r 6.5.2(k)(2);

(c)    in determining the overall allowed rate of return, the AER was required to have regard to:

(i)    relevant market data;

(ii)    the desirability of consistent application of parameters relevant or common to the return on equity and the return on debt; and

(iii)    any interrelationships between estimates of financial parameters relevant to the return on equity and the return on debt: r 6.5.2(e).

954    As PIAC pointed out, its grounds of review do not concern the return on debt that the AER in fact determined under the placeholder revenue determination for the interim 2014-15 regulatory control period, or the truing-up of that placeholder return on debt in the Final Decisions.

955    Its complaint is how the AER adopted the trailing average approach and the 10 year transition methodology. PIAC says the AER commenced the 10 year transition from 2014-15 as the transitional base year, for which an on-the-day rate was applied in full; and then in each subsequent year, it rolled in that year’s return on debt, weighted as to 10 percent, in determining the weighted average return on debt for that year. Hence, the on-the-day rate for the transitional base year determines 80 percent of the aggregate 5 year return on debt allowance.
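
The 80 percent figure follows directly from the transition weights. The brief check below assumes, for simplicity only, that the five regulatory years 2014-15 to 2018-19 contribute equally to the aggregate allowance.

    # Weight carried by the 2014-15 (transitional base year) on-the-day rate in each
    # of the five years 2014-15 to 2018-19, expressed in tenths: 100%, 90%, 80%, 70%, 60%.
    base_year_weight_tenths = [10, 9, 8, 7, 6]
    share_of_aggregate = sum(base_year_weight_tenths) / (5 * 10)
    print(share_of_aggregate)  # 0.8, ie 80 percent of the aggregate five year allowance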

956    From that point, PIAC points out that in the RoR Guideline, the AER had indicated that the observed return on debt for each successive year would be determined over an averaging period to be nominated by each network of a duration of between 10 consecutive business days and a maximum of 12 months; lying wholly in the future at the time it is nominated; and as close as practical to the commencement of each regulatory year in a regulatory control period.

957    In the Final Decisions, the AER approved the following averaging periods nominated by each of Networks NSW for 2014-15 and 2015-16, and observed the following average returns:

Regulatory year        Averaging period                      Return on debt
2014-15                28 February – 30 June 2014            6.51 percent
2015-16                1 July – 31 December 2014             5.41 percent

The last mentioned figure is calculated from the 90 percent/10 percent weighted average annual rate of 6.40 percent determined for 2015-16.

958    For estimation of the risk-free rate, the AER had indicated in the RoR Guideline that it would adopt a short averaging period of 20 business days in length, as close as practically possible to the commencement of the regulatory control period. That continued the AER’s pre-2012 regulatory practice, and has previously been endorsed by the Tribunal: see Re DBNGP (WA) Transmission Pty Ltd (No 3) [2012] ACompT 14 at [127].

959    In the Final Decisions, the AER determined a 20 business day averaging period, from 9 February to 6 March 2015, resulting in an annual risk free rate of 2.55 percent.

960    PIAC’s case is that, in the particular transitional circumstances of the 2014-15 and 2015-19 regulatory control periods, the AER’s specification of the formula for annual updating of the return on debt contravened r 6.5.2 in three main respects. Those points are:

(1)    the transition did not commence from the on-the-day rate reflecting prevailing conditions shortly before the making of the final decisions;

(2)    the base year return on debt was not based on the latest and most up to date market data; and

(3)    the AER’s transitional formula resulted in the base year return on debt and the risk free rate being observed in windows 8 to 12 months apart.

961    The AER addressed those contentions, in part by challenging the proposition that r 6.5.2(j)(1) is mandatory rather than discretionary, and (it said) it follows that its approach was compliant with the Rules including the transitional rule 11.56.4(e). Secondly, it says, it correctly adopted 2014-15 as the transitional year, consistently with r 6.5.2(e)(1) including by having regard to market data, leading to its graduated transition methodology. Thirdly, it says that sound regulatory practice does not require the same, or approximately the same, averaging intervals for the calculation of the return on equity and the return on debt.

962    It is not necessary to record in detail the further submissions of Networks NSW in relation to PIAC’s contentions.

963    As the Tribunal proposes to set aside the Final Decisions concerning Networks NSW, at least in relation to the return on capital (more specifically the return on debt) and remit the decisions to the AER for reconsideration, PIAC’s contentions do not need to be determined. As noted, they are premised on the transitional path for the trailing average contained in the AER Final Decisions concerning Networks NSW being maintained in principle.

Separate issues of Networks NSW

964    There are two further issues which relate to Networks NSW only (in the case of the second, also raised by JGN but on a different basis and for different reasons). In the case of Networks NSW, they arise because they proposed that, in estimating the return on debt, the AER should use:

(a)    a BBB credit rating; and

(b)    the RBA 10 year curve, extrapolated to an effective term of 10 years for the nine year period from 1 January 2005 to 31 December 2013, and Bloomberg data for the one year period from 1 January 2004 to 31 December 2004 (over which RBA data is not available).

965    The AER’s Final Decision concerning Networks NSW was that a credit rating of BBB+, and a simple average of the RBA curve and the Bloomberg Fair Value (BFV) curve should be used to estimate the return on debt. These issues are discussed below, dealing with data source first and then credit rating.

(a)    Data source

966    There is no issue as between Networks NSW and the AER that a third party data service provider should be used in estimating the return on debt: Attachment 3 to the Ausgrid Final Decision at pp 3-12 and 3-149. The only issue is which of the curves (or combination of curves) should be used.

967    At the time of the publication of the RoR Guideline, the AER used the BBB seven year BFV curve, extrapolated to a 10 year maturity: RoR Explanatory Statement at p 127. The RoR Explanatory Statement also noted (at p 128) the AER’s expectation that the RBA would commence publication of an estimate for the return on debt, on both a broad BBB credit rating band (comprising BBB-, BBB and BBB+) and an A credit rating band (comprising A-, A and A+), with a range of maturities (including seven and 10 year average debt terms), of which the AER observed: “Importantly we also understand that the RBA’s method will be transparent”.
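
Extrapolation of a seven year curve to a ten year effective term can be done in a number of ways. The linear extrapolation sketched below is offered only as a simple illustration of the idea; it is not put forward as the method the AER in fact applied, and the figures are invented.

    # Illustrative linear extrapolation of a yield estimate from a 7 year to a
    # 10 year term, using the slope between two observed tenors.
    yield_5yr = 5.60    # percent, observed 5 year estimate (invented)
    yield_7yr = 6.10    # percent, observed 7 year estimate, the published curve endpoint (invented)

    slope_per_year = (yield_7yr - yield_5yr) / (7 - 5)   # 0.25 percent per year of term
    yield_10yr = yield_7yr + slope_per_year * (10 - 7)
    print(f"{yield_10yr:.2f}")  # 6.85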

968    In April 2014, the AER published an Issue Paper on the choice of third party data service provider for estimating the return on debt: Return on Debt: Choice of Third Party Data Service Provider: Issue Paper, April 2014. The Issue Paper noted that the RBA, in its December 2013 Bulletin, had published an article, New Measures of Australian Corporate Credit Spreads, that presented a method for estimating the aggregate credit spreads of A-rated and BBB-rated bonds issued by Australian non-financial corporations across a range of maturities. The Issue Paper noted that the RBA would commence publishing monthly credit spread estimates from December 2013.

969    Networks NSW proposed that the return on debt be measured solely by reference to the RBA curve, with the exception of the one year period from 1 January 2004 to 31 December 2004 (over which RBA data is not available). Its reasons included the behaviour of the curves published by Bloomberg and the RBA in response to market events; and the relative transparency of the methodologies used to construct the curves.

970    Networks NSW says the RBA curve (introduced in November 2013 and backcast to January 2005) responded to the GFC in late 2008 and early 2009 in the manner expected, but the BFV curve did not. That proposition is based on the report of CEG: WACC estimates – A Report for NSW DNSPs, May 2014 at p 41. It is noted that the BFV curve is the predecessor to the Bloomberg Valuation Service broad BBB (BVAL) curve.

971    The BVAL curve was only introduced in 2013 and was subsequently backcast by Bloomberg to mid-2010 but does not extend back to the 2008-09 GFC.

972    For present purposes, the BFV curve and the BVAL curve can be treated as equivalent, as the contentions of Networks NSW apply equally to them. The AER referred to the BVAL curve in its submissions.

973    Networks NSW says the CEG view is supported by the RBA: New measures of Australian corporate credit spreads, at p 24, and by the Chairmont Report at pp 40-41. It also says the RBA curve responds appropriately to the perceived sovereign risk in some European currencies up to 2012. The graph of the two curves against new issuance margins during 2009 suggests the RBA curve fits more conformably. It is common ground that the RBA methodology and data are transparent, whereas Bloomberg, as a commercial entity, does not publicly provide its methodology or the data it has used.

974    It is not necessary to refer to the particular points made by CEG in support of the RBA’s selection criteria and data used for its curve.

975    Following an extensive consultation process and on the basis of advice from Dr Martin Lally and the ACCC’s Regulatory Economic Unit (Lally, Implementation issues for the cost of debt, November 2014; REU, Return on debt estimation: A review of the alternative third party data series: Report for the AER, August 2014, published with the Draft Decisions) the AER decided to use both the RBA curve and BVAL curve.

976    The AER also submits that following the Draft Decisions, the most common position among service providers was to support a simple average of the RBA and BVAL curves in all or most circumstances. It referred in detail to them in its written submissions to the Tribunal. It is convenient to note JGN’s position. JGN supported using a simple average of the RBA and BVAL curves where the difference between them was not “a material divergence” (which it considered to be 60 basis points), but did not necessarily support a simple average when the difference was greater than 60 basis points. JGN’s preferred approach involved an annual testing of the available third party data series.

977    As is self-evident, Networks NSW maintained their initial proposal to use only the RBA curve (a position taken also by Ergon).

978    It is also worth noting that Networks NSW and JGN hold opposing views on this matter.

979    The AER summarised its reasons for the approach it adopted, for example, in Attachment 3 to the Ausgrid Draft Decision at p 3-136 as follows:

We consider a simple average of the two curves will contribute towards a return on debt that is commensurate with the efficient debt financing costs of the benchmark efficient entity. This is because:

    Based on analysis of the bond selection criteria, we are not satisfied that either curve is clearly superior to the other.

    Based on analysis of the curve fitting (or averaging) methodologies, we are not satisfied that either curve is clearly superior to the other.

    Both curves require adjustments from their published form to make them suitable, and we are not satisfied that either can be more simply or reliably used for estimation of the annual return on debt.

    A simple average is consistent with Lally’s advice that we adopt a simple average of the BVAL curve and the RBA curve, subject to the necessary adjustments to each curve. In particular, Lally concluded that based on analysis of the curves, it was reasonably likely that a simple average of the two curves would produce an estimator with a lower mean squared error (MSE), than using either curve in isolation. Lally also noted “on the question of which index better reflects the cost of debt for the efficient benchmark entity, there is no clear winner”.

    The two curves have regularly produced substantially different results at particular points in time. While we are not satisfied that either curve is clearly superior, this suggests that it may not be appropriate to simply select one curve or the other.

    A simple average of two curves, in these circumstances, is consistent with the Tribunal’s decision in the ActewAGL matter [Application by ActewAGL Distribution [2010] ACompT 4 at [78]], where the Tribunal concluded that:

if the AER cannot find a basis upon which to distinguish between the published curves, it is appropriate to average the yields provided by each curve, so long as the published curves are widely used and market respected.

    A simple average of the two curves will reduce the likely price shock if either curve becomes unavailable or produces erroneous estimates during the period.

Further, our draft decision is also to make certain adjustments to the RBA and BVAL curves. For the RBA curve, our draft decision is to interpolate the monthly data points to produce daily estimates, to extrapolate it to an effective term of 10 years using the spread between the extrapolated RBA 7 and 10 year curves, and to convert it to an effective annual rate.
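
The adjustments described in that passage can be illustrated in outline. The following sketch is illustrative only, using hypothetical figures and function names; it is not a description of the AER’s actual calculations:

# Illustrative sketch only: hypothetical figures and function names, not the AER's actual data or calculations.

def linear_interpolate_daily(monthly_yields, days_per_month=21):
    """Interpolate month-end yield observations to produce daily estimates."""
    daily = []
    for start, end in zip(monthly_yields, monthly_yields[1:]):
        step = (end - start) / days_per_month
        daily.extend(start + step * i for i in range(days_per_month))
    daily.append(monthly_yields[-1])
    return daily

def extrapolate_to_10_years(yield_7yr, spread_10yr_minus_7yr):
    """Extrapolate a 7 year yield to an effective 10 year term by adding the
    spread between the (extrapolated) 7 and 10 year curves."""
    return yield_7yr + spread_10yr_minus_7yr

def to_effective_annual_rate(semi_annual_rate):
    """Convert a semi-annual (coupon-style) rate to an effective annual rate."""
    return (1 + semi_annual_rate / 2) ** 2 - 1

# Hypothetical example: three month-end 7 year yields, and a 0.30% 7-to-10 year spread.
daily_yields = linear_interpolate_daily([0.0550, 0.0560, 0.0555])
ten_year = extrapolate_to_10_years(0.0560, 0.0030)      # 0.0590
effective_annual = to_effective_annual_rate(ten_year)   # approximately 0.0599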

980    In their Revised Revenue Proposals, Networks NSW again proposed the exclusive use of the RBA curve for the period for which it was available, but did not provide any further detailed analysis or evidence on this issue going beyond what was included in their initial regulatory proposals, nor engage with the extensive reasons the AER had set out in its Draft Decisions for adopting a simple average of the two curves. The arguments presented on this issue were limited to the following:

The AER’s draft decision adopted an average of Bloomberg’s Valuation (BVAL) curve and data on corporate bond yield from the Reserve Bank of Australia (RBA) to estimate the allowed return on debt. In this revised proposal, we maintain our initial position that where available the RBA data source should be used to estimate the trailing average cost of debt. As outlined in our initial proposal, we consider the RBA to be a highly reliable independent data service provider for estimates of yields on 10 year BBB rated Australian corporate bonds. Moreover, RBA data extends back to January 2005, which enables the use of a consistently calculated data series to estimate the trailing average cost of debt as far back as January 2005.

981    Networks NSW points out that there is a careful and thorough rebuttal of each of the matters raised in the other expert reports, and in the ACCC Regulatory Economic Unit’s analysis.

982    The submission that the BFV/BVAL curves should be regarded with suspicion because of an apparently counter-intuitive response following the GFC is, according to Dr Lally, a criticism related to a different Bloomberg (BFVC) curve, so the point is said not to be significant. In any event, the AER says, it was aware of that counter-intuitive aspect of a Bloomberg curve but nevertheless considered that it was an acceptably reliable indicator. That sort of judgment has been previously considered by, and not rejected by, the Tribunal: eg Application by APT Allgas Energy Ltd (No 2) [2012] ACompT 5 at [76]-[80]. The issue of relative transparency was also recognised and taken into account by the AER: eg Attachment 3 to the Ausgrid Draft Decision at pp 3-148 to 3-149.

983    In the Tribunal’s view, whilst there are arguments for the sole use of the RBA curve, it has not been shown that – for the purposes of estimation of the return on debt – any ground of review has been made out. The AER had a choice to make as to what data services, or combination of data services, it should use. Its reasons for selecting the combination of data services are cogent and reasonable. It is not shown to have misunderstood or overlooked material information. Although there are facts underlying the choice of the AER, the Tribunal is not persuaded of any particular material factual finding which is different from those made by the AER. For the purposes of the relevant Final Decisions, the AER did not positively find that the RBA curve was clearly superior to the BVAL curve, so that its averaging of the two curves was an acceptable measure of the DRP. The Tribunal is not satisfied that, on the material, the AER should have exercised its discretion to select either the RBA curve only, or some other formula for the estimation of the return on debt. Consequently, the Tribunal is not satisfied that the AER made an irrational decision in this respect.

984    The Tribunal considers JGN’s separate contention on this issue in its reasons for decision on the JGN application.

(b)    Credit Rating

985    In its relevant Final Decisions, the AER noted the divergence of views between service providers and distribution providers, consultants and consumer groups as to the appropriate credit rating to adopt. There was a mix of views among service providers, a mix of views among consultants, and consumer groups generally supported using a benchmark credit rating of BBB+ or higher or placing less reliance on credit ratings in general.

986    The AER explained its identification of the median credit rating in Attachment 3 to the Ausgrid Final Decision, at p 3-197 as follows:

For historical periods of progressively longer length (starting with the current year, then the last two years and etcetera, up to the last 10 years), the median credit rating has been BBB+ in three out of ten cases, BBB+/BBB in six cases, and BBB in one case. While some evidence supports a BBB credit rating (for example, the median over 2009-2015), we are satisfied that, on balance, the evidence supports a BBB+ credit rating (for example, the median over the periods 2013-2015, 2014-2015 and 2015). We also note that this estimate entails taking the median from the yearly medians.

987    The AER acknowledged that it could also take the median of all credit rating observations over these time periods. That would produce BBB+ for the five most recent periods, BBB/BBB+ for the period 2010-2015 and BBB for the averaging periods 2006-2015 to 2009-2015.

988    Networks NSW says the AER’s methodology of taking the median from yearly medians at the measurement point over the relevant period (as opposed to taking the median of all available data points at the measurement point over the relevant period) was in error. This is because such an approach results in disproportionate weight being given to observations at measurement points where there are limited data points. For example, under the AER approach, a year in which there may be a single observation on the relevant measurement of A- will be given the same weight as a year in which there are four observations of BBB+. A methodology that takes the median over the relevant period gives equal weight to each observation.
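
The effect described can be illustrated with a short, hypothetical numerical example (the ratings and counts below are invented for illustration and are not drawn from the material before the AER):

from statistics import median

# Hypothetical example only. Ratings are mapped to numbers (BBB = 1, BBB+ = 2, A- = 3)
# so that medians can be computed.
year_1 = [3]              # a single A- observation
year_2 = [2, 2, 1, 1]     # four observations: two BBB+, two BBB
year_3 = [1, 1, 2, 1]     # four observations: three BBB, one BBB+

median_of_yearly_medians = median([median(y) for y in (year_1, year_2, year_3)])
pooled_median = median(year_1 + year_2 + year_3)

print(median_of_yearly_medians)   # 1.5 (between BBB and BBB+): the single A- year pulls the result up
print(pooled_median)              # 1 (BBB): each observation carries equal weight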

989    Networks NSW submits that, if that methodology (of taking the median of the observations, as opposed to the median of the yearly medians) had been adopted, then the applicable credit rating would be BBB for longer averaging periods (such as 2006-2015 to 2009-2015). In support of that submission, Networks NSW refers to the CEG report: WACC estimates – a Report for NSW DNSPs, May 2014 at p 65, and says that the median across all credit rating observations from 2004 to 2013 (inclusive) is BBB, not “BBB+, negative watch” as per the AER’s estimate. Networks NSW also says that, as Networks NSW (and the AER’s BEE) have a trailing average DRP component, the correct approach is to adopt a longer averaging period.

990    Alternatively, Networks NSW submits that, even if a shorter averaging period is adopted, the AER’s methodology wrongly did not exclude or otherwise adjust for the two most highly rated issuers as untypical outliers because their credit rating was as a result of ownership by the Singapore Government.

991    Networks NSW also submits that the Final Decisions indicate that the AER considered data from 7 April 2015, and that if the AER were to include a 2015 observation (based on an April 2015 or earlier observation), all observations from the previous years would logically need to be made at the same time (being April of the relevant years).

992    Networks NSW says that the AER’s decision involved material errors of fact “in concluding that the credit rating” for the BEE was BBB+ and that it made an error of discretion and took an illogical and irrational approach as it gave a disproportionate weight to observations in years where there were fewer observations, rather than a median of observations.

993    The Tribunal is not satisfied that the AER’s relevant Final Decisions on this topic disclose a ground of review. In the Final Decisions (eg Attachment 3 to the Ausgrid Final Decision at p 3-197) is a table analysing the median credit ratings over time. The table itself is not apparently inaccurate. The more recent years point firmly towards a BBB+ credit rating for the BEE. The Tribunal does not consider that it was either factually wrong, or a wrong exercise of the discretion, to have regard to that material for the purpose of identifying the characteristics of the BEE. The Networks NSW contentions properly demonstrate the potential for bias in the AER's "median of medians" approach and argue that another approach or approaches might have been taken. Nevertheless, the Tribunal is not satisfied that it should take the further step of concluding that a median of observations methodology, or some other approach, should have been adopted, such that a ground of review has been made out. The Tribunal is not, however, to be taken as accepting that the “median of medians” approach is a statistically valid approach.

(c)    Conclusion on the two issues

994    For the reasons given, the Tribunal is not persuaded that a ground of review has been made out.

995    In any event, the Tribunal would not take the step of being satisfied, in either respect, that to vary or set aside the relevant Final Decision would, or would be likely to, result in a materially preferable NEO decision under s 71P(2a)(c). There was no data to indicate the extent of the change or changes which might be made to the relevant Final Decisions by substituting the components of the process of decision-making for which Networks NSW contends, and so no basis on which the Tribunal might conclude that some other decision on either of these two topics would serve the long term interests of consumers in a material way.

Other General Issues

996    In view of the above, it is not necessary to address the contentions of Networks NSW and ActewAGL that:

(1)    assuming the AER correctly adopted the concept of the regulated efficient entity as the BEE, and correctly adopted as the BEE a network service provider which adopted a financing cost structure involving swap contracts and hedging in response to the on-the-day methodology, in any event its transition process was erroneous because it was not possible to enter into hedging arrangements to match the regulatory allowance for the DRP component of the financing;

(2)    assuming the AER correctly adopted the concept of the regulated efficient entity as the BEE, it erred in selecting the swap-based approach as the BEE rather than the trailing average approach as providing a better match to the regulatory allowance during the previous regulatory period (a number of specific reasons for that contention were advanced); and

(3)    in any event, whether or not it is assumed that the AER correctly adopted the concept of the regulated efficient entity as the BEE, the AER erred in adopting a single BEE across all gas, electricity, transmission and distribution networks by proceeding on the basis that there is one single BEE.

997    As to (3) above, the Tribunal has generally adopted the position that it was not correct for the AER to have done so, but it has not separately addressed the detailed arguments made to support the position of particular DNSPs by reason of their size; the size, availability and cost of swap contracts; their operating environments; and their corporate structures. The arguments are based largely upon the expert views in the Chairmont Report; the Report of Frontier Economics: Cost of Debt Transition for NSW Distribution Networks, January 2015; the Lally Report: Estimating the Cost of Debt of the Benchmark Efficient Regulated Energy Network Business, 16 August 2013; CEG: Efficiency of staggered debt issuance, February 2013; and UBS: Response to the Networks NSW request for financeability analysis following the AER Draft Decision of November 2014, 16 January 2015. That is not a comprehensive list.

Ergon’s issue

998    Ergon raises as intervener an additional ground of review pursuant to s 71M of the NEL. Ergon contends that the AER made an error of fact in finding that a simple trailing average should be used to estimate the allowed return on debt, when the evidence before the AER was that the use of a Post-Tax Revenue Model (PTRM) weighted average would better meet the requirements of the NER.

999    The AER considered this submission during the course of preparing the Final Decisions. It also considered the submission in assessing Ergon’s own regulatory proposal, in the preliminary decision made by the AER with respect to Ergon.

1000    The approach that the AER has adopted in the RoR Guideline and the Final Decisions is to calculate the allowed return on debt as a simple (equally weighted) average of the prevailing market rates in each of the past 10 years. Ergon contends for an alternative weighting approach, based on the debt component of the forecast capex approved in the PTRM. This is a more complex approach, which effectively weights the prevailing rates in each of the past 10 years by the amount of debt that the service provider was forecast in its PTRM to have raised in that year.
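
The difference between the two approaches can be sketched as follows. The figures below are hypothetical, and the sketch is neither Ergon’s model nor the AER’s PTRM; it simply contrasts an equally weighted trailing average with a weighting by forecast debt issuance:

# Illustrative sketch only: hypothetical figures, not Ergon's model or the AER's PTRM.
rates = [0.072, 0.068, 0.065, 0.060, 0.055, 0.048, 0.045, 0.050, 0.047, 0.043]  # prevailing rates, past 10 years
issuance = [100, 80, 90, 120, 150, 60, 70, 200, 110, 90]   # hypothetical forecast debt raised in each year ($m)

simple_average = sum(rates) / len(rates)
weighted_average = sum(r * w for r, w in zip(rates, issuance)) / sum(issuance)

print(round(simple_average, 4))    # equal weight given to each of the past 10 years
print(round(weighted_average, 4))  # each year's rate weighted by the debt forecast to be raised in that year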

1001    The AER decided that, while it acknowledged that the PTRM-weighted average had potential advantages in some circumstances, it would maintain the approach set out in the RoR Guideline of taking the simple average. It stated that the matter remained open to future consideration.

1002    The Tribunal does not, in the circumstances, consider it desirable to address that issue. Ergon will have the opportunity to take it up, if so advised, if the position taken by the AER in its Preliminary Determination in relation to Ergon is maintained.

1003    It is not a matter of direct moment to Networks NSW or ActewAGL.

JGN’s separate issues

1004    As noted above, there are a number of issues raised by JGN which are discrete to it. They are dealt with in the reasons for decision on its application.

1005    There are also some issues which, although addressed above, require separate consideration on its application.

GAMMA

1006    Under the Australian taxation system, company shareholders can receive an imputation credit (in the form of a franked dividend) for income tax paid at the company level. Australian resident investors may be eligible to use these imputation credits to reduce their individual income tax liability or to obtain a tax refund. Imputation credits may therefore be of value and benefit to investors. They represent a return in addition to the face value of franked dividends and the capital gains or losses associated with owning shares.

1007    The value of imputation credits is recognised by the NER and the NGR in estimating a regulated service provider’s allowed revenue. Under the Rules, a regulated service provider is entitled to recover revenue that compensates it for its efficient costs of providing regulated services. Those costs include a return on equity sufficient to promote an efficient level of investment. While the value of imputation credits flowing from a regulated service provider’s franked dividends may reduce those costs, the return on equity is not reduced to take into account the value of imputation credits. Rather, the NER (r 6.5.3) and the NGR (r 87A) reduce the revenue that the service provider requires to pay the estimated cost of its corporate tax by way of a formula in which the value of imputation credits is represented by the Greek letter γ.

1008    The Rules require an estimate of “the value of imputation credits” (also referred to as “gamma” or "γ") as an input into the calculation of the corporate income tax building block: r 6.5.3 NER:

6.5.3     Estimated cost of corporate income tax

The estimated cost of corporate income tax of a Distribution Network Service Provider for each regulatory year (ETCt) must be estimated in accordance with the following formula:

ETCt = (ETIt × rt) (1 – γ)

where:

ETIt is an estimate of the taxable income for that regulatory year that would be earned by a benchmark efficient entity as a result of the provision of standard control services if such an entity, rather than the Distribution Network Service Provider, operated the business of the Distribution Network Service Provider, such estimate being determined in accordance with the post-tax revenue model;

rt is the expected statutory income tax rate for that regulatory year as determined by the AER; and

γ is the value of imputation credits.

1009    Rule 87A of the NGR is relevantly in much the same terms.
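
By way of illustration only, and using figures that are hypothetical and form no part of the Final Decisions, the operation of the formula and the effect of gamma on the corporate income tax building block may be sketched as follows:

# Hypothetical figures only, to illustrate the formula ETCt = (ETIt x rt)(1 - gamma).
def estimated_cost_of_tax(eti, statutory_tax_rate, gamma):
    """Estimated cost of corporate income tax for a regulatory year."""
    return eti * statutory_tax_rate * (1 - gamma)

eti = 100_000_000         # hypothetical estimate of taxable income ($)
statutory_tax_rate = 0.30

print(estimated_cost_of_tax(eti, statutory_tax_rate, 0.25))  # gamma of 0.25: 22,500,000
print(estimated_cost_of_tax(eti, statutory_tax_rate, 0.40))  # gamma of 0.40: 18,000,000
# The higher the value adopted for gamma, the lower the corporate income tax
# building block and, other things being equal, the lower the revenue allowance.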

1010    The application of the formula was explained in Application by Energex Ltd (No 2) [2010] ACompT 7 (Energex No 2) as follows:

[18]    The generally accepted regulatory approach in Australia has been to define the value of gamma imputation credits as a product of the imputation credit ‘distribution ratio’ (F) and the ‘utilisation rate’ (theta or θ) (γ= F x θ) where:

(a)    F is defined as the value of imputation credits distributed by a firm as a proportion of the value of imputation credits generated by it in the period (the distribution ratio); and

(b)    theta or θ is defined as the value of imputation credits distributed to investors as a proportion of their face value (the ‘utilisation rate’).

[19]    Under the formula set out in the rules, the higher the value for gamma, the lower the estimated cost of corporate income tax for a service provider. The overstatement of either the distribution rate or the utilisation ratio would result in an overstatement of gamma and thus an underestimate of the cost of corporate income tax for the DNSP. This would result in an underestimate of the revenue that is required to provide the required return to investors. This, in turn, would deprive the DNSP of a reasonable opportunity to recover its efficient costs, such that it would not have the incentive to achieve the efficiency objectives, that are the purpose of the regulatory regime.

1011    That explanation of the application of the rule in Energex No 2 is to be read subject to the 2012 Rule Amendments, which substituted “the value of imputation credits” for “the assumed utilisation of imputation credits” in the definition of gamma.

1012    It is the Network Applicants’ submission that, notwithstanding that change, the regulatory practice and consistent approach by all parties in previous years was to treat gamma as the value to the investor of imputation credits. The AER submits that the change merely makes the language reflect what has always been the position, and that while its construction of the Rules is not materially different from that of the applicants, its way of working out the value to investors in the market is. The difference, as explained by counsel for the AER: “… is whether one identifies a market value on the before tax basis, as the AER has it, or whether one estimates it on the basis that is significantly affected by factors such as personal costs and risk and other factors which we say are brought into the dividend drop-off studies.”

1013    The common approach between the parties to the assessment of gamma, expressed as a decimal ratio, is to be calculated as the product of:

(a)    the distribution rate for imputation credits, expressed as a decimal ratio (also referred to as “F”); and

(b)    the value of distributed imputation credits (also referred to as “theta” or "θ").

1014    The AER adopts a gamma of 0.4 (from a possible range of 0.3 to 0.5). The Network Applicants contend for a gamma of 0.25. The result of the AER’s decision is that the calculation of the corporate income tax building block for the Network Applicants is lower than would be the case had the Network Applicants’ gamma proposals been accepted, and therefore amounts to the Network Applicants being allocated lower revenue allowances.

1015    There is a dispute between the Network Applicants, the Vic/SA Interveners and Ergon, on the one hand, and the AER, on the other, as to:

(a)    the appropriate interpretation of the distribution rate and theta parameters (including what is meant by “the value of imputation credits” in the Rules);

(b)    the appropriate method, sources of information and/or weight to be attributed to each data source when determining “the value of imputation credits”; and

(c)    the appropriate figures for each of the distribution rate, theta, and ultimately, gamma.

1016    The submissions made by the Vic/SA Interveners and by Ergon broadly reflect the submissions made by the Network Applicants. Therefore, they are only specifically discussed where they have raised additional or separate issues which are necessary to address.

1017    In particular, the Network Applicants contend that the AER’s estimate of gamma:

(a)    does not reflect the best estimate of the value of imputation credits to investors, as reflected in market prices that investors are willing to pay for traded stocks; and

(b)    is significantly above even the upper bound for the value of imputation credits, as indicated by tax statistics.

1018    The Network Applicants raise each of the grounds in s 71C of the NEL and s 246 of the NGL in relation to each of these matters. The Network Applicants say that if their construction of the Rules is correct, the AER erred in the weight it attributed to each source of data and incorrectly determined the appropriate figure for gamma. However, even if the AER’s interpretation is favoured, they say that the AER erred in the weight that it attributed to each data source and incorrectly determined the appropriate figure for gamma.

Historical and Legislative Context

1019    It is helpful to consider the significance of imputation credits and the legislative context as background to the submissions made by the parties in relation to gamma.

1020    In 1987, the Commonwealth introduced an imputation tax system for companies. Soon after the introduction of the imputation system, academics began to consider the implications of the system for the valuation of companies and the estimation of a company’s cost of capital, including a 1994 paper by Professor Officer, The cost of capital of a company under an imputation system, Accounting and Finance, vol 34(1), May 1994 (the Officer Paper). This paper forms the basis of the “Officer Framework” upon which the current regulatory system is based.

1021    Gamma was first introduced into the Australian regulatory context in 1998 by the ACCC in the first version of the National Electricity Code (the Code) under Part VII of the Trade Practices Act 1974 (Cth), in the context of calculating the weighted average cost of capital under the imputation tax system. It was defined as the “value of franking credits or imputation factor”. This definition of gamma was then introduced as part of the weighted average cost of capital formula in Ch 6 of the NER, which commenced operation on 1 July 2005.

1022    In 2006, Ch 6A of the NER relating to transmission services was introduced, which broadly aligned the provision in Ch 6 relating to distribution services as part of the determination of the corporate income tax building block. It defined gamma as “the assumed utilisation of imputation credits, which is deemed to be 0.5”. From 1 January 2008, the definition of gamma as “the assumed utilisation of imputation credits” was adopted in Ch 6.

1023    The current versions of the Rules are the result of a series of amendments culminating in the 2012 Rule Amendments. Unlike the Code as initially implemented, and as the AER has observed (in Attachment 4 to each of the JGN Final Decision at p 4-6, the ActewAGL Final Decision at p 4-7, and the Networks NSW Final Decisions at p 4-7, without footnotes), under the Rules:

the estimation of the return on equity does not take imputation credits into account. Therefore, an adjustment for the value of imputation credits is required. This adjustment could take the form of a decrease in the estimated return on equity itself. An alternative but equivalent form of adjustment, which is employed by the NER/NGR, is via the revenue granted to a service provider to cover its expected tax liability. …This form of adjustment recognises that it is the payment of corporate tax which is the source of the imputation credit return to investors.

1024    As the AER observes, the 2012 Rule Amendments effectively restore the wording of the definition of gamma that appeared in the first version of the NER (incorporating Schedule 6.1 of the Code), as the “value of franking credits”.

1025    As observed, it is agreed between the parties that the change in the definition of gamma in the Rules from the “assumed utilisation of imputation credits” to the “value of imputation credits” does not change the meaning of gamma. Instead, the dispute concerns what that meaning is.

1026    The Rules continue to reflect the relationship between imputation credits and the cost of capital in the method of calculating the rate of return. Relevantly, r 6.5.2(d) of the NER provides that:

…the allowed rate of return for a regulatory year must be:

(1)    a weighted average of the return on equity for the regulatory control period in which that regulatory year occurs (as estimated under paragraph (f)) and the return on debt for that regulatory year (as estimated under paragraph (h)); and

(2)    determined on a nominal vanilla basis that is consistent with the estimate of the value of imputation credits referred to in clause 6.5.3.

[emphasis in bold added]

1027    Rule 87(4) of the NGR is in similar terms.

1028    As noted elsewhere in these reasons, the 2012 Rule Amendments also introduced a requirement for the AER to periodically publish the RoR Guideline: r 6.5.2(m) of the NER and r 87(13) of the NGR. As required, the RoR Guideline sets out (among other things) the estimation methods, financial models, market data and other evidence the AER proposes to take into account in estimating the value of imputation credits under r 6.5.3 of the NER and r 87A of the NGR: r 6.5.2(n) of the NER and r 87(14) of the NGR. Also as noted elsewhere in these reasons, while the RoR Guideline is not binding on the AER in relation to making individual determinations, if the AER makes a decision that is not in accordance with the RoR Guideline, it must state its reasons for departing from it: r 6.2.8(c) of the NER and r 87(18) of the NGR.

1029    In accordance with the Rules, as indicated above, the AER published the RoR Guideline in December 2013, setting out (amongst other things) guidelines for estimating the value of imputation credits. In keeping with accepted interpretation and practice, the AER’s gamma decision calculated the value of imputation credits as the product of the distribution rate for imputation credits and the utilisation rate of distributed imputation credits. The RoR Guideline (at p 23) outlined an estimate of 0.5 for the value of imputation credits, based on a distribution rate of 0.7 and a utilisation rate of 0.7.

The AER’s approach to setting Gamma

1030    While the AER notes that it did not “decide” (and did not need to decide) particular values for each of the distribution rate and utilisation rate in making its gamma decision, the AER analysed several approximations of the distribution rate and the utilisation rate.

1031    The AER rejected the Network Applicants’ proposed value of 0.25 for gamma and adopted a gamma of 0.4 in its Final Decisions based on an analysis of various sources and estimates for both all equity and listed equity. Those sources, and the AER’s use of that evidence, are discussed in more detail below. The reasons for its decision are outlined in Attachment 4 to the AER’s Final Decision for each Network Applicant.

The distribution rate (F)

1032    The distribution rate was interpreted as “the proportion of imputation credits generated that is distributed to investors”. It was estimated with a cumulative payout ratio approach which uses Australian Taxation Office (ATO) Franking Account Balances (FAB) statistics to calculate the proportion of imputation credits generated (via tax payments) that have been distributed by companies since the start of the imputation system. There is no dispute about this definition or the reliability of the ATO FAB data used to determine the distribution rate.

1033    The parties do, however, dispute whether the distribution rate should be calculated from all equity, or from the sub-set of listed equity.

1034    In its Draft Decisions, the AER derived its estimate of the distribution rate of 0.7 based on data for all equity, consistent with past practice and the estimate endorsed by the networks and their advisors. Its Final Decisions were based on consideration of the original 0.7 value and a new estimate of 0.8 (0.77 in the case of JGN) derived from data only for listed equity. The listed equity only estimate was produced in response to advice from the AER’s expert, Professor Handley: Report prepared for the Australian Energy Regulator: Advice on the value of imputation credits, 29 September 2014 (the Handley 2014 Gamma Report). He said that the distribution rate should be calculated on a basis consistent with the utilisation rate, theta. This, in turn, is based on his understanding of a CAPM framework; consistently with the definition of a BEE, Handley advised that the distribution rate should be estimated only from the data for listed equity.

1035    As discussed below, it appears that the AER effectively adopted the distribution rate of 0.8 when setting gamma. Whether this was correct or reasonable in the circumstances hinges on the validity of the rationale it has provided for the emphasis placed on the listed equity estimate of the utilisation rate.

The utilisation rate (theta)

1036    Theta was interpreted as “the utilisation value to investors in the market per dollar of imputation credits distributed”, which reflects the extent to which investors can utilise the imputation credits they receive to reduce their tax or obtain a refund.

1037    Three methods of estimating theta were considered by the AER in the Final Decision: the equity ownership approach; tax statistics; and market studies. A fourth approach (the conceptual goalposts approach) mentioned in the RoR Guideline was not given any further consideration, but the reasons for excluding it were explained.

Equity Ownership Approach

1038    The AER described the equity ownership approach in the Final Decisions (eg Attachment 4 to the Ausgrid Final Decision at p 4-23) as follows (without footnote):

We consider that the value-weighted proportion of domestic investors in the Australian equity market is a reasonable estimate of the utilisation rate. This is because, in general, domestic investors are eligible to utilise imputation credits and foreign investors are not. Moreover, as discussed above, we consider that eligible investors have a utilisation rate of 1 because each dollar of imputation credit received by these investors can be fully returned to them in the form of a reduction in tax payable or a refund. We refer to this approach as the 'equity ownership approach', and we use data from the National Accounts of the Australian Bureau of Statistics (ABS) to estimate the domestic ownership share.

1039    The AER says that, on a before-personal-tax and before-personal-costs basis, an investor that is eligible to fully utilise the imputation credits they receive has a utilisation rate of 1 (ie they gain 100 percent of the “value” of the imputation credits); whereas an investor that is ineligible to redeem imputation credits has a utilisation rate of 0 (ie they gain no “value” from the imputation credits). The AER rejected all arguments suggesting that individual eligible investors could value imputation credits at less than their nominal dollar value. Consequently, the equity ownership approach assumes this dollar value of imputation credits to a relevant class of investors and then attempts to estimate the proportion of those investors in the total.

1040    Consistent with the Handley 2014 Gamma Report, the Australian Bureau of Statistics (ABS) National Accounts data was filtered to exclude equity in public sector entities, and the AER calculated the “refined domestic ownership share of total equity” as the “equity held by ‘households’, ‘pension funds’ and ‘life insurance corporations’ as a share of the equity held by these classes plus ‘rest of world’”: Attachment 4 to the Ausgrid Final Decision at p 4-72. The “rest of world” included both equity held by foreigners and government-held equity.

1041    The refined domestic ownership shares of total equity were calculated for both all and listed equity over the period from July 2001 to October 2012.
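
On the description above, the refined domestic ownership share reduces to a simple ratio. The following sketch uses hypothetical figures, not the ABS National Accounts data relied upon by the AER:

# Hypothetical figures only; not the ABS National Accounts data relied upon by the AER.
# Equity held by households, pension funds and life insurance corporations is treated
# as domestic, and compared with that equity plus equity held by the 'rest of world'.
households = 600        # $ billion (hypothetical)
pension_funds = 450
life_insurance = 150
rest_of_world = 550     # as described above, foreign-held and government-held equity

domestic = households + pension_funds + life_insurance
refined_domestic_ownership_share = domestic / (domestic + rest_of_world)

print(round(refined_domestic_ownership_share, 2))   # 0.69 on these hypothetical figures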

1042    The Network Applicants have criticised the equity ownership approach on the basis that it makes no allowance for the percentage of Australian domestic investors who are unable to redeem an imputation credit because of restrictions on redemption, such as the 45 day holding rule (a requirement, subject to certain exceptions, that domestic investors hold shares for at least 45 days, excluding the date of purchase or sale, before they are eligible to obtain the benefit of imputation credits associated with those shares). On that basis, they say, domestic equity ownership rates exceed even the true maximum figure for the proportion of eligible investors. In its Final Decisions the AER specifically considered the extent to which the equity ownership data should be adjusted for the effect of the 45 day holding rule. The AER concluded, based on an analysis of ATO data in N. Hathaway, Imputation credit redemption ATO data 1988–2011: Where have all the credits gone?, September 2013, that “the 45-day holding rule does not appear to have a material effect on the utilisation rate” (Attachment 4 to the Ausgrid Final Decision at p 4-56).

1043    The Network Applicants have also criticised the equity ownership approach on the basis that it essentially assumes the value of imputation credits rather than deriving it from market data and, as discussed below, have identified a number of reasons why, in aggregate, Australian resident shareholders will value a dollar of distributed imputation credits at less than a dollar.

1044    The Network Applicants also contend that the AER erred by giving more weight to the equity ownership approach to estimating theta (which they say is, at best, an “upper bound” for theta) and that the AER erred by relying on equity ownership rates over a period commencing in July 2000. The Network Applicants and the Vic/SA Interveners claim that there is no apparent basis for taking figures up to 15 years old.

1045    The AER says that the use of historical equity ownership data is important as there is volatility in the ABS data as to equity ownership and therefore, more than just the most recent estimates should be taken into consideration.

Tax Statistics

1046    The AER’s estimate of the utilisation rate based on ATO tax statistics applied similar reasoning to that in the AER’s equity ownership approach. The tax statistics estimate also assumes a dollar value for each dollar of imputation credits issued, but measures the actual rate of redemption of distributed imputation credits by eligible investors from information reported in tax returns. This ATO data also does not reflect any of the factors which may decrease the value of imputation credits to shareholders, although the rate of redemptions is smaller than the domestic ownership share, so the associated estimate of theta is smaller.

1047    Although the Network Applicants dispute the AER’s interpretation of the redemption rate, there is no dispute about the validity of the relevant tax statistics or the estimate. They note (Network Applicants Joint Submissions On Gamma, at [8(b)]):

The AER correctly identifies that the redemption rate from tax statistics is 0.43 (or 0.45 using updated data).

1048    The proper use of tax statistics in determining a value for theta was also considered in Energex (No 2) at [91] in which it was said:

[I]ts relevance [the relevance of taxation statistics] could only be related to the fact that it was an upper bound. No estimate that exceeded a genuine upper bound could be correct. Thus the appropriate way to use the tax statistics figure was as a check.

We agree with the Tribunal’s discussion in Energex (No 2). In our view tax statistics can only provide an upper bound on the estimate of theta.

Market Studies

1049    The third source of estimates of the utilisation rate was market studies of the value of imputation credits. The AER Final Decisions cite a substantial number of studies of utilisation rates based on share market data and summarise the estimates they produced (eg Table 4-9 at p 4-78 of Attachment 4 to the Ausgrid Final Decision). These include, but are not limited to, dividend drop-off studies that compare the changes in share prices in the period when stocks go from cum-dividend to ex-dividend (that is, before and after entitlement to dividends and any associated imputation credits) with the value of the associated dividends and imputation credits.

1050    Stock prices rarely change by exactly the “face value” of the dividend and associated franking credits when they go ex-dividend. Dividend drop-off studies identify any consistent differences between price changes and the related dividends and imputation credits. They have to address data and statistical problems, such as isolating the dividend drop-off from other factors impacting prices at the same time and multicollinearity in parameter estimates due to the close correlation between the levels of dividends and imputation credits for fully franked dividends. Properly specified and estimated dividend drop-off studies estimate the market values of dividends and associated imputation credits.
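
In stylised form, and using notation that is illustrative only rather than that of any particular study, such a regression may be written as:

\Delta P_i \;=\; P_i^{\mathrm{cum}} - P_i^{\mathrm{ex}} \;=\; \delta\, D_i \;+\; \theta\, FC_i \;+\; \varepsilon_i

where, for each ex-dividend event i, D_i is the cash dividend, FC_i is the face value of the attached imputation credits, \delta is the estimated market value of one dollar of dividends, \theta is the estimated market value of one dollar of imputation credits, and \varepsilon_i is an error term. For fully franked dividends FC_i is a fixed multiple of D_i, which is the source of the multicollinearity referred to above.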

1051    The theory and practice of using dividend drop-off studies to estimate theta was considered in detail by the Tribunal in Energex (No 2) at [70] to [76] and again at [100] to [144]. The AER’s discussion and summary includes a significant proportion of studies which, based on that discussion, could be excluded on the basis that they were not relevant to an estimate of theta for the purposes of its determination. For instance, none of the results based on pre-2000 period data include the effects of changes to allow tax rebates of imputation credits for low income earners.

1052    The AER correctly identified a number of weaknesses in the market studies, particularly the dividend drop-off studies. These included that, because imputation credits are not traded, the studies must infer the value of imputation credits from econometrically estimated parameters, rather than observing market prices directly. This criticism is at odds with the AER’s reliance on economic modelling in other aspects of its determinations, particularly the benchmarking model used to determine opex.

1053    The Network Applicants’ preferred value of gamma is based on the theta estimate of 0.35 from a dividend drop-off study commissioned from SFG (Updated dividend drop-off estimate of theta, SFG Consulting, 7 June 2013) (the SFG 2013 Study) and intended to update a previous SFG study, reported and relied upon in Application by Energex Limited (Gamma) (No 5) [2011] ACompT 9, which was produced in response to the Tribunal's concerns with previous studies as expressed in Energex (No 2).

1054    The discussion in Energex (No 2) also makes it clear that the AER is correct not to place much weight in its Final Decisions on the results of early Australian dividend drop-off studies. Nevertheless, its summary of the range of potential theta estimates from these studies fails to focus on what the AER, in its RoR Guideline (RoR Guideline, Appendices at p 174), notes were “[t]he most relevant dividend drop-off studies, by SFG and Vo et al” which “present estimates in the range 0.35 to 0.55”.

Estimates of gamma

1055    Each of the Final Decisions summarised the AER’s analysis of the various data sources and methodologies for estimating gamma as follows (eg Attachment 4 to the Ausgrid Final Decision, Tables 4.1 and 4.2 at p 4-18):

Evidence from all equity

Methodology | Utilisation rate (theta) | Distribution rate | Implied Gamma
Equity ownership approach | 0.56 to 0.68 | 0.7 | 0.4 to 0.47
Tax statistics | 0.43 | 0.7 | 0.3

Evidence from listed equity

Methodology | Utilisation rate (theta) | Distribution rate | Implied Gamma
Equity ownership approach | 0.38 to 0.55 | 0.8 | 0.31 to 0.44
Market value studies | 0 to 1 (implied market value studies); 0.35 (SFG Dividend drop-off study) | 0.8 | 0 to 0.8; 0.28

1056    It is necessary for the Tribunal to review the AER’s decision by reference to its assessment of the component parts. It follows that the gamma decision must logically be consistent with those parts. In order to uphold the AER’s setting of gamma at 0.4, the Tribunal needs to identify preferred point values or ranges of the distribution and utilisation rates that, multiplied together, equal 0.4 or produce a range around that value. If the two components, or a range of the two components, when multiplied cannot produce gamma at that figure, or in that range, the Tribunal would regard the decision of the AER as demonstrating error. If the AER’s broad discretionary approach is the correct one, it must be referable to properly assessed data. If the data was not properly assessed, that may demonstrate factual error.

1057    In responding to the Network Applicants, the AER submits:

The AER did not adopt a “range” for the distribution rate. Rather, it used an estimate of the distribution rate of 0.8 (or for JGN, 0.77) when considering estimates of the utilisation rate from only listed equity and an estimate of the distribution rate of 0.7 when considering estimates of the utilisation rate from all equity.

1058    Without further clarification about which value was used, it is only possible to conclude that, by setting gamma at 0.4, from a range between 0.3 and 0.5, the AER relied upon a value for theta that was one of, or close to, 0.5, 0.52 or 0.57, depending on whether it used a distribution rate of 0.8, 0.77 (for JGN) or 0.7, respectively. All of these values exceed the upper bound suggested by the tax statistics estimates of the redemption rate (0.43 (updated to 0.45 for JGN)).
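
The arithmetic underlying those implied values of theta is simply gamma divided by the assumed distribution rate, which may be checked as follows:

# Check of the implied utilisation rates referred to above: theta = gamma / distribution rate.
gamma = 0.4
for distribution_rate in (0.8, 0.77, 0.7):
    implied_theta = gamma / distribution_rate
    print(distribution_rate, round(implied_theta, 2))
# 0.8  -> 0.5
# 0.77 -> 0.52
# 0.7  -> 0.57
# Each implied value of theta exceeds the redemption rate of 0.43 (updated to 0.45)
# drawn from the tax statistics.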

Interpretation of “The Value of Imputation Credits”

1059    In order to determine the appropriate methodology for calculating gamma it is necessary to consider the role of gamma in the Rules.

1060    Gamma is part of the formula for determining the corporate income tax building block which in turn comprises part of the total revenue allowance for a network service provider: rr 6.4.3(a)(4), (b)(4), 6.5.3 of the NER and rr 76 and 87A of the NGR. It is necessary to consider corporate income tax as part of the total revenue allowance for a network service provider so that the cost of taxation can be offset through its revenue allowance. In doing so, the network service provider is provided with a reasonable opportunity to recover at least the efficient costs it incurs in providing services and complying with regulatory obligations or making regulatory payments: s 7A of the NEL and s 24(2) of the NGL.

1061    The calculation of gamma must be approached in this context. The proper concern is not the extent to which imputation credits may be translated to real money. Instead, it involves a determination of the cost of taxation to a network service provider, and the extent to which that cost must be reduced to reflect the impact of the dividend imputation system on the network service provider. The reduction in the cost of income tax represented by gamma reflects the personal taxation benefits (as opposed to other benefits such as dividends) gained by shareholders from holding equity in the network service provider and the value of those benefits as ascribed by shareholders. Consequently, it is necessary to consider both the eligibility of investors to redeem imputation credits and the extent to which investors determine the worth of imputation credits to them.

1062    A significant proportion of the written and oral submissions in relation to gamma, as well as certain submissions made during the consumer consultation process, concerned the correct approach to gamma in r 6.5.3 of the NER and r 87A of the NGR and the extent to which either or both the approaches by the Network Applicants and the AER were concerned with the “worth” or market-value of imputation credits. The Network Applicants say that the AER took a non-market view in the construction of gamma which involved a misconstruction of the Rules. The AER says that the dispute does not concern a debate between market and non-market approaches, but instead can be characterised as a debate between how one calculates the value of imputation credits to investors in the market, which does not turn on the construction of the legislation.

1063    While the Tribunal is not required to conclusively determine the character of this dispute, except to the extent necessary to consider whether a ground is made out under s 71C(1) of the NEL or s 246(1) of the NGL, it is helpful to consider the nature of the dispute as a background to the detailed submissions made by the parties.

1064    The respective approaches of the parties to calculating gamma are as follows. The Network Applicants say that the correct approach to conceptualising both theta and gamma is to interpret “value” as meaning the “actual value” that equity-holders place on imputation credits assessed from examining market prices, particularly dividend drop-off studies. The AER’s approach is set out above.

1065    Whichever of these approaches is taken, it is agreed that gamma may be significantly less than the face amount of the distributed imputation credits because they cannot always be utilised by investors. It is agreed that this may be because, inter alia, foreign investors cannot utilise imputation credits, which are part of the Australian taxation system, or, at least conceptually, because of the 45 day holding rule.

1066    However, in addition, relying on an SFG report (An appropriate regulatory estimate of gamma: Report for Jemena Gas Networks, ActewAGL, APA, Networks NSW (Ausgrid, Endeavour Energy and Essential Energy), ENERGEX, Ergon, Transend, TransGrid and SA Power Networks, 21 May 2014, at [65]), the Network Applicants submit that shareholders who do utilise imputation credits may not value them at the full face amount, including because:

(a)    Time value of money: unlike dividends themselves, imputation credits only produce value through reducing or rebating personal tax, such that there can be a significant delay between receiving the credit and obtaining the benefit. That delay can be years where credits are distributed through other companies or trusts or where the taxpayer is initially in a tax loss position. Thus, credits may be worth less to investors than their face amount.

(b)    Transaction costs: the accounting and administrative costs of redeeming imputation credits are greater than for dividends (which are typically simply paid into a nominated bank account). These costs will partially offset the value that an investor would otherwise receive.

(c)    Portfolio effects: An Australian investor obtaining an 8% return from an investment in the USA might decide instead to redirect the investment to an Australian equity returning 7% but so as to obtain the benefit of imputation credits which contribute an additional 2%. For that investor, who is switching investments to obtain the benefit of imputation credits, the imputation credits in question are not worth 2% but are worth 1% (because they come with an opportunity cost). Also, as an investor redirects funds to Australian equity, the investor’s portfolio becomes more concentrated which is costly. For each investor, such switching and portfolio adjustment would rationally continue until the marginal value of switching (and thus the marginal value of the imputation credits) approaches zero.

1067    The AER characterises these additional factors as ‘personal costs’ which are faced by investors and submits that they should not be accounted for when characterising the proper value for theta and, consequently, gamma. The AER says that this is because of the requirements for consistency with the “Officer framework” and for an internally consistent method for estimating gamma and the allowed rates of return on debt and equity in the weighted average cost of capital (WACC).

1068    While the AER does not contend that the Officer Paper is a statute or a code, it explains that the Officer Paper underpins the inclusion of gamma in the corporate income tax formula in the NER r 6.5.3 and the NGR r 87A, and that it is fundamental to a coherent understanding of the role of gamma in the regulatory scheme.

1069    As outlined above, there is a relationship between imputation credits, the cost of capital and the method of calculating the rate of return. This is formally recognised by r 6.5.2(d)(1) and (2) of the NER and r 87(4)(a) and (b) of the NGR, which require that the allowed rate of return be a WACC “determined on a nominal vanilla basis that is consistent with the estimate of the value of imputation credits”. The vanilla WACC framework upon which the NER and NGR are based (and have historically been based) is an “after-tax” or “post-tax” framework (cf r 6.4 of the NER), finding its origins in the Officer Framework.

1070    Under the Officer Framework, the AER says that the required return on capital is calculated after company tax and does not explicitly factor in the personal taxes or transaction costs of equity-holders. Further, the AER says the approach to gamma should be consistent with this after-company-tax, before-personal-tax-and-costs rate of return framework. This is because, it says, estimating different aspects of regulated revenue using different definitions of the return on capital may result in incorrect compensation for the regulated business, which does not incentivise efficient investment. Therefore, the AER says that, conceptually, gamma should not be characterised as including the “personal costs” faced by investors, as contended by the Network Applicants.

1071    In effect the AER submits that $1 of capital gain is taken to have a value of $1, which is equal to $1 of dividend, using the all ordinaries accumulation index (All Ords) as a proxy for the return on Australian domestic equity and using the All Ords to determine a dividend yield. As a result, in order for the regulatory system to be consistent (assuming that the imputation credits can be utilised by an investor), the AER says that $1 of capital gain, which is equal to $1 of dividend, must be equal to $1 of imputation credits.

1072    As the Network Applicants submit, the difficulty that arises with this line of reasoning is that market-value studies of imputation credits suggest that investors may not value cash dividends and eligibility to reduce their income tax liabilities equally.

1073    Moreover, the AER's reasoning ignores the fact that other parameters in the WACC calculations are market values that already incorporate the effects of the differences in investors’ tax positions and transaction costs. As noted by Professor Gray of SFG Consulting, Estimating gamma for regulatory purposes, 6 February 2015 at 9:

In my view, gamma is no different from any other WACC parameter in this respect. For example, when estimating beta, the AER uses traded stock prices, which reflect the value of those shares to investors. That value reflects any “personal costs” that the investors bear. There is no process of adjusting share prices to reverse some of the reasons why investors value shares the way they do. The same applies to the traded bond prices that the AER uses to estimate the cost of debt. All of these prices reflect the value to investors – all of the considerations that are relevant to how investors value the stock are reflected in the price. [italicised emphasis in the original]

1074    Consequently, there is no inconsistency between the use of market studies to estimate the value of imputation credits and the methods used to calculate other parameters of the costs of debt and equity from market data.

1075    In the Handley 2014 Gamma Report prepared for the AER, Handley discusses, inter alia, recent extensions (Monkhouse, Lally and van Zijl, Handley) of the Officer Framework that relax the simplifying assumptions in the original analysis. These extensions include assessing the effects of:

    allowing for multiple time periods instead of Officer’s perpetuity approach (in which dividends and other flows are assumed to be constant amounts that are reduced to net present values at constant discount rates);

    the subsequent possibility that not all imputation credits are distributed in the periods in which they are generated;

    recognising that not all company tax paid can or will be issued as imputation credits (unlike Officer’s Framework which equates the proportion of imputation credits generated/available as a result of paying company tax with those actually distributed through franked dividends - the special case of 100 percent distribution, in which F=1 and γ=θ); and

    explicit application of forms of the CAPM, including variations that allow for the existence of, and interaction with, foreign residents, asset classes and markets.

These extensions imply different interpretations of, and appropriate empirical approaches to, the estimation of the distribution and utilisation rates that determine the value of gamma.

1076    Although these recent studies give greater insight into the impact of imputation credits on the value of a company in more general circumstances than allowed for by the Officer Framework, they do not appear to present an empirically robust and internally consistent explanation for the link between the existence of imputation credits and the applicability of the vanilla WACC that remains the basis for the allowed rate of return defined by r 6.5.2(d)(1) and (2) of the NER and r 87(4)(a) and (b) of the NGR.

1077    The Tribunal notes that the complementarity already discussed does not mean that there is some refined concept for the “value of imputation credits”.

1078    The complementarity suggests that the sort of factors which inform the return on equity generally (as discussed in “Return on equity” above) should also inform the determination of gamma. But, as is apparent from the matters there referred to, the values of the relevant elements are informed by data in the market, and are arrived at by analysis and inference from that data. They are not then adjusted by a more detailed analysis of why the participants in the market have caused the market to reach those levels or to act in that way. For some predictions, such information and analysis may be appropriate. But, in the present circumstances, there is nothing to suggest that reliance on historical data is inappropriate. Indeed, that is what all parties did in their respective submissions.

1079    The consequence is that it is how shareholders act in the marketplace, in relation to the utilisation of the franking credits available to them, which should inform the value of those imputation credits. Their individual, and inevitably different, sets of reasons for acting in the way they do generate their behaviour. The observable market behaviour is the consequence of the individual reasons of each shareholder, in that shareholder’s personal circumstances.

1080    Consequently, to the extent that the Network Applicants submit that the value of theta, and therefore the value of gamma, should be assessed at a “market value” which is less than the value which the observable behaviour including behaviour in the market demonstrates (as analysed by market studies and dividend drop-off studies) as well as the ATO tax statistics, the Tribunal does not accept that submission. It may be that the submission of the Network Applicants does not go that far.

1081    Of course, it also follows from the above, that the Tribunal does not accept the AER’s approach that imputation credits are valued at their claimable amount or face value (as it said in the Final Decisions: the measure is what can be claimed). The value is not what can be claimed or utilised, but what is claimed or utilised as demonstrated by the behaviour of the shareholder recipients of the imputation credits.

1082    Those comments do not address how the value of the imputation credits is best assessed or properly assessed in these matters. But they do re-affirm how the imputation credits, in the Tribunal’s view, are to be valued. Of course, the valuation is, or may be, a complex exercise depending upon the inference to be drawn from a range of data sources.

Consideration

AER’s CAPM framework

1083    It is clear that the AER's conceptual and empirical approach to estimating gamma has been influenced by models of the effects of imputation credits on the value of a BEE that generalise the original Officer Framework to allow for important real-world complications, such as the limits on companies’ ability to issue imputation credits and the existence of, and interaction of Australian investors with, foreign residents, asset classes and markets. It is also clear that the intent of the changes to the NER and the NEL was to allow the AER greater flexibility to adopt a more sophisticated approach to the cost of capital than previously envisaged by the NER and the NEL.

1084    It is appropriate that the AER should use that additional flexibility to seek advice on alternatives to the Officer Framework that better define the impact of imputation credits on the cost of capital. Nevertheless, the Tribunal considers that, in light of the changes to its methodology, the AER has not satisfied the Tribunal that its conception and estimation methods are consistent with the requirements of the NER and NEL, including the RPP. That is understandable where the experts themselves, through their recent reports, present no consistently coherent CAPM framework for the assessment of the components of the cost of capital. There are models with disputed applicability which may or may not be consistent with the application of a vanilla WACC, as required by r 6.5.2(d)(1) and (2) of the NER and r 87(4)(a) and (b) of the NGR.

1085    For instance, as discussed above, the AER adopted the SL CAPM as its foundation model for assessing the cost of equity. That model makes no allowance for the presence of imputation credits. Nevertheless, the AER’s preference for, and effective adoption of, the listed equity versions of the estimate of the distribution rate and of the equity ownership approach to theta was based on advice it received based on CAPMs that allow for the effects of imputation credits.

1086    This contrasts with the approach taken by the New Zealand Commerce Commission (NZCC) which adopted a variant of the CAPM, the Simplified Brennan-Lally CAPM (the SB-L CAPM) as the basis for its estimates of the costs of capital in its determinations in 2012 of allowed revenue for a number of regulated entities. As described in Wellington International Airport Ltd and others v Commerce Commission [2013] NZHC 3289 (WAIL) at [1090]:

The SB-L CAPM adapts the classical (tax free) CAPM to take account of New Zealand’s taxation system. It recognises the presence of imputation credits, assumes that they are fully utilised and also assumes that capital gains are tax-free.

The intention was to underpin its assessment of the cost of capital with a coherent framework that incorporated the effects of imputation credits.

1087    In their appeal to the New Zealand High Court against the NZCC determinations, the regulated entities criticised use of the SB-L CAPM on a number of grounds, including because the NZCC assumed entities had a common level of debt financing. This assumption was made in order to overcome an anomaly in the SB-L CAPM whereby the WACC increases with leverage, contrary to the generally accepted view that it should not (WAIL [1418]), since that would imply firms should never use debt finance. Whatever its other merits, the SB-L CAPM was not able initially to model the cost of capital in a way that was consistent with an important “real world” observation. The authors of the model were later able to demonstrate that the anomaly could be resolved by changes to the assumptions underlying the original version (WAIL at [1614] et seq), albeit at the cost of introducing other imperfections. Despite the acknowledged weaknesses of the SB-L CAPM and the approach taken by the NZCC, the High Court in WAIL considered that alternatives suggested by the appellants would not produce “a materially better IM [input methodology]” (WAIL at, for instance, [1656]).

1088    The New Zealand decision in WAIL suggests that financial modelling may not yet have produced a workable version of a CAPM that incorporates a generalised treatment of imputation credits, in which case the AER would necessarily have to make judgements about whether and how to modify the methodology in the RoR Guideline for factors subsequently raised in advice it received from experts.

1089    The question still remains whether the AER’s relevant conclusions in relation to this building block should be maintained. The Tribunal notes that the impact of the advice it received based on alternative forms of CAPM relates primarily to the adoption of the listed-equity versions of its estimates of the distribution and utilisation rates, rather than the all-equity versions in the RoR Guideline and Draft Decisions. The AER’s reliance on the equity ownership approach hinges on the larger question of whether that approach correctly captures the value of imputation credits, a concept unchanged by the amendments to the Rules and the NEL and NGL that allowed the AER to consider a wider range of financial models.

AER’s conceptual approach to and estimation of theta

1090    The evidence indicates that there is a discrepancy between the values for theta determined using the equity ownership approach, which is higher, and through the use of tax statistics, which is lower. As noted above, consistent with the Tribunal’s discussion in Energex (No 2), the Tribunal’s view is that tax statistics provide an upper bound on the estimate of theta. Therefore, to the extent that the equity ownership approach indicates that theta is above the amount specified through tax statistics, it is also apparent that there are investors who the AER assumes are eligible to redeem imputation credits but, for whatever reasons, either cannot redeem them or attribute so little value to the credits that they do not utilise them.

1091    As noted above, in its Final Decisions the AER specifically considered and rejected one of these potential reasons, the effect of the 45 day holding rule. The AER concluded, based on an analysis by Hathaway of ATO data that “the 45-day holding rule does not appear to have a material effect on the utilisation rate”. The AER found that, on the basis that there was no other data available, there was no compelling evidence of a material class of investors who hold shares for less than 45 days. On this basis, the AER did not attribute any effect from the 45 day rule to its calculation of gamma.

1092    In the Tribunal’s view, three issues arise with that analysis by the AER. Firstly, as outlined by the Network Applicants, clearly there is a class of investors who hold shares for less than 45 days. The present issue is not whether such a class exists, but the size of that class and the extent to which the value of imputation credits is lower as a result of domestic shareholders being unable to use them. Secondly, the value of theta produced by taxation statistics (and by market value studies to some extent) is evidence that Australian investors do not value imputation credits at their face value, including because they may be unable to use them. Finally, the ATO data relied on by Hathaway has since been acknowledged by Hathaway to be of some concern, as the existence, or non-existence, of some $180 billion of dividends cannot be internally reconciled with that data.

1093    Leaving aside the issue of whether the AER is correct to assume that eligible shareholders value each dollar of imputation credits at a dollar, the Tribunal considers that the equity ownership approach overstates the redemption rate. We agree with the Network Applicants’ submission that “even on the AER’s own definition of theta (focussing on potential utilisation by eligible investors), equity ownership rates are above the true maximum possible figure for theta”. In the Tribunal’s view the estimates of the redemption rate produced by the equity ownership approach would be useful only, like the upper bound suggested by tax statistics, as a further check on other estimates.

1094    It is agreed that the correct approach to gamma must involve an internally consistent method for estimating gamma with the allowed rate of return and that gamma must be given a “market-value”. It is the concept of “market value” that is disputed. The AER argues that its approach is consistent with the value of imputation credits to investors in the market. In Attachment 4 to the Ausgrid Final Decisions the AER states (p 4-46):

Our definition of the utilisation rate in this final decision and the draft decisions is the utilisation value to investors in the market per dollar of imputation credits distributed. Thus, we do consider that the utilisation rate represents the value to investors in the market. However, the key difference between our position and SFG's is we consider that, to be consistent with the underlying conceptual framework provided by Officer, we need to estimate the before personal-tax and before-personal-cost value.

The AER perceives no difference between attributing an assumed “utilisation value ... per dollar of imputation credits distributed” to estimates of the number or proportion of “investors in the market” eligible to redeem imputation credits and an estimate of the “market value” that those investors attribute to imputation credits as a part of the capitalised value of companies in the share market.

1095    The AER’s equity ownership and tax statistics approaches consequently make no attempt to assess the value of imputation credits to shareholders and ignore the likely existence of factors, such as the 45 day rule, which, across all eligible shareholders, reduce the value of imputation credits to those shareholders below the “face value” assumed by the AER. The Tribunal considers these approaches to be inconsistent with a proper interpretation of the Officer Framework underlying r 6.5.3 of the NER. It is the reason that the theta estimates produced by the equity ownership approach and tax statistics can be no better than upper bounds on the market value of imputation credits.

1096    Given that two of the three approaches adopted by the AER are considered no better than upper bounds, it follows that the assessment of theta must rely on market studies. The Tribunal considers that, of the various methodologies for estimating gamma employed by the AER, market value studies are best placed to capture the considerations that investors make in determining the worth of imputation credits to them.

1097    As noted above, the Tribunal considers the use of market studies to estimate the value of imputation credits is consistent with the methods used to calculate other parameters of the costs of debt and equity from market data.

1098    The Tribunal accepts the Network Applicants’ submission that the return on equity is derived from the market prices of government bonds (the risk-free rate) and from the market prices of shares (beta and MRP). The cost of debt is calculated by reference to bond yields. Bond yields are derived directly from the traded market prices of bonds. Further, we accept the Network Applicants’ submission that these market prices reflect every consideration that investors make in determining the worth of shares to them and that the bond prices, and the yields that are derived from them, reflect every consideration that investors make in determining the worth of the asset to them, including “personal costs”. Consequently, placing significant weight on market value studies is, in the Tribunal’s view, consistent with evidence relied on by the AER to calculate the rate of return on capital.

1099    The Network Applicants contend that the AER erred in its estimation of gamma by considering tax statistics and market value studies in a “very general manner” and thereby giving less weight to the SFG 2013 Study, advanced by them as providing the “best available” study of the market value of imputation credits.

1100    We consider that, by placing most reliance on the equity ownership approach and effectively defining the utilisation rate as the proportion of distributed imputation credits available for redemption, the AER has adopted a conceptual approach to gamma that redefines it as the value of imputation credits that are available for redemption. This is inconsistent with the concept of gamma in the Officer Framework for the WACC which underlies the Rules, and with the objective of ensuring a market rate of return on equity by making an adjustment to the revenue allowance for taxation to account for imputation credits.

Adjustment of SFG theta estimate for personal costs

1101    In summarising the utilisation rates from market value studies, the AER made adjustments based upon the view from its advisers that both the estimated value of cash dividends and imputation credits need to be grossed-up to adjust for factors, such as differential personal taxes and risk, which are not relevant to the utilisation rate.

1102    SFG Consulting in Estimating gamma for regulatory purposes, 6 February 2015 at fn 36 explains that the adjustment is not necessary, and stemmed from a misinterpretation of theta:

The AER’s adjusted figure of 0.40 with respect to the SFG study is based upon the incorrect view that both the estimated value of cash dividends and imputation credits need to be grossed-up to the correct figures for interpretation. This view is based upon the idea that the understatement of the value of cash dividends is due to an econometric bias that needs to be accounted for (that is, the true value of cash is 1.00 and the estimate is 0.88, and so the coefficients need to be multiplied by 1.00 ÷ 0.88 = 1.14). That adjustment is based entirely upon conjecture that the coefficients provide an under-estimate of the true value.

1103    The Tribunal accepts this explanation and provisionally considers that the best estimate of theta derived by the updated SFG Study is 0.35. That provisional view is subject to the “Conclusions” section of this part of the Tribunal’s reasons.
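
On the figures quoted, the relationship between the SFG estimate and the AER’s adjusted figure can be expressed as a simple check (a sketch using only the numbers referred to in the footnote and in the preceding paragraph):

\[
0.35 \times \frac{1.00}{0.88} \approx 0.35 \times 1.14 \approx 0.40
\]

That is, the AER’s adjusted figure of 0.40 is consistent with the SFG estimate of 0.35 grossed up by the disputed factor of 1.14; rejecting that adjustment returns the estimate to 0.35.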

Estimation of the distribution rate

1104    The AER calculated the distribution rate based on data from listed equity only (0.8 in the Networks NSW and ActewAGL Final Decisions and 0.77 in the JGN Final Decision) and the distribution rate for all equity (0.7).

1105    The Network Applicants say that the AER should not have relied on an estimate of the distribution rate for listed equity in estimating the distribution rate because it was likely to be unrepresentative of the distribution rate of the benchmark entity. This is because a large proportion of listed companies are multinational firms with foreign profits which will generally have an incentive (by virtue of generating foreign-sourced income) to distribute a higher proportion of imputation credits. In contrast, the benchmark entity, by definition, is an entity with 100 percent Australian income.

1106    The all equity estimate follows past practice up to and including the RoR Guideline and the Draft Decisions. The AER only introduced the listed equity estimate to reflect the views of its expert Handley on the scope of the relevant markets for assessing theta. In explaining its reasons, the AER stated (eg Attachment 4 to the Ausgrid Final Decisions at p 4-22):

… we now consider that:

    It is open to us to have regard to evidence from all equity and/or only listed equity.

    It would be inconsistent to pair an estimate of the utilisation rate from only listed equity with an estimate of the distribution rate from all equity (and vice versa).

Without questioning whether the option was open to the AER, the Tribunal on review is not of the view that this is a sufficient explanation for introducing the alternative measure. It does not explain how the change would be consistent with the NEL or NER, or otherwise advance the NEO or the NGO. In any event, given that the AER was referring to its estimate of the utilisation rate in the equity ownership approach, consistency with a market study estimate of theta would no longer necessarily require a particular definition of the distribution rate. At present, the Tribunal is of the view that it is appropriate to follow past practice.

Range of possible gamma

1107    If the AER’s equity ownership approach is not adopted, other than as a further check on the upper bound on estimates of theta, the AER gamma decision (0.4 from a possible range of 0.3 to 0.5) no longer aligns with the estimated ranges of the distribution and utilisation rates.

1108    The AER identifies that the redemption rate from tax statistics is 0.43 (or 0.45 using updated data) (Attachment 4 to each of the Final Decisions: JGN at p 4-17 (Table 4-1); ActewAGL at p 4-18 (Table 4-1); Networks NSW Final Decisions at p 4-18 (Table 4-1)). The Network Applicants’ preferred estimate of theta from the updated SFG study is 0.35. These values of theta produce estimates of gamma in the range between 0.25 and 0.30 with the all equity distribution rate in the RoR Guideline and Draft Decisions (0.7), or between 0.28 and 0.34 if using the higher listed equity distribution rate (0.8).

1109    Similarly, using the updated redemption rate that applied to JGN, gamma in that determination would either range between 0.25 and 0.32 with the original distribution rate, or between 0.27 and 0.35 when using the higher updated listed equity distribution rate (0.77).
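
The ranges in the two preceding paragraphs follow from the product of the distribution rate (F) and the utilisation rate (theta), that is γ = F × θ, of which F = 1 and γ = θ (noted earlier) is the special case. A worked check on the figures quoted:

\[
\begin{aligned}
F = 0.7:&\quad 0.7 \times 0.35 \approx 0.25, \qquad 0.7 \times 0.43 \approx 0.30, \qquad 0.7 \times 0.45 \approx 0.32\\
F = 0.8:&\quad 0.8 \times 0.35 = 0.28, \qquad 0.8 \times 0.43 \approx 0.34\\
F = 0.77\ \text{(JGN)}:&\quad 0.77 \times 0.35 \approx 0.27, \qquad 0.77 \times 0.45 \approx 0.35
\end{aligned}
\]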

Conclusion

1110    The Tribunal considers that the AER decision on this topic should be set aside. Further reasons for the conclusion, having regard to s 71P(2a) and (2b) of the NEL are in the concluding section of these reasons.

1111    As explained, the AER’s decision sets a value for gamma which is too high, where the relevant upper bounds for theta should be no more than the ATO statistical data of 0.43 (or 0.45 in the case of JGN).

1112    The 2012 Rule Amendments then impose a complex task on the AER, and on the Tribunal on review. Within the parameters of the NEO/NGO and the RPP, the decision on gamma (as with decisions generally) should take into account and reflect the inter-relationships between the building blocks, and the elements within them, in the determination to be made, and then produce the decision which properly serves the NEO/NGO.

1113    In this context, whilst recognising the complementarity discussed above, the reduction in the utilisation rate (theta) to a figure below the upper bound represented by the ATO statistics will or may have the consequence that the relevant regulated service provider, under this building block, may recover for corporate income tax more than the face value of the tax which it has in fact paid on behalf of its shareholders and which has been utilised by them, because a value of theta below the tax statistics will mean the imputation credits used come to be valued at less than their face value. There is a tension there which requires careful balancing.
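
A stylised illustration of that tension may assist (a sketch only: the $100m figure is hypothetical, and the calculation assumes the familiar form of the corporate income tax building block under which the estimated cost of corporate income tax is reduced by the factor (1 − γ)):

\[
\$100\text{m} \times (1 - 0.40) = \$60\text{m} \qquad \text{as against} \qquad \$100\text{m} \times (1 - 0.25) = \$75\text{m}
\]

A lower gamma therefore increases the tax allowance recovered from consumers; if imputation credits are in fact utilised at a rate above the gamma adopted, the service provider recovers more than the net cost of the tax actually borne on behalf of its shareholders.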

1114    The AER has estimated that, without allowing for interrelationships with other issues, varying gamma from 0.4 to the Network Applicants’ preferred 0.25 would change allowed revenues by around 1.3 percent for Ausgrid, Essential and ActewAGL, 1.5 percent for Endeavour and 0.6 percent for JGN, equivalent in nominal dollar terms to $110.4m for Ausgrid, $62.3m for Endeavour, $65.5m for Essential, $10.1m for ActewAGL and $13.9m for JGN. Those amounts were specified as the revenue outcomes on the topic of gamma as presented by the AER in its closing submission. The amounts involved of themselves are clearly significant.

1115    Any change in gamma will also have to be included in a revised rate of return on equity.

1116    The interaction with the rate of return on equity should only increase the materiality of the direct effects of a change in gamma on the revenue allowance for income tax.

1117    No further consideration of inter-relationships may be necessary – the impact of changing gamma “is what it is”, and the consequences for equity should also only be what will necessarily follow from correcting the AER’s error.

1118    The Tribunal notes that the SFG 2013 Study represents one point of view. As in a number of instances in these matters, there are conflicting expert views. Without the benefit of learning further from the experts, the Tribunal (like the AER) is faced with the selection between competing views.

1119    There are finely balanced decisions to be made in that light. As the Tribunal proposes to remit each of the Networks NSW and ActewAGL applications to the AER for reconsideration, in relation to the topic of opex, and as its revised determination will have to give effect to its obligation under s 16(1)(d), the Tribunal considers that it is likely to result in a materially preferable NEO decision if this issue, in the light of its reasons, is also remitted to the AER. It is an obligation which is a “holistic” one, so that – apart from its individual decision on the building blocks – the AER is required to step back and make the overall assessment required.

1120    The way in which the aspect of the Tribunal’s reasons should be given effect in relation to the JGN Final Decision is addressed in the separate reasons of the Tribunal in that matter.

METERING SERVICES OPEX

INTRODUCTION

1121    Ausgrid raised a ground of review with respect to metering services opex. It is prudent to briefly explore the classification of metering services and why metering services opex is a separate component from the opex already addressed.

1122    Under the NER, the AER makes a distribution determination every five years relating to the provision of electricity network services. The services that are a subject of the distribution determination are called “direct control services”.

1123    “Standard control services” (SCS) may be described as the services shared across all customers in a network, for example, vegetation management and maintenance. The costs associated with SCS are placed into building block elements, which include the return on capital, opex, the cost of corporate income tax, increments and decrements from incentive schemes and the cost of any jurisdictional scheme. Those building blocks, with the application of any control mechanisms, together produce the revenue that can be earned by a DNSP for SCS. The allowed revenue in each year of the regulatory control period for SCS is then recovered through network tariffs paid by electricity retailers, which are ultimately paid for by consumers. This is the “revenue cap” control mechanism in action because the DNSP cannot earn more than its allowed revenue for a particular combination of services: r 6.2.5(b)(3) of the NER.
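
In schematic terms (a sketch only, using the building block elements listed in this paragraph rather than the precise formulation in the NER), the annual revenue requirement for SCS in year t may be represented as:

\[
ARR_t = \text{return on capital}_t + \text{opex}_t + \text{corporate income tax}_t \pm \text{incentive scheme amounts}_t + \text{jurisdictional scheme amounts}_t
\]

with the revenue cap control mechanism then preventing the DNSP from earning more than the allowed revenue so derived through its network tariffs.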

1124    “Alternative control services” (ACS) are services with costs that are attributable to a specific customer. They could be requested by the customer or may be simply attributable to them alone. These costs are recovered differently from SCS, which are spread across all customers within the distribution area. ACS are regulated by a “price cap” control mechanism, which means a DNSP cannot charge more than a specific amount for the service: r 6.2.5(b)(2) of the NER.

1125    The AER decided to include the services Ausgrid provides associated with the reading, operation and maintenance of electricity meters installed at customer premises as ACS under r 6.2.2 of the NER. The price cap would therefore include a return on the metering asset base (MAB). The meters that were classified as ACS are known as Type 5 and Type 6 meters.

1126    A Type 6 meter is a meter that most of Ausgrid’s residential or small business customers have at their premises. It is a widely used meter for small customers across the national electricity market. It calculates the amount of energy consumed in kilowatt hours (kWh). It is read manually every quarter.

1127    A Type 5 meter is also known as a Manually Read Interval Meter (MRIM). This meter is able to record customer consumption throughout the day at 30 minute intervals. This means that it records 48 data points a day and stores them until it is read manually on a quarterly basis. It is read by being probed, with the data downloaded onto a device held by the meter reader. As it is a manually read meter, it does not possess remote communications capacity to transmit data to a separate location.

1128    The AER based its decision on metering services opex, as part of the non-capital component, on an estimate of the forecast opex associated with providing metering services for the regulatory control period. It rejected Ausgrid’s proposal of $142.7m (2014-15) and substituted its own forecast of $111.0m (2014-15) for metering services opex.

The Decision

1129    Ausgrid’s Regulatory Proposal selected 2012-13 as the best representation of the current volumes and efficiencies for Type 5 and 6 meters. In Attachment 8.15 to its Regulatory Proposal at p 22, Ausgrid stated that the costs of Type 5 meters were three to four times higher per quarter than Type 6 meters.

1130    The AER’s Draft Decision used a lower annual point for metering opex in 2014-15 of $23.3m ($nominal). This was done, as asserted by Ausgrid in the Networks NSW submission, by reference to the average opex for 2009-13 ($nominal per annum) and a benchmark metering cost of $14 per customer per annum. The AER rejected the proposed $143.4m ($nominal) of metering opex and substituted $119.1m ($nominal).

1131    Ausgrid challenged the AER’s Draft Decision in its Revised Proposal, by contending that Energex had not historically operated Type 5 meters and, as at 30 June 2014, there were zero meters operating as Type 5 in its distribution area. After removing the cost of Type 5 meters from the forecast, Ausgrid’s metering costs were $11.26 ($nominal) per customer for a Type 6 meter. This is below $14 per customer and demonstrated, according to Ausgrid, that Energex was not an appropriate benchmark. The second issue was that Ausgrid had experienced a significant increase in Type 5 meters since 2008-09. If an average of 2008-13 was utilised to calculate metering opex then it would incorrectly represent the costs associated with metering due to the increase in Type 5 meters being used in Ausgrid’s distribution area since 2008-09. Ausgrid posited that Type 5 meters were more costly to operate and maintain as it takes longer to read a Type 5 meter (as reflected in the probe meter reading surcharge). There are also greater costs associated with the validation of interval meter data as per the AEMO metrology procedure requirements. Ausgrid argued that a 2012-13 base year would be more appropriate as in 2008-09 Ausgrid had 15 percent Type 5 meters and 85 percent Type 6 meters in use, whereas 2012-13 had 30 percent Type 5 meters and 70 percent Type 6 meters.

1132    Its Revised Regulatory Proposal took into account updates to labour, materials, contracted services and labour hire costs. It proposed $142.7m ($real 2014-15). The AER rejected this proposal and substituted its own forecast of $111m ($real 2014-15) in its Final Decision.

1133    The AER used the 2008-09 to 2012-13 average when calculating metering opex and maintained its position from the Draft Decision. This was because, in the absence of an EBSS for ACS opex, it did not want to create an incentive to overload metering opex into a single year. It considered that Type 5 meters were not more expensive to operate than Type 6. It stated in Attachment 16 to the Ausgrid Final Decision, at 16-57 that:

There is no material difference in the cost of operating type 5 and 6 meters as both have to be manually read. This involves visiting an installation, reading them from a numeric display, and recording the information on a handheld device. This process takes only a few minutes at each site and is not materially different between type 5 and type 6 meters.

1134    To the extent that more stringent data obligations existed in relation to Type 5 meters under AEMO metrology procedure requirements, the AER considered that data validation could be done through the imposition of appropriate computer systems. The costs associated with Type 5 meters would not be a recurrent cost on base opex. The AER said it provided Ausgrid with an opportunity to recover at least its efficient costs and approved, in addition to metering opex, $15.5m ($2014-15) in information technology capex for the provision of Type 5 and 6 metering services.

1135    The AER decided that metering could be charged on the basis of an up-front capital cost for new and upgraded meters from 1 July 2015 and an annual charge with capital and non-capital components to cover the costs of the MAB, opex and tax. The charge was set on a cost-reflective basis to meet the pricing principles in r 6.18.5 of the NER.

1136    The network pricing principles relate more closely to the pricing proposal for the first regulatory year which a DNSP must submit to the AER as soon as practicable and within 15 days of the distribution determination. The proposals are produced annually and three months prior to the commencement of the second to fifth regulatory year under r 6.18.2 of the NER.

1137    The network pricing objective in r 6.18.5(a) of the NER states:

The network pricing objective is that the tariffs that a Distribution Network Service Provider charges in respect of its provision of direct control services to a retail customer should reflect the Distribution Network Service Provider’s efficient costs of providing those services to the retail customer.

1138    The network pricing principles are included in r 6.18.5.

Grounds of Review

1139    Ausgrid sought to establish a ground of review under s 71C of the NEL on the basis of the following errors:

    The conclusion that Type 5 meters are not more expensive to operate and maintain than Type 6 meters, when the information before the AER demonstrated otherwise; and

    The decision to determine that it was inappropriate to use a single year because that would create an incentive for Ausgrid to load its expenditure into that single year, which is based on the erroneous assumption that the opex costs for Type 5 and Type 6 meters are not materially different.

1140    It said its grounds of review on the AER’s metering opex decision would be made out on the basis of each point individually, or the points collectively. The conclusion that Type 5 meters were not more expensive to operate was, it said, an error of fact material to the making of the decision. It could also be seen as an unreasonable decision because it contained logical error and did not take into account the material before the AER regarding the costs to operate the two different meter types.

1141    It also claims that the AER’s decision to use an average rather than a single year involved an incorrect exercise of discretion. This is because the decision not to use a single year was based on the logical error or erroneous assumption referred to, and not on an analysis of the most appropriate methodology for choosing the base rate to forecast metering opex. As the Final Decision was based on a factual error, it was an incorrect exercise of discretion. The decision was also claimed to be unreasonable as it contained an element of arbitrariness, and did not take relevant considerations into account.

1142    Ausgrid also stated the decision by the AER to substitute its own metering opex forecast was an unreasonable decision because it contained a logical error, given the evidence that Ausgrid was likely to incur substantially greater opex over the regulatory control period than that taken into account by the AER in setting the ACS price cap.

Costs of Type 5 and Type 6 meters

1143    Ausgrid argued that there was evidence before the AER that Type 5 meters were more expensive to operate and maintain because they are more complex. They store and record readings every 30 minutes for three months, unlike a Type 6 meter which is only required to read and store one figure per quarter. A Type 5 meter requires a probe reading which takes 35 seconds whereas a Type 6 meter reading takes 7 seconds. Some Type 5 meters are read monthly and others are read quarterly, whereas all Type 6 meters are read quarterly. The contractor who conducts meter reading for Ausgrid charges a probe reading surcharge which is $1.2m annually.

1144    It was presented to the AER that Type 5 metering data must be published to the National Electricity Market, electricity retailers and the network. The server data storage required would be greater in volume and sophistication than that required for Type 6 meters, and there is also a need to ensure the accuracy of the data and to meet validation requirements. These all involve greater costs.

1145    It also said that the complexity of the Type 5 meters compared to the Type 6 means they need to be maintained more regularly by a metering technician to resolve operational issues, such as failed reads.

1146    Ausgrid also submitted that the price cap determination made by the AER included a higher non-capital charge cap on the annual metering services charge for Residential Time of Use (ToU) tariff customers (Type 5 meters) than for Residential Inclining Block Tariff (IBT) customers (Type 6 meters). This applied equally to small business customers. The ToU tariff charges different rates in price per kWh depending on the time of day the electricity is consumed. The IBT sets a flat rate that applies throughout the day but charges more depending on the volume of electricity consumed, with different amounts per “block”. The higher the usage, the higher the price per kilowatt hour. While the AER’s revenue allowance assumed the opex costs are comparable, the ACS price cap demonstrated differences in costs between the two meter types.

1147    The AER argued in its submissions that it accommodated the differences in time to read the different types of meters as it was assumed to take a couple of minutes at each metering site. As that had been considered, the only differential was the frequency of meter reads, acknowledging that some will require monthly reads. The AER said that only 24,616 of 1,619,307 customers in 2014 required monthly reads, which is only 1.5 percent of customers.
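
The percentage referred to follows directly from the figures quoted:

\[
\frac{24\,616}{1\,619\,307} \approx 0.0152 \approx 1.5\%
\]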

1148    The AER also said the costs associated with Type 6 meters are broadly the same as for Type 5 meters. They must be read, and the data processed and stored. It correctly highlights that the Ausgrid submission also relied on the data processing costs to reflect the higher maintenance costs. The AER stated that the costs associated with storing data were non-recurrent costs and that Ausgrid did not consider its approved capex costs in making this claim.

1149    The AER stated that the ACS revenue was determined based on cost-inputs. The pricing models are not inputs into the assessment of revenue. The AER stated that the manner in which prices had been set (prepared on the basis of Ausgrid’s pricing model) was not an indication that it accepted that the overall costs of Type 5 and 6 meters were materially different.

Averaging from 2008-09 to 2012-13

1150    Ausgrid said that the AER’s decision to use an average over 2008-09 to 2012-13 was in error. The AER did this to avoid any incentive to load metering expenditure into one year. This was seen as a consequence of the AER’s decision on the similarity in costs between Type 5 and 6 meters. If there was no material difference in costs, there would be no reason to use a narrower date range.

1151    Ausgrid said the AER did not suggest that it had loaded up a single year with expenditure. Ausgrid argued that the suggestion was in fact contrary to its explanation for proposing the 2012-13 year, namely that its proportion of Type 5 meters had stabilised. Therefore, using a single year’s metering opex as the basis for the decision in the present circumstances would not have created any likelihood or expectation that a single year’s metering opex would form the basis for any determination in the future. It also argued that the issue facing the AER was to determine the forecast metering opex. Using an average of Ausgrid’s metering opex from 2008-09 to 2012-13 would not be a good predictor of Ausgrid’s likely metering opex for the 2014-19 regulatory control period. Ausgrid’s position was that the AER Final Decision resulted in it not considering costs into the future out of concern it would create an incentive to load costs into a single year, even though the costs incurred in the single year were identifiable and quantifiable.

1152    The AER stated that it took its “multi-year approach” because it was a more robust approach and would avoid any incentives to load up into a single year moving forward. This was because, in the absence of an EBSS, there would not be such an incentive to reveal costs. The AER stated that it considered Ausgrid’s position regarding the Type 5 meters during that period of time. Its preference for an alternative methodology did not involve any factual error, or any incorrect exercise of discretion and did not lead to an unreasonable decision, having regard to all the circumstances.

Consideration

1153    The Tribunal does not consider that the AER was in error in its decision relating to metering services opex. The decision that Type 5 and 6 meters do not have materially different costs was not an error of fact and does not, therefore, lead to it making an unreasonable decision, having regard to the circumstances.

1154    It is easy to say that utilising Energex as a benchmark at $14 per customer per annum cost for metering was inappropriate given the differences in metering uptake between their respective distribution areas. However, the AER did not use that benchmark blindly. It took into account the relative time actually required to read Type 5 and Type 6 meters, but as part of the attendance cost per read. As it said, the time for reading each type of meter was not charged at 35 second and 7 second units, but was a more general charge on the basis of each site attendance. The information before the AER on that basis did not support a materially different number of reads per hour or per day, by reason of some of the reads being of Type 5 meters.
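
A rough illustration of that reasoning, on assumed figures only (a site attendance of approximately two minutes, of the order recorded in the AER’s decision, and the 30 percent Type 5 proportion referred to in Ausgrid’s Revised Proposal for 2012-13):

\[
35\,\text{s} - 7\,\text{s} = 28\,\text{s}; \qquad 0.3 \times 28\,\text{s} \approx 8\,\text{s} \approx 7\% \text{ of a } 120\,\text{s attendance}
\]

On those assumed figures the additional reading time attributable to Type 5 meters is a modest fraction of each site attendance, which is broadly consistent with the AER’s view that the number of reads per hour or per day is not materially different.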

1155    The AER accepted that, in the case of Type 5 meters, there may be a slightly higher frequency of meter reads because of servicing issues or for other reasons, but the data available to it indicated that, in the 2013-14 year, that was an insignificant (1.5 percent) difference.

1156    The Tribunal is not satisfied that those two steps by the AER are either incorrect or inappropriate. They do not lead to the Tribunal having the firm view that some other finding of fact on that topic is the correct one, whether it be the finding of fact urged by Ausgrid or some other finding of fact.

1157    In those circumstances, the use of Energex as a benchmark for metering costs, despite the differences in its network which Ausgrid pointed out, was not an incorrect exercise of a discretion by the AER, as Ausgrid asserted.

1158    The Tribunal is also not satisfied that the AER did not adequately consider the differences between Type 5 and 6 meters, and their associated costs. The Tribunal notes that the associated costs for storage of data and information technology infrastructure are a capital cost, which was allowed. It may be that capital costs for collecting and maintaining data, in alignment with AEMO requirements, are somewhat higher because of the vastly greater volume of data to manage, validate and report on.

1159    However, it was open to the AER to conclude that only 1.5 percent of customers would be causing higher opex costs as a result of monthly meter reads. It is logical that a monthly site visit to extract meter data would be more expensive than a quarterly read. The surcharge cost, the additional time, the more sophisticated equipment and regular repair and maintenance are all relevant considerations. If Ausgrid is required to pay additional charges for probed readings, and those readings take longer and require more sophisticated technology, then to that extent Type 5 meters may be more costly to operate and maintain. Using Energex as the benchmark, against Ausgrid’s lower nominal cost per read for Type 6 meters, would appear to build the potential for such higher costs into the AER decision.

1160    In any event, having regard to the relatively small percentage of increased reads which the data suggested, the Tribunal does not consider that the overall allowance by the AER indicates an error of fact in the findings of fact or an error in the selection of total allowance which would be material to its decision.

1161    The Tribunal is also not satisfied that the AER’s decision to use five-year averaging to calculate metering opex demonstrates reviewable error. The AER did not overlook the fact that there were more Type 5 meters in Ausgrid’s distribution system towards the end of that period. It acknowledged that in future years the intensity of Type 5 meters may increase, but indicated that with the passage of time allowance could be made for that to the extent appropriate. However, for the purposes of the current regulatory period, it was appropriate for the AER to be conscious of the risk that the 2012-13 year (Ausgrid’s suggested base year) may be unreliable because it may reflect the overloading of metering opex in that year.

1162    The Tribunal regards the AER’s caution in the circumstances as appropriate. In general terms, multi-year averaging is more robust. It is not satisfied that the AER’s decision in this regard involved any relevant factual error, or any wrongful exercise of its discretion.

Conclusion

1163    For those reasons, the Tribunal does not consider it appropriate to set aside the AER’s decision on metering opex in the Ausgrid Final Decision.

1164    It is not necessary, therefore, to consider whether or how the rectification of any ground of review would or would be likely to lead to, or contribute to, a materially preferable NEO decision.

1165    If error of the kind asserted by Ausgrid were made out, it is not at present obvious to the Tribunal that the correction of the asserted errors by remitting these issues to the AER would, or would be likely to, lead to a materially preferable NEO decision. As discussed in the concluding section of these reasons, it appears that to a significant extent, the AER (and on review the Tribunal) is charged with making the best or preferable decision in the long term interests of consumers, which may involve a trade off between cost and quality or reliability of the provision of the service. At least on this particular topic, the trade off is not self-evidently in favour of increasing the cost to consumers, for the benefit of the installation of the Type 5 meters or for the benefit of the detailed usage data that can then be provided. That may or may not be the case. It is not necessary to decide it in this context.

FINAL CONCLUSIONS

General

1166    For the reasons stated above, the Tribunal concludes that Ausgrid has made out its grounds of review in relation to the allowance or figures in the Ausgrid Final Decision for:

(1)    Opex. As explained above, it follows from that conclusion that the AER’s decision on Ausgrid’s STPIS is flawed and it is unnecessary for the Tribunal to determine whether the AER’s decision on the X factor is flawed.

(2)    Return on debt.

(3)    Gamma.

1167    Also for the reasons stated above, the Tribunal concludes that Ausgrid has not made out its grounds of review in relation to the allowance or figures in the Ausgrid Final Decision for:

(1)    EBSS.

(2)    Return on equity.

(3)    Metering services.

1168    It is then necessary to decide what, if any, orders should be made by the Tribunal in the light of those conclusions. The options available to the Tribunal under s 71P of the NEL are:

(1)    to affirm the Ausgrid Final Decision;

(2)    to vary the Ausgrid Final Decision; or

(3)    to set aside the Ausgrid Final Decision and remit the matter to the AER to make the decision again in accordance with any direction or recommendation of the Tribunal.

1169    As noted above, the Tribunal may only vary the Final Decision, or set it aside, if, as s 71P(2a) provides, the Tribunal is satisfied that to do so will, or is likely to, result in a materially preferable NEO decision (otherwise the Final Decision must be affirmed): s 71P(2)(c). In the case of a variation order, the Tribunal may vary the Final Decision only if it is also satisfied that doing so will not require it to undertake an assessment of such complexity that it is preferable to set aside the Final Decision and remit it to the AER: s 71P(2a)(d).

1170    The second step can readily be addressed. It is almost self-evident from the topics in respect of which the Tribunal has found grounds of review made out and its reasons that the task of undertaking the appropriate review and determining the appropriate orders is a complex one. The Tribunal does not presently have the resources available to the AER to itself undertake that task and to secure those resources rather than have the AER reconsider its decision would not be sensible.

1171    Networks NSW proposed a variation of the Final Decision in the case of Ausgrid by substituting in respect of:

(1)    opex, the figure of $2,674.3m ($2013-14, excluding debt raising costs and demand management innovation allowance) for the period 2014-19;

(2)    gamma, the total income requirements for standard control services altered to amounts calculated by reference to an estimated cost of corporate income tax based on a gamma of 0.25, with consequential amendments in respect of alternative control services;

(3)    allowed rate of return on debt, by amounts calculated by reference to a return on debt of 7.908 percent for the 2014-15 regulatory year; of 7.94 percent for the 2015-16 regulatory year; and annual updating using a trailing average methodology with consequential amendments;

(The grounds of review were not made out in respect of the allowed rate of return on equity. The Tribunal has excluded the adjustments Ausgrid has proposed requiring use of a BBB credit rating and the RBA curve only, as it has not concluded that the use of the credit rating and mixed curve selected by the AER separately give rise to any ground of review.)

1172    There were variations in those proposed final orders to accommodate the different positions of Endeavour and Essential. The Networks NSW proposed opex allowance was $1465.6m for Endeavour and $2306.6m for Essential (without specifying nominal or real terms). There were EBSS adjustments of $197m for Endeavour and $72m for Essential (again without specifying nominal or real terms). The proposed orders also recognised that only Ausgrid provided transmission control services.

1173    Networks NSW, no doubt to cover the prospect of the Tribunal not being satisfied that the variation of the relevant Final Decisions would not require such a complex assessment as to warrant the preferable course of remitting the matter to the AER, also submitted alternative orders to accommodate the course of remitting the Final Decision to the AER with directions.

1174    The Tribunal’s consideration of the opex issue is sufficient to explain why it does not have the satisfaction prescribed in s 71P(2a)(d). The task involves a reconsideration of extensive source material and a decision upon the form or forms of modelling, including the relevant inputs, which is a complex task in itself. It requires the careful re-analysis of historical data. The material which comprised the review-related material for the purposes of the hearing of these applications was said to extend to more than one million pages. The number of expert reports is also very large.

1175    The short step the Tribunal has taken in the light of its reasons regarding the opex issue is that it does not have the satisfaction prescribed in s 71P(2a)(d) to lead it to making orders varying the relevant Final Decisions of the AER in relation to Networks NSW.

A Materially Preferable NEO Decision?

1176    It is then necessary to address the issue prescribed in s 71P(2a)(c). Unless the Tribunal has that satisfaction, notwithstanding its conclusions about the ground or grounds of review made out, it must affirm the Ausgrid Final Decision and, in the absence of particular reasons to distinguish the circumstances of Endeavour and Essential, each of the Networks NSW Final Decisions.

1177    In considering that issue, the Tribunal is directed to consider each of the matters referred to in s 71P(2b)(a)-(c). It is also directed by s 71P(2b)(d) that, in themselves, neither the establishment of a ground of review, nor the “consequences for, or impacts on, the average annual regulated revenue” of a DNSP, nor the fact that the amount in issue exceeds the amount specified in s 71F(2) – namely the lesser of $5m or 2 percent of the average annual regulated revenue of the DNSP – “determine” the question about whether a materially preferable NEO decision exists.

1178    At a straightforward level, Networks NSW contends that correcting an error (as established by a ground of review being made out) will, or will be likely to, result in a materially preferable NEO decision. The fact that a ground of review is made out cannot, of itself, determine that question affirmatively: s 71P(2b)(d)(i). But, Networks NSW says, the proper application of the building block methodology in the NER, with each building block determined in accordance with the NER, will promote the NEO.

1179    There is, however, an additional step required. The fact that (as may be accepted) the proper application of the NER on the building block methodology under Part C of the NER will promote the NEO does not mean that, where a step taken by the AER is not in full accordance with the building block methodology, the NEO is not being achieved. There may be other matters which the AER considered, and which may balance out any adverse consequences of such non-compliance. The amounts involved may be so small that the departure from the building block approach does not impair the NEO in a material way. Depending on the options considered by the AER, there may be two or more possible decisions which may contribute to the achievement of the NEO, and the AER may have formed an appropriate assessment of those alternatives: s 16(1)(d) of the NEL. As was pointed out by the AER, s 16(2) requires it to “take into account” the RPP in s 7A when exercising a discretion in relation to a regulatory distribution or transmission determination, or when making an access determination relating to a rate or charge for an electricity network service provider. The AER says that how it takes the RPP into account is a matter for it.

1180    It is also important to acknowledge, as was very clearly demonstrated by the consultation undertaken by the Tribunal, that the elements of the NEO – in the long term interests of consumers – are potentially in conflict. In particular, the price at which electricity is supplied to consumers is presently (and will continue to be under the new regulatory regime) one which many consumers find confronting. There are significant numbers of consumers or potential consumers who either cannot pay, or have great difficulty in paying, that price. The difficulty in paying that price was also reported by some small and medium sized businesses, so that alternatives to using the electricity network, or a focus on minimising that usage, were explained. On the other hand, for obviously good personal or commercial reasons, there were a significant number of consumers who expressed the need to have a very reliable and secure supply of electricity, and others who emphasised the need for safety in the structure and operations of the network.

1181    Where the line or lines are to be drawn between price on the one hand, and quality of service, reliability and security of supply (or some of those elements) on the other, is not an easy question. The line nevertheless is clearly one which must be drawn. The consultation process, and the submissions of all parties, made it clear that some compromise is necessary. Also, as observed in the Introduction to these reasons, it was specifically noted on the introduction of the NEO (which has remained constant) that the NEO did not (and does not) extend to "broader social and environmental objectives": Legislative Council, South Australia, 16 October 2007, Hansard p 886.

1182    It is also important to note that the NEO is to promote efficient investment in, and efficient operation and use of, electricity services for the long term interests of consumers with respect to the identified topics. Efficiency is an economic concept. It is then explained or expanded on by the RPP. It is not necessary to list the RPP serially to reinforce that point; they include references to consideration of the potential for under- and over-investment and under- and over-utilisation where regulatory control is imposed. The building blocks specified in Part C of the NER, as generally identified in r 6.4.3 and as then specified in r 6.5, fortify the appropriateness of that observation.

1183    Consequently, the line to be drawn involves or requires a regulatory assessment to be made about those matters.

The AER’s Approach

1184    It is instructive, in this context, to refer to how the AER addressed that task, and in turn its obligations under s 16(1)(d) of the NEL.

1185    In this respect, the references to the Ausgrid Final Decision can be taken as typical of how the AER explained its approach to those matters in each of the five Final Decisions which gave rise to the eight applications heard together.

1186    It will not be necessary, when addressing the applications individually (including the JGN application under the NGL), to refer separately to each of the respective Final Decisions.

1187    Under the heading "Contribution to the achievement of the NEO" (Ausgrid Final Decision – Overview at pp 10-20), the AER specifically recorded its conclusion in terms of s 16(1)(d) and in terms of the NEO.

1188    It recognised that the key drivers of cost facing a network service provider are:

    its accumulated network investment (reflected in the regulatory asset base);

    its expected growth in network expenditure (reflected in the capex program “net of capital returned to shareholders through depreciation”);

    its financing costs (interest on borrowing and a return on equity to shareholders);

    its opex program (the cost of operating and maintaining the network); and

    its taxation costs (taxable income at the corporate rate adjusted for the value of imputation credits).

1189    Each of those topics reflects one of the building blocks in the NER.

1190    At p 11, the AER referred to the most important factors impacting on Ausgrid's costs in the 2015-19 regulatory control period. It identified: an improved investment environment, translating to lower financing costs necessary to "attract efficient investment"; evidence that Ausgrid's past expenditure has been higher than necessary to maintain its network safety and reliability (confirmed, it stated, by inter alia its benchmarking analysis); lower than expected demand growth and therefore falling levels of utilisation, with reasonably flat forecast demand expected; and inefficiency in Ausgrid's labour and workforce practices.

1191    It concluded at p 11:

These factors are reflected throughout our final decision and impact the different constituent components of our decision to varying degrees. At the total revenue level, they provide a consistent picture: Ausgrid, operating prudently and efficiently, could provide distribution services with materially less revenue than it has proposed for the 2015-19 regulatory control period. Further, the average annual revenue Ausgrid requires in the 2015-19 regulatory control period is materially less than the revenue it recovered from customers in 2013-14.

In our final decision we consider that Ausgrid’s proposal does not reflect the factors impacting on its cost drivers to a satisfactory extent. As a consequence, we conclude that Ausgrid has proposed to recover more revenue from its customers than is necessary for the safe and reliable operation of its network. It follows that we consider that Ausgrid’s revised proposal does not contribute to the achievement of the NEO to a satisfactory degree.

1192    As the AER then said, the major constituent components of the AER Final Decision related to the rate of return and opex. Those matters are, of course, addressed above.

1193    One matter which the AER addressed at this point, prompted by Ausgrid's Revised Regulatory Proposal, was the "safety implications" of the opex proposed in the Draft Decision. The AER said, in part by reference to its benchmarking analysis and the OEFs, that it considered the revenue allowance it determined would fund efficient costs for Ausgrid, as a prudent operator, to run its network safely and reliably. Thus, the AER said, Ausgrid's costs above that level should be borne by its shareholders and not its consumers.

1194    Another matter addressed at this point, prompted also by Ausgrid's Revised Regulatory Proposal, was financeability. The AER noted that Ausgrid indicated that its financial viability would be threatened as a result of its Draft Decision if carried through. In support of this, Ausgrid had submitted a range of material including an expert's report submitting that sizeable opex reductions in a short period of time would negatively impact the ongoing financeability of the DNSPs and their viability as economic entities: Ausgrid, Revised Regulatory Proposal, January 2015 at p 45; a confidential credit profile report by Standard and Poor's (S&P): S&P, Confidential credit assessment: Ausgrid – Stand-alone credit profile, January 2015; and a report by UBS including confidential content relevant to financeability: UBS, Financeability – Debt issue and capital structure (Confidential version), January 2015.

1195    The AER said as to that at pp 18-19:

Neither the NEL nor the NER include an explicit obligation requiring us to consider the impact of our determination on the viability of the service provider in its actual circumstances. Our task is to determine the revenue that a service provider can recover from its customers with reference to an efficient and prudent level of expenditure. The service provider’s actual ownership circumstances and the financial structure of its shareholder are not factors that we are required to consider in fulfilling our task under the NEL or the NER.

1196    The AER added that Ausgrid had not been clear about what it meant by the term "financial viability", so it had considered whether Ausgrid would be at material risk of insolvency. By modelling cash flows, and on the basis of an expert report of RSM Bird Cameron: Independent review of the AER's internal cash flow analysis of insolvency risk for NSW electricity service providers for the regulatory period 2014-19, April 2015, the AER concluded that Ausgrid would not be at material risk of insolvency. It was not prepared to act on the S&P Report, as it "was not persuaded" that the assumptions underlying that report were reasonable; its cash flow modelling and its expert report did not support the claim; and it noted that Ausgrid was subject to a stable regulatory environment favourable for capital raising (of course, the latter observation is dependent upon informed investors considering that an investment of capital is worthwhile, and upon informed lenders considering that interest on the advance of funds would be able to be paid at an appropriate rate and the funds duly repaid).

1197    The AER Final Decision in Attachment 20 – Analysis of financial viability further explains why it reached that view.

1198    More generally, and in principle more significantly, by reference to s 16(1)(d), the AER, under the heading "Assessment of options under the NEO", addressed matters relevant to the issue the Tribunal is now addressing: that is, that there may be several possible decisions that will, or are likely to, contribute to the achievement of the NEO. It said at pp 19-20:

For at least two reasons, we consider that there will almost always be several decisions that contribute to the achievement of the NEO. First, the NER requires us to make forecasts, which are predictions about unknown future circumstances. As a result, there will likely always be more than one plausible forecast. Second, there is substantial debate amongst stakeholders about the costs we must forecast, with both sides often supported by expert opinion. As a result, for several components of our decision there may be several plausible answers or several point estimates within a range. This has the potential to create a multitude of potential overall decisions. In this decision we have approached this from a practical perspective, accepting that it is not possible to consider every possible permutation specifically. Where there are several plausible answers, we have selected what we are satisfied is the best outcome, under the NEL and NER.

In many cases, our approach results in an outcome towards the end of the range of options materially favourable to Ausgrid (for example, our choice of equity beta). While it can be difficult to quantify the exact revenue impact of these individual decisions, we have identified where we have done so in our attachments. Some of these decisions include:

    selecting at the top of the range for the equity beta

    setting the return on debt by reference to data for a BBB broad band credit rating, when the benchmark is BBB+

    the cash flow timing assumptions in the post-tax revenue model

    the point at which we have set the benchmark for opex

    the allowances we have made for operating environment factors in our benchmarking analysis.

We set out our detailed reasons in the attachments. They demonstrate that the constituent components of our decision comply with the NER’s requirements. At an overall level our decision reflects the key reasons set out above, which indicate that Ausgrid should recover less revenue than it has proposed or recovered in recent years. Our decision reflects these at both the constituent component and overall revenue levels.

Given our approach, we are satisfied that our decision will or is likely to contribute to the achievement of the NEO to the greatest degree.

1199    It is important to note that the AER recognises that the regulatory decision-making under the NEL may involve balancing of factors going to the long term interests of consumers. It returned to that theme in the later section of its Ausgrid Final Decision – Overview, under the heading "Understanding the NEO" at pp 52-53. There it referred to the importance of "correct" pricing. It said that, on the one hand, overpricing leads to consumers not using, or not efficiently using, the network (with consequences for the longer term pricing faced by those consumers who continue to use it); and that, on the other hand, underpricing, by producing too low a revenue stream, leads to investors being unwilling to invest in adequately maintaining the network, so adversely affecting its safety, security and reliability to the detriment of consumers. Neither of those positions would advance the NEO.

1200    The Tribunal agrees with those observations.

1201    There was an additional matter for the AER to address.

1202    In tandem with s 71P(2b)(a) and (2c)(a) for the Tribunal, s 16(1)(c) of the NEL requires the AER to consider how the constituent components of its relevant Final Decisions relate to each other, and how that inter-relationship has been taken into account. The AER recognised that inter-relationships can take various forms, including:

(1)    underlying drivers and context which are likely to affect many constituent components of its decision – an example is that forecast demand affects the efficient levels of capex and opex in the regulatory control period;

(2)    direct mathematical links between different components of a decision – examples are that the level of gamma has an impact on the appropriate tax allowance, and that the BEE's debt to equity ratio has a direct effect on the cost of equity, the cost of debt, and the overall vanilla rate of return (an illustrative sketch of this kind of link is set out after this list);

(3)    trade-offs between different components of revenue – an example is that undertaking a particular capex project may affect the need for opex or vice versa;

(4)    trade-offs between forecast and actual regulatory measures, that is, the reasons for one part of a proposal may have impacts on other parts of the proposal – an example is that an increase in augmentation to the network means the distributor has more assets to maintain, leading to higher opex requirements; and

(5)    the distributor’s approach to managing its network, as the distributor’s governance arrangements and its approach to risk management will influence most aspects of the proposal, including capex/opex trade-offs.
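
As a sketch only of the kind of direct mathematical link referred to in item (2) above, the overall vanilla rate of return is a weighted average of the return on equity and the return on debt, weighted by the benchmark gearing. The gearing and rates used below are assumptions adopted for the purpose of the arithmetic only, and are not figures drawn from the Final Decisions:

\[
r_{\text{vanilla}} = \frac{E}{V}\, r_{e} + \frac{D}{V}\, r_{d}
\]

where E/V and D/V are the benchmark equity and debt shares of total capital. On an assumed gearing of 60 per cent debt, with an assumed return on equity of 7.1 per cent and an assumed return on debt of 5.0 per cent, the vanilla rate of return would be 0.4 × 7.1 + 0.6 × 5.0 = 5.84 per cent; any change to the gearing, or to either component rate, therefore flows directly through to the overall rate of return.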

1203    That approach by the AER is a proper one. It is the approach the Tribunal takes. That is, the Tribunal considers whether, in the light of its conclusions on the merits of the grounds of review, the remittal of the matter to the AER will, or is likely to, result in a materially preferable NEO decision.

1204    The AER has drawn attention to the nature of the relevant inter-relationships in broad terms. As the AER did, the Tribunal has sought to identify in addressing the grounds of review those similar inter-relationships.

Consideration

1205    The real question, in the view of the Tribunal, is whether, by reason of the matters where it has found grounds of review made out, the balancing exercise which the AER carried out is, or may well be, erroneous. If it is, or is likely to be, then there is a very real risk that allowing the AER Final Decision concerning Ausgrid to stand will, or is likely to, have the adverse consequences to the long term interests of consumers to which the AER referred.

1206    Obviously, from the price perspective, the present issues raised by Networks NSW are not intended to reduce the price for the provision of electricity. PIAC's applications concerning Networks NSW are intended to have that effect. Pricing for the provision of electricity services is plainly a sensitive topic.

PIAC’s contentions

1207    For the reasons already given, the Tribunal has concluded that the AER’s approach to determining opex was erroneous. The nature of those errors is such as to have made it unnecessary to fully explore PIAC’s contentions regarding opex, namely whether the benchmark comparison point for Networks NSW should not have been lowered (as the AER did) but held to the weighted average of the upper quartile of the comparators, and whether there should be re-setting of the OEF adjustments adversely to Networks NSW.

1208    Those contentions, if accepted, would have reduced significantly the opex allowances by the following amounts (claimed by Networks NSW without specifying whether the figures were real or nominal): Ausgrid by $365m; Endeavour by $196m; and Essential by $291m.

1209    Clearly, that in turn would materially contribute to lowering the price to consumers for the provision of electricity services during the current regulatory period, and in the longer term.

1210    PIAC's contentions, however, were premised on the AER's primary approach being correct. With respect to opex, the Tribunal has not accepted that to be the case.

1211    As to the return on equity, the Tribunal has not concluded that the contentions of PIAC should lead it to set the equity beta at 0.5. That contention, if correct, would have set the return on equity at 5.8 percent. It would also have reduced the Networks NSW return on equity over the regulatory control period by the following amounts (claimed by Networks NSW without specifying whether the figures were real or nominal): Ausgrid by $485m, Endeavour by $196m, and Essential by $241m.
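
To illustrate only how a change in the equity beta translates into a return on equity of the order mentioned above, a CAPM-style formulation may be used. The risk-free rate and market risk premium figures below are assumptions adopted for the arithmetic, not findings of the Tribunal or parameters taken from the Final Decisions:

\[
r_{e} = r_{f} + \beta_{e} \times \text{MRP}
\]

On assumed inputs of a risk-free rate of 2.55 per cent and a market risk premium of 6.5 per cent, an equity beta of 0.7 would give a return on equity of 2.55 + 0.7 × 6.5 ≈ 7.1 per cent, while an equity beta of 0.5 would give 2.55 + 0.5 × 6.5 ≈ 5.8 per cent, which is of the order of the figure referred to in PIAC's contention.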

1212    Again, all else being equal, and putting aside any inter-relationships between elements of the revenue, those adjustments would substantially reduce the price to consumers for electricity.

1213    The Tribunal has also not concluded that, with respect to the return on equity, the AER’s relevant Final Decisions have exposed any ground of review as raised by Networks NSW, or by ActewAGL.

1214    As to the return on debt, again the grounds of review made out by Networks NSW have meant that PIAC’s particular point concerning the commencing year for the QTC methodology for the introduction of the trailing average approach has not needed to be determined. There were very significant potential consequences – on the assumption that the AER’s adoption of that transitional methodology was broadly correct but should have commenced at the earlier year – as the Networks NSW allowances for return on debt would have been reduced by the following amounts (claimed by Networks NSW without specifying whether the figures were real or nominal): Ausgrid by $706m, Endeavour by $288m, and Essential by $341m.

1215    If the other elements of the AER Final Decisions with respect to Networks NSW were not disturbed because no ground of review had been made out by Networks NSW, then it would be quite clear to the Tribunal that any one or more of PIAC's contentions with respect to those topics – if established – would have satisfied the Tribunal that the error should be corrected because its correction would have, or would be likely to have, resulted in a materially preferable NEO decision. The price to consumers would have been substantially reduced, and (on the assumption of no other grounds of review being made out) there would probably be no offsetting detriment to the long term interests of consumers with respect to the quality, safety, reliability or security of the supply of electricity or of the national electricity system. However, whether the AER or Networks NSW would have asserted and established before the AER (if the matter were remitted) or before the Tribunal (if it decided simply to vary the Final Decisions) some significant detriment or detriments to the long term interests of consumers which needed to be taken into account is not a matter which needs to be addressed.

Application of the prescribed test

1216    The AER has, as discussed above, identified the appropriate considerations required to address the making of a materially preferable NEO (or NGO) decision or, put alternatively, the parameters within which the materially preferable NEO (or NGO) decision lies.

1217    As directed by s 71P(2b)(d), the Tribunal does not regard any of the three particular matters there referred to as determinative.

1218    However, in the Networks NSW matters, the Tribunal’s conclusions on the grounds of review indicate that in significant respects the AER has formed its decision on foundations that are not properly established. Put another way, its decisions have been reached on complex factual bases and/or the exercise of discretions giving rise to very significant outcomes which, by reason of the Tribunal’s conclusions on the grounds of review, are not appropriate to support the ultimate decision of the AER.

1219    The Tribunal, in that light, is satisfied that it is appropriate to set aside the AER Networks NSW Final Decisions and to remit them to the AER under s 71P(2)(c) of the NEL.

1220    In that way, the AER will better identify the appropriate revenue during the current regulatory control period for those entities to achieve the level of quality, safety, reliability and security of supply of electricity and of the national electricity system in the long term interests of consumers, and will then also be in a better position to address the desirability of consumers not paying more than is necessary over the long term for those services. Those two elements are identified and addressed by the AER as noted above. The AER's analysis is also reflected in the comments of the Tribunal in ElectraNet (No 3) [2008] ACompT 3 at [15], [201] and [251].

1221    There are obviously significant inter-relationships between elements of the building blocks. Again, the AER has identified them, or some of them. To avoid a piecemeal approach, the Tribunal does not propose to restrict the AER by confining the remittal to a particular building block or building blocks. Moreover, because such significant building blocks are to be revisited, the Tribunal does not intend, when the AER is re-assessing the options under the NEO, to prevent the AER (subject to giving effect to the matters determined by, or directed by, the Tribunal) from revisiting those other matters where (as noted above) its approach involved, in its view, an outcome on particular matters towards the end of the range of options materially favourable to Networks NSW. Those matters were not quantified, and the AER does not present the case that they are of such magnitude that, taken as they are, they would offset the potentially adverse consequences of the established grounds of review for the purposes of the Tribunal's task under s 71P(2a) and (2b).

1222    Apart from those matters, which address s 71P(2b) and (2c)(b), it is desirable to add a little more about inter-relationships having regard to s 71P(2b)(a) and (2c)(a).

1223    Obviously, as noted in the body of these reasons for decision, there is a relationship between the allowance for opex and the decisions of the AER to suspend the operation of the EBSS for Ausgrid and Essential, and not to impose the penalty carry over amount for Endeavour in relation to the EBSS in respect of the previous regulatory period.

1224    The Tribunal notes that, in relation to Networks NSW, there are no other direct inter-relationships where a change in the method for quantifying one building block necessarily requires a change in another building block analysis.

1225    At the request of the Tribunal, the AER provided a "Table of Inter-relationships in its Draft and Final Decisions" for each of the network service providers. Inevitably, as senior counsel for the AER said, it was extensive and non-specific in its application because, at that time, the parties did not know whether, and if so which, grounds of review the Tribunal would determine to be made out. The Tribunal has had careful regard to that Table.

1226    It does not consider that the data in that Table goes to minimise or offset the very substantial (putative) consequences to Networks NSW of the grounds of review which have been made out. Those consequences are quantified in the "Networks NSW – Summary of revenue impacts", and in a related document concerning data source revenue impacts presented in the course of closing submissions. They are described by the Tribunal as "putative" simply because they represent the contentions of Networks NSW, but not necessarily the final outcomes which the AER might reach when it reconsiders its relevant Final Decisions. In any event, they have been treated by the Tribunal as indicative only. The precise figures are not critical to these decisions. The AER, in turn, presented a different version of "Estimated revenue outcomes by topic (without inter-relationships)" concerning each of the five network service providers. It incorporates changes which would flow from PIAC's contentions being accepted. The Tribunal does not express any view about the "correct" outcome, or the range of correct outcomes, following the AER's reconsideration.

Determination

1227    The Tribunal therefore makes the following determination:

(1)    Pursuant to s 71P(2)(c) of the NEL, the Final Determination is set aside and remitted to the AER to make the decision again in accordance with the following directions:

(a)    the AER is to make the constituent decision on opex under r 6.12.1(4) of the NER in accordance with these reasons for decision, including by assessing whether the forecast opex proposed by the applicant reasonably reflects each of the operating expenditure criteria in r 6.5.6(c) of the NER, using a broader range of modelling and benchmarking against Australian businesses, and including a "bottom up" review of Ausgrid's forecast operating expenditure;

(b)    the AER is to make the constituent decision on return on debt in relation to the introduction of the trailing average approach in accordance with these reasons for decision;

(c)    the AER is to make the constituent decision on the estimated cost of corporate income tax (gamma) in accordance with these reasons for decision, including by reference to an estimated cost of corporate income tax based on a gamma of 0.25; and

(d)    the AER is to consider, and to the extent to which it considers appropriate, to vary the Final Decision in such other respects as the AER considers appropriate having regard to s 16(1)(d) of the NEL, in the light of such variations as are made to the Final Decision by reason of (a)-(c) hereof.

A final observation

1228    The Tribunal wishes to record its appreciation to each of the applicants in these eight applications heard together, and the AER, and each of the interveners, and their respective counsel and solicitors, for the very extensive assistance provided to the Tribunal prior to and during the hearing.

1229    As is readily apparent, each of the parties, the AER and the interveners worked cooperatively to properly identify issues, to draw to the Tribunal's attention the material relevant to those issues, and to present their respective submissions. That meant that, in some instances, joint submissions were made by more than one party; in many instances submissions made on behalf of one party were adopted, and adapted to a particular entity, rather than repeated. The volume of the review related material was very extensive, and it appeared to the Tribunal that reference to it was appropriately selective and focused.

1230    Moreover, as all readily acknowledged, the task confronting the AER, and then the Tribunal, was a very large one having regard to the extensive 2012 Rule Amendments and the 2013 Legislative Amendments. The AER was required to adopt consultation procedures in relation to its Guidelines and its relevant Final Decisions in what was, in real terms, a fairly confined period. The Tribunal was constrained by a tight timetable to complete the hearing, allowing for its consultation and for the work of the parties and the AER in preparing for and participating in the hearing. The fact that the eight applications were able to be heard and determined in the time which has elapsed is in no small measure a consequence of the efforts of the parties, the AER and the interveners, and of the very considerable assistance they provided, without exception, to the Tribunal.

I certify that the preceding one thousand two hundred and thirty (1230) numbered paragraphs are a true copy of the Reasons for Decision herein of the Honourable Justice Mansfield, Mr R Davey and Dr D Abraham.

Associate:

Date: 26 February 2016