Part 4 Loss of Load Probability: A Historical Perspective

Published by

  • John D. Kueck and Brendan J. Kirby, Oak Ridge National Laboratory
  • Philip N. Overholt, U.S. Department of Energy
  • Lawrence C. Markel, Sentech, Inc.

Published in Measurement Practices for Reliability and Power Quality: A Toolkit of Reliability Measurement Practices, 2004

Prepared by Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6285, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725

Twenty years ago, when a distribution feeder recloser caused a momentary interruption to clear a fault, this was counted as a reliability improvement, since the barely noticeable flicker prevented an extended outage. Ten years ago, the recloser operation was regarded as an outage, as it interrupted electronic clocks, VCRs, personal computers (PCs), etc. Now, as appliances come equipped with “ride-through” capacitors and more PCs are using uninterruptible power supplies, the recloser operation may soon be classified again as a non-outage. For the reliability of 21st-century power systems, this development has two implications:

  • It may not be appropriate to require the utility alone to meet reliability and power quality criteria; the customer, too, must take some responsibility.
  • The “reliability” of electric service is a function of the loads served, as well as of the characteristics of the electricity provided.

Consider the historical use of loss-of-load probability (LOLP), which has been used for years as the single most important metric for assessing overall reliability. LOLP is a projected value of how much time, in the long run, the load on a power system is expected to be greater than the capacity of the generating resources. It is calculated using probabilistic techniques. In setting an LOLP criterion, the rationale is that a system strong enough to have a low LOLP can probably withstand most foreseeable outages, contingencies, and peak loads. A utility is expected to arrange for resources—generation, purchases, load management, etc.—so the resulting system LOLP will be at or below an acceptable level.
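As a rough sketch of how such a probabilistic calculation works, the following builds a capacity outage probability table and tallies the expected number of days per year on which load exceeds available capacity (a loss-of-load expectation). The four-unit fleet, forced outage rates, and load shape are all invented for illustration, not taken from the text:

```python
# Hypothetical generating fleet: (capacity in MW, forced outage rate).
units = [(200, 0.05), (200, 0.05), (150, 0.08), (100, 0.10)]

def capacity_outage_table(units):
    """Distribution of total available capacity, assuming
    independent unit outages: {available MW: probability}."""
    dist = {0.0: 1.0}
    for cap, fo_rate in units:
        new = {}
        for avail, p in dist.items():
            # Unit in service...
            new[avail + cap] = new.get(avail + cap, 0.0) + p * (1 - fo_rate)
            # ...or on forced outage.
            new[avail] = new.get(avail, 0.0) + p * fo_rate
        dist = new
    return dist

def lole(daily_peaks, dist):
    """Loss-of-load expectation: expected number of days per period
    on which load exceeds available generating capacity."""
    return sum(p for peak in daily_peaks
               for avail, p in dist.items() if avail < peak)

# One year of invented daily peak loads (MW), cycling between 480 and 538.
peaks = [480 + 60 * (d % 30) / 30 for d in range(365)]
dist = capacity_outage_table(units)
print(f"LOLE = {lole(peaks, dist):.3f} days/year")
```

Planning to the classic criterion would then mean adding resources (generation, purchases, load management) until this figure fell to the target level or below.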

LOLP is really not a probability but an expected value.³ It is sometimes calculated on the basis of the peak hourly load of each day, and sometimes on each hour’s load (24 in a day). As a result, the same system may be characterized by two or more values of LOLP, depending upon how LOLP is calculated. Moreover, LOLP is used to characterize the adequacy of generation to serve the load on the bulk power system; it does not model the reliability of the power delivery system—transmission and distribution—where the majority of outages actually occur.

The LOLP criterion is much like a rule of thumb to maintain a 25% reserve margin, but it is an improvement because it takes into account system characteristics such as generator reliability, load volatility, correlation of summer peak loads, and unit deratings. Thus, where one utility might function acceptably with a 25% reserve margin, another might survive with 20%, and still another might require 30% to maintain the same LOLP. Put another way, utilities planned to a common LOLP criterion should carry different reserve margins, because the same reserve margin would yield different levels of reliability in different systems.

The common practice was to plan the power system to achieve an LOLP of 0.1 days per year or less, which was usually described as “one day in ten years.” This description resulted from erroneously assuming the LOLP was a probability rather than an expected value, interpreting the 0.1 criterion as a probability of 0.1 per year that load would exceed supply, and simplifying this as “a probability of 0.1 per year results in an interruption every 10 years.” In addition to the definition error, there are several problems with this use of LOLP:

  • LOLP alone does not specify the magnitude or duration of the electricity shortage. As an expected value, it does not differentiate between one large shortfall and several small, brief ones.
  • Different LOLP calculation techniques can result in different indices for the same system. Some utilities calculate LOLP based on the hour of each day’s peak load (i.e., 365 computations), while others model every hour’s load (i.e., 8760 computations).

  • In fact, “one day in ten years” is not acceptable. The Northeast blackouts of 1965 and 2003 and the New York City blackout of 1977 resulted in major changes to power system planning and operating procedures to try to prevent their recurrence, even though they occurred more than ten years apart.
  • LOLP does not include additional emergency support that one control area or region may receive from another, or other emergency measures that control area operators can take to maintain system reliability.
  • Major loss-of-load incidents usually occur as a result of contingencies not modeled by the traditional LOLP calculation. Often, a major bulk power outage event is precipitated by a series of incidents, not necessarily occurring at the time of system peak (when the calculated risk is greatest).
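To illustrate the point above about calculation bases, the toy model below computes the index once from each day’s peak hour (365 terms) and once from all 8760 hourly loads. The sinusoidal load shape and the shortfall probabilities are entirely invented; the point is only that the two bases yield noticeably different numbers for the same system:

```python
import math

def p_shortfall(load_mw):
    """Hypothetical probability that available generation falls short
    of the given load, as read from an assumed capacity outage table."""
    if load_mw > 90:
        return 0.02
    if load_mw > 75:
        return 0.005
    return 0.0005

# Assumed sinusoidal daily load shape (MW), repeated for 365 days.
hourly = [60 + 35 * math.sin(math.pi * (h % 24) / 24) for h in range(8760)]
daily_peaks = [max(hourly[d * 24:(d + 1) * 24]) for d in range(365)]

lolp_daily = sum(p_shortfall(l) for l in daily_peaks)    # days/year
lolp_hourly = sum(p_shortfall(l) for l in hourly) / 24   # hours -> days/year

print(f"daily-peak basis: {lolp_daily:.2f} days/yr, "
      f"hourly basis: {lolp_hourly:.2f} days/yr")
```

Because the daily-peak basis charges the full peak-hour risk to the whole day, it reports a higher index than the hourly basis here, even though both describe exactly the same system.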

All of these problems stem from a further misunderstanding of the meaning of reliability indices, such as LOLP or frequency and duration. LOLP is an index, a surrogate indicator of the robustness of the bulk power system. The vertically structured utility will build generation or enter into power purchase contracts to achieve the required LOLP, but LOLP is not necessarily an accurate predictor of the resulting incidence of electricity shortages.

In the vertically structured utility industry, typical guidelines for prudent planning were LOLP of 0.1 or less and ability to withstand the single maximum credible multiple contingency (or the worst two or three) at the time of heaviest load.

These were accepted as best practices of the time; however, critics charged that utilities were “gold-plating” their systems by building too much capacity, arguing that the level of reserves or redundancy the utilities provided was not cost-justified compared with the costs of outages. In the waning years of traditional regulation, generation reserves were declining; in the restructured utility environment they have now greatly increased in many regions. Transmission reserves, which remain regulated, are still declining.


3. IEEE, “Probability Analysis of Power System Reliability,” IEEE Tutorial, Course Text 71 M30-PWR, 1971.

Published by PQBlog
