Part 5 IEC Standards Description and Discussion

Published by

  • John D. Kueck and Brendan J. Kirby, Oak Ridge National Laboratory
  • Philip N. Overholt, U.S. Department of Energy
  • Lawrence C. Markel, Sentech, Inc.

Published in Measurement Practices for Reliability and Power Quality: A Toolkit of Reliability Measurement Practices, 2004

Prepared by Oak Ridge National Laboratory Oak Ridge, Tennessee 37831-6285 managed by UT-BATTELLE, LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725


The International Electrotechnical Commission (IEC) (www.iec.ch) is an organization based on a structure of national committees that appoint experts to IEC working groups to develop standards and other documents (e.g., recommended practices, guidelines). There are about 100 such working groups. The working group documents are reviewed and approved by the participating national committees on a one-country, one-vote basis.

Among the documents issued by IEC working groups are international standards, technical specifications, technical reports, and industry technical agreements. International standards issued by IEC are publications resulting from international consensus or approval. IEC technical specifications are similar to standards but have not yet obtained the required consensus, or they cover practices which the working group feels are premature to standardize. IEC technical reports contain data from surveys or delineate the “state of the art” in a technical area. The IEC has also adopted a procedure for industry technical agreements (ITAs), which are platforms for reaching technical agreements among key industry organizations in time-critical market sectors. ITAs are intended to be used by industry in high-tech areas where international consensus standards are not needed immediately. IEC technical committees developing power quality-related standards are

  • SC37A—Low-voltage surge protective devices
  • SC77A—EMC–Low-frequency phenomena
  • TC64—Electrical installations
  • TC81—Lightning protection

The IEC’s strength is that its standards result from multinational input and are the result of international consensus. However, IEC standards are often different from, and sometimes incompatible with, U.S. standards developed by ANSI, IEEE, the National Fire Protection Association, or other U.S. code-making bodies. Table 1 shows the correspondence between IEC and some other power quality standards.

Table 1. Correspondence Between IEEE, ANSI, and IEC power quality standards

  Disturbance             IEEE                    IEC
  Harmonic environment    None                    IEC 1000-2-1/2
  Compatibility limits    IEEE 519                IEC 1000-3-2/4 (555)
  Harmonic measurement    None                    IEC 1000-4-7/13/15
  Harmonic practices      IEEE 519A               IEC 1000-5-5
  Component heating       ANSI/IEEE C57.110       IEC 1000-3-6
  Sag environment         IEEE 1250               IEC 38, 1000-2-4
  Compatibility limits    IEEE P1346              IEC 1000-3-3/5 (555)
  Sag measurement         None                    IEC 1000-4-1/11
  Sag mitigation          IEEE 446, 1100, 1159    IEC 1000-5-X
  Fuse blowing/upsets     ANSI C84.1              IEC 1000-2-5
  Surge environment       ANSI/IEEE C62.41        IEC 1000-3-7
  Compatibility levels    None                    IEC 1000-3-X
  Surge measurement       ANSI/IEEE C62.45        IEC 1000-4-1/2/4/5/12
  Surge protection        C62 series, 1100        IEC 1000-5-X
  Insulation breakdown    By product              IEC 664
Source: EPRI–Power Electronics Application Center

Part 4 Loss of Load Probability: A Historical Perspective



Twenty years ago, when a distribution feeder recloser caused a momentary interruption to clear a fault, this was counted as a reliability improvement, since the barely noticeable flicker prevented an extended outage. Ten years ago, the recloser operation was regarded as an outage, as it interrupted electronic clocks, VCRs, personal computers (PCs), etc. Now, as appliances come equipped with “ride-through” capacitors and more PCs are using uninterruptible power supplies, the recloser operation may soon be classified again as a non-outage. For the reliability of 21st-century power systems, this development has two implications:

  • It may not be appropriate to require the utility alone to meet reliability and power quality criteria; the customer, too, must take some responsibility.
  • The “reliability” of electric service is a function of the loads served, as well as of the characteristics of the electricity provided.

Consider the historical use of loss-of-load probability (LOLP), which has been used for years as the single most important metric for assessing overall reliability. LOLP is a projected value of how much time, in the long run, the load on a power system is expected to be greater than the capacity of the generating resources. It is calculated using probabilistic techniques. In setting an LOLP criterion, the rationale is that a system strong enough to have a low LOLP can probably withstand most foreseeable outages, contingencies, and peak loads. A utility is expected to arrange for resources—generation, purchases, load management, etc.—so the resulting system LOLP will be at or below an acceptable level.


Loss-of-load probability characterizes the adequacy of generation to serve the load on the system. It does not model the reliability of the transmission and distribution system where most outages occur.


LOLP is really not a probability but an expected value.3 It is sometimes calculated on the basis of the peak hourly load of each day, and sometimes on each hour’s load (24 in a day). As a result, the same system may be characterized by two or more values of LOLP, depending upon how LOLP is calculated. Moreover, LOLP is used to characterize the adequacy of generation to serve the load on the bulk power system; it does not model the reliability of the power delivery system—transmission and distribution—where the majority of outages actually occur.
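To make the "expected value, not probability" point concrete, here is a minimal sketch of a loss-of-load expectation calculation. The three-unit system, forced outage rates, and load profile are all invented for illustration; real studies use far larger capacity models:

```python
import itertools

# Hypothetical generating system: (capacity in MW, forced outage rate).
units = [(200, 0.05), (150, 0.08), (100, 0.04)]

def capacity_outage_table(units):
    """Probability of each available-capacity level, enumerating unit up/down states."""
    table = {}
    for states in itertools.product((True, False), repeat=len(units)):
        cap = sum(c for (c, _), up in zip(units, states) if up)
        prob = 1.0
        for (_, fo), up in zip(units, states):
            prob *= (1 - fo) if up else fo
        table[cap] = table.get(cap, 0.0) + prob
    return table

def lole(loads, units):
    """Loss-of-load expectation: sum over periods of P(capacity < load).

    With one load value per day this yields days/year; with one per hour,
    hours/year -- the same system gets different numbers on each basis.
    """
    table = capacity_outage_table(units)
    return sum(p for load in loads for cap, p in table.items() if cap < load)

# Daily-peak basis: 365 values, giving expected days/year of shortfall.
daily_peaks = [300] * 20 + [240] * 345
print(round(lole(daily_peaks, units), 3))
```

Note that the result is an expectation, not a probability: the index simply accumulates each period's shortfall probability, regardless of whether the risk clusters into one large event or many small ones.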

The LOLP criterion is much like a rule of thumb of maintaining a 25% reserve margin, but it is an improvement because it takes into account system characteristics such as generator reliability, load volatility, correlation of summer peak loads, and unit deratings. Where one utility might function acceptably with a 25% reserve margin, another might survive with 20%, and still another might require 30% to maintain the same LOLP. In other words, utilities planned to a common LOLP criterion rather than to a fixed reserve margin should carry different reserve margins, because the same reserve margin would produce different levels of reliability on different systems.

The common practice was to plan the power system to achieve an LOLP of 0.1 days per year or less, which was usually described as “one day in ten years.” This description resulted from erroneously assuming the LOLP was a probability rather than an expected value, interpreting the 0.1 criterion as a probability of 0.1 per year that load would exceed supply, and simplifying this as “a probability of 0.1 per year results in an interruption every 10 years.” In addition to the definition error, there are several problems with this use of LOLP:

  • LOLP alone does not specify the magnitude or duration of the electricity shortage. As an expected value, it does not differentiate between one large shortfall and several small, brief ones.
  • Different LOLP calculation techniques can result in different indices for the same system. Some utilities calculate LOLP based on the hour of each day’s peak load (i.e., 365 computations), while others model every hour’s load (i.e., 8760 computations).

  • In fact, “one day in ten years” is not acceptable. The Northeast blackouts of 1965 and 2003 and the New York City blackout of 1977 resulted in major changes to power system planning and operating procedures to try to prevent their recurrence, even though they occurred more than ten years apart.
  • LOLP does not include additional emergency support that one control area or region may receive from another, or other emergency measures that control area operators can take to maintain system reliability.
  • Major loss-of-load incidents usually occur as a result of contingencies not modeled by the traditional LOLP calculation. Often, a major bulk power outage event is precipitated by a series of incidents, not necessarily occurring at the time of system peak (when the calculated risk is greatest).

All of these problems stem from a further misunderstanding of the meaning of reliability indices, such as LOLP or frequency and duration. LOLP is an index, a surrogate indicator of the robustness of the bulk power system. The vertically structured utility will build generation or enter into power purchase contracts to achieve the required LOLP, but LOLP is not necessarily an accurate predictor of the resulting incidence of electricity shortages.

In the vertically structured utility industry, typical guidelines for prudent planning were LOLP of 0.1 or less and ability to withstand the single maximum credible multiple contingency (or the worst two or three) at the time of heaviest load.

These were accepted as best practices of that time; however, there was criticism that utilities were “gold-plating” their systems by building too much capacity. The feeling was that the level of reserves or redundancy provided by the utilities was not cost-justified compared with the costs of outages. Generation reserves were declining under regulation; in the restructured utility environment, they have now greatly increased in many regions. Transmission reserves, which remain regulated, are still declining.

Reference

3. IEEE, “Probability Analysis of Power System Reliability,” IEEE Tutorial, Course Text 71 M30-PWR, 1971.

Part 3 Power Quality: Definition and Discussion



The IEEE Standard Dictionary of Electrical and Electronics Terms defines power quality as “the concept of powering and grounding sensitive electronic equipment in a manner that is suitable to the operation of that equipment.” Power quality may also be defined as “the measure, analysis, and improvement of bus voltage, usually a load bus voltage, to maintain that voltage to be a sinusoid at rated voltage and frequency.”2

Today’s electronic loads are susceptible to transients, sags, swells, harmonics, momentary interruptions, and other disturbances that historically were not cause for concern. For sensitive loads, the quality of electric service has become as important as its reliability. Power quality, as a distinct engineering concern, is a relatively new phenomenon: events such as voltage sags, impulses, harmonics, and phase imbalance are now power quality concerns with a huge economic impact. As a result, any discussion of power system reliability must also include power quality.

The body of literature on reliability indices and calculation techniques represents a fairly mature discipline. In contrast, power quality references are works in progress, often revised and frequently outdated. There are several reasons for this:

  • Reliability and availability describe clearly defined events—loss of power. Power quality incidents are often momentary—a fraction of a cycle—and hard to observe or diagnose. Power quality measurement devices had to be developed so that the phenomena could be observed before power quality could be analyzed.
  • The growing digital load, and the increased sensitivity of some of these loads, means that the definition of a power quality incident keeps changing. Ten years ago, a voltage sag might have been classified as a drop of 40% or more for 60 cycles; now it may be a drop of 15% for 5 cycles.
  • The constituencies concerned with power quality are very diverse—utilities, regulators, facilities managers, equipment manufacturers, electrical engineers, electrical inspectors, building designers, electricians. All these groups have different definitions, objectives, responsibilities, criteria, and levels of sophistication in measurement and modeling capabilities.
  • Power quality often involves safety issues (e.g., grounding and elevated neutral voltages) that were never part of reliability assessment.
  • Power quality involves design issues, such as the stiffness of the user’s distribution system, that did not have such an impact on operational reliability before.
  • Power quality problems can easily cause losses in the billions of dollars, and an entire new industry has recently grown up to diagnose and correct these problems.
  • Often, power quality problems can best be addressed with local corrective actions, and these local devices are undergoing a revolution themselves, with changes occurring rapidly.

There are many measures and indices of power quality. Some of the more common indices are the following:

  • Total harmonic distortion (THD): The ratio of the RMS value of the harmonic content (the square root of the sum of the squares of the individual harmonic amplitudes) to the RMS value of the fundamental frequency component
  • K factor: The sum of the squares of the products of the individual harmonic currents and their harmonic orders, divided by the sum of the squares of the individual harmonic currents
  • Crest factor: The ratio of a waveform’s peak or crest value to its RMS voltage or current
  • Flicker: A perceptible change in electric light source intensity due to a fluctuation of input voltage. It is defined as the change in voltage divided by the average voltage, expressed as a percent. This ratio is plotted vs the number of changes per minute to develop a “flicker curve.”
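As a sketch, the first three indices can be computed directly from a harmonic spectrum or a sampled waveform. The spectrum below is invented for illustration (RMS amplitude per harmonic order, with order 1 the fundamental):

```python
import math

# Hypothetical measured spectrum: {harmonic order: RMS amplitude}.
harmonics = {1: 120.0, 3: 9.6, 5: 7.2, 7: 3.6}

def thd(spectrum):
    """THD: RMS of the harmonic content divided by the fundamental RMS."""
    fundamental = spectrum[1]
    harmonic_rms = math.sqrt(sum(a ** 2 for h, a in spectrum.items() if h > 1))
    return harmonic_rms / fundamental

def k_factor(spectrum):
    """K factor: sum((Ih * h)^2) / sum(Ih^2) over all harmonic orders h."""
    num = sum((a * h) ** 2 for h, a in spectrum.items())
    den = sum(a ** 2 for a in spectrum.values())
    return num / den

def crest_factor(samples):
    """Crest factor: waveform peak divided by its RMS value."""
    rms = math.sqrt(sum(s ** 2 for s in samples) / len(samples))
    return max(abs(s) for s in samples) / rms

# Sanity check: a pure sine wave has crest factor sqrt(2).
sine = [math.sin(2 * math.pi * n / 256) for n in range(256)]
print(round(crest_factor(sine), 3))   # → 1.414
```

Flicker is omitted here because it is defined against a perception curve (changes per minute) rather than a single waveform statistic.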

There are many more indices and definitions of power quality. A list of power quality standards is provided in Appendix B. The definitions are rapidly changing, and often quite specialized in their application. For example, there is a harmonic voltage factor for motor application that is similar to, but not the same as, THD. The motor harmonic voltage factor is defined in NEMA Standard MG-1, a standard for motors and generators.

Typically, electrical engineers who work in various fields of electrical engineering—power systems, communications, computers—are familiar with the indices and definitions that pertain to their particular disciplines. There is also a new group of consultants who deal exclusively with power quality problems. Some industries are also developing their own standards for power quality; these are discussed further in Appendix F.

Some power quality problems reach customers’ loads through the utility distribution system, and some are caused by the customers themselves. Many problems originate with one customer and travel through the distribution system, and even the transmission system, to affect other customers. Some manufacturers are now equipping their products with filters and short-term storage devices so that they will be immune to many power quality problems. Local solutions to power quality problems tend to be the most cost-effective.

Knowing what power quality to expect from the power supplier is critical to designing power quality tolerance in end-use equipment, thus benefiting the customer, the utility, and the equipment manufacturer.

Reference

2. Gerald Heydt, Electric Power Quality, Stars in a Circle Publications, December 1991.

Appendix B Power Quality Standards, Guidelines, and Measurement

The following is a list and brief synopsis of many of the standards for power quality. Again, this list is not intended to capture every single standard, but rather the significant ones that are often invoked or referenced currently.

IEEE Std. 141-1995: Red Book IEEE Recommended Practices for Electric Power Distribution for Industrial Plants
Organization: IEEE
Targeted industry segment: Industrial and commercial facilities
Limitations: This standard is directed more towards good electrical power engineering practice and is not focused on power quality.
Strengths: A thorough analysis of basic electrical systems is presented. Guidance is provided in design, construction, and continuity of the overall system.
Other: Recommendations are made regarding system planning, voltage considerations, surge voltage protection, flicker, protective devices, grounding, and other issues.

IEEE Std. 519-1992: IEEE Recommended Practices and Requirements for Harmonic Control in Electrical Power Systems
Organization: IEEE
Targeted industry segment: Industrial and commercial power systems with non-linear loads
Limitations: This standard addresses steady-state operation only, and not transients.
Strengths: The practices are used for guidance in the design of power systems with non-linear loads, such as adjustable speed drives and uninterruptible power supplies.
Other: The standard also discusses power system response characteristics, the effects of harmonics, methods for harmonic control, and recommended limits.

IEEE Std. 1159-1995: IEEE Recommended Practice for Monitoring Electric Power Quality
Organization: IEEE
Targeted industry segment: Utilities and industrial and commercial power systems
Limitations: This standard deals with electromagnetic disturbances such as voltage dips, notching, oscillatory transients, and frequency variations.
Strengths: The standard provides guidance in the monitoring, classification, and correction of a wide range of steady state and transient phenomena. It defines disturbances in 24 categories of typical characteristics of power system electromagnetic phenomena.
Other: The standard provides guidance on troubleshooting, interpreting data, analysis tips, and verifying the solution. Some IEEE groups developing sections of this standard are the following:

  • IEEE P1159.1 Task Force on Recorder Qualification and Data Acquisition Requirements for Characterization of PQ Events
    This task force is developing the Guide for Recorder and Data Acquisition Requirements for Characterization of Power Quality Events. This guide will establish the data acquisition attributes necessary to characterize the electromagnetic phenomena listed in Table 2 of IEEE Std. 1159-1995. The guide includes definitions (in conjunction with P1433), instrumentation categories, and technical requirements that are related to the type of disturbance to be recorded. The objective of this guide is to describe the technical measurement requirements for each type of disturbance in Std 1159-1995. Measurement requirements of these types of disturbances are not currently covered by other standards.
  • IEEE P1159.2 Task Force on Characterization of a Power Quality Event
    This task force is developing a recommended practice for converting a suitably sampled voltage and current data set into specific power quality categories. Appropriate definitions, categories, and sampling rates are being developed by other task forces. The emphasis is on compatibility between power delivered by power suppliers and power needed by equipment manufacturers. The translation from sets of digital data to statistically comparable events could be used for comparing power suppliers, comparing susceptibility qualities of equipment, and evaluating performance vs specification or contract.
  • IEEE P1159.3 Task Force on the Transfer of Power Quality Data
    This task force is developing a recommended practice for a file format suitable for exchanging power-quality-related measurement and simulation data in a vendor-independent manner. (Definitions and event categories are being developed by other task forces.) Many simulations and measurement and analysis tools for power quality engineers are available from numerous vendors. Generally, the data created, measured, and analyzed by these tools are incompatible among vendors. The proposed file format will provide a common ground to which all vendors could export and from which they could import to allow the end user maximum flexibility in choice of tool and vendor.

IEEE Std. 1100-1999: Recommended Practice for Powering and Grounding Electronic Equipment (Emerald Book)
Organization: IEEE
Targeted industry segment: Industrial and commercial electric power distribution systems.
Limitations: This standard is directed toward the designers and users of industrial power systems. It is not intended for utility distribution or transmission systems.
Strengths: The standard provides information on the electrical environment, conducting power surveys, monitoring and metering equipment, power conditioning equipment, and wiring and grounding for power quality. Historically, most power quality problems in an industrial environment resulted from improper wiring or grounding. The standard has been revised to include additional information on the sensitivity of industrial environments.

IEEE Std. 1250-1995: IEEE Guide for Service to Equipment Sensitive to Momentary Voltage Disturbances
Organization: IEEE
Targeted industry segment: Utilities and industrial and commercial power systems.
Limitations: This guide is limited to identifying potential problems and suggesting effective ways to resolve voltage problems affecting sensitive equipment.
Strengths: The guide describes many common problems, such as capacitor switching, motor starting, and tap changing.
Other: The guide provides data on various types of sensitive loads―including computers, process control, and adjustable speed drives―and suggests solutions and measures, such as grounding, circuit design, and surge protection.

IEEE Std. 1346-1998: IEEE Recommended Practice for Evaluating Electric Power System Compatibility with Electronic Process Equipment
Organization: IEEE
Targeted industry segment: Utilities and industrial and commercial power systems.
Limitations: This standard deals with planning and designing a power supply system so that compatibility issues with electronic process equipment are resolved.
Strengths: The standard provides guidance in methods for analysis of power systems in evaluating the compatibility of service quality with the equipment that uses the electricity. This standard addresses the issue of how service quality affects the end user. The standard provides worksheets for estimating the number of disruptions, the financial loss, and the analysis of alternatives.
Other: This first edition of the standard provides a methodology for voltage sags; later editions will deal with issues such as harmonics and transients. The purpose of this document is to recommend a standard methodology for the technical and economic analysis of compatibility of process equipment with the electric power system. The emphasis is on the new digital loads of microprocessors and power electronics equipment. This document does not intend to set performance limits for utility systems, power distribution systems, or electronic process equipment. Rather, it shows how the performance data for each of these entities can be analyzed to evaluate their compatibility in economic terms. The recommended methodology will also provide standardization of methods, data, and performance of power systems and equipment in evaluating compatibility so that compatibility can be discussed from a common frame of reference. The methodology is intended to be applied at the planning or design stage of a system; consequently, it does not discuss troubleshooting or correcting existing power quality problems. [See http://grouper.ieee.org/groups/1346/index.html, IEEE Electric Power System Compatibility with Electronic Process Equipment (P1346).]

IEEE Std. P1564: IEEE Recommended Practice for the Establishment of Voltage Sag Indices
Organization: IEEE
Targeted industry segment: Utilities and industrial and commercial power systems.
Limitations: This is a draft standard in preparation. It will provide sag indices to indicate the different performance levels at the transmission, substation, and distribution circuit levels.
Strengths: The standard will provide guidance in characterizing sags in terms of indices.
Other: The standard should help utilities and manufacturers compute the advantages and disadvantages of various connections to the electrical system.

CBEMA Curve and IEEE Standard 446-1995: IEEE Recommended Practice for Emergency and Standby Power Systems for Industrial and Commercial Applications (Orange Book)
Organization: Computer Business Equipment Manufacturers Association (CBEMA) and IEEE
Targeted industry segment: Computer manufacturers, building electrical system designers.
Limitations: The standard gives criteria for the tolerance of computer equipment to voltage variations. The CBEMA curve has been applied to many types of electronic equipment and needs to be updated to reflect the current state of the art of electronic equipment.
Strengths: This is a widely accepted and recognized standard.
Other: The CBEMA Curve is a part of IEEE 446. Cognizant industry groups are involved in updating the curve and specifying the situations where it is applicable. The CBEMA curve is discussed further in Chapter 3 of this report under “A Basic Generally Accepted Level of Reliability.”

Appendix F Industry Initiatives to Define Power Quality: Discussion of the SEMI, CBEMA, and ITIC Curves

Semiconductors Manufacturers’ Institute

One industry that has established its own specific level of power quality is the semiconductor manufacturing industry. The Semiconductors Manufacturers’ Institute (SEMI) has developed a power quality need curve that shows the minimum voltage vs time its equipment is expected to ride through (Figure F.1). With this curve, semiconductor manufacturers can specify tools, adjustable speed drives, controllers, etc., that are designed to function during power quality events. The manufacturers can also specify the distributed energy resources (DER) needed to ensure that they can ride through events worse than those covered by the need curve.

Figure F.1. Semiconductors Manufacturers’ Institute provisional specification for voltage sag ride-through capability.

SEMI #2844 is the ride-through limit curve for semiconductor tools

  • The curve was developed from an analysis of 30 monitor years of disturbance data collected at major semiconductor sites.
  • The proposed curve should result in less than one event per site per year.
  • However, the curve requires 80% voltage at the longer durations, 1 second to 10 seconds.
  • The curve is based on minimal use of energy storage devices; instead, it suggests the careful selection of devices such as tools, relays, and power supplies.
  • The curve assumes direct connection to transmission; connection through a distribution feeder may result in sags and durations that would fail to meet the curve.
  • DER will enable the curve to be met in locations where direct connection to transmission is not possible.

The CBEMA curve is the power quality acceptability level defined by a group of computer manufacturers

  • The CBEMA curve requires a return to 90% voltage after one minute.
  • The CBEMA curve is more restrictive for the first 12 cycles.
  • DER with energy storage and a power electronics interface could easily enable any manufacturing facility to meet either curve.
  • The SEMI curve was created because so much SEMI equipment could not meet the CBEMA curve.

Figure F.2 is a histogram of sag and interruption rate magnitude. The solid line indicates the CBEMA acceptance limit, and the light blue dashed line indicates the SEMI 2844 acceptance limit. (This figure is from the EPRI Distribution Power Quality Study.) The numbers in the cells indicate the probability of a sag of that voltage and duration. These are based on 1-minute aggregations from 6/1/93 to 6/1/95. The SEMI curve is not as restrictive as the CBEMA limit.

Figure F.2. A histogram of sag and interruption rate magnitude. The solid line indicates the CBEMA acceptance limit and the dashed line, the SEMI 2844 acceptance limit. The numbers in the cells indicate the probability of a sag of that voltage and duration, based on 1-minute aggregations from 6/1/93 to 6/1/95. Source: EPRI Distribution Power Quality Study.

Information Technology Industry Council Curve

ITIC, the successor organization to CBEMA, has developed the ITIC Curve, a recommended capability curve for single-phase data processing equipment operating at 120 V. The ITIC Curve is easier to reproduce graphically and requires improved ride-through capability for minor voltage sags (Figure F.3). However, the curve is still general in nature and does not reflect typical performance for any particular type of equipment.
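A ride-through curve of this kind lends itself to a simple programmatic check of whether a recorded sag falls inside the tolerance envelope. The breakpoints below are illustrative placeholders only, not the published ITIC or SEMI values; the official curve data would have to be substituted before any real use:

```python
# Each entry: (upper duration bound in seconds, minimum % of nominal voltage
# that must be maintained for sags up to that duration). Illustrative only.
CURVE = [(0.02, 0.0), (0.5, 70.0), (10.0, 80.0), (float("inf"), 90.0)]

def rides_through(voltage_pct, duration_s, curve=CURVE):
    """True if a sag to voltage_pct lasting duration_s stays within the envelope."""
    for max_duration, min_voltage in curve:
        if duration_s <= max_duration:
            return voltage_pct >= min_voltage
    return False  # beyond the last segment: treat as out of envelope

print(rides_through(75.0, 0.1))   # 75% of nominal for ~6 cycles -> True
print(rides_through(75.0, 2.0))   # the same depth held for 2 s  -> False
```

A monitoring system can apply the same check to every recorded (magnitude, duration) pair to count envelope violations, which is essentially how the cell counts in Figure F.2 are compared against the CBEMA and SEMI limits.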

Figure F.3. ITIC curve defining voltage sag ride-through design goals for manufacturers of information technology equipment (applies to single-phase 120/240-V equipment).

Part 2 Reliability: Definition and Discussion




In brief, reliability has to do with total electric interruptions, that is, complete loss of voltage, not just deformations of the electric sine wave. Reliability does not cover sags, swells, impulses, or harmonics. Reliability indices typically consider such aspects as

  • the number of customers;
  • the connected load;
  • the duration of the interruption measured in seconds, minutes, hours, or days;
  • the amount of power (kVA) interrupted; and
  • the frequency of interruptions.

Power reliability can be defined as the degree to which the performance of the elements in a bulk system results in electricity being delivered to customers within accepted standards and in the amount desired. The degree of reliability may be measured by the frequency, duration, and magnitude of adverse effects on the electric supply.1

There are many indices for measuring reliability. The three most common are referred to as SAIFI, SAIDI, and CAIDI, defined in IEEE Standard 1366 (see Appendix A).

  • SAIFI, or system average interruption frequency index, is the average frequency of sustained interruptions per customer over a predefined area. It is the total number of customer interruptions divided by the total number of customers served.
  • SAIDI, or system average interruption duration index, is commonly referred to as customer minutes of interruption or customer hours, and is designed to provide information as to the average time the customers are interrupted. It is the sum of the restoration time for each interruption event times the number of interrupted customers for each interruption event divided by the total number of customers.
  • CAIDI, or customer average interruption duration index, is the average time needed to restore service to the average customer per sustained interruption. It is the sum of customer interruption durations divided by the total number of customer interruptions.

A reliability index that considers momentary interruptions is MAIFI, or momentary average interruption frequency index.

  • MAIFI is the total number of customer momentary interruptions divided by the total number of customers served. Momentary interruptions are defined in IEEE Std. 1366 as those that result from each single operation of an interrupting device such as a recloser.
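The arithmetic behind these definitions is straightforward. The sketch below applies the IEEE Std. 1366 formulas to a hypothetical set of sustained-interruption records (all numbers invented for illustration):

```python
# Sustained-interruption records: (customers interrupted, restoration minutes).
events = [(500, 90), (1200, 45), (300, 180)]
total_customers = 10_000

customer_interruptions = sum(n for n, _ in events)
customer_minutes = sum(n * t for n, t in events)

saifi = customer_interruptions / total_customers   # interruptions per customer
saidi = customer_minutes / total_customers         # minutes per customer
caidi = customer_minutes / customer_interruptions  # minutes per interruption

# Note the identity CAIDI = SAIDI / SAIFI.
print(saifi, saidi, caidi)   # → 0.2 15.3 76.5
```

MAIFI follows the same pattern as SAIFI, with momentary rather than sustained interruption counts in the numerator.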

The major drawback to reliability metrics is that there is a great deal of debate about comparing these indices from one geographic area to another and exactly how the input data is to be applied in making the calculations. This is discussed further in Chapter 6, “Pitfalls in Methods for Reliability Index Calculation.” In addition, there are concerns about how to “normalize” the indices for adverse weather. Many state public utility commissions require utilities to compute and track certain reliability indices, but comparing them from region to region and utility to utility has been problematic due to differences in how the data is applied, system designs, weather differences, and even differences in vegetation growth. Because of this, the indices are limited in their usefulness. If the calculation method is kept the same, they are useful within a specific geographic area in evaluating changes in reliability over time, perhaps as a measurement of the effectiveness of maintenance practices.

Reference

  1. Electric Power Research Institute, Dynamics of Interconnected Power Systems, A Tutorial for System Dispatchers and Plant Operators, prepared by Power Technologies, Inc., Schenectady, N.Y., for the Electric Power Research Institute, Palo Alto, Calif., May 1989.

Appendix A Terms and Definitions of Reliability

Major Sources for Terms and Definitions

The following is a list and brief synopsis of many of the major sources for terms and definitions of reliability and service quality. This list is not intended to capture every single definition, but rather, the important ones that are in common use or practice today.

IEEE Tutorial Course: “Probability Analysis of Power System Reliability,” Course Text 71 M30-PWR (1971)

Organization: The Institute of Electrical and Electronics Engineers (IEEE)
Targeted industry segment: Utility power system planners—bulk power and distribution systems

Limitations: This defines the terms and introduces the calculation techniques for power system reliability indices, but it does not address variations in methods for calculating the indices. As an older reference, it is not current on some reliability issues, such as demand-side management. Power quality is not addressed.

Strengths: It introduces concepts of power system reliability and provides consensus definitions of reliability terms and indices.

IEEE Std. 1366-1998: Trial Use Guide for Electric Power Distribution Reliability Indices

Organization: IEEE

Targeted industry segment: Utility distribution systems and substations and defined regions

Limitations: This standard deals primarily with interruptions over one minute and defines such indices as SAIFI (system average interruption frequency index), SAIDI (system average interruption duration index), and CAIDI (customer average interruption duration index). The indices are concerned with both the duration and frequency of interruption.

Strengths: These are the indices that are commonly reported in reliability surveys. The standard shows the mathematical definition of the indices and gives examples of their calculation.

Other: Although these are the most common indices, they do not include the voltage sag and dip disturbances that are so troubling to digital equipment. Also, there is a great deal of debate about comparing these indices from one geographic area to another, because rural areas, or areas with high lightning activity, are expected to have a higher number of outages than densely populated urban areas with network distribution systems, for example.

IEEE Std. 762: Definitions for Use in Reporting Electric Generating Unit Reliability, Availability and Productivity

Organization: IEEE

Targeted industry segment: Utility power system planners

Limitations: Does not treat intermittent generation, as from renewable sources (wind, solar), according to the generally accepted method.

Strengths: Defines and classifies parameters for use in generation availability reporting. Terms such as “maximum capacity,” “planned derating,” and “forced outage hours” are defined.

IEEE Std. 859-1987: Standard Terms for Reporting and Analyzing Outage Occurrences and Outage States of Electrical Transmission Facilities

Organization: IEEE

Targeted industry segment: Transmission system modeling and reliability evaluation

Limitations: The standard provides outage definitions and indices that are intended for use in system planning models, operations and maintenance planning, and system design. It is not intended to provide guidance on how to perform quantitative evaluations of system reliability, and it provides no guidance other than to give definitions for key indices.

Strengths: The standard provides the definitions of a number of common transmission indices such as “outage rate,” “failure rate,” and “mean time to outage.”

Other: The standard also provides definitions for terms such as “component availability” and “probability of failure to operate on command.” These indices are used by engineers involved in analyzing and predicting outages of transmission facilities.

IEEE Std. 493-1997: Recommended Practice for Design of Reliable Industrial and Commercial Power Systems (IEEE Gold Book)

Organization: IEEE

Targeted industry segment: Industrial and commercial electric power distribution systems. This standard could be used for microgrids.

Limitations: This standard is directed toward the designers and users of industrial power systems. It is not intended for utility distribution or transmission systems.

Strengths: The standard provides historical component reliability data that can be used in a quantitative evaluation of the industrial power system to compute load interruption frequency, expected duration of load interruption events, total expected interruption time per year, and system availability as measured at a specific load supply point.

Other: The method used for the evaluation is the minimal-cut-set method. System weak points can readily be identified.

Electricity Distribution Price Review—Reliability Service Standards, working document to benchmark Australian utility service quality

Organization: Office of the Regulator General, Australia

Targeted industry segment: Australian utilities and their customers

Limitations: This is an ongoing activity; therefore, the standards being developed are not yet available to the general public.

Strengths: The objective of this activity is to develop performance-based rates, including both reliability and service quality, by applying IEEE 1366 and monitoring the performance of distribution feeders. Observed levels of reliability and power quality will be used to establish benchmarks for feeders (by type of feeder and types of customers served) to set service quality targets. The utility’s incentive to meet or exceed the targets is through performance-based rates. This is an important attempt to apply indices to setting service quality and reliability standards, relating system performance to price (tariffs), and establishing benchmark levels for power system performance that are related to characteristics of the customers being served.

Other: The Electric Power Research Institute (EPRI) is developing a similar approach for U.S. utilities that goes beyond performance-based rates and IEEE 1366 to a QRA approach (see “Assessing and Evaluating Reliability,” in Chapter 2).

State of Illinois: Title 83 Public Utilities, Chapter 1: “Illinois Commerce Commission,”

Subchapter C: “Electric Utilities,” Part 411, “Electric Reliability”

Organization: Illinois Commerce Commission

Targeted industry segment: Illinois utilities

Limitations: Code language for the state of Illinois. Does not specify reliability levels: “design according to generally accepted engineering practices.” The utility is required to state what those practices are.

Strengths: Defines reliability terms and record-keeping requirements for Illinois utilities. Classifies outage causes, defines procedures and formats for annual reliability reports and assessments.

Other: Unique to one state’s requirements

Emerald Contract: power supply contract under the “Green Rate”

Organization: Electricite de France (EdF)

Targeted industry segment: Green Rate customers of EdF

Limitations: The topology of the French power system, particularly the distribution system, is unlike that of the U.S. power system. Also, EdF is a government-owned, vertically integrated monopoly; it does not operate in the same type of competitive environment as U.S. utilities. (However, European Union rules on competition and third-party access may change this.) This is a contract of a single utility.

Strengths: The contract contains an appendix that lists disturbances that may affect the quality of electric power, includes simplified definitions, and delineates EdF commitments and minimum acceptable levels of service quality and reliability. It also specifies the tolerances with which the customer should comply concerning disturbances generated by customer-owned equipment that could be injected into the EdF network.

Other: Provides an example of service quality commitments from the utility to the customer and requirements imposed upon the customer by the utility. Even if the numbers are not directly applicable to the United States, this approach could be applicable.

Quality of Supply Standards, User Specification, ESKOM, South Africa, NRS 048-1996

Organization: ESKOM and South Africa’s National Electricity Regulator

Targeted industry segment: Electric utility and customers

Limitations: This is one utility’s guideline, applied to a non-U.S. power system.

Strengths: This specification provides the South African electricity supply industry with a basis for evaluating the quality of supply delivered to customers by the industry, and with a means of determining whether utilities meet the minimum standards required by the National Electricity Regulator. The underlying principle is that, on a national basis, the combined cost of supply and usage of electricity be minimized. The specification recognizes that quality is affected by the users (as a result of the nature of the loads connected), as well as by the producers or suppliers. Customers are therefore essential partners with utilities in the effort to maintain the quality of supply while the supply networks are expanded and developed to allow electrification of South Africa to proceed effectively and economically. The specification deals with the voltage characteristics in statistical or probabilistic terms. NRS 048 provides an overview of standards and procedures for the management of the quality of supply in the electricity supply industry, with particular reference to the application of minimum standards to meet the requirements of the National Electricity Regulator. NRS 048 does not cover safety requirements, network design or equipment performance.

Reliability and Power Quality Performance-based Tariffs

Organization: DTE Energy (Detroit Edison Company)

Targeted industry segment: Three large industrial customers

Limitations: This is one utility’s strategy to retain customers by guaranteeing that the power supply system will meet performance criteria. Manufacturing facilities of the Big Three automakers had experienced higher-than-normal outages and voltage sags that had resulted in significant manufacturing losses. Faced with the possibility that these customers would turn to another energy supplier or self-generate, DTE offered a performance guarantee. The initial contract specified that DTE would pay stipulated “damage” costs to customers if they experienced more interruptions than specified in the contract. This performance guarantee was later expanded to include instances of voltage sags (i.e., power quality). DTE implemented measures to improve the reliability, and the resulting performance has almost completely met the reliability criteria. The power quality criterion has not been violated. The criteria were set based on historical performance, with an assumed year-by-year reliability improvement. The utility did a reliability assessment for each plant, and since existing problems or weak points were corrected, the plants’ reliability has been acceptable. The corrective actions seemed consistent with standard utility practices to provide acceptable quality service; it appears that the tariff and its penalty levels were designed more on a marketing basis than from a cost analysis. The tariff (1) retained the customers and protected DTE from competition for 5 years, (2) was used to economically justify reliability improvements that might have been within current best practices anyway, and (3) has not resulted in any significant performance payments to the customers.

Strengths: Despite its limitations, this is the first well-documented instance of service quality guarantees being included in customer agreements that recognized the utility’s obligation to meet at least a minimum level of reliability and power quality.

Other: The concept of performance-based rates and service quality guarantees seems to be gaining acceptance by utilities and regulators.

Other Sources for Terms and Definitions

Several sources define terms such as “sag,” “notch,” “undervoltage,” and “swell,” but not always consistently. Some of the more prominent references are these:

  • IEEE Std. 100-1988, IEEE Standard Dictionary of Electrical and Electronic Terms
  • IEEE Std. 1100-1999, IEEE Recommended Practice for Powering and Grounding Electronic Equipment
  • IEEE Std. 1159-1995, IEEE Recommended Practice for Monitoring Electrical Power Quality
  • American National Standards Institute/National Fire Protection Association, Standard 70, National Electric Code
  • B. Kennedy, Power Quality Primer, New York: McGraw Hill, 20

Organizations: Various organizations

Targeted industry segment: All in the power industry, including consumers, designers and electricians

Limitations: Terminology and definitions are still evolving, both within the United States and internationally, although there are attempts to make U.S. definitions consistent with international [International Electrotechnical Commission (IEC)] definitions. A major problem has been inconsistencies among utility power system designers, industrial power distribution designers, and end users. The rise in electronic loads (“the digital society”) means that some power system phenomena that previously were of no consequence are now critically important to the end users’ processes or can threaten personnel safety. There is not yet a single recognized standard for definitions and calculation techniques, but many industry groups are working together to remedy this situation.

Strengths: The references cited will provide a good foundation for power quality terminology. The Power Quality Primer provides an excellent overview of power quality concerns for both the provider and consumer.

Other: Industry groups, such as IEEE, are close to agreement on common terminology and definitions. Some of these standards (i.e., 1100, 1159) are listed again under Power Quality Standards, as they provide requirements or guidelines in addition to definitions.

Part 1: Introduction and Discussion of the Measurement “Toolkit”

Published by

  • John D. Kueck and Brendan J. Kirby, Oak Ridge National Laboratory
  • Philip N. Overholt, U.S. Department of Energy
  • Lawrence C. Markel, Sentech, Inc.

Published in Measurement Practices for Reliability and Power Quality: A Toolkit of Reliability Measurement Practices, 2004

Prepared by Oak Ridge National Laboratory Oak Ridge, Tennessee 37831-6285 managed by UT-BATTELLE, LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725


Introduction and Discussion of the Measurement “Toolkit”

This report provides a distribution reliability measurement “toolkit” that is intended to be an asset to regulators, utilities and power users. The metrics and standards discussed range from simple reliability, to power quality, to the new blend of reliability and power quality analysis that is now developing. This report was sponsored by the Office of Electric Transmission and Distribution, U.S. Department of Energy (DOE).

Inconsistencies presently exist in commonly agreed-upon practices for measuring the reliability of the distribution systems. However, efforts are being made by a number of organizations to develop solutions. In addition, there is growing interest in methods or standards for measuring power quality, and in defining power quality levels that are acceptable to various industries or user groups. The problems and solutions vary widely among geographic areas and among large investor-owned utilities, rural cooperatives, and municipal utilities; but there is still a great degree of commonality. Industry organizations such as the National Rural Electric Cooperative Association (NRECA), the Electric Power Research Institute (EPRI), the American Public Power Association (APPA), and the Institute of Electrical and Electronics Engineers (IEEE) have made tremendous strides in preparing self-assessment templates, optimization guides, diagnostic techniques, and better definitions of reliability and power quality measures. In addition, public utility commissions have developed codes and methods for assessing performance that consider local needs. There is considerable overlap among these various organizations, and we see real opportunity and value in sharing these methods, guides, and standards in this report.

This report provides a “toolkit” containing synopses of noteworthy reliability measurement practices. The toolkit has been developed to address the interests of three groups: electric power users, utilities, and regulators. The report will also serve to support activities to develop and share information among industry and regulatory participants about critical resources and practices.

The toolkit has been developed by investigating the status of indices and definitions, surveying utility organizations on information sharing, and preparing summaries of reliability standards and monitoring requirements—the issues, needs, work under way, existing standards, practices and guidelines—for the following three classifications:

  • terms and definitions of reliability;
  • power quality standards, guidelines, and measurements;
  • activities and organizations developing and sharing information on distribution reliability.

As these synopses of reliability measurement practices are provided, it must be noted that an economic penalty may be associated with requiring too high a reliability level from the distribution system for all customers. It may be appropriate for the distribution system to supply only some base, generally accepted level of reliability. This base level would be adequate for the majority of customers. Users who need a higher level may find it economical to obtain it using distributed energy resources (DER) and other local solutions to reliability and power quality needs. Local solutions implemented by the customer may be the most cost-effective way to address the more stringent needs of a digital economy. These local solutions include energy storage, small distributed generators, and microgrids.

This report also considers the market’s role in addressing reliability issues and requirements. The customer’s needs are discussed in view of issues such as power quality requirements of digital electronic equipment, the cost of outages, the cost of storage and new infrastructure, and natural gas prices. The market role in addressing these issues and requirements is explored. The economic considerations associated with the reliability issues are discussed, as well as the levels at which these economic decisions could be made. Finally, a discussion is provided of the role DER could play in addressing reliability needs, and the possible role of the market in providing needed levels of reliability.
The toolkit is provided in a set of appendices. These appendices are summarized as follows:

A. Terms and Definitions of Reliability—a listing and synopsis of the major standards and codes for reliability

B. Power Quality Standards, Guidelines, and Measurements—a listing and synopsis of the significant standards for power quality

C. Activities and Organizations Developing and Sharing Information on Reliability and Power Quality—a list of organizations having a significant ongoing activity in power quality and reliability

D. Summary Table: Power Quality Development Activities—a succinct table which provides the power quality topic, the standards body, the project identification, and the title of the document

E. Discussion of the Quality–Reliability–Availability Approach—an EPRI initiative that takes an integrated look at power quality, reliability, end user needs, and the service contract

F. Industry Initiatives to Define Power Quality (SEMI, CBEMA, ITIC)—a summary of specific acceptable levels of power quality established by three industry organizations.

This report provides a measurement “toolkit” to address the interests of electricity users, utilities, and regulators.

Acknowledgments

The authors wish to thank Thomas Key and Arshad Mansoor of EPRI-PEAC, Georg Shultz of Rural Utilities Service, Mike Hyland of the American Public Power Association, Steven Lindenberg and Robert Saint of NRECA, Bernard Ziemaniek of EPRI, and Diane Barney of the NARUC Subcommittee on Electric Reliability Review for their kind contributions and review of this document.

EMI Measurements: Methodology and Techniques

Published by Vladimir Kraz, OnFILTER, Inc. 3601-B Soquel, CA 95073 USA, Tel. +1-831-824-4052, Email: vkraz@onfilter.com

Abstract

This paper describes some aspects of the methodology, instrumentation, and techniques of measuring high-frequency electrical noise (EMI) in an electronic manufacturing environment. High-frequency measurements are quite different from typical ESD-related measurements; this paper explores those differences.

Introduction

High-frequency noise (often, albeit technically incorrectly, called “EMI”) is one of the sources of electrical overstress (EOS) that damages sensitive components. It also causes malfunction of electronic equipment.

A parameter that cannot be measured cannot be controlled. Measurements of high-frequency signals differ radically from measurements of traditional ESD parameters. This paper presents methodology and techniques for some common high-frequency measurements in a manufacturing environment, and outlines the critical properties of EMI and EOS signals as they relate to that environment. Many details are omitted due to limitations of space.

The basic differences between EMI and ESD measurements

Most ESD measurements deal with DC or extremely low-frequency signals of very high impedance. Examples include static field or voltage, high resistance values of dissipative materials, ionization balance, and the like. This is the exact opposite of measurements of high-frequency signals, which require high bandwidth, often up to the gigahertz range, and, especially for conducted measurements, a matched low impedance at the instrument. Using common ESD instrumentation or other general-purpose tools does not merely produce inaccurate results; it cannot produce relevant results at all. As an example, consider the ubiquitous static field meter. Its vibrating sensor operates at ~300…600 Hz, and after signal processing the theoretical bandwidth of half that frequency ends up being less than 10…15 Hz. The refresh rate of the display further limits it to ~3 Hz or less.

Feedback-operated instruments, such as certain static voltmeters and charged-plate monitors (CPMs), have similarly low bandwidths. For reference, conducted EMI on power lines starts at several tens of kilohertz, and radiated emission of consequence starts in the low megahertz. A common multimeter or current clamp has a typical bandwidth of up to several kilohertz, with the best reaching a few hundred kilohertz, which is grossly insufficient. Thus, an ESD specialist analyzing the EMI environment needs a completely different set of tools and a very different methodology.

Fundamentals of High-Frequency Signal Metrology

Measurement Types: High-frequency signal measurements can be time domain, frequency domain, or broadband.

Frequency domain

Frequency-domain measurements, while incapable of discerning the waveform of a signal, can provide its spectral characteristics. Most applications of frequency-domain measurements are in EMC compliance testing, wired and wireless communication, and broadcasting. Frequency-domain measurements are performed with specialized instruments – spectrum analyzers – and, to a lesser degree, with the FFT function of some oscilloscopes.

Time Domain

In the time domain, the waveform of the signal is studied and a variety of parameters are collected, among them peak, average, and RMS values, rise and fall times, energy, and repetition rate. Time-domain measurements are especially suited for transient signals, such as the artifacts of ESD events, and for noise on power (AC and DC) lines, grounds, and the like. Time-domain measurements are made with an oscilloscope. Most high-frequency measurements in an electronic manufacturing environment are time-domain measurements.

Broadband

Broadband measurements provide data on only one parameter of the signal – its amplitude – regardless of frequency, waveform, or anything else. The ubiquitous multimeter is a broadband instrument. Since time-domain measurements are prevalent in electronic manufacturing environments, the rest of this paper focuses on them.

Coaxial Cables

An essential test accessory for measuring high-frequency signals is a coaxial cable of the right type. A coaxial cable provides two benefits for high-frequency measurements: it shields the often-weak measured signal from outside interference, and it eliminates the induced loop currents that would be present with conventional test leads spread apart. Of course, a proper coaxial cable needs to be selected; among the parameters to watch are the cable impedance and the attenuation at high frequencies within the band of interest. A note of caution: coaxial cables are fragile by nature. Bending or kinking them will irreversibly damage them, causing signal distortion.

Impedance Matching

At high frequencies, impedance matching is a must in most cases – an impedance mismatch leads to signal distortion and ringing. The most typical output and input impedance of high-frequency instruments is 50 Ohms, although in some cases, such as TV cable, it may be 75 Ohms. Figure 1 shows the same signal measured with matched impedance (50 Ohms) and with mismatched impedance (an oscilloscope input set to 1 MOhm). As seen, the mismatched impedance causes significant ringing of the signal and also higher peak-amplitude readings.

Figure 1. The same signal measured with matched (a) and mismatched (b) impedance
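The severity of a mismatch can be quantified with the standard transmission-line reflection coefficient, Gamma = (Zload - Z0) / (Zload + Z0); this relation is textbook material rather than from the paper, and the values below are illustrative:

```python
# Reflection coefficient at a cable/instrument interface
# (standard transmission-line relation; example values are illustrative).
def reflection_coefficient(z_load, z0=50.0):
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0))   # matched input: 0.0, no reflection
print(reflection_coefficient(1e6))    # 1 MOhm scope input: nearly +1, strong reflection
print(reflection_coefficient(75.0))   # 75 Ohm cable into 50 Ohm instrument: 0.2
```

A coefficient near +1, as with a 1 MOhm oscilloscope input, means almost the entire incident wave is reflected back down the cable, which is consistent with the ringing and elevated peak readings shown in Figure 1(b).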

A good example of a mismatch is the measurement of ESD events using a passive near-field probe (usually a "ball" or "stick" antenna) and an oscilloscope, as described in [1] and [2]. Such passive near-field probes, which are in essence small antennae, may present 50 Ohm impedance only at one particular frequency. Since an ESD event signal has a wide spectrum, there are plenty of frequencies at which the output impedance of the antenna is not 50 Ohms, and this causes significant ringing.

Ringing due to Boundary Effects

Reflections from the edges of a board or other metal surface also cause ringing – ultimately, this too is an impedance mismatch. Figure 2 shows how reflections occur at the edge of a PC board. Reference [3] provides good examples of ringing caused at least in part by boundary reflections, and shows how the timing of the reflections is determined by the size of the board.

Figure 2. Signal reflections due to boundary conditions

Not all ringing in ESD event signals is a measurement artifact – some of it is genuine signal – but separating it from the measurement errors must be done meticulously.

Frequency Response

Radiated emission is measured using antennae. Most antennae do not have a flat frequency response; many are tuned to a specific frequency or band. Even broadband antennae, such as the log-periodic antennae used for EMC testing, may have a very uneven frequency response, with more than 20 dB (a factor of 10) variation in sensitivity (antenna factor, or AF) within the band of interest (see Figure 3) [4]. When the measurements are made with a spectrum analyzer, which measures one frequency at a time, a correction factor can be applied at each frequency. For time-domain measurements, however, no such correction can be applied, since all frequencies are measured at once.

Figure 3. Log-periodic antenna and its antenna factor
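For frequency-domain (spectrum analyzer) measurements, the per-frequency correction mentioned above is a simple addition in decibels. The antenna-factor table below is invented for illustration, not taken from Figure 3:

```python
# Standard antenna-factor correction for frequency-domain measurements:
#   field strength (dBuV/m) = receiver reading (dBuV) + AF (dB/m) + cable loss (dB)
# The AF table is hypothetical; real values come from the antenna's
# calibration data at each measured frequency.
af_table = {30e6: 18.0, 100e6: 12.5, 300e6: 16.0}   # Hz -> dB/m

def field_strength_dbuv_m(reading_dbuv, freq_hz, cable_loss_db=0.0):
    return reading_dbuv + af_table[freq_hz] + cable_loss_db

print(field_strength_dbuv_m(40.0, 100e6, cable_loss_db=1.5))  # 54.0
```

Because the correction depends on frequency, it can only be applied when each frequency is measured separately; this is precisely why no equivalent correction exists for a time-domain capture.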

The result is a distorted waveform that misrepresents the energy of the discharge. Figure 4 compares the frequency responses and transient responses of a regular passive antenna (equivalent to the above-mentioned "ball" antenna) and a specially designed active antenna for ESD measurements with a flat frequency response. As seen, the frequently used passive antenna has very poor response at the low end of the spectrum.

Figure 4. Comparison between regular passive antenna and a special flat-frequency active antenna

This results in an apparent absence of the energy component of an ESD event and a misjudgment of its effect. A note: this active antenna has an inverting amplifier inside, so the polarity of the picked-up signal appears to be opposite.

Oscilloscope Triggering

A typical oscilloscope triggers on either a positive or a negative threshold, but not on both. EMI transient signals, however, can have peaks of either polarity. It is important to capture signals by triggering on both polarities – positive and then negative. Neglecting either polarity may mean missing a signal.

Many EMI events are multiple. If Single trigger mode is used, only the first such event is captured. If Normal mode is used, the display shows only the last captured event. What happens between these events is thus lost. Some digital storage oscilloscopes have memory that can store more than one captured event, which helps. The problem arises in trying to observe, on one screen, both the waveform of each event and the intervals between events in order to understand the timing. The duration of an ESD event may be only a few nanoseconds, while the spacing between multiple discharges may be several milliseconds or longer. Screen resolution is not sufficient to display such short events if several of them are captured in one shot. In such cases, specialized monitors and data acquisition systems may be required.

Multiple non-ESD EMI events are much easier to capture and observe, because such events have longer durations and their repetition intervals are in the same range as the events' durations.

Differential Signals and Ground Loops

Measurements of conducted emission with a scope are often made using a regular oscilloscope probe. The problem is that the ground contact of the probe is connected to the mains ground via the power cable of the scope. This creates a ground loop, because the ground at the electrical outlet is not necessarily – and most likely is not – the ground against which the measurements should be made. In addition, this may add noise from the mains ground to the measured signal.

There are several ways to avoid these problems. One is to use a battery-powered oscilloscope. This resolves the ground-loop problem but leaves in place the problem of capacitive coupling between the chassis of the scope and grounded surfaces nearby. Another way is to use either a differential probe, or two probes on two channels with the "A-B" function. This eliminates the parasitic-capacitance problem.

Figure 5. Special power line EMI Adapters

A third way is to use special power-line EMI adapters (Fig. 5), which provide a balanced input and a single-channel output. They also protect the oscilloscope from potentially damaging voltages on the measured conductors, such as power lines.

Measurements of Noise on Power Lines

Noise of all kinds on power lines is the prime source of EMI in a manufacturing environment. The biggest challenge in this type of measurement is that the signal of interest, i.e., the high-frequency noise, is significantly smaller than the AC mains voltage. Besides masking the desired signal, this also causes triggering problems. Another challenge is that the high mains voltage (250V RMS corresponds to 353V peak) requires a scope range of at least 700V on the screen. The solution to this problem is special power line EMI adapters, which perform several functions: complete blocking of the AC mains voltage while passing high-frequency signals through without alteration, balanced input, and overload protection.
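
The blocking function of such an adapter can be pictured, in the simplest terms, as a high-pass filter. The first-order model below, with an assumed 10 kHz corner frequency (real adapters are more elaborate), shows how the mains fundamental is attenuated by roughly 44 dB while megahertz-range noise passes essentially unchanged.

```python
import math

def highpass_gain(f_hz: float, fc_hz: float) -> float:
    """Magnitude response |H(f)| of a first-order high-pass filter with
    corner frequency fc_hz (idealized model, not a real adapter design)."""
    ratio = f_hz / fc_hz
    return ratio / math.sqrt(1.0 + ratio * ratio)

fc = 10e3  # assumed corner: well above mains, well below the EMI band
for f in (60.0, 1e6):
    db = 20 * math.log10(highpass_gain(f, fc))
    print(f"{f:>9.0f} Hz: {db:6.1f} dB")
```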

Waveform Properties

Once a waveform has been captured by a high-speed digital storage oscilloscope, it needs to be properly analyzed. Depending on the purpose of the measurements, different parameters may need to be considered.

Figure 6. Fundamental properties of ESD Event

ESD Events

The three critical parameters for analyzing the properties of ESD events, as shown in Figure 6, are peak amplitude (of either polarity), rise time, and energy (area under the curve) [5]. Peak amplitude is a function of the maximum discharge current through the device. Rise time indicates how fast the discharge energy flows into the device: the faster, the worse for the device. The "area under the curve" of the pulse indicates how much energy was injected into the device. Another parameter to consider is whether multiple discharges are taking place (a frequent occurrence). This is important for two reasons: closely spaced multiple discharges add to the energy of the discharge and may have a cumulative effect, and measurement of multiple discharges requires special equipment and special techniques.
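
These parameters can be extracted numerically from the captured samples. The sketch below is a minimal, hypothetical illustration: an idealized triangular pulse stands in for a real capture, and the "energy" figure is simply the area under the curve computed by the trapezoidal rule.

```python
# Extract peak amplitude, 10-90% rise time, and area under the curve from
# captured samples (idealized data; a real capture would come from the
# oscilloscope's waveform export).

def esd_parameters(t, v):
    """t: sample times in s, v: samples in V. Returns (peak, rise, area)."""
    peak = max(v, key=abs)
    i_pk = v.index(peak)
    lo, hi = 0.1 * peak, 0.9 * peak
    # 10-90% rise time on the leading edge up to the peak
    t10 = next(t[i] for i in range(i_pk + 1) if abs(v[i]) >= abs(lo))
    t90 = next(t[i] for i in range(i_pk + 1) if abs(v[i]) >= abs(hi))
    # trapezoidal-rule "area under the curve" as an energy indicator
    area = sum((abs(v[i]) + abs(v[i + 1])) / 2 * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    return peak, t90 - t10, area

# Idealized pulse: rises 0 -> 10 V over 1 ns, falls back to 0 V by 5 ns.
t = [i * 1e-10 for i in range(11)] + [5e-9]
v = [float(i) for i in range(11)] + [0.0]
peak, rise, area = esd_parameters(t, v)  # 10 V, 0.8 ns, 2.5e-8 V*s
```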

EMI of non-ESD Origin

High-frequency noise in a manufacturing environment consists largely of transient signals and repeatable pulsed signals. The waveforms of these signals are quite different from those of ESD events: there is no sharp rise time, ringing is frequently present, and the signal is often periodic (Figure 7). The repetition of the EMI signal is largely synchronized with one of the following:

  • AC mains. Note that the repetition frequency is twice the mains frequency, i.e. 100Hz for a 50Hz system and 120Hz for a 60Hz system.
  • Switching power supply frequency. This typically ranges anywhere from 40kHz to 1MHz.
  • Servo motors. This frequency often ranges from 8kHz to 20kHz.

Figure 7. Typical noise on power lines caused by light dimmer

There are also plenty of non-synchronized occasional EMI signals caused by turning lights, motors, and other electrical equipment on and off, as well as by the operation of end switches and solenoids. For the narrow scope of the electronic manufacturing environment, the following parameters are of interest: peak amplitude (both positive and negative), rise and fall times, and energy (the area under the curve of the waveform). All these parameters except the last are easily obtainable from the scope itself. The energy may need to be calculated from the captured waveform data. This is where the flat frequency response of an antenna makes the difference between correct and incorrect data.

Real-Life Example: Servo Motor Noise

Semiconductor manufacturing tools such as wire bonders, IC handlers, and the like use servo and variable-frequency motors. These motors are driven by pulses that generate significant currents in the tool's ground, which, in turn, can cause electrical overstress (EOS) in the devices. Two types of measurements can be made: voltage and current. Figure 8 shows current measurements in ground, synchronized with the rise of the drive pulse.

Figure 8. Ground Current in Servo Motor

Current measurements were made with a Tektronix CT1 current probe.

Figure 9 shows the reduction of ground current after connecting an OnFILTER servo motor filter between the servo amplifier and the motor.

Figure 9a. Ground Current in Servo Motor without Filter: 1.12A

Figure 9b. Ground Current in Servo Motor with the Filter: 0.0186A

Figure 9c. Measurement Setup

As seen, the EMI (and possible EOS) problem was easily identified, mitigating measures (a special servo motor filter) were implemented, and the improvement was verified.

Conclusion

Proper measurement methodology and techniques are essential for effective management of the EMI environment in manufacturing. Understanding the properties of high-frequency signals and the specifics of their measurement helps ESD specialists at the factory succeed in this endeavor.

References

[1] J. Montoya et al., "Unifying Factory ESD Measurements and Component ESD Stress Testing," ESD Symposium Proceedings, 2005
[2] "Electrostatic Discharge Detection: Antenna and Oscilloscope," Intel instructional video, http://www.intel.com/content/www/us/en/quality/esd-detection-antenna-and oscilloscopevideo.html?wapkw=ball+antenna
[3] Yuan-Wen Hsiao and Ming-Dou Ker, "Investigation on Discharge Current Waveforms in Board-Level CDM ESD Events With Different Board Sizes," International ESD Workshop, 2008
[4] Tom Lecklider, "Inside EMC Antennas," Feb. 2010
[5] V. Kraz, "Verification of ESD Environment in Production," Il Controllo dell'Elettricita Statica, Milano, 29 Ottobre 2002

Survey on Assessment of Power Quality Cost in Shanghai China

Published by Qing Zhong1, Wei Huang2, Shun Tao3, and Xiangning Xiao4

Abstract

Power quality (PQ) issues cause substantial economic losses. Knowing the macro-level PQ loss is very helpful for governmental decisions on PQ policies and supervision. This paper presents the procedure and some interesting findings of a survey of the PQ cost of customers in Shanghai during 2010-2011. The survey was carried out in two steps, with 147 valid questionnaire samples among 18 sectors and 40 face-to-face interviews among 7 sectors. The statistics of the respondents' answers yield a great deal of information about PQ issues in Shanghai, such as the most frequent PQ events, the most PQ-sensitive devices, the causes of PQ events, the effectiveness of the mitigation devices, and so on. Finally, the total PQ loss of the respondents was tallied. By linear fitting analysis, the PQ cost and annual PQ loss in Shanghai can be estimated. The PQ cost estimate is of high reference value both for the government agencies in charge of PQ policies and for customers selecting technical and economic strategies for PQ problems.

Index Terms: Power Quality; Cost Analysis; Relevance Analysis; Macro Prediction

INTRODUCTION

With the development of the digital economy, digital devices play increasingly important roles in society. There is hardly a commercial or industrial facility that would not suffer a loss of productivity if a serious PQ event were to impact its digital environment [1]. Therefore, the economic impact of PQ issues on society as a whole deserves serious consideration from governments, utilities, device manufacturers, and power customers. PQ cost assessment quantifies the PQ loss of a country or region from a macro approach, which helps in executing PQ management and in selecting the solution with the best economic benefit.

IEEE published Standard 1346-1998 as a guideline for investigating the economic aspects of PQ [2]. Research focused first on the cost of harmonics, including additional energy losses, premature ageing, and misoperation of equipment [3][4]. Then, with growing demands on reliability, methods to evaluate the cost of outages were developed [5][6]. In today's digital society, methods to estimate the loss caused by voltage sags and short interruptions have been the focus of recent research [7-9]. All the methods can be sorted into two approaches: deterministic and probabilistic. Probabilistic assessment considers the probability of PQ events, which makes it more flexible; deterministic methods need a great deal of economic data, which is laborious to collect.

Many countries and regions have surveyed such questions, including the loss from power interruptions and other PQ issues. In 2001, EPRI published a report based on a Primen survey in the United States, which showed that the surveyed companies collectively lost $45.7 billion a year due to outages and another $6.7 billion a year due to other PQ phenomena. It was estimated that the U.S. economy loses between $104 billion and $164 billion due to outages and another $15 billion to $24 billion due to other PQ phenomena [10]. The Leonardo Power Quality Initiative (LPQI) team published a report on the results of a European PQ survey, which comprised 62 face-to-face interviews across eight European countries in 2008 [11]. The report showed that the PQ loss was about €150 billion, with industrial and services customers wasting around 4% and 0.142% of their annual turnover, respectively. In 2001, Taiwan carried out a survey of interruption cost for 284 high-tech factories. The cost of interruptions was represented as a customer damage function, which gives interruption cost as a function of interruption duration. The cost in high-tech industries was about $37.03/kW for a 2-second interruption, which was higher than in traditional industries [12]. In South Korea, the Korea Electrotechnology Institute, in cooperation with Gallup Korea, conducted an interview-based survey of 660 industrial customers of various sizes and sectors. The survey presented a method to evaluate production, sales, and extra labor costs incurred as a function of interruption duration, and a method to evaluate interruption costs per unit of power use according to interruption duration by industrial customer type [13]. Many comparable studies have been done around the world with differing results; at the macro level, the impact of PQ on society is clearly enormous.

In 2010, a survey on the PQ cost of power customers was organized by the Economic Operation Bureau of the Development and Reform Commission, China, and executed by the Asia Power Quality Initiative (APQI), China Chapter. The survey was conducted in Shanghai, China, one of the largest international cities. The survey team was composed of members from government, universities, institutes, utilities, and customers. The survey was carried out in two steps from April 2010 to April 2011. About 500 questionnaires were distributed in 18 sectors, and 40 face-to-face interviews were conducted in 7 sectors. From the statistics of the respondents' answers, the PQ cost and annual loss could be estimated. The macro results of the survey can help the government improve PQ management policy and help customers understand the true effect of PQ issues.

This paper presents the design, procedure, and some key results of the survey. The paper is organized as follows: Section II gives the scope and procedure of the survey; Section III gives the key results on PQ concerns from the statistics of the first step; Section IV gives the key results on the economic aspect of PQ, including the PQ loss statistics of the respondents and the PQ cost analysis in the second step. Finally, the survey is summarized and the main conclusions are drawn in the final section.

SURVEY DESIGN AND PROCEDURE

Scope of survey

There were 8 PQ problems investigated in the survey:

  • Long interruption (including planned and unplanned)
  • Short interruption
  • Voltage sag and swell
  • Harmonic
  • Unbalance
  • Flicker and fluctuation
  • Surge and transient
  • Power rationing

The respondents were distributed in 18 key sectors in Shanghai such as banks, foods, semiconductors and so on.

The structure of the PQ cost followed the survey executed by LPQI in Europe [11]. The cost of PQ was composed of two categories: process interruption and non-interruption. The cost of process interruption was composed of six components:

  • Works in process (WIP)
  • Process slowing down
  • Process restarting
  • Equipment
  • Others
  • Savings

The cost of process non-interruption comprised the same components as the cost of process interruption, excluding the WIP.

Survey design and procedure

The survey was designed in two steps. Step one was carried out with a questionnaire to evaluate the PQ concerns of power customers and to select the respondents for step two. Step two was carried out with face-to-face interviews. The respondents were asked to fill in loss statistics forms showing the loss data of the power customers, including the 6 components of PQ cost.

In step one, more than 500 questionnaires were distributed to power customers with different product values, power consumptions, and supply voltage levels in 18 key sectors of Shanghai, but only 147 answers were valid. In step two, there were 40 face-to-face interviews with respondents from 7 sectors, selected from step one, of which only 29 answers were valid.

PQ EVENTS STATISTICS

In step one, several aspects of the PQ problem could be drawn from the respondents' questionnaires. The respondents chose the sorts of PQ events that most affected their production. The results are shown in Figure 1: 68.9% of respondents considered short interruption the most important. The second most cited PQ event was flicker and fluctuation; unfortunately, customers often mistook voltage sags for voltage fluctuation. The devices affected by PQ events are shown in Figure 2. The most sensitive device was the computer, cited by 76.9% of respondents. Regarding the causes of PQ problems, the statistical results are shown in Figure 3. Most customers (85.6%) thought that the utilities should take responsibility for PQ problems; customers also considered nature (50%) and their own faults (51.5%) to be main causes.

Fig. 1 Statistics of PQ events affecting the customers (%, N=122)

Fig. 2 Statistics of devices affected by PQ events (%, N=122)

Fig. 3 Statistics of reasons causing PQ events (%, N=132)

Fig. 4 Statistics of devices mitigating PQ problems (%, N=128)

Many respondents used devices to mitigate PQ problems; the statistical results are given in Figure 4. The most popular device was the UPS (Uninterruptible Power Supply): 56.3% of customers applied a UPS to mitigate the effects of PQ issues, while 40.6% of respondents selected an SVC (Static Var Compensator). However, 14.8% of respondents did nothing about PQ problems. Evaluating the PQ cost can therefore help customers select the correct strategies to solve PQ problems.

Most of the customers who applied some approach to reduce the impact of PQ problems were not satisfied with the effectiveness of the mitigation devices: 9.1% of respondents considered the devices not effective at all, while only 5.5% considered the effectiveness very good, as shown in Figure 5. A relevance analysis between the devices and PQ problems was carried out based on the respondents' answers, and the results are given in Table 1. In Table 1, when the P-value is smaller than 0.05, the two components are considered related. According to Table 1, the relationship between harmonics and active/passive filters is correct, as expected, but the other relationships do not match suitable mitigation strategies. The conclusion can be drawn that, except for harmonics, customers cannot choose the correct solutions for PQ problems, which also explains why the effectiveness of the solutions is unsatisfactory.

Fig. 5 Statistics of effectiveness of devices mitigating PQ problems (%, N=132)

TABLE 1 RELEVANCE BETWEEN PQ PROBLEMS AND MITIGATING DEVICES


ECONOMIC ASPECT OF PQ

PQ loss statistics

In step two, 40 respondents accepted the face-to-face interviews, yielding 29 valid samples with detailed statistics of the economic loss of PQ in their factories. The respondents were distributed among 7 sectors: semiconductor, steel, chemical, automobile, food, pharmacy, and service, as shown in Table 2. The PQ loss was divided into two parts: the direct loss, which is the loss per PQ event multiplied by the number of PQ events, and the indirect loss, which is the annual investment in mitigating devices. The total PQ loss is shown in Table 3; for the 29 respondents it was about $17 million collectively in 2010. In the table, N is the number of samples.

The structure of the PQ loss is given in Figure 6. In food and pharmacy, the PQ loss was caused entirely by voltage sags. In semiconductor, the loss caused by voltage sags was the largest, while in steel and service, the PQ loss caused by unplanned interruption was the largest.

Fig.6 Structure of PQ loss of the respondents

TABLE 2 STATISTICS OF THE RESPONDENTS IN STEP TWO


TABLE 3 TOTAL PQ LOSS OF THE RESPONDENTS ($)


PQ Cost analysis by confidence levels

The PQ cost analysis aimed to estimate the total annual PQ loss in Shanghai. Linear fitting analyses between the PQ loss and product values or power consumptions were performed on the statistics of the respondents. The result for PQ loss versus product value is shown in Figure 7: the confidence interval of the PQ cost per unit of product value is [2.218×10⁻³, 6.478×10⁻³] at a 95% confidence level. The result for PQ loss versus power consumption is shown in Figure 8: the confidence interval of the PQ cost per kWh is [4.502×10⁻³, 8.822×10⁻³] $/kWh at a 95% confidence level. The PQ cost in Shanghai was lower than in Europe (2008), where it was about 8.353754×10⁻⁴ per unit of product value and 8.055854×10⁻³ €/kWh. The reason may be that the proportion of high-technology industries in the total economy is smaller in Shanghai than in Europe.
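
The fitting step can be sketched as follows. This is not the survey's data or code; it uses synthetic numbers and one plausible reading of the paper's "linear fitting analysis": a no-intercept least-squares fit of loss against product value, with a t-based confidence interval for the slope (the cost coefficient).

```python
# Sketch: least-squares slope through the origin with a 95% CI.
# Synthetic data only -- the survey's real samples are not reproduced here.
import random

def slope_ci(x, y, t_crit):
    """Fit y ~ b*x (no intercept) and return b with a t-based CI."""
    sxx = sum(xi * xi for xi in x)
    b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
    n = len(x)
    resid_ss = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    se = (resid_ss / (n - 1) / sxx) ** 0.5  # d.o.f. = n-1 for no intercept
    return b, (b - t_crit * se, b + t_crit * se)

random.seed(0)
x = [random.uniform(1e6, 1e8) for _ in range(29)]   # product values, $
y = [4e-3 * xi + random.gauss(0, 2e4) for xi in x]  # PQ losses, $
b, (lo, hi) = slope_ci(x, y, t_crit=2.048)          # t(28) at 97.5% ~ 2.048
print(f"cost per $ of product value: {b:.4g}, 95% CI [{lo:.4g}, {hi:.4g}]")
```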

Using the statistics of the total annual product value and power consumption of the city, the confidence interval of the total annual PQ loss in Shanghai can be calculated, as shown in Table 4. The annual PQ loss in Shanghai in 2010 was about $0.597-1.758 billion according to the product value and about $0.612-1.177 billion according to the power consumption.
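
Scaling the fitted coefficient up to the whole city is then a single multiplication. In the sketch below, the city-wide consumption figure is an assumed round number for illustration only; it is not an official statistic.

```python
# Back-of-envelope version of the consumption-based estimate in Table 4.
ci_per_kwh = (4.502e-3, 8.822e-3)  # $/kWh, 95% CI from the regression
annual_kwh = 1.3e11                # assumed Shanghai 2010 consumption, kWh
low, high = (c * annual_kwh for c in ci_per_kwh)
print(f"annual PQ loss: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
```

With this assumed consumption, the interval lands close to the $0.612-1.177 billion range reported above.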

Fig.7 Linear relevance analysis between PQ loss and product value

Fig.8 Linear relevance analysis between PQ loss and power consumptions

TABLE 4 THE CONFIDENCE INTERVAL OF TOTAL ANNUAL PQ LOSS (2010)


PQ Cost analysis by each sector

The other way to estimate the PQ cost is to evaluate the PQ cost per unit of product value in each sector from the statistics of the respondents. The PQ cost per product value of the semiconductor sector was the largest, at 1.893×10⁻²; that of the service sector was 1.275×10⁻². On this basis, the total annual PQ loss was $0.4735 billion for the 7 sectors in 2010; for the "Industry" sector, the PQ loss was $0.271 billion. The results are given in Table 5.

TABLE 5 THE TOTAL ANNUAL PQ LOSS WITH EACH SECTOR (2010)


For the "Industry" sector, the estimated wastage caused by poor PQ is 0.164% of annual product value; for the "Service" sector, the proportion was 4.12%. In Europe the proportions were 4% for "industry" and 0.1419% for "service". This suggests that industry in China is less dependent on PQ than in Europe, while the opposite holds for the service sector.

Comparing the total annual PQ losses obtained by the two approaches, the sector-by-sector result is close to the result from the total amount. So, for a macro analysis focusing on PQ loss, the total annual PQ loss in Shanghai can be set at $0.597-1.77 billion.

CONCLUSIONS

This paper gives the procedure and some key results of the survey on the assessment of PQ cost in Shanghai. The survey was executed by the APQI China Chapter from April 2010 to April 2011 and was carried out in two steps. In step one, there were 147 valid samples in 18 sectors. In step two, there were 40 face-to-face interviews in 7 sectors, yielding 29 valid respondents. The survey focused mainly on 8 PQ problems and defined 6 components of PQ loss. From the analysis of the respondents' answers, the main conclusions are:

  1. The respondents thought that the most frequent PQ event affecting their operations in Shanghai is short interruption, although there was some confusion when customers tried to distinguish between flicker and voltage sags.
  2. The respondents thought that the most sensitive device for PQ issues is the computer and that the most common mitigation device is the UPS. However, the effectiveness of the mitigation devices was not satisfactory for 40.9% of respondents. The relevance analysis between PQ problems and mitigation devices shows that customers cannot select the right solutions.
  3. 85.6% of respondents thought that the utilities should take responsibility for PQ events.
  4. There was a linear relationship between PQ loss and the product values or power consumptions. The confidence interval of the linear coefficient was [2.22×10⁻³, 6.48×10⁻³] between PQ loss and the product values, and [4.502×10⁻³, 8.822×10⁻³] $/kWh between PQ loss and the power consumptions, at a 95% confidence level. The coefficients were lower than those in Europe.
  5. The total annual PQ loss of the 29 valid samples was $16,830,689 in 2010 collectively. It was estimated, with a 95% confidence level, that the total annual PQ loss in Shanghai in 2010 is about $0.597-1.77 billion.

ACKNOWLEDGEMENT

Thanks to the Economic Operation Bureau, Development and Reform Commission, China, which organized this survey; to the Shanghai Municipal Commission of Economy and Information and the Shanghai Municipal Electric Power Company, which strongly supported it; and to the other members of the APQI China Chapter who contributed their hard work to this survey.

REFERENCES

[1] DOE/NETL, "Provide power quality for the digital economy," Oct. 2009, http://www.netl.doe.gov/martgrid/referenceshelf/whitepapers/Provides%20Power%20Quality_APPROVED_2009_11_02.pdf
[2] "IEEE recommended practice for evaluating electric power system compatibility with electronic process equipment," IEEE Standard 1346-1998, 1998.
[3] E. Emanuel, M. Yang, and D. J. Pileggi, "The engineering economics of power system harmonics in subdistribution feeders: A preliminary study," IEEE Transactions on Power Systems, vol. 6, no. 3, pp. 1092-1098, Aug. 1991.
[4] P. Caramia, G. Carpinelli, E. Di Vito, A. Losi, and P. Verde, "Probabilistic evaluation of the economical damage due to harmonic loss in industrial energy systems," IEEE Transactions on Power Delivery, vol. 11, no. 2, pp. 1021-1031, Apr. 1996.
[5] J. T. Crozier and W. N. Wisdom, "A power quality and reliability index based on customer interruption costs," IEEE Power Engineering Review, vol. 19, pp. 59-61, 1999.
[6] K. K. Kariuki and R. N. Allan, "Evaluation of reliability worth and value of lost load," IEE Proceedings: Generation, Transmission and Distribution, vol. 143, pp. 171-180, 1996.
[7] P. Heine, P. Pohjanheimo, M. Lehtonen, and E. Lakervi, "A method for estimating the frequency and cost of voltage dips," IEEE Transactions on Power Systems, vol. 17, pp. 290-296, 2002.
[8] J. V. Milanovic and C. P. Gupta, "Probabilistic assessment of financial loss due to interruptions and voltage dips - Part I: The methodology," IEEE Transactions on Power Delivery, vol. 21, pp. 918-924, 2006.
[9] J. V. Milanovic and C. P. Gupta, "Probabilistic assessment of financial loss due to interruptions and voltage dips - Part II: Practical implementation," IEEE Transactions on Power Delivery, vol. 21, pp. 925-932, 2006.
[10] Primen, "The cost of power disturbances to industrial & digital economy companies," EPRI CEIDS, June 2001.
[11] R. Targosz and J. Manson, "European Power Quality Survey Report," LPQI, 2008.
[12] Y. Shih-An, S. Chun-Lien, and C. Rung-Fang, "Assessment of PQ cost for high-tech industry," IEEE Power India Conference, 2006.
[13] S. B. Choi, K. Y. Nam, D. K. Kim, S. H. Jeong, H. S. Ryoo, and J. D. Lee, "Evaluation of interruption costs for industrial customers in Korea," Power Systems Conference and Exposition, PSCE '06, 2006.

1Qing Zhong received his Ph.D. and M.Sc. from South China University of Technology in 2003 and 2000, and his B.Sc. from North China University of Technology in 1997, all in electrical engineering. He now teaches in the School of Electric Power, South China University of Technology. His main fields of interest include power quality, HVDC transmission control, and power electronics control.

2Wei Huang works for the International Copper Association. He is the project manager for Power Quality and is also the coordinator of APQI. APQI aims at improving power quality in Asian industries by creating awareness of the origins of the problems and building capability on the technical, financial, and managerial aspects of power quality. He received a bachelor's degree in economics from Zhejiang University and an MBA from Shanghai Jiaotong University. His main fields of interest include PQ, energy efficiency, and distributed energy.

3Shun Tao was born in P.R. China on Nov. 18, 1972. She received her Ph.D. and M.S. degrees from North China Electric Power University (NCEPU) in 2008 and 2005, respectively. She has been with NCEPU since 2008 and held a postdoctoral position at the Electrical Engineering Laboratory of Grenoble (G2Elab), Institut National Polytechnique de Grenoble (INPG), France, in 2010. Her research interests include smart distribution network technologies and power quality.

4Xiangning Xiao, born in P.R. China on March 5, 1953, is currently a professor at North China Electric Power University. He became a Member (M) of IEEE in 2003. His research interests include power quality and power electronics applications.

DC Current Sensors for Revenue-Grade Metering

Published by J&D, a world leader in high-technology, high-performance current sensors that perfectly meet the needs of the metering sector. Email support@hqsensing.com

Published in SMART ENERGY INTERNATIONAL ISSUE – 5 | 2018

A recent report by the US Electric Power Research Institute (EPRI) predicts that by 2020, the DC distribution market will account for 50% of the total load, as the use of microgrids and other digital loads increases globally.

However, for use in conjunction with existing AC distribution systems, it is necessary to convert AC power to DC power, or DC power to AC power, which inevitably results in power loss.

To remedy this situation, there is an increasing trend to utilize DC distribution systems. Within a DC distribution system, power conversion is not needed, which leads to increased energy efficiency. Furthermore, since DC has no frequency, there is the added advantage of eliminating reactive power loss and inductive interference on the line (refer Figure 1).

To enable the construction of DC distribution systems, components for low-voltage DC (LVDC) power distribution infrastructure systems are being developed all over the world. As part of that effort, a new standard is currently being determined for DC metering and monitoring for LVDC distribution systems. IEC62053-41 Ed.1, the standard for DC electricity meters, is currently under development by a project team of IEC TC13, and the standard is scheduled for completion in September 2019. In parallel, a NEMA ANSI standard for DC sub-metering is being established as part of a separate project called ESM1. The EMerge Alliance1 announced in 2016 the formation of a new committee to establish revenue-grade DC metering requirements for low- and medium-voltage applications.

Currently, the requirements for this task are being updated.

In South Korea, KEPCO has defined LVDC for DC voltages of 1500V with single polarity and 750V with dual polarity. Based on this definition, KEPCO is currently pilot testing LVDC distribution lines at the Gochang Power Testing Centre. In particular, KEPCO is conducting research on power devices for microgrids that can use DC and AC power distribution as well as energy storage systems (ESS) and EV fast chargers. Also, LG Electronics is accelerating the development of home appliances for DC.

KEPCO is developing a new standard for the Korean market based on two existing standards – the EMerge Alliance standard for DC distribution systems, and the IEC62053-41 standard for DC power billing in electricity meters.

Since March 2016, J&D has been running three projects to develop new products for DC metering (refer Figure 2).

The first project is to develop two types of DC electricity meters with accuracy per the 1.0 and 0.5 classes. One is a single meter for EV fast chargers, and the other is a transformer-operated meter for ESS.

The single meter is designed with an internal DC current sensor, whereas the DC voltage is obtained using a resistive voltage divider. The DC power is calculated from the signals of both sensors. The DC voltage sensor has a measuring range of 150V~500V; combining it with a DC current sensor rated for 100A or 200A enables a DC electricity meter capable of DC power billing for EV fast chargers with 50kW and 100kW capacity.
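
The measurement chain just described (divider for voltage, sensor for current, product for power) can be sketched as follows. The divider ratio and operating point are illustrative assumptions, not J&D's actual design values.

```python
# Sketch of a DC single meter's measurement chain: a resistive divider
# scales the line voltage for the metering ADC, a current sensor reads
# the line current, and power is their product. Values are assumptions.

def divided_voltage(v_in: float, r_top: float, r_bottom: float) -> float:
    """Output of a resistive voltage divider."""
    return v_in * r_bottom / (r_top + r_bottom)

v_line = 400.0  # V, inside the 150V~500V range quoted above
v_adc = divided_voltage(v_line, r_top=999e3, r_bottom=1e3)  # 1000:1 -> 0.4 V
i_line = 125.0  # A, from the DC current sensor
p = v_line * i_line  # 50,000 W: a 50 kW fast charger at full load
```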

The transformer-operated meter under development uses a J&D DC high-voltage sensor and a DC high-current sensor. It will be used primarily for ESS and DC microgrid applications.

The second project is the development of a high-precision DC current sensor to be used for DC single meters and transformer-operated meters. Currently, most of the DC current sensors on the market are specified for full-scale accuracy. Naturally, these sensors are not suitable for DC electric meters, since these require accuracy relative to the measured current. Sensors specified for full-scale accuracy result in increased errors at lower currents (refer Figure 3).
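
The full-scale-accuracy problem is easy to make concrete. In the sketch below, the 0.5% figure and the 200 A rating are illustrative assumptions; the point is only that a fixed full-scale error band balloons, relative to the reading, as the current drops.

```python
# Worst-case error relative to the reading for a sensor specified as a
# percentage of full scale. Numbers are illustrative, not a vendor spec.

def worst_case_reading_error_pct(i_reading: float, i_full_scale: float,
                                 fs_accuracy_pct: float) -> float:
    """Fixed error band (% of full scale) expressed as % of the reading."""
    error_amps = fs_accuracy_pct / 100.0 * i_full_scale
    return 100.0 * error_amps / i_reading

# 0.5% of full scale on a 200 A sensor:
for i in (200.0, 20.0, 2.0):
    pct = worst_case_reading_error_pct(i, 200.0, 0.5)
    print(f"{i:6.1f} A -> {pct:5.1f}% of reading")  # 0.5%, 5.0%, 50.0%
```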

Sensors with sufficient accuracy are usually not competitive for the meter market due to their high price and high power consumption. Due to the large size required for a suitable power supply, it is not easy to develop a compact DC electric meter.

For DC current sensors, three technologies are usually used:

  • The first one is the Fluxgate technology, which is highly accurate but not competitive in price.
  • The second one is a closed-loop technology that can have a medium level of precision but is not very competitive in price.
  • Finally, the open-loop technology has adequate precision. However, this technology is vulnerable to magnetic fields.

J&D has analysed the three technologies described above and found a way to design an optimal DC current sensor that could be applied to DC electricity meters. For the design of a compact DC electricity meter with the lowest possible power consumption, the DC current sensor must have adequate reading accuracy with minimal power consumption. To have a competitive advantage, research is focused on developing an optimal DC current sensor that is also economical.

The results of this intensive research are the JOM and JCM series of sensors:

  • The JOM series is a DC current sensor designed for DC single meters with a 1.0 accuracy class. It is also designed to achieve a 0.5 accuracy class by applying open-loop technology. It uses a zero drift operational amplifier and a high permeability core to minimize DC offset and to achieve a compact size. Also, it has been designed to operate at low power. Therefore, it is possible to design a compact DC single meter for 100A and 200A, using the JOM series of sensors.
  • The JCM series is a product for DC single meters with 0.5 accuracy class (100A and 200A), and for transformer-operated meters with 1.0 accuracy class (200A~4000A). Using AOCT technology to remove the DC offset from the hall sensor front-end circuit and the differential amplifier, based on closed-loop technology, and using high-permeability cores, the JCM series supports accuracy class 0.2 meters. The design minimizes voltage loss caused by the DC resistance and inductance of the feedback coil. Notably, the JCM series sensors have been designed for low operating power. Therefore, by using J&D JCM series CTs, it is possible to develop a compact DC transformer-operated meter.

The third project was to develop a DC voltage sensor for DC transformer operated meters. J&D’s new DC voltage sensor is a closed-loop product utilising a stable isolation design and the AOCT technology from the DC current sensor. This sensor achieves accuracies of 0.3% in the 1500V~5000V DC range.

Based on the advances of DC voltage accuracy and DC current sensor accuracy, J&D developed a high precision DC electricity meter that addresses the growing demand for LVDC distribution systems.

When J&D discovered at the development stage that conventional metering ICs could not meet the requirements for DC electricity meters, its engineers consulted Silergy Corp., a leading metering IC manufacturer, for support, which led to a fruitful phase of cooperation.

The MAX71315C, a single-phase metering IC, was selected as the best solution for the application. Silergy provided J&D with broad technical support and assistance, and we want to take this opportunity to extend our gratitude to Silergy.

The accuracy and overall performance of J&D’s DC electricity meter have been evolving, and its first solution will be presented at European Utility Week 2018 (Vienna, November 2018).

J&D’s strategy is to lead by preparing ahead of the market rather than following it. We would like to cooperate with leaders from around the world in developing DC electricity meters or DC distribution systems. J&D’s leading technology will enable our partners to capitalize on the new market trend.

1 The EMerge Alliance is a not-for-profit industry association founded in 2008 with the goal of setting standards for industrial LVDC distribution.

Existence of Electrical Pollution

Published by

  • Puneet Chawla, Assistant Professor, Electrical Eng. Department, Ch. Devi Lal State Institute of Eng. & Tech., Panniwala Mota (Sirsa)
  • Rajni Bala, Lecturer, Electrical Eng. Department, Ch. Devi Lal State Institute of Eng. & Tech., Panniwala Mota (Sirsa)

ABSTRACT

Power quality is deteriorating continuously in developed countries. Poor power quality, also known as dirty electricity or a ubiquitous pollutant, refers to a combination of harmonics and transients generated primarily by electronic devices and non-linear loads. It flows along wires and radiates from them, and it involves both extremely low frequency electromagnetic fields and radio-frequency radiation. Until recently, dirty electricity was largely ignored by the scientific community, but it is adversely affecting the lives of millions of people.

Recent inventions of metering and filter equipment provide scientists with the tools to measure and reduce dirty electricity on electrical wires. Several case studies and anecdotal reports are presented regarding the existence of electrical pollution in this paper.

Keywords: Electrical pollution, dirty electricity, surge voltage, floating neutral, arc flash hazards, harmonics, electromagnetic spectrum, flickering, flickermeter, Graham-Stetzer (GS) meter and GS filters.

INTRODUCTION

Electrical pollution is not something you can see, smell, taste, or touch, which makes it difficult to be aware of its presence. With this in mind, it is important to understand what causes electrical pollution and what to look for in our everyday environment and home.

“Electrical pollution” [1] is a misused and misunderstood term that has no basis in engineering or electrical science. A variety of normally occurring electrical phenomena arise from our everyday use of electricity, and “electrical pollution” is loosely used to describe them. These phenomena include:

  1. Stray voltage
  2. Electric and magnetic fields
  3. Earth currents
  4. Transients and high frequency noise
  5. Harmonics
  6. Flickering
  7. Surge Voltages
  8. Lightning  
  9. Floating Neutral on power system

Stray voltage arises from current originating both on and off the farm. Stray voltage on farms has been detected by observing behavioral changes in farm animals, and it causes some health problems for humans. It is often a localized condition due to poor grounding of the farm or utility electrical system, or to the failure of some electrical appliance.

Electric and magnetic fields (EMF) are weak, invisible fields of energy that exist around anything that carries or uses electricity. The strength of these fields decreases quickly as you move away from the source.

Earth currents are very low magnitude electrical currents that can be detected in the soil. Natural activity, such as the movement of magma deep within the earth and solar flares, creates some of these currents, while the delivery and use of electrical energy create others. Most earth currents are the result of local electrical loads at the end consumer of electrical power.

Transients and high frequency noise on the wiring of our homes and businesses are created by the use of modern electronic devices such as radios, televisions, cell phones and microwaves. This “noise” is normally very small (a tiny fraction of one volt) compared to the standard 120 volts at a typical wall outlet. As with earth currents, the source of transients and high frequency noise is primarily the end user. Because of the design of the electrical distribution system, this noise cannot be transmitted very far from its source. We occasionally experience this noise as momentary interference (snow) on a television screen or a fuzzy buzz in our communications systems.

Electromagnetic spectrum

(Figure: the electromagnetic spectrum; dirty electricity occupies the 2 kHz–150 kHz band as transient bursts.)

A harmonic is a component of an alternating-current waveform whose frequency is an integer multiple of the fundamental frequency.

The quality of power can be defined simply with the parameters below:

  1. The supply voltage should be within guaranteed tolerances of the declared value (on the order of ±10%). Voltage sags, overvoltages, and long interruptions are undesirable.
  2. The supply frequency must lie within a guaranteed tolerance (about ±3% in the country).
  3. The wave should be a pure sine within allowable limits for distortion.
  4. Sudden voltage distortions must be contained.
  5. The three phases of a 3-Φ system should be balanced.
  6. The earthing system should serve its purpose satisfactorily.

The utility’s main duty is to provide electricity that satisfies all of the above factors, but harmonic injection by various industries is the main cause of poor power quality.
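
Two of the criteria above lend themselves to a direct numeric check. The sketch below is illustrative only: the function names, the assumed 230 V / 50 Hz system, and the single-reading evaluation are this example's assumptions, with the ±10% voltage and ±3% frequency tolerances taken from the list above.

```python
# Illustrative check of two of the power-quality criteria listed above:
# supply voltage within +/-10% of the declared value and frequency
# within +/-3% of nominal. Names and nominal values are assumptions
# for demonstration, not taken from any standard.

def within_tolerance(measured: float, nominal: float, tol: float) -> bool:
    """Return True if `measured` deviates from `nominal` by at most `tol` (fractional)."""
    return abs(measured - nominal) <= tol * nominal

def check_supply(voltage: float, frequency: float,
                 v_nom: float = 230.0, f_nom: float = 50.0) -> dict:
    """Evaluate a single measurement against the voltage and frequency limits."""
    return {
        "voltage_ok": within_tolerance(voltage, v_nom, 0.10),
        "frequency_ok": within_tolerance(frequency, f_nom, 0.03),
    }

result = check_supply(voltage=207.0, frequency=50.4)
print(result)  # both limits are just satisfied here
```

A real monitor would evaluate these limits over averaging windows defined by the applicable standard rather than on single readings.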

Flicker is defined as the impression of unsteadiness of visual sensation induced by a light stimulus whose luminance or spectral distribution fluctuates with time. It is caused by voltage variations in the electrical power system and is an annoyance to human beings; the human eye is its most important responder.

The IEC flickermeter is a lamp-eye-brain model that simulates the physiological sensitivity of a human eye subjected to a reference incandescent lamp of 60 W, 230 V.

Lightning and Surge Voltages Overvoltages are defined as transient voltages or transient surges, meaning that they are short-lived, temporary oscillations. Their shape and frequency depend on the impedance of the circuit. A transient surge is a sudden (shorter than a millisecond) rise in the flow of power; voltages can peak at up to 12 times the nominal system voltage. Transient surges result from a number of sources, the most common of which are internal, such as load switching and even normal equipment operation; in fact, 80% of transients are generated internally. External transients are the result of lightning and of load switching by utilities and upstream facilities. Overvoltages are primarily caused by:

  1. Lightning due to atmospheric discharges.
  2. Transient Switching operations.
  3. Faulty Switching operations.
  4. Electrostatic discharges.

Floating Neutral If the star point of an unbalanced load is not joined to the star point of its power source (distribution transformer or generator), the phase voltages do not remain the same across each phase but vary according to the unbalance of the load.

Because the potential of such an isolated star point, or neutral point, is always changing rather than fixed, it is called a floating neutral. Power flows in and out of a customer’s premises from the distribution network, entering via the phase and leaving via the neutral. If there is a break in the neutral return path, electricity may then travel by a different path: power entering on one phase returns through the remaining two phases. The neutral point is then not at ground level but floats up toward line voltage. This situation can be very dangerous, and customers may suffer serious electric shocks if they touch something where electricity is present.

Broken neutrals can be difficult to detect and in some instances may not be easily identified. They are sometimes indicated by flickering lights or tingling taps; if you notice these symptoms, the underlying fault may cause serious injury or even death.

E-waste [2] The amount of e-waste is influenced by several factors, but it can be broadly classified into two categories: e-waste generated and e-waste imported.

The Centre for Science and Environment reports that the e-waste generated in the country ranges from 350,000 to 400,000 tonnes, and about 50,000 tonnes of e-waste are illegally imported into the country every year. The growth of e-waste in the country is driven by the short end-of-life of electronic and electrical products, due to the frequent release of new models at reasonable and attractive prices, low rates of refurbishing and, most importantly, the lack of recycling infrastructure in the country. This is also due to the imbalance between the waste generated and the waste recycled (252,868 MTA). The Indian government is working in full swing to contain this situation through the implementation of the Hazardous Wastes (Management, Handling and Transboundary Movement) Rules.

In these characterizations of electrical pollution, high frequency signals pollute regular electrical currents traveling in wires and currents through the earth. To better understand the background for the causes of electrical pollution, it is helpful to learn the basics of how the electrical current works.

Direct current, such as battery power, flows steadily in one direction between the terminals.

Alternating current is a wave-like movement of energy that oscillates back and forth while the energy flows toward the load. The rate of oscillation is the frequency. On the electrical grid, the current oscillates 50 times per second, or 50 Hz, and regular “clean” power enters homes, buildings, and offices at 50 Hz. The increased use of electrical power overloads the grid that distributes it. Power is “dirty,” or polluted, when it contains high-frequency signals flowing through overloaded wires in addition to the clean 50 Hz power.
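
The difference between clean and dirty power can be made concrete with a small numeric sketch. Here a unit-amplitude 50 Hz wave is polluted with an assumed 2.5 kHz component, inside the 2 kHz–150 kHz band mentioned earlier; the amplitudes and sampling rate are purely illustrative.

```python
# A minimal numeric sketch of "clean" vs "dirty" power: a 50 Hz sine
# with a small high-frequency (2.5 kHz) component superimposed.
import math

F_FUND = 50.0      # fundamental grid frequency, Hz
F_NOISE = 2500.0   # an assumed high-frequency pollutant, Hz
FS = 50000         # sampling rate, Hz

def dirty_sample(t: float, noise_amp: float = 0.05) -> float:
    """One sample of a unit-amplitude 50 Hz wave plus HF noise."""
    return (math.sin(2 * math.pi * F_FUND * t)
            + noise_amp * math.sin(2 * math.pi * F_NOISE * t))

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One full fundamental cycle (20 ms at 50 Hz).
n = FS // 50
clean = [math.sin(2 * math.pi * F_FUND * i / FS) for i in range(n)]
dirty = [dirty_sample(i / FS) for i in range(n)]

# The HF component raises the RMS only slightly, which is one reason
# dirty electricity is hard to notice on an ordinary voltmeter.
print(round(rms(clean), 4), round(rms(dirty), 4))
```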

The pollution of electricity is often compared to the pollution of water. At the source, water is clean; it is what the water picks up along its path to the recipient that makes it harmful to humans. Like water pollution, however, electrical pollution is complex and often difficult for the ordinary consumer to understand. The causes are varied and sometimes cannot be identified with certainty, but the bulk of the overload can be attributed to today’s reliance on electrical appliances. In the 1950s, the National Electrical Safety Code required a neutral wire to return current to the utility and forbade using the earth as a neutral return. Returning currents to the soil had been a worsening problem in rural farm areas, where they were affecting the feeding of animals. Later, the Public Service Commission allowed utilities to use grounding rods to serve as neutral returns instead of increasing the size of the neutral wires; installing ground rods is a less costly solution than enlarging the neutral wires.

The grounding rods serve as an alternate and additional pathway, through the earth, for current to return to the substation. The use of electricity has increased dramatically in the past 50 years, stressing the electrical infrastructure. Those concerned about electrical pollution say the neutral wires need to be much larger to make sure energy is returned to its source; the currently regulated neutral sizes are not large enough to handle the load arising from the greater use of electricity. The currents that are not properly directed are emitted into the environment, or into the homes and offices where consumers rely on electrical devices.

Neutral wires are often not sized for the modern electrical load. Power that is misdirected into the earth or the home environment contains much higher frequencies than the 50 Hz classification, making it “dirty” or unclean. Given that consumers use significantly more electricity in today’s environment, there is concern that electrical pollution is affecting humans. Those concerned advocate stricter regulations and the widespread use of filters to measure and control “dirty” electricity. Because electrical pollution can come from a variety of sources, the subject is complex and much remains to be learned. In the meantime, some technology has been developed to measure and control electrical pollution, which is especially important for those who have realized they are “electrically sensitive” and attribute health problems to it.

ARC FLASH HAZARDS IN POWER SYSTEM [3]

The consideration of arc flash hazards is a relatively new concern for power system analysis, operation, and design. It is, however, a concern that is rapidly gaining momentum due to increasingly strict worker-safety standards and system-reliability requirements that demand work on live electrical equipment. Recently enacted guidelines and regulations regarding arc flash hazards have focused power-industry attention on quantifying the dangers of arc flash events in low- and medium-voltage electrical equipment. Arc flash hazard analysis is mandatory, and well established, in European countries and the US to ensure the safety of personnel working under live conditions, and it is becoming popular in India as well to ensure system reliability. Since the incident energy from an arcing fault is directly proportional to the arc clearing time, reducing the arc time is very beneficial: it reduces the required personal protective equipment (PPE) level and limits direct damage to the equipment.

An arcing fault is the flow of current through the air between phase conductors, or between a phase conductor and neutral or ground. An arc flash is the ball of fire and molten metal, together with a pressure force or blast, that explodes from an electrical short circuit. Electrical arcs form when a medium that is normally an insulator, such as air, is subjected to an electric field strong enough to ionize it; the ionized medium becomes a conductor that carries the electric current. An arcing fault can release a tremendous amount of concentrated radiant energy at the point of arcing in a small fraction of a second, resulting in extremely high temperatures, a tremendous pressure blast, and shrapnel hurled at high velocity. Arc flash temperatures can easily reach 14,000 to 16,000 °F (7,760 to 8,871 °C), and these temperatures can be reached within seconds of a fault. The heat generated by the high current flow may melt or vaporize material and sustain the arc, creating a brilliant flash, intense heat, and a fast-moving pressure wave that propels the arcing products. Effects of an arcing fault include:

  1. Extreme heat, pressure waves, and sound waves.
  2. Molten metal, shrapnel, and vapor.
  3. Intense light.

Arc flash energy is related to the available fault current and the total clearing time of the overcurrent protective device during a fault. The relationship is not necessarily linear: lower fault currents can sometimes result in a breaker or fuse taking longer to clear, extending the arc duration and raising the arc flash energy. To perform an accurate arc flash hazard analysis, a realistic value for the three-phase bolted fault current and the total clearing time of the affected overcurrent protective device must be known. An arc flash hazard analysis and study is required in installations where workers operate under live conditions.

Arc flash is measured in thermal energy units of cal/cm², referred to in arc flash analysis as the incident energy of the circuit. An exposure of 1.2 cal/cm² on a person’s skin for a short period generally produces a second-degree burn; such a burn is curable but painful, and occurs if the temperature of human skin is raised to 80 °C for 0.1 seconds. Depending on the material, clothing may ignite at temperatures between 370 and 760 °C (700 and 1400 °F). If clothing and equipment are worn that limit the worker’s exposure below the limits identified above, the worker should walk away from an accident with minimal injury.

Types of Faults A bolted short circuit occurs when the normal circuit current bypasses the load through a very low impedance path, resulting in current flow that can be thousands of times the normal current. All equipment needs adequate interrupting ratings to safely contain and clear the high fault currents associated with such faults.

In contrast, an arcing fault is the flow of current through a higher-impedance medium, typically the air, between phase conductors or between a phase conductor and neutral or ground. Arcing-fault currents can be extremely high in magnitude, approaching the short-circuit current, but are typically between 38% and 89% of the bolted-fault current. The inverse time-current characteristics of typical overcurrent protective devices result in longer clearing times for an arcing fault because of the lower fault values. The amount of energy released during an arcing fault depends on the voltage, the current, and the duration of the arc; the arc duration depends on the arcing-fault current magnitude and the protective device settings. Due to its nature, the magnitude of an arcing fault is subject to many variables and is difficult to predict precisely.
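
The 38%–89% range quoted above can be turned into a rough bounding calculation. The sketch below only encodes that rule of thumb; the function name and example current are illustrative, and a detailed IEEE 1584 study would be needed for real design work.

```python
# Sketch of the 38%-89% rule of thumb quoted above: given a bolted
# three-phase fault current, bound the plausible arcing-fault current.
# Not a substitute for a detailed arc flash hazard calculation.

def arcing_current_range(bolted_fault_ka: float,
                         low: float = 0.38, high: float = 0.89):
    """Return (min, max) arcing current in kA for a given bolted fault."""
    return (low * bolted_fault_ka, high * bolted_fault_ka)

lo, hi = arcing_current_range(25.0)   # 25 kA bolted fault, illustrative
print(f"arcing fault likely between {lo:.1f} kA and {hi:.1f} kA")
# The lower bound matters most: a breaker's inverse time-current curve
# may clear the low end far more slowly, increasing the arc energy.
```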

Causes of Electric Arc Arcs can be initiated by the following:

  1. Accidental touching.
  2. Accidental dropping of tools.
  3. Failure of insulating materials.
  4. Improperly designed or utilized equipment.
  5. Improper work procedures.
  6. Spark discharges.
  7. Overcurrent across narrow gaps between conductors of different phases.
  8. Dust and impurities on insulating surfaces.
  9. Fumes or vapors of chemicals.
  10. Corrosion of equipment parts, which weakens the contacts between terminals.
  11. Condensation of vapor and water dripping on insulating surfaces.

Reasons to Address Arc Flash The following are the reasons to address arc flash:

  1. Protect workers from potential burns and prevent loss of life.
  2. Comply with Occupational Safety and Health Administration (OSHA) codes and National Fire Protection Association (NFPA) standards on employee safety, notably NFPA 70E.
  3. Prevent losses to the organization through loss of skilled manpower, litigation fees, and higher insurance costs.
  4. Increase production uptime by reducing accidents.

Classification of Hazard Risk Category NFPA 70E defines five levels of hazard risk category for arc flash based upon the calculated incident energy at the working distance. According to the risk category, different personal protective equipment (PPE) must be worn by persons working near live equipment.

Table 1: Classification of Hazard Risk Category

Category | Energy Level | PPE Requirement | Remarks
0 | N/A | Non-melting, flammable materials | –
1 | 5 cal/cm² | Hard hat; safety glasses or goggles; hearing protection (ear canal inserts); heavy-duty leather gloves | Hazard risk categories should be restricted to category 2 or below, as the required PPE is easy to work in.
2 | 8 cal/cm² | Hard hat; safety glasses or goggles; hearing protection (ear canal inserts); heavy-duty leather gloves | Hazard risk categories should be restricted to category 2 or below, as the required PPE is easy to work in.
3 | 25 cal/cm² | Hard hat; safety glasses or goggles; hearing protection (ear canal inserts); heavy-duty leather shoes | Categories 3 and 4 are not recommended, as the required PPE is bulky.
4 | 40 cal/cm² | Hard hat; safety glasses or goggles; hearing protection (ear canal inserts); heavy-duty leather shoes | Categories 3 and 4 are not recommended, as the required PPE is bulky.

Benefits of Performing an Arc Flash Study Below are some of the benefits of performing an accurate arc flash hazard analysis:

  1. Enhanced system reliability through a proper protective-device coordination study.
  2. An equipment evaluation analysis, which is very important.
  3. Since the system can be modeled in software, future changes or upgrades can be made with minimal expense and effort.
  4. Drastically reduced chances of a serious incident.
  5. The best possible PPE for workers and technicians.
  6. Possibly lower insurance premiums.
  7. An up-to-date electrical system record, with a current one-line diagram.

A harmonic is a component of an alternating-current waveform whose frequency is an integer multiple of the fundamental frequency. The component at twice the fundamental frequency is called the second harmonic, and the component at three times the fundamental is the third harmonic. Complex waveforms are produced by the superposition of a sinusoidal wave at the lowest (fundamental) frequency f and waves whose frequencies are integer multiples of it: waves at frequencies 2f, 4f, 6f are called even harmonics, and those at 3f, 5f, 7f are called odd harmonics.

Types of Harmonics The various types of harmonics are:

  1. Harmonic voltage distortion.
  2. Harmonic current distortion.
  3. Interharmonic voltage and current components.
  4. Subharmonic distortion.

Voltage harmonic distortion is generally present in the power supplied by the utility. Distortion of the current waveform, called current harmonic distortion, is generally injected by non-linear loads back into the utility supply, corrupting it.

Non-Linear Loads When the current drawn by a circuit is non-sinusoidal even though the utility supplies a pure sinusoid, the load is called a non-linear load. Some examples of non-linear loads are:

  1. Transformer saturation.
  2. Thyristor-controlled equipment.
  3. AC/DC, AC/AC, and DC/AC converters.
  4. Battery chargers.
  5. Electronic and medical test equipment.
  6. PCs and office machines.
  7. Induction heaters.
  8. Synchronous machines (non-sinusoidal air-gap flux).

Evaluation of Harmonic Distortion Any real waveform can be produced by adding sine waves together, and it can be shown that this combination is unique. Note that combining the fundamental and a third-harmonic component for two cases (in phase and out of phase) results in two waveforms of unchanged amplitude but different shape: when the odd harmonics are in phase with the fundamental component, the distorted resultant approaches a square wave, and when the harmonic is shifted by 90° in phase, the distorted resultant becomes more spike-like.
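
This behavior is easy to verify numerically. The sketch below adds a third harmonic at 20% of the fundamental (an illustrative amplitude) once in phase and once shifted by 90°, and compares the resulting peaks.

```python
# Numeric illustration of the paragraph above: a third harmonic in
# phase with the fundamental flattens the wave toward a square shape
# (peak below 1), while a 90-degree shift makes the resultant peakier
# (peak above 1). The 20% harmonic amplitude is illustrative.
import math

def resultant_peak(phase_shift_rad: float, amp3: float = 0.2) -> float:
    """Peak of fundamental + third harmonic over one cycle."""
    n = 10000
    return max(
        math.sin(x) + amp3 * math.sin(3 * x + phase_shift_rad)
        for x in (2 * math.pi * i / n for i in range(n))
    )

flat = resultant_peak(0.0)            # in phase: flattened crest
spiky = resultant_peak(math.pi / 2)   # 90 degree shift: sharper crest
print(flat < 1.0 < spiky)
```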

Table 2: Evaluation of Harmonic Distortion

Order | Frequency (Hz) | Sequence | Relative Voltage (%)
1 | 50 | + | 3
2 | 100 | − | –
3 | 150 | 0 | 5
4 | 200 | + | –
5 | 250 | − | 6
6 | 300 | 0 | –
7 | 350 | + | 5
8 | 400 | − | –
9 | 450 | 0 | 1.5
10 | 500 | + | –
11 | 550 | − | 3.5
12 | 600 | 0 | –
13 | 650 | + | 3.5
14 | 700 | − | –
15 | 750 | 0 | 0.3
17 | 850 | − | 2
19 | 950 | + | 1.5
21 | 1050 | 0 | 0.5
23 | 1150 | − | 1.5
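
Two patterns in Table 2 can be captured in a few lines: the phase sequence repeats +, −, 0 with the harmonic order, and the listed relative voltages combine into a total harmonic distortion figure as the root-sum-square of the individual percentages. The helper names below are illustrative; the voltage values are the odd-harmonic entries from the table.

```python
# The sequence column of Table 2 follows a simple pattern: orders
# 1, 4, 7, ... are positive-sequence, 2, 5, 8, ... negative-sequence,
# and triplen orders 3, 6, 9, ... zero-sequence.

def harmonic_sequence(order: int) -> str:
    """Phase sequence of the h-th harmonic in a balanced three-phase system."""
    return {1: "+", 2: "-", 0: "0"}[order % 3]

# Odd-harmonic relative voltages (% of fundamental) from Table 2.
RELATIVE_VOLTAGE = {3: 5.0, 5: 6.0, 7: 5.0, 9: 1.5, 11: 3.5,
                    13: 3.5, 15: 0.3, 17: 2.0, 19: 1.5, 21: 0.5, 23: 1.5}

# Total harmonic distortion: root-sum-square of the components.
thd = sum(v ** 2 for v in RELATIVE_VOLTAGE.values()) ** 0.5  # percent
print(harmonic_sequence(5), harmonic_sequence(9), round(thd, 1))
```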

Effects of Harmonics The effects of harmonics on the power system are as follows:

  1. Overvoltages and excessive currents due to series and parallel resonance.
  2. Increased losses and consequent heating of transformers and rotating machines.
  3. Overloading of power-factor-correction shunt capacitors, with excessive current leading to fuse blowing.
  4. Ageing of the insulation of electrical equipment, and hence reduced power-system efficiency.
  5. Increased errors in energy meters.
  6. Malfunctioning of protective gear such as relays and circuit breakers.
  7. Inductive interference with neighboring communication networks.
  8. Tripping of machines at smaller loads.
  9. Fire hazards.

Sources of Harmonics [4] Harmonic pollution in the power system is generally caused by non-linear loads. The sources of harmonics in the power system can be classified as follows:

a) Harmonic originated at high voltages by supply authorities:

  1. HVDC system.
  2. Back to back system.
  3. Static VAR compensation system.
  4. Wind and Solar power converters with interconnection.

b) Harmonics originating at medium voltage from large industrial loads such as traction equipment, variable-speed drives, thyristor-controlled drives, induction heaters, arc furnaces, arc welding, capacitor banks, and electronic energy controllers.

c) Harmonics originating at low voltage at the consumer’s end, such as single-phase loading, uninterruptible power supplies, semiconductor devices, CFLs, solid-state devices, domestic appliances and accessories using electronic devices, electronic chokes, and electronic fan regulators/light dimmers.

Solutions for Minimizing Harmonic Current Effects [5]

  1. Over-sizing or derating the installation: This solution does not eliminate the harmonic currents flowing in the low-voltage (less than 1000 V AC) distribution system but masks the problems and avoids the consequences. The most widely implemented form in new installations is over-sizing the neutral conductor.
  2. Specially connected transformers: This solution eliminates third-order harmonic currents. It is a centralized solution for a set of single-phase loads.
  3. Series reactors: This solution consists of connecting a reactor in series with the non-linear load.
  4. Tuned passive filters: A filter may be installed for one load or a set of loads. The filter rating must be coordinated with the reactive power requirements of the loads.
  5. Active harmonic filters: Active harmonic filters inject a current component that cancels the harmonic components of the non-linear loads; variants include series filters, parallel filters, and hybrid filters.
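
For the tuned passive filter (solution 4), the reactor value follows from the resonance condition f = 1/(2π√(LC)). The sketch below sizes the inductor for an assumed 100 µF capacitor tuned to the 5th harmonic of a 50 Hz system; the values are illustrative, and real designs typically detune slightly below the harmonic and must check the capacitor's reactive power rating.

```python
# Sizing sketch for a tuned passive filter: given a capacitor chosen
# for reactive power needs, pick the series reactor so the LC branch
# resonates at the harmonic to be absorbed. Values are illustrative.
import math

def tuning_inductance(c_farad: float, harmonic_order: int,
                      f_fund: float = 50.0) -> float:
    """Inductance (H) that tunes the LC branch to harmonic_order * f_fund."""
    f_tune = harmonic_order * f_fund
    return 1.0 / (((2 * math.pi * f_tune) ** 2) * c_farad)

# Example: 100 uF capacitor, tuned to the 5th harmonic (250 Hz).
L = tuning_inductance(100e-6, 5)
f_check = 1.0 / (2 * math.pi * math.sqrt(L * 100e-6))
print(f"L = {L*1e3:.2f} mH, resonates at {f_check:.0f} Hz")
```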

Lightning and Surge Protection [6]

Bolts of lightning carry extremely high currents, so they cause a large voltage drop and, accordingly, a large rise in potential even in a well-earthed building or system, despite a low earthing resistance.

Overvoltages due to direct lightning strokes When lightning strikes a lightning conductor or the grounded roof of a building, the lightning current is dissipated into the ground. The impedance of the ground and the current flowing through it create a large difference of potential, called an overvoltage. This overvoltage then propagates throughout the equipment via the cables, damaging equipment and possibly causing explosions.

Overvoltages due to indirect effects of lightning strokes Overvoltages are also produced when lightning strikes in the vicinity of a building, due to the rise in ground potential at the point of impact. The electromagnetic fields created by the lightning currents generate inductive and capacitive coupling, leading to further overvoltages. Within a radius of up to several kilometers, the electromagnetic field caused by lightning in clouds can also create sudden rises in voltage, causing irreparable damage to sensitive equipment such as fax machines, computer power supplies, and safety and communication systems.

A lightning strike (direct or indirect) can have a destructive and disturbing effect on installations located up to several miles from the actual point of the strike. During a storm, underground cables can transmit energy from a lightning strike to equipment installed inside buildings. A lightning protection device such as a lightning rod or Faraday cage, installed on a building to protect against the risk of a direct strike, can increase the damage to electrical equipment connected to the main power supply near or inside the building.

Conductive coupling Overvoltages are transferred directly into circuits via common earthing impedances. The magnitude of the overvoltage depends on the amperage of the lightning and the earthing conditions. The frequency and wave behavior are mainly determined by the inductance and the speed of the current rise. Even distant lightning strikes can lead to overvoltages in the form of travelling waves, which affect different parts of electric systems through conductive coupling.

Inductive coupling A high-ampere lightning strike generates a strong magnetic field. From there, overvoltages reach nearby circuits by means of induction, according to the transformer principle, e.g. via directly earthed conductors, power supply lines, data lines, etc.

Capacitive coupling Capacitive coupling of overvoltages is also possible. The high voltage of the lightning generates an electric field of high strength, which can couple capacitively to circuits at lower potentials and raise the potential concerned to an overvoltage level.

Radiation coupling Electromagnetic wave fields (E/H fields), which also arise during lightning (far-field conditions, with the E and H field vectors perpendicular to each other), affect conductor structures in such a way that coupled overvoltages must be expected even without direct lightning strikes. Permanent wave fields from strong transmitters are also able to cause coupled interference voltages in lines and circuits.

Lightning Protection of Motors During lightning strikes and the switching of circuit breakers, re-strikes and pre-strikes during opening and closing may produce a broadband spectrum of overvoltage frequencies. Such overvoltages may have an oscillating character and a lower amplitude at the motor terminals. They may also arrive at the motor input without any change in amplitude or waveform, so that lightning arresters will not detect any overvoltage near the terminals or on the input side. If an oscillating frequency component of the external overvoltage equals the natural frequency of the windings, the internal resonance overvoltage reaches its maximum magnitude. Overvoltages generated by transients and lightning strikes in electrical power equipment may therefore be dangerous to the insulation system despite the applied overvoltage protection. To ensure protection from transient surges, the installation of lightning protective devices is a must.

Surge Protection Pumping stations are particularly vulnerable to lightning-induced voltage surges on incoming power lines, since it is characteristic of their operation to be in use during thunderstorms. Special care should therefore be taken to reduce the magnitude of these voltage surges to avoid major damage to the electrical equipment they contain. A small investment can greatly reduce the voltage stresses imposed on rotating machines and switchgear by lightning-induced surges. There are two transient elements of a voltage surge that require different protective equipment. Protection of the major insulation to ground is accomplished by station surge arresters, which limit the amplitude of the applied impulse wave and of its reflections within the power windings.

Medium voltage motors To obtain the most reliable protection of the motor’s major and turn insulation systems, a set of arresters and capacitors should be installed as close as possible to the motor terminals. The arresters should be valve-type, station-class units designed for this protection. The leads from the motor phase to the capacitor and from the capacitor to ground should be as short as possible.

Low voltage motors Motors rated 600 V and below have relatively higher dielectric strength than medium-voltage motors. Normally, when higher-speed motors of this voltage class are fed through a transformer protected by a station-class arrester on the primary side, no additional protection is required. However, given the more expensive slower-speed motors employed in pump stations, plus the critical nature of these motors, the minimal additional cost of lightning protection is justified. A three-phase valve-type low-voltage arrester should be provided at the service entrance to the station, and a three-phase capacitor should be provided at each motor's terminals.

Surge capacitors [7] have commonly been used as protective devices to mitigate transients. The combination of surge capacitors and surge arresters has been used to protect medium-voltage motors from steep-fronted voltage surges; the surge capacitor reduces the steepness of the surge front (it lengthens the rise time), easing the stress on the turn insulation. The following points must be observed to guarantee fault-free operation of an electrical drive:

  1. Correct design: a suitable motor has to be selected for each application.
  2. Professional operation: professional installation and regular maintenance are preconditions for such operation.
  3. Good motor protection, covering all possible problem areas:
  • It must not trip before the motor is put at risk.
  • If the motor is put at risk, the protection device has to operate before any damage occurs.
  • If damage cannot be prevented, the protection device has to operate quickly in order to restrict the extent of the damage as much as possible.
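The effect of a motor-terminal surge capacitor on a steep-fronted surge can be estimated with a first-order RC model: the incoming cable's surge impedance and the capacitor form an RC stage whose 10%-90% rise time is roughly 2.2·Z·C. A minimal sketch (the 50 Ω surge impedance and 0.25 µF capacitance are illustrative assumptions, not values from the source):

```python
def surge_rise_time(surge_impedance_ohms, capacitance_farads):
    """10%-90% rise time of a step surge arriving through a line of the
    given surge impedance into a motor-terminal surge capacitor,
    using the first-order RC approximation t_r = 2.2 * Z * C."""
    return 2.2 * surge_impedance_ohms * capacitance_farads

# Assumed values: 50-ohm cable surge impedance, 0.25 uF surge capacitor
t_r = surge_rise_time(50, 0.25e-6)
print(f"{t_r * 1e6:.1f} us")  # 27.5 us
```

A larger capacitance gives a slower-rising wavefront at the terminals, which distributes the impulse voltage more evenly across the winding turns.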

Impact of Floating Neutral in Power Distribution [8]

If the star point of an unbalanced load is not joined to the star point of its power source (distribution transformer or generator), the phase voltages do not remain equal across the phases; they vary according to the unbalance of the load.

Because the potential of such an isolated star (neutral) point is always changing rather than fixed, it is called a floating neutral. Power flows into and out of a customer's premises from the distribution network, entering via the phase and leaving via the neutral. If there is a break in the neutral, the return current may travel by a different path: power entering on one phase returns through the remaining two phases. The neutral point is then not at ground potential but floats up toward line voltage. This situation can be very dangerous, and customers may suffer serious electric shocks if they touch something that is energized.
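The neutral shift of an unbalanced star load with an isolated star point can be computed with Millman's theorem, V_N = ΣV_k·Y_k / ΣY_k. A minimal sketch (the 230 V phase voltage and the admittance values are illustrative assumptions):

```python
import cmath
import math

def floating_neutral_voltage(v_phase, admittances):
    """Neutral-point shift of an unbalanced star load whose star point is
    NOT tied to the source neutral, by Millman's theorem:
    V_N = sum(V_k * Y_k) / sum(Y_k)."""
    # Balanced three-phase source phasors, 120 degrees apart
    sources = [cmath.rect(v_phase, math.radians(a)) for a in (0, -120, 120)]
    num = sum(v * y for v, y in zip(sources, admittances))
    den = sum(admittances)
    return num / den

# Balanced load: the star point stays at ~0 V
print(abs(floating_neutral_voltage(230, (0.1, 0.1, 0.1))))

# Heavily unbalanced load: the star point floats well away from ground
print(abs(floating_neutral_voltage(230, (0.5, 0.05, 0.05))))  # ~172.5 V
```

The balanced case returns essentially zero; with the unbalanced admittances the neutral floats to roughly 172 V, so the lightly loaded phases see a phase-to-neutral voltage well above nominal.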

Normal Power Condition and Floating Power Condition

Normal power condition On 3-Φ systems, the star point and phases tend to "balance out" based on the ratio of each phase's leakage to earth; the star point remains close to 0 V, depending on the distribution of the load and the resulting leakage. 3-Φ systems may or may not have a neutral wire. A neutral wire allows a three-phase system to use a higher voltage while still supplying lower-voltage single-phase appliances. In high-voltage distribution it is common not to have a neutral wire, since loads can simply be connected between phases.

The neutral should never be connected to ground except at the point of service where the neutral is initially grounded (at the distribution transformer). Otherwise, the ground becomes a path for current to travel back to the service, and any break in that ground path would then expose a voltage potential. Grounding the neutral in a 3-Φ system helps stabilize the phase voltages. A non-grounded neutral is sometimes called a floating neutral and has limited applications.

Floating power conditions As described above, a break in the neutral forces the return current onto alternative paths: power entering on one phase leaves through the other two phases, and the neutral point floats up toward line voltage. This situation can be very dangerous, and customers may suffer serious electric shocks. Broken neutrals can be difficult to detect and in some instances may not be easily identified; they are sometimes indicated by flickering lights or "tingling" taps. Such symptoms should be taken seriously, since contact with an energized fixture can cause serious injury or even death.

Various factors that cause a floating neutral Several factors have been identified as causes of a floating neutral. The impact of a floating neutral depends on the position where the neutral is broken.

At the 3-Φ Distribution Transformer

  • Neutral failure at the transformer is mostly failure of the neutral bushing.
  • The use of a line tap on the transformer bushing has been identified as the main cause of neutral conductor failure at the bushing: the conductor starts melting and the neutral eventually breaks off.
  • Poor workmanship by installation and technical staff is also one of the causes of neutral failure.
  • A broken neutral on a 3-Φ transformer will cause the phase voltages to float up toward line voltage, depending on the load balance of the system. This may damage customer equipment connected to the supply.
  • Some customers will experience overvoltage while others will experience undervoltage.

Broken overhead Neutral conductor in LV line

  • The impact of a broken overhead neutral conductor in the LV overhead distribution is similar to a break at the transformer.
  • The supply voltage floats up toward line voltage instead of phase voltage. This type of fault condition may damage customer equipment connected to the supply.

Broken Service Neutral conductor

  • A broken neutral in the service conductor results only in loss of supply at the customer's point of connection, without damage to customer equipment.

High Earthing Resistance of Neutral at Distribution Transformer

  • A good (low) earthing resistance at the neutral's earth pit provides a low-resistance path for neutral current to drain into the earth; a high earthing resistance provides only a high-resistance path for grounding the neutral at the distribution transformer.
  • Keep the earth resistance low enough to permit adequate fault current for timely operation of protective devices and to reduce neutral shifting.
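The effect of electrode resistance on the available earth-fault current is a simple Ohm's-law estimate. A sketch (the voltage and resistance values are illustrative assumptions, not values from the source):

```python
def earth_fault_current(phase_voltage, earth_resistance, fault_resistance=0.0):
    """Ohm's-law estimate of the current driven through the transformer's
    neutral earth electrode by a phase-to-earth fault. A high electrode
    resistance limits this current and can keep protective devices
    from operating in time."""
    return phase_voltage / (earth_resistance + fault_resistance)

# Low earth resistance: ample fault current to operate protection
print(earth_fault_current(230, 1.0))   # 230.0 A
# High earth resistance: fault current may fall below the trip threshold
print(earth_fault_current(230, 25.0))  # 9.2 A
```

With a 25 Ω electrode the fault current is only about 9 A, which a typical overcurrent device may never see as a fault; hence the requirement to keep earthing resistance low.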

Overloading and Load Unbalancing

  • Distribution network overloading combined with poor load distribution is one of the most common causes of neutral failure.
  • The network should be designed and balanced so that minimal current flows in the neutral conductor.
  • In an overloaded, unbalanced network, a large current flows in the neutral, which breaks the neutral at its weakest point.

Poor Workmanship and maintenance

  • LV networks are often given little attention by maintenance staff. A loose or inadequately tightened neutral conductor affects the continuity of the neutral and may cause it to float.

How to detect a floating neutral condition in a panel Assume the transformer secondary is star connected, with phase-to-neutral = 240 V and phase-to-phase = 440 V.

Condition 1: Neutral is not floating Under balanced conditions, the measured voltages will match the values above.

Condition 2: Neutral is floating

All appliances connected In this condition, the neutral wire carries a small current flowing from the phase supply through the plugged-in appliances, so the neutral appears live.

All appliances disconnected In this condition, the neutral will no longer appear live, because there is no longer any path from it to the phase supply. Typical readings under a floating neutral:

  • Phase-to-phase voltage: 440 V AC
  • Phase-to-neutral voltage: 110 V to 330 V AC
  • Neutral-to-ground voltage: 110 V
  • Phase-to-ground voltage: 120 V
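A panel check along these lines can be automated: with a healthy neutral, all three phase-to-neutral readings sit near nominal, while a broken neutral spreads them widely (some high, some low). A hypothetical helper sketch (the function name, 240 V nominal, and ±10% tolerance are assumptions, not from the source):

```python
def check_neutral(phase_to_neutral_volts, nominal=240.0, tolerance=0.1):
    """Flag a possible floating/broken neutral from the three measured
    phase-to-neutral voltages. Healthy: all readings within the tolerance
    band around nominal. Broken: readings spread far above and below it."""
    low, high = nominal * (1 - tolerance), nominal * (1 + tolerance)
    if all(low <= v <= high for v in phase_to_neutral_volts):
        return "OK"
    return "POSSIBLE FLOATING NEUTRAL"

print(check_neutral([238, 241, 239]))   # OK
print(check_neutral([110, 330, 250]))   # POSSIBLE FLOATING NEUTRAL
```

The second reading set mirrors the 110-330 V spread listed above, which is the signature of a neutral that has floated away from ground.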

Methods of eliminating a floating neutral Several measures help prevent a floating-neutral situation:

  1. Use a 4-pole circuit breaker/ELCB/RCBO in the distribution panel, which trips the circuit without damage to the system.
  2. Use voltage stabilizers with a wide input voltage range and high/low cutoff.
  3. Good workmanship and maintenance.

HEALTH EFFECTS OF ELECTRICAL POLLUTION [10]

While "electrical pollution" is not a scientific term, considerable research and many case studies have examined the connection between electrical pollution and human health. Some findings are summarized below:

  • Wires and transformers not only deliver the power that runs electrical devices but can also carry dangerous high-frequency currents. High-frequency currents, most commonly created by computers and other electronic devices, circulate through the wiring of homes and offices. Health problems that have been attributed to electrical pollution include fibromyalgia, attention deficit disorder, asthma, chronic fatigue syndrome, diabetes, multiple sclerosis, and migraine headaches.
  • Reduced blood glucose levels.
  • Improved diabetes.
  • Improved asthma.
  • Improved multiple sclerosis.
  • Higher pH (reduced acidity).
  • Increased headache frequency.
  • Improved teacher and student well-being.
  • Reduced insomnia.
  • Higher dirty electricity has been correlated with increased cancer incidence.
  • Fatigue.
  • Chills.
  • Fever and dry throat on a consistent basis.
  • Reduced sleep.
  • Distortion arises internally from electronic equipment, for example when DC power is converted to AC. A distorted 50 Hz wave is a normal 50 Hz current polluted by high-frequency voltages and currents.

A filter and a meter have been created to measure and control electrical pollution. The filters are inexpensive and have proven effective in keeping harmful high-frequency currents from entering homes or offices. The Graham-Stetzer (GS) meter and GS filters [11] are the most common tools for measuring and reducing electrical pollution. The technology behind the GS meters and filters is based on electromagnetic theory and power engineering principles. The filters provide a low-impedance path for high-frequency currents from the hot wire(s) to the neutral wire, bypassing the customer loads. Filtering the frequency range from 4 kHz to 100 kHz provides optimal results for cleaning the electricity; frequencies above 100 kHz or below 4 kHz are not effectively attenuated by the filters.

Averaging or RMS meters measure the total amount of electricity present, but GS meters are designed to measure the amount of potentially harmful electricity present. Electrical current enters the body more readily at higher frequencies, and body current at those higher frequencies can be harmful. The GS meter measures these currents by summing the frequency content above 60 Hz.
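The idea of summing frequency content above the fundamental can be sketched with an FFT-based high-pass RMS. This is only a rough analogue of a GS meter (whose actual response is dV/dt-based over roughly the 4-100 kHz band); the sampling rate, amplitudes, and 60 Hz cutoff here are illustrative assumptions:

```python
import numpy as np

def high_frequency_rms(samples, sample_rate, cutoff_hz=60.0):
    """RMS of the signal content above cutoff_hz: zero out the DC and
    fundamental bins in the spectrum, transform back, and take the RMS
    of what remains ('dirty' high-frequency content)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum[freqs <= cutoff_hz] = 0.0          # discard DC and fundamental
    filtered = np.fft.irfft(spectrum, n=len(samples))
    return float(np.sqrt(np.mean(filtered ** 2)))

fs = 100_000                                    # 100 kHz sampling, 1 second
t = np.arange(fs) / fs
clean = 170 * np.sin(2 * np.pi * 60 * t)        # pure 60 Hz fundamental
dirty = clean + 5 * np.sin(2 * np.pi * 10_000 * t)  # plus 10 kHz 'pollution'
print(high_frequency_rms(clean, fs))            # ~0
print(high_frequency_rms(dirty, fs))            # ~3.54, i.e. 5/sqrt(2)
```

The clean waveform reads essentially zero, while the polluted one reads the RMS of its 10 kHz component alone, mirroring how a dirty-electricity reading ignores the fundamental.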

CONCLUSION

Electrical pollution, otherwise known as "dirty electricity," is a term used to describe an electrical phenomenon occurring worldwide. The phenomenon is not widely known and can be complex to understand, but research and case studies suggest that consumers should learn about electrical pollution: how it is measured and controlled, its reported health effects, and how the public can be protected from it. With this in mind, it is important to understand what causes electrical pollution and what to look for in the everyday home and work environment. Many people report a variety of symptoms attributed to dirty power, including headaches, ringing in the ears, trouble focusing, and others; anyone suffering such symptoms may want to discuss them with a doctor. Electrical pollution can be controlled with special filters designed by Graham and Stetzer. Graham-Stetzer (GS) filters can help reduce the harmful high-frequency electricity that enters home or office environments. The GS filters work best when the utility provides an adequate neutral conductor, that is, one sized beyond standard utility practice for meeting thermal or voltage-regulation requirements. The perspectives above cover the major issues responsible for electrical pollution in our environment and the solution techniques available for implementation.

References

  1. Magda Havas, "Health Concerns Associated with Energy Efficient Lighting and Their Electromagnetic Emissions", Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR), June 2008.
  2. S. Durairaj & Revathy Subhiah Rajaram, "E-Waste Recycling: A Potential Business Opportunity in India", Electrical India, January 2014.
  3. Mrugen Sheth & Amrita Tondon, "Understanding of Arc Flash Hazard in Power System", Electrical India, September 2014.
  4. V. V. Khatavkar & S. N. Chapbekar, "Study of Harmonics in Industries: A Power Quality Aspect", Electrical India, November 2006.
  5. Sachin K. Jain & S. N. Singh, "Estimation of Grid Harmonics in the Modern Electric Power Systems", Electrical India, July 2012.
  6. Nagarjun Y, "Lightning and Surge Protection for Motors", Electrical India, July 2013.
  7. R. Ramanujam, "An Introduction to Power System Stability, Control & Operation", ISTE-WPLP Learning Material Series, 2014, p. 75.
  8. Jignesh Parmar, "Impact of Floating Neutral in Power Distribution", Electrical India, February 2014.
  9. Rabinarayana Parida, Pranati Panigrahi & Bibhu Prasad Nanda, "Flicker Meter and Its Application for Lightning Apparatus", Electrical India, June 2013.
  10. The National Foundation for Alternative Medicine, "The Health Effects of Electrical Pollution", 1629 K Street NW Suite 402, Washington DC, 2006.
  11. Stetzer Electric, "Manufacturers of Dirty Electricity Filters and Meters", pp. 6-8, www.dirtyelectricity.org, 2005.