*by Bob Shively, Enerdynamics President and Lead Instructor*

One topic we often spend time discussing in our classes is the concept of capacity factor. It is important because it describes how often a given power plant is putting electricity onto the grid. If you don’t fully understand the concept, don’t feel bad – it’s a common situation, and with just a bit of explanation we can make it clear.

**Units of Capacity and Energy**

First, you need to be sure you understand the concepts of capacity and energy and the units associated with them. If you need a review or explanation, please take a moment to view this video:

Given that you now understand capacity and energy, let’s discuss how these apply to capacity factor.

**Rated Capacity**

Each generating unit has a rated capacity, also known as its maximum power rating. This quantity defines the maximum power in megawatts (MW) that the unit is designed to provide to the grid. While the unit may be able to produce electricity at a higher level, doing so reduces its life, so units are rarely run beyond their maximum rating. However, many units can be operated at levels well below their rated capacity. For example, an operator may have a 300 MW rated unit but need only 200 MW at a certain point in time. The unit is then operated at 200 MW, even though it could actually produce 300 MW.

**Energy**

The amount of electricity put onto the grid over time is called energy and is determined by the unit’s actual operating level multiplied by the amount of time the unit runs. This quantity is typically stated in megawatt-hours (MWh). For instance, if the 300 MW unit is run at 200 MW for two hours, it will have an output of 200 MW x 2 hours, or 400 MWh.
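The calculation above is simple multiplication; here is a minimal sketch using the same numbers:

```python
# Energy (MWh) = operating level (MW) x run time (hours).
# Numbers come from the example above: a 300 MW rated unit run at 200 MW.
operating_level_mw = 200
hours = 2
energy_mwh = operating_level_mw * hours
print(energy_mwh)  # 400 (MWh)
```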

**Capacity Factor**

The ratio of a unit’s actual output to its maximum possible output at its rated capacity is called capacity factor. In the example of the 300 MW unit whose output was 400 MWh over two hours, the maximum possible output would have been 300 MW x 2 hours, or 600 MWh. So the capacity factor of the unit for those two hours was 400 MWh divided by 600 MWh, or 67%. Capacity factor is used to determine how fully a unit’s capacity is utilized.
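Putting the same calculation in code form:

```python
# Capacity factor = actual output / maximum possible output over the same period.
rated_capacity_mw = 300
hours = 2
actual_output_mwh = 400                      # from the example above
max_output_mwh = rated_capacity_mw * hours   # 600 MWh
capacity_factor = actual_output_mwh / max_output_mwh
print(f"{capacity_factor:.0%}")  # 67%
```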

**Why It Matters**

Capacity factors vary significantly by unit type. Based on Energy Information Administration (EIA) data for 2010, here are U.S. capacity factors by fuel type:

| Fuel Type | Capacity Factor |
| --- | --- |
| Coal | 62% |
| Petroleum | 7% |
| Natural Gas | 24% |
| Nuclear | 86% |
| Hydro | 38% |
| Wind | 27% |
| Solar | 15% |
| Geothermal | 58% |
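One way to read this table is as annual energy per MW of installed capacity. A short sketch, assuming a non-leap year of 8,760 hours and the 2010 EIA figures above:

```python
# Annual energy implied by each capacity factor, per MW of installed capacity.
# Capacity factors are the 2010 EIA figures from the table above.
capacity_factors = {
    "Coal": 0.62, "Petroleum": 0.07, "Natural Gas": 0.24, "Nuclear": 0.86,
    "Hydro": 0.38, "Wind": 0.27, "Solar": 0.15, "Geothermal": 0.58,
}
HOURS_PER_YEAR = 8760  # hours in a non-leap year

for fuel, cf in capacity_factors.items():
    annual_mwh = cf * HOURS_PER_YEAR  # MWh per MW-year
    print(f"{fuel}: {annual_mwh:,.0f} MWh per MW-year")
```

A MW of nuclear capacity at an 86% capacity factor delivers roughly 7,500 MWh per year, versus roughly 600 MWh for a MW of petroleum capacity at 7%.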

One thing this tells us is that a dollar invested in nuclear capacity, for example, buys significantly more energy than a dollar invested in petroleum capacity. For units owned by utilities, this matters to ratepayers because the capital costs of a unit with a high capacity factor can be spread over more kWh, resulting in lower rates required to recover those capital costs. Similarly, for units owned by merchant generators in competitive markets, the owner has more kWh over which to recover a reasonable return on investment and thus can charge a lower price for the output.
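The cost-spreading effect can be illustrated with a quick calculation. The dollar figures and the 500 MW unit size below are hypothetical, chosen only to show how capacity factor drives the per-MWh capital recovery charge:

```python
# Spreading the same annual capital cost over more MWh lowers the cost
# that must be recovered per MWh. All numbers here are illustrative.
annual_capital_cost = 50_000_000   # $/year, hypothetical
capacity_mw = 500                  # hypothetical unit size
HOURS_PER_YEAR = 8760

for cf in (0.86, 0.07):            # nuclear-like vs. petroleum-like capacity factors
    annual_mwh = capacity_mw * HOURS_PER_YEAR * cf
    cost_per_mwh = annual_capital_cost / annual_mwh
    print(f"Capacity factor {cf:.0%}: ${cost_per_mwh:,.2f} per MWh")
```

At an 86% capacity factor the hypothetical unit recovers its capital at about $13 per MWh; at 7%, the same capital cost works out to over $160 per MWh.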

So it is clear that high capacity factors make a unit more attractive when investing in new power plants. But for existing units, the data above tells us at least one more interesting fact. Since natural gas power plants ran at only a 24% capacity factor, there is plenty of room for gas units to provide more power to the grid. This means that, at least on average across the U.S., natural gas has significant potential to reduce the output of coal power as gas prices fall and coal units must spend more money on environmental compliance[1].

[1] Interested in learning more about the impacts of environmental compliance on electric markets? Inquire about Enerdynamics’ latest seminar offering – Power Emissions Regulation and Markets – now available for groups within your organization.