Behavior programs, in the form of home energy reports (HERs), have provided consistent savings of 1-2% for residential customers for over a decade, helping customers reduce their monthly bills and helping utilities meet their energy savings goals.
An essential characteristic of many behavior programs is the randomized control trial (RCT) design. The opt-out RCT design—considered the ‘gold standard’ of evaluation—is a cornerstone for behavior programs, and for good reason:
- Through randomization, the opt-out RCT provides an unbiased measurement of savings. Comparing change in energy use between the treatment and control groups mitigates self-selection effects as well as exogenous factors (economy, weather, energy rates) that might affect energy use.
- The opt-out design allows the utility to include large numbers of customers (tens of thousands), which gives more customers exposure to energy conservation messaging and education. While savings per individual home are small, across tens of thousands of homes the savings account for a notable portion of residential energy efficiency portfolios.
- The randomization and large treatment and control group sizes provide the statistical power to measure statistically significant savings even if the savings for each individual home are small. Evaluators have validated results from hundreds of customer cohorts. Meta-analysis and research reviews have further validated savings.
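The measurement logic behind these points can be illustrated with a short sketch. This is a hypothetical difference-in-differences calculation on simulated usage data; the group sizes, usage figures, and the assumed 1.5% treatment effect are all made up for demonstration, not drawn from any real program.

```python
import random

random.seed(0)

# Hypothetical illustration: estimate HER savings with a
# difference-in-differences on simulated monthly usage (kWh).
# All numbers are invented for demonstration purposes.

n = 20000  # customers per group (opt-out RCTs are often this large)

def simulate(n, treated):
    """Return (pre, post) usage pairs for n simulated customers."""
    rows = []
    for _ in range(n):
        pre = random.gauss(900, 150)          # pre-period usage
        post = pre * random.gauss(1.0, 0.05)  # exogenous drift (weather, economy)
        if treated:
            post *= 0.985                     # assumed true 1.5% savings effect
        rows.append((pre, post))
    return rows

treat = simulate(n, treated=True)
ctrl = simulate(n, treated=False)

def mean_change(rows):
    return sum(post - pre for pre, post in rows) / len(rows)

# Difference-in-differences: the control group's change nets out
# exogenous factors, so the remaining difference is the program effect.
did = mean_change(treat) - mean_change(ctrl)
baseline = sum(pre for pre, _ in ctrl) / n
pct_savings = -did / baseline * 100
print(f"Estimated savings: {pct_savings:.2f}% of baseline usage")
```

Because each home's effect is tiny relative to the noise in its usage, the large group sizes are what let the averaged difference emerge as statistically meaningful.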
Behavior programs are a black box compared to other energy efficiency programs, such as home weatherization or heating and cooling equipment replacement. In behavior programs, implementers send home energy reports, customers take one or more of many possible actions, and evaluators then validate energy savings. While we have a general understanding of how customers are likely to act, we lack the specificity of equipment or shell improvement programs, which identify precisely what is reducing energy use. Relying on rigorous evaluation methods has therefore been the most appropriate approach for behavior programs.
However, the strengths of these evaluation approaches can also be longer-term limitations that may be stifling opportunity. For example:
- The opt-out RCT design is not suitable for offerings that require proactive action by the customer, such as registering for an online portal, behavioral demand response, games, and alerts, rather than passively receiving a report.
- The RCT design excludes a portion of customers from participation. Tens of thousands of customers are relegated to the control group. It is increasingly difficult for utilities to deny customers access to tools that can help them save energy and money.
- More utilities are offering a slate of opt-in tools and programs to help customers manage their energy use. It becomes more complicated—or not possible—to create distinct treatment and control groups where customers may have access to online portals, high bill alerts, different billing options (pre-pay, flat billing), and other program models.
As utilities expand their education and engagement offerings, we will need to find creative ways to measure savings from these expanded offerings while remaining careful in how we design them.
How can we give more customers access to energy education and management tools while giving utilities credit for their outreach and investment?
Can we accept less rigorous evaluation methods? The industry generally accepts quasi-experimental methods (e.g., matched comparison groups or variation-in-adoption approaches) for opt-in programs. However, these still require a large pool of nonparticipating customers reasonably similar to current participants. Instead of insisting on "gold standard" methods, it may be time to phase in "bronze" methods.
Let’s invest in testing alternative methods:
- For example, we can compare results from population-level normalized metered energy consumption (NMEC) approaches to RCTs. If the differences warrant it, we can develop adjustment factors to apply when implementing NMEC methods at scale for behavior programs.
- Can we use a smaller deemed savings value, say 1.5%, and accept more uncertainty? We can invest in testing and understanding what "counts" as participation (e.g., number of portal logins) and limit savings claims to customers who meet those criteria. We can then revisit this assumption every few years by examining trends in energy use among participants.
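One way the adjustment-factor idea could work in practice is sketched below: compare NMEC-style estimates against RCT benchmarks for cohorts where both exist, derive an average ratio, and apply it where only NMEC is available. The cohort names and savings values are purely illustrative assumptions, not measured results.

```python
# Hypothetical sketch: derive an adjustment factor by comparing
# population-level NMEC-style savings estimates against RCT benchmarks
# for the same cohorts. All figures are invented for illustration.

# (cohort, rct_savings_pct, nmec_savings_pct)
cohorts = [
    ("A", 1.4, 1.9),
    ("B", 1.1, 1.5),
    ("C", 1.6, 2.0),
]

# Ratio of RCT to NMEC savings per cohort; an average ratio below 1.0
# would suggest the NMEC method overstates savings for these programs.
ratios = [rct / nmec for _, rct, nmec in cohorts]
adjustment = sum(ratios) / len(ratios)
print(f"Average adjustment factor: {adjustment:.2f}")

# Applying the factor to a new NMEC estimate where no RCT benchmark exists:
nmec_estimate = 1.8  # hypothetical NMEC savings estimate, in percent
adjusted = nmec_estimate * adjustment
print(f"Adjusted savings estimate: {adjusted:.2f}%")
```

In a real application, the factor would need to be estimated across many cohorts, with uncertainty bounds, before regulators could rely on it for crediting savings.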
The power of home energy reports comes from the power of normative feedback to influence our actions. As we expand the types of behavior change and educational offerings for customers, we still need to make sure the program models are based on social science research and offer a clear mechanism by which we expect the program to result in behavior changes.
Behavior change-based programs help utilities connect with their customers and provide customers with tangible, accessible strategies to save energy and money. Expanding our measurement methods while staying attuned to social science principles in program design will result in effective programs with defensible savings estimates.