The Need for Standardization in Home Energy Report Evaluations

Posted December 5, 2022 by Tabitha Munson


ILLUME conducted a meta-analysis of recent home energy report (HER) evaluations. This type of analysis is common in the biomedical field and is a useful methodology for determining an average value for a metric reported across multiple studies. The technique establishes a weighted average of the metric of interest by considering factors such as the precision of the metric in each study, sample size, treatment effect size, and variability in study approach. Meta-analyses are useful for answering questions that require a look across studies but can also be hindered by variability in the data reported across those studies.
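To make the weighting concrete, below is a minimal sketch of a fixed-effect, inverse-variance weighted average, one common way meta-analyses pool estimates by precision. The per-study savings estimates and standard errors are hypothetical placeholders; this illustrates the general technique, not ILLUME's actual analysis code.

```python
import math

# Hypothetical (estimate, standard_error) pairs: percent savings per study
studies = [
    (1.8, 0.20),  # study A: 1.8% savings, SE 0.20
    (1.2, 0.35),  # study B
    (2.1, 0.15),  # study C
]

# Weight each study by the inverse of its variance, so more precisely
# measured estimates contribute more to the pooled average.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate under the fixed-effect model
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled savings: {pooled:.2f}% (SE {pooled_se:.2f})")
```

Under this model, the most precise study dominates the average because it has the smallest standard error; a random-effects model would additionally account for true variation between programs.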

The objective of our research was to determine whether HERs are still a useful behavioral energy efficiency program and to calculate the average energy savings for HER programs across recent evaluations. We compared energy savings across factors such as fuel type, HER vendor, and the mode of report distribution (printed, emailed, or both) to explore whether HERs may be more effective under some circumstances than others.

Whether it was variation in the level of detail provided on HER program design or in the metrics and measures of precision chosen for reporting, nearly every evaluation was like a new code to decipher.

As we collected data on various HER program evaluations to perform this analysis, we were surprised by how much variation existed in how findings were presented. While we found measurable trends, such as increasing savings over time, the inconsistencies across HER evaluations made it more challenging to compare different program designs.

A call for greater standardization in HER program evaluations

Below are some of the variations that we noticed during this research study:

  • Evaluations used different terms for savings values, such as Relative Savings, Measured Savings, Verified Savings, Net Savings Prior to Uplift, or Savings with Double Counting. These terms could refer to a variety of savings values, such as vendor savings, evaluator-verified savings, unadjusted savings, or adjusted savings. In addition, some reported “net” versus “gross” savings for the HER program, while others reported that all savings are “net” savings because of the RCT design of HER programs (a randomized control group already nets out naturally occurring savings). We had to carefully examine each report to determine what it meant by its reported values.
  • Evaluations reported measures of precision (i.e., standard deviation, standard error, confidence intervals) inconsistently. Some provided this only for total savings, some only for per-household savings, and others only for percent savings. This meant that we were limited not only by what type of savings value an evaluation reported, but also by which savings value it chose to provide a precision statistic for (see the sketch after this list).
  • Some evaluations provided savings values at the program level while others provided them at the cohort level (a cohort being a group of customers who begin receiving home energy reports at the same time).
  • Evaluations reported the number of participants in the treatment group inconsistently, citing either the number of participants when the cohort began or the number at the start of the evaluation’s program year.
  • While some evaluations provided information on the HER program design such as report format, type of messaging, customer targeting, or distribution mode (i.e., email, print, or both), other evaluations were unclear about this type of information or did not address it at all.
  • Some evaluations provided savings values for each treatment-group cohort every year, while a few provided this information every quarter or every month.
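To illustrate how the precision inconsistencies above complicate comparison, below is a short sketch of converting the measure an evaluation reports (standard deviation, standard error, or confidence interval) into one comparable standard error. The helper functions and input values are hypothetical, not drawn from any specific evaluation.

```python
import math

def se_from_sd(sd: float, n: int) -> float:
    """Standard error implied by a standard deviation and sample size."""
    return sd / math.sqrt(n)

def se_from_ci(lower: float, upper: float, z: float = 1.645) -> float:
    """Standard error implied by a symmetric confidence interval.

    z defaults to 1.645 for the 90% confidence level often used in
    energy efficiency evaluation; use 1.96 for a 95% interval.
    """
    return (upper - lower) / (2.0 * z)

# Two evaluations reporting precision in different forms, harmonized
# into standard errors that can be compared or used as meta-analysis weights
print(se_from_sd(sd=12.0, n=10_000))     # reported SD plus sample size
print(se_from_ci(lower=1.4, upper=2.2))  # reported 90% confidence interval
```

Harmonizing reports this way is only possible when an evaluation states both the precision measure and the savings value it applies to, which is exactly the information that was often missing or ambiguous.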

The findings noted above, along with other inconsistencies, present a challenge to systematically analyzing HER program evaluations. It is unfortunate that even after evaluators have done the careful work of assessing programs, differences in reporting hinder leveraging this information to increase our overall understanding of HERs across the many evaluations completed for different utilities and regions.

Greater standardization is the key

It is important that evaluations use the same words and concepts, such as “net” and “gross,” to convey savings values that are comparable across evaluations. Consistent vocabulary allows for accurate comparisons and a better understanding of the programs themselves. Furthermore, it is important for evaluators to report the same values in a consistent manner, including precision statistics and savings reported over the same intervals. As it stands, fewer evaluations than we expected provided both enough information for a meaningful analysis and a format that made comparison possible. Providing consistent information on HER program design in evaluations would also allow HER programs to be compared across more factors. Greater standardization would allow more studies to include comparison analyses, yielding deeper insights into HER efficacy.

It’s time to work together across the requirements and preferences of evaluators, utilities, and jurisdictions to standardize home energy report program evaluations. With a few tweaks, standardized reporting has the power to make a huge impact on our ability to analyze across evaluations and to improve our understanding of the efficacy of home energy report programs.

Read ILLUME’s case study, Testing the Efficacy of Home Energy Reports: A Meta-Analysis of HER Program Evaluations.
