Reporting and Evaluating Sandbox Projects

Regular reporting and evaluation allow administrators to assess the effectiveness of both the sandbox mechanism itself and the individual pilot projects carried out within it. Standardized reporting templates support consistent data collection across projects that is actionable and aligned with program objectives. Identifying specific key performance metrics allows both administrators and participants to assess project outcomes and identify trends, and can ultimately inform decisions about future scaling and regulatory reform. Supplementary metrics and key performance indicators are typically tailored to individual projects.

When compiling a reporting scheme for a sandbox, administrators can consider a number of reporting elements, including standardized core elements, clarity and consistency, flexibility, learning objectives, and data variance.

  • Standardized core elements: Requiring baseline categories across all projects, such as costs and revenues, customer impacts, and progress on key metrics, increases the potential for comparability across reports.
  • Clarity and consistency: Clearly define what participants should report, as well as how often and in what format they should submit these reports. More frequent reporting allows for closer project supervision and higher transparency, but places a higher burden on project participants, especially if the reporting process is highly detailed.
  • Flexibility: Allowing participants to define additional, project-specific metrics can retain important specificity and detail in reporting (a minimal template sketch follows this list).
  • Learning objectives: Reports that address research questions and lessons learned can contribute to a culture of learning and provide important takeaways both for sandbox administrators and for future project participants.
  • Data variance: Inclusion of both qualitative and quantitative data can help paint a more complete picture of project progress and success. See the guiding principles later in this section for more guidance on quantitative metric requirements.
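As an illustration of how standardized core elements can coexist with project-specific flexibility in a single template, the sketch below defines a hypothetical reporting record with required baseline fields and an open-ended slot for supplementary metrics. The field names, structure, and example values are illustrative assumptions rather than any program's actual template.

```python
from dataclasses import dataclass, field

# Hypothetical reporting template: the baseline fields are required of every
# project (supporting comparability), while supplementary_metrics holds
# project-specific key performance indicators (supporting flexibility).
@dataclass
class PilotReport:
    project_name: str
    reporting_period: str               # e.g., "2024-H1"
    costs_usd: float                    # standardized core element
    revenues_usd: float                 # standardized core element
    customers_enrolled: int             # customer impact core element
    progress_notes: str                 # qualitative narrative on key metrics
    lessons_learned: str                # supports program learning objectives
    supplementary_metrics: dict[str, float] = field(default_factory=dict)

# Example: a managed EV-charging pilot adds its own project-specific metrics.
report = PilotReport(
    project_name="Managed EV Charging Pilot",
    reporting_period="2024-H1",
    costs_usd=125_000.0,
    revenues_usd=18_500.0,
    customers_enrolled=240,
    progress_notes="On schedule; enrollment at 80% of target.",
    lessons_learned="A simplified sign-up form doubled completion rates.",
    supplementary_metrics={"avg_peak_kw_shifted": 1.8, "app_uptime_pct": 99.2},
)
print(report.supplementary_metrics)
```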
The following examples summarize the reporting frequency and requirements of several programs.

California EPIC Program (Annual)

The California Electric Program Investment Charge (EPIC) program includes both an annual reporting process (to the CPUC) and a legislative reporting process (to the CA Legislature). Together, these reporting processes allow for project oversight and monitoring, as well as assurance of statutory compliance and public accountability.
  • Annual Reports: The CPUC requires that EPIC administrators submit annual reports detailing project statuses.
  • Legislative Reporting: Under Public Resources Code Section 25711.5(f), the CEC is required to prepare and submit an annual report to the Legislature, which must include identification of funding recipients, descriptions of their projects, and information regarding project costs.
Michigan New Technologies and Business Models (Annual)

Utilities must provide an annual written report on ongoing pilots, filed in the utility's dedicated docket. Reports must include the following:
  • Implementation schedules
  • Qualitative description of pilot and customer benefits
  • Pilot progress against objectives and key performance metrics
  • Impacts on underserved communities
  • Costs and revenues
  • Customer satisfaction data
  • Any proposed changes
New York Reforming the Energy Vision (Quarterly)

Projects must report quarterly on technology performance, customer engagement, DER integration, and market impacts. The data collected will inform regulatory changes, rate design, and DSP functionalities. The proposals submitted for each demonstration project include information on the specific metrics and data to be included in these reports.
North Carolina Innovation Prototyping Process (Every six months)

Every six months after Prototype approval, Duke Energy must report the following to the Commission:
  1. Costs and revenues
  2. Participation
  3. Customer impact
  4. Customer satisfaction
  5. Lessons learned
  6. Progress on key performance indicators related to the research questions, including but not limited to carbon and equity impacts
  7. Proposed changes, including plans to scale the Prototype or terminate it

The final report (filed within six months of the Prototype ending) must include the following:

  1. Full accounting of costs and revenues
  2. A cost-benefit analysis for a full program version of the pilot
  3. Participation
  4. Customer impact
  5. Customer satisfaction, including post-pilot survey results
  6. Findings regarding the pilot's established research questions, including key performance indicators and a narrative explanation of the pilot's efficacy relative to its original goals
  7. Anticipated next steps
Vermont Innovative Pilot Program (Every six months)

The Commission requires that GMP (Green Mountain Power) file Pilot project status reports thirty days after each six-month interval of project duration, with the exception of the final report, which is due sixty days after the end of the project. Per the guidelines of the Innovative Pilot Program, reports must cover the following:

  1. Brief description of the Innovative Pilot.
  2. Customer participation in the Innovative Pilot including the number of products or units, number of customers enrolled, and the distribution of the product by county/town.
  3. Financial information regarding the costs and revenues where applicable (equipment revenue, additional kWh margin, O&M maintenance, O&M service, depreciation, return on rate base, and net gain or loss).
  4. Load control device saturation information, where applicable, including whether the product is controlled by GMP for peak shaving purposes, and if so, the number of units controlled, the control device, the response rate, and the capacity available in kilowatts (a minimal capacity calculation is sketched after these program summaries).
  5. Narrative explanation of how the Innovative Pilot is advancing the goals of the Innovative Pilot program outlined in the eligibility requirements.
  6. Next steps.

The final report must cover the metrics above, and:

  1. Assessment of customer satisfaction.
  2. Lessons learned during the Innovative Pilot.
  3. Whether the Innovative Pilot will be advanced to a tariff-based offering, and if so, why.
  4. If the Innovative Pilot will not be advanced to a tariff-based offering, the reasons why not.

Additionally, once a Pilot project has been in operation for one year, GMP must survey customers to gauge customer satisfaction with the project and assess whether the project met their needs and goals.

A downloadable template for the Vermont Data Reporting Plan is available.

Washington, DC Pilot Project Fund (Quarterly)

Pilot projects require three quarterly reports and a final annual report, with the option for more frequent reporting determined on a project-by-project basis.
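To illustrate the kind of load control saturation figure called for in Vermont's guidelines, the sketch below computes aggregate capacity available for peak shaving from unit counts and response rates. The device counts, per-unit rating, and response rate are hypothetical example values, and the calculation's form is an assumption rather than a prescribed method.

```python
# Illustrative calculation of capacity available for peak shaving, of the
# kind Vermont's load control saturation reporting asks for.
units_controlled = 500    # load control devices enrolled (example value)
per_unit_kw = 4.5         # controllable load per device, kW (assumed rating)
response_rate = 0.82      # fraction of devices responding to a control signal

capacity_available_kw = units_controlled * per_unit_kw * response_rate
print(f"Capacity available for peak shaving: {capacity_available_kw:,.0f} kW")
# -> Capacity available for peak shaving: 1,845 kW
```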

The following guiding principles provide more detail on setting project performance metrics.

  • Alignment with program objectives: Set metrics that reflect the sandbox goals and objectives defined during the sandbox design process (e.g., if a core objective of a sandbox is promoting resilience, include metrics that gauge the degree to which projects reduce vulnerabilities to specific hazards). Set metrics that are broadly consistent with overarching state policy goals, such as enabling grid flexibility.
  • Relevance across project types: Set metrics that are applicable to a range of projects. Administrators may choose to present higher-level metric requirements and ask that participants include specific key performance indicators on a project-by-project basis.
  • Relevance across project phases: Set metrics that address all project phases to gauge effectiveness across the entire project timeline. Including phase-specific metrics may help identify successes and shortcomings related to each component of a project.
  • Customer impact: Set metrics that gauge both the costs and the benefits of the project to participating and non-participating customers. This may be done qualitatively through customer surveys and regular solicitations for customer feedback, or quantitatively through system measurements such as outage times or bill savings.
  • System impact: Set metrics that gauge grid impacts such as reliability and resilience, energy savings or generation, or overall emissions impacts. Including both customer- and system-based metrics can paint a more holistic picture of project impacts.
  • Scalability: Set metrics that capture the potential for project scalability, such as market readiness, time to deploy technology, or cost per customer. These metrics may help administrators decide whether to progress a project out of the pilot phase and into broader deployment.
  • Measurability and timeliness: Set metrics that are quantifiable through a feasible data collection plan, that are measurable within the timeframe of the sandbox project and its reporting periods, and that clearly communicate the project's key findings and performance (two such calculations are sketched after this list).
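As a minimal sketch of quantifiable metrics of the kind named above, the example below computes cost per customer (a scalability metric) and average monthly bill savings (a customer impact metric) from hypothetical pilot data. The variable names and figures are illustrative assumptions.

```python
# Minimal sketch: computing two quantitative pilot metrics.
# All input figures are hypothetical example values.
program_cost_usd = 180_000.0                 # total pilot spending to date
participants = 240                           # customers enrolled
baseline_bills_usd = [102.0, 95.5, 110.2]    # sample pre-pilot monthly bills
pilot_bills_usd = [96.1, 90.0, 101.7]        # same customers, during the pilot

cost_per_customer = program_cost_usd / participants
avg_bill_savings = sum(
    before - after for before, after in zip(baseline_bills_usd, pilot_bills_usd)
) / len(pilot_bills_usd)

print(f"Cost per customer: ${cost_per_customer:,.2f}")           # $750.00
print(f"Average monthly bill savings: ${avg_bill_savings:.2f}")  # $6.63
```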
Each entry below identifies a key objective of PGE's testbed offering, the process by which to measure success within that objective, and potential metrics associated with this measurement of success.
  • Objective: Identify, develop, and communicate the customer value proposition of DR to PGE's customers.
    Process to measure: Customer surveys.
    Potential metrics: Awareness, consideration, evaluation, and attitudes in pre- and post-conditions.
  • Objective: Work with customers to establish and retain a high level of customer participation in DR offerings.
    Process to measure: Customer surveys, customer interviews, data analytics.
    Potential metrics: Participation level, dropout rate, load reductions, etc.
  • Objective: Learn how to recruit and retain customers' participation and translate these learnings into cost-effective strategies to be applied to service-territory offerings.
    Process to measure: A/B testing on messaging and process; extrapolation to PGE territory (a sketch of such a comparison follows this list).
    Potential metrics: Cost per recruit, dropouts, business and residential customer profiles/segments.
  • Objective: Collect information on DR potential that can inform resource potential studies [and] achieve maximum technical potential.
    Process to measure: Customer surveys, interviews, onsite visits, DR impact analysis.
    Potential metrics: Additional controllable equipment observed or self-reported; actual demand reduced by participants.
  • Objective: Create new offerings that can quickly translate to broad deployment program offerings.
    Process to measure: Monitor the evolution of offerings and the introduction of new programs.
    Potential metrics: Number of new programs, customer adoption, and retention.
  • Objective: Coordinate on new program development with other demand-side measure providers such as the Energy Trust and NEEA.
    Process to measure: Monitor NEEA, the Energy Trust, and other initiatives in the Testbed; customer surveys; customer usage.
    Potential metrics: Program interactions on adoption, retention, and DR response.
  • Objective: Study and understand the system operational implications of high levels of DR, and gain insight into the effects that high levels of flexible load, necessary to meet PGE's carbon reduction goals, are expected to have on the system.
    Process to measure: Customer usage impact analysis.
    Potential metrics: Impacts against system and substation peaks, selected wholesale market criteria, DR interactions.
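To make the A/B testing process concrete, the sketch below compares recruitment conversion rates between two hypothetical message variants using a standard two-proportion z-test. The counts, variant labels, and significance threshold are all illustrative assumptions, not PGE data.

```python
import math

# Hypothetical A/B test of two recruitment messages: did variant B convert
# significantly better than variant A? Counts below are made-up examples.
sent_a, enrolled_a = 5_000, 320     # message A: 6.4% conversion
sent_b, enrolled_b = 5_000, 395     # message B: 7.9% conversion

p_a = enrolled_a / sent_a
p_b = enrolled_b / sent_b
p_pool = (enrolled_a + enrolled_b) / (sent_a + sent_b)

# Two-proportion z-test under the pooled null hypothesis.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"Conversion A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
# A p-value below a pre-chosen threshold (e.g., 0.05) suggests the difference
# between recruitment messages is unlikely to be chance.
```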

Key metrics of Connecticut's Innovative Energy Solutions (IES) program are informed by Connecticut's Equitable Modern Grid Framework. The evaluation questions below are organized by category across the program's four phases; a minimal data-structure sketch of this rubric follows the list.

  • Economic Benefit
    Ideation & Screening: Does the project provide economic value?
    Prioritization & Selection: What are projected job or economic benefits?
    Project Development: What were job and economic impacts?
    Assessment & Scale: What economic and job benefits could the project provide at scale?
  • Cost-Effectiveness
    Ideation & Screening: Does the project estimate cost-effectiveness?
    Prioritization & Selection: Which projects have the greatest cost-effectiveness?
    Project Development: What were actual costs and benefits?
    Assessment & Scale: How much value could the project deliver to all ratepayers?
  • Programmatic or Market Gaps
    Ideation & Screening: Does the project address gaps in existing customer offerings?
    Prioritization & Selection: Would the project improve or expand customer offerings?
    Project Development: What were customer participation and satisfaction levels?
    Assessment & Scale: Would the project foster competition and customer choice?
  • Equity
    Ideation & Screening: Does the project consider underserved communities?
    Prioritization & Selection: Would the project create or improve opportunities for underserved communities?
    Project Development: What were impacts to underserved communities?
    Assessment & Scale: Would the program address or create equity challenges at scale?
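For administrators tracking answers to these phase-gated questions across many projects, a nested mapping is one simple representation. The sketch below is a hypothetical encoding of the rubric, not part of the CT IES program itself; the function name and structure are illustrative assumptions.

```python
# Hypothetical encoding of the CT IES evaluation rubric: category -> phase
# -> question. Useful for generating standardized review checklists.
RUBRIC: dict[str, dict[str, str]] = {
    "Equity": {
        "Ideation & Screening": "Does the project consider underserved communities?",
        "Prioritization & Selection": "Would the project create or improve opportunities for underserved communities?",
        "Project Development": "What were impacts to underserved communities?",
        "Assessment & Scale": "Would the program address or create equity challenges at scale?",
    },
    # The remaining categories (Economic Benefit, Cost-Effectiveness,
    # Programmatic or Market Gaps) would be encoded the same way.
}

def checklist(phase: str) -> list[str]:
    """Return every rubric question that applies to a given phase."""
    return [questions[phase] for questions in RUBRIC.values() if phase in questions]

for question in checklist("Ideation & Screening"):
    print("[ ]", question)
```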