Regular reporting and evaluation allow administrators to assess the effectiveness of both the sandbox mechanism itself and the individual pilot projects carried out within it. Standardized reporting templates support consistent data collection across projects that is actionable and aligned with program objectives. Identifying specific key performance metrics allows both administrators and participants to assess project outcomes and identify trends, and can ultimately inform decisions about future scaling and regulatory reforms. Supplementary metrics and key performance indicators are typically tailored to individual projects.
When designing a reporting scheme for sandboxes, administrators can consider a number of reporting elements, including clarity and consistency, standardization and flexibility, learning objectives, data variance, and final reports.
| Program | Frequency | Reporting Requirement |
|---|---|---|
| California EPIC Program | Annual | The CA EPIC program includes both an annual reporting process (to the CPUC) and a legislative reporting process (to the CA Legislature). Together, these reporting processes allow for project oversight and monitoring, as well as assurance of statutory compliance and public accountability. |
| Michigan New Technologies and Business Models | Annual | Utilities must provide an annual written report on ongoing pilots, filed in the utility's dedicated docket. Reports must include a set of specified elements. |
| New York Reforming Energy Vision | Quarterly | Projects must report quarterly on technology performance, customer engagement, DER integration, and market impacts. Data collected will inform regulatory changes, rate design, and DSP functionalities. The proposals submitted for each demonstration project include information on the specific metrics/data to be included in these reports. |
| North Carolina Innovation Prototyping Process | Every six months | Every six months after Prototype approval, Duke Energy must report to the Commission on a specified set of items; a final report, filed within six months of the Prototype ending, must cover additional specified elements. |
| Vermont Innovative Pilot Program | Every six months | The Commission requires that GMP file Pilot project status reports thirty days after each six-month interval of project duration, with the exception of the final report, which is due sixty days after the end of the project. Per the guidelines of the Innovative Pilot Program, reports must cover a specified set of metrics, and the final report must cover those metrics along with additional elements. Additionally, once a Pilot project has been in operation for one year, GMP must survey customers to gauge customer satisfaction with the project and assess whether the project met their needs and goals. The Vermont Data Reporting Plan template is downloadable. |
| Washington, DC Pilot Project Fund | Quarterly | Pilot projects require three quarterly reports and a final annual report, with the option for more frequent reporting determined on a project-by-project basis. |
The following table provides more detail on applying guiding principles to setting project performance metrics.
| Category | Description |
|---|---|
| Alignment with Program Objectives | Set metrics that reflect sandbox goals/objectives defined during the sandbox design process (e.g., if a core objective of a sandbox is promoting resilience, include metrics that gauge the degree to which sandbox projects specifically reduce vulnerabilities to specific hazards). Set metrics that are broadly consistent with overarching state policy goals, such as enabling grid flexibility. |
| Relevance Across Project Types | Set metrics that are applicable to a range of projects. Administrators may choose to present higher level metric requirements, and ask that participants include specific key performance indicators on a project-by-project basis. |
| Relevance Across Project Phases | Set metrics that address all project phases to gauge project effectiveness across the entire project timeline. Including phase-specific metrics may help identify specific successes and shortcomings related to each level/component of a project. |
| Customer Impact | Set metrics that gauge both the costs and the benefits of the project to both participating and non-participating customers. This may be done more qualitatively through customer surveys and regular solicitations for customer feedback, or quantitatively through system measurements such as outage times or bill savings. |
| System Impact | Set metrics that gauge grid impacts such as reliability and resilience, energy savings or generation, or overall emissions impacts. Including both customer and system-based metrics can paint a more holistic picture of project impacts. |
| Scalability | Set metrics that capture potential for project scalability, for example, market readiness, time to deploy technology, or cost-per-customer. These metrics may help administrators decide on whether or not to progress the project out of the pilot phase and into broader development. |
| Measurability and Timeliness | Set metrics that are quantifiable through a feasible plan for data collection. Set metrics that are measurable within the timeframe of the sandbox project and reporting periods, and that clearly communicate key findings/performance aspects of the project. |
The following table outlines the objectives, measurement processes, and potential metrics for PGE's demand response Testbed offerings.
| Objective of the Offering | Process to Measure | Potential Metrics |
|---|---|---|
| Identify, develop, and communicate the customer value proposition of DR to PGE's customers. | Customer Surveys | Awareness, consideration, evaluation, and attitudes in pre- and post-conditions |
| Work with customers to establish and retain a high level of customer participation in DR offerings. | Customer Surveys, Customer Interviews, Data Analytics | Participation level, dropout rate, load reductions, etc. |
| Learn how to recruit and retain customers' participation and translate these learnings into the development of cost-effective strategies to be applied to service territory offerings. | A/B testing on messaging and process; extrapolation to PGE territory | Cost per recruit, dropouts, business and residential customer profiles/segments |
| Collect information on DR potential that can inform resource potential studies [and] achieve maximum technical potential. | Customer Surveys, Interviews, onsite visits, DR impact analysis | Additional controllable equipment observed, or self-reported, actual demand reduced by participants |
| Create new offerings that can quickly translate to broad deployment program offerings. | Monitor evolution of offerings and introduction of new programs | # of new programs, customer adoption, and retention |
| Coordinate on new program development with other demand-side measure providers such as the Energy Trust and NEEA. | Monitor NEEA, the Energy Trust, and other initiatives in the Testbed, customer surveys, and customer usage | Program interactions on adoption, retention, and DR Response |
| Study and understand the system operational implications of high levels of DR, and gain insight into the effects that high levels of flexible load (necessary to meet PGE's carbon reduction goals) are expected to have on the system. | Customer usage impact analysis | Measure impacts against system and substation peaks, selected wholesale market criteria, DR interactions |
Key metrics of the CT IES program are informed by Connecticut's Equitable Modern Grid Framework.
| | Ideation & Screening | Prioritization & Selection | Project Development | Assessment & Scale |
|---|---|---|---|---|
| Economic Benefit | Does the project provide economic value? | What are projected job or economic benefits? | What were job and economic impacts? | What economic and job benefits could the project provide at scale? |
| Cost-Effectiveness | Does the project estimate cost-effectiveness? | Which projects have the greatest cost-effectiveness? | What were actual costs and benefits? | How much value could the project deliver to all ratepayers? |
| Programmatic or Market Gaps | Does the project address gaps in existing customer offerings? | Would the project improve or expand customer offerings? | What were customer participation and satisfaction levels? | Would the project foster competition and customer choice? |
| Equity | Does the project consider underserved communities? | Would the project create or improve opportunities for underserved communities? | What were impacts to underserved communities? | Would the program address or create equity challenges at scale? |