Developing a Straw Proposal for the Regulatory Sandbox

Objective: Develop and propose a detailed structure for the regulatory sandbox, ideally in partnership with key stakeholders, to reduce administrative burden and possible friction during a formal proceeding.

A well-structured, comprehensive framework that outlines key design elements of the sandbox mechanism, including program objectives, scope, derogations, funding, and cost recovery, fosters a shared vision and common understanding amongst sandbox stakeholders. This helps to create a collaborative and trusting environment where innovation can thrive.

Commissions may choose to develop, or request that utilities develop, a straw proposal for the sandbox mechanism and solicit comments and input on the structure. Hosting informal working group meetings to develop the proposal prior to a more formal comment solicitation period can help to garner buy-in and ideas from a wide range of stakeholders from the outset and lead to a smoother formal phase.

Items to address in a straw proposal are:

  • Sandbox objectives
  • Sandbox scope and eligibility
  • Derogations
  • Funding and cost recovery
  • Enabling flexibility, making adjustments, and governance
  • Evaluation, reporting, and scaling

Sandbox Objectives

Regulatory sandboxes seek to accelerate the testing and integration of innovative technologies by creating a controlled environment that reduces traditional regulatory constraints. The main purpose of a regulatory sandbox is to promote safe and informative innovation. Sandboxes frequently also aim to promote other complementary objectives and to focus innovation toward specific outcomes, which may be unique to state and utility priorities.

Administrators can design sandboxes to address utility-specific challenges or to facilitate broader reforms within current regulatory frameworks. In turn, sandbox objectives may be broad (e.g. “create a more flexible grid”) or specific (e.g. “study participation impacts of opt-out dynamic rates”), depending on the jurisdiction’s needs. Sandboxes may also establish guiding principles that inform decision-making and project development.

Best practices for setting program objectives include the following:

  • Set clear and ambitious objectives
  • Use stakeholder engagement processes to develop sandbox objectives
  • Demonstrate buy-in for sandbox objectives from relevant regulatory and utility leadership and stakeholders
  • Align objectives with wider regulatory constructs and policy objectives such as improved grid reliability, improved grid resilience, and energy affordability
  • Ensure that learning, speed, and eventual scaling are primary objectives
    • Explicitly focus on facilitating the scaling of new products and services, not just running trials
    • Anchor objectives in showing that technologies can operate in real energy markets/systems and with real consumers
    • Acknowledge that an outcome indicating that a product or service isn’t suitable for the energy markets is still an informative outcome
  • Provide information on what the sandbox is expected to achieve

Examples of sandbox objectives include the following:

  • Support innovation and cultivate scalable solutions to meet state energy policy objectives
  • Quickly test new market offerings, technologies, programs, and business models
  • Accelerate adoption of flexible grid resources and create a nimbler grid
  • Improve interconnection timelines and reduce interconnection costs

Sandbox Scope and Eligibility

In the context of regulatory sandboxes, scope refers generally to the bounds of the mechanism, including timing (e.g. how long the sandbox itself will be in place and how long individual trials may run), programmatic focus (e.g. what types of technologies or programs are acceptable), and areas or issues that are in-bounds vs. off-limits (e.g. regulations that the commission is open to modifying and those that are inflexible).

When defining the scope of a regulatory sandbox, administrators can consider how to balance minimizing risk (e.g. market distortion) with providing enough opportunity to test innovative solutions. Clearly defining the scope effectively constrains projects and reduces the risk of unintended consequences.

Eligibility requirements build on the scope by identifying specific criteria that pilots must meet to participate in the sandbox mechanism.

Best practices include the following:

  • Clearly identify the regulatory barriers that the sandbox intends to address
  • Clearly define terminology such as “pilot,” “demonstration,” and “innovative” upfront and with stakeholder input
  • Provide information on which markets, technologies, programs, and regulations are in scope, the limits of the sandbox, and what falls within the regulator’s authority with regard to possible derogations
  • Create multiple pathways to participation so that innovators and stakeholders other than utilities can put ideas forward
  • Collaborate with other relevant authorities if needed (e.g. permitting agencies, departments of transportation), depending on the scope of the sandbox

Examples of different sandbox scopes and eligibility criteria include:

  • The Portland General Electric Smart Grid Test Bed focuses on demand response programs
  • The North Carolina Prototyping Process has broad eligibility but excludes energy efficiency and demand-side management that are covered in a different process
  • The New York REV Demonstrations aim to test business model changes to identify new revenue streams
  • Connecticut's sandbox program cycles run for 18 months each
  • Projects in Hawaii must go beyond the sale of basic electric services, align with regulatory goals, incorporate cost-sharing provisions for non-local vendors, and prioritize local vendors
  • Where relevant, Vermont pilot processes run parallel pilots to compare options for utility- vs. third-party ownership of assets

Derogations

A derogation is a modification to, or exemption from, typical regulations or rules deployed as part of a regulatory sandbox. Derogations are defining features of regulatory sandboxes that allow innovators to test new solutions outside of current regulatory frameworks, under close supervision from the regulator or another authority, with the ultimate aim of informing future regulatory changes.

Commissions may have existing legal frameworks to draw on in the design of their sandboxes, or they may need to work with legislators to identify and pass supportive legislation. Straw proposals and resulting sandbox frameworks may not explicitly label the modifications to existing rules or practices as “derogations”, but it is important to clearly identify how the sandbox enables regulatory flexibility and reduces traditional regulatory barriers to innovation.

In the U.S., commissions have typically established derogations as part of the sandbox framework itself (e.g. expedited review of applications, relaxed cost recovery scrutiny), such that the same derogations apply to all projects that participate. In some sandboxes outside of the U.S., the framework requires each individual project to propose to the regulator a derogation that is necessary to enable that specific innovation.

Best practices include the following:

  • Consider how specific to be with derogations or modifications to regulatory constructs: overly specific boundaries might restrict innovators, while overly broad ones may cause confusion over the sandbox objectives and parameters
  • Consider whether derogations should apply to all projects or whether one-off modifications are appropriate depending on the innovation the sandbox is supporting
  • Consider providing bespoke guidance to innovators looking to participate
  • Identify foundational legal and regulatory frameworks that both enable responsible innovation and allow for regulatory flexibility and minimal oversight
  • Deploy other innovation vehicles when appropriate

Examples of derogations include the following:

  • Implementing expedited timelines for regulatory review
  • Eliminating the need for regulatory approval altogether
  • Eliminating the need for requests for proposals / allowing single-source vendor selection
  • Establishing shared-risk parameters between utilities and vendors
  • Relaxing or eliminating licensing requirements

Funding and Cost Recovery

Cost of service regulation and cost recovery practices can be barriers to innovation. Therefore, it is important to communicate clearly about funding availability and cost recovery processes. This can include carving out funding streams, clearly establishing expectations for when cost recovery will be allowed, and even relaxing regulatory requirements for cost recovery (e.g. allowing cost recovery even if trials will not ultimately scale or did not meet expectations).

In establishing funding and cost recovery practices for the sandbox, regulators can consider how best to balance flexibility (providing ample funds to support meaningful innovation and allowing projects to adjust over time) with ratepayer protection (ensuring that customers are not exposed to unnecessary or disproportionate cost burdens or risks). Ultimately, effective funding and cost recovery structures enable experimentation without compromising customer interests. Clear cost recovery mechanisms, whether funded through utility rates, grants, or other means, can reduce utility hesitancy to spend on innovation and facilitate administrative efficiency.

Best practices include the following:

  • Reduce uncertainty over cost recovery
    • Regulators can consider allowing cost recovery even if trials aren't successful
  • Look into funding sources beyond customer rates when designing a sandbox
  • Consider cost- or risk-sharing requirements for innovators

Common practices related to funding and cost recovery include the following:

  • Creating funding carve-outs
  • Identifying maximum project and total sandbox spending caps
  • Stating that sandbox projects will receive cost recovery as long as they do not deviate from the approved scope without approval

Enabling Flexibility, Making Adjustments, and Governance

The majority of longstanding regulatory sandbox programs in the United States have undergone adjustments, informed by their successes and shortcomings, as well as by input from stakeholders and pilot participants. These modifications may include changes to oversight structures, shifts in funding allocations, clarifications to expectations and parameters, and amendments to eligibility criteria. The sandbox framework can establish checkpoints to review the sandbox structure and make modifications as needed.

It is important to enable flexibility and continuous learning for both individual projects and the sandbox mechanism itself as part of the sandbox framework. For truly innovative trials, programmatic adjustments are likely to be necessary, and it is important that regulators enable such adjustments while minimizing possible risks to customers associated with deviating from approved programs. Regulators may consider requiring that program administrators formally notify the commission of substantive changes to program scope, with the changes deemed acceptable if the commission does not respond. It is also helpful to create venues for regular, candid, non-punitive conversations between regulators, utilities, innovators, and stakeholders.

Governance structures play an important role in enabling flexibility and accountability in regulatory sandboxes. Sandboxes frequently include advisory councils or working groups composed of representatives from utilities, state agencies, and other stakeholder groups, such as advocacy organizations, academic institutions, and customer groups. Advisory bodies can provide important feedback on proposed pilots, evaluate program performance, bring expertise in new areas, and suggest modifications to sandbox processes.

Best practices include the following:

  • Build in feedback loops and opportunities for iterative improvement, both for individual projects and the sandbox itself
  • Identify key stakeholders and encourage stakeholder input
  • Establish governance councils (e.g., working groups and advisory councils) to shape program design
  • Share lessons learned

Examples of mechanisms that enable flexibility and governance structures include:

  • The Connecticut IES program is committed to “continuous learning” and aims to foster a regulatory environment that supports continued innovation by following an adaptive, agile approach and leveraging insights and feedback from applicants, innovators, and internal stakeholders to refine the program over time. Connecticut uses an advisory council to help vet proposals and guide sandbox implementation.
  • As part of the Hawaii Innovation Pilot Framework, the HPUC has issued guidance to HECO to iterate on and improve the process over time.
  • Parties interested in applying to California's EPIC program through the CEC can attend and provide feedback at public workshops and contribute comments for consideration.
  • The Ontario Innovation Sandbox offers confidential informational services for innovators to speak with regulatory staff about an idea or possible program.
  • The DC Pilot Project Working Group (PPWG) is one of six groups formed under MEDSIS in 2018, tasked with “providing a set of recommended actions and next steps to the District of Columbia Public Service Commission.” The PPWG provided recommendations on the final structure and key guidelines for the governance, selection, and management of pilot projects. The MEDSIS Working Group process, which identified overarching Working Group goals, was open to the public.
  • Duke Energy established an Innovation Prototyping Working Group (IPWG), composed of Public Staff and other interested stakeholders, to explore suitable areas of innovation for prototyping. The IPWG outlines research questions and expected outcomes, establishes performance metrics, and supports alignment with laws and Commission orders.

Evaluation, Reporting, and Scaling

Well-designed program evaluation and reporting mechanisms support the selection of the best pilot projects and ensure that these projects remain accountable and impactful.

After initial application evaluation and project selection, participants report on ongoing projects in line with the sandbox evaluation protocols and expectations. Sandboxes that include systematic and structured project documentation may be better able to react and adapt quickly to challenges that arise and to understand project outcomes. These insights and data are often important factors in determining whether or not projects will scale to broader deployment beyond their pilot phase. Documentation also supports information sharing and can help sandbox learnings reach other jurisdictions.

See “Sample reporting templates and key metrics for success” for more information and utility examples.

Best practices include the following:

  • Establish clear evaluation and reporting requirements upfront, using metrics that align with sandbox and project goals and desired outcomes
  • Dedicate sufficient staff resources to the sandbox; cross-functional teams with pre-identified roles and processes can support quick and thorough review
  • Employ evaluation templates or standard form documents to reduce administrative burden
  • Monitor and document sandbox outcomes
  • Support scaling of successful pilots by establishing parameters for what constitutes a “successful” project and providing clear pathways for pilot projects that meet these parameters to transition into broader deployment

Examples of Evaluation, Reporting, and Scaling practices include:

  • To qualify for the Connecticut IES program, projects must meet a set of pass/fail criteria designed to protect customers and align with overall and cycle-specific program goals. Only projects that meet all criteria move forward. Projects within the IES program follow a “phased approach” and use a “fail fast” mentality to support innovation at lower risk. Each phase includes an off-ramp to help only the most viable projects advance to potential large-scale deployment.
  • In Vermont, the Commission requires that GMP file pilot project status reports thirty days after each six-month interval of project duration and sixty days after the end of the project. These status reports have clearly defined information requirements that broadly capture pilot design, customer engagement, operational performance, and alignment with program goals. Final reports include assessments of customer satisfaction, lessons learned, and whether the pilot will be advanced to a tariff-based offering.
  • Within the North Carolina Innovation Prototyping Process, Duke Energy must report to the Commission every six months after Prototype approval on pilot financials, customer participation, and project performance, as well as insights on customer outcomes, alignment with research goals, and proposed adjustments. Final reports outline total costs and revenues (including a cost-benefit analysis), as well as customer participation, impact, and satisfaction.
  • The Connecticut IES program operates within a four-phase process. Phase 4, Assessment & Scale, requires innovators to submit a final report on project performance and lessons learned. The Program Administrator compiles recommendations to guide the Authority on scaling decisions. Projects deemed ready for deployment are invited to submit regulatory applications, and each project receives a clear “go” or “no go” decision at the end of Phase 4, with outcomes filed in the IES Program cycle docket.