Step 5: Data Quality Scoring
Introduction
This methodology outlines a structured scoring scheme for evaluating the availability and quality of publicly accessible data required for Supplier-Specific Scope 2 (SSS) emissions reporting within the Granular Registry SSS Reporting platform. The scheme assesses key data elements—Resource Mix, Hourly Proxy Data, RPS/EAC Retirements, Sales Volume, Overall Usability, and Data Freshness—to quantify the readiness of supplier-region pairs for generating credible SSS reports. It aligns with the GHG Protocol Scope 2 Guidance principles, including data quality criteria such as accuracy, completeness, timeliness, and usability, while incorporating insights from the RE100 Technical Criteria for passive procurement and CRS guidelines on the standard delivery and accounting of renewable energy.
The scoring scheme uses a simplified 0–10 scale per category, promoting ease of understanding and application without complex weighting. Each category is rated holistically, combining aspects of availability, quality/granularity, timeliness/verification, and accessibility/usability into a single score. The overall score is the arithmetic average of the six category scores, providing a balanced view of data readiness. Thresholds guide interpretation: 8–10 (Excellent: Fully supports advanced SSS reporting); 6–7 (Good: Usable with minor proxies); 4–5 (Fair: Limited, requires supplements); <4 (Poor: Inadequate, default to grid averages or location-based methods).
This methodology is designed for integration into the SSS Resource Registry documentation, enabling users to evaluate and compare data availability across suppliers and regions systematically. It supports ongoing GHG Protocol revisions by highlighting data strengths and gaps, facilitating recommendations for enhanced public disclosures.
Scoring Principles
Scale and Calculation: Each of the six categories is scored from 0 (no usable data) to 10 (comprehensive, high-quality data). The overall score is calculated as: Overall Score = (Sum of Six Category Scores) / 6. Round to one decimal place for precision (e.g., 7.3). No weighting is applied to maintain simplicity and equal emphasis on all elements.
Holistic Rating: Scores integrate multiple dimensions:
Availability: The presence of data in public sources.
Quality/Granularity: Depth and detail (e.g., breakdowns vs. aggregates).
Timeliness/Verification: Currency and assurance (e.g., audited vs. self-reported).
Accessibility/Usability: Ease of retrieval and use (e.g., APIs vs. PDFs).
Assign scores based on a balanced assessment of these dimensions; for example, high availability but low quality might yield a mid-range score.
Data Sources: Evaluations must rely solely on public, verifiable sources (e.g., EIA, ENTSO-E, IEA reports, utility disclosures). To ensure objectivity, cross-verify ratings using at least three independent sources per category.
Application Scope: Apply per supplier-region pair (e.g., Duke Energy - North Carolina). Re-evaluate annually or upon significant updates (e.g., new regulatory filings or revisions to the GHG Protocol).
Threshold Interpretation: Use scores to inform SSS report generation: Excellent scores enable full market-based claims; Poor scores necessitate proxies or disclaimers, aligning with Scope 2 dual reporting requirements.
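The calculation and threshold rules above can be sketched in Python. This is a minimal illustration of the averaging and interpretation logic; the function and category names are illustrative, not identifiers from the Registry platform.

```python
# Sketch of the unweighted overall-score calculation and the threshold bands
# described above. Names are illustrative, not Registry platform identifiers.

CATEGORIES = [
    "resource_mix",
    "hourly_proxy_data",
    "rps_eac_retirements",
    "sales_volume",
    "overall_usability",
    "data_freshness",
]

def overall_score(scores: dict) -> float:
    """Arithmetic average of the six category scores (0-10 each),
    rounded to one decimal place. No weighting is applied."""
    missing = [c for c in CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"Missing category scores: {missing}")
    for cat in CATEGORIES:
        if not 0 <= scores[cat] <= 10:
            raise ValueError(f"{cat} score {scores[cat]} outside the 0-10 scale")
    return round(sum(scores[c] for c in CATEGORIES) / len(CATEGORIES), 1)

def interpret(score: float) -> str:
    """Map an overall score onto the interpretation thresholds."""
    if score >= 8:
        return "Excellent: Fully supports advanced SSS reporting"
    if score >= 6:
        return "Good: Usable with minor proxies"
    if score >= 4:
        return "Fair: Limited, requires supplements"
    return "Poor: Inadequate, default to grid averages or location-based methods"

# Hypothetical supplier-region pair with mixed data availability.
example = {
    "resource_mix": 9,
    "hourly_proxy_data": 6,
    "rps_eac_retirements": 7,
    "sales_volume": 8,
    "overall_usability": 7,
    "data_freshness": 7,
}
score = overall_score(example)  # (9+6+7+8+7+7)/6 = 44/6, rounds to 7.3
print(score, "->", interpret(score))
```

A 7.3 falls in the 6–7 "Good" band, so this pair would be usable with minor proxies.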
Detailed Scoring Criteria by Category
Each category includes scoring guidelines with examples to ensure consistent application. Analysts should document rationale, sources, and any uncertainties (e.g., pending data releases) for transparency.
1. Resource Mix
Definition: Composition of the supplier's electricity generation sources by fuel/technology, including emissions factors and SSS ties.
Scoring Guidelines:
8–10: Comprehensive public disclosure with detailed breakdowns (e.g., % by wind/solar/nuclear), emissions factors, and verification (e.g., audited annual reports with API access).
6–7: Solid aggregates with some granularity (e.g., basic % renewables and factors in PDFs).
4–5: Partial data requiring minor proxies (e.g., regional averages with limited verification).
0–3: Minimal or absent data, fully reliant on external proxies.
Example: A U.S. supplier with EIA eGRID breakdowns and utility report verification scores 9; a Chinese supplier with aggregated NEA data scores 6.
2. Hourly Proxy Data
Definition: Temporal profiles of generation/load to support hourly matching, including deliverability metadata.
Scoring Guidelines:
8–10: Detailed public hourly data with supplier-specific granularity and easy access (e.g., downloadable CSVs from ENTSO-E).
6–7: Adequate daily/monthly proxies with some verification (e.g., interactive dashboards).
4–5: Basic annual snapshots requiring interpolation.
0–3: Absent or outdated, necessitating full proxies.
Example: EU suppliers with ENTSO-E real-time data score 10; emerging market suppliers with annual IEA estimates score 4.
3. RPS/EAC Retirements
Definition: Records of RPS compliance or EAC retirements, including volumes, types, vintages, and exclusivity proofs.
Scoring Guidelines:
8–10: Full details with registry access and verification (e.g., M-RETS breakdowns with audit trails).
6–7: Aggregated compliance data with some granularity (e.g., PDF reports on volumes).
4–5: Inferred retirements from partial sources.
0–3: Untracked or absent, risking double-counting.
Example: U.S. suppliers with DSIRE RPS filings and registry verification score 9; Asian suppliers with limited I-REC data score 5.
4. Sales Volume
Definition: Annual electricity deliveries by sector, with trends and SSS allocations (e.g., pro-rata shares).
Scoring Guidelines:
8–10: Sectoral breakdowns with SSS ties and structured access (e.g., EIA-861 CSVs with audits).
6–7: Basic totals with trends in reports.
4–5: Estimated volumes from aggregates.
0–3: Absent or unreliable.
Example: U.S. suppliers with detailed EIA data score 10; Indian suppliers with CEA aggregates score 6.
5. Overall Usability
Definition: The degree to which the data elements above can be integrated to meet Scope 2 data quality criteria, and their ease of use in generating SSS reports.
Scoring Guidelines:
8–10: Seamless, machine-readable integration compliant with all criteria (e.g., APIs linking mix to retirements).
6–7: Partial compliance with human-readable sources.
4–5: Siloed data requiring manual synthesis.
0–3: Inaccessible or non-compliant.
Example: Integrated platforms like ENTSO-E score 9; fragmented PDFs score 5.
6. Data Freshness
Definition: Currency of data, reflecting how up-to-date and relevant it is for current-year SSS reporting.
Scoring Guidelines:
8–10: Real-time or current-year data with frequent updates (e.g., quarterly refreshes) and assurance that no reporting lags exist.
6–7: 1–2 years old with some verification of ongoing relevance.
4–5: 2–3 years old, requiring minor adjustments.
0–3: >3 years old or stagnant, rendering it unreliable for claims.
Example: Real-time IEA dashboards score 10; biennial reports score 6.
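The transparency guidance above asks analysts to document rationale, sources, and uncertainties alongside each category score. One way to capture that per supplier-region pair is a simple record structure; the field names, class names, and example values below are hypothetical, not a Registry schema.

```python
# Illustrative record format for documenting an evaluation: each category
# carries its score plus the rationale, sources, and uncertainties required
# for transparency. Field names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CategoryRating:
    score: int                  # 0-10 holistic rating
    rationale: str              # why this band was assigned
    sources: list = field(default_factory=list)  # public sources (3+ recommended)
    uncertainties: str = ""     # e.g., pending data releases

@dataclass
class SupplierRegionEvaluation:
    supplier: str
    region: str
    ratings: dict = field(default_factory=dict)  # category name -> CategoryRating

    def overall(self) -> float:
        """Unweighted average of the recorded category scores, one decimal.
        A complete evaluation records all six categories."""
        scores = [r.score for r in self.ratings.values()]
        return round(sum(scores) / len(scores), 1)

# Hypothetical evaluation echoing the U.S. examples above; only two of the
# six categories are shown for brevity.
ev = SupplierRegionEvaluation(
    supplier="Duke Energy",
    region="North Carolina",
    ratings={
        "Resource Mix": CategoryRating(
            9, "EIA eGRID breakdowns verified against utility annual report",
            sources=["EIA eGRID", "Utility annual report", "EIA-861"]),
        "Data Freshness": CategoryRating(
            7, "Prior-year data with verification of ongoing relevance",
            sources=["EIA", "Regulatory filings", "Utility disclosures"],
            uncertainties="Current-year eGRID release pending"),
    },
)
print(ev.overall())  # average over the documented categories
```

Keeping rationale and sources on the record makes annual re-evaluations comparable and supports the cross-verification requirement of at least three independent sources per category.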