CarbonGraph: Our Guiding Principles, Modeling Methodology, and Platform Outputs

This document explains how CarbonGraph does Life Cycle Assessment: our guiding philosophy, the methods that power our calculations, and the outputs that make results transparent and usable.

On This Page

  • Executive Summary
  • Our Guiding Principles
  • Core Calculation Logic: Definitions
  • Core Calculation Logic: Life Cycle Inventory (LCI)
  • Core Calculation Logic: Life Cycle Impact Assessment (LCIA)
  • Core Calculation Logic: Modeling Special Cases
  • Model Structure & Configuration: Model Locking Mechanism
  • Model Structure & Configuration: Configurable Parameters
  • Model Structure & Configuration: Background Dataset Integration
  • Model Structure & Configuration: Security and Trust
  • Outputs and External Alignment: EPD-Ready Outputs
  • Outputs and External Alignment: Scenario Analysis
  • Outputs and External Alignment: Enterprise-Level Reporting
  • Outputs and External Alignment: Communication & Collaboration
  • Active Development & Roadmap
  • Appendices

Executive Summary

The purpose of this document is to provide a transparent account of how CarbonGraph models, calculates, and communicates life cycle results. It is designed to give confidence in both our methodology and our platform by explaining the principles behind our calculation logic, the structure of our models, and how our outputs align with established LCA and EPD standards.

This report serves multiple audiences, including:

  • LCA modelers and analysts: who want to understand how their models are constructed in CarbonGraph.
  • EPD reviewers and verifiers: who require assurance of methodological conformance.
  • Program operators, researchers, and stakeholders: who depend on reliable, auditable results.

This document functions as a white paper on CarbonGraph’s modeling philosophy and methodology, offering the clarity needed to build trust across users, auditors, and the wider community. By openly describing how our system works, we aim to set a new benchmark for transparency in digital LCA platforms.

At the highest level, our modeling philosophy can be summarized in three guiding principles:

  • Speed: the efficiency and flexibility of our graph-based modeling approach, which allows modular, reproducible models to be built and adapted quickly.
  • Control: the ability for practitioners to capture the unique features of their products, processes, or scenarios with precision.
  • Transparency: rigorous data management practices combined with a visual-first interface, making it easy to identify hotspots and understand where impacts are generated.

This foundation—transparent, graph-based, visual-first modeling—enables CarbonGraph to support both expert practitioners and broader stakeholders. Models are modular, reusable, and auditable, encouraging a “don’t repeat yourself” approach that captures the unique details of each product without sacrificing clarity. With these principles, CarbonGraph delivers LCAs and EPDs that are technically rigorous, adaptable to different program requirements, and ultimately more trustworthy.

Our Guiding Principles

CarbonGraph’s approach to life cycle modeling is grounded in a clear set of principles and design decisions. These foundations shape how the platform is built, how models behave, and how results can be trusted and communicated.

Our guiding principles can be summarized as:

  • Speed: the platform is designed for efficiency and flexibility, enabling practitioners to build, adapt, and scale modular models quickly.
  • Control: modelers have the freedom to capture the unique features of their products and processes, ensuring that results reflect what makes each system distinctive.
  • Transparency: rigorous data management, combined with a visual-first interface, makes it easy to trace results, identify hotspots, and understand how impacts are generated.

In terms of product design, CarbonGraph follows the principle of “low floor, high ceiling, wide walls.”

  • Low floor: easy to get started, with an accessible interface for new users.
  • High ceiling: advanced features provide power and flexibility for expert users.
  • Wide walls: multiple ways to accomplish a given objective, supporting a variety of modeling styles and workflows.

These principles are reinforced through key design decisions:

  • Transparent, graph-based models: every process and flow is visible and connected, creating models that are intuitive to understand and audit.
  • Visual-first interface: results and model structures can be explored graphically, helping users identify hotspots and explain outcomes clearly.
  • Modular and reusable: models are built with a “don’t repeat yourself” (DRY) philosophy, encouraging reuse across products and systems while preserving uniqueness.
  • Flexible but rigorous: the platform balances practitioner flexibility with alignment to established methodological standards.

Finally, CarbonGraph is aligned with internationally recognized standards for life cycle assessment. Our methodology supports models that conform with:

  • ISO 14040 and ISO 14044: the foundational standards for LCA principles and requirements.
  • ISO 21930: core rules for environmental product declarations of construction products and services in buildings and civil engineering works.
  • EN 15804: core rules for environmental product declarations of construction products.

Together, these philosophical foundations ensure that CarbonGraph is both accessible and rigorous, giving practitioners speed, control, and transparency without sacrificing methodological integrity.

Core Calculation Logic: Definitions

At the heart of CarbonGraph’s calculation logic are processes and flows. These are the fundamental building blocks of every model and the basis for how life cycle systems are represented in the platform.

A process (sometimes called an activity in other LCA tools) defines everything needed to transform a set of inputs into a set of outputs. Each process has:

  • A set of input product flows and input elementary flows.
  • A set of output product flows and output elementary flows.

Processes represent a small slice of a supply chain. By linking many processes together, modelers can represent the entire life cycle of a product — from raw material extraction, through manufacturing and distribution, to use and end-of-life. This is what gives CarbonGraph its graph-based structure: each process is a node, and flows are the edges connecting them.

CarbonGraph distinguishes between different types of flows:

  • Product flows: intermediate products or energy that move between processes. These represent hand-offs across a supply chain, such as pig iron moving into a steel-making process, or electricity being delivered to a factory.
  • Elementary flows: exchanges directly with the environment. These include emissions to air (e.g., CO₂), water use, or land occupation. Impacts are attributed to these flows by applying characterization factors during the impact assessment phase.
  • Accounting flows: non-physical flows used to track additional quantities. These do not represent material or energy directly exchanged with the environment, but rather serve to capture accounting categories. A common example is cost, which CarbonGraph treats as a flow so that it can be consistently allocated, aggregated, and compared alongside environmental impacts. Other accounting flows include constructed flows that tally up impacts into specific indicator categories.

This unified treatment of flows means that CarbonGraph can track physical quantities, emissions, and costs all within the same modeling framework. Whether the flow represents steel, CO₂, or dollars, it is handled transparently and consistently as part of the life cycle system.

Why this matters

  • Models are consistent: every process and flow is defined in the same structured way.
  • Results are traceable: materials, emissions, and even costs can all be followed through the graph.
  • Models are flexible: treating costs and constructed flows like any other flow makes it possible to capture economic and environmental dimensions together.

Core Calculation Logic: Life Cycle Inventory (LCI)

The first major calculation step in CarbonGraph is the life cycle inventory (LCI). This determines how much of each process in the model is required to satisfy the declared unit of the system.

CarbonGraph calculates the LCI by traversing the connected graph of processes and assigning each one a system-level scaling factor. This ensures that all upstream requirements are included in the correct proportions, so the declared unit can be traced consistently through every process and flow in the model.

How scaling works — an example

  • Imagine a downstream fabrication process requires 10 kg of crude steel.
  • An upstream steelmaking process produces 1 kg of crude steel per run.
  • To meet the declared unit, CarbonGraph automatically scales the upstream process by a factor of 10.
  • This scaling also applies to all inputs into the upstream process — electricity use, scrap steel, pig iron, emissions, and so on — so that the entire system balances.

The declared unit is usually the final product at the right-most end of the graph, such as 1 kg of crude steel at the factory gate or 1 aluminum can delivered to a consumer. However, CarbonGraph also allows models to be re-based at an intermediate stage. For example, a practitioner may want to declare the unit as “1 ton of pig iron” instead of the finished product. The platform adjusts scaling automatically to reflect this choice.
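The scaling logic above can be sketched as a demand-driven traversal over the process graph. This is a minimal illustration, not CarbonGraph's actual API: the data layout, function name, and process names are all hypothetical, and the pig iron quantity is invented for the example.

```python
# Minimal sketch of system-level scaling: each process declares how much of a
# product it makes per run and how much of each input product it needs.
# (Illustrative data structures only, not CarbonGraph's internal schema.)

def compute_scaling(processes, demand):
    """processes: {name: {"makes": (product, qty_per_run),
                          "needs": {product: qty_per_run}}}
       demand: {product: required_qty} implied by the declared unit."""
    producers = {p["makes"][0]: name for name, p in processes.items()}
    scaling = {name: 0.0 for name in processes}
    queue = dict(demand)
    while queue:
        product, qty = queue.popitem()
        name = producers[product]
        proc = processes[name]
        runs = qty / proc["makes"][1]          # scale factor for this demand
        scaling[name] += runs
        for inp, per_run in proc["needs"].items():
            if inp in producers:               # propagate demand upstream
                queue[inp] = queue.get(inp, 0.0) + runs * per_run
    return scaling

# Worked example from the text: fabrication needs 10 kg of crude steel, and
# steelmaking produces 1 kg per run, so steelmaking scales by a factor of 10.
# The 2 kg of pig iron per run is a made-up figure for illustration.
procs = {
    "steelmaking": {"makes": ("crude steel", 1.0), "needs": {"pig iron": 2.0}},
    "pig_iron":    {"makes": ("pig iron", 1.0), "needs": {}},
}
print(compute_scaling(procs, {"crude steel": 10.0}))
# steelmaking scales by 10; pig_iron scales by 20 (10 runs × 2 kg per run)
```

The same traversal is what re-bases a model when the declared unit is moved to an intermediate stage: the initial demand dictionary simply points at a different product.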

By calculating system-level scaling factors in this way, CarbonGraph ensures that:

  • All upstream and downstream inputs are consistently represented.
  • Declared units can be flexibly set at the final or intermediate stages.
  • System inventories remain proportional and auditable across all scales.

Why this matters

  • Inventories are balanced: system-level scaling factors guarantee that upstream and downstream requirements align with the declared unit.
  • Results are reliable: practitioners don’t need to manually reconcile inputs and outputs — CarbonGraph does it automatically.
  • Models are flexible: declared units can be set at the final product or any intermediate stage, and the platform adjusts scaling accordingly.

Core Calculation Logic: Life Cycle Impact Assessment (LCIA)

After the inventory is known, CarbonGraph translates flows into impacts via life cycle impact assessment (LCIA). Each elementary flow in the LCI is multiplied by a characterization factor (CF) from a chosen method to quantify its contribution to an impact category (e.g., global warming potential).

What LCIA does in practice

  • For every elementary flow (e.g., CO₂ to air, CH₄ to air, NOx to air), CarbonGraph retrieves the corresponding CF from the selected characterization method.
  • It multiplies the quantity of the flow by the CF and sums across all flows to produce results per impact indicator (e.g., GWP, AP, EP).
  • Flows are handled with their proper compartments (to air, water, soil, etc.) and units, ensuring CFs apply correctly and consistently.

Concrete examples

  • CO₂: If the LCI includes 10 kg of CO₂ to air, and the CF for CO₂ under the selected GWP method is 1 kg CO₂e/kg, then the contribution is 10 kg CO₂e.
  • Methane (CH₄): If the LCI includes 2 kg of CH₄ to air, and the method uses a CF of approximately 36 kg CO₂e/kg, the contribution is 72 kg CO₂e. (Exact CF depends on the method/version and time horizon configured.)

Selecting characterization methods

  • CarbonGraph supports industry-standard LCIA methods such as TRACI, ReCiPe, and EF 3.1. Users choose the method appropriate to their project or reporting context.
  • Methods are versioned; results clearly indicate which method/version was used so they can be reproduced and compared consistently.

How grouping and reporting work

  • LCIA results can be reported by grouping (e.g., life cycle stage, EPD module, Scope 1/2/3, data source). These groupings mirror how the LCI was organized.
  • When the LCI is expressed as a single vector (system totals), LCIA returns total impacts per indicator. When expressed as a matrix (grouped LCI), LCIA returns contributions by group while remaining mathematically consistent with the total (groups always sum to the overall result).

Matrix view (for readers who like the math)

  • Let LCI be the vector (or matrix) of elementary flows, and CF be the characterization factor matrix (flows × indicators). Impacts are computed as: Impacts = LCI × CF.
  • This single operation supports both total results and grouped breakdowns, depending on how the LCI is structured.
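The matrix view can be made concrete with a small NumPy sketch. The flow list, CF values, and group split below are illustrative (CFs depend on the method and version selected, as noted above); the numbers reuse the CO₂ and CH₄ examples from this section.

```python
import numpy as np

# LCIA as a single matrix operation: Impacts = LCI × CF.
flows = ["CO2 to air", "CH4 to air"]

# CF matrix (flows × indicators); here one indicator, GWP in kg CO2e.
CF = np.array([[1.0],    # CO2: 1 kg CO2e per kg
               [36.0]])  # CH4: ~36 kg CO2e per kg (method/version dependent)

# Total system LCI as a vector: 10 kg CO2 and 2 kg CH4.
lci_total = np.array([10.0, 2.0])
print(lci_total @ CF)        # → [82.]  (10×1 + 2×36)

# Grouped LCI as a matrix (groups × flows), e.g. modules A1-A3 and A4.
lci_grouped = np.array([[8.0, 2.0],   # A1-A3 (illustrative split)
                        [2.0, 0.0]])  # A4
grouped = lci_grouped @ CF
print(grouped)                # per-group impacts: 80 and 2 kg CO2e
print(grouped.sum(axis=0))    # → [82.]  groups sum back to the total
```

The same `@` operation serves both cases; only the shape of the LCI changes.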

Why this matters

  • Results are traceable: every impact value is tied back to specific flows and CFs.
  • Results are comparable: method/version tagging ensures apples-to-apples comparisons.
  • Results are flexible: the same LCIA can be presented as totals, by stage/module/scope, or by data source (primary vs secondary) without changing the underlying math.

Core Calculation Logic: Modeling Special Cases

Not every life cycle model is straightforward. Real supply chains often include complexities like processes that don’t balance perfectly, multiple products leaving the same step, or different ways of grouping results for reporting. CarbonGraph is designed to handle these cases without sacrificing transparency or mathematical consistency.

Mass Balance

At the system level, CarbonGraph always assumes that what goes into a model must equal what comes out, scaled to the declared unit. This assumption is built into how the engine calculates scaling factors, so that the overall system is always balanced. At the process level, however, the platform gives modelers flexibility. Individual unit processes are not forced to close by default, because in practice, data may be incomplete or simplified. Instead, CarbonGraph provides optional checks that can alert a user if a process does not balance. This way, practitioners can explore and iterate while still maintaining confidence that the full system is coherent and auditable.
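An optional process-level check like the one described could look like the following sketch. The function name, inputs, and 1% tolerance are assumptions for illustration, not CarbonGraph's actual validation logic.

```python
# Illustrative process-level mass balance check: flag a unit process whose
# mass inputs and outputs differ by more than a relative tolerance.

def mass_balance_gap(inputs_kg, outputs_kg, rel_tol=0.01):
    """Return (gap_kg, flagged) for one unit process."""
    total_in, total_out = sum(inputs_kg), sum(outputs_kg)
    gap = total_in - total_out
    flagged = abs(gap) > rel_tol * max(total_in, total_out)
    return gap, flagged

# 100 kg in, 97 kg out: 3 kg unaccounted, flagged at a 1% tolerance.
print(mass_balance_gap([60.0, 40.0], [97.0]))   # → (3.0, True)
```

A warning like this lets a practitioner iterate on incomplete data while the system-level balance remains enforced by the scaling calculation itself.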

Aggregation

Once impacts are calculated, they often need to be reported in different formats depending on the audience. CarbonGraph uses a matrix-based approach that makes this flexible. The same underlying inventory can be grouped and reported by many different perspectives:

  • EPD modules, such as A1–A3 (production), A4 (transport), B (use phase), C (end-of-life), and D (benefits and loads beyond the system boundary).
  • GHG Protocol scopes (Scope 1: direct emissions, Scope 2: purchased energy, Scope 3: supply chain impacts).
  • Life cycle stages, such as raw material supply, manufacturing, distribution, use, and end-of-life.
  • Data source, allowing a clear distinction between primary (collected) and secondary (reference) data.

Importantly, these different groupings always sum back to the same overall inventory. This means users can cut the data many different ways — to meet reporting requirements, to highlight internal priorities, or to support enterprise-level roll-ups — while knowing that the math always reconciles.

Multi-Product Allocation

A common challenge in LCA is what to do when a process produces more than one useful output. For example, a steelmaking process may also generate slag, or a chemical plant may co-produce two market chemicals. In these cases, the environmental burdens of the process need to be divided among the different products in a way that is fair and transparent.

CarbonGraph supports all of the standard allocation approaches recognized by the LCA community:

  • Avoid allocation: wherever possible, expand the system boundary or redefine the product flow to capture both outputs together.
  • Physical allocation: split impacts in proportion to a measurable property, such as mass or energy content.
  • Economic allocation: split impacts according to the relative market value of the co-products.
  • Substitution: give credit for avoided burdens, such as when a by-product displaces another material in the market.

CarbonGraph enforces its directed acyclic graph structure even in these cases by modeling co-products or waste treatments as negative flows. This means that if slag is considered a by-product of steelmaking, its treatment appears as a negative input into the system. This preserves mathematical consistency while still allowing the model to explicitly represent by-products, waste products, and recycling credits.
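The arithmetic behind physical and economic allocation is simple to show. The figures below (a process emitting 100 kg CO₂e that co-produces 1000 kg of steel worth $500 and 200 kg of slag worth $10) are invented for illustration and do not come from any reference dataset.

```python
# Sketch of two standard allocation approaches for a two-product process.

def physical_allocation(burden, masses):
    """Split a burden in proportion to a physical property (here, mass)."""
    total = sum(masses.values())
    return {p: burden * m / total for p, m in masses.items()}

def economic_allocation(burden, values):
    """Split a burden in proportion to relative market value."""
    total = sum(values.values())
    return {p: burden * v / total for p, v in values.items()}

burden = 100.0  # kg CO2e for the whole process (illustrative)

print(physical_allocation(burden, {"steel": 1000.0, "slag": 200.0}))
# steel ≈ 83.3, slag ≈ 16.7 (split by mass)

print(economic_allocation(burden, {"steel": 500.0, "slag": 10.0}))
# steel ≈ 98.0, slag ≈ 2.0 (split by market value)
```

Note how the choice of basis changes the result materially, which is why the allocation approach is always recorded alongside the model.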

Unit Process Calculations

Each individual process in CarbonGraph can be thought of as a “mini-LCA.” A process has its own inputs, outputs, and impacts, and these are calculated in exactly the same way as for the system overall. This makes unit processes interpretable on their own and allows practitioners to zoom in and evaluate the contribution of a single step without having to look at the full model. It also makes it easier to review, because a verifier can inspect the details of a process independently.

Benchmarking & Validation

To ensure trust in the results, CarbonGraph has been benchmarked against leading reference datasets (e.g., ecoinvent, USLCI) and validated by comparing results with other LCA tools. When the same inputs are provided, CarbonGraph produces the same results, to within tiny differences caused only by floating point arithmetic. In addition, the platform is continuously tested with an automated suite of unit tests, which confirm that as new features are added, the core calculation logic remains stable and reliable.

Why this matters

  • Models are consistent: mass balance rules and aggregation methods ensure that results always add up.
  • Models are flexible: allocation rules, negative flows, and unit process visibility make it possible to handle co-products, waste, and complex supply chains transparently.
  • Results are trustworthy: benchmarking against reference datasets and ongoing automated testing confirm that outputs remain accurate and stable as the platform evolves.

Model Structure & Configuration: Model Locking Mechanism

Once a model has been built and calculated, it often needs to be reviewed, approved, or published. CarbonGraph supports this through a model versioning and locking system that ensures results remain stable over time.

How versioning and locking work

  • Version commits: a user can commit a model, which saves it as a read-only, fixed-in-time version. Each model maintains a full version history.
  • Review and approval: committed versions are useful for reviewers and program operators, who need to verify a frozen model without risk of it changing underneath them.
  • Database enforcement: read-only status is enforced at the database layer with row-level security policies, so it is technically impossible to alter a committed version.
  • Publishing updates: when new data or refinements are needed, a fresh version can be committed without affecting previous ones.

Characterization model support

At the time of committing a version, users can indicate which characterization methods the model supports (e.g., TRACI, ReCiPe, EF 3.1). This allows committed versions to serve as reusable building blocks in larger models, with confidence that they remain valid when applied upstream.

Visibility and permissions

  • Models can be set to private (visible only to the creator or team).
  • They can be restricted (accessible only to specific collaborators).
  • Or they can be made public, for sharing with the wider community.

All of these visibility options are enforced by the same row-level security controls in the database, ensuring that permissions are robust and reliable.

Why this matters

  • Results are stable: once committed, a version cannot be altered, giving reviewers and stakeholders confidence in what they are evaluating.
  • Models are reusable: version history allows past work to serve as the foundation for new models, without duplication or risk of overwriting.
  • Access is controlled: permissions and visibility settings ensure the right people see the right models at the right time.

Model Structure & Configuration: Configurable Parameters

CarbonGraph allows models to include parameters — flexible variables that can be defined, linked, and adjusted to represent different scenarios. Parameters are one of the most powerful features of the platform, because they turn static models into dynamic systems that can be tuned and explored.

What parameters can represent

  • Scenario toggles: such as switching between renewable and grid electricity.
  • Process assumptions: such as energy consumption of equipment, yield rates, or material efficiency.
  • Product specifications: such as bill of materials, component weights, or recycled content.
  • Transport details: such as distance traveled or mode of transport.
  • Cost factors: such as material or energy prices, tracked consistently alongside environmental flows.

How parameters are used

Parameters are not just static values. They can be defined with ranges, linked across multiple processes, and even embedded directly in the formulas that define exchange amounts. This allows a change in one parameter to cascade through an entire system.

Parameters can also be shared across multiple system-level models, enabling connected hierarchies where enterprise-level assumptions flow down into facility- or product-level models.
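The idea of parameters embedded in exchange formulas can be sketched as follows. The parameter names, exchange definitions, and quantities here are all hypothetical; CarbonGraph's actual expression syntax is not shown.

```python
# Illustrative sketch: exchange amounts are expressions over named parameters,
# so changing one parameter cascades through every exchange that uses it.

params = {"recycled_content": 0.30, "mass_kg": 12.0, "grid_ef": 0.45}

exchanges = {
    "virgin aluminum (kg)":   lambda p: p["mass_kg"] * (1 - p["recycled_content"]),
    "recycled aluminum (kg)": lambda p: p["mass_kg"] * p["recycled_content"],
    "electricity CO2e (kg)":  lambda p: 2.0 * p["grid_ef"],  # assumes 2 kWh/unit
}

def evaluate(exchanges, params):
    return {name: formula(params) for name, formula in exchanges.items()}

print(evaluate(exchanges, params))
# Scenario: raise recycled content to 60% and recompute everything at once.
print(evaluate(exchanges, {**params, "recycled_content": 0.60}))
```

Sharing the `params` dictionary across several such models is the essence of the connected hierarchies described above: one enterprise-level assumption feeds many downstream formulas.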

Unlocking scenario analysis

Because parameters can be varied and recomputed instantly, CarbonGraph supports powerful scenario analysis. Modelers can define “what-if” cases, such as increasing recycled content, switching suppliers, or improving energy efficiency, and immediately see how these changes affect results. Outputs can be displayed as tables, waterfall charts, or comparative graphs.

Why this matters

  • Models are dynamic: parameters allow practitioners to explore variations without rebuilding models from scratch.
  • Models are connected: parameters can link across product, facility, and enterprise levels, enabling top-down consistency.
  • Results are actionable: scenario analysis provides immediate insights into trade-offs, supporting better design and business decisions.

Model Structure & Configuration: Background Dataset Integration

CarbonGraph includes an ever-growing library of reference datasets that can be incorporated directly into user models. These background datasets represent common processes such as energy production, transportation, or material extraction, and they allow modelers to build complete systems without having to manually recreate everything from scratch.

How background datasets are managed

  • Datasets are ingested upon request for projects and stored in CarbonGraph as if they had been built by modelers themselves, making them seamless to integrate.
  • Users can mix datasets within a model (e.g., combining USLCI energy data with ecoinvent transportation data). It is the practitioner’s responsibility to follow standards when mixing datasets across sources.
  • All datasets are versioned, so that users can reference specific releases of databases and reproduce results even as new versions are published.

Seamless integration

Because background datasets are treated like any other process in the platform, they can be attached to models directly and linked across supply chains. Modelers can also see exactly which dataset was used, ensuring transparency and traceability.

Where to learn more

Details of currently available datasets, their coverage, and recommended use cases are listed on the Datasets page.

Why this matters

  • Models are comprehensive: background datasets fill in supply chain steps that users don’t need to model from scratch.
  • Models are transparent: datasets are versioned and traceable, so results can always be reproduced and reviewed.
  • Models are flexible: practitioners can choose the datasets that best suit their project, while retaining responsibility for methodological alignment.

Model Structure & Configuration: Security and Trust

CarbonGraph is a cloud-based platform hosted on Amazon Web Services (AWS). We follow established best practices for security, privacy, and reliability to ensure that all models and datasets are protected at every stage.

How we secure the platform

  • Cloud infrastructure: CarbonGraph is deployed on AWS with secure configuration, monitoring, and redundancy.
  • Access control: permissions are enforced at the database layer using row-level security (RLS), ensuring that users only see the models and datasets they are authorized to access.
  • Version control: committed models are locked in the database and cannot be changed, providing a tamper-proof record of results.
  • Auditability: version history and security policies provide a transparent trail for verification and review.

Policies and compliance

CarbonGraph’s approach to privacy and security is described in detail in our Privacy Policy and End-User Licensing Agreement. These documents outline how data is collected, stored, and protected, and the rights users have over their information.

Why this matters

  • Data is protected: industry-standard cloud infrastructure ensures models and datasets are secure and reliable.
  • Access is controlled: robust permissions prevent unauthorized use or accidental exposure.
  • Records are trustworthy: version control and audit trails provide confidence that results are stable, traceable, and tamper-proof.

Outputs and External Alignment: EPD-Ready Outputs

CarbonGraph supports the generation of EPD-ready outputs. Once a model has been built and calculated, the platform can produce results formatted for use in Environmental Product Declarations (EPDs), as well as other reporting frameworks.

What EPD-ready outputs include

  • Standardized tables: inventory and impact assessment results are formatted to align with typical EPD requirements.
  • Indicator coverage: impact results are provided across the indicators required by major EPD program operators.
  • Declared unit reporting: all results are tied back to the declared unit, ensuring they can be compared and reviewed consistently.

Reviewer and verifier support

CarbonGraph makes it easy for reviewers to inspect results. Models can be locked and shared in a read-only state, annotations can be added to explain assumptions, and reviewers can drill down into the model itself to verify calculations.

Why this matters

  • Results are standardized: tables and indicators align with industry norms for EPDs.
  • Results are verifiable: reviewers can trace outcomes back to the model and its assumptions.
  • Results are ready to publish: outputs are already structured for use in EPD documents and other disclosures.

Outputs and External Alignment: Scenario Analysis

One of the most powerful outputs CarbonGraph supports is scenario analysis. By varying model parameters, practitioners can explore “what-if” cases and immediately see how results change. This makes the platform a tool not just for reporting, but for decision-making.

What scenario analysis can explore

  • Design choices: compare different product specifications, such as recycled content, material substitutions, or component weights.
  • Operational improvements: evaluate changes in process energy consumption, efficiency, or yield rates.
  • Supply chain options: switch suppliers, transport distances, or energy mixes to test sensitivity.
  • Business trade-offs: assess cost factors alongside environmental results, since costs can be tracked as accounting flows.

How results are presented

  • Results can be displayed as comparative tables, showing differences between scenarios across all impact indicators.
  • Waterfall charts illustrate the incremental effect of changing individual parameters.
  • Results can be broken down by life cycle stage, scope, or module, highlighting where the biggest differences occur.

Why this matters

  • Models become decision-support tools: scenario analysis shows the effect of choices before they are made.
  • Results are actionable: practitioners can prioritize improvements by testing sensitivity across multiple levers.
  • Communication is visual: charts and breakdowns make it easy to share insights with non-technical stakeholders.

Outputs and External Alignment: Enterprise-Level Reporting

CarbonGraph scales from individual product LCAs to facility and enterprise-wide reporting. Models can be linked together, tagged consistently, and rolled up to produce portfolio views that align with organizational frameworks (e.g., GHG Protocol, ESG disclosures) while preserving full drill-down traceability.

How enterprise roll-ups are constructed

  • Linking models: product-level LCAs feed facility models; facility models feed enterprise models. Each node remains reusable and version-locked, so roll-ups stay stable and auditable.
  • Consistent tagging: processes and results can be tagged by Scope 1/2/3, life-cycle stage, EPD module, region, business unit, and data source (primary vs secondary) to enable consistent aggregation.
  • Parameterized hierarchies: shared parameters propagate top-down (enterprise → facility → product), ensuring common assumptions (e.g., grid mix, internal prices, transport policies) are applied consistently.
  • Versioned building blocks: roll-ups reference committed versions of underlying models and datasets, preserving reproducibility across reporting cycles.

What the enterprise sees

  • Portfolio dashboards: consolidated views of impacts across products, facilities, markets, or business units, with filters for period, scope, method, and geography.
  • At-a-glance breakdowns: results grouped by stage/module/scope/data source to highlight hotspots and primary-data coverage.
  • Comparability by method: organization-wide consistency on characterization method/version (e.g., TRACI, ReCiPe, EF 3.1) and declared units, captured in model metadata.
  • Scenario portfolios: enterprise-level “what-ifs” (e.g., renewable procurement, recycled content targets, supplier shifts) evaluated via shared parameters and surfaced as tables or waterfall charts.
  • Audit trail: every number traces back to specific processes, flows, datasets, and model versions, enabling reviewers to drill down from enterprise metrics to unit processes.

Collaboration and roles

  • Modelers maintain product and facility LCAs; program reviewers/verifiers can inspect committed versions; auditors validate traceability; clients & leadership consume simplified, visual outputs for decisions.
  • Role-appropriate sharing is enforced in the database (row-level security), ensuring sensitive models are visible only to the right teams while still supporting organization-wide roll-ups.

Reporting contexts

  • GHG Protocol alignment: scope tagging and energy accounting support corporate inventories and facility reporting.
  • ESG & stakeholder disclosures: portfolio-level impacts and scenario results can be exported to dashboards and reports, with references to method/version and data coverage.
  • Internal performance management: target setting, sensitivity analysis, and tracking of interventions (e.g., energy efficiency, materials changes) across products and sites.

Why this matters

  • Results are consistent: shared parameters, tags, and method/version metadata ensure apples-to-apples aggregation across products and facilities.
  • Reporting is actionable: roll-ups highlight hotspots and reveal the impact of enterprise-level interventions before they’re implemented.
  • Evidence is auditable: enterprise numbers trace back to committed model versions, datasets, and unit processes, supporting robust internal and external reviews.

Outputs and External Alignment: Communication & Collaboration

CarbonGraph is designed not only to compute results, but to help teams explain and review them. Communication tools, role-aware sharing, and inline context make it easy for modelers, reviewers, auditors, and clients to collaborate around a single source of truth.

Role-aware collaboration

  • Modelers build and refine models, define parameters, and run scenarios.
  • Reviewers & verifiers access committed versions, add annotations, and inspect flows, scaling, and CF applications without altering the model.
  • Auditors rely on version history and row-level security to verify provenance, permissions, and reproducibility.
  • Clients & stakeholders receive simplified views (tables, charts, summaries) that highlight drivers, trade-offs, and options.

Explaining results clearly

  • Visualizations: hotspot diagrams, stage/module/scope breakdowns, and waterfall charts help non-technical audiences grasp where impacts come from and how scenarios differ.
  • Annotations: inline notes capture assumptions, data sources, system boundaries, allocation choices, and rationale—kept alongside the model for review.
  • Method & version tags: outputs clearly indicate the characterization method/version, dataset releases, and declared unit, ensuring like-for-like comparisons.
  • Cost alongside impacts: when costs are modeled as accounting flows, results can present environmental and economic outcomes together for decision support.

Sharing the right view with the right audience

  • Private, restricted, or public: choose visibility at the model/version level, enforced by database row-level security.
  • Reader-friendly exports: standardized tables and charts can be packaged for reports, disclosures, and stakeholder communications.
  • Direct model access for reviewers: share a committed version so reviewers can drill down to unit processes and trace numbers without risk of edits.

Why this matters

  • Collaboration is smooth: role-aware sharing and annotations keep discussion anchored to the live model.
  • Communication is clear: visuals, method/version tags, and grouped views make results easy to explain.
  • Reviews are efficient: committed versions, provenance, and drill-down traceability reduce back-and-forth.

Active Development & Roadmap

CarbonGraph is an actively developed platform. We work closely with practitioners, reviewers, and organizations in the LCA community to refine features, resolve issues, and expand coverage. Feedback from users directly informs our roadmap.

How development evolves

  • User-driven improvements: we prioritize bugs, feature requests, and ideas surfaced by the modeling community.
  • Ongoing validation: every release is benchmarked against reference datasets and verified through automated tests to ensure stable results.
  • Regular releases: updates are versioned and documented, so users can track changes and understand how new functionality affects their workflows.

Our roadmap includes

  • Expanded dataset coverage: adding more reference databases and updating existing ones.
  • Enhanced scenario tools: deeper parameter controls and visualization options for “what-if” analysis.
  • Enterprise reporting features: more robust roll-ups, dashboards, and ESG reporting integrations.

Why this matters

  • The platform stays relevant: continuous updates align with evolving standards and reporting requirements.
  • Results remain reliable: ongoing validation and testing ensure stability across releases.
  • Users feel heard: feature development responds directly to community needs.