Researchers at the engineering department of Columbia University, New York, are developing ways to use big data sets to calculate carbon footprints faster.
The researchers set out to measure the carbon footprints of 1,137 different PepsiCo products.
The researchers developed a predictive model which generates an estimated “emission factor” for each material, as an alternative to manually mapping the ingredients and packaging materials against commercial life-cycle assessment databases.
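The core idea is that once each material has an estimated emission factor, a product's footprint can be computed by summing mass times emission factor across its bill of materials. A minimal sketch of that calculation, with entirely hypothetical material names and values (not the researchers' actual model or data):

```python
# Illustrative sketch: combining predicted per-material emission factors
# into a product-level carbon footprint. All figures are made up.

def product_footprint(bill_of_materials, emission_factors):
    """Carbon footprint (kg CO2e) = sum over materials of mass * emission factor."""
    return sum(mass_kg * emission_factors[material]
               for material, mass_kg in bill_of_materials.items())

# Hypothetical predicted emission factors (kg CO2e per kg of material)
emission_factors = {
    "orange juice concentrate": 1.8,
    "PET bottle": 2.9,
    "label": 1.2,
}

# Hypothetical bill of materials for one product (kg per unit)
bill_of_materials = {
    "orange juice concentrate": 0.25,
    "PET bottle": 0.03,
    "label": 0.002,
}

print(round(product_footprint(bill_of_materials, emission_factors), 4))  # → 0.5394
```

The time savings come from predicting the emission factors automatically rather than mapping each material to a life-cycle assessment database by hand; the final aggregation step itself is simple arithmetic like the above.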
It follows a five-year project involving The Earth Institute at Columbia University and PepsiCo, Inc, originally aimed at evaluating and standardising carbon footprinting and labelling in the UK and US. PepsiCo has been pilot-testing the methodology since summer 2011.
“Our novel approach generates standard-compliant product carbon footprints for companies with large portfolios at a fraction of previously required time and expertise,” says Christoph Meinrenken, the study’s lead author and associate research scientist at Columbia Engineering and The Earth Institute.
Any carbon footprint generated can be audited against the World Resources Institute life-cycle assessment (LCA) standard.
Without such a system, calculating the carbon footprint of a large range of products, as a supermarket might wish to do, is a massive manual task requiring enormous amounts of data to be collected and analysed. But if companies rely on aggregate data instead, they lose the detail.
“Mining all the ‘big data’ that’s already available in companies’ data warehouses will enable us to calculate the carbon footprints of thousands of products virtually simultaneously,” says Meinrenken.
This automated approach can help companies speed up their assessments of reduction strategies, such as using less carbon-intensive fertilisers in orange juice production.
The data should also get better over time.