Open and Accurate Air Quality Monitors
We design professional, accurate and long-lasting air quality monitors that are open-source and open-hardware, so that you have full control over how you use the monitor.
You may or may not know this, but at AirGradient, we test every fully assembled monitor before it leaves our hands. That might sound obvious, but it's actually quite unusual in the air-quality monitoring industry. Most monitor manufacturers, including companies like PurpleAir and Qingping, build their devices around sensors made by third parties such as Plantower, SenseAir, and Sensirion. In fact, inside the AirGradient ONE and Open Air, you'll find components from each of those suppliers as well.
This approach keeps costs down, but it comes with a trade-off: the sensors themselves are often "black boxes." Outside of the sensor manufacturers, nobody really knows the details of how they work. Their internal algorithms and processing are kept secret, so we don't know whether readings have been smoothed, averaged, or adjusted for factors like humidity. In practice, this means the number you see as the 'raw' output may already be shaped by choices the sensor manufacturer made behind the scenes. Worse, none of this is disclosed.
We wrote a full article on why this is an issue, but it essentially comes back to one key point: if these sensors are black boxes, how can we trust them? Even as a monitor manufacturer, we have relatively little information about how each sensor works. This also means we don’t know exactly what algorithms and other kinds of post-processing are being applied, which can potentially lead to skewed results and incorrect conclusions.
At the end of 2024, we experienced this problem firsthand. One of our sensor suppliers made an undocumented change that technically remained within the official performance specifications, but the new sensors suddenly behaved differently, and not always in consistent ways. This wasn't limited to a single batch, either: it affected all newer sensors, and each batch behaved with its own characteristics.
While it caused some headaches at the time, we now look back on this experience as a positive one. From it, we were able to enhance our own testing procedures and develop some unique features that no other air quality monitor manufacturer in this price range currently offers. We've now been doing our own in-depth testing for over a year, but we have never shared exactly what we do to ensure our monitors are accurate and how we're constantly raising the bar. Today, I wanted to share a bit more about our in-depth testing procedure at AirGradient and also show our new test reports that come with every pre-built monitor.
But first, it’s worth answering the question that’s probably already forming in your mind: if we can’t always rely on the sensor manufacturers, how do we know what “accurate” really looks like? The only way was to bring testing in-house with a reference-grade instrument.
When it comes to air quality monitoring, reference-grade instruments are the gold standard. These devices are used by governments, universities, and research organizations worldwide and can measure particulate matter and gases (depending on the instrument in question) with high precision and accuracy. The problem is cost: at $30,000 or more, they’re far beyond the reach of most consumers and even many companies. That’s before you even begin to discuss the maintenance and regular servicing needed!
Because we’d already been working to improve the accuracy of our monitors, we had purchased one of these instruments (a Palas Fidas 200) just a few months before encountering the issue with our PM sensors. After some setup time, this allowed us to recalibrate the affected sensors and restore the performance we had seen previously. In fact, our own custom calibrations went a step further, improving the accuracy of all sensors we ship.
For the past year, we tested and applied custom calibrations to every batch of PM sensors we used. This approach worked well because sensors from the same batch tended to share similar characteristics, meaning one correction could be applied across the board. The results were promising, but we wanted to push things further.
Last month, we went a step further and moved from batch calibration to individual calibration. Every pre-assembled device we ship now has its PM sensor calibrated one by one (for kit users, we still provide batch calibrations). This change means our monitors are not only more accurate, but also that we understand their behavior far better. It also sets us apart: very few manufacturers at this price point test their monitors at all, and none (that we are aware of) individually test every sensor. Most simply rely on the specifications provided by the sensor maker!
But that raises the question: how do we actually test our monitors? Once a device is assembled in our factory in Thailand, it doesn't go straight into a box. Instead, every (pre-built) unit is placed in our custom test chamber, where we compare its readings directly against our European-certified Palas Fidas 200 reference-grade instrument.
The chamber is a large, sealed room equipped with both a Palas Fidas and any number of monitors currently being tested. A controlled inlet lets us adjust the particle concentration inside with good precision. To generate particles, we burn incense in a separate chamber outdoors and then pull the smoke inside through a pipe with built-in fans. This setup gives us a repeatable way to raise and stabilize particle levels, conditions that would be difficult to recreate consistently in any other environment.
For this test, we use incense smoke because it gives us a supply of burnable material with reproducible characteristics and particle ranges from one incense stick to the next, which keeps our testing consistent. However, it is worth noting that incense smoke is heavily weighted towards smaller particle sizes (mostly between 0.1 and 0.3 µm).
Each test runs for more than three hours. During that time, we step the concentration up in stages, holding it steady at different thresholds from 0 µg/m³ to 60 µg/m³. This lets us see how the sensors perform both at low concentrations and when particle levels begin to climb. By comparing their outputs with the reference device, we get a clear picture of how accurately each monitor is reporting in our test chamber conditions.
Stepping through different particle concentrations is especially important because it gives us a more thorough understanding of how the sensors behave across a range of possible environments. A sensor that tracks well at low concentrations might drift or lose accuracy at higher levels, and vice versa. By testing a wide range, we can identify how our monitors perform in both cleaner and more polluted conditions.
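To make the stepped-test idea concrete, here is a minimal sketch of how readings from such a run could be summarized per concentration plateau. The data, function name, and plateau targets are illustrative assumptions, not AirGradient's actual tooling.

```python
from collections import defaultdict
from statistics import mean

def plateau_means(samples):
    """Average sensor and reference PM2.5 readings per hold plateau.

    samples: list of (target_ugm3, sensor_pm25, reference_pm25) tuples
    recorded while the chamber is held at each target concentration.
    Returns {target: (mean_sensor, mean_reference)} sorted by target.
    """
    by_step = defaultdict(lambda: ([], []))
    for target, sensor, reference in samples:
        by_step[target][0].append(sensor)
        by_step[target][1].append(reference)
    return {t: (mean(s), mean(r)) for t, (s, r) in sorted(by_step.items())}

# Illustrative readings from three plateaus (0, 20 and 60 ug/m3).
samples = [
    (0, 0.4, 0.5), (0, 0.6, 0.5),
    (20, 16.2, 20.1), (20, 16.0, 19.9),
    (60, 49.0, 60.4), (60, 48.6, 59.6),
]
summary = plateau_means(samples)
```

Comparing per-plateau means makes it easy to spot whether a sensor under- or over-reads more at one end of the range than the other.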
For calibration, we focus on PM2.5. Although our sensors also measure PM1.0 and PM10, the Plantower sensors are only reliably calibrated for PM2.5, which is also the pollutant most often regulated and most strongly linked to health impacts. These particles are small enough to penetrate deep into the lungs and bloodstream, which is why regulators (and we) treat PM2.5 as the most critical measure.
It is worth noting that we also test the other sensors (such as CO2, temperature and relative humidity) to ensure they also perform within specification. However, we do not currently calibrate or correct these sensors.
So, what happens if a monitor’s readings don’t match the Palas Fidas? That’s where our calibration process comes in. We found that sensors from the same batch behaved almost identically, so one correction could be applied across the whole group. For a long time, we would test a sample of sensors from each batch, work out the average scaling factor (based on the PM count), and then apply that correction to all the monitors in that batch.
While this worked well, we wanted to take things further, and we had already learned many lessons from the batch-wide calibration process. As of this month, every pre-assembled monitor goes through the full test on its own. We compare each sensor directly to the Palas Fidas, then calculate an individual scaling factor to bring it in line. Because all monitors go through the same process, performance across our devices is far more consistent.
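For intuition, a per-sensor scaling factor of the kind described here can be derived as a zero-intercept least-squares fit of the reference readings against the raw sensor readings. This is only a sketch under that assumption; the actual correction model used in production may differ.

```python
def scaling_factor(sensor_pm25, reference_pm25):
    """Least-squares scale k minimizing sum((k * s - r)^2) over paired
    sensor (s) and reference (r) PM2.5 readings, i.e. a linear fit
    through the origin."""
    num = sum(s * r for s, r in zip(sensor_pm25, reference_pm25))
    den = sum(s * s for s in sensor_pm25)
    return num / den

def calibrate(sensor_pm25, k):
    """Apply the scale correction to raw sensor readings."""
    return [k * s for s in sensor_pm25]

# Example: a sensor that reads about 20% low against the reference.
sensor = [8.0, 16.0, 24.0, 40.0]
reference = [10.0, 20.0, 30.0, 50.0]
k = scaling_factor(sensor, reference)  # -> 1.25
corrected = calibrate(sensor, k)       # -> [10.0, 20.0, 30.0, 50.0]
```

A single multiplicative factor like this assumes the sensor's error is proportional to concentration; an offset term could be added if the error were constant instead.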
Alongside this individual testing, we’ve also rolled out a new version of our test reports. If you’ve purchased a pre-built AirGradient monitor before, you’ll be familiar with the older reports, which looked something like this:

Those reports included a PM2.5 section showing how the monitor performed during testing compared to the reference. While this confirmed the sensor was functioning correctly, we didn’t apply device-specific calibrations. Instead, we verified the sensor, applied the batch-specific calibration, and shipped the monitor after confirming that it was performing within spec.
From this month, all pre-built monitors now ship with a more detailed V2 test report. This updated version shows exactly how the raw readings from your monitor correlated with the Palas Fidas reference. Using that data, we calculate a unique scaling factor for each sensor, which brings its accuracy in line with the reference. To make the improvement clear, the report also plots the calibrated data on the same graph.
In addition, we now include a scatterplot comparing the calibrated AirGradient readings with the Palas Fidas. Because the calibration is derived directly from the reference, the two sets of data align very closely, typically with a low root-mean-square error (RMSE). Once those correction parameters are defined, they're applied to the device before it leaves our factory. This means that every pre-built AirGradient monitor arrives pre-calibrated and is ready for installation.
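RMSE is a standard way to quantify how closely calibrated readings track a reference; a minimal sketch, with made-up numbers, looks like this:

```python
import math

def rmse(calibrated, reference):
    """Root-mean-square error between paired readings."""
    return math.sqrt(
        sum((c - r) ** 2 for c, r in zip(calibrated, reference)) / len(reference)
    )

# Illustrative post-calibration readings vs. the reference (ug/m3).
calibrated = [10.2, 19.8, 30.5, 49.6]
reference = [10.0, 20.0, 30.0, 50.0]
error = rmse(calibrated, reference)  # ~0.35 ug/m3
```

Because RMSE squares each deviation before averaging, a few large misses raise it more than many small ones, which is why it is a common headline metric for sensor-vs-reference comparisons.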

So far, we’ve seen no evidence of drift in the Plantower PM2.5 sensors we use, which means this calibration should remain valid over the sensor’s lifespan. Note, however, that the calibration parameters may well change depending on the aerosol characteristics (size distribution, chemical composition). Therefore, the gold standard for sensor calibration remains collocation with a reference instrument at the deployment site under local conditions.
Air quality data only matters if you can trust it. That’s why accuracy is one of our top priorities at AirGradient — not just for us, but for everyone who depends on our monitors to make decisions about the air they breathe.
By calibrating each device individually and checking it against a reference-grade instrument, we can bring our monitors much closer to the truth than if we simply trusted the sensor manufacturer’s numbers. This means you can have more confidence in the values you see on the screen and dashboard.
We also believe in being transparent about how we get there. Black boxes help no one: if you can’t understand how a device works, you can’t fully trust it. That’s why I wanted to share our testing procedure, and explain the new V2 test reports.
This isn’t the end of the journey. We’ll continue refining our methods, experimenting with new approaches, and sharing what we learn along the way. Part of this refinement is the aerosol we use during testing. Incense is convenient and consistent, but it produces unusually small particles that may not represent real-world air.
Since particle size can affect sensor response, we’re experimenting with alternative sources to make the calibration environment more representative of ambient conditions. Of course, no calibration is perfect. Our process can’t account for every environment or pollutant, but by being transparent, we make sure you know exactly what to expect.
