AirGradient Open Source Air Quality Monitors
We design professional, accurate, and long-lasting air quality monitors that are open-source and open-hardware, so that you have full control over how you use the monitor.
Two weeks ago, I had what I can only describe as a punch-to-the-stomach moment (which luckily doesn’t happen very often). The AirGradient ONE, our monitor that was recognized in one of the world’s most rigorous scientific evaluations, suddenly became “Not Recommended” by WIRED magazine in their review of The Best Indoor Air Quality Monitors.
Yes, this is the same monitor that got two awards from the AirLab micro sensor challenge, one of the most rigorous sensor testing programs, beating more than 30 other well-known brands. And yes, this is also the same monitor that the University of Cambridge chose, after rigorous testing, for the largest study on classroom air quality in the world. And yes, it’s the same monitor that is loved by thousands of Home Assistant users for its easy and local integration.
Today, I want to share why we feel that this is a flawed review and the larger picture of how influential publications’ recommendations affect manufacturers as well as consumers.
As the founder of AirGradient, many of you know that transparency is how we operate. This is why we are open-source and why we openly talk about our successes as well as our failures. We work closely with scientists and communities to improve air quality.
We’re often more critical of ourselves than others are, constantly raising our own bar. So when something like this happens, it feels deeply unfair, like when a teacher gives you an F because your pencil broke during the exam, while giving As to students who didn’t even submit their work. But let’s not get ahead of ourselves; we will explain the details below.
We’re a small team building open-source monitors, competing against companies with massive marketing budgets and PR machines. We started as a volunteer project in Thailand, putting impact before profit.
What we lack in marketing power, we make up for with genuine care, transparency, and authenticity. Over 50,000 users trust our open-source approach because we believe in giving people control over their data and the ability to repair their own devices. Our monitors offer much more value at a lower price, and our competitors are probably not very happy about this.
Many companies would try to sweep this “Not Recommended” review under the rug or send lawyers. But I feel I have a responsibility here to address this head-on, because our community deserves to know the details, and hopefully, this will trigger a larger discussion around the integrity of tech journalism.
Now, at this point, I would really recommend that you read the review so that you can form your own opinion about it.
The primary reason cited for the “Not Recommended” label was a failing display on the review unit. Let me be clear: this was a legitimate hardware failure, and we take full responsibility for it. As soon as we learned about the issue, we immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators.
However, the reviewer’s logic is difficult to follow when you compare it across products:
How can a product be penalized for a failing display when another recommended product has no display? How can an indoor monitor without CO2 sensing - essential for understanding indoor air quality - be recommended over one that includes this crucial measurement?
If the author had reached out about the failing display (a single unit from any brand can fail, even those known for high quality) and noted in the final review that their first unit failed, we would understand. But failing the whole product over what we know is an isolated issue feels very unfair.
The icing on the cake is that I actually corresponded with the author about the issues and sent detailed explanations of our methodology concerns. The responses I got back were limited to brief acknowledgements like “Received. I’m on deadline for three other stories and cannot give you a timeline.”
This response encapsulates many things wrong with the current state of tech journalism: recommendations that affect livelihoods and purchasing decisions are treated as just deadlines to meet, not as professional evaluations that deserve rigorous methodology and meaningful dialogue.
I wonder how much the author actually cares about the quality and correctness of her reviews.
WIRED’s recommendations don’t just influence individual purchases; they have the power to shape entire market perceptions. When they say “Not Recommended,” small companies like ours feel a real reputational and financial impact.
Now, if this were based on a clear methodology and evaluation, I would probably be the first one who would jump into gear and improve our product (which I have done in the past, e.g. when we had issues with the calibration of our PM sensors). But getting ‘Not Recommended’ based on basically non-existent methodology leaves me frustrated. How do you argue against personal preference?
But it’s not only us who lose; ultimately, it’s the consumer.
When a major publication abandons objective methodology in favor of subjective impressions, readers get recommendations based on one person’s preferences rather than a fair and comprehensive evaluation. They miss out on products that might better serve their actual needs, whether that’s due to repairability, accuracy, connectivity or comprehensive sensor coverage.
Here’s what any credible product review should include, especially when the publication claims to identify “the best” products: a clear, documented testing methodology, criteria applied consistently across every product, and transparency about how conclusions were reached.
Without these foundations, “best of” guides become opinion pieces masquerading as authoritative recommendations, and that’s not fair to manufacturers or to consumers trying to make informed decisions.
We know that testing air quality monitors is not easy. This is why we’ve outlined proper testing methodology in our guide: How to Test an Air Quality Monitor. From a publication as influential as WIRED, can we not expect a proper evaluation?
Who is the ultimate loser? The consumer. When accuracy matters for health decisions (understanding air quality affects everything from asthma management to sleep quality), flawed methodology means people miss the products that would actually serve their needs.
To WIRED: don’t worry, we’re not hiring lawyers or reputation management firms to try burying this review. That’s not who we are. Instead, we’re doing something different—we’re amplifying it. We want people to see exactly what passes for “comprehensive” evaluation at your publication.
Here’s what we stand for: prioritizing sensor accuracy over marketing budgets, building products people can actually repair in a throwaway world, and being transparent about our failures while working to fix them. These are the principles that built our community of 50,000+ users.
Most companies would be celebrating a “Best Overall” rating and staying quiet about the methodology. But here’s the thing: even if we’d won, I’d still be writing this article. Poor methodology doesn’t just hurt the companies that get unfair ratings—it hurts consumers making health decisions based on flawed recommendations, and it hurts the entire industry when subjective preferences masquerade as expert evaluation.
This is bigger than AirGradient. When publications with millions of readers abandon rigorous standards, they’re not just affecting one company’s sales and reputation; they’re shaping market perceptions, influencing purchasing decisions that affect people’s health, and setting a precedent that opinion journalism is acceptable where technical expertise should rule.
We remain committed to what we’ve always done: building accurate, repairable, and affordable monitors through independent scientific testing and open-source transparency. We’ll keep standing by our products with comprehensive support, because that’s what our community deserves.
We’ve escalated our concerns to WIRED’s editorial leadership—not seeking special treatment or demanding apologies, but asking for the consistent, professional evaluation that consumers deserve when making decisions about their health and indoor environment. We believe in independent journalism, but independence means nothing without professional standards.
So yes, we’re embracing our “Not Recommended” rating. It’s become a badge of honor—proof that we prioritize substance over relationships, transparency over marketing polish, and community trust over media approval.
To our community: The whole AirGradient team and I appreciate your continued support through this. You’ve stood by us not because of what magazines say, but because you’ve experienced our commitment to accuracy, repairability, and transparency firsthand. That means everything.
But this experience reinforces something that goes beyond AirGradient: when influential publications abandon rigorous methodology, it creates a broken ecosystem. Manufacturers get incentivized to invest in PR relationships instead of product quality. Consumers lose access to reliable information exactly when they need it most—when making decisions about their health and indoor environment. And the companies actually doing the hard work of innovation get drowned out by those with bigger marketing budgets.
The air quality monitoring space is already confusing enough. People are trying to protect their families from pollution, manage asthma, improve sleep quality, and make informed decisions about their indoor environments. They deserve better than subjective opinion pieces dressed up as authoritative guides.
Here’s what I’m curious about: Is this the norm now? Are you seeing this same pattern across other product categories? When you’re researching purchases—whether it’s air quality monitors, smart home devices, or any technical product—what sources do you actually trust?
And specifically for situations like this: How would you want us to handle it? Should companies stay quiet when review methodology breaks down? Should we be more aggressive in calling this out? Or is transparency and open discussion the right approach?
I’m asking because this affects all of us. Every time we let poor methodology slide, we’re accepting a world where marketing budgets matter more than product quality, where personal preferences get presented as expert evaluation, and where the companies trying to do right by their communities get penalized for it.
Drop a comment below or reach out directly—I read every message, and your perspectives help shape how we navigate these situations. Because at the end of the day, we’re not just building monitors; we’re trying to build a better way of doing business, and that requires all of us pushing for higher standards.
Curious about upcoming webinars, company updates, and the latest air quality trends? Sign up for our weekly newsletter and get the inside scoop delivered straight to your inbox.