I would be very careful with this kind of evaluation. There are lots of sensors that perform well vs reference instruments but not in the field.

If you want real evaluations, SCAQMD (the South Coast Air Quality Management District, in charge of air quality for Southern California) evaluates commercially available, low-cost sensors against reference instruments, both in the field and in the lab.

See http://www.aqmd.gov/aq-spec/evaluations

You can see what I said is true from the table: lots of sensors that correlate very well with reference instruments in the lab, but suck horribly in the field.

I would think you would be better off submitting the sensor in question to them, and letting them put it through its paces.

(They publish within a month of finishing testing, and testing takes ~8 weeks)



IMO, the single biggest problem is that even five-figure reference instruments disagree considerably in the 0-20 ug/m3 range, and if you look at the figures of the SCAQMD tests, you won't see much more than noise in the scatter plots in that range, with increasing consistency at larger concentrations. Some of this is due to mismatches in response times and synchronization, but a lot of it is due to different disturbances, noise processes, and varying sensitivities to different particle size distributions in different sensor types.

With inexpensive optical scattering sensors, the situation is even worse. While it is easy enough to "count" individual 2.5um particles, the scattering equations work out to a reduction in per-particle scattering amplitude on the order of 10^6 going down to 0.3um (when measured with red or infrared light), and different particle compositions scatter differently. On top of that, the number of particles per unit mass concentration increases significantly, making the signal processing a lot harder once one can't just threshold individual "blips" in the signal.
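Back-of-the-envelope sketch of those two scalings (assuming the crude Rayleigh-regime approximation that per-particle scattered intensity goes as d^6, and that particle count per unit mass goes as 1/d^3; this overstates things toward the Mie regime at 2.5um, so treat it as illustration, not a calibration model):

```python
# Rough scaling laws for optical PM sensing (illustrative only).
# Assumptions: Rayleigh-regime scattering ~ d^6 per particle; for a fixed
# mass concentration, particle count ~ 1/d^3.

def scattering_ratio(d_large_um: float, d_small_um: float) -> float:
    """Approximate per-particle scattering intensity ratio, ~ (d_large/d_small)^6."""
    return (d_large_um / d_small_um) ** 6

def count_ratio_per_mass(d_large_um: float, d_small_um: float) -> float:
    """Particles per unit mass scale as ~ 1/d^3, so the ratio is (d_large/d_small)^3."""
    return (d_large_um / d_small_um) ** 3

if __name__ == "__main__":
    # 2.5um vs 0.3um: per-particle signal drops ~3e5x (order 10^6),
    # while the particle count per ug rises ~580x.
    print(f"per-particle signal drop: {scattering_ratio(2.5, 0.3):.2e}")
    print(f"count increase per unit mass: {count_ratio_per_mass(2.5, 0.3):.0f}x")
```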

Whether the sensors are particle counting or nephelometric in principle, the basic trade-off is that to see smaller particles, they need higher amplification factors, which in turn increase thermal noise, amplify stray light, and make the device more sensitive to EM interference. Many signal processing pipelines do simplistic noise filtering, throwing out much of the baby with the bathwater.

On top of these fundamental difficulties, optical scattering sensors are quite sensitive to temperature variation and to aging of the optoelectronic components, which is why field tests under varied conditions often degrade their accuracy even further.

Long story short, it is very easy to build a sensor that correlates qualitatively with actual PM concentration, as long as the PM concentration is sufficiently high. But the health effects have no threshold, and every added bit of pollution counts, starting at zero. Unfortunately, commodity PM sensors are quite bad at quantifying these low, yet meaningful, ambient pollutant levels, which is probably why IKEA chose their traffic-light thresholds the way they did: not because of how they relate to health, but because this is what they could do with a $12 device.


+1

I don't have a problem with the OP's analysis of the IKEA sensor. It seems generally reasonable. My problem is rather with their implication that the DIY sensor being promoted is of higher quality.

Because accuracy is very much a "finished product" implementation issue, not just a property of the sensor itself, that implication is off-base, particularly in direct comparison to any finished commercial product. The accuracy of the DIY version is going to vary -- a lot -- depending on who builds it.

I wonder if AQMD would evaluate a "reference build" of the DIY sensor, seeing as there is an enclosure as part of the "reference" implementation. At bare minimum, the AirGradient site should have comparisons to highly ranked (by AQMD) sensors, such as PurpleAir. I'm actually quite surprised there are no such comparisons on the site.


Thanks for the link; that’s a great resource!

To summarize their current results: PurpleAir (especially version 2) is the only one that doesn’t suck at measuring PM2.5. None of the consumer-targeted gaseous sensors they tested work.

Did I miss something? Is some other third party running more comprehensive tests?


Assuming you care about PM1.0/PM2.5 (which are what's truly harmful), and want R^2 > 0.9:

Atmotube Pro, Elitech (for PM2.5), and PurpleAir.

The field evaluations have good expositions of the underlying data in slide form. For sure the PurpleAir is the best bet that I can see. Note that they are pragmatic as well - if you read the field evaluations, the AQMD folks generally consider R^2 > 0.8 very good. Which probably makes sense when comparing a $50-$250 device to a several-thousand-dollar reference instrument.
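For anyone unfamiliar with the metric: the R^2 the AQMD evaluations report is just the squared Pearson correlation between the sensor's time series and the reference instrument's. A minimal sketch (the readings below are made up for illustration, not AQMD data):

```python
# Compute R^2 (squared Pearson correlation) between a low-cost sensor and a
# reference instrument -- the metric the AQMD evaluations report.
from math import sqrt

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return (cov / sqrt(vx * vy)) ** 2

# Hypothetical hourly PM2.5 readings in ug/m3 (made up for this example).
reference = [5.0, 8.2, 12.1, 20.5, 35.0, 18.3, 9.9]
sensor    = [4.1, 7.0, 13.5, 19.0, 38.2, 16.8, 8.5]

print(f"R^2 = {r_squared(sensor, reference):.3f}")
```

Note that R^2 only captures linear association -- a sensor can have R^2 near 1 while systematically undercounting by half, which is exactly the failure mode described below.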

Note that one serious issue is that a bunch of the sensors aren't just uncorrelated, they are often dramatically undercounting. It would be one thing if they were dramatically overcounting, and told you air was horrible when it wasn't. But they are actually telling you air is fine when it isn't even close to fine.

I actually went down the same path as OP about 6 months ago. I had used a Dylos meter in my old woodshop, was building a new one, and wanted to see what the best thing to do was. After a bunch of looking, I found these folks. I am not aware of others doing this breadth of testing.

(It turns out the Dylos meter is either accurate or overcounts, depending on temperature/humidity. This is actually acceptable, since it won't tell me things are good when they are bad, but it also turned out the PurpleAir was consistently good for the same price :P)


So the two PurpleAir devices use PMS1003 and PMS5003 sensors, respectively. These are available off the shelf as well. I suppose the salient question is: does PurpleAir calibrate / select these themselves or add anything to them (e.g. a filter), or can you just use a PMS5003 for 30 bucks and get similar results?
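For anyone who wants to try the bare module: the PMS5003 streams 32-byte frames over 9600-baud UART. A parsing sketch based on my reading of the Plantower datasheet framing (0x42 0x4D header, big-endian 16-bit fields, trailing checksum = sum of all preceding bytes) -- double-check the field layout against the datasheet before trusting it:

```python
import struct

def parse_pms5003_frame(frame: bytes) -> dict:
    """Parse one 32-byte PMS5003 frame into atmospheric PM readings (ug/m3).

    Assumed layout (from the Plantower datasheet): 2-byte header, 16-bit
    frame length, 3x PM values at CF=1, 3x atmospheric PM values, 6x
    particle counts per 0.1L, reserved word, 16-bit checksum.
    """
    if len(frame) != 32 or frame[0:2] != b"\x42\x4d":
        raise ValueError("not a PMS5003 frame")
    # Everything after the header is unsigned 16-bit big-endian: 15 words.
    fields = struct.unpack(">15H", frame[2:32])
    # Checksum is the byte-sum of the first 30 bytes of the frame.
    if fields[-1] != sum(frame[0:30]) & 0xFFFF:
        raise ValueError("bad checksum")
    return {
        "pm1_0": fields[4],  # atmospheric-environment PM1.0
        "pm2_5": fields[5],  # atmospheric-environment PM2.5
        "pm10":  fields[6],  # atmospheric-environment PM10
    }
```

In practice you'd read from the serial port (e.g. pyserial at 9600 baud), resynchronize on the 0x42 0x4D header, then slice off 32 bytes per frame.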


AFAIK you can use the $30 module and be within acceptable tolerances, but you really should have at least a temperature and humidity sensor as well, because as those change, the particle readings will shift.
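The usual way to use those extra readings is a linear correction fitted against a collocated reference, with RH as a covariate. A sketch with entirely hypothetical coefficients (in practice you'd fit a, b, c by least squares against reference data for your specific sensor):

```python
# Illustrative humidity-aware correction for a raw optical PM2.5 reading.
# The coefficients below are HYPOTHETICAL placeholders, not a published
# calibration -- fit your own against a collocated reference instrument.

def corrected_pm25(raw_pm25: float, rh_percent: float,
                   a: float = 0.52, b: float = -0.085, c: float = 5.7) -> float:
    """Linear correction: pm = a*raw + b*RH + c (placeholder coefficients)."""
    return a * raw_pm25 + b * rh_percent + c
```

The negative RH term reflects the typical failure mode: hygroscopic particles swell in humid air, so uncorrected optical sensors over-read as humidity climbs.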

I've noticed that even when exposed to pure carbon dioxide, the sensors I have will increase their counts of CO, VOC, and HCHO, which is almost certainly a false reading.

I only care about CO2 for my devices, so my two $30 sensor based devices are fine.


The Sensirion does pretty OK for the price and form factor.



