
There is a big observational campaign happening right now, called EUREC4A, that is trying to get to the heart of this (main website: http://eurec4a.eu and information sheet: http://eurec4a.eu/fileadmin/user_upload/eurec4a/2019/Press/E...). The key uncertainty between different climate models is whether shallow clouds in the tropics (which in general cool the Earth by reflecting sunlight) will disappear, which would make our planet even warmer - some models predict exactly this (!)

We are hundreds of individuals from more than 30 national and international partner institutions, with planes, drones, ships, ground stations and autonomous buoys, trying to understand how and why these clouds form and how that links to their environment: dust from Africa, ocean currents, the jet stream aloft, etc. The positions of the different platforms can be seen live on the main website, and the data we are producing is appearing there too.

If you have any questions about what we're doing I'd be very happy to help or point you towards the right people :)



I suppose my immediate question is - is there any way for a non-professional with a pair of eyes and a smartphone to contribute in a crowd-sourcy way?


Yeah - get out the vote.


Well, thank you for that helpful suggestion. I was interested in knowing whether the research needed more data points and if so, whether a 'citizen scientist' could contribute.


Seems like a recipe for bad data and undiscoverable flaws.


Not really; you'd probably take the Folding@Home approach† to work distribution: assign each small piece as a job to N non-trustworthy "workers" (humans, computers), and then trust the results if all N results concur. If they don't, fail the job back into the work queue to be tried again later with different workers. (And maybe give the workers a reputation score, kicking out the ones who participate in enough failed jobs.)

† does this general strategy have a name?
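Something like this sketch, maybe (illustrative Python; the names and thresholds are made up, not any real project's API):

    import random
    from collections import defaultdict, deque

    QUORUM = 3        # independent workers per job (illustrative value)
    MAX_STRIKES = 5   # drop workers implicated in this many failed jobs

    def run_with_quorum(jobs, workers, compute):
        """compute(worker, job) -> result. Workers are untrusted."""
        queue = deque(jobs)
        strikes = defaultdict(int)
        accepted = {}
        while queue and len(workers) >= QUORUM:
            job = queue.popleft()
            chosen = random.sample(workers, QUORUM)
            results = [compute(w, job) for w in chosen]
            if all(r == results[0] for r in results):
                accepted[job] = results[0]   # all replicas agree: trust it
            else:
                for w in chosen:             # disagreement: a strike for everyone
                    strikes[w] += 1
                workers[:] = [w for w in workers if strikes[w] < MAX_STRIKES]
                queue.append(job)            # retry later with fresh workers
        return accepted

Real systems usually compare results fuzzily rather than bit-for-bit (floating point differs across hardware), but the shape is the same.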


Yes, it's called the BOINC[1] platform. And as another comment mentioned, there is already a climate prediction project[2] based on that platform.

[1] https://boinc.berkeley.edu/

[2] https://www.cpdn.org


There's already a related climate project:

https://www.climateprediction.net/

I used to work for them a few years ago.



Not if done correctly. See this as an excellent example: https://www.zooniverse.org/projects/zooniverse/snapshot-sere...


iNaturalist.org does this for finding plants and animals, but to be honest I don't know how they deal with bad data. They even have schoolkids submitting data during class. So, kind of adversarial data collectors!


Why would you want a horde of “non-professional[s] with a pair of eyes and a smartphone” to contribute if literally hundreds of professionals with high-end equipment, meteorological observation satellites, and oodles of computational firepower are already on the task?

The main problem has never been a lack of science, the problem has always been (and alas, continues to be) a lack of political wherewithal to actually rapidly implement the drastic steps necessary to avoid the worst outcomes.

Maybe we’re finally getting what we collectively deserve?


>Why would you want a horde of “non-professional[s] with a pair of eyes and a smartphone” to contribute if literally hundreds of professionals with high-end equipment, meteorological observation satellites, and oodles of computational firepower are already on the task?

Because the Earth is huge, data collection is expensive, and funds are limited. There is a lot of [literal] ground to cover and it's perfectly feasible to augment professional work with amateur data, now that pressure, temperature, and location sensors are relatively cheap and ubiquitous.


Satellites view half the globe at once. But you know that just as well as I do.


> Satellites view half the globe at once. But you know that just as well as I do.

They don't, because of perspective. Even satellites in geosynchronous orbit don't see half the globe: they sit above the equator and can never see both poles at once. Satellites closer in see an even smaller part of the globe. A satellite would have to be infinitely far away to see an entire hemisphere at once.

A GPS satellite at 20,000 km of altitude sees only about 38% of the Earth.

https://www.wtamu.edu/~cbaird/sq/2013/05/10/since-one-satell...
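The geometry is easy to check yourself: from altitude h, a satellite sees the spherical cap bounded by where its line of sight is tangent to the Earth, which covers a fraction (1 - R/(R+h))/2 of the surface. A quick sanity check in Python:

    R = 6371.0  # mean Earth radius, km

    def visible_fraction(h_km):
        # fraction of the sphere inside the horizon circle seen from altitude h
        return 0.5 * (1.0 - R / (R + h_km))

    print(visible_fraction(20000))  # GPS orbit:        ~0.38
    print(visible_fraction(35786))  # geosynchronous:   ~0.42, still < 50%
    print(visible_fraction(850))    # low polar orbit:  ~0.06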


Not true. How much is seen depends on the focal length and lens.


And distance. A satellite in polar orbit at 850 km above the surface will only see a small part of the globe at once, even with a wide lens.


I suppose that’s true, abstractly speaking. But does it make sense to assume that the world’s meteorological community would put satellites into geosynchronous orbit that are not suited for the purpose of observing the Earth’s atmosphere from geosynchronous orbit? I think we can safely discount that assumption.


You're assuming that "observing the Earth's atmosphere" is a yes/no question when it's actually a spectrum. I'm not familiar with the matter, but I can imagine a scenario like "We need to record a very particular part of the EM spectrum and all those existing satellites happen not to have suitable equipment for that."


I think what precedes the problem you mention (and one of the main points in the article) is the problem of reducing uncertainty in the models. Maybe there’s a chance for more granular data to help with that?


The problem isn’t the granularity of observations, it’s the granularity of the grid with which the simulations are run. As I understand it (by analogy with my own field of macroeconomics) the current limit is the fact that the grid is coarser than the average size of a cloud, and that therefore cloud cover needs to be approximated. That is the real source of uncertainty.
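To put rough numbers on that (an illustrative back-of-the-envelope, not figures from any particular model):

    EARTH_SURFACE_KM2 = 5.1e8   # ~510 million km^2

    def horizontal_columns(dx_km):
        # grid columns needed to tile the surface at horizontal spacing dx
        return EARTH_SURFACE_KM2 / dx_km ** 2

    print(horizontal_columns(100))  # ~100 km climate grid:  ~5e4 columns
    print(horizontal_columns(1))    # ~1 km cloud-resolving: ~5e8 columns

A shallow cumulus cloud is on the order of a kilometre across, so a ~100 km grid can’t resolve it and its bulk effect has to be approximated (“parameterized”). And because the stable timestep shrinks with the grid spacing (the CFL condition), each halving of the spacing costs roughly 8x more compute.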


> The problem isn’t the granularity of observations

Perhaps you should tell the people who are running the "big observational campaign happening right now" that they can pack up and go home.


Interesting. So is the grid limited due to computational complexity, like the mesh in finite element analysis?


Precisely. But don’t take my word for it. If you’d like a light introduction, consider watching Sabine Hossenfelder’s interview of Tim Palmer (a theoretical physicist interviewing a leading climatologist is a thing of beauty in its own right).


I don’t know why you are being down-voted but IMHO you certainly don’t deserve to be.


I don’t know, since downvotes aren’t accompanied by comments, but my hunch is they may be conflating scientific consensus with model uncertainty. I.e., if I bring up model uncertainty, then I’m casting doubt on the scientific consensus (which isn’t the case). It’s more about our confidence in the models, which is the point of the article.

Nate Silver’s “The Signal and the Noise” gives a much better description than I could.


How do these models address overfitting? I'm sure this is well handled, but from just reading the article it almost sounds like some details were added to make the models fit recent changes better.


You don't really optimise parameters for large-scale climate models; you put in your best guess at the initial values and then wait weeks to months, depending on your goal. With function evaluations that costly, I don't really see how you'd have time to overfit.

The big worry is bias: a lot of assumptions about how the physics can be approximated are needed to make these models, but there isn't time to test every combination of them at full scale. So you do small-scale tests and hope nothing unexpected happens in the full run.

For scale, a performance baseline paper from 2018 reports achieving 0.23 simulated years per wall-clock day using 4,900 GPU cores [1].

[1]: https://www.geosci-model-dev.net/11/1665/2018/gmd-11-1665-20...
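To make that concrete (simple arithmetic on the number quoted above):

    sypd = 0.23              # simulated years per wall-clock day, from [1]
    sim_years = 100          # e.g. a century-scale scenario run
    print(sim_years / sypd)  # ~435 wall-clock days: over a year per run

At that price per function evaluation, a hyperparameter sweep in the usual machine-learning sense simply isn't on the table.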


Can we get the source for the models?


http://www.cesm.ucar.edu/models/ccsm4.0/ seems to offer what you are looking for (first Google hit for “NCAR climate model source code”) but without a cluster of supercomputing nodes, what are you planning to do with it?


ESCM is a smaller one, and it seems there are transport matrix models that could be small enough for a single workstation?


Study it. Read the source. Maybe adapt it to a system that isn't a supercomputer. We'll see.


As mentioned elsewhere in the comments, at least some of the code for climateprediction.net appears to be on GitHub: https://github.com/CPDN-git.



