Looks like they're using French nuclear to help black start the grid [1]. Solar and wind systems could have caused the instability issues that led to the Spanish nuclear plants requiring isolation. It will be interesting to see the final investigation, but my bet is that "induced atmospheric vibration" is a PR deflection from a badly designed and operated system [2].
[1] https://transparency.entsoe.eu/transmission-domain/physicalF...
[2] I'm not necessarily blaming the engineers, but the politicians who force those engineers to put square pegs in round holes. For example, I can imagine politicians making a short-term decision to skimp on energy storage while increasing renewable penetration. Surely renewable systems must be less reliable without storage, given the lack of rotational inertia?
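On the rotational-inertia point, here is a rough sketch of the aggregated swing equation that is used for first-order frequency-stability reasoning. Every number in it is a made-up illustration, not data from the Iberian event:

```python
# Rough swing-equation sketch of why rotational inertia matters: the same
# sudden loss of generation produces a faster frequency drop when less
# inertia is online. All numbers here are illustrative assumptions, not
# figures from the actual incident.

def rocof_hz_per_s(power_lost_mw: float, inertia_h_s: float,
                   system_rating_mva: float, f0_hz: float = 50.0) -> float:
    """Initial rate of change of frequency from the aggregated swing
    equation: df/dt = f0 * dP / (2 * H * S)."""
    return f0_hz * power_lost_mw / (2.0 * inertia_h_s * system_rating_mva)

loss_mw = 2000.0       # hypothetical sudden generation loss
system_mva = 40000.0   # hypothetical total rated capacity online

for h in (6.0, 3.0, 1.0):  # high inertia -> low inertia (more inverter-based)
    rocof = rocof_hz_per_s(loss_mw, h, system_mva)
    print(f"H = {h:.0f} s -> initial RoCoF ~ {rocof:.2f} Hz/s")
```

The point is only that the same MW loss moves frequency faster when less inertia (spinning mass, or storage providing synthetic inertia) is online, leaving less time for protection and controls to react.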
According to [1], 3/4 of the power came from renewables during the restart. Most of today's inverters are black-start ready; that is, they don't require external energy to start up, unlike nuclear, which is heavy, costly, and slow. German readers may find fefe interesting here [2]; he debunks many of these myths.
It’s not really a problem, you just adjust the size of your assumed storm. We have lots of climate models and data to adjust predicted sizes.
Climate change effects are already being included when calculating wind and wave loading in many codes.
The real issue is that engineering codes rely on frequentist methods that make it hard to represent uncertainty, so it is often unclear what the real safety factors are. This is being addressed with probabilistic engineering techniques and, in the future, more sophisticated causal inference.
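As a concrete (and entirely hypothetical) illustration of the probabilistic approach: instead of a single code-prescribed safety factor, you model load and resistance as distributions and estimate a probability of failure directly. The distributions and numbers below are assumptions for illustration only.

```python
# Minimal Monte Carlo reliability sketch (illustrative distributions only):
# sample uncertain load and resistance, count how often load exceeds
# resistance, and report an explicit probability of failure instead of an
# implicit margin hidden inside deterministic safety factors.
import random

random.seed(0)

def estimate_failure_probability(n: int = 200_000) -> float:
    failures = 0
    for _ in range(n):
        load = random.lognormvariate(3.0, 0.4)        # hypothetical load (kN)
        resistance = random.normalvariate(40.0, 5.0)  # hypothetical capacity (kN)
        if load > resistance:
            failures += 1
    return failures / n

print(f"Estimated probability of failure: {estimate_failure_probability():.1%}")
```

The output is an explicit failure probability you can argue about, rather than a margin buried in prescribed factors.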
Climate models aren't really specific enough to predict a new 500-year storm in a specific location.
Those thresholds and definitions are based on the data record, and are already encoded into regulation and a hundred years of construction.
What we see instead is regulators simply increasing the requirements from an X-year storm to a 2X-year storm and leaving the definitions alone. This is what I have seen with the California building code.
You're on the right track, but your framing is still off with respect to how the engineering design process works.
Assuming that designing for a 500-yr storm has anything to do with predicting what a future 500-yr storm (or 25-yr or 100-yr) will look like is dead wrong. Irrelevant.
The 'definitions' are not left alone; they are updated as time goes on, but with historical data, and they are not extrapolated or predicted out into the future.
Engineers (PEs) design by taking known criteria and then applying probabilities and factors. They do not predict criteria. It's a subtle but important distinction.
A 500-yr event is, by definition, an event with a 1/500 probability of occurring in any given year (see the sketch below).
And it's up to the designing engineer to choose and state whatever the assumptions are that go into that.
But a levee designed this year will use this year's current 'storm definition', just as it uses this year's building code. Not a future one.
(Sometimes the storm/ event definitions seem stale because things like flood maps might only get updated every few decades.)
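To make the 1/500-per-year definition concrete, here is a quick sketch of how that annual probability compounds over a structure's design life (the design-life values are just examples):

```python
# A "500-year storm" means a 1/500 chance in any single year, not a
# prediction of when it will happen. Over a design life the chance of
# seeing at least one such event adds up (design lives below are examples).
def prob_at_least_one(return_period_years: float, design_life_years: int) -> float:
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

for life in (30, 50, 100):
    p = prob_at_least_one(500, life)
    print(f"{life}-year design life: {p:.1%} chance of at least one 500-yr event")
```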
> It’s not really a problem, you just adjust the size of your assumed storm.
Adjusting what you call the storm doesn't make the wall bigger: that's the problem. (It also makes their statement untrue in the present day, regardless of whether it was true when the wall was designed.)
It's the engineer defining his design criteria (the 'design storm'), based on and benchmarked to local historical data, including recurrence intervals.
The wall doesn't need to be bigger if next year's data changes. It was designed for a (this year's) 100-yr or 500-yr storm, not a guess at a hypothetical future one.
No, because it's exactly the opposite. Kessler syndrome results from satellites smashing into each other at high relative velocities. The entire point of formation flying is zeroing out relative velocity. That's how you stay in formation with something.
I was imagining them flying close by, so any failure would have a higher chance of causing a cascade. I couldn’t find any info on how close they are, but I imagine it’s too far for this to be an issue.
The article says they're going to be 144 meters apart. A collision between things that are in basically the exact same orbit is also not going to be much of a risk, as the collision would be at a low relative velocity. Collisions between satellites are a Kessler syndrome risk because, with differing orbits, the collision velocities can be measured in km/s, which can spread debris into a wide range of orbits, thus causing the heightened risk of cascading failures.
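For a rough sense of scale, a back-of-the-envelope sketch under a circular-orbit approximation (the 600 km altitude and crossing angles are example values, not the actual mission orbit): two orbits at the same altitude crossing at angle theta meet at roughly 2·v·sin(theta/2), whereas co-orbital formation flyers have near-zero relative speed.

```python
# Back-of-the-envelope relative speeds: two circular orbits at the same
# altitude crossing at angle theta meet at roughly 2 * v * sin(theta / 2),
# versus essentially zero for spacecraft flying in formation on one orbit.
# The 600 km altitude and angles are example values, not the actual mission.
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_speed(altitude_m: float) -> float:
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

def crossing_speed(altitude_m: float, angle_deg: float) -> float:
    v = circular_speed(altitude_m)
    return 2.0 * v * math.sin(math.radians(angle_deg) / 2.0)

v = circular_speed(600e3)
print(f"orbital speed at 600 km: {v / 1000:.1f} km/s")
for angle_deg in (0.1, 10.0, 90.0):
    dv = crossing_speed(600e3, angle_deg)
    print(f"crossing at {angle_deg:5.1f} deg -> relative speed ~ {dv / 1000:.2f} km/s")
```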
The goal is to have them a bit more than 100m apart. During observations they aim to maintain millimeter precision.
I'm pretty sure the risk of collision has been analysed to death. I would expect they've analysed what would happen if one or both devices suddenly stop listening to commands, and that even then there's essentially no risk.
It still doesn't matter, according to physics. There's a reason why two NASCAR cars can be going 150+ miles an hour and yet "rubbin's racin'," while a head-on collision at 60 miles an hour will kill you. The two race cars going 151 and 149 mph rub at ~2 mph relative velocity, while the head-on collision with a drunk driver occurs at 120 mph relative velocity.
And flying formation, be it in space or in the air, by definition involves getting as close to zero relative velocity as possible . . . or else you aren't in formation.
I grew up in an isolated location with a big drinking culture. As a youngster I wasn’t aware that there were places to move to without this problem. You could choose to avoid the drinking culture, but that was basically social suicide in both University and in Industry.
Escrow was pretty popular on Raidforums; high-reputation forum users would not only escrow transactions directly but also do things like pseudo-publicly validate or invalidate claims by checking a seller's data or tools against their own collections (presumably one of the benefits of being a professional hacker escrow and seeing lots of data).
I disagree; computers are just a tool. Learning basic programming in primary school helps build a foundation for more sophisticated knowledge in high school. It can also help contextualise mathematics.
The problem is abuse of the tool, as you describe. In my primary school, we were taught how to use Microsoft Office by using a workbook the computer teacher licensed to the school. We had some of the best computer resources amongst local schools at the time, yet learnt almost nothing about computers. It took me until university to write a line of code.
This was the big shortcoming of computers in my education too. The kids famously knew more than the adults, but all we really knew how to do better than them was use a two-button mouse correctly and browse a GUI menu system. There was no one around to actually introduce you to how it all worked. I hope schools are better about that today, but it's hard when the sort of expertise you need to quickly iterate off of generational knowledge can pull quadruple or more a teacher's salary somewhere willing to pay that much instead.
Where I live the only kids who program are those who are lucky enough to have a teacher interested in robots. They are taught to program robots as a fun activity rather than a part of the curriculum.
I understand your sentiment about the opportunity costs for those with computing knowledge, but it wouldn’t be too hard to require teachers to do a basic computing unit at university. In the same way, the best mathematicians are not usually school teachers, yet “regular” teachers seem to perform sufficiently.
I suffer from this too. I can spend all night reading x, y, or z out of my own interest, but turn it into an assignment and it's like I cannot avoid pushing it to the very last minute possible. Of course I thought majoring in my interest would help, but that only made me see the interest as another academic exercise and drove me off it to an extent, and didn't in fact cure me of my academic block.
Your intuition that scientists update their views is wrong; see Kuhn's The Structure of Scientific Revolutions for more examples. Scientists generally stick to preconceived notions long after there is data to refute them.
E.g. heliocentrism, germ theory, the health harms of cigarettes, the supposed benefits of lobotomies.
Existential threat means things can’t get worse (you just stop existing).
Ignoring the potential of geoengineering implies you are sure that we have a better solution which will work. I hope we don’t have to use geoengineering, but I am certain the status quo of hoping renewables and batteries in the first world will save us is not going to work.
Threat is not certainty. If somebody is out there to kill you, you are under an existential threat until they are caught. That's not a reason to start playing Russian roulette, saying "I'm already in danger, it cannot get worse than that!"
>The same logic applies to shutting down oil and gas production: if we are not certain, then why risk the downsides?
You are conflating the cost of not extracting a fraction of a fraction of current production capacity with the risk of ending human existence by changing systems we do not have anything close to a full understanding of, nor precise control over.
Degrees of risk matter; in fact, they are the whole point of the GP's logic.