The closest computational equivalent we have today to writing that sentence involves executing a model with 1.5 billion parameters (and I couldn't find an estimate of the energy cost of that).
That's only about 100x smaller than contemporary transistor features, so realistically there's an upper limit of about 10,000x to the energy benefit of biocomputing structures, assuming energy typically scales with size squared.
It's many orders of magnitude larger than 40 watts * 5 seconds. :)
I have a running bet with some friends that one of the following is true:
(1) The brain is somehow leveraging quantum computing to achieve polynomial or square root acceleration on combinatorial search and optimization problems.
(2) P=NP and there exist polynomial time classical algorithms for these problems.
(3) The naturalistic hypothesis fails and intelligence is somehow "supernatural" and does things that cannot be described or modeled within the confines of physical space-time.
I cannot think of any alternative that can possibly explain how the brain does what it does on ~40 watts. Everything we have learned to date argues that intelligence and cognition involve a whole lot of massive combinatorial problems that can't possibly be solved classically on so little power.
I'm going to throw out (3) because it doesn't make any sense (to me), and we haven't found any evidence that this is true.
(2) seems possible, but highly unlikely.
(1) seems the most probable of the three options, and although I believe we have found evidence that biological systems exploit quantum effects in some instances, there doesn't seem to be any indication that brains (human or otherwise) use quantum effects for computation.
The thing that you seem to be discounting is that the bulk of the work has already been pre-computed. Our brains can do what they do in 5 seconds * 40 watts because they have been "designed" to do so via billions of years of evolution. In ML terms, the training stage has already happened by the time your brain starts thinking; it is simply doing inference at that point.
I agree that #1 is by far the most likely. #3 would mean we (meaning natural science) are wrong about the nature of the universe. I included it mostly to get across the mystery we have here, namely that what brains do cognitively on so little power appears to be "impossible" by classical CS metrics.
I think you are incorrect about precomputation though. The human genome is not very large. It's smaller than Windows 10 or Wikipedia. It's also not substantially different from that of a mouse or a chimpanzee. Most of what it encodes is highly conserved metabolic stuff. All the richness of human cognition is realized through a vanishingly small subset of that already small genetic code.
Nearly all learning and cognition happens after birth, meaning it's done by the brain (unless #3) using absurdly less energy than any known method of computation.
I think this is reflective of a massive blind spot.
A program to print "hello world" isn't very large, but it doesn't compile itself or produce its own operating system or produce the hardware to run the OS to run the compiler...or produce the companies to produce the hardware and software...or produce the economy to produce the companies... Clearly there is information in the compiled program that is not in the source code or the language spec.
That may be the case, but I don't think it solves the power mystery. It may be a simple algorithm, but it does an awful lot of NP-hard/NP-complete things on very little power. Among these are absurdly fast learning and fuzzy associative search.
> It's many orders of magnitude larger than 40 watts * 5 seconds. :)
Only if you also want to carry a fully preemptive running operating system with a POSIX layer, netfilter so you don't get hacked, Retpoline mitigations, OpenSSL to negotiate TLS connections, an SSH daemon so you have access, Docker, Electron, etc.
We have different architectures that can perform computation a million times more efficiently than general-purpose computers. Of course, they lose on other axes (like precision).
What's 54398456905 * 23423645745? Your 40 W brain can't compute that in a minute, yet a 0.01 W calculator can in a millisecond.
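For what it's worth, exact multiplication at this size really is trivial for digital hardware; a quick Python sketch (Python ints are arbitrary-precision, so there's no overflow or rounding to worry about):

```python
# Exact integer multiplication is essentially free for silicon:
# even in interpreted Python this takes well under a microsecond.
a = 54398456905
b = 23423645745
product = a * b
print(product)  # an exact 22-digit result, no rounding
```

The point stands either way: this is precisely the kind of narrow, explicit operation computers were built for.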
We can build less accurate computers and analog computers. Neither of these even begin to approach what brains can do. A self-driving car's computer takes hundreds of watts to run, uses reduced precision and custom silicon wherever possible, and does not begin to approach the navigational ability of a mouse or bird whose brain consumes less than one watt of power.
The human brain didn't evolve to perform consciously explicit and exact calculations on huge numbers, but our navigational and positional awareness abilities do far more impressive things with far more data much faster than this. A monstrous amount of effective but subconscious number crunching is involved in being aware of where your body is in space using nothing more than vision and sensorimotor feedback, taking apart auditory input (including FFT-like transforms), etc.
I really think CS people suffer from Dunning-Kruger when they hand-wave about the impressiveness of biological systems. Study some actual biology and neuroscience. What biological systems do as a normal part of metabolism and cognition is as awesome and mind-blowing as the vast energies, times, and distances found in astronomy. Computers are specialized devices that perform impressive feats of specialized computation, but they do not even approach what biological systems do in terms of total data throughput per unit energy, learning ability, or associative and versatile memory, to name just a few.
Edit: computers seem so impressive to us because we built them specifically to do the things we didn't evolve to do very well, but I have little doubt that if there were some kind of evolutionary forcing function selecting us for conscious explicit number crunching ability we would not need computers and wouldn't have built them.
> A self-driving car's computer takes hundreds of watts to run, uses reduced precision and custom silicon wherever possible, and does not begin to approach the navigational ability of a mouse or bird whose brain consumes less than one watt of power.
I would not trust the brain of a mouse or a bird to drive me in a car. Also the self-driving car computers which take hundreds of watts to run do not take advantage of custom silicon to the greatest possible extent, because the relevant algorithms are evolving rapidly. There is probably at least an order of magnitude or two of power efficiency that can be gained with current systems if the algorithms were truly baked into the chips.
I wasn't comparing performance at a specific task but performance at tasks of equal or greater difficulty.
Mouse and bird brains have evolved to operate mouse and bird bodies, not cars, and their learning ability isn't as powerful as a primate or a human so I doubt they could learn to drive a car as well as us or our specialized self-drive computers.
But... what they do manage in terms of controlling mouse and bird bodies is vastly more sophisticated and impressive than driving a car. A mouse runs around on four independently controlled legs and can tackle a vast array of terrains while dodging or chasing moving objects. Birds can navigate in 3D space while flying with articulated flapping wings with complex control surfaces operated by dozens of muscles.
Driving a car is ridiculously easy compared to anything like that. If mouse and bird brains had evolved to control cars I'd absolutely trust them to drive me around at least as much if not more than I trust a Tesla's autopilot. Driving is a simpler problem than operating a mouse body.
Don't get me wrong: our self-drive AIs are amazing engineering achievements. I'm just pointing out the impressive performance of tiny brains using fractions of a watt of power at much more difficult tasks.
The thing that blows my mind and makes me hypothesize quantum computing or even P=NP is the power requirements of those brains. It's "impossible." I'm not suggesting that we can't figure it out, just that we haven't yet and that it's probably going to take more or different approaches than we think it will take.
Immune systems were once considered so "impossible" that it led several researchers to abandon science in frustration, but we eventually got a good understanding of what was going on (and it's impressive!). Understanding immune systems had to wait for molecular genetics and modern evolutionary learning theory among other things. I suspect that really replicating brain-like performance will have to wait for something as far beyond our current state of the art as those were in the 1920s.
Parent was making the point that we have no computer with an architecture similar to the human brain's (billions of tiny compute elements). Artificial neural networks try to simulate that, but they simulate billions of parameters on thousands of cores (CPU/GPU).
Of course it's highly inefficient, just as it's highly inefficient for the brain to exactly multiply two numbers.
So you also suffer from Dunning-Kruger: you imagine that the only thing computers can be is von Neumann machines.
It isn't even necessary here. The "propaganda" the article opens with can be rephrased as this: some people in the Swedish government wanted something controversial, and social media (i.e. their citizens talking to each other) reflected the usual mishmash of true, half-true and untrue things people often believe about more or less anything controversial.
However, because this time the controversial thing had to do with NATO, it's obviously all the Russians' fault. They have no evidence, and nobody is willing to put their name to such a claim (it's all "analysts"), but no matter, it's definitely the Russians.
Besides, whose word do we have that these beliefs were false? The defence minister!
From the WSJ we learn this: "Swedish lawmakers formally backed an agreement Wednesday that allows the North Atlantic Treaty Organization to more easily operate on Swedish territory during training or in the event of a conflict or other crisis."
That doesn't sound so very different from the supposedly false claim that "NATO could attack Russia from Sweden without government approval". thelocal.se says:
"Originally signed in September 2014, the HNSA with Nato would allow the alliance to transport helicopters, aircraft and ships across Swedish territory, but only upon Sweden’s invitation.
As the agreement involves changes to Swedish law in order to give Nato personnel privileges and immunities with regards to areas like tax and custom rules, it required parliamentary ratification."
It's only a tiny step from that description (NATO soldiers have immunity from some areas of law) to the one apparently circulating (NATO soldiers have immunity in some other areas). Does this really require the invocation of Russian propaganda, or is the Facebook telephone game sufficient?
After the Crimea takeover, the Swedish internet was filled with comments in very, very bad Swedish defending Putin. There were some articles that interviewed the people in St Petersburg employed to write these comments.
The main difference with the Putin supporters today is that their Swedish is excellent...
(And for a fact -- even in the ex-communist party, few Swedes support starting wars in Europe again, 1930s style. There is obviously lots of Putin propaganda online.)
44 climate models all fighting to out-panic one another, and not a single one guessing low enough to predict the actual values for 2012 (when it seems the dataset in question ended)
... and a seemingly more reputable one showing roughly the same thing:
Attacking a person's 'trustworthiness' instead of dealing with their arguments and evidence is pretty much the dictionary definition of the ad hominem diversion. It doesn't interest me to learn that he kicks cats or dresses in lingerie and calls himself Marjorie at the weekends. If you believe that he is wrong, then show where and how he is in error.
"Are you aware that Claude Pepper is known all over Washington as a shameless extrovert [pervert]? Not only that, but this man is reliably reported to practice nepotism [necrophilia] with his sister-in-law and he has a sister who was once a thespian [lesbian] in wicked New York. Worst of all, it is an established fact that Mr. Pepper, before his marriage, habitually practiced celibacy [???]."
No, because that was actually irrelevant. In this context, Soon's record within the scope of climate research is what's being scrutinized, not his personal life.
If Soon's opponents were attacking his love of Dune or his tendency to eat falafel, there might be an analogue here.
Irrelevant. The technique you used was the same as Smathers, and your intent was the same - to damage someone's reputation by insinuations and smears. It is low behavior.
Smathers' accusations related to issues that had no bearing on Pepper's merit as a political candidate or his ability to carry out his official duties. My 'insinuations' (actually, again, statements of fact) are related to Soon's behaviour within the context of climate science. If you cannot grasp this, you are not qualified to engage in debate. If you do not wish to for whatever reason, it makes it pretty clear that you are not interested in good-faith discussion of this issue and are not worth anyone's time in that regard.
A damaging and false insinuation is a damaging and false insinuation, whatever ground it purports to cover. Smathers chose smears that would do the maximum damage to Pepper as a politician, you did the same for Soon as a scientist.
You can't look at someone's financial interest to know whether what they said is true or not. Similarly for any other attribute about them that you don't like.
There are many great thinkers who were gay. We don't invalidate their work because of that.
At best, you need to keep that in mind and take what they said with a grain of salt. Funding gives you a clue about which areas to be more critical about, but just because they have an interest one way or the other doesn't invalidate what they said.
If someone has been found to be a nutjob, you may casually dismiss what they said as a time saving device or because there is low probability what they say has any value to you. But even a nutjob is sometimes right.
Really? You think it doesn't matter that the primary author on a paper about climate science doesn't even have an undergraduate-level education in the subject? That the second one credited has a history of accepting large sums of money to write papers endorsing spurious claims DIRECTLY RELATING to climate change?
A lot of the IPCC lead authors are paid by NGOs (like Greenpeace) with a vested interest in climate alarmism. Do we discount their work too?
Climate science covers a lot of different areas, everything from economics, through hard chemistry and fluid dynamics, to pure statistics. No one person can be an expert on all of this, and no one qualification will make anyone competent in all of them. Experts from related disciplines are perfectly qualified to speak on "their" areas of climate science.
Are you going to tell me that every person who has ever written a paper on computer science needs to have a degree in it? While I grant this guy's qualifications might be questionable, making a blanket statement that someone must be specifically educated in a subject to write a good paper on it is specious.
How often does it happen that a layman manages to get published in a well-regarded journal? Out of all the papers that laypeople publish anywhere, how many survive scrutiny from experts in the paper's problem domain? And out of those, how many that actively seek to overturn a paradigm succeed?
Based on this metric alone, it is highly unlikely that Monckton is qualified to discuss climate change, and as it happens, his work tends to be published in fairly obscure journals whose standards of review are questionable, and when it passes the desks of career climatologists, the result is generally unfavourable to him.
There is a difference between "layman", "well-known expert in their field", "so-and-so with a degree in $field", and "well-known expert in their field with a masters in $field".
If you read my reply, I don't question the guy's qualifications; I was objecting to the blanket statement of "you must have a degree in $field to be an expert". Many papers in technology are written by people without degrees in that field.
I did read your reply; I'm saying that in the aggregate, a credible paper is unlikely to be written by someone without formal schooling in the relevant field.
Further, technology is applied science; it is not unlikely that one can become an expert through informal and professional practice. Your previous comment was about computer science, which is not necessarily the same thing, and which is closer to mathematics than anything else. Climatology is concerned primarily with physics and chemistry, but also geology and, in some cases, paleontology. Most of these fields share little in common with pure maths or engineering. The comparison, then, is not totally valid.
The basic training you require to be a competent scientist is hard to come by outside of academia. The actual work of science tends to be done in a laboratory. It's highly unlikely, then, that someone who has put in the years (often decades) of work in academia would be merely on par with a hobbyist, whatever that may look like in this context.
Neither you nor the person you are responding to probably has the requisite qualifications to actually tell …
Judging something like this without relying on outside signals seems rather impossible and pointless if you are not, you know, an actual expert. No matter how much you want to believe you can be one about everything …
It wasn't libel the last time you brought it up and it isn't libel this time. Soon failed to disclose non-trivial amounts of funding that he received from parties who have a vested interest in deriding climate science. Given how often his work has failed to pass muster when scrutinized by climate scientists and skeptics, it is hard to fathom how any of this can amount to simple incompetence.
The article you linked to is almost comical in its petty malevolence, well beyond the point of self-satire. This kind of character assassination, however reprehensible, is ultimately irrelevant. If you believe Dr. Wei Hock Soon is wrong, then show where and how he is mistaken.
Climate scientists have been doing that for almost 25 years at this point, and Soon's response has pretty much been to complain that he's being bullied and that science is being politicised. I find that to be actually comical, almost as much as the presumption that an intelligent and intellectually honest person could do this for as long as Soon has. And that his association with political and industrial think tanks is a non-sequitur in this regard.
The first graph on that second link is a bit confusing and seems pretty disingenuous. It has the "observations" region stretching to 2050. The rest of the article seems much more factual and interesting, but why start with something so misleading if your supposed goal is to debunk misleading projections?
Spencer is pretty out there. He's gone on record to say that warming proponents are advancing an argument that will lead to more deaths than the NSDAP's policies did, and is a signatory to the Evangelical Declaration on Climate Change, which suggests that this is largely a matter of faith for him...
He does also maintain one of the satellite records, which does show global warming over the period 1960-2000 (not so much the last 10 years because of the global warming hiatus).
Did you even read the article? Look at the University of York dataset, which clearly shows the 1960-2000 warming followed by the 2000-2010 hiatus. Note the York dataset is strictly observationally independent of the UAH dataset.
Hans von Storch, professor at the Meteorological Institute of the University of Hamburg, discussed this issue in a recent interview with Der Spiegel. He remarked that fewer than 2% of model runs reproduced the 'pause'.
SPIEGEL: Just since the turn of the millennium, humanity has emitted another 400 billion metric tons of CO2 into the atmosphere, yet temperatures haven't risen in nearly 15 years. What can explain this?
Storch: So far, no one has been able to provide a compelling answer to why climate change seems to be taking a break. We're facing a puzzle. Recent CO2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn't happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) -- a value very close to zero. This is a serious scientific problem that the Intergovernmental Panel on Climate Change (IPCC) will have to confront when it presents its next Assessment Report late next year.
SPIEGEL: Do the computer models with which physicists simulate the future climate ever show the sort of long standstill in temperature change that we're observing right now?
Storch: Yes, but only extremely rarely. At my institute, we analyzed how often such a 15-year stagnation in global warming occurred in the simulations. The answer was: in under 2 percent of all the times we ran the simulation. In other words, over 98 percent of forecasts show CO2 emissions as high as we have had in recent years leading to more of a temperature increase.
Simple: this is exactly what you would expect on a high plateau. Think about it in terms of climbing a mountain with a fairly flat top. For a long time you're moving continuously up-slope, then when you get to the plateau you wander around randomly and frequently find outcroppings that are higher than anything you've encountered before. That doesn't mean you're still climbing, and if we were still climbing at the rate seen from 1980-2000 the "global mean temperature" (which is a thermodynamically meaningless arithmetic average) would be even higher than what we see today.
People who continually beat on extrema (like Denialists who claim that cold weather on the East Coast last winter is somehow proof that AGW isn't happening) are adding noise to the argument, not signal. The physically meaningful number is the heat content of the Earth/ocean system, and there's quite a bit of evidence it is rising, and that a significant portion of that rise is due to human activity.
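The plateau intuition is easy to simulate. A toy sketch (made-up numbers, not real climate data): a random walk with zero drift still keeps setting new all-time highs, just less and less often, whereas a walk with genuine upward drift sets them constantly.

```python
import random

random.seed(42)

def count_new_records(steps, drift=0.0):
    """Run a random walk and count how often it sets a new all-time high."""
    x, best, records = 0.0, 0.0, 0
    for _ in range(steps):
        x += drift + random.gauss(0, 1)
        if x > best:
            best, records = x, records + 1
    return records

# Even on a flat "plateau" (zero drift), new record highs keep appearing:
print(count_new_records(10_000, drift=0.0))
# With a real upward trend, records are far more frequent:
print(count_new_records(10_000, drift=0.5))
```

So "we keep seeing record highs" by itself doesn't distinguish a plateau from a climb; the frequency and size of the records does.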
This makes me think of when financial journalists/broadcasters constantly report that the SPX or the DJIA or the FTSE or whatever are hitting 'all-time highs', and it's a really useless piece of information. Investors want to know how much it went up by on the day (and what the trend of the last few days/months has been); the fact that it poked through to a new high level is not important.
Apparently, the past models did not, because they did not model the long term interaction of the oceans with the atmosphere, and the current "hiatus" is mostly about the atmosphere temperatures, while most of the warming is currently happening in the oceans.
The more sophisticated current models do match the recent observations if you feed them the past data:
Agreed. I was definitely expecting a reference, and I was confused that the table of contents listed the authors & their positions before mentioning the title of the chapter.
I lived in China for 6 months and saw plenty of cops with guns, and saw undercover cops roughly throw around a street vendor who had gotten too close to the Forbidden City for their liking. Your view here seems to be willfully obscured.
Having your opinion formed by the media is a necessity of living in a world where you can't get first-hand information about everything, i.e. a constraint imposed by reality. The difference is that in the West you can choose what media you want to pay attention to, and the media itself is free to report what it wants.
It isn't that complex. The idea behind the metaphor isn't to communicate what an uncovered call is; it's to illuminate what technical debt really means. It's a philosophical analysis arguing that "debt" isn't an accurate metaphor for what technical debt really is. The fact that this way of thinking about technical debt "isn't widely known" is exactly the point.

Most folks think about technical debt as a thing that must eventually be repaid regardless of the future of the software, while presenting it as an uncovered call is much more accurate to what is actually happening: you're trading a benefit now for the future potential that it will cost you nothing, or that it could be so expensive it costs you everything. The decision to incur technical debt is actually a bet that the benefit now is more valuable than the future cost.

Debt is completely different; debt implies that it must, eventually, one day be repaid with interest. However, not all technical debt must actually be repaid. For example, if you build feature x in a sloppy but expeditious way, you've incurred technical debt, but if feature x is eventually deprecated then you "won", because your overall cost of the feature is less than it would have been had you not incurred the technical debt. But if feature x becomes the center of your product's world and everything else is blocked by the debt, then you lose greatly in comparison to the cost it would have taken to build it debt-free in the first place. Thus the risk is theoretically unlimited, not just limited to the amount of technical debt incurred, because other things are now being delayed by that debt.

So the goal, in my opinion, is to manage risk: incur technical debt minimally and only when there's a high probability that the debt will never have to be repaid. If you know with a high degree of certainty that the debt will need to be repaid, it's cheaper not to incur it in the first place.
I think the uncovered call metaphor is a brilliant way of framing this issue. It doesn't matter whether we know finance or not; it's this way of looking at the problem that provides us with a good framework for making development decisions. If we are aware of the unlimited downside and manage that risk, we can be more aware of when it makes sense to incur technical debt and when it doesn't. In my opinion it, just like options trading, is all about risk management, and certainly not about traditional debt management.
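To make the asymmetry concrete, here's a toy sketch in Python (all numbers invented for illustration): a loan's total cost is capped and known up front, while an uncovered call writer's loss grows without bound as the underlying moves against them.

```python
def loan_cost(principal, rate, years):
    """Conventional debt: total cost is bounded and known up front."""
    return principal * (1 + rate) ** years

def short_call_loss(strike, premium, price_at_expiry):
    """Uncovered (naked) call writer's loss per share at expiry.
    Positive = loss; grows without bound as price_at_expiry rises."""
    return max(price_at_expiry - strike, 0) - premium

# Debt: borrow 100 at 5% for 3 years -> owe ~115.76, no surprises.
print(loan_cost(100, 0.05, 3))

# Short call struck at 100, sold for a premium of 5:
print(short_call_loss(100, 5, 90))   # stock stays low: -5, i.e. you keep the premium
print(short_call_loss(100, 5, 300))  # stock moons: 195 loss per share, with no ceiling
```

That difference in payoff shape (bounded vs. unbounded downside) is exactly the point of preferring the option metaphor over the debt one.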
I totally understand this pain point. Before I began reading about options I was clueless and mystified, but I ultimately realized options are akin to trading stocks (buying low, selling high), just with an added time restriction and the ability to make money in any market.
Options are tricky to understand, but essentially, think of one as just a piece of paper: a contract between you and the person buying it from you. It's a buy/sell market that exists on these "papers". Options trading essentially revolves around buying these papers, which usually have an expiry date between weeks and years away and specify at what price you can buy the underlying good before that date. The profit is made from the valuation of this paper going up and down based on the underlying value of what the paper represents. The paper can be a contract about beef, corn, oil, or the share price of Apple. Just like in the stock market, you want to pay a cheap price for a paper and sell it when it goes high, but the beauty of options is that since you don't actually trade the good itself, you can create a dizzying array of strategies and combinations to make money in any type of situation, with the condition that you have to be right about what market we are in (trending up, down, sideways, volatile, etc.).
Writing an option is like selling a piece of paper that says the person buying it will have the ability to buy the stock at the price written on the paper. If that price is low and the stock price goes up, well, you now have to buy the number of shares written on the paper at the now-high market price and hand them over at the low written price, causing a great deal of loss to you (all you got in return was the cash from selling the paper earlier).
If the stock price falls instead, you could buy the shares at the low, low price using the money you made from selling the paper and hand them over; but the other guy most likely won't ask you to do this, and will just write off what he paid you for a piece of paper that is now "expired", or worthless.
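The two outcomes described above can be sketched as a little payoff function (hypothetical numbers; this ignores fees and assignment mechanics):

```python
def writer_pnl(strike, premium_per_share, spot_at_expiry, shares=100):
    """P&L for the person who wrote (sold) one call contract."""
    if spot_at_expiry > strike:
        # Buyer exercises: writer buys shares at the market price (spot)
        # and must deliver them at the lower strike price.
        return (strike - spot_at_expiry) * shares + premium_per_share * shares
    # Option expires worthless: writer simply keeps the premium.
    return premium_per_share * shares

# Wrote a call struck at 50 for a $2/share premium, on a 100-share contract:
print(writer_pnl(50, 2, 80))  # stock jumped to 80: (50-80)*100 + 200 = -2800
print(writer_pnl(50, 2, 40))  # stock fell to 40: option expires, keep +200
```

The upside is fixed at the premium, while the downside scales with however far the stock runs, which is why writing uncovered calls is considered risky.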
Permission to "view and fork" is not the same as permission to copy and paste into a proprietary software product (i.e. this may not be legit for use by "enterprise" developers)