Is there an AI system with functionality at or above that of a human brain that operates on less than 100W? It's currently the most efficient model we have. You compare all of humanity's energy expenditure, but to make the comparison you need to consider the cost of replicating all that compute with AI (assuming we had an AGI at human level in all regards, or a set of AIs that, operated together, could replace all human intelligence).
So, this is rather complex because you can turn AI energy usage to nearly zero when not in use. Humans have this problem of needing to consume a large amount of resources for 18-24 years with very little useful output during that time, and have to be kept running 24/7 otherwise you lose your investment. And even then there is a lot of risk they are going to be gibbering idiots and represent a net loss of your resource expenditure.
For this I have a modern Modest Proposal: that we use young children as feedstock for biofuel generation before they become a resource sink. Not only do you save the child from a life of being a wage slave, you can now power your AI data center. I propose we call this the Matrix Efficiency Saving System (MESS).
No one will ever agree on when AI systems have equivalent functionality to a human brain. But lots of jobs consist of things a computer can now do for less than 100W.
Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.
> Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation.
Add to that the tier-n dependencies this urban lifestyle has: massive supply chains sprawling across the planet, for example the thousands upon thousands of people and goods involved in making your morning coffee happen.
Wikipedia quotes global primary energy production at 19.6 TW, or about 2400 W per person, which is obviously not even close to equally distributed. Per country it gets complicated quickly, but naively taking the total from [1] brings the US to about 9 kW per person.
And that's ignoring sources like food from agriculture, including the food we feed our food.
To be fair, AI servers also use a lot more energy than their raw power demand if we apply the same metrics. But after accounting for everything, an American and an 8xH100 server might end up in about the same ballpark.

Which is not meant as an argument for replacing Americans with AI servers, but it puts AI power demand into context.
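The per-capita arithmetic above is easy to sanity-check. A quick sketch; the world population, US consumption, and server power figures below are my own rough assumptions, not numbers from the comment or its sources:

```python
# Back-of-the-envelope check of the per-capita energy numbers.
# All figures are rough assumptions, not authoritative data.
GLOBAL_PRIMARY_POWER_W = 19.6e12   # 19.6 TW, as quoted from Wikipedia
WORLD_POPULATION = 8.1e9           # assumed ~8.1 billion people

per_capita_w = GLOBAL_PRIMARY_POWER_W / WORLD_POPULATION
print(f"Global average: {per_capita_w:.0f} W/person")   # ~2420 W

# US estimate: ~3.3 TW primary consumption over ~335M people (assumed)
US_PRIMARY_POWER_W = 3.3e12
US_POPULATION = 335e6
us_per_capita_kw = US_PRIMARY_POWER_W / US_POPULATION / 1e3
print(f"US average: {us_per_capita_kw:.1f} kW/person")  # roughly 10 kW

# For comparison, an 8xH100 server peaks at roughly 10 kW at the
# wall (assumed figure) -- hence the "same ballpark" claim above.
```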
Obviously we don't have AGI, so we can't compare many tasks. But on tasks where AI does perform at comparable levels (certain subsets of writing, greenfield coding, and art) it performs fairly well. It uses more power but is also much faster, and that about cancels out. There are plenty of studies that try to put numbers on the exact tradeoff, usually focused more on CO2. Plenty find AI better by some absurd degree (800 times more efficient at 3D modelling, 130 to 1500 times more efficient at writing, or 300 to 3000 times more efficient at illustrating [1]). The one I'd trust the most is [2], where GPT-4 was 5-19 times less CO2-efficient than humans at solving coding challenges.
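The "more power but faster, and that about cancels out" tradeoff reduces to energy per task: power draw times time taken. A minimal sketch with purely illustrative numbers of my own (a 100 W human, an assumed 10 kW server, a 100x speedup), not figures from the studies cited:

```python
# Energy per task = power draw (W) x time taken (h).
# All numbers below are illustrative assumptions, not study data.
def energy_wh(power_w: float, hours: float) -> float:
    return power_w * hours

human_wh = energy_wh(100, 1.0)      # human at ~100 W taking an hour
ai_wh = energy_wh(10_000, 0.01)     # assumed 10 kW server taking ~36 s

# A 100x higher power draw combined with a 100x speedup leaves
# energy per task identical -- the "cancels out" case.
print(human_wh, ai_wh)
```

Shift either assumption (slower model, bigger cluster) and the ratio moves accordingly, which is why the studies land anywhere from "absurdly better" to "5-19 times worse".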
Eh, I feel like if we know there is a vulnerable population (children), and we know that social media will cause harm (increased suicides, depression, etc.), and we know that ultimately it puts people in a place where they can't reasonably choose a better path (collective action, power imbalance, poor reasoning, no long-term perspective, social peer pressure, etc.), and where the parents are unable to reasonably control the behavior too, then a ban is warranted.
By the same token, I honestly think we should ban more things that we know have no upside, only downsides, and that people generally partake in because of mental / physical shortcomings. In other words, if we know a reasonable person would not partake in a behavior except through manipulation and weakness, then I feel protecting that person is a kindness.
I myself have suffered from addictions that I can't seem to easily "choose to stop" even though I constantly wish I could. I really wish I hadn't been exposed to these things when I was younger and thought it was just fun. If I could, I would pay to go back and prevent my younger self from ever trying it -- because I had no way to know. Looking back, I am a bit astonished none of the adults had that kind of concern. Sure, a few people said "that stuff isn't good", but ultimately that lost to all the other factors (constant propaganda, ads, peer pressure, convenience, taste, addictive qualities, cost). It was never a "free choice" because there was a huge information and power imbalance at play, and the "responsible adults" who could help did nothing.
Maybe one other thing to consider -- I think it's usually best to have a killer game idea that seems fun, then design around it (and select the proper engine), rather than simply "I want to build an engine with some capabilities, then I'll figure out what games to make later."
Have seen this with a lot of software "frameworks" (web, game, graphics, etc.). Nothing wrong with writing an amazing tech demo just for the hell of it, but then when it comes time to do "real world" tasks, the frameworks are often in search of a fitting problem.
I really respect his work on the engine; the math and the engineering are excellent. But I'm not sure it makes for an interesting game above and beyond what we already have.
I'm not sure adding additional smoothness to existing voxel engines would have much effect, unless you have something specific like moving a ball on a smooth surface (though the SDF I'm seeing here wouldn't support that either).
As for "create a hole, then close it behind you", this is about as game-changing as opening/closing doors (or tunnels with doors). I'm open to suggestions, but honestly this is amazing tech; I just don't think it will create very fun games.
I feel kind of the same about the demos I see of spherical or non-Euclidean geometry games. It's very interesting, and impressive, but it seems like an engine in search of a game.
It also seems like it might be important to determine what share of _new_ development is being created for investment. The share of existing homes being owned isn't as important.
There's also a second-order effect: the price of property might go down if there weren't investors with tons of money to spend on it.
All housing is investment due to the way it's owned, regardless of who owns it. For most people, it's by far their biggest investment, resulting in extreme political activity.
We must confront that fact, or negate it, for example by taxing away the investment opportunity.
Investors buy housing to rent it out to others, that's the investment. People who own their own home implicitly pay themselves rent, by choosing to live in what they own versus renting instead.
Getting non-resident owners out of housing only means that renters have been banned from an area.
Isn't that circular reasoning? Charging rent means it's an investment, but also just living in a home is "paying yourself". Is there any way to live on a property that isn't an investment (I suppose just staying there while the government owns it)?
No; as long as housing is bought, sold, and priced by the market, it's always an investment. Homeowners talk freely about how it's an investment and choose their own home for its investment properties.
It's not circular; it just is, under our ownership scheme. Adding a 100% land value tax does remove the investment characteristic, though, as it removes the speculative aspect of land and in fact drives the price of land to $0.
Undifferentiated to the layman, perhaps, just as I'm sure programmers are to the uninitiated. I'm guessing if you dug into their past cases, their degrees, grades, etc., you would see differentiation. If you were facing life in prison or the death penalty, you would probably want to know the specifics, not just "lawyers are interchangeable, give me a script to read in court -- it's something we should have automated already."
Perhaps a more apt approach would be to expose more information about lawyers to the public, so people can make a better decision. There are sites that do this (showing average win rates, average payouts, response times, years in practice, etc.).
But maybe people don't know about these sites, or there are other reasons people select based on advertising rather than cold logical analysis.
The kind of data you refer to makes for bad metrics, and if widely used it would create incentives that are at odds with established legal ethics.
You know how study after study has shown that when selling your house, real estate agents always push you to accept early offers regardless of price, but when selling their own house, they will keep it on the market longer in order to get more favorable offers? The real estate system has a built-in conflict of interest. The legal system has a lot of enforceable ethical rules to prevent such things. But the metrics you want to use subvert that.
I’m not going to take your case - the potential payout is too low. I’m not going to take your case - it will take too long to get a resolution. Let’s not examine this line of thinking - it might blow the case wide open. You want the other side to change their behavior? No, it’s better if you demand money. Etc…
Ultimately this move might have just been to increase visibility for an otherwise niche awards show (which it has clearly done). Also, by eliminating the obvious best indie game of the year, it opens up the field a bit to more "normal" contenders. Expedition 33 is basically a AAA-quality game; it's only considered "indie" because a small, unknown team made it.
Everyone in this thread keeps treating human learning and art the same as clearly automated statistical processes with massive tech backing.
Analogy: the common area had grass for grazing which local animals could freely use. Therefore, it's no problem that megacorp has come along and created a massive machine which cuts down all the trees and grass which they then sell to local farmers. After all, those resources were free, the end product is the same, and their machine is "grazing" just like the animals. Clearly animals graze, and their new "gazelle 3000" should have the same rights to the common grazing area -- regardless of what happens to the other animals.
I'm not sure why you are replying to me. I made no such treatment of them.
The analogy isn't really helpful either. It's trivially obvious that they are different things without the analogy, and the details of how they are different are far too complex for it to help with.
This sentiment is constantly echoed on this site -- "just look at past times where tech removed jobs, this is no different". But the difference now is that we will soon have super-humans in terms of intelligence, dexterity (robots), and cost (cheaper, no healthcare, etc.).
I put the onus on the yea-sayers: can you name a job that a human can do that this new AI / robot cannot (or will not soon be able to) do? Otherwise, I think it's time to stop drawing false equivalences with agriculture, Luddites, etc. Those were "narrow" machines, incapable of coding, writing a symphony, or working in a factory. In the next decade we're talking about building a better human.
I think a better example is to draw a parallel to horses. There is nothing left for them to do; we keep a few around for sport and entertainment, as a novelty. At one time they were indispensable, but there's no rule that any organism (including humans) has infinite economically viable uses. At some point, everything worth doing (economically) might be automated to the point that human labor no longer makes sense (and hence we have high unemployment). There is no cosmic law written that "if jobs are replaced by tech, new jobs shall fill the space!" Just look at areas in the Rust Belt where literally nothing replaced the lost jobs -- there is just rampant unemployment, black-market dealing / drugs, and despair.
That's a very different argument about a hypothetical future problem that may or may not ever actually materialize. (I'd argue given the current trajectory of AI it probably won't for the foreseeable future.)
But yes, if we develop artificial superintelligence to the level where humans become literally useless (e.g. we don't just automate 90% of everything, but 100%, and there are actually no tasks left in the world that humans can do better or cheaper than computers), then assuming humanity survives we'll need a different economic system for distributing the nearly unlimited resources resulting from that. Probably in that situation the best thing to do would be to ask the AI to design our new economic system, since it would obviously do a better job at that than any human.
Couldn't it get a lot worse much sooner than that? Even if only a handful of industries collapse, it's not clear we have more jobs for 100M displaced workers. I just haven't seen any proposals of what that future looks like that seem good, but I do hear "don't worry, more jobs will appear". But can anyone say where 50-100M workers will go? All the answers I think of or see seem like things that can easily be automated.
Depending on how much gets automated and how quickly, yes that could be a temporary, short-term problem. I personally think the transition will happen slowly enough that it'll barely be noticeable, but if I'm wrong and we somehow automate 50M people out of a job in the space of a few years, that will indeed lead to an oversupply in the labor market, temporarily resulting in high unemployment and low wages for workers with the affected skillsets (including unskilled workers).
Where displaced workers will go though is not something that can or should be planned out in a centralized fashion, because the best answer to that question is different for each individual and depends on their skills, preferences, and life situation, balanced against the needs and desires of other consumers in the unimaginably complex web that is the global economy.
Despite not knowing exactly where everyone will end up though, I think I can still be confident that they will find something, because the incentives to do so are very strong, both on a personal level (needing to find work) and the entrepreneurial level (finding useful things for displaced workers to do could make you very rich).
As another commenter put it a while back, unemployed workers are an unused resource, and the economy is very good at finding uses for unused resources.
An unintended side effect might be that Congress would use its power to make bonds a more profitable avenue of investment, creating a very reliable, high-interest retirement vehicle for average people (rather than the roulette wheel of the stock market). Imagine if USG-backed 15% treasuries were a thing.
Where do you think the money for those 15% interest payments would come from?
There are historical examples of what happens when lawmakers mandate high amounts of money creation. It doesn’t end with the people being better off. It usually destroys the economy.