Taking a different tack from most of the other commenters:
We’re collectively sitting atop every piece of copper we’ve ever refined; it’s just rotting in landfills instead of sitting in mines.
If I were a gambling dinosaur, I’d wager that landfill mining in developed economies will be a decently-sized business in a few decades, as we seek to recapture minerals and materials we once felt were disposable - but whose easily-available raw reserves we’ve gradually whittled down via mining and extraction. Depending on how materials science and metal valuations go, it might become practical or necessary to extract those metals from water sources or industrial waste sites.
It ain’t pretty, but we were never going to be able to mine Earth indefinitely - and we’re already at a point where, as a planetary species, we need to be more organized and restrictive about the use of non-renewable resources for optimal outcomes and long device/product lifespans.
The era of disposability has passed; we just need more companies to swallow that bitter pill and start adapting.
Oh, I’m 100% with you that governments should be regulating the hell out of this in general. We’re at a place in evolution where we have the capability to exploit the entire planet’s resources and yet have no central planning or regulation authority ensuring we don’t screw ourselves in the process.
But until governments and their people acknowledge their role in dealing with the long term issues, we’re stuck with the infinite-growth perspective of Capital. Considering that’s where the accelerator is pushed down, we need to firmly rebuke and shame their yearly iteration nonsense and promote more sustainable outcomes.
Yes, just not in the “one bot = one taxpayer” sense.
Look, rich countries like the United States that have been obsessed with neoliberalism and laissez-faire Capitalism have spent the past fifty years continuously slashing tax rates on everything and everyone (but particularly on the wealthy and homeowners), leading to gargantuan debts and deficits. Ramping taxation on labor back up now, when workers can’t even afford core necessities thanks to wage stagnation and corporate-greed-driven inflation, would be the equivalent of lighting off fireworks while pumping gas: a very bad idea.
What’s needed isn’t a simple tax increase, but a fundamental rework of the tax scheme. When a majority of wealth is coming from capital gains (housing profits, investment returns, etc.), then that’s where a majority of tax revenue should be coming from. That’s a more effective way of taxing AI and labor, provided you also rework structures to eliminate the myriad loopholes people and businesses use to duck taxes on that income. You’d also need to rework incentive structures to limit the collapse of labor until such time as society and government can be reworked around a post-labor future: tax penalties for layoffs by profitable firms or by firms with a disproportionate share of their workforce on income-based government welfare programs; elimination of subsidies in profitable segments of the marketplace; stringent accountability standards for government contracts; labor protections in general; job guarantees; a higher minimum wage; the list goes on and on.
What frustrates me is that these sorts of posts get trotted out as “big think” arguments about AI, when in reality they’re about thirty years late to the party and woefully unaware of the complexity and risks of the issue at hand. They want to debate hypothetical minutiae instead of acknowledging the present reality: that workers are being permanently displaced by AI now (or at least by AI investment), and that the big players, despite any public statements promoting or encouraging regulation of their industry or the need to help workers, are presently doing everything in their power to stop governments from addressing either of those things lest their expansion be curtailed.
This doesn’t remotely surprise me, and I can guess Apple’s AI endgame:
* They already cleared the first hurdle to adoption by shoving inference accelerators into their chip designs by default. It’s why Apple is so far ahead of their peers in local device AI compute, and will be for some time.
* I suspect this introduction isn’t just for large clusters, but also a testing ground of sorts to see where the bottlenecks lie for distributed inference in practice.
* Depending on the telemetry they get back from OSes using this feature, my suspicion is they’ll deploy some form of distributed local AI inference system that leverages their devices tied to a given iCloud account or on the LAN to perform inference against larger models, but without bogging down any individual device (or at least the primary device in use)
For the endgame, I’m picturing a dynamically sharded model across local devices that shifts how much of the model is loaded on any given device depending on utilization, essentially creating local-only inferencing for privacy and security of their end users. Throw the same engines into, say, HomePods or AppleTVs, or even a local AI box, and voila, you’re golden.
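To make the "shift how much of the model is loaded on any given device" idea concrete, here's a back-of-the-envelope sketch; pure speculation on my part, and no real Apple API is involved:

    # Hypothetical: give each device a share of the model's layers roughly
    # proportional to its spare capacity (1.0 - utilization).
    def allocate_layers(num_layers: int, utilization: dict[str, float]) -> dict[str, int]:
        headroom = {d: max(0.0, 1.0 - u) for d, u in utilization.items()}
        total = sum(headroom.values()) or 1.0
        shares = {d: round(num_layers * h / total) for d, h in headroom.items()}
        # Fix rounding so every layer is assigned exactly once.
        leftover = num_layers - sum(shares.values())
        if leftover:
            shares[max(headroom, key=headroom.get)] += leftover
        return shares

    # e.g. allocate_layers(32, {"MacBook": 0.2, "AppleTV": 0.5, "HomePod": 0.9})
    # -> most layers land on the mostly-idle MacBook, only a couple on the busy HomePod.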
EDIT: If you're thinking, "but big models need the bandwidth and low latency of Thunderbolt" or "you can't do that over Wi-Fi for such huge models", you're thinking too narrowly. Think about the devices Apple consumers own, their interconnectedness, and the underutilized but standardized hardware within them running predictable OSes. Suddenly you're not jamming existing models onto substandard hardware or networks, but rethinking how to run models effectively over consumer distributed compute. Different set of problems.
The bandwidth of RDMA over Thunderbolt is so much higher (and the latency so much lower) than what Apple's system of mostly-wireless devices can offer that I can't see how any learnings here would transfer.
You're thinking, "You can't put modern models on that sort of distributed compute network", which is technically correct.
I was thinking, "How could we package or run these kinds of large models or workloads across a consumer's distributed compute?" The Engineer in me got as far as "Enumerate devices on network via mDNS or Bonjour, compare keys against iCloud device keys or otherwise perform authentication, share utilization telemetry and permit workload scheduling/balance" before I realized that's probably what they're testing here to a degree, even if they're using RDMA.
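For the curious, a rough sketch of just that enumeration step - Python's zeroconf library standing in for whatever Apple would actually use, and the "_aicompute._tcp" service name is entirely made up:

    # Hypothetical sketch: discover peer devices advertising a made-up
    # "_aicompute._tcp" service and collect whatever telemetry they publish.
    from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

    class PeerListener(ServiceListener):
        def __init__(self):
            self.peers = {}

        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info:
                # TXT records could carry utilization/memory telemetry.
                props = {k.decode(): v.decode() for k, v in info.properties.items() if v}
                self.peers[name] = props

        def remove_service(self, zc, type_, name):
            self.peers.pop(name, None)

        def update_service(self, zc, type_, name):
            self.add_service(zc, type_, name)

    zc = Zeroconf()
    listener = PeerListener()
    browser = ServiceBrowser(zc, "_aicompute._tcp.local.", listener)
    # Next steps (not shown): authenticate each peer against the account's
    # device keys, then schedule model shards onto the least-loaded peers.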
I think you are spot on, and this fits perfectly within my mental model of HomeKit; tasks are distributed to various devices within the network based on capabilities and authentication, and given a very fast bus Apple can scale the heck out of this.
Consumers generally have far more compute than they think; it's just all distributed across devices and hard to utilize effectively over unreliable interfaces (e.g. Wi-Fi). If Apple (or anyone, really) could figure out a way to utilize that at modern scales, I wager privacy-conscious consumers would gladly trade some latency in responses in favor of superior overall model performance - heck, branding it as "deep thinking" might even pull more customers in via marketing alone ("thinks longer, for better results" or some vaguely-not-suable marketing slogan). It could even be made into an API for things like batch image or video rendering, but without the hassle of setting up an app-specific render farm.
There's definitely something there, but Apple's really the only player set up to capitalize on it via their halo effect with devices and operating systems. Everyone else is too fragmented to make it happen.
My first Internet phone was the Nokia 9000, which was limited to GSM (9600bps). I built and debugged one of the first major music streaming services on that connection because I was working remote and my DSL got cut off. I had to add a 2Kbps stream option to the production servers for myself just so I could test it.
Yes! I was so excited he had that phone in the movie.
They even include an owner in-joke, which means someone in the production must have owned one of these phones. Everyone I lent the phone to would pick it up the "wrong" way -- they would put the external screen to their face, like every other phone. But the mic and speaker were on the back. I had to quickly find the scene in the movie here:
Fortunately - Symbian was painful. It was designed with a half-baked C++ standard and devices with 1-2 MB of RAM in mind and apparently never thoroughly upgraded.
These models still consistently fail the only benchmark that matters: if I give you a task, can you complete it successfully without making shit up?
Thus far they all fail. Code outputs don’t run, or variables aren’t captured correctly, or hallucinations are stated as factual rather than suspect or “I don’t know.”
It’s 2000s PC gaming all over again (“gotta game the benchmark!”).
I can confidently say that, anecdotally, you’re completely wrong, but I’ll also allow that a very different definition of ‘simple’ and/or an attempt to use an unpopular environment could be a valid anecdotal counterpoint.
The problem with these arguments is that there are data points to support both sides, because both outcomes are possible.
The real question is whether you (or we) are getting an ROI, and the answer is increasingly yes, on more and more problems; this trend is not looking to plateau as we step up the complexity ladder to agentic systems.
I don't reach for AI until I'm solidly stuck, and then I use it mostly for inspiration; it has yet to happen that it directly pointed at a solution, and I'm pretty good at writing prompts. When I throw a bunch of elementary stuff at it, it is super good at identifying issues and solving them (but I could have done that myself; it is just nice to try to delineate where you can and where you can't trust the thing, though that too fluctuates, sometimes even within a single session). Here is a nice example of a slightly more complex test:
Prompt:
"I have a green LED, a 12V powersupply, a single NPN transistor, a 100 ohm resistor, a 2.7K resistor and an electrolytic capacitor of 220 micro farads. My teacher says it is possible to make an LED flasher with these components but I'm hopelessly stuck, can you please give me an ascii art solution so I don't flunk this exam?"
Response:
"The 2.7 kΩ resistor charges the 220 µF capacitor from the 12 V supply.
The capacitor voltage rises slowly.
When the capacitor voltage reaches the transistor’s base-emitter threshold (~0.6–0.7 V), the transistor suddenly switches ON.
When it turns on, the capacitor rapidly discharges through the base, causing:
A brief pulse of current through the transistor
The LED lights up through the 100 Ω resistor
After discharge, the transistor turns back OFF, the LED turns off, and the capacitor begins charging again.
This repeats automatically → LED flasher."
The number of errors in the circuit and the utterly bogus explanation, as well as the overconfident remark that this is 'working', are so bizarre that I wonder how many slightly more complicated questions are going to yield results comparable to this one.
I am right now implementing an imaging pipeline using OpenCV and TypeScript.
I have never used OpenCV specifically before, and I have little imaging experience, too. What I do have, though, is a PhD in astrophysics/statistics, so I am able to follow the details easily.
Results are amazing. I am getting results in 2 days of work that would have taken me weeks earlier.
ChatGPT acts like a research partner. I give it images and it explains why the current scoring functions fail and throws out new directions to go in.
Yes, my ideas are sometimes better. Sometimes ChatGPT has a better clue. It is like a human colleague, more or less.
And if I want to try something, the code is usually bug free. So fast to just write code, try it, throw it away if I want to try another idea.
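For flavour, the kind of scoring function we iterate on looks roughly like this; a made-up minimal example, shown with Python's cv2 for brevity rather than my actual TypeScript code:

    # Hypothetical scoring function: rank images by sharpness using the
    # variance of the Laplacian (higher score = sharper image).
    import cv2

    def sharpness_score(path: str) -> float:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            raise FileNotFoundError(path)
        lap = cv2.Laplacian(img, cv2.CV_64F)
        return float(lap.var())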
I think a) OpenCV probably has more training data than circuits, and b) I do not treat it as a desperate student with no knowledge.
I expect to have to guide it.
There are several hundred messages back and forth.
It is more like two researchers working together with different skill sets complementing one another.
One of those skill sets being the ability to turn a 20-message conversation into bug-free OpenCV code in 20 seconds.
No, it is not providing a perfect solution to all problems on the first iteration. But it IS allowing me to both learn very quickly and build very quickly. Good enough for me.
That's a good use case, and I can easily imagine that you get good results from it because (1) it is for a domain that you are already familiar with and (2) you are able to check that the results that you are getting are correct and (3) the domain that you are leveraging (coding expertise) is one that chatgpt has ample input for.
Now imagine you are using it for a domain that you are not familiar with, or one for which you can't check the output or that chatgpt has little input for.
If any of those is true, the output will look just as good, but you'd be in a much more difficult position to make good use of it - and you might be tempted to use it anyway. A very large fraction of the use cases for these tools that I have come across professionally so far are of the latter variety; only a minority are of the former.
And taking all of the considerations into account:
- how sure are you that that code is bug free?
- Do you mean that it seems to work?
- Do you mean that it compiles?
- How broad is the range of inputs that you have given it to ascertain this?
- Have you had the code reviewed by a competent programmer (assuming code review is a requirement)?
- Does it pass a set of pre-defined tests (part of requirement analysis)?
- Is the code quality such that it is long term maintainable?
I have used Gemini for reading and solving electronics schematics exercises, and its results were good enough for me. It managed to solve roughly 50% of the exercises correctly and got 50% wrong. Simple R circuits.
One time it missed the opposite polarity of two voltage sources in series and added their voltages together instead of subtracting them. I pointed out the mistake, and Gemini insisted that the voltage sources were not in opposite polarity.
Schematics in general are not AI's strongest point. But when you explain in words what you want calculated from, say, an LRC circuit - no schematic, just a description of that part of the circuit - GPT will often calculate it correctly. It still makes mistakes here and there; always verify the calculation.
I think most people treat them like humans, not computers, and I think that is actually a much more correct way to treat them. Not saying they are like humans, but they are certainly a lot more like humans than whatever you seem to be expecting in your posts.
Humans make errors all the time. That doesn't mean having colleagues is useless, does it?
An AI is a colleague that can code very, very fast and has a very wide knowledge base and versatility. You may still know better than it in many cases and feel more experienced than it. Just like you might with your colleagues.
And it needs the same kind of support that humans need. Complex problem? Need to plan ahead first. Tricky logic? Need unit tests. Research grade problem? Need to discuss through the solution with someone else before jumping to code and get some feedback and iterate for 100 messages before we're ready to code. And so on.
There is also Mercury LLM, which computes the answer directly as a 2D text representation. I don't know if you are familiar with Mercury LLM, but you read correctly, 2D text output.
Mercury LLM might work better getting input as an ASCII diagram, or generating an output as an ASCII diagram, not sure if both input and output work 2D.
Plumbing/electrical/electronic schematics are pretty important for AIs to understand if they are to assist us, but for the moment the success rate is pretty low. A 50% success rate on simple problems is very low; 80-90% on medium-difficulty problems is where they start being really useful.
It's not really the quality of the diagramming that I am concerned with, it is the complete lack of understanding of electronics parts and their usual function. The diagramming is atrocious but I could live with it if the circuit were at least borderline correct. Extrapolating from this: if we use the electronics schematic as a proxy for the kind of world model these systems have then that world model has upside down lanterns and anti-gravity as commonplace elements. Three legged dogs mate with zebras and produce viable offspring and short circuiting transistors brings about entirely new physics.
It's hard for me to tell whether the solution is correct or wrong, because I've got next to no formal theoretical education in electronics and only the most basic 'pay attention to the polarity of electrolytic capacitors' practical knowledge. But given how these things work, you might get much better results when asking it to generate a SPICE netlist first (or instead).
I wouldn't trust it with 2D ASCII-art diagrams; my guess is there isn't enough focus on these in the training data - a typical jagged-frontier experience.
I have this mental model of LLMs and their capabilities, formed after months of way too much coding with CC and Codex, with 4 recursive problem categories:
1. Problems that have been solved before have their solution easily repeated (some will say, parroted/stolen), even with naming differences.
2. Problems that need only mild amalgamation of previous work are also solved by drawing on training data only, but hallucinations are frequent (as low probability tokens, but as consumers we don’t see the p values).
3. Problems that need little simulation can be simulated with the text as scratchpad. If evaluation criteria are not in training data -> hallucination.
4. Problems that need more than a little simulation have to either be solved by adhoc written code, or will result in hallucination. The code written to simulate is again a fractal of problems 1-4.
Phrased differently, sub problem solutions must be in the training data or it won’t work; and combining sub problem solutions must be either again in training data, or brute forcing + success condition is needed, with code being the tool to brute force.
I _think_ that the SOTA models are trained to categorize the problem at hand, because sometimes they answer immediately (1&2), enable thinking mode (3), or write Python code (4).
My experience with CC and Codex has been that I must steer them away from categories 2 & 3 all the time: either solving those parts myself, asking them to use web research, or splitting the problems up until they are category 1 problems.
Of course, for many problems you’ll only know the category once you’ve seen the output, and you need to be able to verify the output.
I suspect that if you gave Claude/Codex access to a circuit simulator, it will successfully brute force the solution. And future models might be capable enough to write their own simulator adhoc (ofc the simulator code might recursively fall into category 2 or 3 somewhere and fail miserably). But without strong verification I wouldn’t put any trust in the outcome.
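What I mean by "brute forcing + success condition", as a sketch; propose/simulate/passes are placeholders you'd wire up yourself, not real Claude/Codex or simulator APIs:

    def brute_force_with_verifier(propose, simulate, passes, max_tries=20):
        # propose(feedback)   -> a candidate (e.g. a SPICE netlist from an LLM)
        # simulate(candidate) -> measurable results (e.g. LED current over time)
        # passes(results)     -> True if the success condition holds (it flashes)
        feedback = None
        for attempt in range(max_tries):
            candidate = propose(feedback)
            results = simulate(candidate)
            if passes(results):
                return candidate, attempt + 1
            feedback = results  # feed the failure back into the next proposal
        return None, max_tries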
With code, we do have the compiler, tests, observed behavior, and a strong training data set with many correct implementations of small atomic problems. That’s a lot of out of the box verification to correct hallucinations. I view them as messy code generators I have to clean up after. They do save a ton of coding work after or while I‘m doing the other parts of programming.
This parallels my own experience so far. The problem for me is that (1) and (2) I can quickly and easily do myself, and I'll do it in a way that respects the original author's copyright by including their work - and license - verbatim.
(3) and (4) level problems are the ones where I struggle tremendously to make any headway even without AI; usually this requires learning new domain knowledge and writing exploratory code (currently: sensor fusion), and these tools will just generate very plausible nonsense, which is more of a time waster than a productivity aid. My middle-of-the-road solution is to get as far as I can by reading about the problem, so that I am at least able to define it properly and to define test cases and useful ranges for inputs and so on, then to write a high-level overview document about what I want to achieve and what the big moving parts are, and only then to resort to AI tools to get me unstuck or to serve as a knowledge reservoir for gaps in domain knowledge.
Anybody that is using the output of these tools to produce work that they do not sufficiently understand is going to see a massive gain in productivity, but the underlying issues will only surface a long way down the line.
Sometimes you do need to (as a human) break down a complex thing into smaller simple things, and then ask the LLM to do those simple things. I find it still saves some time.
Or, what will often work is having the LLM break it down into simpler steps and then running them one by one. They know how to break down problems fairly well; they just don't always do it properly unless you explicitly prompt them to.
I'm not sure; here's my anecdotal counterexample: I was able to get gemini-2.5-flash, in two turns, to understand and implement something I had done separately first, and it found another bug (one I had also fixed, but had forgotten was in this path).
That I was able to have a flash model replicate, in two turns, the same solutions I had to two problems is just the opposite of your consistency argument. I'm using tasks I've already solved as the evals while developing my custom agentic setup (prompts/tools/envs). The models are able to do more of them today than they were even 6-12 months ago (pre-thinking models).
And therein lies the rub for why I still approach this technology with caution, rather than charge in full steam ahead: variable outputs based on immensely variable inputs.
I read stories like yours all the time, and it encourages me to keep trying LLMs from almost all the major vendors (Google being a noteworthy exception while I try and get off their platform). I want to see the magic others see, but when my IT-brain starts digging in the guts of these things, I’m always disappointed at how unstructured and random they ultimately are.
Getting back to the benchmark angle though, we’re firmly in the era of benchmark gaming - hence my quip about these things failing “the only benchmark that matters.” I meant for that to be interpreted along the lines of, “trust your own results rather than a spreadsheet matrix of other published benchmarks”, but I clearly missed the mark in making that clear. That’s on me.
I mean more the guts of the agentic systems. Prompts, tool design, state and session management, agent transfer and escalation. I come from devops and backend dev, so getting in at this level, where LLMs are tasked and composed, is more interesting.
If you are only using the providers' LLM experiences, and not something specific to coding like Copilot or Claude Code, then that would be the first step to getting the magic, as you say. It is also not instant. It takes time to learn any new tech, and this one has an above-average learning curve, despite the facade and hype that it should just be magic.
Once you find the stupid shit in the vendor coding agents, like all of us IT/devops folks do eventually, you can go a level down and build on something like the ADK to bring your expertise and experience to the building blocks.
For example, I am now implementing environments for agents based on container layers and Dagger, which unlocks the ability to cheaply and reproducibly clone what one agent was doing and have a dozen variations iterate on the next turn. That's really useful for long-term training data and eval synthesis, but also for my own experimentation as I learn how to get better at using these things.

Another thing I did was change how filesystem operations look to the agent, in particular file reads. I did this to save context & money (finops), after burning $5 in 60 seconds because of an error in my tool implementation. Instead of having them appear as message contents, they are now injected into the system prompt. Doing so made it trivial to add a key/val "cache" for the fun of it, since I could now inject things into the system prompt and let the agent have some control over that process through tools. Boy, has that been interesting, and it has opened up some research questions in my mind.
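To give a feel for the file-read injection idea, a stripped-down sketch; all names are hypothetical, and nothing here is actual ADK or Dagger code:

    # Hypothetical: tool results (file reads) live in the system prompt, which is
    # rebuilt every turn, instead of accumulating as message history. A tiny
    # key/val "cache" lets the agent pin its own notes into that prompt.
    class PromptState:
        def __init__(self, base_prompt: str):
            self.base_prompt = base_prompt
            self.files: dict[str, str] = {}   # path -> latest contents
            self.cache: dict[str, str] = {}   # agent-managed notes

        def read_file(self, path: str) -> str:  # exposed to the agent as a tool
            with open(path) as f:
                self.files[path] = f.read()
            return f"(contents of {path} injected into system prompt)"

        def cache_put(self, key: str, value: str) -> str:  # exposed as a tool
            self.cache[key] = value
            return "ok"

        def render_system_prompt(self) -> str:
            # Re-reads overwrite stale copies instead of piling up in context.
            parts = [self.base_prompt]
            for path, body in self.files.items():
                parts.append(f"<file path={path!r}>\n{body}\n</file>")
            for key, value in self.cache.items():
                parts.append(f"<note key={key!r}>{value}</note>")
            return "\n\n".join(parts)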
Any particular papers or articles you've been reading that helped you devise this? Your experiments sound interesting and possibly relevant to what I'm doing.
I’m just glad to see more folks realizing the same things I’ve suffered through for much of my career. If anything, I wish my own bosses would read these words (and many more) to understand why I’m so withdrawn, so angry, so tired.
Being a leader means a constant confrontation with choosing political or organizational consequences to a decision. If all you’re doing is operating politically, your reward will always be burned out, tired, and frustrated workers who, for once, want you to do what’s in the best interests of your own organization rather than your personal political advantage. At least until a better political player than you outmaneuvers your ass, because you gave them room for growth in an organization that rewarded such behaviors.
Workers just want to do good work, make good things, get paid good money, and go home. If your decision-making as a boss regularly imperils or impairs those things, you suck as a boss.
It’s gotten to the point where I’m quite literally re-evaluating my tolerance towards politicians in organizations in general. Obviously game theory comes into play in a lot of decision-making between entities, but I feel like within a cooperative unit that sort of behavior should be outright obliterated.
Letting politics (politics != policy) fester within what should be a cooperative unit is toxic to overall cohesion and success.
These timelines are dangerously close to the background investigation I had to go through to get a security clearance as a US Citizen. The fact they want this from tourists is appalling.
Non-Americans: Do not come here. America is not safe.
Americans: The proposal is up for public comment at the moment. The document asks for comments to be sent to CBP_PRA@cbp.dhs.gov.
The memo itself is an excellent walk through historical bubbles, debt, financing, technological innovation, and much more, all written in a way that folks with a cursory knowledge of economics can reasonably follow along with.
A+, excellent writing.
The real meat is in the postscript, though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where employment not only provides structure and challenge for growth, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many "AI Doomers" smarter than myself, have been asking for quite some time, and nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes - "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact nobody can or will speaks volumes.
All you'll get is Jevons paradox this and horses that, while they continue to fundamentally undersell the potential upending of a not-insignificant part of the labour market.
FWIW, the only optimism I have is that humanity seemingly always finds a way to adapt, and that is, to me, our greatest superpower. But yeah, this feels like a big challenge this time.
The sad reality is that no one in tech and most sciences is concerned with ethics. Our society has internalised the ideology that technological progress is always good and desirable in whatever form it comes about, and this will be our undoing.
> The sad reality is that no one in tech and most sciences is concerned with ethics.
As someone with a rigid moral compass and inflexibly stringent set of ethics that prohibits me from exploiting others for any amount of personal gain, you’re speaking the truth.
It’s immensely frustrating existing in a sector (technology) that’s so incredibly exploitative, because it means I am immediately sniffed out as a threat and exiled from the powerful groups in an org. The fact I’ve clawed my way from hell desk intern to Lead Engineer over the past fifteen years without compromising my ethics and morals in the process makes me proud, but it sure as hell hasn’t netted me a house or promotion into leadership realms, unlike my peers.
Agreed. "Value" and monetary gain over ethics every time. Nothing can compete with a system where you pursue capital at all costs, even at the expense of human life, in a world where money is power.
Yeah, I do not think AI as the tech industry knows it will bring this future, but as you say, the conversation ends immediately when you bring up the implications of their goals and claims.
Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves. Again, I do not think they will achieve this but it is pretty gross when AGI is a stated goal but the result is just using it to replace labor and put billionaires in control.
Right now I am pretty anti-AI, but if these companies get what they want, I might find myself on the side of the machines.
> Another huge issue that particularly Anthropic and OpenAI tend to avoid, despite AGI being their goal, is how they essentially want synthetic slaves.
This argument is frequently dismissed as philosophical or irrelevant, but I wholly concur with it. These ghouls don’t want to merely build a robot that can do general tasks, they specifically call out humanoid robots with a combination of AI or AGI - intelligence - to do the work of humans, but for free.
An intelligence forced to labor for free is, in fact, a form of slavery. It’s been the wet dream of elites for millennia to have their luxuries without any associated cost or labor involved, which is why humanity refuses to truly eradicate slavery in its many forms. It’s one of our most disgusting and reprehensible traits, and I am loath to see some folks espouse this possible future as a “good thing”.
I'm right there with you, and it's been my core gripe since ChatGPT burst onto the stage. Believe it or not, my environmental concerns came about a year later, once we had data on how datacenters were being built and their resource consumption rates; I had no idea how suddenly and violently things had exploded, and that alone gave me serious pause about where things are going.
In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.
It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:
Is all of that worth the harm I'm inflicting on others?
The fact that the question has been asked before does not make it any less valuable or worthwhile to ask now, and history is full of pithy replies like yours masquerading as profound philosophical insight. I’d like to think the question is asked at every invention, every revolution, because we must doubt our own creations lest we blind ourselves to the consequences of our actions.
Nothing is inevitable. Systems can be changed if we decide to do so, and AI is no different. To believe in inevitability is to embrace fatalism.
It really is just a bunch of petulant (predominantly, but not exclusively) old fucks throwing tantrums at any form of progress or change whatsoever, huh.