Great post. And if you want some control support for your cronjobs, Perl's App::Cronjob[1] can provide features such as exclusive locking (so a job won't run if the previous run is still going), timeouts, and options for sending mail on success or failure.
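If you've never used something like it, the exclusive-locking idea is roughly the following. This is a minimal Python sketch of the concept only, not App::Cronjob's actual implementation; the lock path and command are made up:

# Sketch of the "don't run if the previous run is still going" idea:
# take a non-blocking exclusive lock on a file and bail out if it's held.
import fcntl
import subprocess
import sys

LOCK_PATH = "/tmp/nightly-job.lock"       # hypothetical lock file
COMMAND = ["/usr/local/bin/nightly-job"]  # hypothetical job

with open(LOCK_PATH, "w") as lock:
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # previous run still holds the lock; skip this run
    try:
        # enforce a timeout so a hung job doesn't pile up forever
        result = subprocess.run(COMMAND, capture_output=True, timeout=3600)
    except subprocess.TimeoutExpired:
        sys.exit("job timed out")
    if result.returncode != 0:
        # a tool like App::Cronjob would mail the captured output here; we just print it
        sys.stderr.write(result.stdout.decode() + result.stderr.decode())
        sys.exit(result.returncode)

The point of a wrapper like App::Cronjob is that you don't have to hand-roll this per job.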
> I think this is incorrect. Specifically the Windows ARM support. Official hardware support page indicates that the Windows version requires x64. I unfortunately don’t have the hardware to confirm for myself. But Blizzard is the kind of company that would have made a blog post about that.
It has been around for a while, since circa 2021. They made a forum post when they released it.
> There's something to be said for the restrictions of an environment when you're learning how to operate in a domain that seems to shape future thinking.
When I was at university, the academic running the programming languages course was adamant that the Sapir–Whorf hypothesis applied to programming languages, i.e. that language influences the way you think.
Reading the YCombinator link, there's a mention of APL and a comment by dTal[1] which says, in part:
> "A lot of the mystique of APL is because it's illegible ... nothing more than a DSL for 'numpy-like' code. .. same demo, using Julia and the result is (in my opinion) much more legible: ... let n=sum(map(
sum() in Julia is clearer and more readable at a glance than +/ in APL, but the APL version is a combination of two things: +, which is a binary addition function, and /, which is reduce, a higher-order operator or meta-function. sum() in Julia doesn't lead you to think about anything except what other builtins exist. The APL notation leads you to wonder about combining other commands in that pattern, like times-reduce, ×/, which calculates the product of an array of numbers. From the notation you can see that sum and product are structurally related operations, which you can't see from the names sum() and product(). Then you can change the other part by wondering what plus does when used with other higher-order functions, like +\ (scan), which gives a running sum across an array (i.e. "+\ 1 1 1 1" gives "1 2 3 4", the sum so far at each point).
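To make that reduce/scan correspondence concrete, here is a rough Python sketch of the same structural idea (variable names are mine; this is only an illustration, not a claim about how APL or Julia implement anything):

# The reduce/scan pairing described above, spelled out in Python.
# +/ and ×/ are the same higher-order pattern with a different operator;
# +\ swaps the reduce for an accumulating scan.
from functools import reduce
from itertools import accumulate
import operator

numbers = [1, 1, 1, 1]

total   = reduce(operator.add, numbers)             # +/  ->  4
product = reduce(operator.mul, numbers)             # ×/  ->  1
running = list(accumulate(numbers, operator.add))   # +\  ->  [1, 2, 3, 4]

Swapping operator.add for operator.mul, or reduce for accumulate, changes one axis of the pattern at a time, which is exactly the structural relation the single-glyph notation makes visible.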
So the notation isn't just about readability, it's a tool for thinking about the operations. Different notations enable you to think about different things. If we imagine there was no sum() then you might write:
sum = 0
foreach (n in numbers) { sum += n }
product = 0
foreach (n in numbers) { product *= n }
and whoops, that doesn't work; this notation brings into focus that sum has to start with 0 and product has to start with 1 to get the right answer, and you can wonder mathematically why that is; APL notation hides that just like it hides the looping. Different notation is a tool for changing what people think about - what things we must attend to, cannot attend to, and what new things a notation enables us to see. dTal's next reply:
> "the power of abstraction of APL is available to any other language, with the right functions. ... there's nothing to stop anyone from aliasing array-functions to their APL equivalents in any Unicode-aware language, like Julia (oddly, nobody does)."
Maybe nobody does it because, without an APL engine behind it, you can't take the patterns apart and put them back together differently - so is there any benefit? Take an example from APLCart[2]:
{⍵/⍨∨\⍵≠' '} Dv # Remove leading blanks [from a character vector]
In C# that task is str.TrimStart(), and I assume it's a loop from the start of the string counting the spaces then stopping, calculating length - num_of_spaces, allocating that much memory for the new string, and copying the rest of the string into the new memory. I wouldn't think it was doable using the same higher-order function (\, scan) as the running sum. What the APL is doing to achieve the answer is different:
{⍵≠' '} '   abc   def'      # make a boolean array mask
┌→──────────────────────┐   # 0 for spaces, 1 for nonspaces
│0 0 0 1 1 1 0 0 0 1 1 1│
└~──────────────────────┘
{∨\⍵≠' '} '   abc   def'    # logical OR scan
┌→──────────────────────┐   # once a 1 starts,
│0 0 0 1 1 1 1 1 1 1 1 1│   # carry it on to end of string
└~──────────────────────┘
{⍵/⍨∨\⍵≠' '} '   abc   def'
┌→────────┐                 # 'compress' using the boolean
│abc   def│                 # array as a mask to select what to keep
└─────────┘
Now how do I remove the leading 0s from a numeric array? In C# I can't reach for TrimStart() because it's a string-only method. I also can't assume that there's a named method for every task I might possibly want to do. So I have to come up with something, and I have no hints for how to do that. So I have to memorise the TrimStart() name on top of separately learning how TrimStart() works. That notation gives me a clear, readable name that isn't transferable to anything else. In APL it's:
{⍵/⍨∨\⍵≠0} Dv # Remove leading zeroes [from a numeric vector]
That's the same pattern. Not clear and readable, but it is transferable to other similar problems - and it reveals that they can be considered similar problems. In C, where strings are arrays of characters, you aren't doing whole-array transforms. In C# strings are opaque. In APL strings are character arrays, and you can do the same transforms as with numeric arrays.
Which part of that would you alias in Julia? I suspect you just wouldn't write a trimstart in this style in Julia, just as you wouldn't in C#. You wouldn't think of using an intermediate boolean array.
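For what it's worth, the mask / OR-scan / compress pattern can be spelled out in a general-purpose language. Here's a rough Python sketch (the function name and examples are mine, purely to illustrate that the pattern carries across strings and numeric lists alike), though it's clearly not the first thing you'd reach for:

# The mask / OR-scan / compress pattern from the APL example, sketched in Python.
from itertools import accumulate, compress
import operator

def trim_leading(seq, blank):
    mask = [x != blank for x in seq]        # ⍵≠' '  boolean mask
    keep = accumulate(mask, operator.or_)   # ∨\     OR-scan: once true, stays true
    return list(compress(seq, keep))        # ⍵/⍨    compress: keep where scan is true

print("".join(trim_leading("   abc   def", " ")))   # 'abc   def'
print(trim_leading([0, 0, 3, 0, 5], 0))              # [3, 0, 5]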
It's not just about "readability": the APL notation being concise and self-similar reveals some computy/mathematical patterns in data transforms which "giving everything a unique English name" obscures. And APL notation hides other patterns which other notations reveal. That is, different notations are tools for thinking differently about problems - Notation as a Tool of Thought.
I find it very telling that both Samsung and SK Hynix have already stated that they don't plan to expand capacity - officially, to prevent overcapacity in the future. It would also be plausible that both doubt OpenAI will follow through with the contract.
Expanding manufacturing capacity takes many years. Memory has historically been a cyclical business with boom and bust periods. It’s reasonable for manufacturers to be cautious about deciding to expand.
If the demand holds I’m sure they’ll expand. Until then, I think they see it as a short term supply spike.
They don't need to expand capacity to fulfill this contract.
They would want to expand capacity if they believed this increase in demand is long lasting - the implication is therefore that they don't believe it, or not enough to risk major capital expenditures.
You saw the same with GPU makers not wanting to expand capacity during the Cryptocurrency boom. They don't want to be left holding the bag when the bubble pops.
Part of that equation, FWIW, is that certain countries would flood the market with supply to make any new projects suddenly unprofitable.
Which sucks extra bad because if you shut the project down and later start it back up, you can't just flip a switch. You've gotta put together a whole new team and possibly retrain them.
I sincerely hope that OpenAI goes down in flames, with those DRAM contracts then going to the highest bidder - so probably Google or whatever AI competitor.
Honestly, I'm having problems remembering the other AI companies without googling it. I recall MS, Facebook, Amazon, Google, and Anthropic.
When OpenAI fails, all the AI projects everywhere else will also be killed. Such is the nature of bubbles. With any luck the farm land where the datacenters were built will be sold back to the farmers for half off and they get a free barn out of the deal.
It's more likely that overcapacity is put to work in a plan B, like cheap cloud virtual desktops. Why spend effort on spying and tracking users when their whole desktop computer is in your data center?
I love the randomness of this question, at least in my context.
Last time I bought something at Sears was the spring of 2008: a new set of tires for my car, and they were bad, so I never went back. Also, there aren't a lot of Sears stores in Mexico.
When the AI bubble pops, it's going to take the good parts of AI out with it.
Capitalism has never been about the survival of the fittest. That's just weird Nietzschean-Libertarian fantasy where someone ends up blaming the lack of truly free markets for their inability to get a date.
Any large building in a rural area will be used as a barn if it has no other useful purpose. It's kind of hilarious when I pass by the old AT&T long lines facility being used as a hay barn.
I refuse to read the AI slop that passes for journalism about the contracts OpenAI bought, but if they take physical delivery and open actual datacenters built with the RAM they'll be parted out at the minimum, if not absorbed by another AI provider or Big Tech in general.
> "prevent overcapacity" is just a fancy way of saying "we prefer to gouge consumers at little risk to us."
No it’s not. The memory business has been cyclical for years. Overexpansion is a real risk because new manufacturing capacity is very expensive and takes a long time to come online.
If they could make new manufacturing come online quickly they would do it and capture the additional profit of more sales.
If you present an operating profit of $25 billion, yes, in a healthy, truly free market competition would force you to either A) eat into your profit margin by reducing prices or B) invest in R&D and capacit-
Actually, let me eat my words, you are right. As I typed this I saw some news from an hour ago[0] about SK Hynix planning to invest about $500 billion into 4 more fabs. I imagine [hope] Samsung will follow, and together with Chinese memory fabs ramping up both in capacity and technology, prices will return to earth in 2027, maybe 2028.
Guess I am just a little too bitter because GPU prices only finally seemed to normalize after half a decade of craziness. Top that off with corporations in the West usually forgoing investment and using profits like these for massive stock buybacks and dividends, and my expectations sour.
Additional profit? They're making a lot more money right now than if they had more supply.
The risk of overexpansion is real but I really doubt they want to expand much in the next couple years. They don't have to worry about being undercut by small competitors so they can enjoy the moment.
No, they are making higher margins, but not as much profit as they could have.
Look at the standard Econ 101 supply-demand curve.
If they could make and sell twice as many chips, it would not cut their margins anywhere near in half. They would be making much more.
When demand spikes up and down there will be pain, because booms are not predictable in timing, size, or duration, and accelerating supply expansion is very expensive, slow, and risky.
Many boom-prompted RAM supply expansions have ended in years of unprofitable overcapacity.
Price spikes like we are seeing reflect tremendous pent-up/increased demand.
Any price increase reduces purchases by many customers. This tends to keep prices stable, with only small changes in price relative to regular changes in demand.
Yet prices have gone way up.
Which means that many people and businesses are cancelling, delaying, or scaling back their RAM purchases. And yet new demand is incredibly high.
To get prices down, supply would have to grow tremendously. Enough to soak up even more purchases from the very motivated, and to cover all the purchasers that have currently pulled back.
There's room for making more, but I don't think doubling makes sense from a profit point of view.
Especially because the demand curve that's skyrocketing right now is the RAM that isn't in long-term contracts. Doubling all production would much more than double the RAM available for normal purchases.
> To get prices down, supply would have to grow tremendously. Enough to soak up even more purchases from the very motivated, and to cover all the purchasers that have currently pulled back.
Is "down" here back to normal levels?
But normal levels are like a tenth of the profit margin. They'd make significantly less money doing that.
Now thinking of this from the other side: two big DRAM producers are taking the risk of dedicating a very big part of their production to AI, and if we assume they also have similar deals with other AI companies or big datacenters, what is their risk profile if the AI bubble bursts? Are they still viable as companies? What is their plan B?
Their risk is basically none. They are not increasing capacity, only selling the capacity they already have to the highest bidder. Whenever these AI companies run out of money, these producers can simply resume their regular business.
It only depends on whether they get addicted to the high prices. As long as they can withstand a collapse in prices, then you're right, they have minimal risk.
Not sure about DRAM companies, but many businesses would still go under if they sold their annual production to a company that then goes bankrupt and won't pay anything for the delivered goods.
Hopefully they get paid more than once a year. Their risk is completely dependent on 1) the net-X days until they are paid, and 2) how fast they delay shipment when/if a payment is delayed.
> what is their risk profile if the AI bubble bursts?
Exactly. This is why they’re not scrambling to invest in additional capacity. If these memory manufacturers went all in on new capacity it would take years to build out. If the bubble bursts, or even if it doesn’t burst and just tapers off back to normal demand, they would be in a bad position with excess manufacturing capacity that isn’t paying off.
I think the price increases we are seeing are a direct result of the skepticism about AI scale viability. The big DRAM houses aren't increasing capacity, due to the risks you mention.
So demand from other sources has to be suppressed through being priced out in order to meet those supply promises made to OAI in ignorance of their true scale.
This is OAI doing suppliers dirty by making economy distorting moves without transparency, intentionally distorting the market in an effort to hurt competitors.
Yet another example of the “free market” creating destruction for the general public.
As a thought experiment, replace “dram” with “rice” or another essential food stock. Market manipulation such as this is wildly irresponsible, anti-humanity and antithetical to public good. Wars are started over less.
This is an excellent example of the actual alignment of OpenAI as an organization. Yet we are to trust them with leading the way in the alignment of our manqué oracles of truth and power?
> This is OAI doing suppliers dirty by making economy distorting moves without transparency, intentionally distorting the market in an effort to hurt competitors. Yet another example of the “free market” creating destruction for the general public.
At the speed OpenAI is growing, it's far more likely they're trying to protect themselves first, not harm competitors. The market only exists because it's free / semi free. Were it controlled by statist bureaucrats - which is the sole alternative back in reality - the situation would be drastically worse. Just ask Soviet Russia. You'd get your meager once-every-ten-years DRAM ration and you'd like it.
The general public isn't the standard of morality or good. Invoking it is meaningless.
I think we can dispense with the strawman Soviet Russia alternative lmfao.
In a reasonably well regulated market, deception at that scale (that utterly destroys competitive buildouts by externalizing the costs that would normally be borne by a customer needing an exceptional order) would be a clear violation of market laws. The fact that deceptive, aggressively anticompetitive behavior such as this, blatantly harmful to other innovation, passes as "free market" is a laughable assertion… this is merely the will of the stronger, not any reasonable definition of a free market. A free market implies transparency in pricing and demand, alongside fair competition practices.
Anyone else planning to innovate in the ML space just took a huge hit thanks to OAI, including scientists, pharmaceutical companies, and other things that arguably operate mostly in the realm of clear public good.
Their inherent assumption that might = right is a very powerful indication of their inability to be trusted in the control of a tool / weapon that has more potential to steer the future of humanity than nuclear power/weapons ever did. It’s clear that A: they don’t see AI as any big deal, or B: they don’t care how their actions affect humanity in any nuanced sense of the concept.
> The big dram houses aren’t increasing capacity, due to the risks you mention.
Except they are
> SK hynix to boost DRAM production by a huge 8x in 2026, still won't be enough for RAM shortages
> It's also not just SK hynix that is boosting DRAM production capacity, with both Samsung and Micron rapidly increasing their respective DRAM production numbers.
That's such an impossibly big number for that timeline. The actual news is they're ramping up their newest node, which they were doing anyway, and which was a small percent of their total production.
lol 8x in 2026 hahahahahahaha that is one of the funniest things I’ve ever heard of coming from a semiconductor manufacturer. Maybe 8x as many of something they weren’t selling beforehand, but increasing production on full fabs by 8x? I’d love to be wrong but this makes zero sense to me.
You mean OpenAI will profit selling their RAM stocks if the AI bubble bursts? I doubt it honestly. If the AI bubble bursts, then global demand will collapse altogether crashing the value of HW.
Never gonna happen. Cash has no intrinsic value except maybe for use as fire fuel / toilet paper. GPUs, while currently inflated in price, will always find enough value. Their price might go down 50-75% but never 99%.
That's the meaning of intrinsic value - the device can do what it can do, regardless of market conditions. Today it has the value of fifty teraflops, and tomorrow it still does, unless it breaks. However, intrinsic value cannot be measured in dollars.
And yet we're talking about electronics here; they don't have sentimental value, and just because compute capacity is sitting unused doesn't mean there's any guarantee it will be used, even at a per-unit cost approaching €0.
I'm sure that farmers during the Great Depression were also consoling themselves with the "intrinsic caloric value" of their corn.
As I said, the intrinsic value of a GPU is not measured in €. In fact, the lower the sale price gets, the better a deal it is, not worse - you get the same intrinsic value for less extrinsic cost.
There are also intrinsic costs, mostly power consumption.
Food calories are cheaper to convert into something useful. It's not like GPUs, once bought for peanuts, turn into perpetual motion machines. They need power, cooling, a whole infrastructure built around them.
GPUs would have taken the world by storm already in the roughly 30 years since they've been around.
Even for GenAI it's likely ASICs take over at some point if we really care about performance.
GPUs used to cost 20% of what they cost now, and Intel and AMD make perfectly serviceable GPUs for most PCs. NVIDIA's top-of-the-line GPUs won't suddenly be plugged into lowly laptops.
Yes, lots of companies will buy them for cheap, but these AI beasts also have OpEx costs. Not every alternative use is worth the money, and there are zero guarantees that the alternative uses cover the gap. NVIDIA sells 80% of its GPUs for AI now.
I think people don't realize just how big this bubble is.
After GPU crypto mining became unprofitable Chinese manufacturers took "mining only" cards, desoldered the GPU and built new graphics cards using the chips.
So at least the lower end stuff (RTX6000) could be repurposed like that.
Didn't Microsoft drop 16-bit application support in Windows 10? I remember being saddened that the JezzBall exe I've carried from machine to machine no longer worked.
Microsoft dropped 16-bit application support via the built-in emulator (NTVDM) from 64-bit builds of Windows; whether that happened with Windows 10 or an earlier version depends on when the user moved to a 64-bit build (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third-party emulators, such as DOSBox and NTVDMx64.
It does, if you use an old enough version of windows that SUA is available :). I never managed to get fontconfig working so text overlapped its dialogue boxes and the like, but it was good enough to run what I needed.
True, but at this point you're basically doing Windows-on-Linux-on-Windows. But why not anyway... applications will run way faster than on the hardware they were originally designed for anyway.
Are you talking about CPU support? I installed a 32 bit program on basic linux mint just the other day. If I really need to load up a pentium 4 I can deal with it being an older kernel.
That's exactly what I mean; I wish Linux was more like NetBSD in its architecture support. It kind of sucks that it is open source but it acts like a corporate entity that calculates the profitability of things. There is one very important reason to support things in open source: because you committed to it, and you can. If there are practical reasons such as a lack of willing maintainers (I refuse to believe that out of all the devs that beg to have a serious role in kernel maintenance, none are willing to support i386 - if NetBSD has people, so can Linux), that's totally understandable.
You'd expect Microsoft to drop support for things because they don't make money for them anymore, or for some other calculated cost reason, but Microsoft keeps supporting old things few people use even when it costs them in performance and security.
Well for now the kernel still supports it. And the main barrier going forward is some memory mapping stuff that anyone could fix.
Though personally, while I care a lot about using old software on new hardware, my desire to use new software on old hardware only goes so far back and 32 bit mainstream CPUs are out of that range.
I think eventually 32-bit hardware and software shouldn't be supported, but there are still plenty of both. We shouldn't get rid of good hardware just because it's too old; that's wasteful. 16-bit had serious limits, but 32-bit is still valid for many applications and environments that don't need more than ~3GB of RAM. For example, routers shouldn't use 64-bit processors unless they're handling that much load; die size matters there, which is why they mostly use Arm, and why Arm has Thumb mode (narrower instructions = smaller die size). I'm sure the tiny amounts of money and energy saved by not having that much register/instruction width add up when talking about billions of devices.
Open source isn't where I'd expect abandonware to happen.
> We shouldn't get rid of good hardware because it's too old, that's wasteful.
Depends on how much power it's wasting, when we're looking at 20 year old desktops/laptops.
> 32 bit is still valid for many applications and environments that don't need >3GB~ ram.
Well my understanding is that if you have 1GB of RAM or less you have nothing to worry about. The major unresolved issue with 32 bit is that it needs complicated memory mapping and can't have one big mapping of all of physical memory into the kernel address space. I'm not aware of a plan to remove the entire architecture.
It's annoying for that set of systems that fit into 32 bits but not 30 bits, but any new design over a gigabyte should be fine getting a slightly different core.
> For example, routers shouldn't use 64bit processors unless they're handling that much load, die size matter there
I don't think that's right, but correct me if I missed something. A basic 64 bit core is extremely tiny and almost the same size as a 32 bit core. If you're heavy enough to run Linux, 64 bit shouldn't be a burden.
This is an aside. Yesterday I was in a shopping centre (i.e. a mall) and a bunch of kids ran through the food court, maybe 10 of them, all around the 9-12 age range.
A grumpy lady shouted at them: "Kids, you shouldn't be running!"
I turned to the person I was eating with, and our discussion could be summarised as: "Kids should be running. The problem isn't that they're running; the problem isn't even directly where they're running. Where they're running is a symptom of them having nowhere else to run."
No? I grew up in a rural area, with fields and places to run... and run I did.
A nearby "huge" city (30k people) had a mall. Yet left in that mall with 10 friends, I'd run there too... until chastised. No real difference between a rural area with a mall 50 years ago and one now.
Groups of kids running tend to bump into things and crash into people; excited kids aren't known for taking care. Stopping that has been typical going back to at least the 50s.
It's also why kids are typically told to stop running around a house.. and to go outside.
So I strongly disagree that it is a symptom of nowhere else to run. Of course, I find it sad if kids have no place to go run.
I don't disagree with you, but the fact that something has been done since the 50s when it comes to child care is not necessarily an indicator that it's good. We imposed many things on children during that time that would be widely considered damaging and counterproductive today.
Telling kids not to run around indoors where they can collide with objects or people, break things, injure themselves, and generally get in the way isn't damaging - or at least is significantly less damaging than the perception in this thread that telling kids not to do something is awful.
This is just standard manners and teaching children how to interact with an adult society. Why does anybody think telling kids not to run indoors is wrong?
No need to put words in my mouth. I specifically referred to the fact that just because something has been done since the 50s doesn't mean it has automatic relevance when it comes to modern child raising - not that telling kids today to cut it out when running in public places is a bad thing.
It does not even have to be urban areas. We have parks all around the city. Our schools have playgrounds. Everything is still there from when I was a kid, i.e. ~20 years ago.
Kids should be running but not if they cause a nuisance - this is the part not highlighted in the article, societal oversight. When kids are out in the forests they aren't bothering or harming anyone, but when in public they will have to conform to some standards / rules.
"It takes a village" is a well known saying, I've always interpreted that that it's not just the parents that raise kids.
Sorry, but no. You shouldn't be running in crowded areas like food courts (or indoor areas not specifically created for athletics), and playing smug semantic arguments like that doesn't help.
The kids aren't running because they're unable to go outside. They're running because no one's been enforcing that they act within the standards of basic decency.
Kids should be screaming and singing sometimes, but you wouldn't tell someone in the library not to hush them.
> You shouldn't be running in crowded areas like food courts (or indoor areas not specifically created for athletics)
I guess this is a cultural thing, i.e. what is expected of kids. Among my age group in Eastern Europe (25-30 y/o), we joke around that our parents didn't let us stay at home, which has a lot of truth to it. Once we were out in the city, they didn't even have an idea where we went, and we didn't have mobile phones either. We used to run around everywhere without exception - malls, forests, you name it. That is still expected of kids nowadays, but the kids themselves are far more drawn to the digital world now.
> And in Eastern Europe 25-30 years ago, other adults would have no problem yelling at you to behave in their own language/words.
Nobody yelled at us then or even thought that we were doing something wrong. If you yelled at a kid in a shopping mall for running around like crazy, people would look at you weirdly. It was expected of kids to behave this way in my culture, and still is to this day. This may not be the case elsewhere, hence why I think that there is a heavy cultural aspect.
You're right, it is cultural; I was thinking more of Slavic countries, where bad behavior from other kids isn't tolerated by adults and they have no fear expressing it.
>and playing smug semantic arguments like that doesn't help.
How is it semantic? They go outside and now they are running in a giant parking lot. They go a bit further and now you're a bad parent for not keeping an eye on your kid. Tell them to sit down and play on a tablet and you're also a bad parent.
There's no winning here.
>you wouldn't tell someone in the library not to hush them.
I don't consider a mall the equivalent of a library in this situation.
This trend of kids having less free play outside doesn't excuse the parents of these kids letting them take over any space they want. Any reasonable person can see there are still boundaries; are we just disagreeing on what those are? Kids still can't/shouldn't run at swimming pools, and it's been that way for decades (just an example).
> If you let your kids run around in giant parking lots I would argue you are a bad parent.
> Ever heard of parks?
I remember being bored as hell when my parents used to take me to the city park. Many other kids thought the same, too. I couldn't wait to run around with my friends wherever else in the city afterwards. I'm thankful to my "bad parents" for letting me roam around anywhere I wanted, as was the norm for kids back then where I grew up in Europe.
I'm guessing you are talking with someone who is used to life in the North American suburbs, where kids need to be driven around and most of the options for activities are indoors.
Sadly, yes. The nearest park is 5 miles from me or the mall. The buses run on the hour and will get you within 2 miles of the park. They stop running around 7pm.
You joke, but I still use an old Winamp 2.81 on my Windows machine.
About 15 years ago I came across some plugin DLL files that added FLAC support.
The only issue I ever run into is that some non-ASCII characters in ID3 tags make a file unplayable. But Winamp is perfectly capable of editing them.
It's even pretty good on high-DPI monitors because Ctrl-D enables "Double Size" mode on the main window and equaliser, and the playlist window has customisable font sizes.
That's similar to me, I started using FreeNAS back in the 9.x days.
At the time the FreeNAS documentation recommended installing to a USB drive. This proved unreliable, but dedicating a drive to it was silly given it couldn't be used for anything else. I had all the things I needed, but I wanted to peel back the layers and this seemed like a good excuse.
So I threw in a drive and installed FreeBSD 10, spent a few days familiarising myself with everything, learned how to configure Samba myself, learned how to set up jails with iocage (the old shell version), and finally imported my pool.
I got one of these free energy audit things, which included swapping out up to 30 or so bulbs with LEDs. Whatever contractor did it seems to have gotten the cheapest bulbs they could, and the majority of them had failed within 4 or 5 years. So far so good on the name-brand ones I replaced them with.
[1] https://metacpan.org/pod/App::Cronjob https://metacpan.org/dist/App-Cronjob/view/bin/cronjob