ASML recently announced a massive stock buyback on the order of $12B (roughly 55% of their revenue), which left me scratching my head.
They reportedly can't meet demand and the current backlog for new machines is at 2+ years. Surely a company in a sector with such massive capital expenditures has a better use for this money? For example, scaling production or R&D?
Building their machines takes a long time -- like a couple of years. It's also very labor intensive. And space constrained. Once the machine has been built and tested in Eindhoven, it has to be completely disassembled and shipped to the customer, then effectively completely rebuilt from the ground up. And there's a lengthy burn-in process where they have to tweak it for the specific local conditions -- which can vary if the machine has to be moved even just a foot to the left. Or if a truck drives by on a road a few miles away.
So, it's really hard to scale that process for building new machines.
They've got money burning in their pockets and no better place to spend it, so what do they do? Stock buybacks.
Disclaimer: I did a six month contract working for them several years ago, when they were towards the start of developing their EUV process. It was a real eye opener to have the tour of the facility and have everything explained to me.
We were in a massive boom before; it is probably in their best interest not to try to expand their production facilities at too fast a pace. Building out facilities that are underutilized after the boom is not wise in a capital-heavy industry such as this one.
Sure, but the parent I commented on said "Building their machines takes a long time" as a reason for not building it out. Which I think is a bad reason.
I agree with you that the demand may not be there exactly when the build-out is finished.
Also note that TSMC, Intel and a whole bunch of others are currently starting to build, or planning to build, new fabs (partly due to the new Chips Act). And they will need stuff from ASML.
I don't think that tells you anything about what's in the box; the shipping container is a custom clean room env designed to maximize use of space in a cargo plane.
Hard drives typically come in a vacuum-sealed bag, so I wager they could apply the same technique here if the components were small enough.
My point is that the size of the container tells something about the size of the components. "Completely disassembled" would mean every nut and bolt would be taken apart, which I don't think is the case given the size of the container.
Hard drives are not shipped in anything near a clean room env. The inside of that box is probably one of the cleanest places on earth. It even blows other clean rooms, like spacecraft manufacturing facilities, out of the water.
And you're missing the fact that the outside of the box isn't a clean room, and it has to mate with the clean room at the customer facilities. A bajillion baggies of parts is a non starter if the outside of those baggies isn't a clean room env.
My guess is they are spending R&D money on better/simpler/cheaper EUV light sources, so expanding production capacity of existing models isn't a great idea.
Specifically my guess (and I'm no expert) is a plasma wakefield accelerator feeding a free electron laser to get tunable-wavelength EUV light far cheaper and simpler than what they've got now. Tons of work to do to get there, but that's always been the case in their business. Also, if they don't do it someone else might, and then they'd be dead.
There are other R&D areas to be handled too. One might ask why not spend more on the R&D, but they are probably at some kind of practical limit there too.
Is anybody close to making that work? The idea was proposed around 2014, but it doesn't seem that anyone has built it.[1]
You can use a big accelerator as an EUV light source. The SLAC beamline was once used to test the concept. Medium-sized (160m circumference) synchrotrons have been suggested.[2] One appears to be under construction in China.
A plasma wakefield accelerator is a proposed way to shrink a powerful accelerator down to a more manageable size. Maybe. There's a project underway to use about a third of the 2-mile long SLAC beamline in this mode.
ASML made zapping a droplet of molten tin with lasers work as an EUV source. That's a nightmare.
This approach may or may not be better. It involves zapping a high-pressure gas with lasers.
All the known ways to do this are insanely expensive and look more like physics experiments than production equipment.
A clean light source, where you put electricity in and get EUV out, would be easier to work with. There was talk of "table top synchrotrons" a few years back. One startup built one, although it was more like garage-sized.
I can't answer the question, and my knowledge is only that of an interested observer, but given how long (~20 years) it took to build the current EUV source, and how complicated the alternatives look, this is for sure no easy feat.
I can recommend [1], which, as the title says, is about the light source. Very interesting even for a non-professional (and [2] as a high-level overview of the machine, and [3] as a longer documentary from inside ASML).
> Specifically my guess (and I'm no expert) is a plasma wakefield accelerator feeding a free electron laser to get tunable wavelength EUV light far cheaper and simpler than what they've got now
Don't you think they'll also be using a retro encabulator, with the modial interaction of magneto-reluctance and capacitive directance being used to generate the EUV light?
I honestly cannot tell if this is a serious question or a joke throwing around physics technobabble... Maybe this is how business people feel when we try to explain why some data somewhere is out of sync because of system problems.
I've just realized I really need to see this video re-shot next to a Shiny-Looking Thing at an ASML facility, preferably one with glowy lights emanating out of it somewhere.
(I've also just realized at least 10 companies have probably re-shot this internally with their own equipment. Argh.)
The buyback event itself is not taxable. For dividends, the recipient must still pay taxes even if they are reinvested (as is common) and this cash flow and tax burden is unwelcome for larger investors.
No. But the whole point of a buyback is that it will presumably cause stock appreciation and that appreciation is taxable, albeit at a lower rate and not until the stock is sold.
In large or complex arrangements the shares might never be sold, instead being used as leverage or as part of more complex tax-avoidance strategies. Far better than a dividend.
Only if you sell and are not using a structure that will shield you from taxes until an actual withdrawal is made. You can also take out a loan using the stock as collateral to avoid being taxed.
In the UK we have ISAs that are sheltered from capital gains tax, but not from dividend taxes. The same applies to pensions.
Of course this state of affairs relies on companies not all jumping to stock buybacks, or else the treasury will find some new way of getting their pound of flesh
You should probably amend this, since this is very misleading to people unfamiliar with the Dutch tax regime. As you are aware, in the Netherlands net wealth (not capital gains / appreciation) is taxable. (Even if you consider the appreciation consequently also taxable, it's certainly not immediate, as the wealth is only assessed once a year).
You pay an annual % tax on the value of your investments less debt as of January 1st. This means you still pay taxes if your assets lose value, too. It's a wealth tax that pretends to be a capital gains tax.
It doesn't pretend to be a capital gains tax at all. It's a tax on income from assets, which is in practice more or less a 'wealth tax' which is also why it's called the Dutch word for 'wealth tax' in the first place.
It is a tax on an assumed return on assets, determined as a set percentage of wealth. "Vermogensrendementsheffing" means a "tax on return on wealth", not on the wealth itself. In name it is not a wealth tax, but in reality it is, since the assumed return that is taxed has no relation to the true return. This relates to the recent decisions declaring this partially unlawful, see e.g. https://www.tilburguniversity.edu/magazine/supreme-court-net...
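For the curious, a minimal sketch of that mechanism in Python, using the old flat regime as an illustration (a 4% deemed return taxed at 30%, i.e. effectively 1.2% of net wealth; the exemption amount below is made up, and the current system uses brackets with different deemed returns per asset class):

    # Sketch of the Dutch box-3 levy under the old flat regime (assumed numbers).
    DEEMED_RETURN = 0.04   # fictitious return the taxman assumes you earned
    TAX_RATE = 0.30        # tax on that fictitious return
    EXEMPTION = 50_000     # hypothetical tax-free allowance

    def box3_tax(assets: float, debts: float) -> float:
        base = max(assets - debts - EXEMPTION, 0)  # net wealth on January 1st
        return base * DEEMED_RETURN * TAX_RATE     # effectively 1.2% of base

    # Same bill whether your portfolio gained or lost value that year:
    print(box3_tax(250_000, 0))  # 2400.0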
ASML depends on lenses from a relatively small company. That company has no interest in ramping production any more than they already have. They don't care how much money they're offered.
Nobody else can get the job done. So, as they say, it is what it is.
15 years ago, when I was there, Zeiss SMT also did the light sources (discussed elsewhere in the thread), btw, not just supplying the lenses. Don't know if that's still the case. I don't know exactly what the lens/mirror systems for EUV look like, but the "old" 193nm ones were something to behold.
Zeiss has more than just SMT (the guys that build the EUV mirrors); they have consumer products, a medical technology arm, and one that works on optical quality assurance and research microscopy. Altogether 40k employees and around 6 or 7 billion euros of annual revenue. Small on a global scale, but not that small.
Great company for engineers, by the way.
Workforce or valuation? It's no secret that these tech companies are over-inflated; at least recently there seems to have been a correction. Companies like Zeiss are much more solid, at least.
Have you looked at the way the CEO and other execs are compensated? I assume it is based on stock performance, and there is your answer.
>Peter Wennink made €4,820,000 in total compensation as Co-President, Chief Executive Officer and Chairman of the Board of Management at ASML Holding in 2021. €1,020,000 was received as Total Cash, €3,537,000 was received as Equity and €263,000 was received as Pension and other forms of compensation.
The term for this is a positive feedback loop.
>Positive feedback in amplifier
If the feedback signal is in phase with the input signal, the effective input to the circuit is increased, and this type of feedback is called positive, regenerative or direct feedback. It provides increased gain, but it also increases distortion and leads to poor stability of gain.
I was checking the ASML technology page[0] as I haven't heard of them before, it was funny seeing that they are a bleeding-edge tech company and they have an image showing jQuery code next to a paragraph about their software: https://i.snipboard.io/pUT4ot.jpg
I'm guessing that the page designer just went to a stock photo site, did a search for "code" or something similar, found that image, licensed it, and inserted it. Job done!
Even calling them a bleeding edge tech company is selling them short. They are the only company in the world that can do what they do. Their website could have pictures of clowns in cars on it, and it wouldn't matter.
How dare they not show a ninja library like React with Redux + Redux Saga + Reselect sprinkled on top. You know - for state management and "full state debuggability". You gotta manage that state. I mean what else - display a simple page with plain JS? Just render html on the server?? That's crazy.
Plenty of hardware companies are chock full of unit tests, hardware simulators-in-the-loop at multiple levels of abstraction, etc. I worked at Canon (back then Océ, i.e. huge pro printers) a long time ago and they had this shit all over. I've also seen a litho startup run load tests on the day the hardware was first in one piece, i.e. the software worked acceptably from day one (incl. all the error recovery, edge cases, etc.) because all the hardware had been simulated in software in parallel with it being designed and built.
There's nothing wrong with using jQuery, especially if you target old browsers or use some of its features that are still more convenient than the ES6 equivalent.
Agree, but I don't think that jQuery really represents the core of their software; I find it funny how big the discrepancy between the text and the image is.
It tells you that their marketing department is far removed from their software department. Their lithography rules, and their software is likely a second-class citizen.
There is an argument that the current tech has to hit a plateau first before a new technology can take over. The problem is that if the current tech is really scaling 2x every 2 years, a new technology that is 10 years from commercialization has to be more than 30x better than the current technology to be worth considering (2^(10/2) = 32). However, if current scaling slows down, the extra delta required is reduced.
I seem to remember this was an argument made with Josephson Junctions back in the day. They were potentially better, but always far enough out that traditional technologies would exceed the projected performance by the time they launched.
So maybe we need to hope for current trends to plateau if we want something new to replace it.
Aren't there still massive gains on the table? I'm totally naive on this, but I recall hearing about other materials, such as graphene iirc, that could achieve 100x CPU clock speeds.
I assume what you mean is that the CPU may get faster but would ultimately stall waiting for data, which can only move so fast. Presumably to continue feeding data to the CPU we'd want to be much more aggressive about caching and prefetching.
Another option would be to not go 100x faster but to go 10x faster and then take the improved cooling + energy usage as a win.
Does the article say anything about indefinite hockey stick growth? It seems a lot more measured, just saying that Moore’s law will keep going for at least another decade or so.
It's actually a measure of the angle over which light can be collected, and it's defined as the sine of the half-angle. For example, if you want to launch light into a fibre-optic cable with NA 0.5, the light needs to be angled 30 degrees or less with respect to the cable (since sin(30°) = 0.5); any light at a greater angle won't undergo total internal reflection and will simply shoot out the side of the fibre (conversely, light at smaller angles will be reflected and will go bouncing down the cable until it comes out at the far end, again within a 30-degree half-angle).
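For the curious, that relationship in a few lines of Python (the 0.33 and 0.55 values are the commonly cited numerical apertures of ASML's current and High-NA EUV optics, added here just for context):

    import math

    def half_angle_deg(na: float, n_medium: float = 1.0) -> float:
        # NA = n * sin(theta)  =>  theta = asin(NA / n)
        return math.degrees(math.asin(na / n_medium))

    print(half_angle_deg(0.50))  # 30.0 degrees: the fibre example above
    print(half_angle_deg(0.33))  # ~19.3 degrees: current EUV optics
    print(half_angle_deg(0.55))  # ~33.4 degrees: High-NA EUV optics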
Not that I know of. But interestingly there is a theoretical limit. One implication of that is of course that there can’t be an open ended equivalent of Moore’s law; any such law must come to end.
In-development High-NA EUV can do 2nm logic, possibly 1nm with double patterning? There's not a lot more room at the bottom after that, given a silicon crystal atom distance of ~0.3nm.
I do not know where you got that number, but it is meaningless.
Silicon atoms do not behave like spheres; only the atoms of many metallic or ionic substances have behavior close to that of spheres. So it makes no sense to speak about a length "across" a silicon atom.
In the case of silicon, a meaningful number is that there are around 50 silicon atoms per cubic nanometer of silicon crystal.
The gate of a transistor (the active part of the transistor, with a much smaller volume than the complete transistor) in the latest 5-nm technologies is contained in a fin that might be around 5 nm wide, 20 nm long and 50 nm high, for a volume around 5000 cubic nanometers. (It is a complete coincidence that the width of a fin might be around 5 nm to 6 nm for a process named "5 nm"; there is no relationship between the name of the process and the width of the fin, and fins did not even exist for processes with names greater than "22 nm".) Such a gate of the smallest transistor might include around 250 thousand Si atoms. The volume of the complete transistor would be at least 10 to 20 times greater.
However, the size of a transistor is not limited by the number of silicon atoms in the gate, but by the number of impurity atoms that control the conductivity of the silicon, and those are far fewer than the silicon atoms (I have not seen any numbers for the latest technologies, but there could be e.g. 10 thousand times fewer of them, so there might be fewer than 100 impurity atoms in the gate).
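That arithmetic, spelled out in Python (the density and fin dimensions are the numbers from above; the 1-in-10,000 doping ratio is only the rough guess from the previous paragraph):

    # Rough atom count for a 5 nm-class FinFET gate, using the numbers above.
    ATOMS_PER_NM3 = 50                 # Si atoms per cubic nanometer of crystal
    fin_w, fin_l, fin_h = 5, 20, 50    # fin width, length, height in nm

    gate_volume = fin_w * fin_l * fin_h        # 5000 nm^3
    si_atoms = ATOMS_PER_NM3 * gate_volume     # 250,000 Si atoms

    doping_ratio = 1 / 10_000                  # guessed: 1 impurity per 10k Si atoms
    impurity_atoms = si_atoms * doping_ratio   # ~25, i.e. fewer than 100

    print(si_atoms, impurity_atoms)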
>I do not know where you got that number, but it is meaningless
Could you expand a bit? If atoms were randomly arranged, or in a non-solid state, I could kind of understand. But once you get to any kind of fixed structure you can infer 2D spacing from 3D spacing.
Or is your intent to say that the quantity of silicon in a gate is 3D, especially with fins, so a 2D view doesn't give a complete picture?
In silicon crystals and in most other semiconductors, the atoms are kept in their positions by covalent bonds that have certain directions in space, e.g. towards the 4 vertices of a regular tetrahedron in the case of a silicon crystal.
Because of that, the atoms are not packed together like some spheres, i.e. like the atoms in metallic aluminum or in table salt, where you may speak about the diameter of the atomic spheres. They are distributed on a lattice that has empty spaces between atoms and their bonds (i.e. the places where electrons belonging to the atoms are located with high probability).
The distance between the silicon atoms in a silicon crystal varies depending on the direction, so there is no single value that could be considered the diameter of a silicon atom.
The periodic cell of a silicon crystal has the same structure as that of cubic diamond and it has the form of a cube with 8 atoms inside it (an atom in a cube corner counts as 1/8 inside, an atom on a face counts as 1/2 inside).
While this visualization uses balls and sticks, to show the positions of the centers of the atoms, that has nothing to do with the form of the real atoms.
At most you could consider that a silicon atom has the form of the corresponding Voronoi polyhedron, in which case you would have to give several numbers to describe its size, and not a single "across" value.
The number that I have provided, i.e. 50 Si atoms per cubic nanometer of Si crystal, can be computed by dividing 8 atoms by the volume of the cubic cell of the Si lattice. Given a volume of Si crystal, you can then compute the number of Si atoms.
As I have said, I do not know what that 0.2 nm value means, as the distance between 2 neighboring Si atoms can be larger than 0.5 nm, depending on the direction. In any case, you cannot use it to compute anything about the number of atoms in a silicon device.
EDIT: I believe that you might have got your 0.2 nm from truncating the distance between 2 silicon atomic planes in the so-called "111" direction (the direction of the cube diagonal), which is the minimum distance between atomic planes in silicon.
That distance is 0.543 nm * sqrt(3) / 4 = 0.235 nm.
Because this distance is correct only for the "111" crystallographic direction, it cannot be used to compute the number of atoms in some piece of silicon, and it certainly cannot be called the diameter of a silicon atom ("across an atom" without specifying the direction).
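Both numbers follow directly from the standard silicon lattice constant, a = 0.543 nm:

    import math

    a = 0.543            # silicon lattice constant in nm
    atoms_per_cell = 8   # atoms per cubic (diamond-lattice) cell

    density = atoms_per_cell / a**3   # ~50 Si atoms per cubic nanometer
    d_111 = a * math.sqrt(3) / 4      # ~0.235 nm, spacing of the "111" planes

    print(round(density, 1), round(d_111, 3))  # 50.0 0.235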
adrian_b's explanation is pretty good explanation of what I'm saying. To expand on that: you cannot pack transistors side-by-side without moving to global custom poly. The result is that there's a large amount of "technically not space for the transistor, but space required for the transistor to work" stuff around the transistor.
The result is that a transistor on a 2nm process is on the order of tens of millions of atoms.
Source: I work with the latest nodes.
Here's a Fermi estimate: the M1 is ~16 billion transistors; assume it is 1 cm^2. That gives (10^7)^2 nm^2 / 10^10 transistors, which is ~10^4 nm^2/transistor. Assume the transistor is ~50nm high, which gives ~5x10^5 nm^3/transistor. There are ~100 Si atoms/nm^3, which gives ~5x10^7 Si atoms/transistor.
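The same estimate as a short script (all inputs are the rough round numbers above):

    # Fermi estimate: Si atoms per transistor on an M1-class die.
    transistors = 16e9           # ~16 billion transistors
    die_area_nm2 = (1e7) ** 2    # 1 cm^2 expressed in nm^2
    height_nm = 50               # assumed transistor height
    atoms_per_nm3 = 100          # round number for Si atom density

    volume_per_transistor = die_area_nm2 / transistors * height_nm
    atoms = volume_per_transistor * atoms_per_nm3

    print(f"{atoms:.1e}")  # ~3.1e7: tens of millions of Si atoms per transistor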
BTW, if things are really so good: I hear Canon (yes, the same one that makes photo cameras) is working on steppers, as is one other Japanese company whose name I forget, a subsidiary of a well-known big group.
This is not an easy business, but there is space for at least one more player.
Reminds me of the plastic-window madness in Ukraine, where lots of people bought machine tools, dreaming they would sell plastic windows forever.
Some even built full-size factories.
But unfortunately it all ended: within ~10 years, hundreds of small businesses had filled all of people's current needs, and since plastic windows last about 20 years, the market then shrank by about 95%, because all those tools now only serve the ~1/20 of the market that needs replacing each year.
PS: sure, the war will change things, but even the most pessimistic opinions say about 30% of homes need big repairs (really much less, but right now it is impossible to know exactly), and with existing capacity that demand will be filled in a couple of years.
While you are debating whether eternal growth exists, we who actually write software can tell you that the bottleneck has been RAM ever since the 386 introduced caches:
DDR3 was the peak of RAM latency, and 65nm SSDs were the peak of flash memory longevity, both in ~2010!
You cannot scale anything unless it's embarrassingly parallelizable, in which case you never had a problem in the first place.
Peak performance matters more than the puny 0.5 Gflops/W advantage the "5"nm M1 (2.5 Gflops/W) has over the 28nm Raspberry Pi 4 (2 Gflops/W).
Everything that is below one order of magnitude improvement is irrelevant = ASML is irrelevant unless devices break and you buy things you cannot repair.
That, combined with eternally rising electricity costs (not as a function of marginal prices on a market, like today, but as a permanent depletion of resources), leads to only one conclusion:
Don't wait for the next hardware/firmware-locked, rent-seeking line of hardware: 14nm x86 and 28nm ARM IS good enough for a lifetime if you control it and make sure it lasts.
There are only 2 programming languages that can survive this long term: C on the client (with a little ++ for convenience) and vanilla JavaSE on the server (with as few dependencies as possible).
Bonus prediction: JavaME will be revived in some form for open microcontroller mobile use.
> Peak performance matters more than the puny 0.5 Gflops/W advantage the "5"nm M1 (2.5 Gflops/W) has over the 28nm Raspberry Pi 4 (2 Gflops/W).
Those numbers are very misleading - perf per watt is not a constant but something that very much depends on how far the clock speed is pushed. Anything tuned for perf is going to have significantly worse perf per watt than the architecture is capable of. The same goes for the CPU design - maximizing perf and maximizing perf per watt is going to give you different architectures on the same node.
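A toy model of that effect, assuming dynamic power P = C·V²·f and a made-up linear voltage/frequency curve (all constants are purely illustrative):

    # Toy model: perf/W falls as the clock is pushed, if V must rise with f.
    C = 1.0                      # arbitrary switched-capacitance constant
    for f in (1.0, 2.0, 3.0):    # relative clock frequency
        v = 0.6 + 0.2 * f        # made-up voltage needed to sustain f
        power = C * v**2 * f     # dynamic power
        print(f"f={f:.1f}  V={v:.2f}  perf/W={f / power:.2f}")
    # f=1.0 V=0.80 perf/W=1.56 ... f=3.0 V=1.20 perf/W=0.69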