The "book" is the accompanying study material for the course Probabilistic AI at ETH Zurich. Essentially, each chapter covers the material of one lecture (3 hrs).
There is no independent scalar floating point unit in most modern CPUs. When scalar floating point arithmetic is needed, it is sent to the SIMD unit. This pretty much means that scalar and vectorised floating point operations usually have the same latency. If you do any scalar floating point operations, the CPU is just doing vectorised operations with only 1 useful lane.
Is it really true that there's no scalar FPU at all? What about x87?
The instructions are still there even in 64-bit long mode, they use their own registers, and there are enough idiosyncrasies (80-bit double-extended precision, stack-based operation, etc.) that I would expect it to be easier to just include a dedicated scalar x87 FPU than to try to shoehorn x87 compatibility into the SIMD units.
If every other exchange is selling $AAPL at $100 and suddenly the top level of one exchange drops to $99, then if you just take out that order you basically gain a free dollar. Do this very fast, price the product accurately, and you will print tons of money.
Yeah, I gather that is the expectation, but if you are the first to execute an order you will sell at the old $100 price before it drops. You are racing to place an order before the information spreads to the other bots. (Right?!)
It's the predecessor of the Go1. If you buy the Pro/Edu version of the Go1 you get direct low-level SDK control, though there are reverse-engineering efforts that can do the same on the cheaper ~5k version as well.
At least in clang, your variant is not any better: uninitialized reads return the special value `undef`, which the compiler can constant-propagate to both uses of `sparse_val`.
But then each `undef` can turn to a different value at each use, even if they both came from the same uninitialized read.
For a similar simulation using a different method, see here[0]. It obviously lacks the force control, but it has a tunable viscosity parameter.
TL;DR: execution of PyTorch models on Apple's Neural Engine, plus standard data-oriented optimisations (changing matrix layout, chunking to improve temporal cache locality, and minimising redundant memory copies).
From my experience, saying "I'm thinking" actually makes it worse. Instead of thinking about a response, I internally start to panic (mainly wondering whether I have considered everything and whether the response is correct). After a few seconds of "thinking", I eventually give a response that is no different from my knee-jerk reaction.
I do find replying with an email to be extremely helpful. Even if the response is not correct, it does show you have put in the effort to reflect on the meeting after it has finished.
> After a few seconds of "thinking", I eventually give a response that is no different to my knee-jerk reaction.
In interviews with Magnus Carlsen and the other top players, they all seem to say the same thing: the big difference between a short game (bullet/blitz) and a long game is that they'll have more time to verify the move in the long game. They don't spend more time finding the move.
I find this resonates with me. Very often the instinctive solution I come up with on the spot is a very good one and requires only minor tweaks.
> I internally start to panic (mainly thinking whether I have considered everything and the response is correct)
I just add a caveat: "I think X is a good solution but I have not had time to consider all the edge cases, so I will have to verify and come back to you" or similar.
Or, if the problem is complicated I'll just say that: "There are a lot of complexities/edge cases to consider here, I need to think more thoroughly about this. I'll get back to you later".
It's usually easy to iterate through the possible solutions. Whittling them down to the best one is the hard part. Still, it helps to write them out. And if you're in a meeting, you should maybe just stop there and then ask an AI to rule the bad ones out. (Which of course we could have done without a meeting, but I digress.)
Source: I did the course