AI will scrape your blog and your personal philosophy will eventually become a part of collective Human Intelligence. That's a pretty good reason to blog imo.
That reminds me of a gimmick a while ago where GitHub would collect your repositories into an Arctic Code Vault. That was, IMO, a bit of an incentive for me to upload random bits of git repositories from my PC just so that I could say my code will last 1,000 years somewhere in the Arctic.
I remember it vaguely, but there used to be a badge awarded for being among the first 100 people to solve the problem. I was obsessed with getting that badge, to the point that I spent an obscene amount of time solving the then-recently released problem even though my final exams were the following day. I did manage to get that badge, though. This was circa 2013. Fun times!
That would be something that is intelligent to you. I believe the author (or anyone in general) should focus on pinning down what intelligence objectively is.
The best we will ever do is create a model of intelligence that meets some universal criterion for "good enough", but it will most certainly never be an objective definition of intelligence, since it is impossible to measure the system we exist in objectively without affecting the system itself. We will only ever have "intelligence as defined by N", but not "intelligence".
Perhaps it was due to English not being my primary language, but it took me an embarrassing amount of time to learn that probability and likelihood are different concepts. Concretely, we talk about the probability of observing some data given that an underlying assumption (the model) is true, while we talk about the likelihood of the model given that we observe some data.
Yeah, it was a poor choice of nomenclature, since in common, nontechnical parlance "probable" and "likely" are very close semantically. Though I'm not sure which came first: the choice of "likelihood" for the mathematical concept, or the casual use of "likely" as more or less synonymous with "probable".
But the article makes it crystal clear (I had never seen it explained so clearly!):
"For conditional probability, the hypothesis is treated as a given, and the data are free to vary. For likelihood, the data are treated as a given, and the hypothesis varies."
The likelihood function returns a probability (or, for continuous models, a probability density). Specifically, it tells you, for some parametric model, how the joint probability of the data in your data set varies as a function of the model's parameters.
If that sentence doesn't make sense, it's helpful to just write out the likelihood function. You will notice that it is in fact just the joint probability density of your model.
The only thing that makes it a "likelihood function" is that you fix the data and vary the parameters, whereas normally probability is a function of the data.
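To make that concrete, here's a minimal sketch in Python (my own illustration, not from the article): a Gaussian model with unknown mean, where the data stay fixed and the parameter varies. The dataset values and the use of numpy/scipy are assumptions for the example.

```python
# Minimal sketch (not from the article): the likelihood is just the joint
# density of the *fixed* data, read as a function of the model parameter.
import numpy as np
from scipy.stats import norm

data = np.array([1.2, 0.7, 1.9, 1.4, 0.9])  # observed data, held fixed

def log_likelihood(mu, sigma=1.0):
    # Joint log-density of the fixed data under a Normal(mu, sigma^2) model,
    # viewed as a function of the parameter mu.
    return norm.logpdf(data, loc=mu, scale=sigma).sum()

# Vary the parameter while the data stay the same:
for mu in (0.0, 0.5, 1.0, 1.5):
    print(f"mu={mu:.1f}  log-likelihood={log_likelihood(mu):.3f}")
```

Swapping which argument is "free" is the whole difference: fix mu and vary the data and the exact same expression reads as an ordinary joint density.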
If you think about it, this has evolutionary advantages as well. There's no time to feel pain when your life itself may be in peril due to starvation. Finding food for sustenance easily supersedes recovery.
Especially if you haven't done this before, you start experiencing very strong hunger about 8-12 hours after your last meal. This is very, very much in advance of any kind of threat to your life or health from starvation. In fact, the sensation of hunger typically dulls after another 12h or so, so that if you make it past 24h of not eating, you'll typically feel less hunger than you did your first night of skipping dinner.
Reminds me of Simulated Annealing. Some randomness has always been part of optimization processes that seek a better-than-local equilibrium. Genetic Algorithms have mutation, Simulated Annealing has temperature, and Gradient Descent similarly has random batches.
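For anyone who hasn't seen it, here's a bare-bones simulated annealing sketch in Python (a toy illustration, not any particular library's implementation); the objective, step size, and cooling schedule are arbitrary choices for the example:

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999):
    """Minimize f starting from x0. The temperature t controls how often
    worse moves are accepted, and it decays over time."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)   # small random move
        fc = f(candidate)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops -- the "temperature" here plays
        # the same role as mutation in GAs or random batches in SGD.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Toy objective with several local minima.
print(simulated_annealing(lambda x: x * x + 2 * math.sin(5 * x), x0=3.0))
```

The temperature is exactly the "useful randomness": early on the search accepts bad moves often enough to escape local minima, and as it cools it settles into whatever basin it has found.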
Yes, explaining the "why / how did the SAT solver produce this answer?" can be more challenging than explaining some machine learning model outputs.
You can literally watch the execs' excitement and faith evaporate when the issue of explainability arises, since blaming the solver is not sufficient to save their own hides. I've seen it hit a dead end at multiple $bigcos this way.
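To illustrate the explainability point above: here's a tiny example using the python-sat (pysat) package (my choice of library, not something mentioned in the thread). The solver hands back a satisfying assignment, but nothing resembling a human-readable chain of reasoning.

```python
# Tiny illustration: the solver returns a model (or UNSAT), not an explanation.
from pysat.solvers import Glucose3

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
solver = Glucose3()
solver.add_clause([1, 2])
solver.add_clause([-1, 3])
solver.add_clause([-2, -3])

if solver.solve():
    # A list of signed literals, e.g. [1, -2, 3]; the "why" behind it is opaque.
    print("SAT, model:", solver.get_model())
else:
    print("UNSAT")
solver.delete()
```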
Nice effort. As far as textbooks for QM, Electrodynamics, and any sufficiently complex field of study go, I always feel that these have been written using abstractions that people developed much later, retroactively. I understand the advantages: it makes the entire content concise, structured, and basically straightforward. However, what I crave is a technical book that is based upon the history of the subject. Something that doesn't start immediately with Hilbert spaces but starts off by talking about why Max Planck did what he did, how Einstein improved upon it, what mistakes were made, what misguided hypotheses were later corrected and in what manner, how different things were then unified... you get the point. I think this narrative-based approach would motivate me much better than something that's condensed and distilled.
Most Physics undergraduate programs have a course on Modern Physics, which is often taught in the way you are asking for, though only up to the origins of quantum mechanics. This textbook, for example, does this [1].
The problem is that after the basics of QM, there were literally hundreds of papers by dozens of important scientists developing the subsequent theory. And you can no longer teach the subject in a linear historical fashion.
I think the book called "Quantum Mechanics" by Max Planck and Niels Bohr is quite similar to what you need. And at least in my country it's available for less than $2.50 USD converted, so it's pretty damn cheap.
Of course, I think you'd be able to find an ebook version too.
Just include Max Planck and Niels Bohr as the authors, lol.
I would recommend watching Curt Jaimungal's series of talks with Jacob Barandes. He gives a nice background history of various aspects of QM, including the formulation of Matrix and Wave mechanics (and loads of other ideas). Barandes is excellent at clearly articulating complex ideas in very simple, concise terms. He also has his own formulation of QM based on "Indivisible non-Markovian Stochastic Processes". Even if you disagree with his ideas, the interviews are quite fascinating.
In this interview he goes over pretty much exactly what you mentioned (and a lot more):
Yes - I think that's the one the OP recommended. Great read. Gives a superb historical overview and the reader can follow the twists-and-turns of discovery. You get to 'know' the scientists as they battled the Quantum. Sets the scene before delving into other books that teach the actual Math etc.
Weinberg's Lectures on Quantum Mechanics has an illuminating historical introduction for its first chapter. The introduction to his Quantum Theory of Fields is more specifically about quantum field theory, fittingly, and focuses on later developments.
If you want something that's more focused throughout on the historical progression, a classic book is Jammer's Conceptual Development of Quantum Mechanics, but it assumes you're already familiar with quantum and statistical mechanics.
If you like videos, the physicist Jorge Diaz has excellent videos that accessibly detail the experimental and theoretical history:
https://www.youtube.com/@jkzero/playlists
"However, what I crave is a technical book that is based upon the history of the subject"
Not a book per se, but if you're interested in videos, run, don't walk, to check out Jorge Diaz's channel (see https://www.youtube.com/watch?v=MCJl3-pHGuU for example). It is just what you're asking for.
“QED and the Men Who Made It” [1] might be close to what you’re after for quantum theory at least. Unlike other popular accounts, it gets quite technical and covers a lot of the historical dead ends that people had during the development of quantum field theory.
The introduction to Vol 1 of Weinberg’s Quantum Theory of Fields does this really well, albeit briefly. It feels like getting an “insider’s view” of the historical developments.