Oh goodness, truthfully I'm not sure how well I understand it. I initially set it up a few years ago, and recently futzed with it after my organization changed something about their auth. My configs aren't really in a shareable state so I hope this ramble is at least somewhat helpful.
Basically DavMail connects to Outlook and creates a local SMTP and IMAP server, which I connect to with mbsync and msmtp. Mu indexes the mail pulled from the local server created by DavMail, and mu4e displays it and sends mail through that local server as well. Once you have DavMail set up you can basically follow any standard mu/mutt/msmtp/mbsync tutorial; just use localhost and the ports exposed by DavMail.
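For reference, the local-server half of that pipeline looks roughly like this. This is a sketch from memory, assuming DavMail's default ports (1143 for IMAP, 1025 for SMTP) and placeholder account/password names; adjust to taste.

```
# ~/.mbsyncrc -- sketch; DavMail's default IMAP port is 1143
IMAPAccount work
Host localhost
Port 1143
User username@organization
PassCmd "pass show work-email"   # or whatever password source you use
SSLType None                     # DavMail handles TLS to Exchange itself

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/Maildir/work/
Inbox ~/Maildir/work/INBOX

Channel work
Far :work-remote:
Near :work-local:
Patterns *
Create Both
SyncState *
```

```
# ~/.msmtprc -- sketch; DavMail's default SMTP port is 1025
account work
host localhost
port 1025
from username@organization
auth on
user username@organization
passwordeval "pass show work-email"
tls off                          # DavMail terminates TLS upstream
```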
Getting DavMail set up can be the tricky part. I remember having a lot of trouble, but I think it was related to the fact that the config I was editing wasn't being picked up by the systemd service that was controlling DavMail. The best advice I can give is to experiment with different authentication modes (davmail.mode in the config) and try sending mail through the DavMail server to trigger the authentication workflow.
In the end I set davmail.mode=O365Manual and davmail.url=https://outlook.office365.com/EWS/Exchange.asmx
When I then attempted to send an email from mu4e, it opened a browser to do a Microsoft authentication, and I believe it then saved a token in my config file (the variable davmail.oauth.<your email>.refreshToken), which has been handling authentication without issue for the past few months.
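For concreteness, the relevant part of davmail.properties looked something like this (a sketch from memory; the ports are DavMail's defaults, and the token line is written by DavMail itself after the one-time browser flow):

```
# davmail.properties -- sketch; only the auth-relevant lines
davmail.mode=O365Manual
davmail.url=https://outlook.office365.com/EWS/Exchange.asmx
davmail.imapPort=1143
davmail.smtpPort=1025
# written by DavMail after the browser authentication succeeds:
# davmail.oauth.username@organization.refreshToken=<long token>
```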
Some miscellaneous notes. First, this may have been harder for me because I couldn't use the DavMail GUI, which might make the authentication workflow easier. I also have two email addresses, username@organization and first.last@organization. All of my davmail, msmtp, and mbsync configurations use username@organization, and my mu4e config references the username@organization maildir folders, but my user-mail-address variable is first.last@organization, and that is what recipients see (although mu complains about not knowing about the first.last account). Lastly, this DavMail setup isn't mu4e-specific; I initially used it with mutt, and it worked there as well.
I hope this is helpful, if there's interest I can try to go through the setup from the beginning and create a more in depth tutorial. I wish Microsoft did not make this such a pain, and I wonder if DavMail's days of effectiveness will soon be over...
Very fun! It would be interesting to do this with actual quantum mechanics (i.e. solving the Schrödinger equation for an inverted quadratic potential). The usual bound states of the harmonic oscillator can be analytically continued into resonances with complex energy, whose imaginary part should give the rate of decay in the situation the article discusses. I haven't checked the calculation, but I would imagine this gives the same result. https://arxiv.org/abs/2012.09875 looks like it has the details.
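A sketch of that continuation (standard, but hedged, since I haven't checked it against the linked paper): take the harmonic-oscillator spectrum and continue $\omega \to -i\omega$ for the inverted potential $V(x) = -\tfrac{1}{2} m \omega^2 x^2$:

```latex
E_n = \hbar\omega\left(n + \tfrac12\right)
\;\longrightarrow\;
E_n = -\,i\hbar\omega\left(n + \tfrac12\right),
\qquad
\Gamma_n = -\frac{2\,\mathrm{Im}\,E_n}{\hbar} = (2n+1)\,\omega .
```

The purely imaginary $E_n$ makes $|\psi|^2 \propto e^{-\Gamma_n t}$ decay exponentially, which is where the decay rate would come from.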
I just skimmed the article and didn't watch the video, but the bit about backpropagation is just wrong. Backpropagation doesn't compute an inverse of the Jacobian; it computes its transpose. (Although an idea similar to backpropagation could perhaps be used to invert several reversible layers, that's not typically how neural networks work.)
Backprop itself doesn't invert the computation, but it does give you the direction for an incremental move towards the inverse (a 'nudge', as the article puts it). That is, given a sufficiently nice function f and an appropriate loss ||f(x) - y*||^2, gradient descent with respect to x will indeed recover the inverse x* = f^{-1}(y*), since that is what minimizes the loss. I assume this is what the article is pointing at.
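A toy sketch of that claim (my own illustration, not from the article): pick an invertible f, run gradient descent on x against the loss (f(x) - y*)^2, and watch it recover x* = f^{-1}(y*).

```python
# Gradient descent on the *input* x to invert f: minimize (f(x) - y_star)**2.
# Toy example with f(x) = x**3 + x, which is strictly increasing,
# so f is invertible and f^{-1}(10) = 2 exactly (2**3 + 2 == 10).

def f(x):
    return x**3 + x

def f_prime(x):
    return 3 * x**2 + 1

y_star = 10.0
x = 0.0          # initial guess
lr = 2e-3        # small step size to stay stable near the solution

for _ in range(5000):
    err = f(x) - y_star
    # chain rule: d/dx (f(x) - y*)^2 = 2 * err * f'(x)
    x -= lr * 2 * err * f_prime(x)

print(x)  # converges to ~2.0, i.e. f^{-1}(10)
```

The gradient step only uses f'(x) (the 1-D "transpose Jacobian"), never 1/f'(x), yet the iteration still lands on the inverse.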
If you want to be picky, it's true that the direct analogue of continuous optimization would be discrete optimization (integer programming, TSP, etc) rather than decision problems like SAT. But there are straightforward reductions between the two so it's common to speak of optimization problems as being in P or NP even though that's not entirely accurate.
"So, for example, if you can solve some problem \Pi by running a SAT solver ten times, this doesn’t mean that you have reduced that problem to SAT— in reduction, you can only run the SAT solver once."
The section was horribly written -- while I was talking about backpropagation, I was thinking about "what can be done in polynomial time," and there's a mismatch, as you explained. Thanks, and shame on me; I've rewritten it.
In any case, I would recommend watching the video first; the article is just an accompanying stream of consciousness for the few who really liked the video.
This is one of the many comments I see on HN where I genuinely have no idea whether it’s satire or just high-level technical talk. I assume the latter, but I don’t know enough to disprove the former
It's not satire. If you have some function with several inputs, the Jacobian is a matrix of all the partial derivatives of this function with respect to each input variable[1]. Since derivatives give the slope of a function, if you think about your function as being like a bumpy surface with the height at each point being the output, this matrix tells you which way (and how far) to change any input if you want to go "up" or "down" in the output.
Backpropagation is a way to optimise a neural network. You want to know how best to nudge the weights of the network to optimise some loss function, so what you do is compute the gradient (i.e. partial derivative) of that function with respect to each of the weights. This allows you to then tweak the weights of the function so your network gets better at whatever task you're trying to get it to learn. See [2] to understand how this works and [3] to understand how this relates to the Jacobian, but generally if you're trying to go "downhill" in your loss function it's easy to see intuitively that knowing which way the function slopes (i.e. the effect of tweaking each of the weights) is important, and that's what the Jacobian tells you.
The inverse of a matrix[4] and its transpose[5] are two different operations in linear algebra. Transpose turns rows into columns and columns into rows and the inverse of a matrix is a little harder to grasp maybe without background, but you could think of multiplying one matrix by the inverse of another as a little like division (since you can't actually divide matrices).[6]
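A concrete sketch of the transpose-vs-inverse distinction from upthread (my own example, not from the article): for a one-layer map f(x) = tanh(Wx), the quantity backprop computes is the vector-Jacobian product Jᵀv, which generally differs from J⁻¹v.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
x = rng.normal(size=3)
v = rng.normal(size=3)   # "upstream gradient" flowing backwards

# Forward pass: f(x) = tanh(W x).
h = W @ x

# Full Jacobian of f at x: J = diag(1 - tanh(Wx)^2) @ W.
J = np.diag(1 - np.tanh(h)**2) @ W

# What backprop actually does: push v backwards through tanh,
# then through W -- this is exactly J^T v, built without forming J.
backprop = W.T @ ((1 - np.tanh(h)**2) * v)

print(np.allclose(backprop, J.T @ v))                 # transpose: matches
print(np.allclose(backprop, np.linalg.solve(J, v)))   # inverse: does not
```

Note the backward pass never solves a linear system; it only multiplies by transposed factors, which is why it stays cheap and works even when J is singular.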
I'm willing to bet the actual reason the transition went so smoothly and the library doesn't pay that much in per-article fees is because everybody just used sci-hub.
> The name quantum in quantum theory is related to the fact that in a separable Hilbert space, any set of mutually orthonormal vectors is countable.
Pretty sure the name quantum comes from the fact that some physical phenomena (e.g. absorption spectra) take discrete allowed values. That in principle has nothing to do with separability (e.g. you can come up with non-separable spaces that have operators with discrete spectrum). In fact, presenting things this way is pretty confusing, since separable Hilbert spaces do support operators with continuous spectrum (which is not obvious!). As far as I know, separability is mostly technical and often assumed just to make life a bit simpler, since it's pretty hard to come up with useful non-separable Hilbert spaces.
A lot of non-quantum waves have discrete allowed values: EM cavities, guitar strings, etc. Quantum waves are described by a special wave equation, actually a complex diffusion equation (first order in time, second order in space).
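For contrast, the two standard forms side by side:

```latex
\text{classical wave: } \frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u
\qquad\text{vs.}\qquad
\text{Schr\"odinger: } i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\nabla^2 \psi + V(x)\,\psi
```

The Schrödinger equation has the shape of a diffusion (heat) equation with an imaginary coefficient, hence "complex diffusion."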
... And so that constraint forces R1 to be zero? Note that finding a solution to the problem originally stated is equivalent to a linear programming problem, which (for such a simple problem) can easily be solved exactly.
Dividing by R₁ + 2R₂ is not linear, and neither is multiplying C by R₁ + 2R₂, nor dividing by that product. But you could maybe formulate a linear objective function that solves this problem correctly, and then you could formulate the component selection problem as a MILP problem and solve it with GMPL/MathProg/GLPK or the COIN-OR tools. Glancing at it, though, it isn't obvious to me how to formulate it linearly. How would you do it?
Do not be fooled: Julia is not an OO language, despite some similarities. If you try to replicate OO patterns in Julia it's not going to work well, but in idiomatic Julia I've never found the need for concrete subtyping: Julia generally de-emphasizes class hierarchies in favor of types as dispatch vessels and data holders. It's very different from OOP, but once you get used to it, it's quite nice. The things I miss from OOP are the consistency (when there's just one way to model the world, you don't have to think quite as much about design) and things like tab completion for method discovery.
I don't actually do much OO. I typically stick with Ada. It does have OO, but I rarely reach for it if I don't need it. Most of my typing interests are actually from the non-OO side.
There's far too much detail in a far-too-long response to another commenter, though I can't really recommend reading it all... But at least the start should give a little more detail on what I'm looking for with types.
I haven't seen it done yet, but in principle someone could probably write an editor plugin that gives some sort of autocomplete for method discovery in Julia based on `methodswith` or similar, which would be a nice thing to have!