I have wondered this and occasionally seen some related news.
Transistors can do more than switch on and off; there is also the linear region of operation, where the gate voltage lets a proportional current flow.
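To make that concrete, here is a minimal sketch of the textbook long-channel NMOS model in Python. The constant k and the voltages are illustrative, not taken from any real device:

    def mosfet_current(v_gs, v_ds, v_th=0.7, k=1e-3):
        """Drain current (A) from the textbook long-channel NMOS model.

        k lumps mobility, oxide capacitance and W/L; purely illustrative.
        """
        v_ov = v_gs - v_th              # overdrive voltage
        if v_ov <= 0:
            return 0.0                  # cutoff: the transistor is "off"
        if v_ds < v_ov:
            # triode ("linear") region: a gate-controlled resistor
            return k * (v_ov * v_ds - 0.5 * v_ds ** 2)
        # saturation: current set by the gate voltage, nearly independent of v_ds
        return 0.5 * k * v_ov ** 2

    # Sweeping the gate voltage at fixed v_ds shows graded, not binary, behavior:
    for v_gs in (0.5, 0.9, 1.3, 1.7):
        print(v_gs, mosfet_current(v_gs, v_ds=0.1))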
So you would be constructing an analog computer. In operation it might resemble a meat computer (a brain) a little more, since the activation potential of a neuron is an analog signal from other neurons. (I think? A weak activation might trigger half of a neuron's outputs, while a strong one might trigger all of them.)
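Purely as a toy illustration of that guess (a hypothetical sigmoid mapping, not a biological model), the relation between input strength and the fraction of outputs that fire might look like:

    import math

    def fraction_of_outputs_triggered(activation, midpoint=1.0, steepness=4.0):
        """Toy model: a weak input triggers few outputs, a strong one nearly all.

        The sigmoid and its parameters are invented for illustration only.
        """
        return 1.0 / (1.0 + math.exp(-steepness * (activation - midpoint)))

    print(fraction_of_outputs_triggered(0.5))   # weak input -> small fraction
    print(fraction_of_outputs_triggered(2.0))   # strong input -> close to 1.0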
I don’t think we know how to construct such a computer, or how it would perform a fixed computation. The weights of the neural net would become something like capacitance at the gates of the transistors. Computation, I suppose, is then just inference, or thinking?
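One way research hardware already approximates this is a resistive crossbar: store each weight as a conductance, apply the inputs as voltages, and Kirchhoff's current law sums the products on each output wire. A minimal numerical sketch, with made-up shapes and values:

    import numpy as np

    # Hypothetical 3x4 crossbar: each weight stored as a conductance (siemens).
    # Conductances are non-negative; real designs encode signed weights as the
    # difference between a pair of devices.
    G = np.abs(np.random.randn(3, 4)) * 1e-6

    v_in = np.array([0.2, 0.5, 0.1])    # input activations applied as voltages (V)

    # Ohm's law per device (I = G * V) plus Kirchhoff's current law per column
    # gives the matrix-vector product "for free" in the analog domain:
    i_out = v_in @ G                    # one output current (A) per column
    print(i_out)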
Maybe with the help of LLM tools we will be able to design such things. So far as I know there is nothing like an analog FPGA where you program the weights, instead of what you do with a digital FPGA: making or breaking connections and loading each LUT with its truth table.
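If such a part existed, programming it might look less like wiring LUTs and more like loading a quantized, range-limited weight matrix. Everything below is hypothetical; the class, method names, and device limits are invented for illustration:

    import numpy as np

    class AnalogArray:
        """Hypothetical 'analog FPGA' tile: you program weights, not connections."""

        def __init__(self, rows, cols, g_min=1e-7, g_max=1e-5, levels=256):
            self.g_min, self.g_max, self.levels = g_min, g_max, levels
            self.g = np.full((rows, cols), g_min)

        def program_weights(self, w):
            """Map real-valued weights in [0, 1] onto the assumed conductance
            range, quantized to the assumed number of programmable levels."""
            codes = np.round(np.clip(w, 0.0, 1.0) * (self.levels - 1))
            step = (self.g_max - self.g_min) / (self.levels - 1)
            self.g = self.g_min + codes * step

        def infer(self, v_in):
            """Computation as inference: one analog matrix-vector product."""
            return v_in @ self.g

    tile = AnalogArray(3, 4)
    tile.program_weights(np.random.rand(3, 4))
    print(tile.infer(np.array([0.2, 0.5, 0.1])))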
You don't think we know how to construct an analog computer? We have decades of experience designing analog computers to run fire control systems for large guns.
We also have a fair amount of experience with (pulse/spiking) artificial neural networks in analog hardware, e.g. [1]. Very energy efficient, but still hard to scale.
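For readers who haven't met spiking models: here is a minimal leaky integrate-and-fire neuron in plain Python. The dynamics are the standard textbook ones, the constants are made up, and chips like the one in [1] realize this behavior directly in analog silicon rather than by numerical integration.

    def lif_spike_times(i_in, t_max=0.1, dt=1e-4, tau=0.02, r=1e7,
                        v_rest=-0.065, v_th=-0.050, v_reset=-0.065):
        """Leaky integrate-and-fire: dV/dt = (-(V - v_rest) + R*I) / tau.

        All constants are illustrative, not fitted to any neuron or chip.
        Returns the times (s) at which the membrane crosses threshold.
        """
        v, spikes = v_rest, []
        for step in range(int(t_max / dt)):
            v += dt * (-(v - v_rest) + r * i_in) / tau   # Euler integration
            if v >= v_th:                                # threshold crossing
                spikes.append(step * dt)
                v = v_reset                              # reset after a spike
        return spikes

    # A stronger input current yields a higher firing rate (rate coding):
    print(len(lif_spike_times(2e-9)), len(lif_spike_times(4e-9)))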
That’s a very cool abstract, thanks. I suppose it’s the plasticity that poses a pretty serious challenge.
Anyway, if this kind of computer is so great, maybe we should just encourage people to employ the human reproductive system to make more of them.
There’s a certain irony to critics of current AI. Yes, these systems lack certain capabilities that humans possess, it’s true! Maybe we should make sure we keep it that way?