If memristors act like neurons, put them in neural networks (ieee.org)
83 points by hypomnemata on Jan 28, 2021 | 39 comments


I am surprised people have such high expectations for memristors. They are just another way to build an analog computer -- better for machine learning, worse for classical ODEs.

But we have not used analog computers for 50 years, and for good reason -- they are not reproducible, their accuracy is highly process-dependent and has a hard upper limit, and they are often tuned for a single function.

Would people want a chip which is basically unpredictable -- performance can vary by tens of percent, it has to be re-trained periodically to prevent data loss, and there is no way to load a pre-trained network? I doubt it. Maybe there is an extremely narrow use case, but I do not see it in mainstream devices.


>Would people want a chip which is basically unpredictable -- performance can vary by tens of percent, it has to be re-trained periodically to prevent data loss, and there is no way to load a pre-trained network? I doubt it. Maybe there is an extremely narrow use case, but I do not see it in mainstream devices.

Human brains have the same problems and seem to be fairly popular.


True, but they're hard to do experiments with, and experimentation is useful if you're trying to build such a brain.


If reliability is what we trade away for energy-cheap, human-like processing, it's definitely worth doing. If we had unlimited human-like processing, we would basically have unlimited slaves without affecting biological organisms similar to us.

Not to mention, it's worth doing just for machine learning that is 100,000 times cheaper.


Oh boy I can’t wait for the unlimited slaves


Don't worry, we will take shortcuts, because of greed and ignorance, leaving them with agency we will not have accounted for.

Should be exciting.


Is the better alternative to create actual slaves? Intelligent, feeling beings that we put governors on, like a thrall's shock collar from Star Trek, so they don't get too uppity?

Because that's a horror show in its own right.


[Goes with morbid flow here]

Perhaps!

If we get it right, and they are in fact beings, there may not be better alternatives. It may always end in conflict.

I say that because a core aspect of beings is that they "be", and that boils down to agency of some kind.

Negative reinforcement runs counter to all that and warrants a response from the oppressed being's point of view. Our being their creators will only go so far.

John Varley proposed an interesting solution to this in his Gaea series. A slave was created by connecting all their pleasure centers to an area on their forehead. Any touch other than their own triggered pleasure so intense it was worth pretty much anything to experience.


OK, that's not a shock collar, but it's creepily similar to keeping folks drugged into compliance. Also an ethical hole.

I'm not sure what the solution is. Intelligent but not self-aware?


Yeah the whole thing is a mess. I don't think there is a solution.

Things that are intelligent in some fashion but not self-aware are machines.

Edit: We will end up making really good machines, and they will present us with the illusion of agency. That's all a good thing. I want agents for a variety of things. Most of us do, or would benefit from them.

But none of that actually requires that we make other beings. Once we do that, they are beings! They will have agency and all the other stuff we associate with sentience. And the way I see all that, they are peers. They may be simpler than us, like the animals are, or more than us, with obvious and, to many, concerning implications.

When we advance machines, no shock- or extreme-pleasure-type means and methods are required. And the price of that is something almost aware, but lacking agency.

Everything pivots on agency.

Should we create something that has it, the idea is that it can do hard work for us, because it will be capable of it like we are.

Otherwise, we have to do all the hard work and end up with something able to do more work, but the investment cost is high. Running costs are much lower. New hard work = another significant investment.

Neither I nor anyone else, I believe, can tell us how agency, the self, awareness, and all that intertwine to present as intelligence, as a being of some sort. Answers come slowly, and I fear the tests needed to get at them more quickly are ethics bombs all over the place.

We want to make that happen somehow, on the assumption that we will get a huge return, because of the potential we know intelligence can bring to the table.

It all seems an awful lot like wanting to have your cake and eat it too, at this point in time.


Perhaps we can give the slaves feelings of profound happiness when they obey us. Requires some immense scientific and/or philosophical advances though.


Well, bounded by the expanding co-moving light cone of mass-energy, barring FTL travel.


Not really...

Psychology does this all the time.

Outcomes can be described as distributions as much as they can as binaries. It just takes a change in mindset.


> Human brains have the same problems and seem to be fairly popular.

Not only that, but I'm sure there's a use case where they are more suitable; we just haven't found it yet.


Human brains are also ridiculously expensive to copy and distribute. Not to mention that you need an extremely loose definition of "copy".


Me and the wife made a couple, and whilst they aren't cheap, I wouldn't say they're outrageously costly to make/run.

Not particularly reliable though... and yes, we seem to be having great difficulty copying the required information into them.


I'm in the first trimester of the manufacturing process, excited but also having some sticker shock at day care options.

Never built something so initially unstable that ends up Turing complete before.


> Not to mention that you need an extremely loose definition of "copy".

Makes me wonder: with a nearly perfect simulation of a reinforcement learning agent and a human brain or equivalent architecture, would you be able to make exact copies? Assuming the PRNGs and other settings were deterministic. Now that would be an interesting and controversial experiment.


My thoughts exactly.

People want what is in our heads. Which is it?

Some idealized, and by nature limited, subset or idea, or the real deal?


Human brains try really hard not to be analogue.


> Would people want a chip which is basically unpredictable

Judging by the prevalence of (pseudo-) random numbers in machine learning, I'd say yes. Reentrancy is a big plus, but not always a dealbreaker.

It's possible we might end up with a sort of left/right-brain setup, with a noisy analog hypothesis generator paired with a robust, logic-based evaluator/planner.
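For a minimal sense of how that split might work, here's a toy propose-and-verify loop (the function names are hypothetical, and random.gauss stands in for the analog noise source):

    import random

    def noisy_propose(x):
        # stand-in for the analog "hypothesis generator":
        # cheap and fast, but imprecise
        return x + random.gauss(0, 1.0)

    def exact_score(x):
        # stand-in for the robust, logic-based evaluator
        return -(x - 3.0) ** 2

    best = 0.0
    for _ in range(1000):
        candidate = noisy_propose(best)
        if exact_score(candidate) > exact_score(best):
            best = candidate
    # 'best' lands near 3.0 despite the sloppy proposals

The noisy half only has to be creative, not correct; the deterministic half keeps it honest.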


> It's possible we might end up with a sort of left/right-brain setup, with a noisy analog hypothesis generator paired with a robust, logic-based evaluator/planner.

That sounds very interesting to explore, definitely would teach us something new.


Those pseudo-random numbers are still reproducible.
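For example, a minimal numpy sketch (assuming explicitly seeded generators, which is how ML frameworks expose this):

    import numpy as np

    # two generators seeded identically produce identical streams,
    # so "random" initializations and dropout masks can be replayed
    rng_a = np.random.default_rng(42)
    rng_b = np.random.default_rng(42)
    assert (rng_a.normal(size=8) == rng_b.normal(size=8)).all()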


Sorry, I meant reproducible, not reentrant.


You might be interested in 'Robust First Computing' - https://www.youtube.com/watch?v=6CNg1Q3RNWI

edit: here is a far better explanation by the term's creator: https://www.youtube.com/watch?v=7hwO8Q_TyCA


I agree with all your points. Perhaps the opportunity is that, if your neural networks are tolerant of that kind of variance, there may be applications where you can shrink the hardware footprint (and cost) dramatically while improving latencies (and maybe power usage).
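A crude way to probe that tolerance (toy numbers, nothing memristor-specific): perturb a weight matrix by ~10% multiplicative noise and see how far a layer's output drifts.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))          # a "trained" weight matrix
    x = rng.normal(size=64)
    clean = np.tanh(W @ x)
    # device-style variability: each weight off by ~10%
    noisy = np.tanh((W * (1 + 0.1 * rng.normal(size=W.shape))) @ x)
    drift = np.abs(clean - noisy).mean()   # how much the output moves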


I encourage folks to actually read the linked article instead of basing their commentary on the shoddy title.

https://www.nature.com/articles/s41928-020-00523-3

Abstract:

> Resistive memory technologies could be used to create intelligent systems that learn locally at the edge. However, current approaches typically use learning algorithms that cannot be reconciled with the intrinsic non-idealities of resistive memory, particularly cycle-to-cycle variability. Here, we report a machine learning scheme that exploits memristor variability to implement Markov chain Monte Carlo sampling in a fabricated array of 16,384 devices configured as a Bayesian machine learning model. We apply the approach experimentally to carry out malignant tissue recognition and heart arrhythmia detection tasks, and, using a calibrated simulator, address the cartpole reinforcement learning task. Our approach demonstrates robustness to device degradation at ten million endurance cycles, and, based on circuit and system-level simulations, the total energy required to train the models is estimated to be on the order of microjoules, which is notably lower than in complementary metal-oxide-semiconductor (CMOS)-based approaches.
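For context, the core of the scheme is ordinary Metropolis sampling. Here's a minimal sketch with a toy log-posterior, where a PRNG stands in for the cycle-to-cycle device noise the paper harnesses:

    import math, random

    def log_post(w):
        # toy log-posterior (standard normal); the paper trains
        # Bayesian classifiers, but the loop has the same shape
        return -0.5 * w * w

    w, samples = 0.0, []
    for _ in range(10000):
        w_new = w + random.gauss(0, 0.5)   # noisy proposal step
        delta = log_post(w_new) - log_post(w)
        if random.random() < math.exp(min(0.0, delta)):
            w = w_new                      # accept
        samples.append(w)
    # 'samples' now approximates draws from the posterior

The insight in the paper is that the proposal noise comes for free from the hardware instead of from a PRNG.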


Memristors are the technology that seemed poised to usher in a new era of computing. The promise of being able to redraw the current computer architectural hierarchies is tantalizing.

If machine learning applications are what finally get memristors out into the world, I wish them godspeed.


What do you think of memristors vs FPGAs?

They have many similarities -- they both redraw current computer architectures, they integrate memory and computing, they can have randomness built in, and they both take less power than mainstream GPUs.

What does a memristor provide that a specially designed "neural FPGA" cannot?


They aren't really directly comparable in the way you imply. A memristor is basically a resistor that changes resistance when you run current through it. An FPGA is a clever layout of many thousands of logic gates that can be programmed to form arbitrarily complex digital logic circuits. It's like comparing apples to skyscrapers because they're both associated with New York.
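To make "changes resistance when you run current through it" concrete, here's a toy version of the linear ion-drift model from the 2008 HP paper, with made-up constants:

    # toy linear-drift memristor: state w in [0, 1] tracks the charge
    # that has passed through; resistance interpolates R_on..R_off
    R_on, R_off = 100.0, 16e3       # ohms (illustrative values)
    k, dt, V = 1e7, 1e-6, 1.0       # drift rate, time step, applied volts
    w = 0.1
    for _ in range(1000):
        R = R_on * w + R_off * (1 - w)
        i = V / R                               # Ohm's law
        w = min(max(w + k * i * dt, 0.0), 1.0)  # state drifts with current
    # w has grown and R has dropped: the device "remembers"
    # the charge that flowed through it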

Presumably, a memristor-based neural network would have the advantage over an FPGA of requiring significantly less silicon area to achieve the same function. I imagine an FPGA-based neural network would approximate analog signals digitally, perhaps using half-precision floats or something. Memristors would operate on analog signals directly, encoding information as amplitudes or pulses of currents and voltages.
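The appeal is that a memristor crossbar does a vector-matrix multiply in one physical step: conductances are the weights, Ohm's law does the multiplies, and Kirchhoff's current law sums each column. With made-up numbers:

    import numpy as np

    G = np.array([[1e-4, 2e-4],     # conductances in siemens,
                  [3e-4, 4e-4]])    # one memristor per crossbar cell
    V = np.array([0.5, 1.0])        # input voltages, one per row
    I = V @ G                       # column currents = analog matmul output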

Notably, FPGAs and GPUs can't really be directly compared to each other in terms of power consumption unless you specify particular use cases. You can't build the equivalent of a mainstream GPU out of FPGAs without severely limiting the clock speed (because of how physically large it would be), and if you did anyway, it would use many orders of magnitude more power. So a GPU is way more power efficient than an FPGA for rendering graphics. But there are certainly problems that a GPU isn't good at solving, and for those there's a good chance that an FPGA solution would be more power efficient.


Correct me if I'm wrong, but isn't it a good metaphor to say that FPGAs are just ASIC emulators? I.e., much less efficient than an equivalent ASIC, but good for prototyping, and something to use in production if you can't afford to manufacture your own chip (which covers most low-volume use cases). Under this metaphor, memristor-based ASICs will obviously be more efficient than FPGAs emulating them, but that's predicated on there being a high-volume use case justifying the creation of a memristor chip in the first place.


Whoa there... sometimes digital emulation of an analog circuit is faster than the analog circuit, since you can solve for equilibria directly, easily repeat calculations, use well-researched programming tools, etc.


My understanding of the FPGAs and ASICs used to speed up neural networks (such as those in phones) is that they are simply designed to do the types of calculations NNs use (matrix operations) more quickly, and generally at reduced precision. This is very different from a memristor approach, where the structure of the network itself would be represented in the silicon. I also think it's unfair to compare the two, because it took decades of work to get CMOS transistors to where they are today. I imagine that once commercial applications for memristors appear, many optimizations/improvements will present themselves.
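The reduced-precision point is easy to demonstrate with a quick numpy sketch:

    import numpy as np

    x = np.random.rand(256, 256).astype(np.float32)
    w = np.random.rand(256, 256).astype(np.float32)
    exact = x @ w
    approx = (x.astype(np.float16) @ w.astype(np.float16)).astype(np.float32)
    rel_err = np.abs(exact - approx).max() / np.abs(exact).max()
    # rel_err is on the order of 1e-3: fine for inference, which is
    # why accelerators trade precision for speed and energy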


"the structure of the network itself would be represented in the silicon" -- ASICs then? perhaps even hybrid analog/digital one, where fixed coefficients are stored in digital memory, while input data is analog.

I believe there is great value in being able to "snapshot" the state and later load exactly the same state into millions of devices. And I cannot see how this would easily work with memristors.


Isn't the coherer https://en.wikipedia.org/wiki/Coherer the first memristor? If so, it was invented in 1890.


>> The devices could also work well within neural networks, which are machine learning systems that use synthetic versions of synapses and neurons to mimic the process of learning in the human brain.

Yann LeCun disagrees:

IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

https://spectrum.ieee.org/automaton/artificial-intelligence/...


Digikey doesn't sell memristors.


The movie "Ex Machina" is a great exploration of this discussion... a must-see.


Basic question, but why are transistors not considered fundamental?



