
> Karla Helena-Bueno discovered a common hibernation factor when she accidentally left an Arctic bacterium on ice for too long.

I love how this story follows the magic pattern of so much of innovation and discovery - an accident. It's refreshingly human and not a mode of discovery that machine learning is going to completely take away from us.



> The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!" (I found it!) but "That's funny..."

-- commonly attributed to Isaac Asimov


As a one-time scientist, I think Asimov may have been tricked by extreme selection bias on "That's funny..." utterances. It almost always precedes a crushing realization that you have fucked something up and probably wasted a lot of time.

You're probably days down exploring that explanation before the eventual "holy shit" (that I never really had the benefit of experiencing).


The most exciting phrase that heralds a new discovery is not "eureka!" but "holy shit!"


The legend of “eureka” is that a fellow who had a specific problem in mind noticed something specific to his problem. Discoveries like sticky notes are not the same thing.


Yes. The "eureka" legend refers to the ancient Greek scholar Archimedes. He was tasked with determining if a gold crown commissioned by the king was pure gold, or if the goldsmith had substituted some silver. While taking a bath, Archimedes noticed that the water level rose as he got in, and he suddenly realized that the volume of water displaced must be equal to the volume of the submerged object. This gave him the insight that by measuring the volume of water displaced by the crown, and comparing it to the volume displaced by an equal weight of pure gold, he could determine the purity of the crown. Excited by his discovery, Archimedes supposedly leapt out of the bath and ran naked through the streets of Syracuse shouting "Eureka!" which means "I have found it!" in Greek.


To date, nobody has found the quote among Asimov's published writings [1]. Similar (but not as succinct) wordings have been identified in earlier sources, but nobody has been able to identify the exact origin of the quote.

[1] https://quoteinvestigator.com/2015/03/02/eureka-funny/?amp=1


Mid-1900s science fiction writers really seemed to like just randomly saying stuff and having people believe them. This isn't as bad as "Pournelle's Iron Law", but it's just as fact-free, and I don't think it's good to believe fiction writers.

Since then, we've moved on and now instead believe cynical standup comedians or late night TV hosts are the ones who know the truth about everything.


I'm all for it. People get lucky, then try to rationalize the past with a skill narrative. Then they soak up all the grants.


> People get lucky, then try to rationalize the past with a skill narrative. Then they soak up all the grants.

They have to put themselves in the situation to get lucky first. This person got a graduate education, was competent enough to be selected to do research in what is likely a multimillion-dollar lab owned by an institution, and then had the knowledge and ability to notice and identify what had "accidentally" happened with a micro-organism that we barely understand.

Luck was the smallest part of this discovery. I would say that the grant money is well spent funding someone so "lucky".


Everyone in science works hard. Only a few get lucky. People get scooped every day.

Source: spent years looking hard for hibernation promotion factor in P. aeruginosa ribosomes via cryo-EM. Got a PhD and worked a whole lot of 16 hour days. Never got lucky.


Many work hard designing and assembling perpetual motion machines


I can understand why, it's clearly possible. Just look at the galaxies moving away from us faster than the speed of light. Anything is possible, if you work out the magic.


But, see, that's the problem. I can't look at them...


Sure, the universe as a whole does not obey conservation of energy. That does not make it particularly useful for a generator.


It makes sense if you consider our observable universe as the inside of a huge black hole.


If this story were at all true, then you know very well that not everyone in science works hard. In my graduate cohort, those who did the problem sets first year, settled into research, and worked hard graduated. Those who did not left with a master's, although many found success in other fields. It was quite clearly delineated.


I'm talking about at the PI level. And yes of course a few people don't work hard, but the overwhelming majority do not differentiate themselves by how hard they work, is the point I'm trying to make. Your average PI has the skill set to take advantage of getting lucky.

Not sure what you're insinuating about the story not being true, would you like to see maps?


Are you saying that people with a master's degree don't work hard?


I know some worked very hard, to not work very hard anymore.


> People get lucky, then try to rationalize the past with a skill narrative.

Aka "fundamental attribution error" - overemphasizing internal or personal factors (such as skill or ability) while underemphasizing external or situational factors (such as luck or opportunity) when explaining someone's success or behavior. Fun fact: This bias has a tendency to leave stock traders bankrupt.


> People get lucky, then try to rationalize the past with a skill narrative.

This is literally the opposite of the situation put forth in the article. Accidental discoveries are accidental discoveries.

> Then they soak up all the grants.

What use does a machine learning model have for a grant? This seems like something that is uniquely useful to humans.


If the emergent behaviour were to desire more and more training corpus, then grant money would allow the AGI to purchase IP to consume.


Ah, but serendipity favours the prepared mind.


I agree with this - but there are far more prepared minds than moments of serendipity, and I think the mistake we make is assuming people can control the serendipity aspect to produce repeat performances.


Maybe AI won't forget bacteria in the ice, but, like us, it is really good at finding patterns, and at a massive scale. Instead of an accident, it could find the hibernation mechanism from another angle.

And if AGI becomes a thing, it might go "Hey, this is funny" in weird ways after it has ingested enough data.

I love the novel Colossus because almost 60 years ago it portrayed realistically how a nascent AGI could behave: https://en.wikipedia.org/wiki/Colossus_(novel)


I think ML is likely to be material to us making many more such discoveries. So much of the current constraint is not in the knowledge to identify the interesting pattern, but the capacity to look for it at scale.


Yeah, but you missed the point the OP was making.


That seems an uncharitable view of the reply.

The search space is huge, we sometimes find needles in haystacks by accident, isn’t it exciting that we have tools now that can systematically check every piece of hay?


ML search is more about ‘averages’ based on samples.

Innovations like these are more about ‘shocks’ that surface fitting cannot capture.

Note the universal approximation theorem applies only to continuous functions; it says nothing about discontinuous "shocks".


Not always. Quantile regression exists. And you can develop "no match" categories.


Quantile regression is also about averages.


Averages are formulated as measures of centrality in the L2 norm ("straight line" distance): sum(values) / count(values) minimizes the sum of squared distances. Quantile regression instead uses the L1 norm ("city block" distance); the median (the 50% quantile) is its measure of centrality. Not everything is an average. If you're interested, this is a good (but math-heavy) treatment: https://en.wikipedia.org/wiki/Quantile_regression#Computatio...
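A tiny self-check of that distinction, in plain Python with made-up toy data: the mean is the value minimizing total squared (L2) distance to the data, while the median minimizes total absolute (L1) distance, so only the former gets dragged by an outlier.

```python
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one large outlier

def l2_loss(c):
    # Sum of squared distances; minimized by the mean
    return sum((x - c) ** 2 for x in data)

def l1_loss(c):
    # Sum of absolute distances; minimized by the median
    return sum(abs(x - c) for x in data)

# Grid-search the minimizer of each loss over [0, 100]
candidates = [i / 10 for i in range(0, 1001)]
l2_min = min(candidates, key=l2_loss)
l1_min = min(candidates, key=l1_loss)

print(l2_min)  # 22.0, the mean: dragged far toward the outlier
print(l1_min)  # 3.0, the median: unmoved by the outlier
```

The same contrast carries over to regression: least squares fits a conditional mean, quantile regression (via the tilted absolute "pinball" loss) fits conditional quantiles.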


But the better the mean surface is fitted (in a generalizable way), the easier it is to spot outliers.


Well said.


Perhaps. I was thinking along the lines of MarkBurns response - ML will allow us to efficiently look in those places we might otherwise only have searched by accident.

If the OP's point was rather that "accident"/"luck" are uniquely human... I don't agree. Luck is when probability works out in your favour, and that can happen all the time with any sort of probabilistic search, which is rife in ML.


I've been reading "The Making of the Atomic Bomb". But it's really about the process of discovery in nuclear physics. And most of the discoveries were made by accident.


Steven Johnson's "How We Got to Now" was enlightening on the topic of discovery for me (https://www.pbssocal.org/shows/how-we-got-now)


But if you don't study the math and physics hard, you will not be able to understand that you may have found something valuable. It would be like pearls before swine.


I've been using the humble refrigerator/freezer for accidental bacterial science experiments all my life.

(I vividly remember as a kid leaving a slice of bread in the refrigerator as a for-credit experiment until it grew interesting green mold to study)


Very similar pattern to the recent stories about grokking (someone accidentally leaving a model training for too long, then discovering something unexpected upon realizing the accident)


not necessarily, machine learning can make more accidents faster



