I've described this on HN before; forgive me if you've already seen it.
I have chronic migraine. It sounds worse than it is; I rarely suffer attacks anymore, and when I do, they're usually mostly harmless, as in the case I'll describe below.
Contrary to popular belief, migraine is not a headache. It's something a little more like an epileptic seizure. Intense headaches are one common symptom, but there are a lot of others.
One common symptom of migraine is called *scintillating scotoma*. A scotoma is a blind or blank spot in the visual field. A scintillating scotoma caused by migraine is an area of the visual field that is temporarily replaced by a vivid visual aura. Its appearance is commonly the first symptom of a migraine attack, and is often followed by more unpleasant symptoms.
I've seen scintillating scotoma many times over the years. A few years ago I was reading and enjoying a good book, and my scotoma appeared. It was a humdinger: roughly triangular prisms of white light with utterly black zebra stripes moving along them kaleidoscope-style. It took up the lower left middle of my visual field, pulsing and radiating and turning, covering the book in my hand.
I was disappointed. I didn't want to stop reading. I was enjoying the book. So I decided to keep reading until the scotoma made it impossible to continue.
It never did.
It became so vivid that I couldn't see my hand at all, but I still had no trouble reading the book. I even started reading it aloud without any difficulty.
After a few minutes the scotoma faded. In that instance, it was not followed by any other symptoms (an outcome that has become more and more frequent over the years, starting probably in my forties).
I could now see the book again, and could confirm that what I had been reading was indeed what was on the page.
I thought about how to explain my experience. The best hypothesis I've come up with so far is that the neurological process of seeing and the neurological process of being consciously aware of what I'm seeing are not the same thing. They're independent processes. The scotoma prevented me from subjectively experiencing seeing the book, but did not prevent me from actually seeing it, nor from correctly interpreting what I was seeing.
This experience (and one or two other odd experiences) has led me to adopt the working hypothesis that many of our cognitive experiences are more complicated than we tend to assume, and that they're often made up of several more or less independent processes. We usually benefit if related processes pretty much work together, so they pretty much do. Because they do, we experience them all together as a single experience, but that's an illusion that unravels if circumstances screw up their synchronization.
I have experienced this several times and almost never have a headache. I definitely can't read what's beyond/obscured by the scotoma. The first time it happened was during a test in high school, and the splotch grew until I was forced to read the questions in my peripheral vision.
I have an anecdote that corroborates your observation that cognitive experiences are more complicated than assumed: I was once very, very sleep deprived. I became aware that my body was reacting to something, as if there were a jump scare in a movie. I unconsciously flinched and pulled away from something without knowing why, and then, a perceptual moment later, I heard a loud noise caused by something falling to the floor.
One of the other weird things I referred to is best described as discovering that I could be awake and asleep at the same time. I think it might be another case of our naive ideas about cognitive processes being a little oversimplified.
It sounds like nonsense, of course, but that's because we naively assume that sleep and wakefulness are opposites--that they are mutually exclusive. What if we're wrong about that? What if instead each of them is a set of processes that normally work together, but that can be disrupted, and what if disrupting them makes the boundary between them more porous?
I have CFS, which boils down to having something screwed up in my recovery from fatigue. Nowadays it's not a big deal, as long as I follow some rules, but it took the better part of a decade to reach that point. For several years I used prescription modafinil and armodafinil to control when I was awake (because otherwise I slept eighteen to twenty hours a day).
With the modafinil I managed to reach a stable state where I could usually be awake for a fairly normal part of the day, but a couple of times a month I'd get so tired that I'd fall asleep even with a full dose of the stimulant in me.
Those naps were weird, though, in that I remained conscious through them. I mean I'd lie down, relax like normal falling asleep, and start breathing in that distinctive arrhythmic way that tells you someone's asleep. I would just be awake the whole time, watching myself sleep. I experimented with trying to move when I was "asleep". Sometimes I could; sometimes I couldn't.
You can imagine that I really was asleep, and maybe the stimulant just caused me to dream that I was awake, watching myself sleep. The main problem I have with that explanation is that I have never in any other circumstance experienced dreams that were so much the same over and over, and without the usual fantastic elements.
Of course, you could argue that was the modafinil affecting my dreams, and that could be true.
But my hypothesis is similar to the migraine and scotoma blindsight thing: what if awake and asleep are not opposites, after all? What if they aren't actually mutually exclusive? What if, instead, they're just two complexes of cognitive and physiological states that don't normally happen at the same time because they interfere with each other? It's not useful for them to happen together, so our bodies and brains don't normally do that. But mix a strong CNS stimulant with extreme fatigue, and things get messed up.
If it was actually happening, and not some weird drug-induced dream or hallucination, then it strikes me as another case where our cognitive processes are a little more complicated and messy than we normally assume.
Being aware while asleep is something that can happen to advanced meditators. The more aware you are, the less difference there is between being awake and being asleep. I haven't heard it brought up in any other contexts, except maybe among lucid dream practitioners. I've also read accounts from those who have done dark room retreats, where the perception of being awake and asleep blurs, and where what you experience includes a lot of mind-generated content that can seem very real, dream-like, or, using Buddhist language, empty.
I went to school at Naropa University, where a lot of meditation was part of the required curriculum (at least, when I was there; I have no idea what it's like now). I never experienced the awake/asleep thing in connection with meditation, either at Naropa or in meditation practice outside the school, but I doubt that I could be in any way considered an advanced meditator.
To be clear: not every advanced meditator experiences this, and it is not a prerequisite for being "advanced". It would just not be unusual if it did happen.
You should try playing an audiobook before you lie down to rest and see if you can hear and remember what's being played. Echoing another comment describing this as similar to sleep paralysis: playing things while I'm "asleep" and recalling them once awake has led to successful results.
I've been awake while asleep too, that's what sleep paralysis is. Sounds like it was a bit different for you, in that you didn't wake up to it, but it sounds like the same experience otherwise.
Right; in my case I would instead be awake but very tired, and I'd lie down and go to sleep without losing consciousness. The first time it happened I assumed it was a unique one-time experience, but it happened again several times.
It never happened, as far as I can remember, before I developed CFS. It never happened when I was not taking modafinil or armodafinil.
I know what you mean, but I don't know how the heck you could arrange for just the right combination of large doses of modafinil and profound exhaustion at just the right times.
To give you some idea of how profound my fatigue was, let's start with the fact that I've been extremely sensitive to stimulants of any kind since I can remember. A can of coke past 8PM would keep me awake half the night. It's a trait that I apparently inherited from my mother and passed on to my daughter.
After I developed CFS, I could take a full dose of an amphetamine and sleep like a baby. I know because amphetamines were one of the alternatives my physician tried for controlling my sleepiness and attention problems.
Modafinil and armodafinil are the drugs that worked for me. By the way, please don't anyone assume that they'll do the same thing for you that they did for me. Everybody's different. Talk to your doctor, not some random internet storyteller.
It was definitely the caffeine. For example, a can of Diet Coke was just as bad. I also know from experience with other stimulants that I was just extremely sensitive to stimulants in general (and so is my mother, and so is my daughter).
I guess I would say that I'm interested in a desultory way, but without urgency.
I've achieved a good adaptation to it, and my life proceeds mostly as if I didn't have it now, as long as I observe some rules: regular schedule, good nutrition, regular low-impact exercise. (If my exercise gets too high-impact then I cross into the kind of fatigue I cannot easily recover from.)
I do follow science and medical news related to it. The present state of affairs as I currently understand it is that it's still a syndrome rather than a disease, which basically means it's a big bag of symptoms without an agreed-upon underlying cause or mechanism. There's some evidence that ties it somehow to the Epstein-Barr virus (and, perhaps coincidentally, one of my kids had mononucleosis in the months before I developed the syndrome). There's also some evidence that suggests that susceptibility to it is heritable. Some researchers have hypothesized that it's an epigenetic disorder--that is, a genetic disease that is activated by an environmental trigger (such as a viral infection--a large fraction of CFS cases start with a viral infection).
Someone published some work that claims that folks with CFS have distinctively abnormal calcium metabolism, which might account for the difficulty recovering from fatigue.
There are still some people who think it's psychosomatic, or mostly psychosomatic, too.
That's about as much as I remember off the top of my head. As I say, I'm interested, but not urgently so. I seem to have found my accommodation.
No, it's different. I'm an experienced lucid dreamer. I'm also an experienced practitioner of shamanic drumming and the associated waking dream states. It's not the same as that, either.
I have several serious sleep issues including insomnia (not usually due to racing thoughts, just inability to sleep) and circadian issues. What you are describing sounds like what I call "half sleeping", although I can always move if I try (I'll also usually unintentionally wake myself up rather quickly when lucid dreaming and can then move almost immediately; I definitely do not have sleep paralysis, although possibly it is similar but not being able to move). This happens to me quite a lot, maybe even almost daily (sometimes multiple times). I think at least some cases of "day dreaming" are a similar state as well.
Both going to sleep and waking up I often notice this state. When going to sleep, I'll either stay that way long enough to actually get to sleep (usually only a few minutes I think, possibly 15-30 minutes at most on rare occasions), or unfortunately I often unintentionally wake myself up from that state (on those occasions it feels a bit like I have a fear of sleep). After I have slept for a while, I am more commonly in that state for 15-30 minutes at a time and maybe sometimes longer; it is hard to remember (total time a day can be longer, but I will fully wake up or sleep for a while in between).

My memory for sleep-related stuff is likely much better than most people's due to the decades of insomnia; however, it still isn't that good (nor is my memory in general after so much sleep trouble). I think my sense of time is fairly accurate in that state, and it is often how I decide whether to consider it half sleeping or sleeping for my sleep log. I'll often look at a clock before it happens, and if the next time I look at the clock it is a fair amount later than I expect, then I was likely sleeping. (Although sometimes when near sleep I've looked at the clock and clearly seen a different time than it actually was, so that might be the issue at times as well; the clock thing is not just lucid dreaming, since it often happens when I'm standing up on the way to the bathroom.)

My understanding is that, the way sleep is defined, it is possible at times to maintain awareness between being awake and in light non-REM sleep, so I am also not 100% sure that "half sleeping" isn't technically sleep.
My understanding, based partly on looking at some research (although I'm hazy in my memory at the moment, so definitely look into any particular point if interested, with the idea that I might be remembering incorrectly), is that there are a number of independent processes that are usually orchestrated fairly well into what we call sleep. There are a few parts of the brain that coordinate sleep and circadian rhythm, including a part of the hypothalamus.

IIRC, what we consider to be falling asleep may be closely connected to a state of the thalamus that is exclusive with being awake, but other parts of the brain can independently be more like being awake or more like the various non-REM sleep stages. This can happen the opposite way as well, with "microsleep": when particularly sleep deprived, I'll often involuntarily nod my head down a bit and microsleep in what feels like much of my brain while generally still aware of my surroundings. I think microsleeps may also cause the loss of working memory at times when sleep deprived, and various other short-term oddities.

REM sleep is a different thing entirely, with movement inhibition based in the brain stem and where the brain is otherwise mostly in its awake state. While dreaming is more common in REM sleep, it is yet another independent mechanism that seems related to dopamine. The memory changes with sleep might be yet another usually-connected but distinct mechanism. Sleep also isn't just neurally activated; there are a variety of endogenous sleep-promoting and wake-promoting substances.

So it is definitely more complicated and messy than we usually think, with a collection of sometimes mutually exclusive states, none of which fully corresponds with what we consider to be sleeping.
This all sounds pretty much consistent with my guesses about what was happening. In particular, I'm predisposed to agree with your idea that sleep is a bunch of different processes that are normally well-coordinated. I hypothesize that the combination of my Chronic Fatigue Syndrome and the doses of modafinil disrupted that coordination, resulting in my experience of being awake and asleep at the same time.
Of course, just because we both came up with similar hypotheses doesn't mean we're right, but I'll take it as a tentative working theory for now.
Do me a favor some time and try to read through the scotoma, if you remember to. I think that I would have just assumed I couldn't do it if I hadn't been reading something really interesting when the scotoma started. It's only because it came on while I was already reading that I discovered that I was able to continue.
If you remember to try it, the results will be interesting no matter which way they turn out.
I get a migraine once or twice a year, and the most recent attack indeed interrupted my reading; I quite simply could not see the letters through the zigzag. As it gradually covered the focal point of my vision, I tried to "squint" mentally but eventually gave up. There was no follow-up headache, just a slight vertigo.
Not the GP, but I've tried to read through mine before but I've also focused my vision on the edges where the scotoma hasn't reached. I'll try to go "through" it next time...
There is a condition which manifests itself as complete blindness, but if you ask a person to try to guess what's in front of them they usually "guess" correctly:
Interesting. I get these scotomas about every 6 months or so, usually lasting for 30 or so minutes. I was told they were called "ocular migraines", and a cursory glance over the internet doesn't seem to help me differentiate between the two, although I will say that [0] is a very accurate depiction of how it looks when it happens to me.
I normally just take a break from whatever I'm doing and go for a walk. The first time it ever happened to me was the first day of a literature class--that was an awkward event, explaining to my teacher that I couldn't read at the moment.
It's a common misconception. What you're calling "ocular migraines" are more accurately referred to as "migraines with aura without headache", sometimes called "silent migraines". You can get actual ocular migraines as well, but those affect the eye (or optic nerve), and as such, are limited to the eye. Scintillating scotomas caused by migraine auras are visible with your eyes closed, and look the same in both eyes.
The image you linked is what I'm calling "flickering". My full-blown scintillating scotoma is much more vivid and fully-formed. In the incident I'm reporting, it was a 3D physical-looking structure of triangular prisms made of white light, with zebra stripes of utter blackness sweeping rapidly along them, surrounded by an aura that looked like a cross between the sun's corona and the aurora borealis.
Your first link is exactly what it looks like to me (except pulsing/shimmering). I can usually sense it before it starts then it starts as a small aberration and gradually grows to encompass almost my entire vision. Extremely unsettling the first time it happened.
Yeah, mine are always pulsing, shimmering, whirling, or moving in some other way--most often more than one way at once. My earliest ones were like a black hole in the middle of my vision with an aurora borealis slowly radiating outward while color changes whirled around it at high speed.
I'm no expert; perhaps when I see those, I'm having an ocular migraine.
If it's actually an ocular migraine (which takes place in the eye) rather than a scintillating scotoma (which takes place in the brain), that's a possible explanation for why I could continue reading. The UK's NHS says they tend to happen in just one eye, so if I was having an ocular migraine, it might have left my vision unobstructed in the other eye.
On the other hand, if that's the explanation, then I would sort of expect to have consciously experienced seeing the book while I was reading it, and I didn't--or if I did, that's not the way I remember it.
Of course, memory is untrustworthy, and it's also possible that the vividness of the ocular migraine might have persuaded my brain to tell me I couldn't see the book even though I could.
So ocular migraine is a solid alternative candidate to explain my experience. If I have another scotoma, I'll see if I can figure out a way to determine which thing is going on. Maybe closing one eye and then the other.
One time I was at a friend's house with my laptop and lowered the screen light during the night but forgot about it. The very next day, while working, I started to get that "aura" migraine.
In general, when this occurs, I stop doing whatever I'm doing, take an ibuprofen pill, and go to bed for 20 minutes. After that, I have a massive headache but no aura.
What was odd is that the very next day, I got the same migraine again! This rarely happens to me (two days in a row).
It was later that I realized that the screen light was low and that might be an explanation about why I got two migraines in two days.
Sometimes with an accompanying headache, sometimes without. That picture is a great representation of what I see, although I would say it's usually more silvery and shiny. Dunno what triggers it; I've had it sitting watching TV, trying to work on the computer (which becomes impossible), and just sitting in a restaurant eating lunch.
Normally lasts half an hour or so. Also sometimes I can be left feeling fuzzy and foggy for a day or so after.
Like you say reading just becomes impossible!
Have you found any good medication that helps when they trigger?
I get these at my center of vision when I focus close (less than half a meter) for a few minutes. They're annoying because I can't do work that requires me to look very close for too long, but seem to have no other symptom, and no doctor has figured out what they are.
> I thought about how to explain my experience. The best hypothesis I've come up with so far is that the neurological process of seeing and the neurological process of being consciously aware of what I'm seeing are not the same thing. They're independent processes. The scotoma prevented me from subjectively experiencing seeing the book, but did not prevent me from actually seeing it, nor from correctly interpreting what I was seeing.
This sounds similar to the phenomenon of blindsight:
> they're often made up of several more or less independent processes
I’ve recently read “A Thousand Brains: A New Theory of Intelligence” by Jeff Hawkins, and that is essentially what he argues: that our neocortex consists of millions of copies of a similar structure, and that many of them fire up at the same time.
I've never experienced what you have but it's fascinating.
> our cognitive experiences are more complicated than we tend to assume
I arrived at the same conclusion from different experiences. In my late teens and early 20s I worked for several years as a professional magician, doing close-up sleight of hand magic in nightclubs, restaurants, bars, parties, trade shows and corporate events. I performed for a lot of different types of people in different contexts, and noticed how people would describe an effect they'd just seen me do to a friend who didn't see it. Their memory of something they'd seen moments before was substantially different from what had actually happened.
This is unsurprising because my performance was carefully designed and rehearsed to create that misperception. What is surprising was the extreme detail and certainty of erroneous perceptions even from highly intelligent expert observers who were 100% focused on my every action with maximum effort. I found this super interesting and spent quite a bit of time exploring it with various experiments and pondering the implications which led me to the same realization you had about our conscious perceptions being less accurate and more variable than we internally experience them to be. In short, our memories can seem like a video recording to us but that's just a very convincing self-illusion.
I learned a bit of hobby sleight of hand as a kid, but never enough to work as a pro.
I did later learn another set of similar skills, though, and owned a business in that field for a while: martial arts.
Some martial arts are like stage magic in that they work in part by training yourself to do things that most people find surprising because they don't know how you do them or what training it requires to pull them off.
Not all martial arts use techniques like that, but some of them do, and some of the tricks of the trade make pretty good party tricks.
One example of a party trick that I've witnessed (and even learned to reproduce, poorly):
I know a guy who can take a dart in his hand, hold it with his arm fully extended, and, without bending his elbow, stick it in a dartboard from normal playing distance. It looks like magic: the dart flies across the room and sticks in the board.
If you have the relevant kind of training, then you know exactly how he's doing it (although, simply knowing it doesn't make it easy to reproduce!), but if you don't, then odds are you don't believe it's even possible.
I'd recommend the book "the man who mistook his wife for a hat" by Oliver Sacks which I seem to remember digs into some phenomonon like these.
There are very interesting differences between certain lesions in the left side of the brain, and in the right side. For instance there are some people that are blind, but do not believe that they are, and vice versa, which lines up with your experience.
This is a great look at just how complicated the brain is. What we experience and are consciously aware of is only a fraction of the activity in one's brain. People who are convinced their limbs aren't their own, people who lose comprehension of the concept of 'the left side' (i.e. of their body), people who are convinced they are stuck in the year 1990--the brain works in a very specific way, and damage to certain areas can derange the whole process and create unexpected effects. Often this means losing access to functionality that we think "just is"--when in fact such functionality is the result of a complex integrated circuit in the brain.
So, really out there, but I've just watched the anime "Your Lie in April", and the piano player has some kind of anxiety attack while playing; the symptoms are very similar to what you describe. Suddenly things turn gray and underwater-like and start turning, and his view/hearing of what he's playing disappears; see min 1:32 here:
I couldn't find any illness associated with the condition of not being able to hear yourself/the notes disappearing while playing, but what you are describing, losing yourself while reading as the world closes around the words in a kaleidoscope-style prism, sounds really similar.
As someone with epilepsy, the divorce between what my body is doing/experiencing and what my brain perceives or is asking for are precisely how you describe it.
It leads to a lot of out of body experiences and dissociation. As a weird side effect I also mostly only have lucid dreams now.
One thing I found odd is that there's entire groups of people who chase this kind of mental state. Usually through a state of psychedelic drugs to induce a state of altered mind. It's really odd to me because I hate the feeling myself, being forced into it. Though I suppose it's like anything, where if it's in small doses it's a fun novelty for some.
Anyway, my epilepsy has greatly affected my perception of my own perception abilities.
Very interesting indeed. This seems worthwhile to share with the medical research profession. Although perhaps it is already pretty well known and understood.
A simple demonstration of how complex our eyesight is: put your hand between your eyes, touching your nose, and compare what you see with the left eye closed and the right eye closed. On one side, knuckles, on the other, the palm. Yet when you open your eyes, your brain seamlessly fuses the images together and makes your hand largely disappear.
I have migraines fairly often and had a similar experience during a job interview. It was bizarre and troubling because at first I thought it was a floater, then I realized it was in the same place in both eyes! It grew over the course of a few minutes until it completely surrounded my visual field, then it faded away.
It's fascinating that you were still able to read without seeing! I've had migraines about every two months for the past 12 years, and with very few exceptions they've always followed the same pattern: 30 minutes of scotoma (small to large to small), 30 minutes of peace, then a strong headache for about an hour. I usually still have a mild headache the next day.
I've had the migraines mostly while sitting at the computer or reading, but it seemed impossible to me to continue to read, since the zigzags were right above the letters and frankly quite distracting. Because of that, I've never really tried to continue.
It seems weird to say that, but I'm almost looking forward to the next migraine solely to test if I can still read if I really try. That would be awesome! Thank you for your description!
I've also suffered from what I've understood to be ocular migraines, characterized by the intense ocular disruptions. I went to the ER when the first one hit, because I thought I was going blind.
> I was disappointed. I didn't want to stop reading. I was enjoying the book. So I decided to keep reading until the scotoma made it impossible to continue.
I do the same thing now. I had one come on recently when I was lifting weights. I just decided to power through it. It was perhaps the most intense aura I've experienced, but the typical after-effects (which, in my case, tend to be very bad and render me useless for 2-4 hours following an "attack") were almost completely subdued. I recall that I felt like I had a slight hangover and was able to fully function after it passed.
Generally speaking, scintillating scotomas don't fill the entire field of vision. They can do, and when they're caused by migraine auras they tend to grow and then shrink again within a 20-60 minute window. I can certainly read when I'm experiencing one. I can continue to work if I really want to, but I find that having a coffee and taking a break is much more comfortable.
Migraine auras suck, but are pretty fascinating. Depending on which part of the brain they affect, different symptoms can manifest. Visual auras à la scintillating scotoma are probably the most common, but slurred speech can happen as well, or reduced sensation in half the body, and probably other things too.
I didn't mention it, but I really like them. I mean, they're inconvenient, and they make me nervous because for most of my life they were followed by the horrible parts of a migraine, but the scotomas themselves are just so beautiful and interesting that I like them anyway. And in recent years I've been able to enjoy them without so much of the horrible parts afterward.
They certainly are interesting to "watch", but honestly, the first time I've had a scotoma, I was scared I was going blind. I remember thinking whether I'd carelessly looked into a laser at work or something, even though I was sitting at my desk reading Wikipedia when the aura came.
And almost without exception, the visual auras are followed by immense headaches, so I personally would not say I like them very much :D
Pretty different. I used to like to induce pressure phosphenes. I found that a hazard of it was that more pressure produced more spectacular light shows, but that direction led to sore eyes.
Mine grow to fill a good chunk of my entire vision and never shrink. Instead the arc continues to grow until it is outside my field of vision and therefore no longer obstructive.
About 8 years ago, I had many migraines (a few per year but I would essentially be unable to do anything for a couple of hours). Many of those were the results of being in a noisy environment or high physical effort.
I knew whenever I was going to get a migraine because I got the auras (couldn't see in front of me, but if I remember correctly I still had my peripheral vision) and couldn't feel the very tip of my thumbs and my tongue (yes, apparently you can lose this, too). A few minutes after those feelings, sure enough I would get the migraine. When in that state, I have a big headache, can't see much, nausea.
It's been the opposite for me actually. I've had migraines for over 12 years now, and they appear mostly during relaxing, quiet phases a couple of days after stressful events (or prolonged phases of stress).
For example, my wife and I worked overtime and through the weekends for three weeks before our anniversary, because we wanted everything done by then. We went to a resort for our anniversary, and sure enough, on the second day of relaxation I had an episode.
"A scotoma is a blind or blank spot in the visual field. "
Oh, I've had those - though not as lasting or opaque as yours sound. I always thought they were one step before the "blinding" pain level I get from various nerve compression syndromes and injuries I have or have had. For obvious reasons I try to arrange my life to avoid aggravating the nerves and reaching the blinding pain level, where for at least a few moments I can't see or experience anything but the pain; quite the short-circuit mechanism to grab one's full attention.
Thank you for teaching me something new! I used to get these more frequently but just had one last week for the first time in ages. Mine look quite a bit like [0] and I never knew what they were or that they had a name.
Mine used to happen a bit when playing sport so I always associated them with dehydration, but it seems like that’s probably not it.
> has led me to adopt the working hypothesis that many of our cognitive experiences are more complicated than we tend to assume, and that they're often made up of several more or less independent processes
After finishing my CompSci degree I was bored and did some first year psychology degree courses and that was also one of my main takeaways (I don’t mean it was an original take, just one of the main takeaways I remember from these courses).
See for example split-brain individuals[0], where the connection between the two hemispheres is severed or damaged - until about a decade ago this was a common surgery to treat epilepsy.
Patients seemed fine at first glance, but ultimately there were interesting incongruences [1]. I'm probably doing injustice to the description of the experiment, so please read the linked page, but essentially they created a situation where each visual hemifield (and thus each hemisphere) saw a different image (e.g. one related to chickens and another to snow), and the patients were then asked to choose, with each hand, the most related picture from a set (e.g. an egg or a snowman).
The left hand (controlled by the right hemisphere, which receives the left visual field) chose something related to what that side saw, and the right hand (left hemisphere, right visual field) chose another.
Let's say the left visual field saw snow and the right saw chickens.
The most interesting part is when the patients were asked about their choices, since only the left hemisphere can talk (the speech center).
So they explained why the right hand chose something correctly, but made up some unrelated explanation for their left-hand choices ("eggs are white like snow")!
In other more horrific terms, the right hemisphere had some level of thinking/reasoning going on but could not speak since it now had no mouth.
So, in healthy individuals there's definitely syncing going on between the two hemispheres and the different centers, but as with any complex system, I'm sure there are hiccups.
More than interesting. Thank you for sharing this experience. I’ve had mild scintillating scotoma a few times but didn’t know what they were, and never had any other symptoms of migraine.
I'd say your luck was good, in that you didn't have to experience the worst symptoms of migraine, but it was bad in that you didn't get to see some of the spectacular visions that I've been gifted with.
You can find some experimental evidence for your hypothesis that "the neurological process of seeing and the neurological process of being consciously aware of what I'm seeing are not the same thing" in the book Phantoms in the Brain: Probing the Mysteries of the Human Mind by Ramachandran and Blakeslee.
I have struggled to explain what the auras look like; for me they look like confetti, in particular the shiny stuff used to celebrate sports championships. Sorry for piggybacking on your fascinating account, but it is hard for non-affected people to imagine what it looks like unless they experience it themselves.
Yes, there are many levels of perception (there is also blindsight); this can be studied by looking at what happens in neurological disorders like yours and more severe ones. Read The Man Who Mistook His Wife for a Hat by Oliver Sacks if you find this kind of stuff interesting.
I did find this very interesting. When you experienced the scotoma, was it combined with pain around the eyes? I get migraines from time to time, but could never imagine pushing through the inability to focus my eyes, because it usually comes with pain and nausea.
The first migraine I remember was in my mid teens. It was extremely painful, with searing pain through my head and down through my torso. It felt as if someone had stuck a red-hot blade right through my head and into my body and was just holding it there.
Up into my early thirties, migraines continued to be very painful. Like you I often experienced both pain and nausea. Besides the knife through my body and the nausea, I often also experienced any kind of sensory input as painful. Hearing sounds was painful; seeing light was painful; tasting or smelling anything was painful; touching things was painful. During an attack I would generally just try to find a soft, quiet, dark, warm place to lie down until it went away.
Beginning in my late thirties, the painful and unpleasant aspects of the migraines began to diminish. By my forties, the attacks had become much less frequent, and when they occurred, they were much less painful.
The scotoma remained prominent, though. In my teens, it was generally a circular area of blackness surrounded by a rapidly-whirling multicolored aurora. It gradually changed over the years, going through several different shapes and color patterns. Most recently it's usually been brilliant white geometric shapes with rapidly-moving zebra stripes, turning slowly.
The scotoma has always been much more vivid than real vision. The shapes and colors are much much brighter, clearer, and more vivid than any real image I've ever seen.
Over the past fifteen years or so, the frequency is down to once every two or three years, and it's commonly just the scotoma for a little while, and none of the other symptoms, or perhaps a little minor discomfort in my torso and a little dizziness. Sometimes it's just a set of sensations that are really hard to describe--sort of just a hint of the feeling of falling, or a very faint experience of dizziness, together with an impression that part of my visual field is sort of starting to flicker randomly. Usually that flickering turns into the scotoma, but in some recent cases, I just have the flickering for a few minutes, and then nothing else.
In the incident I describe here, it was just the scotoma for a little while. (Sorry I'm not more specific about how long it lasts. I'd guess maybe fifteen or twenty minutes, but my sense of time is kind of screwed up during an attack.)
If you're younger, maybe the trajectory of my migraine offers some hope. Mine have gotten much easier to take over the years. Maybe yours will, too.
I suffer from migraines from time to time. Over the years I've found that some foods can trigger them, for example plant oils: I eat salad with olive oil, and half an hour later I start getting these headaches that start at the eyes.
Steven Pinker in “How the Mind Works” describes a bunch of phenomena that reveal how the brain is structured and how it processes information. The book doesn't go into a lot of detail, but it's pretty interesting nonetheless.
Take some acid and what you are describing regarding consciousness becomes fairly apparent. The amount of things that automatically happen or get filled in for us goes unnoticed until you get a chance to step outside of it for a while.
I can't say that I agree. I got pretty thoroughly familiar with several psychedelics in the late 70s and early 80s, and although I value those experiences to this day, they didn't teach me anything especially relevant to the scotoma or sleep/wake experiences I've reported.
Interesting. In my experience, I end up cleaning my house the next day, because I noticed that the light switch in my room isn't just a light switch, it's a dirty light switch. I'll see stains or smudges that I never noticed. Suddenly there are tons of details that I can see that just got smeared in "normal" waking life.
Well, I noticed a bunch of things during and after tripping, and some of it was even true, but I didn't ever notice that seeing and being aware of seeing were two separate and separable processes, and I didn't ever notice that I could be awake and asleep at the same time, not until the events I recounted here, both of which occurred years after I had, as they say, gotten the psychedelic message and then hung up the phone.
These are called ocular migraines. I've had them since I was a kid, roughly once every two months or so.
What's really interesting is that after having heart surgery, I would have like 10 a day for weeks. Others on post-cardiac-surgery message boards reported the same experience.
It went back to normal frequency months after surgery, but definitely an unsolved mystery!
I used to suffer from these auras from 2005 to 2015. I had never had one before, and they have completely disappeared since. I don't think they were ever followed by any headache, either.
A funny but related story: I was visiting Paray-le-Monial where there's a diorama about the visions of Saint Margaret-Mary ( https://en.wikipedia.org/wiki/Margaret_Mary_Alacoque ), who is at the origin of the current cult of the "Sacred heart of Jesus".
In fact, from the descriptions it's very clear that the brilliant, shimmering lights she related as her visions and revelations of the "Sacred Heart" were simply the scintillating scotomas of her migraines, followed by terrible attacks of headache that she interpreted as her mystical ordeal from God.
Disappointingly, I don't remember the book for sure. I think it might have been one of the volumes of Gene Wolfe's Book of the New Sun, which I re-read from time to time for pure pleasure.
Hrm. I had similar experiences beginning a few weeks after a 'mild' TIA in the lower right part of my cerebellum in 2002, but it manifests itself slightly differently. My whole field of view, both eyes, disintegrates into something like a mix of a https://en.wikipedia.org/wiki/15_puzzle and one side of an unsolved https://en.wikipedia.org/wiki/Rubik%27s_Cube , 'scrambled' into about 250 to 400 fields.
It starts as something like a fast https://en.wikipedia.org/wiki/Greyout within about a second, two at most, and then I'm having this 'screensaver effect' for about 30 seconds to a minute, or so.
Most fields are just scrambled into random positions, sometimes even jumping into others, while about a few dozen are impossible blacks or grays with vertigo inducing feelings, because they seem to jump back and forth between infinity and back. Really hard to describe.
Anyways, this absolutely messes with my orientation, and I shiver or even shake slightly.
However, I do not fall. One time this happened to me while carrying a cup of coffee on a saucer from the coffee machine in a conference room back to my seat.
The cup clattered on the saucer because of my shaking, but I managed to put it down on the next table with a loud clack, and drop myself into some spare chair next to the wall.
Without spilling a single drop!
Witnesses described my moves as something like an industrial robot on fast-forward. Not fluent, but in fast and small consecutive steps. But I couldn't see!
Haven't had one of those for years now, also no debilitating pains; those came first. Thing is, I mostly had these when I felt under (time) pressure, stressed, angry. (I think)
For instance, bicycling fast somewhere to be there as arranged? Happened several times, full brake!
Bicycling fast, really pushing it, but just for fun? Never happened. (So far)
Really, really strange!
Funny thing is, MRIs decades later show nothing. According to the radiologists and neurologists there aren't even traces of anything bad, and my brain has exceptionally good blood circulation.
Oh, back to language!
At the time this happened I had trouble speaking in several ways. First to find words, then to form/speak them. But only in German, which IS my native language.
Not so in English!
Also, I didn't really notice it at first, because I tend not to 'think in words', but mostly visually: in pictures, movies, and abstract mashups of these.
I also had ataxia, but not one-sided as usual, but right arm, and left leg.
Really, really strange. But mostly gone now.
Except for losing temper fast.
That stayed.
Can't have a better defence against BS though! :-)
That's all super interesting; thanks for the description!
It's probably totally unrelated to any of this, but I also think primarily (though not entirely) nonverbally, and expressing myself always involves translating nonverbal thoughts into words.
And, for what it's worth, I'm a native English speaker trained (but horribly out of practice) in German (and Latin and Mandarin and Tibetan and Hungarian and Greek and Esperanto and Lojban --don't ask me to communicate in any of them without several weeks of remedial practice!).
Wow. So many languages. I've got only English as second language. Sometimes I miss Russian and French to research old 'roads not taken' in programming languages, computer architecture, and general algorithms.
Latin and Greek would be good too, of course. For the classics, or classical education I didn't have. Went totally technical early on.
On the other hand, you can talk with me in English, but I can't do much better than a young toddler in German, unless I first spend some serious time and effort to reacquire some fluency.
In my defense, I've had some decent fluency a couple of times in the past. I ran into a German couple in Athens once and we had a nice conversation. They said nice things about my German :-).
But now in all those languages except English I'm just a dabbler.
At least I know that I can acquire some fluency; I've been able at different points in the past to carry on basic conversations in German and Hungarian and Mandarin, and read some of the easier classical texts in Latin and Greek and Tibetan.
The idea of improving my skills in any of them (or all of them!) is appealing enough for me to occasionally work on them (especially on Latin and Greek), but my attention inevitably gets captured by my numerous other interests, and my language skills fade again.
Worse, there's a long list of other languages that I think are really interesting, and that I would no doubt acquire if only it were quick and easy enough. As it is, though, I mostly just read about languages because they're so interesting. Well, and construct them. Conlanging is a hobby, too.
Hrm. I had that interest when I've been younger, but not anymore.
Btw., the English lessons I had in school were rather boring and uninspiring. I didn't really learn English that way; that happened when I had to read Neuromancer by William Gibson and didn't get it. So I bought a small-to-medium dictionary from Langenscheidt and read it again with the dictionary at hand. Then it somehow clicked for me.
That worked for reading; listening not so much. That came later by watching movies and news (BBC/CNN) and listening to radio (BFBS), and at least some fluency in speaking came even later, because of work.
Nowadays I'm only caught slightly off guard when someone unexpectedly speaks to me in English in daily life, like people asking for directions to some location, or things like that. Sometimes I need one or two seconds until it registers what they said or asked; sometimes it registers instantly.
Oh, and under stress my German gets worse: I'm at a loss for words and have to really 'search' for them, while I could say the same thing in English without any problem.
"I had that interest when I've been younger, but not anymore."
Should be "I had that interest when I was younger, but not anymore."
I thought you might like the correction; forgive me if I'm wrong. In my experience, differences in the idiomatic use of verb tenses are one of the more difficult things about learning languages.
If you hear a native English speaker say, "I've been younger," then what they're saying is, "I'm old."
In the 1800s, scholars who knew a lot of languages (including, often, several modern and ancient European languages, and increasingly also Sanskrit) started to notice some amazing patterns about similarities in words among all of these languages, including languages from very far apart and that had no visible connection at all from the point of view of their present-day speakers.
As these patterns were explored, they were found to be even more systematic and pervasive than they had appeared. In some cases, one sound would systematically change into another sound from language A to language B.
These patterns helped reveal historical connections between most languages of Europe, Central Asia, and South Asia, showing that those languages had descended from the same ancient sources. (Note that this isn't true of 100% of languages in this region -- there are other unrelated families that turned up -- but it's true of a significant majority.)
The languages that turned out to be related this way included the Germanic family (like German, English, Dutch, and the Scandinavian languages), the Romance family (like French, Spanish, Portuguese, Romanian, and Catalan), the Celtic family (like Irish and Welsh), the Slavic family (like Russian, Bulgarian, and Ukrainian), and also Farsi and all the Central Asian languages related to it, and Hindi/Urdu and all the South Asian languages related to it. Oh, and Greek.
If I tried to list all the languages and language families that were linked up to this family tree, we would probably be here all day, because they go on and on. Just under half the people in the whole world have one or another of them as a native language today!
These languages changed quickly enough in history that most people no longer noticed, or remembered, that most of them were related to one another, yet slowly enough that it's been feasible to deduce an enormous amount of information about the history of particular languages and particular words, once many of the rules and patterns had been identified.
A Grimm's Law example is that the h sound in Germanic languages usually corresponds to the k sound in Romance languages (note that this can be stated much more technically and accurately). So, it turns out that "hundred" and "cent"/"cento"/"centum" actually share the same origin. And "heart" and "cœur"/"corazón"/"καρδία" (origin of "cardiac") (also "cordial"; the root in Latin sounds like "cord-"). And "who" and "que"/"quoi"/"qui" (and similarly for other wh- questions in English and qu- questions in Romance, which were originally hw- sounds in English and kw- sounds in Latin).
Some other examples are t to d ("ten" and "decem"/"diez"/"dez", plus some of the examples above that have a t in English), and f to p ("foot" and the Latin and Greek words from which we get "pedestrian" and "podiatrist", and Romance "pied"/"pie"/"pé"; and don't forget about "father" and "padre"/"pater"/...).
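These correspondences are regular enough to check mechanically. Here's a toy Python sketch (a heavy simplification: it compares only the initial letter of each spelled word, whereas real sound laws are stated over reconstructed phonemes) verifying the Germanic/Latin pairs mentioned above:

```python
# Toy illustration of the Grimm's-Law correspondences described above.
# Germanic initial consonant -> corresponding Latin initial.
CORRESPONDENCES = {"h": "c", "t": "d", "f": "p"}

# Cognate pairs drawn from the examples in the text.
cognate_pairs = [
    ("hundred", "centum"),
    ("heart", "cor"),
    ("ten", "decem"),
    ("foot", "pes"),
    ("father", "pater"),
]

for germanic, latin in cognate_pairs:
    # Each Germanic initial should map to its Latin counterpart.
    assert CORRESPONDENCES[germanic[0]] == latin[0], (germanic, latin)
    print(f"{germanic} ~ {latin}")
```

Of course, a real etymological check works on reconstructed roots and whole sound environments, not on modern spellings; this just shows how systematic the pattern is.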
Those are just a couple of the phenomena that people noticed in the 1800s; while the exact development of these patterns gets much more complicated, they've proven to work over and over and over, for hundreds of languages and hundreds of thousands of words!
(... Also, the Grimm from Grimm's Law is one of the Brothers Grimm, from Grimm's Fairytales!)
It's so interesting: now that I'm looking for it, so many sounds that I never realized were related seem like just minor variations in tongue/mouth positioning.
In IE languages, the words for water basically follow either "aqua" or "water". Finnish (it's Ugric, not IE) follows the latter, with "vesi" (combining form "vete-").
To be pedantic, Finnish is not Ugric. Hungarian is Ugric, Finnish is Finnic. They both are Finno-Ugric, a.k.a Uralic. Many non-linguist Finns mix up Finno-Ugric and Ugric.
The Proto-Uralic *weti "water" looks like it could come from IE *u̯ódr̥, *udén- "water", but that etymology is problematic because of irregular sound substitutions[0]:
- why would Uralic have *e when it's not present in the IE original?
- why would Uralic drop *r or *n at the end?
[0] Simon, Z. (2020). Urindogermanische Lehnwörter in den uralischen und finno-ugrischen Grundsprachen. Indogermanische Forschungen, 125(1), 239–266. https://doi.org/10.1515/if-2020-011
And a search for the roots of such "shared" terms creates trees of relations, which are crucial for recognizing the clusters of meaning that link disparate "terminal" terms, and for arriving at the "patterns" in this compositional structure that are the "heart and soul" of those clusters.
(For example, I recently mentioned in these pages the terms for the "sky" as spawning from ideas of "covering", which also spawn terms for "colour" and "concealing".)
> The languages that turned out to be related this way included
...I see you have left out no less than Italian, the post-Latin language at the epicenter!
OT: BTW, Seth, thank you for Let's Encrypt! (Maybe this is also something that OP did not know?)
The book "The Unfolding of Language: the evolution of mankind's greatest invention" by Guy Deutscher explains this very well and has numerous examples. As a language noob, I found it fascinating!
Appreciate the book recommendation, I have read “Through the language glass: why the world looks different in other languages” by the same author and similarly found it fascinating.
In these cases, people who knew both languages consciously adopted the structure of a foreign word by translating its components (morphemes). For example, German has a lot of calques from Latin compounds.
That is to say that it's not always a coincidence or an indication that people inherently think of the concept in the same way; sometimes it's an indication that people thought a foreign word for a concept was awesome, and wanted to borrow it by translating it literally.
Well, English and most Indian languages share a common ancestor, but the Indic languages have drifted just as far from Proto-Indo-European as any other branch, so you can't really say one branch inherited anything from the other. Indic languages are our cousins, not our grandparents.
Perhaps not just as far though? I don't know how true it is, but I've heard that Lithuanian is considered the living language that is closest to proto-Indo-European.
English is interesting in that it has been mixed up with so many different branches of the tree, Saxon, Scandinavian, Latin, French, etc during different periods. Sometimes it has synonyms coming from eg Norse and French, where the PIE root is the same but the words are quite different, and carrying different connotations.
Loglan and Lojban give entirely too much weight to Chinese and other large languages (in terms of numbers of speakers) that have zero relationship to IE (unless you're heavily into Nostratic). I'd like to see someone combine an exemplary form of Romance languages (like, Latin or early French) with an exemplary form of Germanic languages (like modern Icelandic or reconstructed proto-Germanic) to form a new lingua franca [insert corresponding Germanic term here] for use among language enthusiasts in Europe. Call it Frankenlingua ?
The Swedish/Scandinavian word for human, "människa", has three syllables (Norwegian "menneske" sounds more like two) as opposed to German "Mensch". It sounds suspiciously similar to the Sanskrit word "manusha", which apparently also means human.
I suspect, without having researched it, that människa and mensch are direct cognates where vowels have been inserted for various reasons.
It's not unusual that an affix can have a form with a vowel that gets lost or added, or that cognates can have different morphemes in addition to their shared morphemes. (For the latter case, when writing about Grimm's Law above, I re-learned that although "hundred" is cognate with "centum", the "red" part is a separate Germanic morpheme meaning 'count'. For that matter, the "um" part is a separate Latin morpheme that is a neuter noun ending.)
Mensch, människa, menneske, ..., and (most likely) manusha are cognates. But it's still curious how "manusha" sounds like "människa" while the vowels have been dropped in "Mensch".
For some reason German sometimes drops the i in "-isch" (cognate with English "-ish"). But the only reasonably common examples I can think of are Deutsch (etymologically definitely with -isch/-isk) and Kölsch ('pertaining to Cologne'). I don't know if this is somehow an example of a more widespread phonological process in German.
In Swedish, "isk" endings are sometimes contracted to "sk" such as "polsk"="polish" and the only way to say "pertaining to Köln" would be "kölsk" (unless specifically the beer.)
But "människa" is a noun. It's pronounced "menisha" but it doesn't mean "manish". In Norwegian, the word for humane is spelled "menneskelig" literally "menneske-like". (In Norwegian the 'k' is hard.)
As far as I can gather, the literal meaning of "manusha" in Sanskrit = "born of Manu" = human. Manu is a god.
In drug development, a lot of the engineering goes into making sure the pharmaceutical compound -- having been developed -- actually reaches the target area of the body. If the compound is a relatively large particle that needs to get into the blood stream, a nasal spray might result in the compound being deposited at the bottom of a person's lungs without ever getting absorbed. If stomach acids break the chemical down then digestion might not be a feasible approach. Intravenous might work but shots are scary, or worse the medication may need to be administered in low doses over a long period of time, requiring an IV catheter.
For a pill that can be digested, it will often be the case that the pharmaceutical compound needs a lot of buffer material, sugar or something else, to prevent it from being absorbed too quickly. So if a pill says "Do not break in half" or "Do not chew", it's because doing these things increases the pill's surface area and thus the rate at which the drug gets absorbed, when the optimal rate of absorption has already been calculated.
Time-release medications in general are fascinating. There's another ADHD medicine, Vyvanse, that has a really interesting mechanism. The (dextro)amphetamine molecule is chemically bonded to l-lysine, a common amino acid. This makes it inactive, so one can't just snort it. It must be ingested, where in the digestive tract it meets proteolytic enzymes that break down the bonds between amino acids. These detach the l-lysine molecules and release the amphetamine molecules, but only at a set rate: because there's only a finite amount of proteolytic enzymes in the gut, the breakdown occurs at a predictable rate. It's so simple yet so complex at the same time; it really is a brilliant time-release mechanism.
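The "finite pool of enzymes" idea corresponds to standard Michaelis-Menten kinetics: while the prodrug is abundant relative to the enzymes, the cleavage rate stays nearly constant (zero-order release). A minimal Python sketch, with made-up parameter values purely for illustration (not actual Vyvanse pharmacokinetics):

```python
def simulate_release(prodrug0=100.0, vmax=5.0, km=2.0, dt=0.01, t_end=10.0):
    """Euler-integrate d[S]/dt = -vmax * S / (km + S) and return the
    instantaneous release rates over time (arbitrary units)."""
    s = prodrug0
    rates = []
    t = 0.0
    while t < t_end:
        rate = vmax * s / (km + s)  # Michaelis-Menten: saturates at vmax
        s -= rate * dt
        rates.append(rate)
        t += dt
    return rates

rates = simulate_release()
# While the prodrug is plentiful (s >> km), the rate hugs vmax,
# i.e. the drug is released at a nearly constant rate.
print(f"first rate: {rates[0]:.2f}, last rate: {rates[-1]:.2f}")
```

Once the prodrug runs low (s comparable to km), the same equation smoothly tapers the release off, which is roughly the tail end of the dose.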
Grapefruit juice can also increase the amount of a drug absorbed into the bloodstream. It blocks an enzyme in the small intestine that breaks down many types of drugs.
I don't actually know the answer to this, but I know that heavy particles in the lungs would trigger an intense fit of coughing to get rid of them, and so I doubt that residue would accumulate. I am not a doctor and this is not medical advice.
Harmonicas are amazing instruments. At first people assume them to be like toys (you can buy really cheap ones everywhere) and initially the concept seems really easy: 10 holes, drawing air produces some notes and blowing air produces other notes. But the more you get into it, the more amazing it gets. Soon you discover that you can bend notes by positioning your tongue in a specific way and your repertoire increases tremendously. You learn about overbending, tongue blocking, the "Wawa" sound you make covering the flow of the air with your hand rhythmically (don't remember if that has a name)...
I knew very little about music when I decided to play the harmonica. I was looking for an instrument that required little maintenance and was easy to transport and play anywhere, so I decided to learn the harmonica by myself. It's easy to start with, but it gets challenging when you try to learn those techniques, mostly because you can't see inside people's mouths to see what they're doing, and you aren't used to forcing your tongue into those specific movements. When you see what people can do with it, it can be jaw-dropping. I'm still a beginner, but it's so rewarding, after a lot of practice, to get your first bend, to reproduce songs from muscle memory, to sound a little bit closer to those blues masters... definitely something worth trying, in my opinion.
(Pedantry) That's actually a chromatic harmonica, which is a somewhat different instrument from the diatonic harmonica / blues harp aspaviento is talking about. Chromatic harmonicas have a little lever that mechanically does some of the difficult tongue / breath stuff.
After reading your comment and its replies, I went looking on Youtube for some harmonica videos.
I don't play the harmonica, and I don't have any idea about music. However, I do like the harmonica as an instrument whenever I hear it in some songs.
I find that the harmonica usually plays multiple notes together, and to my untrained ears this always made me feel that it could never hold the stage the way, say, a saxophone can.
Then I came across the video below, and honestly, I was extremely impressed. It felt as if each note was being played by itself and the artist had complete control over the instrument.
Nice one, the only one of these posted harmonica songs I like, although I liked the melodica too. I've enjoyed the harmonica melodies in Falcom games (Ys Origin and Trails in the Sky are the two I can think of but I think there are others):
Ordering a typical printed circuit board with components already placed on it is so cheap nowadays that it's a viable option for electronics hobbyists. For example, I just ordered 50 pieces of an audio output amplifier at 3.49 € per piece (customs and shipping included), where 1.60 € would be the parts cost of the two chips on it.
Only thing you need for this is:
1. An idea and some knowledge about electronics
2. An EDA software where you design the schematic and the PCB (Horizon EDA, KiCAD, Eagle, ...). This software will give you the Gerber files needed to manufacture the PCBs and the BOM (Bill of Materials) and a Pick-and-Place file which is basically just a CSV telling the manufacturer which part goes where and in which orientation.
3. Using the BOM, you find corresponding parts that the manufacturer stocks and create a CSV that matches their format requirements
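To make the pick-and-place file from step 2 concrete, here's a sketch of generating one in Python. The column names follow the style JLCPCB documents for assembly orders, but check your manufacturer's requirements; the part designators and coordinates here are made up:

```python
import csv
import io

# Hypothetical placements: (designator, mid X in mm, mid Y in mm, layer, rotation in degrees)
placements = [
    ("U1", 10.50, 22.00, "Top", 90),
    ("C1", 14.25, 22.00, "Top", 0),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Designator", "Mid X", "Mid Y", "Layer", "Rotation"])
for row in placements:
    writer.writerow(row)

print(buf.getvalue())
```

In practice your EDA tool exports this for you; the point is just that it really is a plain CSV you can inspect and fix by hand if a part ends up rotated wrong.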
Back when I was in school, our introductory electronics class had us make a PCB for a PICmicro programmer on our first day. We got a handout of the schematics. We had to transfer its negative to a copper coated plate using some chemical, used UV light to fixate it, then dissolved the rest of the copper in acid. Finally we got to drill the holes for through-hole soldering points.
Next class we soldered all components. And by the end of two classes, everyone had their own functioning (or not ...) programmer, which we used the rest of the year on a variety of electronics projects. Great fun.
This is 100% accurate and I recommend it. I know that most people reading this right now will say "this is so hard I don't even know where to begin", but, as with anything, once you know how it's done, it's trivial.
The parent comment is a good introduction. Watch some KiCAD videos on designing circuits; the process is pretty foolproof if you design the schematic correctly, and then you basically "export Gerbers", upload them to the fab (JLCPCB), and have PCBs in a few days.
Here's a small PCB I designed, together with Kibot project files that will error-check and export the Gerbers automatically:
Install https://github.com/INTI-CMNB/kibot, run `kibot` in the directory of the designs, and you'll get files that are ready to upload to JLCPCB. Edit the designs etc to make your own thing, that should help you get started.
I find this to be amazing. I last designed a single layer PCB for an industrial control application, in the 1990s, using OrCad for the schematic, and a paint program to lay out the PCB.... needless to say, I look forward to KiCad and just having something show up at my door.
Now all I need is an idea worth spending the money on. 8 )
Out of all the hobbies I've had, electronics (mostly something to do with the ESP microcontrollers) is the most satisfying one once everything works as intended. While electronics knowledge is hard to learn on your own (in my case), there are tons of communities online that are more than willing to help.
I tend to float around the "Electronicity" and "Digital Hackerspace" discords. Other than those there's subreddits like r/AskElectronics, r/electronics, r/esp8266, r/esp32 etc.
Sometimes people advertise their discords there, and I like to pop in and see what they offer (that's how I found out about the previously mentioned discord servers).
I watched a piano/music teaching video from the 60s yesterday that made it all "click" [1]
Most modern chord patterns are easy to work out on a piano.
Start on any white key and play 7 white keys left to right. That is called a mode: a series of notes separated by certain intervals, which you can use to compose a song.
Different modes have different vibes. The atmosphere of a song (sad or happy) correlates directly with how "willing" the notes are to resolve to the mode's root (the first of the 7 keys you hit).
Now, take a 1-3-5 grip on the white keys (press one key, skip one, press one, skip one, press one). That is called the first chord. Move one to the right, and do it again. That is the second chord. One more to the right is the third etc.
Most popular songs are composed by using mostly the same repeating chord patterns:
- 1-5-1-4 (first, fifth, first, fourth)
- 1-4-5
- 1-5-6-4
If you experiment a little bit with this, you will instantly recognize a lot of songs, although they might be pitched higher or lower.
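If you want to noodle on this away from a piano, the 1-3-5 grip and the numbered chords are easy to sketch in a few lines of Python:

```python
# The seven white keys of the C major scale.
WHITE_KEYS = ["C", "D", "E", "F", "G", "A", "B"]

def triad(degree: int) -> list[str]:
    """The 1-3-5 grip: press a key, skip one, press, skip, press."""
    i = degree - 1
    return [WHITE_KEYS[(i + step) % 7] for step in (0, 2, 4)]

# Spell out the 1-5-6-4 progression from the list above:
for degree in (1, 5, 6, 4):
    print(degree, triad(degree))
```

Playing those four chords in a loop should already sound like half the pop songs you know.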
One step further, is learning different chord fingerings: we started with 1-3-5 which mostly results in a major or minor chord in a mode.
- 5th chords are 1-5
- 7th chords are 1-3-5-7
- sus2 is 1-2-5
- sus4 is 1-4-5
- etc
Now, the next thing you need to learn is to transpose: move songs one or more keys up or down, taking the black keys into account as you go. This will allow you to play along with a song in its exact key (the starting note of its scale).
There's other things (like modulating modes in songs), but this should give you a headstart.
Ever since I learned that the frequency ratio between adjacent keys on a piano is always a constant twelfth root of 2 regardless of whether you're going black-white or white-white, I've been sort of frustrated with the way pianos are designed. Transposing a song to a different key could have been effortless if keys were structured differently, such as with uniformly alternating white and black keys, rather than the groups of 2 and 3 black keys we have now. By optimizing pianos for C major / A minor, we've made everything else so much more difficult. And no other instrument does this afaik. String instruments don't obscure the constant ratios at all, so it's easier to transpose songs on them.
I suspect someone may reply to this with "it'd be harder to keep track of what notes you're on if keys were uniform like that", but I don't think we actually rely on feeling the keys to know where we are. Would be willing to be proven wrong on that. But I suspect simply coloring/shading each fifth and octave differently should do the job, maybe also texturing them differently for the blind.
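The constant-ratio point is easy to verify: in equal temperament every adjacent key, black or white, is a factor of 2^(1/12) apart, so transposing is just adding a constant number of semitones. This uses the standard MIDI convention that note 69 is A4 = 440 Hz:

```python
# Equal temperament: each semitone step multiplies frequency by 2**(1/12).
def freq(midi_note: int) -> float:
    """Frequency in Hz of a MIDI note number (69 = A4 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

ratio = freq(61) / freq(60)  # C#4 / C4, a black-white step
print(round(ratio, 6), round(2 ** (1 / 12), 6))  # the same constant ratio

# Transposing is just adding a constant number of semitones:
melody = [60, 62, 64, 65, 67]          # C D E F G
up_a_fourth = [n + 5 for n in melody]  # F G A Bb C
print(up_a_fourth)
```

On a uniform keyboard that arithmetic would map to a uniform hand shift; on a standard piano the shift changes the fingering.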
Pianist here. I see where you're coming from, but I definitely do also rely on the different shapes of keys to know where I am without looking. I've been learning to play the button end of an accordion and personally I find the uniformity makes it harder. But maybe that's just what I'm used to.
As an aside, I find c major a horrible key to play in, again I think the uniformity makes it harder and less ergonomic, but the same thing makes it easier to teach to beginners hence why everyone starts there.
Another data point: I play the organ, which involves playing a large keyboard with your feet (https://en.m.wikipedia.org/wiki/Pedal_keyboard). The structure of black notes is crucial to being able to play without looking (which is important because you’re simultaneously doing various other things). Concretely, I couldn’t imagine playing the organ without being able to slam my foot into the right hand side of a B flat in order to know that my foot is on top of the B. If you watch an experienced organist’s feet you’ll see them do this all the time.
This is pretty eye opening to me, I would have never thought anyone found C major more difficult to play in than other keys. To me, difficulty is directly proportional to the amount of stuff in the key signature, but that may be because I've never had proper education (~5 years of lessons as a child from a very casual teacher).
The way we happen to notate music doesn't help to endear people to f sharp major, certainly! If you keep practising all the scales and try to learn to do more by ear/improvise, I'm pretty sure you'll end up enjoying other keys more.
It is so annoying that standard guitar tuning isn't all perfect fourths and we've decided to throw in a major third between the G-B strings. Bar chords be damned, I want to be able to visualize it symmetrically and have chord shapes that work the same across the fretboard.
As a longtime guitarist myself, I found it quite easy to learn.
You can think of each row as a guitar string. In the default setup, each row is a perfect fourth above the row below. The green lights are the white keys on a piano; the unlighted keys are the black ones. The blue lights are C natural.
Skip a square diagonally up and to the right for an octave.
Chord shapes are uniform across the whole surface.
Keys are sensitive to pressure, left-right movement, and top-bottom movement. In the default configuration, rocking left and right yields vibrato and moving top-bottom controls some other MIDI parameter (usually filter cutoff, but it's configurable).
Sliding finger pressure to different keys to the right and left gives portamento. Sliding to the next row up or down gives legato.
Once you get used to these parameters, it becomes super expressive.
It's a MIDI controller, not a complete instrument, so you need a MIDI-controllable source of sounds. It's an MPE instrument, so, if you want the full experience, you want some kind of MPE-capable source of sounds (MPE is "MIDI Polyphonic Expression", and it means that multiple MIDI channels are dedicated to the instrument in order to give full, separate expression to each of several voices). MPE is a relatively new MIDI development that is not supported everywhere yet, so you have to pay attention to which sound devices support it if you want the full experience.
I have one; it's my favorite electronic instrument that I've ever tried. I also have an Eigenharp Pico, and I like it, but I like the Linnstrument better. (It doesn't look as cool as an Eigenharp, though :-).
When I decided to get one, I discovered that they're a bit hard to buy. Not very many places have them, and when I was looking, even Roger Linn didn't have any available. I eventually found one for list price at Detroit Modular (https://www.detroitmodular.com/).
I watched the LinnStrumentalist video per your first link. Amazing stuff. And it does seem to reduce the amount of finger contortion required compared to most keyed or stringed instruments.
It does, though, seem to demand that the player look at the keyboard. For an instrumentalist, this may not be a big deal. For a performer accustomed to connecting to his audience with his face, as do many singers, it seems like a big drawback.
Yes, the Eigenharp is easier to play without looking, but I can say a couple of things about the need to look at the Linnstrument.
I've only had mine for a few weeks, but the need to look at it seems to be slowly disappearing. Now, the surface does feel really uniform, which is not so helpful for that. As you probably expect, there's not much on it that you can use to orient your hands by touch. Still, I do seem to be gaining some ability to know where my fingers go without looking. Partly it seems to be just acquiring a feel for how far apart the keys are (and you can feel the boundaries between keys). Part of it is starting to know which keys I'm touching from the pitches it's producing.
The second thing is that I've seen videos of players who attach a guitar strap and sling it over their shoulders (it comes with the attachments for that), playing it flat against their chests.
Taking those two things together, I'm fairly confident that it's possible to learn to navigate the keyboard by feel.
I haven't tried the over-the-shoulder thing, myself. So far, I play it on a table like a piano, or like a lap steel guitar or dobro.
I find it easy to learn, comfortable, and expressive, and it's quickly become my favorite MIDI controller. I prefer it over both piano-style keyboards and MIDI guitars.
You also want open chords. There's nothing stopping you from tuning the way you describe, but you'll find it much harder to play harmony that's musical if you do.
I have one, in the form of a Chromatone keyboard (https://chromatone.jp/chromatone/index.html, no longer available). The uniformity does make it harder to play by touch, but it otherwise has a lot of advantages. I have been experimenting with different colours and tactile markers (little rubber feet intended to stick on the bottom of things) but I haven't settled on anything yet.
This is really interesting, thank you for sharing! I wonder if it would have been more successful if it had just two rows instead. The duplicate rows add more power, but at the cost of adding ambiguity when sightreading, as you have to impromptu figure out what fingering you want to use. And it also makes it more expensive and intimidating looking. I feel like two rows would provide the core benefits while being more accessible. But I could be totally off, as I've never touched this thing.
The more compact nature of the layout means you pretty much need more than 2 rows, or fingering some things seems to become very difficult. It feels like your fingers get tangled more easily if you stick to two rows (which you do at first, out of habit). I don't think it adds much to the cost (the different places to play the same note are all on the same physical lever), but it certainly makes it look intimidating, and the "playing by touch" issue is unsolved AFAIK.
I think the physical size of pianos is why the first decent design "won"; you don't take your own piano to where you're playing, you play whatever's there. In the electronic age there's a chance for Janko to take off, as that's not universally true any more, but I'm not holding my breath.
Transposing is effortless on most modern keyboard instruments: you press the transpose button. Now you have a choice of what key it sounds in and a choice of what fingering you find convenient.
Isn't that better than your suggestion, where you're locked into one fingering?
I don't think real pianos are tuned precisely like that, and the use of a dominant key may have been about historical trade-offs in tuning between different keys, but that's mostly a guess.
There's also the ergonomic aspects of key size and hand width.
Guitar and stringed instruments are easy to transpose because there are no "blessed" notes (other than the open strings, but even those can be turned nonstandard).
Building off of this to talk about functional harmony (I recommend a book called modeology)
Look up roman numeral notation for this next bit.
A V-I is called a perfect cadence in the major scale, and pretty much any other mode in which those chords naturally occur. The most important tones are the major 7th, which is one semitone away from the root and has a strong desire to resolve up, and the perfect 5th, which is a very stable sound that wants to move up a fourth or down a fifth to the root. As such, the third can be major, minor, omitted, or replaced and the function of the chord remains pretty much unchanged. This is called modal interchange, since we are borrowing from a different mode for the root, in this case any mode other than Ionian (major). Mode changes the strong and weak cadences, however, so be mindful that your alteration doesn't make a progression fall apart.
My favorite is ii7, iio7, and Imaj7. Another topic that's fun is tritone substitution, but that would make this a borderline blog post.
"o" in this context marks a diminished chord, which also means that the 7 is double-flat (bb), because theory.
It can get crazier; there are tons of YouTube videos that explain it better than I can.
> One step further, is learning different chord fingerings
By the way, pianists use the term "fingerings" for which finger is placed on which key. For example, for the C-E-G chord, which you wrote as 1-3-5, the standard way of playing it as a beginner is with fingers 1-3-5 (1 being the thumb, etc). More advanced players often play it with fingers 1-2-4. That lets you quickly change to the C-F-A chord with 1-3-5 fingering, which is essentially the F-A-C chord. By moving the hand one key to the right, you get D-G-B with 1-3-5 fingering, and voila, you've got your first chord progression: C, F, G.
> correlated by how "willing" the notes are to resolve to the mode root (the first key of 7 you hit).
I've heard this explanation many times before, and I still don't understand how notes can be "willing" to do anything, or how a note can "resolve" to the root. Can someone ELI5?
It's theory, not law, and is in large part cultural and contextual. Basically, the motifs that we are familiar with have a certain notion of consonance and dissonance...consonance being notes that sound 'nice' together (due in part to overlapping harmonics reinforcing the sound), and dissonance being notes that sound a little 'off' or 'tense' (harmonics that don't overlap, so you get a 'beating' effect as the harmonics go in and out of phase).
Note 'resolution' is basically going from dissonance to consonance, providing a pleasing sense of stability.
What is considered consonant and dissonant can change with time and culture, so it's not a completely solid relationship, but overall a general pattern (like the notion of 'Major' being happy and 'minor' being sad is not a universal association).
The willingness attributed to the notes is really about the expectations we have as listeners. For reasons that are some combination of acoustics, our auditory system, and cultural norms, we perceive some sequences of notes as creating tension, and others as resolving that tension.
When you hear a musical line that resolves, it sounds finished. When you hear one that doesn't resolve, it sounds like there's another note that needs to be played to finish it.
Different people emphasize different ones of these factors in their theory of how this works, but one thing we can mostly agree on is that different arrangements of notes played together or in sequence convey different moods or emotions in a way that is fairly predictable, at least within a given cultural context. So composers can intentionally write happy or sad or suspenseful songs or whatever using music theory.
The musical modes can be thought of as a single pattern of note intervals across an octave, but starting on different notes in the sequence. The standard way to lay out the modes is to start with a scale that is called the Ionian mode, the diatonic scale, or the Major scale.
These names all refer to the same pattern of intervals. Counting semitones, this pattern is 2-2-1-2-2-2-1.
(The "2" means two keys on a piano, or two frets on a guitar, for example.)
If you play that pattern, you will hear the familiar major scale in the key that is named by the first note you play. If you start on C, it's C Major.
The other modes are the same pattern, but starting on different notes. If we continue to use the notes of the C Major scale, leaving the pattern where it is, then starting on D instead of C gives us the Dorian mode (in the key of D). Its pattern is 2-1-2-2-2-1-2 -- in other words, exactly the same set of intervals as the Ionian mode, but shifted by one.
If you keep moving the starting note up by one, you get each of the other modes: Phrygian, Lydian, Mixolydian, Aeolian, and Locrian.
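Since the modes are just rotations of one interval pattern, you can generate all of them from the major scale's 2-2-1-2-2-2-1:

```python
# The seven modes as rotations of one interval pattern (in semitones).
MAJOR = [2, 2, 1, 2, 2, 2, 1]  # Ionian
NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
         "Mixolydian", "Aeolian", "Locrian"]

for i, name in enumerate(NAMES):
    pattern = MAJOR[i:] + MAJOR[:i]  # rotate the starting point
    print(f"{name:10s} {pattern}")
```

The Dorian row comes out as 2-1-2-2-2-1-2, matching the pattern described above.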
Each mode has its own mood or emotional character. The Aeolian mode, for example, is the same sequence of intervals as the Natural Minor scale, and tends to sound melancholy or wistful.
A simple trick for hearing what a mode sounds like is to sit down at a piano, find the first note of the mode you're interested in, and play the first note against each of the other notes in sequence up and down the scale. It's easiest to do if you start with the Ionian mode in C, because it's simply all the white keys on the piano, starting with C and going up to the right. So, to hear what the Ionian sounds like, play C, then C and the white note immediately to the right of it at the same time, then the next white note, then the next, all the way up to the next C (and, optionally, back down again).
If you repeat this exercise, but start on D instead of C, you'll hear that the scale has a different character. Keep going, and you get a nice sense of what each mode sounds like. Dorian's cheerful; Aeolian's melancholy; Phrygian's a little exotic; Locrian is just weird and never seems to resolve.
If you play an F# then a C, does there not seem to be a "relationship" between the notes, that is different from if you play G then C? I think for most people it intuitively makes sense when they hear the notes. To me it sounds like F#->C is "awkward" whereas G->C is more "satisfying". Not sure if the explanation is in culture, neurology or physics or what, but to me an "emotional" relationship between the notes seem apparent on listening.
Since the notes are just frequencies of sound waves, it makes sense to me that some frequencies sound better together and some frequencies don't quite sound resolved when played together.
Consider arm sized, hand sized, fingernail sized, and pinched-fingers "tiny" sized. About 1000, 100, 10, 1 mm. Tray of cookies, hand-sized cookie, chocolate chip, and tiny crumb - yum. Now zoom 1000x, taking tiny sized to arm sized, and they become 1000, 100, 10, 1 um. Call this microview, with microscopic microorganisms measured in micrometers. A grain of salt is sized like a cardboard box, a head hair like a hand-sized pole, your red blood cells like red M&M Minis candies, and bacteria like nonpareil sprinkles. Zoom 1000x a second time for nanoview, with nanotechnology and nanoscale nanoparticles measured in nanometers. Arm, hand, fingernail, tiny, are now 1000, 100, 10, 1 nm. Bacteria are garbage bags and benches, viruses are small sports balls, proteins are chewing gum, and atoms are sand. Beach world, with a grain of salt towering over the city skyline.
Scale/size can seemingly be taught accessibly young... we just don't. Nor use size as an organizational frame to catalyze understanding of the physical world. Asking first-tier medical school graduate students how big red blood cells are... goes surprisingly poorly. But there seems lots of fun to be had.
Years back I was doing passthrough AR, and spiked a zoom to nanoview (atom beach and towering grain of salt; hardwired to a parking-lot view out my window). Was crude but fun. Intended to go to picoview, to demo a physically-realistic atom-bonding interactive, but it got put aside.
Magic school bus uses "zoom you", rather than "zoom objects". Tradeoffs, but one advantage of "objects", especially for chunked-zooming and AR, is you retain your environment to use as a size reference. Eg, "the red blood cell is M&M sized, and a grain of salt is cardboard-box sized, therefore the table, room, and playground are...".
Fwiw, my fuzzy recollection is someone wrote a simple VR zoomer in unity some years back. Re dioramas, you might find some inspiration from http://www.clarifyscience.info/part/Atoms (very slowwwwly loading page - wasn't intended to be public). I did outreach with a few 1 m diameter tables, each with assorted objects. Pool-noodle floats for hairs, Goodsell's molecular bio illustrations in nanoview, etc. Thought about how one might design a larger exhibit.
But I never did manage to find a community interested in this kind of thing, aside from scattered folks at MIT and Harvard. Size/scale is occasionally taught down at the primary level, and is taught in most every science and engineering curriculum. But knowledge of size/scale is rarely then used to teach other things, so there's little incentive to teach it successfully. With "well, that shouldn't come as a surprise, but oh my, yipes" outcomes.
There is a book from the '80s/early '90s, "Powers of Ten: About the Relative Size of Things in the Universe" by Philip & Phylis Morrison, that is exactly this.
It's similar. There are several such continuous or 10x-stepped zooms, but a downside for education is that it's easy to lose context. Were viruses bigger or smaller than bacteria? Was Earth 10^6 or 10^7 m? Whereas, if you're asked, say, how big is a glass of water, you're unlikely to say fingernail sized, or spread-arm sized - you've handled the item, and so now have a feel for how big it is. The 1000x chunking lets you leverage that. Once you've eaten red blood cell M&Ms, you know they are 10-ish..., err, ummm, not mm, not nm, so um. The Earth is a blue marble, so 10-ish..., err, mmm, not km, not Gm, so Mm.
A downside of 1000x steps is objects can end up inconveniently sized. 2 um zooms as either 2 m or 2 mm, often inconveniently large/small. Different zooms are good for different things. The small toy car scaled nicely for making roads, vs the larger toy car with moving doors; the small doll scaled nicely for making rooms, vs the larger doll with brushable-not-painted hair. Here, the 1000x step zooms are good for remembering and interconnecting sizes, but for then playing, you'll often want some other zoom.
for a neat visualization of this, https://www.htwins.net/scale2/ — it’s a bit old now, so to view it on mobile the app is pretty much required. i can’t remember exactly what i paid for it on the app store, but i know it’s been worth the few dollars
Gotta spell anything over a voice line? Use it. No more screwing around with "B.. like Bryan, and then U, like .... under". s-c-r-o-l-l-a-w-a-y: sierra charlie romeo oscar lima lima alpha whiskey alpha yankee.
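If you'd rather have the machine do it while you're learning, the mapping is a trivial lookup, e.g. in Python:

```python
# The NATO/ICAO phonetic alphabet as a simple lookup table.
NATO = {
    "a": "alpha", "b": "bravo", "c": "charlie", "d": "delta", "e": "echo",
    "f": "foxtrot", "g": "golf", "h": "hotel", "i": "india", "j": "juliett",
    "k": "kilo", "l": "lima", "m": "mike", "n": "november", "o": "oscar",
    "p": "papa", "q": "quebec", "r": "romeo", "s": "sierra", "t": "tango",
    "u": "uniform", "v": "victor", "w": "whiskey", "x": "x-ray",
    "y": "yankee", "z": "zulu",
}

def spell(word: str) -> str:
    """Spell out the letters of `word` phonetically, skipping non-letters."""
    return " ".join(NATO[c] for c in word.lower() if c in NATO)

print(spell("scrollaway"))
# sierra charlie romeo oscar lima lima alpha whiskey alpha yankee
```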
I learned it when I was 14, and I have genuinely used this a TON in the decade and a half since.
When calling an airline, ask if you can give them the PNR (Passenger Name Record, what your ticket calls the "booking reference") and then read it off with the NATO alphabet. You will get instant credibility with the agent on the other side as a flyer who knows their shit, plus it's hella fast.
I learned this when I was a teen as a simple memory exercise. Each night in bed before falling asleep I'd try to run through the phonetic alphabet and I've retained it ever since (30+ years later).
It's hugely valuable for removing ambiguity and confusion in phone calls. While most people don't know the phonetic alphabet well enough to spell words using it, everyone understands when a word is spelled to them using the phonetic alphabet.
Which is what the name of Monty Python member John Cleese would be if his father hadn't decided to change it to "Cleese" because he didn't like being named "Cheese".
Whenever I need to call an airline about a flight, I look up the NATO spelling of the reservation code. So far call center employees always understood me, used it back, and it made the communication extremely fluid.
A few minutes? Ha, no way. I think it'd take me at least a few days of short deliberate practice sessions to actually memorize this from scratch. Also, useless outside of the US.
Useless outside the US? Ignoring the obvious objection, one of the design goals of the initial iteration of the NATO/ICAO phonetic alphabet was that every word in it would be familiar to you as long as you were fluent in either English, French or Spanish.
The phonetic alphabet is very much recognized and used in any English-speaking country around the world, plus airline staff working in any country period.
In statistics, the mean and median are both averages, but are technically quite different. The mean is the sum of all items divided by the number of them, while the median is the middle item in the sorted list. When they are different, this can have strange effects. For example, when estimating engineering tasks, the mean error is often positive while the median is often negative.
This is why your manager believes that engineers are too conservative and always pad estimates, while projects are always late. Indeed, they can learn both of these things from observation without realising that they are contradictory: humans learn from the most common observation, which is captured by the median, but outcomes can be dominated by less common events, which affect the mean while barely moving the median.
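A quick simulation makes this concrete. The lognormal task durations here are made up purely for illustration; the point is the sign difference between the two averages:

```python
import random
import statistics

# Tasks are all estimated at 10 units; actual durations are right-skewed,
# so most finish a bit early but a few blow up badly.
random.seed(1)
estimate = 10.0
actuals = [random.lognormvariate(2.2, 0.6) for _ in range(10_000)]
errors = [a - estimate for a in actuals]  # positive = late

print(round(statistics.mean(errors), 2))    # pulled positive by the blowups
print(round(statistics.median(errors), 2))  # the typical task runs early
```

The median error is negative (the typical task beats its estimate) while the mean error is positive (projects overall run late), exactly the contradiction described above.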
One of the rules of thumb of process management is to reduce the median, rather than the average time taken, because outliers are more likely to have one-off causes and so be harder to fix systematically. In fact, attempts to fix outliers can be a distraction and even net negative.
Interesting. I wonder if that applies in SW dev though. In standard processes each correctly produced item satisfies a customer, but for sw dev individual tasks are internal and customers only care about the final outcome.
Also standard processes normally involve Gaussian distributions, so large deviations are rare and can be called outliers, but software estimation has fat tailed or bimodal distributions, so large deviations are part of the standard order of business.
> In statistics, the mean and median are both averages, but are technically quite different.
But numerically, they are not too wildly different! The difference between the mean and the median is at most one standard deviation. If the distribution is unimodal, then the difference is at most sqrt(3/5) standard deviations.
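You can sanity-check both bounds numerically on a deliberately skewed distribution (an exponential, which is unimodal, so both bounds should hold):

```python
import random
import statistics

# Exponential(1): mean = 1, median = ln 2 ~ 0.693, sd = 1.
random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]

gap = abs(statistics.mean(xs) - statistics.median(xs))
sd = statistics.pstdev(xs)

print(gap < sd)                    # one-sd bound
print(gap < (3 / 5) ** 0.5 * sd)   # tighter unimodal bound
```

For this distribution the gap is roughly 0.31 standard deviations, comfortably inside both bounds.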
> In statistics, the mean and median are both averages, but are technically quite different. The mean is the sum of all items divided by the number of them, while the median is the middle item in the sorted list.
There was an article posted here, one or two years ago, that explained how mean and median are very similar: each is the value that minimizes residuals, with the median minimizing the sum of absolute residuals and the mean minimizing the sum of squared residuals.
I'm sorry I can't find it right now; I hope some other user knows which article I'm talking about and can link it down here, because it is, in my opinion, an excellent read.
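If that's the idea I'm thinking of, it's easy to check with a brute-force grid search:

```python
import statistics

# The mean minimizes the sum of squared residuals; the median minimizes
# the sum of absolute residuals. Check both on a small skewed sample.
xs = [1.0, 2.0, 2.0, 3.0, 10.0]

def sum_sq(c: float) -> float:
    return sum((x - c) ** 2 for x in xs)

def sum_abs(c: float) -> float:
    return sum(abs(x - c) for x in xs)

# Search a fine grid for each minimizer.
grid = [i / 100 for i in range(0, 1100)]
best_sq = min(grid, key=sum_sq)
best_abs = min(grid, key=sum_abs)

print(best_sq, statistics.mean(xs))     # both 3.6
print(best_abs, statistics.median(xs))  # both 2.0
```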
Interesting. I always learned mean/median/mode as different things. The mean is the average, the median is the middle value, and the mode is the most frequent value. I wasn’t aware of this issue with conflating mean/median.
That being said, I remember learning that and always wondering why I needed to know what “mode” was haha
i found the concept of https://en.wikipedia.org/wiki/Fr%C3%A9chet_mean helpful. it's a generalization of the different kinds of averages and you can express the arithmetic mean, geometric mean, harmonic mean, median or even mode just by swapping out the distance metric in the formula.
The Boltzmann brain hypothesis suggests that it would be more likely for a single brain to spontaneously and briefly form in a void (complete with a memory of having existed in our universe) rather than for the universe to come about in the manner cosmologists think it actually did.
In this physics thought experiment, a Boltzmann brain is a fully formed brain, complete with memories of a full human life in our universe, that arises due to extremely rare random fluctuations out of a state of thermodynamic equilibrium. Theoretically, over an extremely large but not infinite amount of time, by sheer chance, atoms in a void could spontaneously come together in such a way as to assemble a functioning human brain. Like any brain in such circumstances (the hostile vacuum of space with no blood supply or body), it would almost immediately stop functioning and begin to deteriorate.
By one calculation, a Boltzmann brain would appear as a quantum fluctuation in the vacuum after a time interval of 10^10^50 years. This fluctuation can occur even in a true Minkowski vacuum (a flat spacetime vacuum lacking vacuum energy). Quantum mechanics heavily favors smaller fluctuations that "borrow" the least amount of energy from the vacuum. Typically, a quantum Boltzmann brain would suddenly appear from the vacuum (alongside an equivalent amount of virtual antimatter), remain only long enough to have a single coherent thought or observation, and then disappear into the vacuum as suddenly as it appeared. Such a brain is completely self-contained, and can never radiate energy out to infinity.
Well then it’s equally likely that badgers the size of Earth could be forming from time to time, and since they’re the size of earth they would decay much more slowly than our puny local badgers… so shouldn’t we see the occasional Earth-sized badger through the Hubble?
Or how about the much lower odds needed for a copy of Shakespeare’s plays translated to Klingon inscribed on a giant sheet of titanium to appear spontaneously? Far more likely to happen than a Boltzmann brain, right? So should we not have found an item or two like that?
You haven't experienced a whole lifetime. Every moment up until the current [whatever the smallest unit of time really physically possible is] is just a memory of an experience. Subjectively from your perspective there's absolutely no difference between having actually experienced all that and merely having memories of having experienced all of it. You could materialize into existence for exactly one moment and believe you've lived an entire lifetime, but in reality you're just a brain floating in space experiencing one brief moment of a lie told by the chance arrangement of the molecules that make up your being.
Now that's an existential crisis right there. There's no way to determine if I've literally flashed into existence and am remembering this sentence, and am thinking of how to continue it, versus actually existing on the timescale of a human life.
There's another way to look at it: you can think of yourself as having just come into existence this instant, having inherited all memories and evidence of any previous existence.
Now you are blessed with the incredible gift of spontaneously coming into existence, plus a vast treasure trove of inherited experience (both good and bad) that you can explore and try to understand, and of inherited skills and knowledge that you can put to use however you choose.
I came to the same realization at about 12 years old. The mechanics were different than the Boltzmann brain. I saw it as that there are only two things that define the current universe: matter and energy. How could I know the universe wasn’t _just_ created? Every atom of my house, every atom of my body, created or placed in just the spot it is now, with just the energy it has now. My memories from five minutes ago are exactly what a brain that looks like mine would “remember” from five minutes ago.
Looking back, this realization became a feeling and this feeling became the backdrop for much of my growth as a person. Sometimes I was filled with fear. Sometimes I would shed all responsibility because it didn’t feel real.
At the best times, it fills me with a profound sense of agency, balanced with responsibility. It’s like teleporting into someone else’s body. Some things you choose to play along with, taking care of “his” mother. Other things you abandon, changing direction in your life as you need, despite what anyone might say.
It's more likely a Boltzmann brain would be a microscopic quantum computer, dreaming up planets and people and you to get over the boredom of being alive for a picosecond.
> Like any brain in such circumstances (the hostile vacuum of space with no blood supply or body), it would almost immediately stop functioning and begin to deteriorate.
That is not why such rare fluctuations are ephemeral. They are ephemeral because the dynamical system is reversible, for the same reason that the molecules in a box of gas disperse immediately after they happen to coalesce in a corner.
> a single brain to spontaneously and briefly form in a void
I first heard about this via the "giant marshmallow hypothesis"†: it's roughly equally likely that a giant marshmallow spontaneously and briefly forms in a void. A Boltzmann brain is then merely a different arrangement of atoms and stuff.
Quantitative easing (sometimes mistakenly called "money printing") isn't directly inflationary, because when central banks buy bonds, they do not pay with normal money. Instead, they pay with a special type of money that can never enter the real economy (so as not to create inflation). This special type of money is called "bank reserves", and bank reserves can only be used to 1) settle interbank transfers and 2) buy more bonds. They can never leave the loop of the banking system. So the next time you see scary-looking charts showing that the total amount of money increased a lot in the last two years, keep in mind that that "money" includes bank reserves that are stuck in a closed loop. This is also why Japan, the king of QE and "money printing", barely has any inflation.
"Not directly inflationary" does not seem like a very useful thing to say, since QE causally leads to higher inflation.
Japan barely had any inflation for a while because their QE program was temporary, and when it looked like inflation might go above zero, the BoJ immediately hiked rates, contracting the economy (2000, 2006). This is how you achieve no inflation.
By the way, they've recently begun trying a more expansionary policy once again, using QE. It is working, so far. It might stop if they bury their head back in the sand!
If the loop is closed, then this money has no use; it is not money. If it's not closed, it causes inflation.
It's binary. How could you instead define a spectrum of crosstalk between real money and sandboxed money?
The link between monetary inflation and price inflation has been "demonstrated" by the Chicago Boys, with Milton Friedman as their guru. It led to the "monetarist" school of thought - which has been the reigning paradigm in economics since.
There is no such thing as "inflation" in general.
There are different types of inflation, such as the rise of the valuations of stocks, the rise of real estate price and day-to-day prices such as food. And monetary inflation.
QE has led to various effects in various countries at different times. There is no absolute correlation "always and everywhere".
The funny thing is that a rise in stock market is always interpreted as a positive thing - despite everybody knowing that bubbles happen very regularly.
Otherwise, inflation is seen as bad - which is ridiculous since the extraordinarily low interest rates kept by the Fed for years ("printing money") have sustained economic growth and avoided recession.
Economists are historians of the economy. They are able to explain what happened - and while they agree on the general picture, they disagree on many points. Which is normal; it is a research field, so there are debates.
Some say lessons should be learned from History... well, for sure, all other things being equal, stuff tends to repeat, but as time goes by, the other things are not equal at all - or only to a certain point.
I majored in History, but historians are not my first sources for predicting the future. Despite economists having repeatedly failed at predicting anything, we can't help but ask them to be oracles and to set up policies.
Recently the fed has been buying corporate debt etfs[1]. Not an expert, but I assume that must increase the money supply in the general economy? QE also pushes down the interest rate, making it easier for banks to lend to the general public.
I think QE usually doesn't lead to inflation if your employment drops at the same time or your population size is shrinking (i.e. Japan). In such a situation consumer demand decreases, balancing out the additional money supply. As evidence of this notice how QE in the US did not lead to high inflation until employment started picking up.
So why don’t we just put $100 trillion of that money into the economy? Why not just pay for everyone’s houses and college debt and personal debt this way?
There you go again. Asking a straightforward question in clear, comprehensible language, without any hand waving. How can you possibly expect the economic sophists to engage with you if you persist in this behavior?
I doubt point 1), pls enlighten me. When a commercial bank receives interest on bonds from a central bank, it can then take out the proportion of 'real money' it put in before. Say I'm BofA: I put in 10b as a compulsory lock (my layman's term), I use some other money to buy bonds and receive interest, say, 200m. Then if that 200m goes to my reserve, I'd have 10.2b, which is 200m above the compulsory lock, so I can take that 200m out and use it for business. Money doesn't have the label 'old money, can use' vs 'newly-printed money, can't use', does it?
Minor edit: adding missing 'take' (in 'take that')
So the person who sold those bonds after buying them from the government gets that money - in effect, the government gets that money. In a puritan world, sure, but if you expect the Fed will buy the bonds, you can just buy them from the government and then sell them to the Fed. And because there were negative interest rates for a time, that is exactly what happened, as any other reason would be irrational.
So it's not a closed loop - it's just printing with more steps.
Likewise, now it's money destruction with the removal of these reserves.
This is a dumb take. By using "reserves" to buy bonds, it means cash that WOULD have purchased those bonds (like a pension fund or your Vanguard fund) is now going... somewhere else. Like stocks. If that money goes to stocks instead, then the price will rise. So while it may not be DIRECTLY inflationary, it's only like one level removed.
Doesn't the US have a fractional reserve banking system? For every $1 in bank reserves, the bank can loan out $9. So even if the special money of the Fed can't be used directly, it has the effect of allowing banks to have a larger base of reserves which means the banks can create money by loaning it out to others.
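The textbook money-multiplier arithmetic behind that claim can be sketched in a few lines (the 10% reserve ratio and the assumption that every loan is fully re-deposited are illustrative simplifications, not a claim about actual US rules):

```python
def total_deposits(initial, reserve_ratio, rounds=1_000):
    """Textbook fractional-reserve expansion: each round, the bank keeps
    the required reserve and lends the rest, which gets re-deposited."""
    total, deposit = 0.0, initial
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable fraction, re-deposited
    return total

# Geometric series: the limit is initial / reserve_ratio,
# i.e. $1 of reserves supporting $10 of deposits ($9 of new loans).
assert abs(total_deposits(1.0, 0.10) - 10.0) < 1e-9
```

(Whether this multiplier view describes modern banking well is itself debated, which is part of what this thread is arguing about.)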
You probably heard about the quirks in the French numerical system. For example, 99 is translated "quatre-vingt-dix-neuf" which is literally "four twenty ten nine".
This is a legacy of the base-20 numerical system used by the Celts thousands of years ago, before the modern Romance languages drove it out. The hybridization with our current base-10 system came with the Roman conquest of Gaul.
Another example of the base 20 in use is the Quinze-Vingts National Ophthalmology Hospital in Paris, where Quinze-Vingts (fifteen twenties) refers to the original capacity of 15*20 = 300 beds available when built in the Middle Ages.
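To make the pattern concrete, here's a tiny sketch of the vigesimal decomposition at work in both examples (the function name is mine, just for illustration):

```python
def twenties(n):
    """Decompose n into (count of twenties, remainder), the structure
    behind French vigesimal number names."""
    return divmod(n, 20)

# quatre-vingt-dix-neuf: "four twenty ten nine" = 4 twenties + 19
assert twenties(99) == (4, 19)
# Quinze-Vingts: "fifteen twenties" = 300 beds
assert twenties(300) == (15, 0)
```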
Yep! And, at least until the 'standardization' of Irish by a bunch of non-native speakers, modern Irish still used a base-20 system. It's actually still in use by a lot of native speakers in the Gaeltacht areas, though it's dying out as it wasn't included in the standard because it was 'too hard' (and, the cynic side of me would say, too different from English). It's a shame, but it's so nice to use and I try to use it whenever I speak Irish as it's part of the distinctly non-English heritage that is being lost in the language as it basically becomes coded English thanks to poor L2 learners.
There are also all sorts of languages that have non-base-10 counting systems. Some use 3, some use 12, and some have even more exotic ones. One language in Papua New Guinea even has a base-27 counting system!
The Danish language still has remnants of this. 60 is 'tres', from three 20's. 80 is 'firs', from four 20's. The odd tens are even weirder, since they count upwards: 50 is 'halvtreds', two and a half 20's; 70 is 'halvfjerds'; and 90 is 'halvfems', four and a half 20's. But 100 is 'hundrede', so 'fems' alone is never used. Also, for larger numbers, the digit order gets weird: 123 is 'hundrede og treogtyve', one hundred and three-and-twenty.
Btw, in the spirit of the title of this thread, in English you would say "as a French person". In the past, "Frenchman" was used but it's understandably out of favour these days.
"French" can be used as an adjective but not a noun in this context. It's a very common error, I suppose since "français" can be used as an adjective or a noun en français :)
That is wrong. The demonym for a person from France is "French". It only sounds wrong to an English ear because we're so used to calling them Frenchmen, but it's chauvinism on our part, not a linguistic mistake on theirs.
Surely the French could figure out that "neuf-dix-neuf" could work as well. I'm learning some South East Asian languages and the simplicity is astounding. Who needs "dix" when "dua dua" works just as well? (Indonesian)
Some French-speaking countries use a more logical term for higher numbers. In Belgium and Switzerland, 99 would be nonante-neuf, which is ninety-nine.
My uneducated guess is that at the time, it was easier to keep the base 20 to visualize a higher number. Just like we say "a few dozen eggs" to get an overall sense of the volume, it made more sense to keep it that way instead of a more arithmetic approach. Not sure why it never evolved, tho.
Logical according to whom? Why is base 10 more logical than any of the other bases? I'd argue it's not, it's simply that it's what we're used to so we think it's more logical.
I don't agree with that use of 'logical'. That would, if anything, be 'efficient'. But languages aren't efficient; in fact, there's some evidence to the opposite - that languages have built-in redundancy to make it easier to pick a conversation back up if part of a sentence is missed. This has been proposed as part of the origin of noun classes.
But I was more going on to the fact that 'base 10' is somehow more logical than any other base as a chosen one for natural languages. I see no reason why that is so.
In Mandarin, you can count to 99 using combinations of just 10 short sounds (without any variation). Ordinal numbers are made using one sound that goes in front of the number.
The names of days are the equivalent of Day 1, Day 2 etc. (except Sunday which is Sunday/Skyday), and months are Month 1, Month 2 etc.
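As a rough sketch of how regular this is (pinyin transliterations with tones omitted; the function is mine, for illustration only):

```python
# The ten basic sounds: empty slot, then 1-9; plus "shi" (ten) below.
digits = ["", "yi", "er", "san", "si", "wu", "liu", "qi", "ba", "jiu"]

def mandarin(n):
    """Spell any number from 1 to 99 using only the ten basic morphemes."""
    tens, ones = divmod(n, 10)
    parts = []
    if tens >= 2:
        parts.append(digits[tens])  # e.g. "er shi" = two tens = 20
    if tens >= 1:
        parts.append("shi")
    if ones:
        parts.append(digits[ones])
    return " ".join(parts)

assert mandarin(7) == "qi"
assert mandarin(10) == "shi"
assert mandarin(99) == "jiu shi jiu"  # literally "nine ten nine"
```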
Depends what you mean by a good reason I guess. A lot of very capable mathematicians have put time in to the problem and it's proven to be pretty resistant to a solution. I think problems like this often feel a little incongruous because they are so simple to state but so hard to solve.
I think this is getting at a pretty good point. Half of all numbers (all evens) will scale by 1/2 to the subsequent number. The other half (odds) will effectively scale by approximately 3/2, as you have shown here. Over a large number of iterations this should cause the values to trend lower and lower.
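That heuristic can be sketched in a few lines (a back-of-the-envelope argument, not a proof; the 3/2 factor comes from the fact that 3n+1 is even for odd n, so it is immediately halved):

```python
import math

def collatz(n):
    """Return the Collatz trajectory of n down to 1."""
    traj = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        traj.append(n)
    return traj

assert collatz(6) == [6, 3, 10, 5, 16, 8, 4, 2, 1]

# Expected log-scale drift per step, treating even/odd as a fair coin:
# half the steps multiply by 1/2, half by roughly 3/2.
drift = 0.5 * math.log(0.5) + 0.5 * math.log(1.5)
assert drift < 0  # negative drift: values trend downward on average
```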
One difficulty with this approach is that it doesn't help disprove the existence of cycles; it only makes it less likely that the sequence diverges to infinity.
I'm sure there is a use for it; that pattern may be representative of some other phenomenon in the universe that we just haven't correlated it with yet.
Well, there is this famous paper by Volovich [1] where he argues that space is basically pixelized and that p-adic numbers might best describe its geometry. But of course this has no connection with the reality we have access to.
p-adic numbers are also used in various areas of number theory (not my expertise).
I see them like the lesser known sibling of real numbers.
One of Bach's most famous church cantatas, BWV 140[1], was written for the 27th (or 3^3; the importance of this will be apparent in a moment) Sunday after Trinity, the last possible Sunday before Advent.
The cantata is a chorale cantata based on the Lutheran hymn "Wachet auf, ruft uns die Stimme", and the opening movement is a chorus based on this tune. The tune is in Bar form: the first 3 verses are repeated, followed by a 6th verse which is sung once through (3x2 + 3x2), and Bach's chorus mirrors this structure. Bach decided to set the piece in E-flat, which is a key signature with 3 flats. He also decided to make the time signature 3/4, so each bar has 3 beats. There are 3 groups of musicians involved (not counting the obligatory basso continuo): 1) vocal forces, 3 parts (Soprano + Alto + Tenor; the bass mostly doubles the basso continuo and is not independent); 2) strings, 3 parts (violin 1 + violin 2 + viola); 3) winds, 3 parts (oboe 1 + oboe 2 + alto oboe). Also, the opening melody of the chorale outlines the 3 notes of the tonic triad of E-flat (E-flat, G, B-flat), while later on, Bach being Bach, there is a miniature fugato where the three lower voices open with the same 3 pitches in a different order (G, B-flat, E-flat).
I'm usually sceptical of most other numerological readings of Bach's music, precisely because when Bach wished to make a point with numbers, he made it damned obvious, as in this case.
I use !$ multiple times every day but you just helped me make the connection to try !^ for the first argument and I think I'm going to start using that as well. Thanks!
At least in zsh, there are also !:1 for the first argument, et cetera with the rest of natural numbers, and something like !:* for all the arguments without the command.
The generic form is !n:m for n-th last command line and m-th argument, with an omitted n being a shortcut for the last line.
And if you don't want the last argument but the nth one, press Alt-n before Alt-. (n starts at 0, so Alt-0 Alt-. in the above situation would get you "mkdir").
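For anyone who wants to play with these outside an interactive prompt: history expansion is normally interactive-only, but it can be switched on in a bash script too (a bash-specific sketch; the directory name is just an example):

```shell
#!/usr/bin/env bash
# Enable history recording and ! expansion in a non-interactive shell.
set -o history -o histexpand

mkdir -p /tmp/history_demo/deep/dir
cd !$          # !$ expands to the last argument of the previous command
pwd            # now /tmp/history_demo/deep/dir
```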
True - and a good point - but also a troubling one, as the clear implication is that male suffering is really not considered a concern - at least by the news platforms.
Or maybe we don't care much about what happens in our jails. As if being raped in jail was part of the sentence. Note that rape is only the extreme form of the violence that inmates, mostly males, endure.
I can't see how the suffering of men is being silenced because of their gender.
On the contrary, at last, more and more often men are expressing their suffering, which is great, because the "strong silent" man used to be the reference: men as warriors who never complain, etc.
More and more often there are men in the news stating their feelings: veterans with PTSD, Johnny Depp courageously telling how he was abused by his partner, men speaking up about being on the verge of burning out at work, etc.
As a man, I consider that yes, there is still plenty of men suffering in silence. And that needs to come out. The more men are able to get over the silent strong man figure - John Wayne! - inherited from the past, the more the media will talk about it.
Nobody is silencing the suffering of men; men need to reconnect with their feelings and learn to express them. Most women are more able to express their feelings, and it is they who are actually "silenced" - even if less and less so.
An example? The "mental load". In many modern heterosexual couples, the woman is still in charge of the general running of the home, and the man goes to work without that load. When he comes home, he waits to be asked or just does the minimum.
If you have ever been in charge of a project at work while your colleagues were only participants, you know the difference very well. Being in charge of the general coordination, ensuring that everything is done when required, is quite a load.
Much of what you say is true but my point was about the coverage of these issues in mainstream news. I don't think this is particularly a male silence problem.
They comment in the report on whether they believe this occurs elsewhere at similar rates within prisons, and they indicate that evidence suggests yes. However, great point that it likely affects more people in the USA per capita by virtue of the large incarceration rate here.
I wonder what aspects have gotten worse over the years and how the dynamic has shifted, if at all. Or maybe it doesn't matter that this was from 20 years ago? :<
This reflects the abnormally high proportion of people jailed in the USA:
- 80K inmates for 68.5M inhabitants in the UK => 0.12%
- 1,215K inmates for 331M inhabitants in the US => 0.37%
There are therefore around 3 times more people incarcerated in the US than in the UK, proportionally.
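The arithmetic above checks out (a quick sketch using the population and inmate figures quoted in the comment):

```python
uk_rate = 80_000 / 68_500_000        # UK inmates per inhabitant
us_rate = 1_215_000 / 331_000_000    # US inmates per inhabitant

assert round(uk_rate * 100, 2) == 0.12   # about 0.12%
assert round(us_rate * 100, 2) == 0.37   # about 0.37%
assert 3 < us_rate / uk_rate < 3.3       # roughly 3x higher, proportionally
```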
As rapes are very frequent in US jails (as this terrible report points out) and most of the jail population are men, then indeed, one can conclude that - statistically speaking - most rape victims are men.
But if you set aside the jail population, most rape victims are women.
Of course, one can argue that prisoners are human beings like others: they are.
But the gendered statistics on rape are turned upside-down by the totally abnormal proportion of people in jail in the US, compared to comparable countries, PLUS the totally despicable jail system where rapes are numerous.
Note that rapes may be very common in the jails of comparable countries as well. I have no information about this, but I can't see why rapes would be more frequent in American jails than in British jails - all other things being equal. But maybe the US jail system and/or the US jail population leads to more (or fewer, why not) rapes: I have no clue about this.
I'm not saying that they are in jail for no reason, but that the US society is ... dysfunctional. That analysis may appear outrageous in the US, but it is something obvious to many other countries.
But from saying "dysfunctional" to pointing out why it is so, there is a huge leap where political inclinations are bound to express themselves. In this case, these are not biases, because politics is precisely about how we live together in a society. And indeed, people don't live in the same world, in the sense that their interpretative frameworks are so diverse.
That's why I find the formulation of that figure problematic. I would find it less click-bait-y and trolly to say: "Rapes of men in US jails are incredibly numerous" - maybe adding "to the point of outnumbering the rapes of women".
Why is it problematic? Because it echoes a large incel, masculinist, far-right discourse against feminism. Each time women (and "woke" men, to use their vocabulary) denounce something, there are always troves of righteous men pretending that the issue is true for all sexes alike (since in their view, sex and gender are the same thing).
The formulation "Most rape victims in the United States are men" is a way of saying that all genders are equally victims of rape in general. But they are not. In the world, and probably in all countries except the US, rape victims are overwhelmingly women and girls. The US exception is not related to gender equality but to the abnormal rate of incarceration.
I don't know if there was an intention in the formulation or not. Maybe not, so I'm not at all in an accusatory mode. I'm just saying that this formulation offers ground to misinterpretation.
I find your analysis abhorrent. It’s like you begrudgingly acknowledge the humanity of men prisoners.
> The US exception is not related to a gender equality but to the abnormal rate of incarceration.
It’s related to a gender inequality. The US incarceration system has gender inequality built into it, so more incarceration means more inequality.
You could say indigenous people didn’t really have a childhood sexual abuse problem, but an abnormal rate of state-mandated religious schools. Sounds pretty gross, right?
What does “statistically speaking” mean in your comment? If I say “There are more species of beetle than species of octopus”, does that mean the same thing as “There are more species of beetle than species of octopus, statistically speaking”?
Is this not just because most rapists are men (and sexual assaulters more generally are men), and the only people they can rape in a prison environment are other men?
Which is also why prisons are segregated by sex, to avoid women prisoners being sexually assaulted, raped and impregnated by deviant men. If prisons had no such sex segregation, it would overwhelmingly be the women being raped, not the men.
Yes, but it doesn't change the fact: the men who are on the receiving end are just as much victims regardless of whether the perp is a member of the same sex or a different one.
True but I think the point is more that when men want to rape, they'll rape anyone and anything they can. Women, men, children, animals. Their depravity knows no bounds.
It does make me wonder if penectomy and castration should be the standard punishment for rape, not just incarceration. Otherwise they're just being given access to a new set of potential victims.
An electric guitarist usually has a pedalboard that they use to make different guitar tones. Let's approach this like an electrical engineer might.
Time is a straight arrow: call it t.
The guitarist plays, and their guitar pickup records a signal. At any time t, measure the value of this signal's waveform: call it x.
That signal can go into a pedal that maps each value to create an output signal. Call it f(x, t).
The pedal is a linear system when it satisfies f(x1 + x2, t) = f(x1, t) + f(x2, t) and f(a*x, t) = a*f(x, t). In English, you can blend two guitar signals going into a single linear pedal, and it will sound the same as if you used a separate pedal for each guitar.
The pedal is a time-invariant system when it satisfies f(x, t1) = f(x, t2). In English, a time-invariant pedal does not warble over time.
The magic happens when a system is both linear and time-invariant. These systems are so special, we call them "LTI" for short. Feed an LTI system a single, really loud, POP! Something like a clap, or a gunshot. Record the sound of this gunshot, and call it "impulse_response.wav". This file characterizes EVERYTHING about the system. If you know how the system responds to an impulse, you will know how the system responds to any other sound. All you have to do is convolve the sound with the impulse response, i.e. use the contents of impulse_response.wav as a set of weights and pass the sound through it.
The echoes you hear when you speak in a large room are a real-world example of an LTI system. People will go to a cathedral, an open field, or a large tunnel, set up a microphone, and fire a blank pistol. Then they can go home, convolve the sound of their guitar or voice or whatever with the impulse response, and it will sound exactly like they're in that space! There are whole communities of people making and sharing impulse responses out there, and I just think it's the coolest thing ever.
For any guitarists reading this, I'll bring this back to the pedalboard. Another cool thing about LTI systems is that when you chain multiple of them together, the order does not matter. Reverb, delay, EQ, and wah are all LTI effects. If you've ever wondered why pedal ordering sometimes matters and sometimes doesn't, this is why.
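To see the order-independence claim concretely, here's a minimal numpy sketch with two toy LTI "pedals", each defined purely by an impulse response (the impulse responses are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)    # stand-in for a guitar signal

# Toy "pedals" as impulse responses:
delay = np.zeros(32)
delay[0], delay[20] = 1.0, 0.6  # dry signal plus one echo
eq = np.ones(4) / 4             # crude low-pass EQ (moving average)

# Applying an LTI effect is convolution with its impulse response,
# and convolution is commutative, so the chain order can't matter:
delay_then_eq = np.convolve(np.convolve(x, delay), eq)
eq_then_delay = np.convolve(np.convolve(x, eq), delay)

assert np.allclose(delay_then_eq, eq_then_delay)
```

The moment a pedal clips or saturates (any drive pedal), linearity breaks and this argument no longer applies, which is one reason order audibly matters on real boards.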
> Another cool thing about LTI systems is that when you chain multiple of them together, the order does not matter. Reverb, delay, EQ, and wah are all LTI.
Delay and reverb may be LTI by your definition, but it very much does matter what order they are in, every time.
Delay before reverb is going to give a more staccato sound, whereas the reverb before delay is probably just going to wash the whole thing out if you’re not careful on your settings.
I suspect you are right when it comes to analog circuits, but they are definitely LTI in a typical digital implementation. I checked this by adding reverb+delay and delay+reverb to two copies of a track in REAPER. I inverted the phase of one, played them together, and they cancelled out.
This method of convolution is nowadays utilised in "profiling" amps like the ones made by Kemper. There are particular amps, effects, speakers, and/or specific combos of them which are considered sacred/sweet/holy and are highly sought after. But most of them cost a lot. There's also the problem faced by touring bands of having to lug around huge setups. And then there's the issue of maintenance and repairs. These profiling amps help musicians capture the sound of their favorite setups and replicate it in a compact rig. Hugely popular, if not for the price. There are still purists who swear by the sound of 'analog' setups, but for most purposes the sound is identical. Fascinating stuff.
> The pedal is a time-invariant system when it satisfies f(x, t1) = f(x, t2). In English, a time-invariant pedal does not warble over time.
> Reverb, delay, EQ, and wah are all LTI effects.
The math doesn't work, assuming scalar f, x, t1, t2 as implied by their description. For a delay the x passed at t1 has no correlation to what f calculates to. It needs vectors F(X, t) and X I think.
You found it! I rewrote my comment like six times because I think the rigorous notation where the signal is a function of t is a little messy for a HN comment. I tried picking a notation that was less intimidating and wasn't too obviously inconsistent.
It's worth mentioning that in a typical pedalboard you're probably going to see zero linear effects. The closest ones are probably an EQ and pure digital Reverbs and Delays. Almost all other effects have tons of saturation.
I thought I remembered a single platform where people upload their IRs, but I can't find it. I remember that one being too experimental for my taste. The best-sounding IRs are usually released in sets by a single individual, and mass-compiled in forums like these [0].
According to the Merriam-Webster dictionary, the term "factoid" is believed to have been coined by Norman Mailer in a book he wrote about Marilyn Monroe. Per said book, factoids are "facts which have no existence before appearing in a magazine or newspaper, creations which are not so much lies as a product to manipulate emotion in the Silent Majority."
While many use "fact" and "factoid" interchangeably, they are (by original definition) inherently two entirely different things.
I'm gonna throw some Hindu mythology / time travel in here.
Time travel is thought of as a modern concept, but the Indian scriptures, which date back thousands of years, already mention it. There is a story about King Kakudmi, who sought a husband for his beautiful daughter, Revati. They both travelled to the realm of Lord Brahma (the creator of the universe), which was apparently very far from our planet, to seek his advice. Lord Brahma then explained to them that time moves very differently on his plane of existence, and that the people they knew on their planet were already dead, as 116 million years had passed during the last 20 minutes of Brahma's time.
They then travelled back to Earth after receiving the advice.
Hindu scriptures mentioned a lot of "modern-day concepts" thousands of years back. Isn't it fascinating?
I mean, isn't this just how fiction is supposed to be?
I am sure you can find a huge number of "modern day devices" mentioned in science-fiction of yesteryears as well. All inventions start with imagination after all.
Imagine they had a https://en.wikipedia.org/wiki/Metamaterial which would have enabled that. I mean, withstanding 22,000°C while being transparent to hard ultraviolet. And some sort of 'ultravoltaic' cells behind that...
...while at least some texts speak of those things as full of crystals and mirrors as part of their engines and on board power plants.
There's a publication called "Date Panchang" that gets a new version for every new Hindu year. The first few pages talk about time and how Brahma experiences it.
It talks about how long Brahma's life is and where in his life he is right now. Not sure how they calculated that, but it's pretty wild to think about.
What we call “free will” consists of a multitude of forces we have no control over.
Look at your choices and try to find an independent entity making a decision.
You will likely notice your conditioning and preferences, beliefs, emotions, thoughts, physiological and psychological state in the moment, your environment, habits, trauma patterns, all collaborating toward one thing or another.
"Free will" is a shortcut for all these forces we have no control over. Even when we explicitly feel we're in control, usually the goal this control serves is not in our control. We don't choose our preferences, tastes, morality, desires, or thoughts.
Paradoxically, in this complete absence of “choice” as we understand it, lies freedom.
>What we call “free will” consists of a multitude of forces we have no control over.
Yeah, I also noticed that. I always thought: I like red meat, but it was not my decision to like red meat. Sure, I can control the urge or desire to have it or not have it. But I never "decided" to like red meat!
I think Sam Harris used something like this to argue that there is no self. Basically he said something along the lines of like “picture a movie in your head” and then “where did that choice come from? You had the ability to choose if the movie that appeared in your head was the movie you wanted to focus on, but you didn’t have a choice about which movie popped into your head first”. And then he talked about like, if you couldn’t control which movie popped into your head first, then “who” did? Is that the self? Or something along those lines anyway—I’m poorly paraphrasing here, but it blew my mind when I tried to think about it.
Yet, none of these forces are ultimately the primary cause of your decision. Rather they combine with your consciousness (your thinking about these forces and your state of thinking) and chance (randomness) to result in your choices.
Put your non-dominant thumb in the palm of your dominant hand and squeeze it as hard as you can for thirty seconds. Afterwards your gag reflex will be suppressed for about a minute. Enjoy touching your own uvula.
Some of these posts have been incredibly interesting, but this is the first one to be both bizarre and immediately, permanently learned. What the hell? What else is this weird body capable of, haha.
Oh, and most code editors I'm aware of will switch out the whole-word functionality of Ctrl/double-click with whole-syntax-element selection, if that's more your thing.
And X11 on Linux (if you're not using Wayland) supports middle-click emulation, where pressing the left and right mouse buttons simultaneously (debounced to within a couple milliseconds) will be interpreted as a middle-click.
I may or may not use this with my laptop dozens of times a day (it's how I copy/paste out of xterm).
(Fun bit of trivia, my old EliteBook 8470p has a trackpad+trackpoint setup with two independent sets of buttons, and while I don't seem to be able to middle-click with the trackpoint buttons, I can send a middle-click by pressing both trackpad buttons, keep them held down, and then independently send left and right clicks (press-release) using the trackpoint buttons as much as I want. I somehow doubt that was an accident. Engineering archaeology is fun :D)
When holding the last click of the sequence, multiple words/paragraphs can be selected. In some editors quad-click selects all.
In code editors, triple-click selects a line (hold to select multiple). When moving/deleting lines this is much more semantically clean (and faster) than manually dragging from the previous line's end to the target line's last character.
> When holding the last click of the sequence multiple words/paragraphs can be selected.
When I wrote about the double-click-drag thing, I thought triple-click-drag might logically also be a thing, but it doesn't work for me. (Firefox, KDE, Linux)
In Emacs, right click adjusts the closest side of the selection, and double right-click cuts that region. (And it obeys the mode of selection if you started with a double- or triple-left-click.) Double-left click also selects s-expressions.
Ahh I used a Magic Mouse and trackpad for so long I forgot about this. It makes me a little sad. On the other hand panning and gestures are worth the loss.
By far the most important martial arts skill is the ability to function effectively after receiving a powerful blow to the face, solar plexus, etc. The only way to develop this skill is by practicing it; very few people do, for obvious reasons. If your martial arts training does not include this, it may have many benefits, but is severely lacking as training for actual fighting.
I always thought this, but it only works if you are alone, or if the person you want to protect can run too. Fighting is most useful to defend someone else who can't run or fight.
Absolutely. And functioning effectively includes retaining the ability to run away, rather than falling to the ground in a heap, which will be followed by your opponent kicking you in the head.
I guess the two questions that come to mind after I read this are
1. How can I OW objectively OW tell oww the difference ow where being seriously jarred (ow) ends and being significantly injured begins?
2. As a person ages, is it possible to train in this way without ending up seriously banged up with long-term ramifications? Or is this more of a "don't use a skateboard" type of thing with no leeway?
1. Practice getting hit. (Preferably with a partner who has enough knowledge and skill to avoid injuring you.)
2. You can practice at low risk of being seriously injured. Again, you need a partner who is sufficiently knowledgeable and skilled to help you without hurting you.
Grappling arts that emphasize groundfighting give you more practical experience of fighting usefully with lower risk of injury. Because you more often start out on the ground, you're much less likely to be injured in a fall. Because you strike less, you're less likely to injure yourself either by striking something or by being struck.
If you want to get past the non-useful reactions to being struck, though, then you probably also want to do some striking art or sport with contact--boxing, kickboxing, muay thai, et al. Just be sure to use safety equipment and work with someone who has both skill and empathy.
Whenever drilling or hole-cutting into fiberglass, always use masking tape generously applied over and around the area being cut into. Run drill bits in reverse to get past any gelcoat or paint coatings, then run the drill forward to effect the cut. The same goes for hole saws. Slow speed is best with the proper blade for jigsaws. This will prevent cracking of coatings (especially gelcoats) and splintering of the fiberglass.
I'm going to go a bit random...get a styptic pencil!
Nobody I've met even knows what they are. I came across them years ago in the shaving section, and was blown away by their utility.
They are solid pieces of some saltlike material that are super effective at stopping bleeding. We don't even have bandaids in our house anymore, the pencils are way more useful.
In my experience, a styptic pencil leads to fast coagulation compared to a band aid, but is more likely to leave a visible scar. My go to for small cuts is 3M Cavilon (No Sting) in a spray pump. It also comes in wipes, which are helpful in first aid kits.
It may be the size of the cut. I used one once on something I definitely should have gotten stitches for (a folding knife closed on my thumb, and I reflexively pulled away), and it did scar.
For shaving cuts, typical knife cuts, paper cuts, etc., I've noticed nothing really.
> I think today there's even variants approved for such use.
Yeah, look up 'Vetbond' and 'Dermabond'. They are not strictly superglue, but longer-chain cyanoacrylates, which gives them properties such as higher flexibility and bacterial growth inhibition.
Indeed. As I recall, superglue was actually developed for this purpose, but they decided it released too much heat (presumably when used in large quantities) and so devised these other chemicals.
It's very common in the US too. When I had knee surgery that involved cutting my kneecap in half (long scar), they superglued the skin back together. Sutures are actually rather rare for most procedures.
No need for peroxide, just slightly salty water is better unless visibly contaminated. By using peroxide, scars are often worse, which is thought to be due to killing cells which contribute to wound healing. Furthermore, peroxide doesn’t significantly help prevent infections.
Our use was less interesting — shaving. Sleep was limited while underway so you would try to milk every minute possible. You'd be rushing to change, shave, and get to watch or muster which meant no time to wait for a shaving cut to stop bleeding. We'd just dip the pen in water and rub it on the cut to stop it from bleeding before heading up.
I've owned both, and found 'alum blocks' pretty useless. Looking up the ingredients, they seem very similar however. Maybe the pencils are more concentrated? They are solid white and chalky, not semi translucent and slick like the alum block I bought.
The wombat is the only known animal whose poo is cubical. The reason for this appears to be that it allows the poo to be easily stacked high to mark territory.
The original study of the Bare-nosed wombat was produced by a 15 year old boy, Peter Nicholson, who in the 1960s used to sneak out of his boarding school dormitory of a night to crawl into wombat burrows and make friends with the wombats. He used to "speak wombat" to them and his scientific paper is still cited.
Bootstrappable Builds is one of the more important and harder open software projects today: going from 512 bytes of machine code plus a ton of source code all the way up to a full distro.
I wanted bootstrappable.org to focus on the individual projects (such as Mes, bootstrapping the JDK from C, etc) instead of distro integration. The idea was to avoid the impression that this is a distro project (much as reproducible-builds.org used to look like an exclusive Debian effort).
It might be good to reshape the Bootstrappable Builds effort in a similar way to Reproducible Builds, in order to make it appear to be and also really be a more cross-distro multi-community effort.
The big bang was extremely hot; the universe now is quite cold (microwave background). It was a continuous process of cooling, implying that there was an interval where the entire universe was comfortable (there are at least a few papers on this, e.g. https://arxiv.org/abs/1312.0613 , but I like it more just as a strange thing to think about).
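A back-of-envelope sketch of when that comfortable interval occurred (assumed round numbers in the spirit of the linked paper, not taken from it): the CMB temperature scales with redshift as T = T0 * (1 + z), so we can solve for the redshift where the whole universe was at room temperature.

```python
# CMB temperature scaling: T(z) = T0 * (1 + z)
T0 = 2.725        # K, CMB temperature today
T_room = 300.0    # K, roughly "comfortable"

z = T_room / T0 - 1      # redshift at which the CMB was room temperature
print(round(z))          # → 109, i.e. a few million years after the Big Bang
```

At that epoch the ambient radiation bath itself was warm, everywhere.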
There's also the hairy ball theorem: a combed hairy ball must always have at least one cowlick (a point where the hair cannot lie flat). The related Borsuk–Ulam theorem gives the claim that at any moment there are two antipodal places on Earth with exactly the same temperature and pressure.
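The antipodal-points claim is, strictly speaking, an instance of the Borsuk–Ulam theorem rather than the hairy ball theorem; a minimal statement for the 2-sphere, taking f = (temperature, pressure):

```latex
% Borsuk–Ulam, n = 2: every continuous map from the sphere to the plane
% sends some pair of antipodal points to the same value.
\forall f \in C\!\left(S^2, \mathbb{R}^2\right)\;\exists x \in S^2 :\; f(x) = f(-x)
```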
In 1939, the Soviet Foreign Minister, Vyacheslav Molotov, claimed the Soviet Union was not dropping bombs on Finland, but merely airlifting food to starving Finns. The Finns sarcastically dubbed the RRAB-3 cluster bomb "Molotov's bread basket." Consequently, the improvised incendiary device that Finns used to counter Soviet tanks was named the "Molotov cocktail", "a drink to go with the food."
Reptiles (and other ectothermal vertebrates) will “artificially” give themselves fevers by staying in heat sources longer when they’re fighting an infection. It’s called a behavioral fever.
I tend to get chills with my fevers. Maybe "chills" in ill mammals are a similar instinct, provoking the urge to bundle up and get warm when it isn't necessary, thus instigating a fever?
Opera singers learn the International Phonetic Alphabet in order to perfectly pronounce foreign languages without having any idea how to speak those languages.
In the music department at my university the vocalists had to take two different foreign language classes for this reason, and they were _supposed_ to learn the meaning of everything they sang so they could capture the feeling of the song. Bullshitting your way through those kinds of inflections is the voice undergrad's version of slapping a paper together the night before a deadline.
Hmm. Is IPA sufficiently nuanced to represent speech as pronounced by a native speaker without the leeway that would permit a non-native accent?
Put another way, it was my impression that IPA was coarse grained enough to permit at least some different accents to be represented by the same IPA "spelling".
Opera singer here. IPA is indeed as coarse grained as you described in the latter sentence. Accents can definitely leak through. Practically speaking most opera singers do not use it, and are expected to take two or three romance languages in college. Accents tend to be ironed out mechanically because you want uniformity (when singing in a chorus) and good projection of the phonemes onstage.
White dwarfs and neutron stars are both "stellar corpses", the remnant of stars. White dwarfs roughly cram the Sun's mass into an Earth-sized sphere, neutron stars cram that amount of mass into a sphere of about 10km diameter.
Oh, and neutron stars have mountains: height differences on their crust of a few centimeters or less. Scaling that height difference takes more energy than scaling Mount Everest. Gravity is weird.
(Armchair interest in astrophysics; this is roughly correct but I'm sure others can point out nuances better).
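The Everest comparison above can be checked with back-of-envelope arithmetic (assumed textbook values for a ~1.4 solar-mass neutron star, not taken from the comment):

```python
# Energy to lift a 70 kg climber 1 cm on a neutron star vs. up Everest on Earth.
G = 6.674e-11        # gravitational constant, SI
M = 2.8e30           # ~1.4 solar masses, kg
R = 1.0e4            # ~10 km radius, m

g_ns = G * M / R**2              # neutron star surface gravity, ~2e12 m/s^2
e_ns = 70 * g_ns * 0.01          # lift 70 kg by 1 cm
e_everest = 70 * 9.81 * 8849     # lift 70 kg by Everest's height

print(f"{e_ns / e_everest:.0f}x")  # the 1 cm climb costs roughly 200,000 Everests
```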
Nim (https://nim-lang.org/) is a fast, compiled language that is as easy to use as Python. It doesn't have the same ecosystem or user-base of Python though, although Python and Nim can be bridged via nimpy.
I've been itching to pick up a new language, and Nim is high on the list of choices. But I'm leaning towards Rust, just because it has more recognition and a more evolved ecosystem.
The small ecosystem is Nim's downfall for now, sadly. It really is a much easier language to learn and write in.
I'm working on some Nim modules which will be Open Sourced soon. This will provide a web framework with an ORM. It will be a batteries-included framework.
It will also easily plug-in to a Flutter powered front-end engine which can use Nim defined back-end UI code (not just Nim actually). Also to be Open Sourced soon.
There are two kinds of completions of the rational numbers: the reals and the p-adic numbers. This is because of Ostrowski's theorem, which states that any non-trivial absolute value on the rationals is equivalent either to the usual absolute value or to a p-adic one. For p a prime, n and d integers relatively prime to p, and e an integer, any nonzero rational can be written as r = p^e * n/d, and its p-adic norm is p^(-e). Then in the 3-adic numbers you can write
1/2 = 2 + 3 + 3^2 + 3^3 + 3^4 + …
which actually converges, because the norm of 3^k is 3^(-k): the terms get p-adically smaller as the exponents grow.
pari/gp has a native p-adic calculator, where you can type 1/2 + O(3^5) and get the first p-adic digits.
I don’t know why this impresses me, but it is probably because there are just two ways of completing the rationals, and the p-adic numbers are like the lesser-known brothers of the reals. I feel I should have known this for ages. And no, I don’t know of applications outside number theory, but still, it’s cute.
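The expansion of 1/2 above can be checked numerically; a small sketch (`padic_digits` is a hypothetical helper written for this comment, and pari/gp's `1/2 + O(3^5)` does the same natively):

```python
def padic_digits(num, den, p, k):
    """First k base-p digits of num/den in the p-adic integers
    (den must be coprime to p)."""
    # num/den mod p^k, using the modular inverse of the denominator
    x = (num * pow(den, -1, p**k)) % p**k
    digits = []
    for _ in range(k):
        digits.append(x % p)
        x //= p
    return digits

print(padic_digits(1, 2, 3, 5))  # → [2, 1, 1, 1, 1], i.e. 1/2 = 2 + 3 + 3^2 + ...
```

The partial sum 2 + 3 + 9 + 27 + 81 = 122, and indeed 2 × 122 = 244 ≡ 1 (mod 3^5).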
One of the most deceptively dangerous projects that a DIYer can take on is re-grouting bathroom wall tile. Tiles may not have a strong bond to their substrate, and once you remove the grout they may fall into the tub/floor, usually breaking. Then, when you search for tile replacements, you find no perfect match for your old tile. Always put a heavily padded rug or blankets over the tub before doing tile work so that fallen tiles do not break. Also, if you re-tile, buy a lot of extras and keep them stored safely. Oscillating multi-tools are the best for removing old grout, but their vibrations will shake the wall, loosening tiles that you've removed grout from.
Upon a cursory inspection of the first 10,000 or so early GitHub users, it takes about 64 followers to be in the top 20% of followed GitHub accounts.[1]
Users with 100 or more followers can often be business owners, well known employees from FAANG companies and the like.
Users with 1000 or more followers tend to be specialists in lesser used technologies who have created open source technologies used widely in that specific domain.
Users with 10000 or more followers tend to be luminaries who have created technologies most people in the industry are familiar with, like say coffeescript.
I have tinnitus. Before I had tinnitus I could simply use earplugs. The problem when I sleep: an earplug presses in my ear, and that aggravates my tinnitus.
Solution:
- Bose QC 35
- Earplugs
Now the earplugs don't get pushed in and don't aggravate the tinnitus. The Bose QC 35 gets pushed onto whatever is directly outside your ear, which is a bit uncomfortable but fine.
I have these and cut off the stalks. They’re easy to take out and don’t get pushed in.
Finally, there are (more expensive) “concert earplugs” designed to retain the auditory experience while only reducing loudness. Worth checking out if you like your hearing but are often in loud environments.
I have hearing loss in both ears, which runs in the family. Tinnitus is just a serious side effect that I somehow manage. I was hoping that a biological solution to hearing loss would be available by 2020, but even with some hope, a lot of projects are running late. I guess most of the approaches want to convert stem cells, molecular sources, etc. into hair cells inside the cochlea. I really hope something is available within 20 years so I can experience normal hearing.
I really like the Ohropax wax or silicone earplugs. They don't penetrate your ear canal as much as the foam plugs, but still provide a good enough 23 dB noise reduction. They definitely make my tinnitus more noticeable, but I'm kind of used to it anyway at this point.
If you sleep lying down wearing earmuffs, you'd better also place something - like some string - between the earmuff and your skin: otherwise, you risk suction on the tympanum (and aggravating the situation).
Something that worked for me: playing low volume noise on a stereo in the bedroom. Even barely audible noise would suppress the beeps and drone of my tinnitus.
The study of unidentified aerial phenomena (UFOs) and the acknowledgement of the extraterrestrial hypothesis (or extra dimensional) will become mainstream.
Soft disclosure has happened already via the USS Nimitz encounter. If UAPs are true, then the floodgates are open and will lead to a forced re-evaluation of other high strangeness phenomena (telepathy, ESP, etc).
The SR-71 was designed in the 1950s, unless progress has stalled I don't see why these reports aren't just DARPA playing with their new toys.
And besides, the floodgates will not open, that isn't how the scientific method works. If the little green men arrive, the evidence we have showing everything is mumbo jumbo doesn't cease to exist.
It is implausible to propose that these craft are ours, because the basic science underlying the function of their propulsion system is simply undeveloped. You simply cannot hide developments like this from the greater scientific community. Quantum leaps do not happen in a vacuum.
Even during the Manhattan Project, there was active research in nuclear physics in the unclassified public literature. Nothing of the sort is happening here with respect to gravitic propulsion/warping of spacetime.
The floodgates open because in many UAP encounters, witness testimony relays telepathic communication initiated by the beings in the craft.
It actually is how the scientific method works. If something that is very unlikely is now true, the likelihood of linked events predicated on this should be re-evaluated because the priors have changed.
I always found it strange how UFOs concentrate in the U.S. Is it because of the climate? /s
I wanted to believe and there is a serious subreddit for UFOs but most are debunked.
There exist ~2 truly mysterious UFOs and most of their mystery has been explained.
> Folie à deux ('folly of two', or 'madness [shared] by two'), also known as shared psychosis or shared delusional disorder (SDD), is a psychiatric syndrome in which symptoms of a delusional belief, and sometimes hallucinations, are transmitted from one individual to another.
Well, humans sure have many potent shared delusional disorders, one being, e.g., ignoring the 750,000 humans that fled to Russia during the 8 years of war in the Donbas.
The most common one is the idea of free will though.
I forgot the names, and don't remember whether Nimitz is Gimbal, but anyway, that explanation is a great example of the kind of analysis we should do.
https://youtu.be/qsEjV8DdSbs
I would give credence to the above flight path, as witness testimony from the pilots themselves attests to it, while West's claims of it being just glare rest on the data alone.
Never mind West's continual assertions of it being glare when we have witness testimony that it's a fleet of objects moving together, including the Gimbal.
There are enough recorded exigent cases across history suggesting of extraordinary flight characteristics & bizarre behaviour of UAPs. It's an open question if they're aliens or not, there's not enough public hard data to tell.
But I would encourage you to suspend disbelief and approach the topic methodically, there genuinely appears to be something deeper here.
and for the curious -- cw stands for Continuous Wave, a radio term for signaling that's turned on and off (like Morse code -- as opposed to modulated like AM, FM, etc).
Technically there is no such thing as a continuous wave, as any signal of finite duration (time) must be composed of multiple frequencies, and all signals are of finite duration. This is closely related to the uncertainty principle in physics, whereby quantities that are related by a Fourier transform are also related by an uncertainty principle. Duration (time) and bandwidth (frequency) are related by a Fourier transform.
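The point about finite duration implying multiple frequencies can be demonstrated numerically; a small sketch (`dft_bin` is a toy DFT written for this comment, not a library call): a tone that completes whole cycles inside the window concentrates its energy in one bin, while the same tone cut off mid-cycle smears energy across the spectrum.

```python
import cmath

def dft_bin(x, k):
    """Magnitude of DFT bin k of the sequence x."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N)))

N = 64
# 3 full cycles fit the window exactly: a "clean" finite tone
whole = [cmath.cos(2 * cmath.pi * 3 * n / N) for n in range(N)]
# 3.5 cycles: the truncation cuts the wave mid-cycle
cut = [cmath.cos(2 * cmath.pi * 3.5 * n / N) for n in range(N)]

print(dft_bin(whole, 5))  # ~0: all energy sits in bin 3
print(dft_bin(cut, 5))    # clearly nonzero: energy smeared into other bins
```

This is the time-bandwidth trade-off in miniature: the shorter or more abruptly truncated the signal, the wider its spectrum.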
In 200 billion years all other galaxies will be beyond the visibility horizon because of the expansion of space (i.e. they will move away faster than light). Any new civilization then is unlikely to develop a Big Bang theory.
A Gaeltacht (Gale-tuct) is a region of Ireland that is designated by the government as Irish speaking. There are a few of these areas, mainly along the coast, where the majority of people will conduct their daily affairs through Irish (although they're mostly happy to speak English to you if they notice you have no idea what they're saying!).
Back in the 00's an Irish heritage group in Canada bought a chunk of land in Ontario and managed to get it designated as a Gaeltacht.
So officially, as of today, Irish is the common vernacular in several small-ish regions of Ireland and a 0.25 km^2 chunk of Canada.
One of the most common railway gauges in Brazil is ~1600 mm. Although this looks like a gauge designed in metric, it is in fact a coincidence: 5'3" is 63 inches, and 63 × 25.4 mm = 1600.2 mm.
This (particular type of broad) gauge is known as Irish gauge, and is also used in Ireland and Australia.
Humans might just be the stupidest animals on the planet when it comes to seasonal migration, we get tied down by our monetary system, taxes, government policies, borders, etc., while polluting our planet with patchwork solutions like air conditioning for the summer or burning fossil fuels to heat our surroundings in the winter.
We can't simply just "move" to a different location like other animals.
> but we have the intelligence to tame our environment
I beg to differ when most wealthy and well-off Americans get affected by drought, fires, hurricanes, floods, tornadoes, freak snow storms... and still stay in the same place all year round trying to tame their environment.
I always found it really annoying that Vim automatically overwrites the unnamed register, forcing me to use "0p instead of p. Looks like there is an actual reason for it after all :)
(Apologies if I’m misunderstanding and you already know this.)
Not specifying a register (the way "0 does) just means the commands, including x and p, will use the " register, aka the unnamed register. By itself, p is equivalent to ""p, while "0p explicitly pastes from register 0, where yanks land.
:reg shows you all the registers and their current contents.
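A minimal sketch of the register behavior described above, shown as a sequence of normal-mode and ex commands (names from Vim's :help registers):

```vim
" After a yank, the text sits in both the unnamed register "" and register 0.
" A delete overwrites "" (and fills registers 1-9), but leaves 0 alone.
yy        " yank a line   -> goes to ""  and "0
dd        " delete a line -> goes to ""  and "1, clobbering the unnamed register
"0p       " paste the earlier yank, untouched by the delete
:reg 0 1  " inspect registers 0 and 1
```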
This is if you have emacs-based keybindings (which is the default). M-f/M-b (Meta, often the alt key or option key) will move forward/backward a full word. C-f/C-b moves forward/backward a character. C-w kills the previous word (if you're in the middle of a word it leaves everything from the cursor to the end intact).
C-r will search backward in your history, allowing you to type partial matches (like, "I know I compiled foo.c, but what options did I use?": type `C-r foo.c` and press C-r repeatedly until you find the compiler command you used).
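These bindings come from GNU Readline and can be inspected or remapped in its config file; a minimal ~/.inputrc sketch (directive and function names from the readline manual; the bindings shown are already the defaults):

```
# ~/.inputrc — readline configuration
set editing-mode emacs            # emacs-style bindings (the default)
"\C-r": reverse-search-history    # incremental search backward through history
"\ef":  forward-word              # M-f
"\eb":  backward-word             # M-b
```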
Here's mine, the Pareto principle or the 80/20 rule. You will find that it applies in MANY aspects of life, and knowing it may give you an 'edge'.
https://en.wikipedia.org/wiki/Pareto_principle
Here is my favorite mental-model for problem solving (and my favorite analogy)
So I have this problem:
I love eating scrambled eggs, but I hate cleaning up the pan, and over the years I have tried many different techniques to make sure the eggs don't stick to the pan. (No, I don't want Teflon, thanks.)
So I tried different oils, temperatures and techniques, all with the goal of getting those delicious eggs not to stick in my pot/pan afterwards.
Then one day it hit me: I'm solving the wrong problem. The problem to be solved is not "EGGS NOT STICKING TO PAN", it's "FINDING A BETTER WAY TO CLEAN" the pan!
And I have found it, it's using a different type of plastic-scrubber. It looks like steel-wool, but it's all plastic.
So whenever I'm stuck on a problem for too long, I try and model the problem as an "eggs-in-pan" problem statement.
First I put oil in the pan and let it heat up. Once it is hot, I pour the scrambled egg in, and use a spoon, fork, whatever to distribute and turn it. Then I tip the pan over to get the eggs onto my plate. My pan has a nice feature to make pouring it out easier. Usually nothing has stuck to the pan.
It is key to heat the pan and the oil first. If something sticks, pour some water on it while the pan is still hot. It will result in a satisfying bubbling that usually gets any residue out. Dry with a (paper) towel or whatever else you use. I usually drip a little oil in the pan before I put it away.
If that sounds complicated, it really is not. Once you do it a few times it takes up little time and is fairly reliable.
Another good tip is to make sure you use the right oil. Use oil that is meant for high heat.
I had a girlfriend in Amsterdam who taught me to make scrambled eggs in a double boiler. This keeps the eggs at the temperature of boiling water. The result is so much better (light and fluffy) that you will never use your skillet again. I have heard that there are special large double boilers that are used to prepare scrambled eggs for restaurants, but I have never seen one.
The Hong Kong–Zhuhai–Macau Bridge (HZMB) is a 55 km (34 mi) bridge–tunnel system consisting of a series of three cable-stayed bridges, an undersea tunnel, and four artificial islands. It is both the longest sea crossing and the longest open-sea fixed link in the world. Designed to last 120 years and built from 2009 to 2018 at a cost of US$18.8B, it is currently almost completely vacant due to COVID.
The scroll wheel on your mouse is probably a functional middle button.
On Linux, you can usually left-click-drag-release to highlight some text, move your mouse cursor to a new location (in any window), and middle-click once to paste the text you highlighted there. No explicit keyboard or right-click copy and paste required.
If you ever use the middle mouse button to open links in a new tab, you should disable this "feature" because it is a security vulnerability which allows for grabbing the clipboard content.
Programmatically determining the region of an AWS EC2 instance using the metadata API was very annoying in the past; there was no endpoint for it. Instead, devs used many methods to determine the region[1], primarily by dropping the last letter off of the availability zone (e.g. `us-east-1a` -> `us-east-1`), which was queryable.
At this point, AWS has updated the API to allow querying for the region as one would expect. See my answer on the afore-footnoted question[2].
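The legacy workaround is easy to sketch (`region_from_az` is a hypothetical helper named for this comment; the modern metadata path `placement/region` makes it unnecessary):

```python
import re

def region_from_az(az: str) -> str:
    """Derive the region by stripping the trailing zone letter(s)
    from an availability-zone name, e.g. 'us-east-1a' -> 'us-east-1'."""
    return re.sub(r"[a-z]+$", "", az)

print(region_from_az("us-east-1a"))  # → us-east-1
```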
To determine the three-dimensional structure of proteins (such as the spike protein of SARS-CoV-2) one of the most used techniques is to grow protein crystals and blast X-rays at them. It has been estimated that these crystals have a cost per weight around 3000 times higher than diamonds.
In some editors (SQL Server Management Studio, and Visual Studio Code, though less well), ctrl-shift-arrow keys let you edit many rows of text simultaneously. The simplest example is a multi-row comment, but if you align your code well, some repetitive tasks become surprisingly efficient.
Alt+Shift+Left-Click for any JetBrains IDEs. I use it for coding as well as data editing (eg. Aligned MySQL CLI results in a scratch file) all the time.
The support is excellent as when you have multiple cursors, all other standard key combos, like Ctrl+arrow work for all at once. It's also great for selecting a list of aligned text and pasting it elsewhere.
If you use git and clone the same (or related) repositories in multiple places on a single disk, they can share the hashed object to reduce disk space and/or network usage and increase speed.
One workflow is to git clone mirror the main repo and use that as a shared read-only source of historical references when cloning a new repo. You can also hardlink or reference those objects to save disk space (though some file systems will do that transparently for you).
It feels like this could be done entirely transparently to the user, for a free speed boost but it's not quite there yet and still needs some thought and setup. Some big services use this behind the scenes though so it's presumably robust.
So worth looking into if you are spending time waiting for clones.
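The mirror-plus-reference workflow above can be sketched end to end; a self-contained demo with throwaway local repos standing in for the remote (`--reference-if-able` and `--dissociate` need a reasonably recent git):

```shell
set -e
cd "$(mktemp -d)"

# A stand-in for the remote repository:
git init -q origin.git
cd origin.git
git config user.email demo@example.com && git config user.name demo
echo hello > file.txt && git add file.txt && git commit -qm init
cd ..

# One-time mirror clone that serves as the shared object store:
git clone -q --mirror origin.git mirror.git

# New clones borrow objects from the mirror instead of re-downloading them;
# --dissociate then copies the borrowed objects so the clone stands alone.
git clone -q --reference-if-able mirror.git --dissociate origin.git work
ls work/file.txt
```

With a real remote URL in place of `origin.git`, the second clone skips fetching any object already present in the mirror.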
Kind of distinct but could solve similar problems where they overlap.
Worktrees let you have different branches from the same repo checked out to the file system at the same time, without needing separate repos.
The clone `--shared`, `--reference-if-able` and `--dissociate` options let you have multiple repos with less of a space hit.
They almost certainly can be used at the same time, though I've never heard of anyone doing so.
The limit of having at most a single copy of each branch checked out, which makes sense for development purposes, made worktrees unsuitable for my use, which was mostly (ab)using git as a deployment tool.
The word "ciao" derives from the Venetian phrase "s-ciào vostro" or "s-ciào" (schiavo/sciavo), literally meaning "(I am) your slave".
This greeting is analogous to the medieval Latin servus which is still used colloquially in parts of Central/Eastern Europe or the antiquated English valediction "Your Obedient Servant." https://en.wikipedia.org/wiki/Ciao
In Italy you say ciao both when you meet somebody and when you leave, while in other countries (e.g. Brazil) you say "ciao" only when you go away: if you use it when you meet somebody, it sounds sooooo weird :) :)
If you habitually try to open the wrong door at a restaurant entrance or wherever where there's a double door and one side is locked, quickly glance for the side that has the keyhole in it. That's the one that's open.
People have always thought, and will always think, that the time they are living in is the most important time ever. This is either a self-evident fallacy or a universal truth.
These times we are living in are not the most important. Or, if they are, then everyone everywhere at every time is living at a moment of equal importance.
Now, there is something called the "hinge of history": some philosophers have looked into the chance that "well, right now is the most important time, and people in the past and future are delusional." The chances are very, very low.
Think of this the next time you hear someone appeal to urgency as political motivation.
"Both the statement and its negation are true; the statement is true and its negation is false; the statement is false and its negation is true; both the statement and its negation are false"
I’m impressed. This is denying the existence of something that must exist by construction, like denying there can be three things when you know the concept of 100, 0 and counting.
In "Magic the Gathering Old School 7 Points Singleton" you play the card game Magic the Gathering but only with cards printed in the first 2 years (93/94) and only maximum 1 copy of each in your deck. And power cards are limited to 7 points according to a table found here https://ligaoldschoolmadrid.wordpress.com/2021/03/25/7-point...
I wish Arena had a mode for that. It sounds like a lot more fun than the modes they currently have. I’m still a beginner in MtG, would you say the way you described is easier than modern Magic?
Call me old fashioned but for me Magic is something you play holding physical cards in your hand.
If I look at modern cards they don't feel like the same game: digitally generated art, strange creatures ("Social Climber"?), and most of the cards have long, complicated texts, many using complicated mechanics.
The end conditions of the universe are just as symmetrical as the beginning conditions of the universe, under various cosmological models including the popular and oft subscribed to 'Heat Death.'
This means that we are not simply connected to the past by cause and effect -- we're also coupled to the future. For any specific end to come about, our experiences must be constrained.
In differential equations, we use the boundary conditions to solve for the dynamical behavior in the time interval between them.
In analysis, we use the existing conditions in a process called 'analytic continuation' to recover the whole process.
In fractals or the physical theories surrounding them (AdS/CFT correspondence), the boundary determines the bulk. So if you have only the frame/margins of an M.C Escher tesselation, the entire inside foreground (the bulk) can be recovered by gradually painting in what's missing.
So we are more of a continuation of both the past and the future than simply a remnant of the past.
While the past may have pushed us, the future attracts us.
"The mutual transformations of matter are not an accidental feature, but the very essence of nature. Without change, there would be no world. Heraclitus seems to acknowledge this in his praise of war and strife:
We must recognize that war is common, strife is justice, and all things happen according to strife and necessity. War is father of all and king of all; and some he manifested as gods, some as men; some he made slaves, some free.
Conflicting powers of opposites, including those of elemental bodies, make possible the world and all its variety; without that conflict we would have only lifeless uniformity."
The saying "blood is thicker than water" is from the Bible. The full quote is "the blood of the covenant is thicker than the water of the womb," which means the opposite of how "blood is thicker than water" is used.
Since we're speculating without proof, my guess is the (not biblical) saying has always meant exactly what you think it means, and someone invented the "blood = covenant, water = familial ties" inversion just to be edgy, ironic, for effect (whatever time period that was).
And as social media has taught us, humour and irony get lost very easily. I have to wonder how much of history we misunderstand because we didn't understand the tone.
Any idea where in the Bible it says that? I spent a couple minutes searching and found many sites saying the same thing you did, but nobody cited which verse. I searched a dozen or so translations of the bible and did not find it, either. Which makes me wonder even more, as there were just as many sites saying it originated in 12th century writings.
I could have sworn I read this in one of the gospels, but going back it appears you're right and this is not found in the old or new testament. I must have read this somewhere and taken it for granted. I apologize for not doing my research!
Humans look symmetric on the outside but have stark asymmetries on the inside. Thus, we have a spleen on the left but not the right. Our left lung has two lobes, but our right lung has three. Our heart and stomach are shifted left of center, our liver is shifted right of center, and our intestines meander throughout our abdominal cavity with no regard for the midline at all.
“Life can be much broader once you discover one simple fact, and that is, everything around you that you call life was made up by people that were no smarter than you … the minute that you understand that you can poke life … that you can change it, you can mould it … that’s maybe the most important thing.” – Steve Jobs
Almost everything around you has been shaped by the human hand.
I had this thought while walking down the street in a foreign country. And I remember it having quite a profound effect on me. What a marvelous thing it is, our hand!
(It even applies to many things we would call "nature", such as woods or meadows.)
2. "cal" pretty-prints the current month on the console. "cal 2022" pretty-prints the whole year.
> interesting to you
1. The Bank of England, one of the earliest banks to issue paper money, was established to finance war expenditure.
2. The Federal Reserve is holding (and hence funding) ~$2.7T worth of mortgage-backed securities[1]. It started out as a stopgap measure to cushion the 2008 crisis, but they have yet to discontinue it.
Edward Snowden, who had access to basically all the US government's secrets for the NSA and CIA with the highest above-top-secret clearance, went specifically looking for stuff related to this and says that he came up with nothing. Out of anyone who could know things about this, I find his situation and argument the most convincing.
Perhaps, but there's all kinds of tangentially related stuff that could have been leaked but wasn't, nor was mentioned by Snowden. One already-public example is the 2004 Nimitz incident, which is of genuine national security concern.
I know Snowden is highly-revered around here, but he's not all-knowing, and I'm fairly certain he did not have access to the entire US IC store of intel.
The Earth's orbit is elliptical. However, the orbit is much closer to a circle than to an ellipse (or at least to the image that comes to mind when you say "ellipse").
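To put a number on that: Earth's orbital eccentricity is roughly e = 0.0167, so the ellipse's two axes differ by only about one part in ten thousand:

```python
import math

# Earth's orbital eccentricity is roughly e = 0.0167 (figure from memory).
# For an ellipse, the semi-minor/semi-major axis ratio is sqrt(1 - e^2).
e = 0.0167
ratio = math.sqrt(1 - e**2)
print(f"b/a = {ratio:.5f}")
```

The flattening is tiny; the Sun sitting off-center at a focus (offset proportional to e itself, about 1.7%) is actually the much larger departure from a perfect centered circle.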
cat > epoch.c
#include <stdio.h>
#include <time.h>
int main(void){printf("%lld\n",(long long)time(0));return 0;}
^D
c99 epoch.c
./a.out
There is a small but growing community of people who eat only raw fruits and raw vegetables. I tried it and it was awesome (though it was expensive and I had trouble getting quality produce).
I find this community very knowledgeable, smart, and down to earth. It was impressive to see a 40+-year-old bodybuilder (he is no Arnold, but has an impressive physique for an amateur) doing so well on this diet.
If someone is interested, spend some time googling this topic. It is well worth your time.
Yes, but he tried it too late. It is not like cancer goes away by eating fruit for a few weeks. Also, eating a raw diet doesn't mean one should not get modern medical treatment, especially for killer diseases like cancer. It would take years to undo decades of bad eating habits.
Eating raw is not a panacea. But it is a solid step in getting and maintaining a healthy body and mind, especially as one ages.
Since this thread has leaned pretty heavily into Indo-European languages, here is something that is not as well known: indigenous languages in the Americas were nearly extirpated by colonialism. Take a look.
https://en.wikipedia.org/wiki/Classification_of_indigenous_l...
I agree that we have little control over our thoughts. But we are the "haver" of these thoughts. There is some system that the thoughts are originating from. I like to think (hehe) of it as the (real-time) operating system, with thoughts originating from different subprocesses and actions being in the user land. The primary function of this OS (not the thinking part "me") is to survive. We can't control the OS directly, but we can facilitate our actions and environment in accordance with our OS.
Well, I think that if you are aware (mindful) of your thought patterns you can steer them, and usually you become what you think, because your thoughts drive actions.
In my personal experience practicing self awareness has helped with this. I understand that there are factors outside of our control, but what goes on within us can be controlled to some extent with effort and time.
Thank you everyone for commenting; you did so because you were compelled by forces that you don't control, and thus did not do so of your own free will.
On a serious note, Sam Harris's essay on free will is a good read; nihilism and existentialism too.
A point I'd like to make is that surrendering to and accepting the things you don't control can bring you short-term peace but will crush your long-term peace. I personally don't care whether or not I have free will; there are things about myself and the world around me that I cannot control, but I will never stop fighting against them. Hurting people, for one. I think that's my definition of free will.
Someone else made a point that free will could be similar to the god of the gaps theory, where free will is in the gaps.
Lee Richmond threw the first perfect game in MLB history, in 1880. He later became a school teacher in Toledo, Ohio. One of his students was Norman Joss. Norman's dad was Addie Joss.
Addie Joss threw the fourth perfect game in MLB history, in 1908.
If you use Spotlight search to look up a word in the dictionary, you need to hit the down arrow repeatedly to select the word, or click on it. Instead, you can:
- Command + L to jump to the word
- Command + D to open Dictionary app with that word
At any moment there is at least one pair of diametrically opposite points on the Earth's surface which have the same atmospheric temperature and pressure. And that's because of maths. It's quite fascinating to me.
(pretty sure that's related to 'can't comb a hairy ball flat', so assuming a topologically spherical earth)
and any four-legged table that wobbles can be rotated to a position so as not to wobble (assuming the floor has no cliffs or other topological oddities in its unevenness)
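The temperature half of that claim has a simple 1-D version you can check numerically: along any great circle, define f(t) = T(t) - T(t + pi); since f(0) = -f(pi), the intermediate value theorem forces a zero, i.e. an antipodal pair at the same temperature. A Python sketch, with a made-up temperature profile T:

```python
import math

# Made-up temperature profile along a great circle (t in radians).
def T(t):
    return 15 + 10 * math.sin(t) + 3 * math.cos(3 * t)

# f(t) = T(t) - T(t + pi) satisfies f(0) = -f(pi), so it has a root.
def f(t):
    return T(t) - T(t + math.pi)

# Bisection between 0 and pi, keeping the invariant f(lo) < 0 < f(hi).
lo, hi = 0.0, math.pi
if f(lo) > 0:
    lo, hi = hi, lo
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

t = (lo + hi) / 2
# At t, the point and its antipode have (numerically) equal temperature.
assert abs(T(t) - T(t + math.pi)) < 1e-9
```

The full statement (temperature *and* pressure equal simultaneously) needs the Borsuk-Ulam theorem rather than simple bisection, but the flavor is the same.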
A tree that germinated before the Great Pyramid was built is still alive today: the Methuselah tree in Inyo County in eastern California. It is the oldest known non-clonal organism still alive.
You are most likely unable to lift your ring finger when your middle finger is bent under your palm, but it's possible to learn to move it, I just don't know how :)
You often hear that it was next to impossible to calculate with Roman numerals. That's not true at all. People developed a very ingenious scheme to do additions and multiplications using pebbles. Since the word for pebble in Latin is calculus, we now call calculus calculus.
How did it go? You represented each digit of a number with pebbles. You drew some lines in the sand: some rows and columns. The rows represented the digit magnitudes (I, V, X, etc.). The columns were for the operands, two columns per number. Why two? Because Roman numerals sometimes use negative digits.
Let's see a concrete example. You want to do the addition 28+44. That is XXVIII + XLIV.
You first draw the lines and put the pebbles in the corresponding squares. Note that XLIV has two negative digits (44 = XLIV = -10 + 50 -1 + 5). So the pebble for 10 and the one for 1 are put on the left side.
The first number does not have any negative digits, so no pebbles in its left column.
———————————————————————————————
L | || | o
X | oo || o |
———————————————————————————————
V | o || | o
I | ooo || o |
———————————————————————————————
So, now that we set the pebbles, we need to do the calculation. We can do it in many ways. Let's start by first canceling the digits that can be canceled: a positive 10 pebble with a negative 10 pebble, and a positive 1 pebble with a negative 1 pebble. The table looks now like this:
———————————————————————————————
L | || | o
X | o || |
———————————————————————————————
V | o || | o
I | oo || |
———————————————————————————————
At this point we have reduced the initial addition 28 + 44 (XXVIII + XLIV) to the equivalent one XVII + LV (17 + 55).
Now we just move the pebbles from the fourth column to the second column.
———————————————————————————————
L | o || |
X | o || |
———————————————————————————————
V | oo || |
I | oo || |
———————————————————————————————
We are almost done. We got the result as LXVVII. But this is not a valid Roman numeral, we need to do the simplification VV = X. We remove the two 5-pebbles and add a 10-pebble. Oh, and let's get rid of the last 2 columns.
————————————————
L | o ||
X | oo ||
————————————————
V | ||
I | oo ||
————————————————
So now our final result, in correct form, is LXXII, or 72.
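If you want to play with this, here's a small Python sketch of the pebble addition (helper names are mine; it keeps numbers in additive form, e.g. XXXXIIII rather than XLIV, so the negative-digit columns aren't needed):

```python
# Pebble-board addition, sketched in code. A number lives as pebble counts
# per magnitude row; addition merges the two boards, then "IIIII = V" and
# "VV = X"-style carries normalize the result.
MAGS = [1, 5, 10, 50, 100, 500, 1000]
SYMBOLS = {1: "I", 5: "V", 10: "X", 50: "L", 100: "C", 500: "D", 1000: "M"}

def to_pebbles(n):
    """Greedy decomposition of n into pebble counts per magnitude row."""
    counts = {}
    for m in reversed(MAGS):
        counts[m], n = divmod(n, m)
    return counts

def pebble_add(a, b):
    pa, pb = to_pebbles(a), to_pebbles(b)
    counts = {m: pa[m] + pb[m] for m in MAGS}
    for i, m in enumerate(MAGS[:-1]):      # carry from small rows upward
        ratio = MAGS[i + 1] // m           # IIIII -> V, VV -> X, ...
        carry, counts[m] = divmod(counts[m], ratio)
        counts[MAGS[i + 1]] += carry
    return "".join(SYMBOLS[m] * counts[m] for m in reversed(MAGS))

print(pebble_add(28, 44))   # LXXII
```

The carry loop is exactly the "simplification VV = X" step from the example, applied at every row.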
Multiplication can also be performed, and it's quite a lot of fun. I'll explain it in the post below.
Here we need 3 columns, one each for the multiplicands and one for the result.
Our example will be 12 * 61 = XII * LXI. I chose this example so that we don't need to use negative digits; negative digits can be handled easily, but you can get the idea without them.
We start with the operands on the first two columns. 12 has one pebble at the X row and 2 at the units row; 61 has one pebble at the L row, one at the X row and one at the I row. The column for the result is now empty.
D | ||
C | ||
———————————————
L | o ||
X o | o ||
———————————————
V | ||
I oo| o ||
———————————————
We start removing pebbles from the first multiplicand. For each pebble in the first multiplicand, we perform the multiplication with the second multiplicand, which in our case means we "copy" the second multiplicand into the results column with an appropriate shift in rows. Since the first multiplicand has 3 pebbles, there are three steps:
Step 1:
D | ||
C | ||
———————————————-
L | o || o
X o | o || o
———————————————-
V | ||
I o | o || o
———————————————-
Step 2:
D | ||
C | ||
———————————————-
L | o || oo
X o | o || oo
———————————————-
V | ||
I | o || oo
———————————————-
Step 3:
D | || o
C | || o
———————————————-
L | o || oo
X | o || oo o
———————————————-
V | ||
I | o || oo
———————————————-
So, the result is DCLLXXXII. Of course, LL is not really "legal", so we replace it with a C, so we get DCCXXXII, or 732, which is indeed the correct number.
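The multiplication procedure can be sketched the same way (helper names are mine; each pebble cleared from the first operand contributes one scaled copy of the second, which is the "shift in rows" from the steps above):

```python
# Pebble multiplication, sketched in code: clear the first operand's pebbles
# one at a time; each pebble of magnitude m adds a copy of the second operand,
# scaled by m, to the result column.
MAGS = [1, 5, 10, 50, 100, 500, 1000]

def to_pebbles(n):
    """Greedy decomposition of n into pebble counts per magnitude row."""
    counts = {}
    for m in reversed(MAGS):
        counts[m], n = divmod(n, m)
    return counts

def pebble_mul(a, b):
    result = 0
    for m, count in to_pebbles(a).items():
        for _ in range(count):      # one step per pebble, as in the example
            result += m * b
    return result

print(pebble_mul(12, 61))   # 732, i.e. DCCXXXII
```

The scaling by powers of ten is a clean row shift on the board (I -> X, X -> C, ...); pebbles on the V/L/D rows would need an extra splitting step, which is why the example avoids them.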
It's fun to blow people's minds by cutting/copying/pasting _in the shell_ without touching the mouse (Ctrl+w or Ctrl+k to cut, Ctrl+y to paste), and by using Ctrl+u to clear password inputs when you make a typo.
Colours are, in the outside world, fuzzy ranges of electromagnetic frequencies: they do «exist outside of your brain».
(Similarly, "brain" and "mind" are not the same.)
Otherwise, the intended general idea is that found e.g. at the beginning of Arthur Schopenhauer's Welt als Wille und Vorstellung:
> "The world is my representation": this holds true for every living, cognitive being, although only a human being can bring it to abstract, reflective consciousness: and if he actually does so he has become philosophically sound
Some electromagnetic frequencies are colors, but not all colors are electromagnetic frequencies. That’s why parent stated that color is a brain phenomenon.
What do you mean? (Outside the detail that "hues" are electromagnetic frequencies, while colours are compositions - there I just simplified.) Which colour is not such?
I would explain it as: colors are byproducts of electromagnetic frequencies, but they are qualia generated by your brain. There are many optical illusions that play with this fact. For example, in twilight, the frequencies you would call blue are different from what you call blue during daytime. This is because the brain/eye adjusts to the general light conditions (as sunlight is generally "redder" at twilight).
If the colors were the same thing as the electromagnetic frequencies, then the same electromagnetic frequencies would be the same colors, by definition.
They aren't. For example, put a card of color A in front of a background of color B; now move it in front of a different background of color C. You will experience color A as being a different color (especially if colors A, B, and C are chosen to maximize the effect).
The electromagnetic spectrum returned by card A isn't different, but the color perceived is. Thus, electromagnetic spectrum is "out there", but color is "in here".
The original post claimed that «Colors don't exist outside of your brain»: such a statement, while true, is also false, as its negation is true (we have mentioned paraconsistency in this very page): "colours" do exist outside your brain, as their nature also is, in a way, being electromagnetic phenomena, a function of frequencies.
While such an ontological property is pretty common, it is just summoned with some force when someone claims "[fuzzy] is [strict]".
> Thus, electromagnetic spectrum is "out there", but color is "in here"
Of course. That just depends on what you want colour to be. If you put it conceptually near "electromagnetic spectrum", then the distinction emerges.
It is the sum of two ranges (magenta is the sum of red and blue lights): it also exists "outside", like the rest.
For that matter, not even "pinkish grey" is defined by simply a frequency (the hue is, the colour is not): the definition for this purpose was meant to be concise, not literal.
Your previous comment stated that hues are electromagnetic frequencies, and the parent comment showed a counterexample. Color is a complex phenomenon that cannot be reduced to electromagnetic frequencies and their composition. For an example, see impossible colors [0].
> Your previous comment had stated that hues are electromagnetic frequency and parent comment showed a counterexample
That was not the point, and said counterexample is not such (also magenta is, in a way, "electromagnetic frequenc[ies]"). Impossible colours - virtual colours - do go more towards the counterexample, though up to a certain point as they refer to the mental phenomenon of colour and leave the physical nature of colour untouched.
But finally I get what you meant when you stated: «not all colors are electromagnetic frequencies». True ("not all perception is produced by a direct influx of"). But also false, apparently (to the best of my knowledge), as objectivized hybrid colours are compositions of electromagnetic ranges; objectivized virtual colours are as well, etc.
> Color is a complex phenomenon that cannot be reduced to electromagnetic frequencies and their composition
Yes, and nobody ever stated the opposite - no reduction was ever implied, no «overlapping» (as written in the other post): the point was expressed, nearby, at https://news.ycombinator.com/item?id=31217999
The distinction between colors exists outside our brains (different frequencies of em radiation), but the color "red", for example, only exists in your brain. Heck, I have no clue what someone else perceives when I perceive "red". As long as it is stable, it doesn't matter.
Well, our brains are only wired up to detect a limited spectrum of colors. There are insects and birds that can detect a greater variety of color. So, how can it just exist in our brains?
One way to look at colors is to see them as qualia, which is basically subjective experience. Electromagnetic waves come in a large range of frequencies, some of which excite the nerve endings in your retina, causing your brain to experience colors. But colors are a bit more complicated than that: for example, there is no such thing as just "yellow": yellow is the subjective term given by humans when there is a certain balance between the excitation of their three types of color receptors. You can experience the exact same sense of "yellow" from different types of light: there is true monochromatic yellow around 580nm, but you can also experience that same yellow when mixing the right amounts of red and green light. There is no wave of 580nm in the second case, but still you see the same color.
Our eyes are wired up to detect a limited spectrum of wavelengths. Although those wavelengths correspond to a color, not all colors correspond to a wavelength. That’s why OP stated that color is a brain phenomenon.
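That metamerism point above (the same perceived yellow from physically different light) can be illustrated with a toy model; the cone sensitivities below are invented for the example, not real measurements:

```python
# Toy metamerism sketch: two physically different spectra can excite the
# long (L) and medium (M) cones identically, so the brain reports one color.

def cone_response(spectrum, sensitivity):
    """Total cone excitation: sum of power x sensitivity over wavelengths (nm)."""
    return sum(power * sensitivity.get(wl, 0.0) for wl, power in spectrum.items())

# Invented sensitivities at three wavelengths (not real cone data).
L_cone = {540: 0.4, 580: 1.0, 620: 0.8}
M_cone = {540: 1.0, 580: 0.8, 620: 0.3}

monochromatic_yellow = {580: 1.0}        # a single 580 nm wave
red_plus_green = {540: 0.5, 620: 1.0}    # no 580 nm component at all

# Both spectra produce (numerically) identical cone excitations...
for cone in (L_cone, M_cone):
    a = cone_response(monochromatic_yellow, cone)
    b = cone_response(red_plus_green, cone)
    assert abs(a - b) < 1e-9   # ...so both are experienced as the same yellow
```

Real cone sensitivity curves are continuous and overlapping, but the principle is the same: the eye reports three numbers, not a full spectrum.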
RJ11 cables (i.e., telephone wires) are crimped straight onto the flat cable, so the pin order at one end is the reverse of the other. Ethernet RJ45 cables, by contrast, are wired so that pin 1 at one end connects to pin 1 at the other: whichever end of the wire you look at, the pin order is always the same.
The most commonly used and arguably most useful form of logic, first-order logic (FOL), is not powerful enough to describe even the natural numbers. In whatever way we try to describe them, we always end up with potential extra elements that we cannot account for.
Let me describe what I mean.
When we want to describe a structure, for example the natural numbers with their usual arithmetic (addition, multiplication and so on), we first decide on a language to use. Our language should contain the symbol 0 (so we can identify the first element) and symbols for the functions S (successor function), + and *. We also introduce variables such as x, y, etc.
Then we write down a list of axioms for the structure, i.e. logical statements that should characterise that structure. Such statements in first order logic are built out of the symbols we introduced, together with equality (e.g. 2+3=5 is a statement, where 2 is an abbreviation for S(S(0)) etc.), logical connectives such as "not", "and", "or", "implies", etc. (e.g. not(1 = 2), or "1 = 1 AND 2 = 2") and importantly quantifiers, "for all elements" and "there exists an element such that" (for example: "for all x, x = 0 OR there exists a y, such that x = S(y)").
A structure for a specific language is nothing more than a set of elements together with an interpretation for all the symbols. So, for our language we may just take the natural numbers with the usual interpretation of +, * etc. But we might just as well have chosen the single-element set {aardvark}, where 0 is interpreted as aardvark and all functions evaluate to aardvark no matter their arguments, i.e. aardvark + aardvark = aardvark. That's silly, but allowed. The natural numbers with their usual interpretation of the symbols satisfy all the sentences I introduced above; we say that this structure is a model of that set of sentences. Our aardvark-structure is not a model of these sentences, since not(1 = 2) is not true in it (remember that this is just an abbreviation of not(S(0) = S(S(0))), and in this structure this evaluates to not(aardvark = aardvark), which is false).
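The aardvark example can be made concrete with a toy interpreter (the structure encoding and the `numeral` helper are my own illustration, not any standard machinery):

```python
# A structure = a domain plus an interpretation of the symbols 0, S, +, *.
# Standard model: the natural numbers with their usual operations.
std = {
    "zero": 0,
    "S": lambda x: x + 1,
    "add": lambda x, y: x + y,
    "mul": lambda x, y: x * y,
}

# One-element structure: everything denotes aardvark.
aardvark = {
    "zero": "aardvark",
    "S": lambda x: "aardvark",
    "add": lambda x, y: "aardvark",
    "mul": lambda x, y: "aardvark",
}

def numeral(struct, n):
    """Interpret the numeral S(S(...S(0)...)) with n applications of S."""
    val = struct["zero"]
    for _ in range(n):
        val = struct["S"](val)
    return val

# The sentence not(1 = 2) holds in the standard model...
assert numeral(std, 1) != numeral(std, 2)
# ...but fails in the one-element structure: 1 and 2 both denote aardvark.
assert numeral(aardvark, 1) == numeral(aardvark, 2)
```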
We can now take the set of all statements which are true in the natural numbers, a.k.a. the theory of natural numbers, Th(N). This set is clearly infinite (for example, it contains the statement "n = n" for all n), but that doesn't bother us. Clearly, the natural numbers are a model of Th(N). But are there other models of Th(N)?
Well, in a sense trivially. I can just define the structure {aardvark0, aardvark1, ...} where, for example, aardvark2 + aardvark3 = aardvark5, and so on. That's also a model for Th(N), but if that feels like cheating it's because it is: these structures are exactly the same except for the names of their elements. We therefore call them "isomorphic" and treat them as the same.
But are there models of Th(N) not isomorphic to the natural numbers? Yes, there are.
Let's introduce a new symbol to the language, call it c. Then add to Th(N) the infinitely many statements not(0 = c), not(1 = c), not(2 = c), etc. There is no way we can interpret this symbol c in the natural numbers and make all these statements true: eventually there has to be some natural number which c is equal to.
But now comes the kicker: By the compactness theorem of first order logic, whenever we have a set of sentences so that all finite subsets have a model, the original set has a model. Now, for any finite subset of Th(N) plus the infinitely many statements involving c, we have a model, namely the natural numbers (since the subset is finite, there is a biggest n for which not(n = c) is in the set of statements; then interpret c as n+1). But then, by compactness, the original set of sentences has a model.
So there must exist a structure that makes all statements true that are true for the natural numbers (i.e. that is a model of Th(N)), but that also has some element c that is bigger than anything that can be "reached" by applying the successor function arbitrarily often to 0. What does such a structure look like? Well, it turns out that such an element must live in a so-called Z-chain, a set of elements that "looks like the (positive and negative) integers", but is totally disconnected from the natural numbers (this is because in Th(N) every number that is not zero has a successor and a predecessor); so in essence this structure contains the natural numbers and, disconnected from them, a copy of the integers. In fact, we may have more than one copy of the integers.
Compactness is a really powerful and weird theorem. It can be used to prove that any set of statements that allows for infinite models has infinite models of arbitrary cardinality (i.e. "as big as we want"). It can be used to construct an extension of the real numbers that contains "infinitesimal", non-zero elements (smaller than any positive real number), which is what motivated so-called nonstandard analysis (and makes Newton's and Leibniz's original intuition of "infinitesimally small quantities" precise). And so on.
If we want to actually describe the natural numbers uniquely, we have to turn to a stronger logic, such as second-order logic. In second-order logic we can not only quantify over elements, but also over sets (or, equivalently, properties). We may for example say "for any set, if it is not empty, it has a least element" (a statement that is true for the natural numbers if we introduce the usual ordering). We can then in second-order logic formulate the principle of induction, which suffices, with some other axioms, to describe the natural numbers uniquely (up to isomorphism, of course).
However, second-order logic has significant downsides. In particular, there exists no proof procedure in second-order logic that is both sound (it only proves true statements) and complete (it can prove all true statements). That means that, in practice, it's often not all that useful. We do have well-behaved proof procedures for first-order logic and that's why we usually stick to it.
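For reference, the second-order induction principle quantifies over a property/set variable P, something first-order logic cannot express in a single axiom:

```latex
\forall P\,\Big( \big( P(0) \;\land\; \forall n\,\big(P(n) \rightarrow P(S(n))\big) \big) \;\rightarrow\; \forall n\, P(n) \Big)
```

First-order Peano arithmetic can only approximate this with an axiom *schema*, one instance per definable formula, which is exactly the gap the compactness argument above exploits.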
In the western/european world, we have access to maybe a couple of dozen types of fruit (including various berries); try listing all of the fruit you know. That is just a small fraction of the (estimated) 2000 types of fruit eaten across the world, particularly in the tropics.
Relatedly, in central and south America, _sapote_ is the native american term for (roughly) "edible fruit". The word has nothing to do with the taxonomy of these plants. European colonists made the mistake of adopting the word "sapote" for various different tree species ("what's this called?"; "sapote"), which has led to a lot of subsequent misunderstanding. If you google for "sapote" you'll find some results which naively attempt to give a latin classification to the unqualified "sapote". Now we have fairly useless english names for a variety of these regional fruits, such as (read "sapote" as "edible fruit" in these examples): "green sapote", "white sapote", "black sapote", "yellow sapote", "sun sapote", "south american sapote", and so on - although many of them also have more distinctive names (such as "canistel" or "chupa chupa"), which are much less ambiguous and should be preferred.
The company United Fruit was formed by the merger of two companies in 1899. One half of the merger was originally founded by an American railwayman, who obtained a 99-year lease of 800,000 acres of land from the Costa Rican government in exchange for constructing and operating a railway from the capital city to the Caribbean port of Limon (during the construction of which thousands of laborers died from tropical disease). Once the railway started operating, usage was insufficient to pay back the debt on the loans that he'd taken out for construction, and so he pivoted to banana exports; the story goes that he had planted bananas along the tracks during construction as a cheap means of feeding his workers (which doesn't quite make sense given that bananas take 14+ months to fruit), and so he doubled down by increasing the density of planting over the area of the land that he owned, of course clearing more primary tropical forest to do so. The United Fruit company (today Chiquita) became heavily involved in politics in the countries it operated in, the most famous example being its significant role in instigating the CIA-backed coup in Guatemala in the 1950s that overturned the democratic government and installed a violent dictatorship, leading to 4 decades of civil war in which at least 140k people died, and setting Guatemalan economic development back by decades or more. So yeah, bananas.
Oh, and finally, there are dozens of varieties of bananas. The one that's exported most commonly (Cavendish) is relatively large and travels well, but is far from the most flavorful. There have been rumors of a killer fungus for many years, which hasn't had a significant impact on availability (yet); however, there are many varieties of bananas that aren't impacted by this fungus, so in the next decade or two we could see one of the other varieties replace Cavendish as the most widely consumed.
> Relatedly, in central and south America, _sapote_ is the native american term for (roughly) "edible fruit".
This is fairly common with Asian or any "exotic" language. The Japanese word sake means "alcoholic drink". If you walk into a Japanese bar, the bartender may ask you "Osake wa?", or "what would you like to drink?" (My favorite sake to consume in Japanese bars was a rum and coke, which is called a "Cuba Libre" outside the USA.) The fermented-rice beverage we know as sake is called in Japanese nihonshu, which simply means "Japanese liquor".
The Digest header field in HTTP contains a hash of the file along with the algorithm used to create the hash (e.g., SHA-256). You can check this field to know whether the file has changed, and download it only when it has.
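A sketch of the idea in Python, following the RFC 3230 `Digest: sha-256=<base64>` style (treat it as an illustration of the comparison, not a full header parser):

```python
import base64
import hashlib

def digest_header(body: bytes) -> str:
    """Build an RFC 3230-style Digest header value for a response body."""
    b64 = base64.b64encode(hashlib.sha256(body).digest()).decode()
    return f"sha-256={b64}"

cached = digest_header(b"version 1 of the file")

# Same content: the Digest value matches, so we can skip the download.
assert digest_header(b"version 1 of the file") == cached

# Changed content: the Digest value differs, so fetch the new file.
assert digest_header(b"version 2 of the file") != cached
```

In practice you'd issue a cheap HEAD request, compare the returned `Digest` value against the one you stored, and only GET the body on a mismatch.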
Very few people know the potent facts about the brain:
* most of your neurons are not in your "brain"
Roughly 80% of your neurons are not in what you generally denote by "brain". They are in your cerebellum.
The cerebellum has a distinct neural architecture; the difference is in fact clearly visible with the naked eye.
A very crude simplification would be to call it our GPU.
* the spinal cord is huge and is structurally a contiguous extension of your brain.
* your brain is mostly not filled with blood but with the transparent cerebrospinal fluid
Btw, there is a fluid present in the human body in almost twice the volume of blood: lymph. It's unclear to me why we don't see lymph in dissected bodies (possibly because of microvascularisation?). Also, anyone know how lymph is pumped? I don't think the heart pumps lymph, so is it stagnant?
* opioid, nicotinic, and cannabinoid receptors are endogenous neuroreceptors. They carry the names of exogenous drugs because those drugs happen to bind to them, but that also means the brain generates similar but slightly different analogues itself.
* in terms of neurotransmitters, types of receptors, and even patterns of activation of brain regions per stimulus, humans are not much different from rats or other mammals, and this applies to surprisingly high-level cognitive phenomena such as e.g. ADHD.
* the problem of consciousness/qualia
If qualia are supraphysical, then how can they be created by a physical process?
Moreover, where are you?!
Where are your qualia located in your brain?
And more remarkably, how many of you are there?
Our two hemispheres are largely (though not perfectly) symmetric; we might have one consciousness per hemisphere.
Or not? But then what about those people who have had the crosstalk between their hemispheres surgically cut?
They behave (mostly) normally, and have two independent brains.
However, the best evidence might be e.g. dolphins, which have mono-hemispheric alternating sleep.
So while one hemisphere dreams, the other stays awake, and next time they swap. Hence two distinct qualia-bearing brains in one dolphin brain.
> also anyone know how is lymph pumped? I don't think the heart pump lymph so is it stagnant?
I have a close friend who is working on getting certified as a massage therapist. Skeletal, muscular, and vascular anatomy make up a big part of the required knowledge for the certification.
Two interesting things I learned from them:
1. The lymphatic system is massive and runs alongside our vascular system. For example, there are many lymphatic vessels that run through the head and face and empty out near the clavicle, where their contents are then picked up by large veins to the heart, and then on to the liver for filtration.
2. The lymph system is not attached to our heart, and movement of fluid in those vessels is purely driven by the mechanical motion of our bodies. This is often why paraplegics or folks with other forms of paralysis end up with swollen feet: lymphatic fluid gets trapped in the extremities and collects wherever gravity dictates.
Fascinating stuff. Gave me a much greater appreciation for the level of knowledge and skill to be a massage therapist.
Wow, I wonder how much power gravity has over lymph distribution in the body?
I mean, whether I lie down or stand still probably impacts the "coverage" of our body by the immune system.
There might be correlates between time to recovery and body position and movement, then.
BTW, bugs do not have vascularisation: their body is one single flask of mostly stagnant fluid. And their brain is distributed.
Fun fact: one of the red-flag symptoms of a headache or migraine is if it is significantly less severe sitting up vs. lying down, or if it gets worse when you sneeze. I am NOT a doctor, but IIRC this can be a signal of serious underlying issues that could be causing such a headache, as it means the pressure change caused by lying down or sneezing is potentially stressing vascular structures in your head.
A thing I find interesting is that autopsies are finding that folks with tattoos end up with stained lymph nodes as the tattoo ink gets spread through the lymphatic system.
As tattoos have become hugely popular in the last couple decades I wonder if we’ll start seeing more health problems associated with the chemicals used in inks.
Ink is known to be found in the blood for a while after getting a tattoo (people can't even donate blood for a year afterwards where I live).
So it is not that surprising to find it in lymph nodes, whose very purpose is to accumulate foreign things in the body and make them "meet" immune cells. Also, tattoos are not completely static structures; they are tiny ink droplets that do get collected by our body, we are just unable to handle droplets above a certain size. Tattoo removal works exactly by breaking up the ink into smaller pieces which our cells can handle.
No idea, but from what I've heard it's commonly understood that people who regularly move and exercise will generally have better lymph circulation for that reason.
Fun fact: the brain also has a lymphatic system, which wasn't described until about ten years ago. It's called the glymphatic system, and apparently it was rather difficult to spot.
I have a vested interest in a very precise understanding of brain structures (qualia too, but a less pressing matter).
In particular:
- visual perception of self
- visual perception of others
and the somatosensory aspect of those points: how do I represent your emotions and body in my head, and my own body/image/emotions too; and how do they relate.
this is a terrible list of 'brain facts'. they're not interesting or constructive to a larger understanding, and analogizing the cerebellum to a GPU is naive and misleading to the point of being insulting. working neuroscientists don't think about consciousness; we look at neurobiology, develop experimental techniques to allow imaging of ever larger numbers of neurons, and try to understand how the brain computes.
I don't see how one could boil down the staggering complexity of the field into one-bite popsci factoids that leave a layperson with any useful model of the brain. People certainly try, though (ahem, Andrew Huberman...).
Here are some facts though: you have some neurons, and they do things sometimes. we have some vague ideas about what different bits of brain do in very specific contexts. we have absolutely no idea how it all fits together, and are decades away from such an understanding. the fruit fly visual system, which is several orders of magnitude smaller than mammalian cortex, has been exhaustively studied for 40+ years, and we are only recently starting to build a circuit-level model of the computation in its first layers. Here is your fact: brains are the most complex objects in the known universe.
*Puts Mr. Bucket of Water back where he came from*
FWIW that's not how you argue, that's just letting all the air out of the balloon.
As for your arguments, IMHO humans are fundamentally wanderers, and we stumble on discoveries and accidental solutions to problems as often (if not more often) than we intentionally set out to understand or achieve something. Given the expanse of the brain, the theoretical paradox of a scanner trying to scan itself, and the post-discovery nature of how we structure and organize things, I think we'll only make progress once we let go a bit.
Not to a stupid extreme, of course - unhinged conspiracy theories are as useless as trying to build load-bearing insights from drug experiences, IMHO - but just scientifically accepting that we'll never truly understand how we tick until we accept the wild nature behind it. We are not nearly there yet.
What an absurd dismissal.
> they're not interesting
You have a failure of appreciation; those facts are potent and surprising.
> constructive to a larger understanding
Indeed they are not; who cares?
Teaching neuroscience is of course a non-goal of my comment; what an irrelevant thing to say.
> analogizing the cerebellum to a GPU is naive and misleading to the point of being insulting.
Exactly, but if you had allocated a few neurons while reading my comment you would have seen I said "a poor simplist-ification" akin to only transmitting the idea that it is the place in the brain with the most compute power/parallelism.
> working neuroscientists don't think about consciousness; we look at neurobiology,
Such a useless thing to say. It is a truism that scientists mostly focus on what can be studied by means of metrology/empiricism, and indeed, for now, qualia are barely studiable.
Saying this has no more value than saying water is wet.
Your shallow dismissal is just a pretext for your layman explanation of what the brain is. An explanation with, btw, zero level of detail.
I have read over 1000 papers in neuropharmacology, and there are in fact a lot of things we understand about brains.
But C. elegans, with its ~150-300 neurons, which is much simpler than the fruit fly, is as of now unexplained in terms of neural code, despite an almost complete connectome of its neuroreceptors.
But no, C. elegans is not one of the most complex things in the universe. That is a fallacy that makes progress seem impossible. The truth is, C. elegans has a relatively reasonable complexity; however, we lack real-time datasets and we lack accurate instruments to observe and retroengineer the whole system.
> we lack real-time datasets and we lack accurate instruments to observe and retroengineer the whole system.
wrong again. whole-brain 2p imaging has been routinely performed in c elegans (and drosophila, for that matter) for a number of years now. in fact whole-brain ephys in c elegans is feasible, although it would be an experimental tour de force. c elegans does not have '~150-300 neurons', it has precisely 302, and the complexity therein cannot be overstated. it wasn't a layman explanation, for I am not a layman, and you are missing my point, which is that reductive facts about such staggeringly complex things are neither interesting nor useful. it's the feynman magnet explanation, but grumpier.
Come on, who do you think you are talking with?
I have read many papers about C. elegans and my claims are backed up:
"the inability to reliably identify all neurons within whole-brain recordings has precluded a full picture with circuit-level details."
"we expect our work to aid in these endeavors (Bargmann and Marder, 2013; Brennan and Proekt, 2019; Kaplan et al., 2020; Kato et al., 2015). A richer set of network responses, even for simple chemosensory inputs, has broad relevance in understanding sensorimotor processing"
"The application of NeuroPAL to whole-nervous-system activity recording is mainly limited by nuclear localization of the GCaMP activity sensor, as this may fail to capture highly localized events in axodendritic compartments. This limitation, combined with potentially incomplete or variable anatomical synaptic annotations, may have contributed to the lack of correlation we observed between functional activity and synaptic connectivity."
NeuroPAL is an extremely recent advance (2021) https://www.cell.com/cell/fulltext/S0092-8674(20)31682-2?_re...
it is the first to allow a complete mapping of GABA receptors (except the mitochondrial GABA translocator); however, complete maps for other receptors, such as mGluRs, NMDA, kainate, AMPA, etc., are LACKING, as I said. Let alone real-time data that take into account the individual variance over multiple worms (they should study genetic clones, btw).
real-time data are rare, nowhere near big data https://www.youtube.com/watch?app=desktop&v=B4gNamS8Ars and when they exist, they lack complete information (receptor locations, precise neurotransmitter density per axon, etc.)
C. elegans might be trivial to understand, but without a complete map of axons and receptors, it will still appear generally un-studiable.
As for the 150-300 neurons, it is a matter of debate. IIRC the paper I linked refers to ~150 neurons in the head; the rest would be the PNS, not the CNS. But I'm not sure of the exact CNS/PNS ratio beyond this paper.
i really couldn't care less how many papers you've read. whole brain calcium imaging is routine, no one expects it to be the whole picture. it's useful though.
and no it is not a matter of debate, there are precisely 302 neurons
it's useful, and yet understanding how the brain works precisely requires better instruments and datasets; hence the understanding of the brain is not bottlenecked on infinite complexity but, before anything, on good old metrology. Q.E.D.
oh yeah, well done completely avoiding the PNS question. so according to you, neurons in the tail are relevant to worm cognition? spoiler: except for tail-local stimuli (e.g. tactile, heat), I doubt it.
QED is something you write after proofs, not after incomprehensible sentences. there is a compelling argument to be made that what neuroscience is missing is theory, not data. there is quite a lot of data.
i am disappointed, although not surprised, that someone who relies on appeals to authority when arguing a point thinks that, in an organism so finely optimized that it uses biomechanical feedback in gait generation where larger organisms use central pattern generators involving larger numbers of neurons, some neurons are more important than others. neurons are expensive.
I will plainly repeat my citation from the previous C. elegans paper:
"the inability to reliably identify all neurons within whole-brain recordings has precluded a full picture with circuit-level details."
and the lack of glutamate receptor mapping.
Those are sufficient to assess that we are bottlenecked on data. We lack theories especially because we lack data. Yes, of course we have a lot of data, which is useless without a complete picture. Try debugging a complex piece of software with only partial, limited logging: the task easily becomes intractable.
But it seems you are more interested in a clash than in sharing knowledge. My argumentation is sound and valid, which makes it cogent; authority is unnecessary to come to an agreement.
Here's an interesting fact about the cerebellum: it has more neurons than the rest of the brain combined, and as far as we know, all it does is help with balance a bit. There have been case studies of people with a fully ablated cerebellum, and they can still walk, just a bit unsteadily.
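The back-of-the-envelope numbers support this. Using the commonly cited estimates from the literature (roughly 86 billion neurons in the whole human brain, about 69 billion of them in the cerebellum; these are approximations, not exact counts):

```python
# Commonly cited estimates (Herculano-Houzel and colleagues).
# These are rough approximations, not exact counts.
total_neurons = 86e9   # whole human brain
cerebellum = 69e9      # cerebellum alone
rest_of_brain = total_neurons - cerebellum

share = cerebellum / total_neurons  # fraction of all neurons in the cerebellum
ratio = cerebellum / rest_of_brain  # cerebellum vs everything else

print(f"cerebellum holds {share:.0%} of all neurons")      # roughly 80%
print(f"about {ratio:.1f}x as many as the rest combined")  # roughly 4x
```

So "more neurons than the rest of the brain combined" checks out, though the "90%" figure quoted elsewhere in the thread overstates it somewhat.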
> There have been case studies of people with a fully ablated cerebellum, and they can still walk, just a bit unsteadily.
?
I had heard the cerebellum is responsible for predicting the next token in a sentence (reminiscent of language models), but you're saying language is not impaired without a cerebellum?
The axons are just as important as the neurons. While neural networks are not good analogues of the real thing, one can imagine that no matter the neurons, without weights between them the system would be utterly useless. The cerebrum is chock full of axons.
> without weights between them it
We don't know where the weights are located, but yes, probably.
We know for sure that individual neurons and axons do not reduce to a single floating point, though. They have action-potential memory and many dynamic modulators.
Basically I believe the idea that neurons are mostly doing linear regression (which is what neural networks are) is very broken. I have read papers about neural coding and neurons are able to encode information in the spike timing frequency and in the strength (weight) but non of those two mechanisms even begin to explain how the system works.
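To make the rate-vs-timing distinction concrete, here is a toy leaky integrate-and-fire neuron. This is a deliberately crude textbook abstraction, not a claim about how real neurons code; every parameter value here is invented for illustration. Even in this cartoon, the input level shows up in both the spike rate and the timing of the first spike:

```python
# Toy leaky integrate-and-fire (LIF) neuron: a crude textbook sketch,
# not a model of real neural coding. All parameters are arbitrary.
def lif_spike_times(input_current, dt=0.001, t_end=0.2,
                    tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (seconds) for a constant input (arbitrary units)."""
    v = v_rest
    spikes = []
    for i in range(int(t_end / dt)):
        # Leaky integration: dv/dt = (-(v - v_rest) + input_current) / tau
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:       # threshold crossed: emit a spike and reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A stronger input spikes both earlier (timing code) and more often (rate code):
weak = lif_spike_times(1.2)
strong = lif_spike_times(3.0)
```

Of course, as the comment above says, neither rate nor timing alone begins to explain how the whole system computes; this only illustrates that the two channels are distinct.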
Anyway, to answer your comment: you have a tentative point, but it's mostly unknown. It is unknown what percentage of human axons are in the cerebrum vs the cerebellum. I would be surprised if the cerebellum had fewer, since I expect some kind of neuron/axon proportionality, especially since the main neuron type in the cerebellum is highly branched.
No, I meant to highlight that "most neurons are here" may not be the important question to ask, but rather their interconnectedness, which is achieved through axons, and which takes up most of the volume of the brain.
> your neurons are not in your "brain" 90% of your neurons are not in what you generally denote by brain. They are in your cerebellum.
Excuse the dumb question, but what's in the brain then? Or are you meaning to say that the cerebellum has a much higher density of neurons than the brain?
Re-read my sentence: yes, the cerebellum is part of the CNS, and so is the spinal cord.
Is the spinal cord part of the brain? Why not.
What I meant is that what people generally denote and imagine when we speak about the brain is actually the cerebrum
https://www.cancer.gov/publications/dictionaries/cancer-term...
The cerebellum, pons, etc are distinct extensions.
It is part of the brain, but not in the mind of the layman, which is what I meant. If you asked someone whether the vast majority of our neurons are located in the cerebrum (not the cerebellum), they would say "of course".
There are ghostly individual isolated hydrogen atoms drifting in interstellar space whose effective diameters are about 0.5 mm, while hydrogen atoms in our local environment are more like 1.1 angstrom (1e-7 mm) in diameter.
These are called Rydberg atoms and exist because the electron bound to the nucleus of these atoms has climbed up to very high excitation levels, a state that is very unstable, as any collision with another particle or interaction with electromagnetic fields will cause it to collapse down to lower energy levels. The extreme emptiness of interstellar space allows such atoms to persist for long periods.
The density of interstellar hydrogen clouds is much lower than what can be achieved with the best vacuum systems on Earth, but laser cooling techniques have now made the production and maintenance of such atoms possible, to the point where they can be used as delicate sensors, and have possible applications in quantum computing.
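For scale, you can back out roughly what principal quantum number a 0.5 mm hydrogen atom corresponds to, using the simple Bohr-model scaling r ≈ n² · a₀. This is a sketch only; the size of a real Rydberg state also depends on its angular momentum:

```python
import math

a0 = 5.29e-11      # Bohr radius in metres (CODATA value, rounded)
diameter = 0.5e-3  # claimed diameter of the interstellar atom, metres
radius = diameter / 2

# Bohr-model scaling: orbital radius grows as n^2 times the Bohr radius,
# so n is roughly sqrt(r / a0).
n = math.sqrt(radius / a0)
print(f"principal quantum number n ~ {n:.0f}")  # on the order of 2000
```

Radio recombination lines from hydrogen and carbon in states with n in the several hundreds have indeed been observed from interstellar clouds, so this order of magnitude is not outlandish.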
Most of the scientific problems (diseases, quantum gravity, etc) have been resolved, it's just that no one knows it. Because researchers are paid to produce new papers, but nobody is paid to extensively read the ocean of existing papers.
Isn't that what conferences and journals are for? Reviewing and selecting the most important papers?
When something is really solved (a solution exists without negative side-effects; and this fact has been verified several times), they should indeed look for new problems.
In theory, not in practice.
Revolutionary papers ignored by all are the rule, not the exception.
I have seen hundreds.
See e.g. this 410% reduction in ALL-cause mortality: no side effects, and in addition a significant amelioration of quality of life (sleep, etc).
It was twenty years ago and will keep being ignored for the rest of eternity.
https://pubmed.ncbi.nlm.nih.gov/12577695/
The human condition is miserable because even the scientists themselves are actually scientifically illiterate.
I’d like a 410% reduction in all cause mortality as much as the next Joe, but the Pubmed similar articles list isn’t very encouraging. Nearly all the 90 articles listed spanning 40 years are from Russia, many from the same small group of researchers. I observe: 1) there HAS been a sustained effort and publication among a substantial group of investigators, and yet 2) these dramatic results have failed to attract further attention.
I (non-scientist, applying Occam’s razor and the principle of “if it seems too good to be true, it surely is”) conclude there is probably a fundamental flaw, like, perhaps, after a few years of “improvement” people suddenly shrivel up into a husk.
Your Occam's razor is a tragic self-fulfilling prophecy. By assuming scientists are not scientifically illiterate, people silence those ignored potent results and dismiss them.
The fact is scientists are illiterate.
Russians are the world leaders in gerontology. One reason being they have had a well-funded longevity institute since the 90s.
Contrary to the rest of the world, they are not retarded and therefore do human trials, since billions of human lives are at stake.
And most importantly, theirs is public research. That is essential, since the most promising class of drugs is generally non-patentable and therefore non-profitable (peptides).
Khavinson is the greatest medicine researcher alive
https://www.researchgate.net/scientific-contributions/Vladim...
On pubmed you are seeing Russia-related papers because they cite each other. But actually many of those results have been independently reproduced by non-Russian researchers; e.g. thymalin has been the covid cure since day 1. I knew this, and all those lives and economic losses could have been saved, but the world was not ready to listen to the trivial truth.
The trivial, retarded truth is that aged people have a thymus that involutes; it atrophies by itself. By injecting them with the thymic hormone thymalin, their thymus grows back and rejuvenates. This results in a 610% increase in T-lymphocyte production, basically fixing age-related immunosuppression, which is the cause of covid mortality.
The world is too retarded and will remain retarded and suffering will persist.
However, Russians are not the only ones who make revolutionary discoveries ignored by all; see e.g.
https://touroscholar.touro.edu/sjlcas/vol13/iss2/7/
By becoming a meta-researcher like me, you observe how beautiful and futuristic the world could be. And you also observe that the revolution people are waiting for was discovered in the 90s and that it will never be used.
I was applying Occam to myself as a meta-researcher (I like that term!), not to "scientists". I will happily assume there is a wide range of literacy among scientists, and that it wouldn't take many positive results for research on thymalin to start ringing bells. I will also admit to the assumption that Russia cannot possibly be the only place where there is a high degree of willingness to experiment aggressively with, on, or on behalf of people who are running out of time. China seems like one place. India. Indonesia. It's very difficult to see how real progress in this field wouldn't be publicized.
As an aside, may I suggest to you that in a forum with many Americans, use of the word “retarded” will usually get you a very strongly negative reaction and obliterate consideration of any other points you may be making.
you are composed of many selves, subpersonalities, which you can talk to. they have their own agendas, views, emotions. i could talk to your inner teacher, the six-year-old in you, your lazy bones, etc. bonus fact: it goes deep and it's fun and interesting as hell.
I have chronic migraine. It sounds worse than it is; I rarely suffer attacks anymore, and when I do, they're usually mostly harmless, as in the case I'll describe below.
Contrary to popular belief, migraine is not a headache. It's something a little more like an epileptic seizure. Intense headaches are one common symptom, but there are a lot of others.
One common symptom of migraine is called *scintillating scotoma*. A scotoma is a blind or blank spot in the visual field. A scintillating scotoma caused by migraine is an area of the visual field that is temporarily replaced by a vivid visual aura. Its appearance is commonly the first symptom of a migraine attack, and is often followed by more unpleasant symptoms.
I've seen scintillating scotoma many times over the years. A few years ago I was reading and enjoying a good book, and my scotoma appeared. It was a humdinger: roughly triangular prisms of white light with utterly black zebra stripes moving along them kaleidoscope-style. It took up the lower left middle of my visual field, pulsing and radiating and turning, covering the book in my hand.
I was disappointed. I didn't want to stop reading. I was enjoying the book. So I decided to keep reading until the scotoma made it impossible to continue.
It never did.
It became so vivid that I couldn't see my hand at all, but I still had no trouble reading the book. I even started reading it aloud without any difficulty.
After a few minutes the scotoma faded. In that instance, it was not followed by any other symptoms (attacks without other symptoms have become more and more frequent over the years, since probably my forties).
I could now see the book again, and could confirm that what I had been reading was indeed what was on the page.
I thought about how to explain my experience. The best hypothesis I've come up with so far is that the neurological process of seeing and the neurological process of being consciously aware of what I'm seeing are not the same thing. They're independent processes. The scotoma prevented me from subjectively experiencing seeing the book, but did not prevent me from actually seeing it, nor from correctly interpreting what I was seeing.
This experience (and one or two other odd experiences) has led me to adopt the working hypothesis that many of our cognitive experiences are more complicated than we tend to assume, and that they're often made up of several more or less independent processes. We usually benefit if related processes pretty much work together, so they pretty much do. Because they do, we experience them all together as a single experience, but that's an illusion that unravels if circumstances screw up their synchronization.
I hope someone finds that as interesting as I do.