
Trying to understand it... if, by comparison, I'm using tmux, then switching to something like this adds mouse-based window (panel) management?


> For anyone wondering why it takes so long to actually switch this stuff out

One counterpoint is: do we really NEED to have brightly colored foods? It's a hard problem if you need a food to be bright red. But that has to boil down strictly to improving sales, right? Hypothetically, if all the artificial food dyes were banned, then all food companies would be on the same level playing field.


Color is definitely something that catches a person's eye, so if you have a "food product" that needs an extra push to convince someone to buy it, color is one way to do it. You can't taste it before purchasing, but you can see and smell it, so they push those levers as much as they can.


Mandate a big-font "contains carcinogens" label when your food contains this colour. Then let the buyer choose whether s/he finds this shade of bright red attractive or not.


Multiplied by the hundreds of decisions people make every day and now you know why we have the FDA.

People cannot become experts for every decision they must make.


You mean like the big ugly boxes on cigarettes?


Maybe, and I see your point, but there are few alternatives to having them on a cigarette pack, whereas a food maker could simply not dye the food and avoid this label. As a consumer, the choice is rather simple (for me at least).


Simple has nothing to do with it. The point is that people ignore warnings all of the time for various reasons. Some people look at the risk/reward factor and decide the risk isn't that bad. Some people have no idea what the risk means and ignore it. People go sky diving even after having to sign all of the liability release forms, make a video recording while reading a release statement, etc. In Hawaii, there are signs warning people not to travel any further for safety reasons, and that by proceeding they accept all liability. Nobody stops there, and the vast majority don't even stop long enough to read the sign.

The point being that humans are bad at weighing risk/reward and make bad decisions all of the time.


That may be true and I'm not against legislation to rule dangerous things out. I think however that big red warnings can be useful sometimes. Well, maybe not red in this case.


So in other words: no, we don't need it, particularly since people need to consume fewer ultraprocessed foods, not more.


Visuals have a pretty big impact on food. I wonder how many foods would just look disgusting without any food dyes. Reminds me of butter companies trying to pass legislation to bar margarine companies from dyeing their product to look like butter.


I thought this might be a useful article because I've often had a similar question. But there's a diagram with this text:

> More simply put: imagine that you have red, green, and blue light sources. What is the intensity of each one so that the resulting light matches a specific color on the spectrum?

> ...

> The CIE 1931 color space defines these RGB color matching functions. The red, green, and blue lines represent the intensity of each RGB light source:

This seems very oddly phrased to me. I would presume that what that chart is actually showing is the response for each color of cone in the human eye?

In which case it's not a question of "intensity of the light source" but more like "the visual response across different wavelengths of an otherwise uniform-intensity light source"?

... fwiw, I'm not trying to be pedantic, just trying to see if I'm missing the point or not.


The wording in the article is correct, despite being confusing. The CIE 1931 RGB primaries each stimulate multiple types of cone in human eyes, so the RGB Color Matching Functions (CMFs) don't represent individual cone stimulations.

However, the CMFs for LMS space[1] do directly represent individual cone stimulations. Like the CIE RGB CMFs, the LMS CMFs can also be thought of as the intensities of three primary colors required to reproduce the color of a given spectrum. The reason these two interpretations correspond for LMS space is that each primary would stimulate only one type of cone. However, unlike the CIE RGB primaries, lights that stimulate only one type of cone don't physically exist.

Finally, CIE RGB and LMS space are linear transformations of each other, so the CIE RGB CMFs are linear combinations of the LMS CMFs; each CIE RGB CMF can be thought of as representing a specific linear combination of cone stimulations (the combination excited by that primary color).

I often find it easiest to reason about these color spaces in terms of LMS space, since it's the most physically straightforward.
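
A minimal numpy sketch of that linearity, if it helps. The Gaussian "cone fundamentals" and the 3x3 matrix below are made-up stand-ins for the real tabulated CIE data; only the structure of the computation is the point:

```python
import numpy as np

# Toy cone fundamentals: Gaussians standing in for real tabulated LMS
# CMFs (e.g. the CIE 2006 cone fundamentals), sampled every 5 nm.
wl = np.arange(380, 781, 5.0)                  # wavelengths in nm

def bump(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

lms_cmf = np.stack([bump(570, 50),             # L (long) cone
                    bump(545, 45),             # M (medium) cone
                    bump(445, 30)], axis=1)    # S (short) cone

# Tristimulus values: each CMF integrated against a spectral power
# distribution S(lambda). Here S is a flat equal-energy spectrum.
S = np.ones_like(wl)
lms = (lms_cmf * S[:, None]).sum(axis=0) * 5.0  # Riemann sum, d(lambda) = 5 nm

# Because the two spaces are linear transformations of each other, a
# single 3x3 matrix carries LMS tristimulus values to RGB. This matrix
# is invented for illustration; it is NOT the published CIE transform.
M = np.array([[ 5.47, -4.64,  0.17],
              [-1.12,  2.29, -0.17],
              [ 0.03, -0.19,  1.16]])
rgb = M @ lms

# The same linearity applies to the curves themselves: each RGB CMF is
# a fixed linear combination of the three LMS CMFs.
rgb_cmf = lms_cmf @ M.T
```

The takeaway: "required intensity of each primary" and "weighted combination of cone responses" are the same linear-algebra object, just expressed in different bases.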

[1]: https://en.m.wikipedia.org/wiki/LMS_color_space


I'm the author of the article, and the intensity refers to the level of the light source used in the study that generated the data. See the study explained here: https://medium.com/hipster-color-science/a-beginners-guide-t...

But you're right: the intensity needed from each of the R, G, and B light sources to produce the correct color is directly related to how our eyes perceive each of those sources.


> I don't know why the narrative became "don't call it hallucination".

Context: "don't call it hallucination" picked up meme energy after https://link.springer.com/article/10.1007/s10676-024-09775-5, whose thesis is that "Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived."

Which is meta-bullshit because it doesn't matter. We want LLMs to behave more factually, whatever the non-factuality is called. And calling that non-factuality something else isn't going to really change how we approach making them behave more factually.


How are LLMs not behaving factually? They already predict the next most likely term.

If they could predict facts, they would be gods, not machines. It would be saying that in all the written content we have, there exists a pattern that allows us to predict the answers to all questions we may have.


The problem is that some people are running around and saying they are gods. Which I wouldn't care about, but an alarming number of people do believe that they can predict facts.


Our system can effectively predict facts.

It logics its way to it.

By predicting the next word in a sequence of words.

Sure? It kinda sounds plausible? But man, if it’s that straightforward, what have we been doing as a species for so many years?


> Unless you could somehow make an Alcubierre warp drive.

Even if you could make it, and even though it's theoretically possible for the warp bubble to move through space faster than light, it's a separate and completely open question how you would actually get it moving that fast in the first place.


There are a lot of real-world caveats behind those range estimates. I just took my long-range Tesla Model Y, with an advertised 326-mile range, on a multi-day road trip, and I was stopping to charge about every 100 miles. I would love to get something with 3x the advertised range.


A pet nit, and the standards probably don't permit this, but for encoding 128-bit numbers I prefer base-57 in my own implementations: 22 characters for a 128-bit encoding, same as base-64, and you can split it into two 11-character 64-bit encodings. You avoid the two non-alphanumeric characters in base-64 as well as similar-looking characters like l1 and oO0. And it takes less visible space, so it's a bit easier for debugging and tabular output, with otherwise no loss of generality.
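
Sketched in Python, it looks something like this (the exact 57-character alphabet is illustrative; the only real requirement is dropping base-64's two extra symbols plus the confusable l, 1, o, O, 0):

```python
import secrets

# The 62 alphanumerics minus the confusable 0, O, 1, l, o leave 57
# symbols. This particular ordering is an assumption, not a standard.
ALPHABET = "".join(
    c for c in "0123456789"
               "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
               "abcdefghijklmnopqrstuvwxyz"
    if c not in "0O1lo")
assert len(ALPHABET) == 57

def encode57(n: int, width: int = 11) -> str:
    """Fixed-width base-57; width=11 covers 64 bits since 57**11 > 2**64."""
    out = []
    for _ in range(width):
        n, r = divmod(n, 57)
        out.append(ALPHABET[r])
    assert n == 0, "value too large for this width"
    return "".join(reversed(out))

def decode57(s: str) -> int:
    n = 0
    for c in s:
        n = n * 57 + ALPHABET.index(c)
    return n

# A 128-bit value splits into two independently decodable 64-bit
# halves, 11 characters each: 22 characters total, same as base-64.
x = secrets.randbits(128)
hi, lo = x >> 64, x & ((1 << 64) - 1)
s = encode57(hi) + encode57(lo)
assert (decode57(s[:11]) << 64) | decode57(s[11:]) == x
```

The fixed width is what makes the two-halves trick work: each 11-character chunk decodes on its own.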


Evidence suggests that none of this is true. Just look at how Google Fiber wasn't able to navigate the myriad regional regulatory roadblocks thrown up by local governments when lobbied by incumbent ISPs. It's reasonable to expect exactly the same scenario to play out if Google tries to make a competitive ride sharing service from scratch.


That's a pretty good example. But in the case of Fiber, part of the problem was that other ISPs own/control infrastructure (like poles) to which they're legally obligated to give competitors access...and they would just not comply, or drag their feet on it. Fiber physically couldn't execute because of this. Of course there was corruption involved, as you pointed out, but in the case of ride share apps I can't immediately think of anything analogous, where Uber could physically prevent Waymo from competing with them.


I think there is true emergence coming from these newest large language models, which is something that can't be captured by reducing the system to merely a mathematical Plinko machine. To understand emergent behaviors, the system really does need to be treated like a black box and experimented on.

And a secondary point: what if a large fraction of our intelligence comes from language, not the other way around? If that's true, then we might be well on our way to building the spark of an AGI.


I don't remember where the idea came from, but I heard a hypothesis that posited precisely this. If you look at linguistic history, many (if not most) of our words ultimately derive from some analogy to another thing or idea. Our modern languages thus have embedded in them the combined learning of thousands of generations, and since it is much easier to use the words representing those analogies than to come up with and communicate the analogies in the first place, anyone who learns a modern language is comparatively brilliant next to someone who lived long, long ago.


Love the demo jitters as he picks up the frosted flakes, "Well that's working. That's good."


