Some lidar units simply use a wavelength to which the human eye is opaque.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
Absolutely, and it's a major cause of cataracts. Nearly 100% of people with lenses in their eyes will eventually get cataracts if they are ever exposed to unfiltered sunlight.
I remember those old cellphones with weak IR filters. It was a scandal because light clothing turns out to be more transparent to IR than to visible light, so in bright light they were acting as a sort of clothing "X-ray". Creepers on the Internet tried to start a whole new genre of porn but were shut down in a hurry by cellphone manufacturers, who added robust IR filters to the next generation of smartphones.
Shame that perverts had to ruin that for us; it was kinda neat to point a TV remote at the camera and see the bulb light up.
It's funny how big an impact individual developers can have with such seemingly simple publications. When the article with that diagram was released, I was changing jobs, and I distinctly remember that the diagram was extensively discussed and compared to company standards at both the old and the new place.
The first time I heard how many neurons the gut has, I went: "well obviously, there's a lot of receptors and muscles to support peristaltic movement; plus probably some end-to-end connections for proper synchronization and total volume management - it just has to be processed somehow, and locally (in the gut, as opposed to the distant brain) seems like an obviously good choice".
Never tried to verify that with state of the art though
OpenCode works nicely; I wish its web mode would be developed more. As it currently stands, you have to work on the same host in order to complete the full OAuth login flow (it redirects to localhost) for subscription-based providers (Claude, ChatGPT). I wish it used some BASE_URL variable I could set, which would be used instead.
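For what it's worth, the change I'm wishing for is tiny. A hedged sketch of the idea (all names here, including the BASE_URL variable and the callback path, are hypothetical illustrations, not OpenCode's actual code):

```python
import os
from urllib.parse import urlencode

def build_authorize_url(client_id: str, auth_endpoint: str) -> str:
    # Hypothetical: fall back to localhost unless BASE_URL is set,
    # so the OAuth callback could land on a remote host instead.
    base = os.environ.get("BASE_URL", "http://localhost:8080")
    params = {
        "client_id": client_id,
        "redirect_uri": f"{base}/oauth/callback",
        "response_type": "code",
    }
    return f"{auth_endpoint}?{urlencode(params)}"
```

With BASE_URL pointing at the machine you're browsing from (or a tunnel), the provider's redirect would reach you even when the agent runs elsewhere. (Whether a given provider allows non-localhost redirect URIs for its registered client is a separate question.)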
Oh, wow. I was debugging the same thing in Copilot (the only "work approved" agent) in IntelliJ, which showed that Copilot didn't return command output at all. I wrote a comment under the relevant issue, if you're curious.
I think there are quite a few bugs lingering in those agent CLIs, and better observability would help a lot with reporting them.
Taking yours for a spin this evening, thank you!
Practically anybody actually creating with this class of models (mostly diffusion-based) is using ComfyUI.
The community takes care of quantization, repackaging into GGUF (the most popular format), and even speed optimization (lightning LoRAs, layer skipping). It's quite extensive.
I personally find nothing about ComfyUI to live up to that name. Node-based workflows are unruly, and you have to know in advance what you need to do for anything to work. Just poking around and figuring stuff out is nearly impossible even for technically literate but AI-inexperienced folks.
You could argue that is what pre-made workflows are for, but that doesn't work super well for users that are off the blessed path in terms of not having Nvidia hardware like everyone assumes. I personally find using stable-diffusion.cpp on the command line to be considerably easier to figure out. Last I saw, it's even shipping a usable demo web ui for those that really want one (my workflow benefits from heavier scripting, so point and click is far too slow and clunky).
I'm really confused at that take.
If you watch the Corridor channel on YouTube, you can catch a lot of moments where Unreal is treated as a draft or an on-set reference, and it almost always gets replaced before the final ships. Something doesn't add up here.
Having watched a great deal of Andromeda, Star Trek, and Hercules/Xena growing up, I would submit that weak video effects can be perfectly fine as long as the actors take them seriously enough.
I'm thinking quite a bit about this at the moment in the context of foundational models and their inherent (?) regression to the mean.
Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).
These ingest vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces that contain meaningful semantic features.
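To make "embedding spaces" concrete, here's a minimal, purely illustrative sketch using PCA as the optimal *linear* autoencoder (the real models are deep networks; the patch and embedding sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 256 flattened 8x8 "satellite patches" (64 features),
# compressed to a 4-d embedding by a linear encoder.
X = rng.normal(size=(256, 64))

# PCA gives the best linear autoencoder: the encoder is the top-k eigenvectors
# of the data covariance, and the decoder is just the transpose.
X_centered = X - X.mean(axis=0)
cov = X_centered.T @ X_centered / len(X)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
W_enc = eigvecs[:, -4:]                  # encoder: 64 -> 4
Z = X_centered @ W_enc                   # embeddings
X_hat = Z @ W_enc.T                      # reconstruction: 4 -> 64

print(Z.shape)  # (256, 4)
```

The foundation models do the same compress-and-reconstruct trick nonlinearly and at enormous scale; the hope is that the bottleneck `Z` ends up encoding semantics (land cover, structures) rather than raw pixels.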
The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.
There's also an observation by one of the authors of the Major-TOM model (which also provides satellite input data for training models) that the scaling rule does not seem to hold for geospatial foundation models: more data does not seem to result in better models.
My (completely unsupported) theory on why that is: unlike writing or coding, with satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and was proven to work. Segmenting out forests and water? Sure, easy - these models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, etc. As I see it, the currently used frameworks do not support that very well.
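Put differently, the needle-in-the-haystack issue is an extreme class-imbalance problem. A toy sketch of the standard inverse-frequency reweighting (the pixel counts are entirely made up) that a training framework would need to apply before rare classes matter in the loss at all:

```python
import numpy as np

# Toy segmentation labels: class 0 = common (forest/water),
# class 1 = rare (e.g. flooded pixels). Counts are invented for illustration.
labels = np.array([0] * 9900 + [1] * 100)

# Inverse-frequency weights: without them a model can reach 99% pixel
# accuracy by simply never predicting "flood".
counts = np.bincount(labels)
weights = len(labels) / (len(counts) * counts)
print(weights)  # the rare class is weighted far above the common one
```

With these weights each class contributes equally to the total loss, which is roughly what you'd want if the rare events are the whole point of the model.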
But I'd be curious how others see this, who might be more knowledgeable in the area.