Hacker News | eurekin's comments

Some lidar units simply use a wavelength that the human eye is opaque to.

The grandparent comment is about camera lenses with little to no near-infrared cutoff filtering. Some older iPhones were like that, and that was the original breaking story.


> human eye is opaque to

Absorbing the laser isn't necessarily harmless. Very hypothetically, it could lead to cataracts.


The Sun emits much stronger IR, near-IR, and UV

Absolutely, and it is a major cause of cataracts. Close to 100% of people with lenses in their eyes will eventually get cataracts if they are ever exposed to unfiltered sunlight.

And staring directly at the sun is not recommended.

That's why we don't look at it.

I remember those old cellphones with weak IR filters. It was a scandal because light clothing turns out to be more transparent to IR than to visible light, so in bright light they were acting as a sort of clothing "X-ray". Creepers on the Internet tried to start a whole new genre of porn but were shut down in a hurry by cellphone manufacturers adding robust IR filters to the next generation of smartphones.

Shame that perverts had to ruin that for us; it was kinda neat to point a TV remote at the camera and see the bulb light up.


Can't wait until they finally get to real world CAD

There's a CAD example in that same thread: https://x.com/JeffDean/status/2024528776856817813

I know this isn’t necessarily “real world CAD” but Claude Code is not too shabby at OpenSCAD.

It's funny how big an impact individual developers can have with such seemingly simple publications. At the time the article with that diagram was released, I was changing jobs, and I distinctly remember the diagram being extensively discussed and compared to company standards at both the old and the new place.

The first time I heard how many neurons the gut has, I went: "well, obviously: there are a lot of receptors and muscles to support peristaltic movement, plus probably some end-to-end connections for proper synchronization and total volume management. It all has to be processed somehow, and locally (in the gut, as opposed to the distant brain) seems like an obviously good choice." I never tried to verify that against the state of the art, though.

OpenCode works nicely; I wish its web mode would be developed more. As it stands, you have to work on the same host in order to pass the full OAuth login flow (it redirects to localhost) for subscription-based providers (Claude, ChatGPT). I wish it honored some BASE_URL variable I could set, to be used instead.
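To illustrate the kind of override meant here (this is a hypothetical sketch, not OpenCode's actual code; `BASE_URL`, the callback path, and the auth endpoint are all made-up names): the OAuth authorize URL could build its `redirect_uri` from an environment variable, falling back to localhost only when it is unset.

```python
import os
from urllib.parse import urlencode

# Hypothetical sketch of a configurable OAuth redirect base.
# BASE_URL, /oauth/callback, and auth.example.com are illustrative
# names, not OpenCode's real configuration or endpoints.
def build_authorize_url(client_id: str, state: str) -> str:
    # Fall back to localhost when no override is set (the behavior
    # described above); setting BASE_URL would let the callback land
    # on a remote host instead of the local machine.
    base = os.environ.get("BASE_URL", "http://localhost:8080")
    redirect_uri = f"{base}/oauth/callback"
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "state": state,
    })
    return f"https://auth.example.com/authorize?{params}"
```

With `BASE_URL=https://dev.example.net` the provider would redirect back to the remote host, so the login flow could complete without being on the same machine.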

Oh, wow. I was debugging the same thing in Copilot (the only "work approved" agent) in IntelliJ, which showed that Copilot didn't return command output at all. I wrote a comment under the relevant issue, if you're curious.

I think there are quite a few bugs lingering in these agent CLIs, and better observability would help a lot with reporting them. Taking yours for a spin this evening, thank you!


Yeah, debugging swallowed command outputs is definitely a pain.

Thanks for giving it a spin tonight! Let me know if you run into any issues.


Practically anybody actually creating with this class of models (mostly diffusion-based) is using ComfyUI. The community takes care of quantization, repackaging into gguf (the most popular format), and even speed optimization (lightning LoRAs, layer skipping). It's quite extensive.
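For readers unfamiliar with the quantization the community does: a toy sketch of blockwise 8-bit quantization in the spirit of gguf's Q8_0 layout (weights split into fixed-size blocks, each stored as int8 values plus one per-block scale). The block size and layout here are illustrative, not the exact gguf on-disk format.

```python
import numpy as np

BLOCK = 32  # weights per block; gguf's Q8_0 also uses 32, but this is a toy

def quantize_q8(w: np.ndarray):
    """Blockwise int8 quantization: one scale per block of weights."""
    w = w.reshape(-1, BLOCK)
    # Choose each block's scale so its largest magnitude maps to 127.
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales

def dequantize_q8(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int8 values and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)
```

Storage drops from 4 bytes per float32 weight to roughly 1 byte plus one scale per 32 weights, which is why quantized repacks are so much smaller than the original checkpoints.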

I personally find nothing about ComfyUI to live up to that name. Node-based workflows are unruly, and you have to know in advance what you need to do for anything to work. Just poking around and figuring stuff out is nearly impossible even for technically literate but AI-inexperienced folks.

You could argue that is what pre-made workflows are for, but that doesn't work super well for users who are off the blessed path, e.g. not having the Nvidia hardware everyone assumes. I personally find using stable-diffusion.cpp on the command line considerably easier to figure out. Last I saw, it even ships a usable demo web UI for those who really want one (my workflow benefits from heavier scripting, so point-and-click is far too slow and clunky).


I'm really confused by that take. If you watch the Corridor Channel on YouTube, you can catch plenty of cases where Unreal is treated as a draft or on-set reference and almost always gets replaced before the final ships. Something doesn't add up here.


There are definitely movies and TV shows now using Unreal for the final render, but mainly because they can't afford anything else.


Having watched a great deal of Andromeda, Star Trek, and Hercules/Xena growing up, I would submit that weak video effects can be perfectly fine as long as the actors take them seriously enough.


I'm trying to run it, but getting odd errors. Has anybody managed to run it locally and can share the command?


That actually puts into words what I felt but couldn't express. Spectacular quote.


I'm thinking quite a bit about this at the moment in the context of foundation models and their inherent (?) regression to the mean.

Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).

These take in vast amounts of satellite data and, with the usual autoencoder architecture, try to build embedding spaces that contain meaningful semantic features.
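As a toy illustration of the embedding idea (not any of these models' actual architectures): for a purely linear autoencoder, the reconstruction-optimal encoder/decoder pair coincides with PCA, so a minimal "embedding space" can be sketched with an SVD.

```python
import numpy as np

# Toy sketch, assuming nothing about the geospatial models above:
# a linear autoencoder whose optimal weights coincide with PCA's
# principal directions, computed here via SVD.
def fit_linear_autoencoder(X: np.ndarray, dim: int):
    mean = X.mean(axis=0)
    # Right singular vectors = principal directions of the data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:dim]                       # encoder weights (dim x features)
    encode = lambda x: (x - mean) @ W.T  # project into the embedding space
    decode = lambda z: z @ W + mean      # decoder is the transpose
    return encode, decode
```

Real geospatial foundation models replace the linear maps with deep networks and train on reconstruction (or masked-prediction) objectives, but the compress-then-reconstruct structure is the same.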

The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.

There's also an observation by one of the authors of the Major-TOM model (which also provides satellite input data for training models) that the scaling law does not seem to hold for geospatial foundation models, in that more data does not seem to result in better models.

My (completely unsupported) theory on why that is: unlike writing or coding, in satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and was proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, etc. As I see it, the currently used frameworks do not support that very well.
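The needle-in-a-haystack problem can be put in toy numbers (illustrative figures, not from any of the benchmarks above): with very rare positives, a model that never predicts the rare class still scores near-perfect accuracy while being useless for exactly the events we care about.

```python
import numpy as np

def accuracy_and_recall(y_true: np.ndarray, y_pred: np.ndarray):
    """Overall accuracy, and recall on the rare positive class (label 1)."""
    acc = float(np.mean(y_true == y_pred))
    pos = y_true == 1
    recall = float(np.mean(y_pred[pos] == 1)) if pos.any() else 0.0
    return acc, recall

# 10,000 pixels, of which only 10 (0.1%) belong to the rare event,
# e.g. flooded or landslide-affected ground.
y_true = np.zeros(10_000, dtype=int)
y_true[:10] = 1
y_pred = np.zeros_like(y_true)  # "always predict background" baseline

acc, recall = accuracy_and_recall(y_true, y_pred)
# acc == 0.999, recall == 0.0
```

A reconstruction or plain accuracy objective happily rewards this baseline, which is one way to phrase why more of the same data may not help with rare-event tasks.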

But I'd be curious how others see this, who might be more knowledgeable in the area.

