Hacker News | new | past | comments | ask | show | jobs | submit | supermatt's comments

If you want to fall for the QNX bait and switch a 3rd time, more fool you.

Can you elaborate on this?

They've moved back and forth between being partially source-available and fully closed source at least twice. It's a similar story with usage licenses, with hobbyist and non-commercial access variously being granted and then pulled away multiple times.

On at least one occasion, the license was changed overnight leaving a large enthusiast community in the lurch.

Given the history, there's every reason to suspect that there'll be yet more rug-pulls in the future.


They also previously had a desktop version you could install at home.

They don't promise anything "Open Source" here.

The bait and switch was around the “free” license for non-commercial use. They got lots of people using it and porting software to it, and then they revoked that free license.

Then they did exactly the same thing again a few years later.

And now, for the 3rd time, they are offering a “free” non-commercial license.


> the Framework 16 feels more like a €1200-€1500 laptop at best… but two thousand Euros for this kind of laptop is just absurd.

You are literally buying a whole new laptop because the keyboard is broken and too difficult to replace; with a Framework, that's a €65 replacement part.

With Framework, you are paying a 30% premium for the modularity and upgrade potential.

If that’s not important to you then why would you even buy a framework laptop?


Is there less regulatory oversight when purchasing assets instead of the company, or do Nvidia really believe the FTC/DOJ are that blind? (Or doesn’t it matter in the current climate?)

The near exclusive global provider of AI chips taking key employees from and “licensing” the technology of the only serious competitor while quite specifically describing it as “not acquiring Groq as a company” seems quite obviously anti-competitive, and quite clearly an attempt to frame it as not.


America has not had functional anti-trust laws for the better part of the last 40 years. The current climate is just a peak.

Is there less regulatory oversight when purchasing assets instead of the company?

The near exclusive global provider of AI chips purchasing the only serious competitor's technology while quite specifically describing it as “not an acquisition” seems a bit…


> If Apple knew they would need to expand this feature past their gear, possible they’d never have implemented.

And this is EXACTLY why they need to open up more core access to their devices. So someone else can innovate.


Why they need to be forced to, you mean?

I'm not seeing an incentive for them to change: being the only source of good workflows for their users is their whole thing - "It just works" - regardless of whether that's true in practice.


Which other company spends as much investing in UX? There is not a single other company on the planet with as polished a user experience as Apple, so who would develop a better workflow?

If you want the "it just works" experience, you can still buy the Apple products though, that's not changing. You just also have the option to not do so.

Indeed. They have shown (and keep showing via blatant malicious compliance) that they can’t be trusted to play fairly.

That depends on the interpretation of a market, which is why laws like the DMA define a market based on its size. In the iOS market, Apple have a monopoly.

EDIT: Downvotes for what? That’s literally what the DMA is for. If you don’t like it, take it up with your representatives - it’s nothing to do with me.


> In the iOS market, Apple have a monopoly.

So if a company creates a widget and sells that widget, thereby creating a market, they are automatically a monopoly? How is anything invented without creating a monopoly?

Also, why is it an iOS market and not a mobile phone market? If we compare features of devices, there's not a lot of difference between modern phones, so segregating them by what OS they run seems odd.



Regardless of the content itself, a naive redaction of a high-profile PDF that still exposes the text contents seems relevant to the community. Maybe you are in the wrong place?

That movement is effectively “consuming” the differential.

What is the max token throughput when batching? Lots of agentic workflows (not just vibe coding) run many inferences in parallel.

It seems like every time someone does an AI hardware “review” we end up with figures for just a single instance, which simply isn't how the target demographic for a 40k cluster is going to be using it.

Jeff, I love reading your reviews, but can’t help but feel this was a wasted opportunity for some serious benchmarking of LLM performance.
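To be concrete about what such a benchmark should report: batched throughput is the total tokens generated across all concurrent streams divided by the wall-clock window they span, not the average of per-request rates. A minimal sketch of that arithmetic (the `Completion` record and all numbers are hypothetical, not figures from the review):

```python
from dataclasses import dataclass

@dataclass
class Completion:
    tokens: int   # tokens generated for this request
    start: float  # request start time, seconds
    end: float    # request end time, seconds

def aggregate_throughput(completions):
    """Batched tokens/sec: total tokens over the wall-clock window
    spanned by all concurrent requests."""
    total = sum(c.tokens for c in completions)
    wall = max(c.end for c in completions) - min(c.start for c in completions)
    return total / wall

# Hypothetical: 8 concurrent streams, 512 tokens each, over ~10 s of wall time.
batch = [Completion(512, 0.0, 10.0) for _ in range(8)]
print(aggregate_throughput(batch))  # 409.6 tokens/sec aggregate
```

A single-stream review number would report roughly 51 tokens/sec here; the aggregate figure is what a cluster buyer running parallel agents actually cares about.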


I note the lack of human portraits in the example cases.

My experience with all these solutions to date (including whatever Apple are currently using) is that when viewed stereoscopically the people end up looking like 2D cutouts against the background.

I haven't seen this particular model in use stereoscopically so I can't comment as to its effectiveness, but the lack of a human face in the example set is likely a bit of a tell.

Granted, they do call it "Monocular View Synthesis", but I'm unclear as to what its accuracy or real-world use would be if you can't combine two views to form a convincing stereo pair.


They're using their Depth Pro model for depth estimation, and that seems to do faces really well.

https://github.com/apple/ml-depth-pro

https://learnopencv.com/depth-pro-monocular-metric-depth/


I'm not sure how the depth estimation alone translates into the view synthesis, but the current on-device implementation is definitely not convincing for literally any portrait photograph I have seen.

True stereoscopic captures are convincing statically, but don't provide the parallax.


Good monocular depth estimation is crucial if you want to make a 3D representation from a single image. Ordinarily you have images from several camera poses and can create the Gaussian splats using triangulation; with a single image you have to guess the z position for them.

For selfies, I think iPhones with Face ID use the TrueDepth camera hardware to measure Z position. That’s not full camera resolution, but it will definitely help.
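A depth map is exactly what pins down that missing z coordinate: given the camera intrinsics, every pixel can be back-projected to a 3D point, which is the starting geometry for splats or view synthesis. A generic pinhole-model sketch (this is not Apple's pipeline; `fx`, `fy`, `cx`, `cy` are assumed intrinsics in pixels):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to camera-space 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

Any error in the estimated depth shifts points directly along the viewing ray, which is one reason flat-looking "cutout" people are a common failure mode when the estimator returns near-constant depth across a face.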

