g-mork's comments

it's still down, get back to work

I was going to link you the Apple Vision Pro as a counterpoint, but after clicking the link and being reminded of what that product actually looks like, I really don't know what to say any more. I'm literally dumbfounded anyone could make your comment at all

To their credit, they specifically decided not to make a big deal out of AR like Meta did, and to keep production small and expensive. They realized the tech wasn't ready for a mass-adoption campaign. I'd say Apple, overall, has been pretty cautious with AR. I wouldn't be surprised if they even have the guts to cancel that project entirely, like they did with self-driving cars.

That's not credit at all. If your strongest defense of AVP is "at least they're not Meta", then you've stopped making grounded observations and gone straight to ad hominem.

I'd also go as far as to say that Apple knew they could have made the Vision Pro better. It should be running a real computer operating system, like the headset Valve is making, and Apple knows that. The arbitrary insistence on iPad-tier software in a $3,500 headset guaranteed it was unlovable and dead on arrival.


I ran into an AVP recently and it really is a great piece of hardware. It has only two issues: price and software. The former is forgivable, because the hardware is genuinely impressive and the price is justified. The latter is not, and is the original sin that killed it.

There's an unfulfilled promise of spatial computing. I wish I could load up my preferred CAD program and have wide, deep menus quickly traversable with hand gestures. Barring that, the least it could do is support games. Maybe if some combination of miracle shims (FEX-Emu, Asahi, w/e) were able to get onto the platform it might be salvageable. The input drivers alone would be a herculean task.


If it's over SMB/Windows file sharing then you might be looking at some kind of latency-induced limit. AFAIK SMB doesn't stream uploads; they occur as a sequence of individual write operations, each of which I'd guess also produces an acknowledgement from the other end. It's possible something like this (say, the client waiting for an ACK before issuing the next pending IO) is responsible.

What does iperf say about your client/server combination? If it's capping out at the same level, it's the network; otherwise it's something else in the stack.
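
To make the latency ceiling concrete, here's a back-of-the-envelope sketch in Python (the IO size, round-trip time, and queue depth are illustrative assumptions, not measurements from your setup):

    # Throughput ceiling when the client waits for an ACK before issuing
    # the next write, i.e. only a few IOs are ever in flight at once.
    def max_throughput(io_size_bytes, rtt_seconds, outstanding=1):
        # bytes per second achievable with `outstanding` writes in flight
        return io_size_bytes * outstanding / rtt_seconds

    # 64 KiB writes over a 0.5 ms round trip, strictly one at a time:
    print(max_throughput(64 * 1024, 0.0005))      # ~131 MB/s, far below 10GbE line rate
    # Same link and IO size with 16 writes in flight:
    print(max_throughput(64 * 1024, 0.0005, 16))  # ~2.1 GB/s

If iperf shows the raw link is fine but copies still cap out, that kind of serialization is the usual suspect.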

I noticed recently that OS X file IO performance is absolute garbage because of all the extra protection functionality they've been piling into newer versions. No idea how any of it works; all I know is that some background process burns CPU just from simple operations like recursively listing directories.


The problem I describe is local (U.2 to U.2 SSD on the same machine, drives that could easily perform at 4GB/s read/write, and even when I pool them in RAID0 into arrays that can do 10GB/s).

Windows has weird behaviors for copying. For example, if I pool some SAS or NVMe SSDs in a Storage Spaces parity layout (~RAID5), the performance in CrystalDiskMark is abysmal (~250MB/s), but a Windows copy will be stable at about 1GB/s over terabytes of data.

So it seems that whatever they do hurts in certain cases and severely limits the upside as well.
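
One possible mechanism (my assumption, not anything confirmed against Storage Spaces internals): sub-stripe writes to a parity layout pay a read-modify-write penalty, while large sequential copies can issue full-stripe writes that skip the reads entirely. Rough arithmetic, with made-up stripe and IO sizes:

    # Read-modify-write on a sub-stripe update: read old data + old parity,
    # then write new data + new parity -> roughly 4x IO amplification.
    def small_write_io(io_bytes):
        return 4 * io_bytes

    # A full-stripe write only pays the parity overhead; no reads needed.
    def full_stripe_write_io(data_bytes, parity_fraction):
        return data_bytes * (1 + parity_fraction)

    print(small_write_io(4096))                     # 16 KiB of IO per 4 KiB write
    print(full_stripe_write_io(256 * 1024, 1 / 3))  # ~1.33x overhead for a 3+1 layout

That would line up with benchmark writes cratering while a big sequential copy stays near 1GB/s, assuming the slow CrystalDiskMark number is a sub-stripe write workload; it's a guess, not a measurement.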


warmly encourage you to avoid reading the header files of the Dahua camera SDK


Mind sharing a bit more insight?


Extra privacy features and named after UK slang for MDMA, hrm.


Molly is the American slang, Mandy is the UK slang for MDMA.


If it was AI-generated, I had no difficulty with it; it was certainly on par with typical surface-level journalistic summaries, and vastly better than losing 2 hours of my life to watching some video interviews. :) AI as we know it may not be real intelligence, but it certainly has valid uses.


I get a lot from seeing the person talk vs reading a summary. I have gone back and watched a lot of interviews and talks with Ilya. In hindsight, it is easy to hear the future ideas in his words at the time.

That said, I use AI summaries for a lot of stuff that I don't really care about. For me, this topic is important enough to spend two hours of my life on, soaking up every detail.

As for being on par with typical surface-level journalism, I think we might be further into the dead internet than most people realize: https://en.wikipedia.org/wiki/Dead_Internet_theory


> I get a lot from seeing the person talk vs reading a summary

And some people simply have the opposite preference. There are lots of situations where sound is simply not an option. Some people are hearing impaired. Some people are visually impaired, and will definitely not get much from watching the person speak. Some ESL people have a hard time with spoken English. Even some native English speakers have a hard time with certain accents. Some people only have 5 minutes to spare instead of 50.

All of those problems are solved by the written word, which is why it hasn't gone away yet, even though we have amazing video capabilities.

You can have a preference without randomly labeling everything you don't like as AI slop.


Handforth Parish Council, Internet edition. You have no authority here, djb! No authority at all.


Talk about a gargantuan project.. also awesome to bag such a thing. He's lucky to even have the resources to store^W warehouse it


It's not that much space in some parts of the US where properties are measured in acres.


Same old-world thinking. Google uses single PSUs too; real redundancy came from having multiple machines, and Hetzner certainly makes that cheap enough to accomplish on a budget. You can also pay for 10 Gbit as an option with Hetzner, plus a bunch of other custom upgrades, but the further you move outside their sweet spot, the more it's going to cost.


> I saw a Boa snake at a zoo once and knew I wanted to name my next project after it

Top tier reasoning, literally makes me want to use it

