
Are there video "thumbprints" like those that exist for audio (used by SoundHound etc.) - i.e. a compressed set of features that can reliably be linked to unique content? I would expect that's possible and a much faster lookup for 2 frames a second. If so, "your device is taking a snapshot every 30 seconds" sounds a lot worse than what's actually happening (not defending it - it's still something I hope can be legislated away - something can be bad and still be exaggerated by media)

There are perceptual hashing algorithms for images/video/audio (DSP- and ML-based) that could work for that.

Given that the TV is trying to match one digital frame against another digital frame, you could probably get decent results even with something super naive like downsampling to a very low resolution, quantizing the color palette, then looking for a pixel-for-pixel match.

All this could be done long before any sort of TV-specific image processing, so the only source of "noise" I can think of would be from the various encodings offered by the streaming service (e.g. different resolutions and bitrates). With the right choice of downsample resolution and color quantization I have to imagine you could get acceptable results.
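As a rough sketch of what that naive pipeline could look like (pure Python, grayscale frames as 2D lists of 0-255 values; every name here is hypothetical, not any real library's API):

```python
# Hypothetical sketch of the naive approach described above: block-average
# a frame down to a tiny grid, quantize each cell into a few buckets, then
# compare signatures exactly. Coarse quantization is what absorbs small
# differences from encoder resolution/bitrate variants.

def signature(frame, grid=8, levels=4):
    """Downsample `frame` to grid x grid by block averaging, then quantize
    each block average into `levels` buckets (0 .. levels-1)."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    sig = []
    for gy in range(grid):
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            avg = sum(block) / len(block)
            sig.append(int(avg * levels / 256))
    return tuple(sig)

def frames_match(a, b):
    return signature(a) == signature(b)
```

Because the signature is a small tuple, server-side lookup is just a hash-table probe, which is why matching a couple of frames per second would be cheap.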


That's basically what phash does
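For reference, pHash proper hashes a DCT of the downsampled image; the simplest member of the same family, average hash (aHash), conveys the idea in a few lines of pure Python (illustrative names, not any library's API):

```python
# Illustrative average-hash (aHash), a cruder cousin of pHash: for an image
# already shrunk to something tiny like 8x8, set one bit per pixel depending
# on whether that pixel is above the image mean.

def average_hash(img):
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits; near-duplicate images score low."""
    return bin(h1 ^ h2).count("1")
```

Matching then becomes "is the Hamming distance below a small threshold", which tolerates re-encoding noise that an exact pixel comparison would not.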

I've been led to believe those video thumbprints exist, but I know a hash of the perceived audio is often all that's needed to match what is currently being presented (movie, commercial advert, music-as-music-not-background, ...).

This is why a lot of series uploaded to YouTube will be sped up, slowed down, or have their audio’s pitch changed; if the uploader doesn’t do this, it gets recognized by YouTube as infringing content.

The actual story wasn't anything special, but I thought the way it told the story through mechanics was really well done. It wasn't the first to do that, but it did it at a larger scope than anything else at the time.

Yes, I agree with this - as an expression of learning by wordless doing it was a really profound experience. The ending video of real life was great, reminded me of when I played a lot of Katamari and started seeing the whole world as things to roll up. I share the sentiment in these comments about Blow himself, but The Witness is a great game - though I get why people don’t like it: it’s a slow burn and requires a tolerance for pretentiousness. I don’t feel it was too long, it was as long as it needed to be, it’s just a big game

There's no one size fits all - what they're saying would absolutely fail at the company I'm currently at, but sounds like it works for them. The key thing is to have a process that works well for the people who are there, and hire people who work well under those conditions. The people who do well at our company would not do well at their company, and vice versa. I don't like how this article makes a claim about what works well for them is actually a universal truth. It really depends on the people there.


I agree the coffee at Starbucks isn't great, but I feel the coffee at Dunkin and McDonald's is better than Starbucks. I like black coffee though, and Starbucks drip or americano just isn't good - I see Starbucks more as a dessert place than a coffee place, and judged on that they're good. I think more people want sweet milky coffees, and that's fine. That plus the environment is a big pull of Starbucks. It's not my thing, but I get why people like it

EDIT: Also, I'm pretty sure the better coffee at McDonald's came after Starbucks - IIRC they put a lot of effort into improving their offering after Starbucks exploded


The Starbucks near me doesn't even brew coffee any more. They switched to automatic machines that "brew" a cup in about 15 seconds (i.e. vending-machine quality). It's undrinkable now. In the future I'd only order espresso drinks or cold brew there.


The SB blonde roast is a lot better than Dunkin or McDonalds, in my opinion.


agreed, but it's only ever brewed until like 11am because they've weirdly associated it with the morning


I've instinctively been avoiding hospitals run by PE, and now I have a good reason to.

I'll never forget, with my first kid, how they tried to scare us into genetic testing - I mean, they had a pamphlet and video they were required to show us that were meant to scare us into it, but I could tell from the doctor's face that she wasn't into it and felt like she was apologizing when she said she had to play the video and leave the room. We switched to a different hospital almost immediately.


I believe the wristband came from this acquisition: https://www.theverge.com/2019/9/23/20881032/facebook-ctrl-la...

Insanely cool, and awesome to see a viable waveguide device.

It's so cool that it might outweigh my reluctance to strap facebook to my face.


CTRL-Labs themselves acquired the wristband tech from North/Thalmic, who pivoted into smart glasses for a few years before being acquired by Google.

> In an interesting twist, CTRL-Labs purchased a series of patents earlier this year around the Myo armband, a gesture and motion control device developed by North, formerly known as Thalmic Labs. The Myo armband measured electromyography, or EEG, to translate muscle activity into gesture-related software inputs, but North moved on from the product and now makes a stylish pair of AR glasses known as Focals. It now appears the technology North developed may in some way make its way into a Focals competitor by way of CTRL-Labs.


That's not true. Thalmic did develop an sEMG band, but the tech described here was created by CTRL-labs, and development continued within Meta.


nope. the technology was invented by CTRL-labs, and developed further at Meta after the acquisition.

https://www.nature.com/articles/s41586-025-09255-w

yes the Myo was a similar, earlier, and less capable technology also based on EMG sensing.


I had one of those Thalmic Myo armbands 12ish years ago. Used it a couple of times and then forgot about it. From memory, there were only a few gestures available to program, and anything I could think to sync them to was just as easily handled with keyboard shortcuts (show desktop, close window, change workspace, etc).


The technology was "invented" by CTRL-Labs like how OpenAI "invented" transformer-based language models.


Do you have any evidence, or are you just going to keep repeating a bald-faced lie?


lol you can go ask Thomas Rearden himself if Ctrl-labs "invented" gesture recognition from sEMG.


> measured electromyography, or EEG

Should be EMG, but is it normal EMG or sEMG?


surface!


Yes, it seems that the gesture detection is based on the raw time-frequency sEMG signals, with the data and model-implementation code from the Nature paper available here [1], [2].

[1] sEMG data:

https://fb-ctrl-oss.s3.amazonaws.com/generic-neuromotor-inte...

[2] Code for exploring surface electromyography (sEMG) data and training models associated with Reality Labs' paper:

https://github.com/facebookresearch/generic-neuromotor-inter...


Disney is about to have a serious talk with Facebook. Disney Research has had a prototype of gesture detection via wristband electrical sensing since 2012: https://youtu.be/E4tYpXVTjxA?t=2m8s


Besides it being different technology, the original Myo wristband was also introduced around 2012. The patents were later acquired by CTRL-labs, which was then acquired by Meta. So you can be pretty confident that they have the patents.

Although surface electromyography is quite a bit older than that.


not the same tech at all.


This is why Jobs spent months prepping for each presentation.

But hey, at least it's not all faked


When I was at Meta (then Facebook), people lived and died by the live-demo credo.

Pitches can be spun, data is cherry picked. But the proof is always in the pudding.

This is embarrassing for sure, but from the ashes of this failure we find the resolve to make the next version better.


Yep I hope that mindset never dies. Meta is one of the last engineering-first companies in big tech and willing to live demo something so obviously prone to mishaps is a great sign of it. It's not unlike SpaceX and being willing to iterate by crashing Starships for the world to see. You make mistakes and fix them, no big deal.


why did they choose to air this live?

For an internal team sure absolutely, but for public-facing work, prerecorded is the way to go


One of my internships was preparing Bill Gates' demo machines for CES. I set up custom machine images and ran through scripts to make sure everything went off w/o a hitch (I was doing just the demos for Tablet PC; each org presumably had its own team preparing the demos!)

Not doing it live would've been an embarrassment. I don't think the thought ever crossed anyone's mind, of course we'd do it live. Sure the machines were super customized, bare bones Windows installs stripped back to the minimum amount of software needed for just one demo, but at the end of the day it sure as hell was real software running up there on stage.


If it was pre-recorded we’d know it was staged and assume they didn’t have a working product.

Their actual result was pretty bad, but, ya know, work in progress I guess.


Watch their big "Metaverse" presentation where it's all vaporware and faked; presumably this is a cultural shift from that era.


The same unwarranted sense of confidence that tells them this product is worth making tells them that they can easily pull off a live demo. This is called "culture fit"


I saw Jobs give a demo of some NeXT technology and the system crashed and rebooted right in the middle of it. He just said “oops” and talked around it until the system came back up.


i love jobs but i do remember the “everybody please turn off your laptops” presentation.

live demonstrations are tough - i wish apple would go back to them.


Totally agree. Up until a few years ago failures during live demos on stage used to be a mark of authenticity, and companies playing recordings was always written off as exaggerated or fake. Now all of Apple's keynotes are prerecorded overproduced garbage.


"At least it's not faked" was my main reaction, too. Some other big-tech AI-related demos the last couple years have been caught being faked.

Zuckerberg handling it reasonably well was nice.

(Though the tone at the end of "we'll go check out what he made later" sounded dismissive. The blame-free post-mortem will include each of the personnel involved in the failure, in a series of one-on-one MMA sparring rounds. "I'm up there, launching a milestone in a trillion-dollar strategic push, and you left me @#$*&^ my @#*$&^@#( like a #@&#^@! I'll show you post-mortem!")


I appreciate the live demo, but I'm surprised they didn't at least have a prerecorded backup. I wanted to see how video calls work!


Considering there's no camera pointing to your face they can't be all that interesting.


Matlab/SciPy/Julia are totally different, since those function more like user interfaces - they are directly user-facing. You're not building an app with Matlab (though you might be with SciPy or Julia, it's not the primary use case); you're working with data. C++, on the other hand, underpins a lot of key infrastructure.


I am a big fan of KeePass, which I sync with Dropbox; good apps exist for iPhone/Android/Mac/Windows/Linux. But I don't know if that's more secure than a password provider like 1Password. At the least, not fitting the typical attack profile, plus being able to control the data, open source code, and offline access, feels like the optimal way for me.

