
I can't wait for the day when Flock's "proactive AI" flags the way you are driving or your vehicle movements as suspicious and alerts LE to just ... "check in on you".

Or when they enable the mics in their devices to just start recording your conversations with a friend in a public place, and the system does the same: "AI didn't like what you were talking about, so it alerted the local PD."





> alerts LE to just ... "check in on you".

This is currently an epidemic. Drivers are targeted for “random” checks by police for a number of non-falsifiable factors (e.g. the evergreen “your license plate light was out…huh, looks fine now”) that overwhelmingly correlate with driver income and race.

That’s not whataboutism; I am genuinely not sure if ALPR/automated policing systems stand to make that situation worse or better. Are Flock and friends likely to be abused in the same way that human police traffic stop reasons are?


I have every reason to believe so.

Flock's founder's belief is that he wants to eliminate all crime (literally) with Flock.

So in his eyes, false positives are inherently acceptable, and preferable to false negatives.

And I feel (actually, I know; I wasn't in Sales, but I did work at Flock) that one of their selling points to agencies is almost a "whitewashing" of such practices: "Oh, our PD wasn't targeting anyone, we were just acting on the recommendations of the Flock surveillance system."



