
Facial recognition is one of the most harmful and dystopian technologies we've made. We need to draw strict lines.

With that kind of reasoning we could go anywhere... Hey, we have Alexa in every house in the country, what do we do?

Option 1 - record & access anybody's intimate life and solve abuse, crime, blabla

Option 2 - do nothing

Of course, going full unlimited surveillance could, maybe in some cases, help with crime (there was actually a trial where Alexa recordings were handed over to convict a murderer). But there is a balance to strike, and mass facial-recognition spying built on unconsented data scraped from the whole web goes too far.

You might have your photo on your blog or on your university website, yet you never signed any agreement that Clearview could analyse & monetize your biometric data and sell it to the shadiest people in the world.

At least in the EU, Clearview is clearly illegal, yet governments close their eyes to it because they like the power it gives them.

Even if we suppose we want to accept the use of facial recognition in some cases, it should only be done by the government, on a limited dataset, case by case, on a specific person, & only after the okay from a judge.

But not by a psychopath's shady, illegal company whose initial business was to spy on the friends of billionaires' daughters.


I agree 1000% that the unconsented use of someone’s image as training data or as a target match for facial recognition ought to be illegal, or at least as inadmissible in court as the statement “we had Joe review the footage, and he swears that the guy there in brown is his old college roommate’s friend Jimbo”

But matching against a govt-collected set (photo IDs, for example) has well-known precedent in fingerprinting.

I think requiring a judge to sign off is the right idea in principle, though in practice, I’d prefer it to be an open-records administrative function of the govt instead, in particular for cases where there is no known ID for the suspect.

To get a facial recognition search done, you would have to provide a video with enough context to show someone committing a crime. That video is reviewed by the non-police administrative function, and then the request is either granted or denied.

Records are kept for later review and appeal, and are ideally public record, although I can certainly imagine cases (like porch pirates) where a public record of the video could open up an accuser (or an accused, for that matter) to later threats of harassment or violence.


Dark mode everything. Code editor in dark mode + OS theme in dark mode + DarkReader extension in your browser.

I don't know if the risks of an operation to remove them are worth it...

Also, it can vary a lot: you can have more at some times and fewer at others. They can end up drifting out of the field of view over time, and things get better.

I also heard that staying well hydrated helps, but I couldn't really confirm that correlation from my personal experience.


VSCode has telemetry, the extension marketplace can't be used by non-Microsoft products, VSCode is not open source (only VSCodium is), many of the MS extensions are not open source (like the live collaboration one), etc.

VSCode followed the classic big tech recipe: 1) make it open source to look like the good guys & get adoption and contributions; 2) close-source many key components & add spyware.

Pretty much the story of Android too.


Svelte hasn't taken over because React is way too popular already. If you want a job you have to learn React (the same way Java is still one of the top languages to get a job, but for legacy rather than technical reasons).

Svelte's performance is a nice bonus, but it is the last reason I prefer Svelte over React. I prefer Svelte because it is truly reactive (unlike React), which makes everything easier, cleaner & more readable.

Svelte is the only framework you can grasp in 10 minutes by just looking at a few examples. To get started, you don't even have to read a tutorial or documentation; the code is self-explanatory.

React is a complex beast and, in my opinion, an overkill, overengineered environment for expressing UIs.

I find Svelte to be the only framework that "makes sense". Ultimately UI is not that complex: it's a store -> derived variables -> UI pipeline where each step is a reactive function of the previous one. A framework should let you express this very concisely and take care of everything else for you.

If x = y + z, then I want to write x = y + z, end of story. Like in an Excel sheet, just write the formulas and that's it. No useState, hooks, shouldComponentUpdate(), and other weird stuff.
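
To illustrate, here's a minimal sketch of that in Svelte, using the classic $: reactive-declaration syntax (the y/z counters are just a made-up example):

    <script>
      // Made-up example state; y and z are ordinary component variables.
      let y = 2;
      let z = 3;

      // Reactive declaration: x is recomputed whenever y or z changes,
      // just like a spreadsheet formula. No hooks, no dependency arrays.
      $: x = y + z;
    </script>

    <button on:click={() => y += 1}>y = {y}</button>
    <button on:click={() => z += 1}>z = {z}</button>
    <p>x = y + z = {x}</p>

The React equivalent would need useState for y and z, plus either a recompute on every render or a useMemo with a dependency array.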

Svelte is to React what Pluto is to Jupyter notebooks: less popular because of legacy, but, in my opinion, clearly more elegant & cleaner.


I’m in agreement with you that the performance is only a part of why Svelte is so nice.

The state management in Svelte is fantastic.


Hi, thanks a lot for the presentation, it was very interesting. I have a few questions:

1) Why choose a CPU approach? The trend seems to be doing 2D graphics on the GPU (e.g. Mozilla's WebRender or Google / Raph L.'s Piet-GPU).

2) Is Pixie going to be the rendering lib for Fidget? If so, do you think the performance / battery usage will be okay despite the drawing being done on the CPU?

3) I loved your last presentation on Fidget. What are going to be the main changes in Fidget 2? Are you still going to be using a virtual DOM? How/where do you store the app state? I'd be very interested in any early info you could share about it.


1) Why CPU? Mainly we used the CPU because it's easier to get working. A lot of this graphics stuff requires deep understanding, and it's just easier to get CPU mode running. I do have many GPU experiments though. I even have a macro that translates a subset of Nim into GLSL, the GPU's shader language. In time, if the GPU path becomes better, we will switch to it.

2) Yes, Pixie will render Fidget. The current system is that Pixie renders to a texture atlas, then Fidget just deals with the textures. Pixie does work only at "load time" or when a new element comes on screen, so for most screens Pixie's CPU rendering will not be called that often. At this time the CPU render for text is faster than my on-the-GPU render. More things will switch to the GPU as it becomes faster.

3) I am still working on Fidget, but I really wanted to get Pixie done first. I have an idea that I am exploring, but I don't have anything for sure yet. One thing is for sure: Fidget 2 will be a total paradigm shift from Fidget 1. But Fidget 2 is still a ways off.


Thanks for the answers, and all the best for your future work.

