Apple PR did what they could with the art they had available and the need to pander to a gov't administration, but weren't inspired to do it more genuinely?
Around that age, I wrote a letter to Tandy (Radio Shack), proposing that I write a hobby electronics book.
In hindsight, I wasn't knowledgeable enough to write a printed book's worth of material (maybe a few modern blog posts, at best). But at the time, I knew more about electronics than the other 29 kids in my grade school class, and that constituted most of my worldview, so why couldn't I write a book?
I loved the Forrest Mims books, and, like any kid, wanted to mimic the things that I saw grownups doing.
Someone at Tandy might have realized that I was just an enthusiastic kid, but in any case, they wrote me a nice letter back. The company didn't wish to develop a book at this time, but if I did so on my own, they would be happy to review a copy off the press.
Biggest risk is that a beam steering element stops while the emitters are running. Basically impossible with a phased array emitter like the article discusses.
And you'd probably have to be staring into the laser at close range while it was doing that.
The laser beams usually aren't tiny points like your laser pointer. Several centimeters across is more common, especially at ordinary road distances. Your pupil is very small in comparison.
The optical hazard calculations are a very early part of the design of a LIDAR system, and all of this does get considered. Or should anyway.
Biggest risks are for people involved in R&D, where beams may be static and very close to personnel.
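To make the beam-size point concrete, here's a rough back-of-the-envelope sketch (not a real safety calculation, which must also account for wavelength, pulse energy, and exposure limits): the fraction of a wide, roughly uniform beam's power that can enter the eye scales with the ratio of pupil area to beam area. All numeric values below are illustrative assumptions.

```python
import math

def power_fraction_into_pupil(beam_diameter_m: float,
                              pupil_diameter_m: float) -> float:
    """Fraction of a uniform beam's power captured by the pupil,
    assuming the pupil sits fully inside the beam cross-section."""
    beam_area = math.pi * (beam_diameter_m / 2) ** 2
    pupil_area = math.pi * (pupil_diameter_m / 2) ** 2
    return min(1.0, pupil_area / beam_area)

# Assumed values: a 5 cm beam and a 7 mm dark-adapted pupil.
frac = power_fraction_into_pupil(0.05, 0.007)
print(f"{frac:.4f}")  # roughly 0.02, i.e. ~2% of the beam's power
```

With a beam a few centimeters wide, only a couple percent of the emitted power can even reach the retina, which is part of why close-range R&D exposure (where the beam may still be narrow and static) is the riskier scenario.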
> I bet you that if they ban one they ban the other too
Related: I've had a suspicion that, if you have an Apple or Google app developer account through a company (in your name and recovery phone number, but company email address)... and you leave the company... you'd better hope that someone at the company doesn't then use the account to do something sketchy or rule-breaking.
Someone inheriting the account is a very real possibility, given motive (people can be lazy about figuring out how to set up the account for another developer, or not want to pay another fee), and opportunity (professionalism norm is to preserve all passwords/secrets in a way that is accessible to the company).
> They explained that they switched between multiple models from multiple providers such that no one company had the full picture of what this AI was doing.
Saying that is a slightly odd way to possibly let the companies off the hook (for bad PR, and damages), and to avoid implicating any one of them in particular.
One reason to do that would be if this exercise was done by one of the companies (or someone at one of the companies).
Right now, this standard recruiter question is on the side of the table that's often being especially penny-wise and pound-foolish...
A weird thing I'm seeing is early AI startups lowballing both salary and equity, compared to what generic Web/app developer jobs paid a few years ago.
You're in a narrow opportunity window of a massive investment gold rush. You probably got funding with a weak/nonexistent business model, and some mostly vibe-coded demo and handwavey partnership.
Now you need to hire a few good founding engineer types who can help get the startup through a series of harder milestones, with skillsets less clear than for generic Web/app development. If you can hire people as smart and dedicated as yourself, they'll probably do things that make a big positive difference, relative to what bottom of the barrel hires will do.
So why would you lowball these key early hires, offering less than a new-grad starting salary plus a pittance of ISOs that will be near-worthless even if you have a good exit?
Is it so that the founders and investors can keep the maximum percentage of... something probably less valuable than what they'd get by attracting and aligning the right early hires? (Unless it's completely an investment scam, in which case genuine execution doesn't affect the exit value.)
I’ve also noticed this, and it causes real issues long-term when you want to build the product. Suddenly management is surprised that your senior engineer with no relevant experience is taking so long, and needs to bring in half a million dollars' worth of consultants to actually do the work. It stresses everyone else out, and you end up with a lot of churn, a lot of burn, and very little internal knowledge to build on for the future.
1. Removes the pain of age verification, encouraging some people to stay in the proprietary walled garden when everyone would be better served by open platforms (and network effects).
2. Provides a pretext for more invasive age verification and identification, because "the privacy-respecting way is too easily circumvented".
3. Encourages people to run arbitrary code from a random Web site in connection with their accounts, which is bad practice, even if this one isn't malware and is fully secure.
Proving that something is possible doesn't mean encouraging it. This was a beautiful work of reverse engineering, that shows how hard it can be to verify personal data without invading privacy. I prefer this awareness to blind trust.
The code was released, therefore it is not arbitrary (problem #3). Should companies react with more invasive techniques (problem #2), users can always move to other platforms (problem #1).
>users can always move to other platforms (problem #1)
Until the cycle restarts again with new platforms.
Also, I'm convinced that self-hosting, or getting a new platform (including a return to traditional forums) off the ground, may be bureaucratically harder at this point, given the case of lfgss's shutdown: https://news.ycombinator.com/item?id=42433044
This seems to assume that unless a drop-in replacement is immediately available today, there is no utility in encouraging that growth.
There are multiple open-source tools that do everything Discord does. There are few-to-none that offer everything Discord does, and certainly none that are centralized, network-effect-capture-ready.
Short term:
* Small group chats with known friends: Signal, whatsapp, IRC, Matrix
* Community chat: Zulip, Rocket.chat
* Community voice: Mumble, Teamspeak
* Video / screen sharing and voice chat: Zoom, BigBlueButton, Jitsi
If you want to host your own Stoat server, you will also need to recompile the apps to use your URL and distribute them to your friends, and they will not be compatible with any other server.
Yes, I learned in the Zulip promo discussion earlier this week that self-hosted push notification servers have to have certs compiled directly into the app. I can't tell if it's malice, indifference or incompetence to have that design; any answer is completely believable.
Is there an architectural opportunity to build a "self-hosted push notification" app and business? The broker publishes an app through the Play store, self-hosted servers establish trust with the broker, and the broker's app delivers push notifications to the user's device, which can display the message and open the relevant app window.
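The broker idea above could be sketched roughly like this. Everything here is hypothetical (the class name, the token scheme, the poll-based delivery): it just illustrates the three-party flow of server enrollment, relayed push, and device fetch, with none of the real cert-pinning or store-distribution details.

```python
import secrets

class PushBroker:
    """Illustrative in-memory broker: self-hosted servers enroll,
    then relay notifications through the broker to user devices."""

    def __init__(self):
        self.servers = {}   # server_id -> shared secret
        self.inboxes = {}   # device_id -> pending notification payloads

    def register_server(self, server_id: str) -> str:
        """A self-hosted server enrolls and receives a shared secret."""
        token = secrets.token_hex(16)
        self.servers[server_id] = token
        return token

    def push(self, server_id: str, token: str,
             device_id: str, payload: dict) -> bool:
        """A server asks the broker to relay a notification."""
        if self.servers.get(server_id) != token:
            return False  # unknown server or bad credentials
        self.inboxes.setdefault(device_id, []).append(payload)
        return True

    def poll(self, device_id: str) -> list:
        """The broker's app on the device fetches pending notifications."""
        return self.inboxes.pop(device_id, [])

broker = PushBroker()
token = broker.register_server("my-zulip.example")
broker.push("my-zulip.example", token, "alice-phone",
            {"title": "New message", "deep_link": "zulip://stream/42"})
print(broker.poll("alice-phone"))
```

A real version would replace polling with FCM/APNs delivery and the shared secret with proper key exchange, but the trust boundary is the interesting part: only the broker's certs need to be baked into the store app, so any self-hosted server can deliver notifications without recompiling clients.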
None of those play in the same league as discord for hosting a community, and none of them look in a position to be there in the foreseeable future. It sucks but that's how it is.
This is how it always is, until suddenly one day it isn't. Linux didn't play in the same league as serious and commercial UNIX systems until one fateful day it killed them all dead forever.
"Think Different" -> "Think Indifferent"