Given that the previous world police are presently treating international law as toilet paper, how do you propose global regulation of space would work or be enforced?
being cryptic and poorly specified is part of the assignment
just like real code
in fact, it's _still_ better documented and more self-contained than most of the problems you'd usually encounter in the wild. pulling on a thread until you end up with a clear picture of what needs to be accomplished is like 90% of the job very often.
I didn't see much that was cryptic except having to click on "perf_takehome.py" without being told to. But 2 hours didn't seem like much to bring the sample code into some kind of test environment, debug it enough to work out the details of its behaviour, read through the reference kernel and get some idea of what the algorithm is doing, read through the simulator to understand the VM instruction set, understand the test harness enough to see how the parallelism works, re-code the algorithm in the VM's machine language while iterating performance tweaks and running simulations, etc.
Basically it's a long enough problem that I'd be annoyed at being asked to do it at home for free, if what I wanted from that was a shot at an interview. If I had time on my hands though, it's something I could see trying for fun.
it's "cryptic" for an interview problem. e.g. the fact that you have to actually look at the vm implementation instead of having the full documentation of the instruction set from the get go.
That seems normal for an interview problem. They put you in front of some already-written code and you have to fix a bug or implement a feature. I've done tons of those in live interviews. So that part didn't bother me. It's mostly the rather large effort cost in the case where the person is a job applicant, vs an unknown and maybe quite low chance of getting hired.
With a live interview, you get past a phone screening, and now the company is investing significant resources in the day or so of engineering time it takes to have people interview you. They won't do that unless they have a serious level of interest in you. The take-home means no investment for the company so there's a huge imbalance.
It's definitely cleaner than what you will see in the real world. Research-quality repositories written in partial Chinese with key dependencies missing are common.
IMO the assignment('s purpose) could be improved by making the code significantly worse. Then you're testing the important stuff (dealing with ambiguity) that the AI can't do so well. Probably the reason they didn't do that is that it would make evaluation harder and more costly.
You say that the connection would be permanently severed, but if the fibre is run through PVC can’t you pull a new run? Easiest way is to use the existing fibre to pull the new cables through.
It’s not a special class, but teams of engineers tend to spook together and are more likely to discuss topics “uncomfortable” to the employer (but not to unionise, apparently). If the degradation of fellow humans is too on the nose for the engineers, they will make noise and move.
That description leaves off some of the flavor; it changes the feel once you know the descendants were labeled the Vile Offspring by the main characters, who were still human-ish.
Some people will call BS on books like that until every detail comes true, right down to getting their own AIneko cat, and then they'll start again when the Wunch begin eating their uploaded thought space via man-in-the-middle exploits, complaining that the protocol spec in the book was inaccurate.
I think I've internalized these stories enough to comfortably say (without giving anything away) that AI is incompatible with capitalism and probably money itself. That's why I consider it to be the last problem in computer science: once we've solved problem solving, the (artificial) scarcity of modern capitalism and the social Darwinism it relies upon can simply be opted out of. Unless we collectively decide to subjugate ourselves under a Star Wars empire or Star Trek Borg dystopia.
The catch is that I have yet to see a billionaire speak out against the dangers of performative economics once machines surpass human productivity, or take any meaningful action to implement UBI before it's too late. So on the current timeline, subjugation under an Iron Heel in the style of Jack London feels inevitable.
I hear this all the time, but to what end? If the input costs to produce most things end up being driven towards zero, then why would there be a need for UBI? Wouldn't UBI _be_ the performative economics mentioned?
I think of it like limits in math. The rate at which we'll be out of work is much higher than the rate at which prices will fall towards zero.
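To make that limit analogy concrete, here's a toy framing of my own (the functions are purely illustrative, not a model): take wage income $w(t) = e^{-2t}$ falling twice as fast as the price level $p(t) = e^{-t}$. Both head to zero, but purchasing power

$$\frac{w(t)}{p(t)} = e^{-t} \longrightarrow 0,$$

so during the transition people can afford less and less even while prices drop. In that framing, UBI is a floor on the numerator while the denominator catches up.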
A performative/underemployment economy keeps everyone working not out of necessity, but to appease the sentiments of the wealthy. I'd argue that we passed the point at which wages were tied to productivity sometime around 1970, meaning that we're already decades into a second Gilded Age where wealth comes from inheritance, investment and connections (forms of luck) rather than hard work.
And honestly, to call UBI performative when billionaires are trying to become trillionaires as countless people die of starvation every day just doesn't make any sense.
Isn’t that the one where corporate structures become intelligent, self-executing agents and cause a lot of problems? Yet here IRL, the current tech billionaires think it’s a roadmap to follow?
Talk about getting the wrong message. No one show those guys a copy of 1984! Wow, then…
Why would this need to work with Tailscale? It just needs to be running on a machine in your tailnet to be accessible; what other integration is necessary?
I'm a co-author of tsidp, btw. You don't need tsidp with a Tailscale-native app: you already know the identity of the peer. tsidp is useful for bridging from Tailscale auth to something that's unaware of Tailscale.
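For anyone wondering what "Tailscale-native" looks like in practice, here's a minimal sketch using `tsnet` (the hostname and handler are made up for illustration, not any particular app): the process joins the tailnet as its own node and asks the local client who the peer is.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"tailscale.com/tsnet"
)

func main() {
	// Hypothetical node name; this process shows up on the tailnet
	// as its own machine, no separate tailscaled needed.
	srv := &tsnet.Server{Hostname: "demo-app"}
	defer srv.Close()

	ln, err := srv.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}

	lc, err := srv.LocalClient()
	if err != nil {
		log.Fatal(err)
	}

	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Map the connection back to a tailnet identity: this is why a
		// Tailscale-native app doesn't need tsidp to know who's calling.
		who, err := lc.WhoIs(r.Context(), r.RemoteAddr)
		if err != nil {
			http.Error(w, "unknown peer", http.StatusForbidden)
			return
		}
		fmt.Fprintf(w, "hello, %s\n", who.UserProfile.LoginName)
	})))
}
```

That `WhoIs` call is the "you already know the identity of the peer" part; `tsidp` only becomes interesting when the service behind it speaks OIDC and has no idea Tailscale exists.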
I use `tsnet` and `tsidp` heavily to safely expose a bunch of services to my client devices, they've been instrumental for my little self-hosted cloud of services. Thanks for building `tsidp` (and Perkeep!) :).