vermilingua's comments | Hacker News

Given that the previous world police are presently treating international law as toilet paper, how do you propose global regulation of space would work or be enforced?

Think that means you failed :(

+1

being cryptic and poorly specified is part of the assignment

just like real code

in fact, it's _still_ better documented and self-contained than most of the problems you'd usually encounter in the wild. pulling on a thread to end up with a clear picture of what needs to be accomplished is like 90% of the job very often.


I didn't see much that was cryptic except having to click on "perf_takehome.py" without being told to. But 2 hours didn't seem like much to bring the sample code into some kind of test environment, debug it enough to work out details of its behaviour, read through the reference kernel and get some idea of what the algorithm is doing, read through the simulator to understand the VM instruction set, understand the test harness enough to see how the parallelism works, re-code the algorithm in the VM's machine language while iterating performance tweaks and running simulations, etc.

Basically it's a long enough problem that I'd be annoyed at being asked to do it at home for free, if what I wanted from that was a shot at an interview. If I had time on my hands though, it's something I could see trying for fun.


2 hours does seem short. It took me a half hour to get through all you listed and figure out how to get the valu instruction working.

I suspect it would take me another hour to get it implemented. Leaving 30 minutes to figure out something clever?

Idk maybe I'm slow or really not qualified.


My instinct to read about the problem was to open the "problem.py" file, which states "Read the top of perf_takehome.py for more introduction"

So yeah. They _could_ have written it much more clearly in the readme.


it's "cryptic" for an interview problem. e.g. the fact that you have to actually look at the vm implementation instead of having the full documentation of the instruction set from the get go.

That seems normal for an interview problem. They put you in front of some already-written code and you have to fix a bug or implement a feature. I've done tons of those in live interviews. So that part didn't bother me. It's mostly the rather large effort cost in the case where the person is a job applicant, vs an unknown and maybe quite low chance of getting hired.

With a live interview, you get past a phone screening, and now the company is investing significant resources in the day or so of engineering time it takes to have people interview you. They won't do that unless they have a serious level of interest in you. The take-home means no investment for the company so there's a huge imbalance.

There's another thread about this article, which explains an analogous situation about being asked to read AI slop: https://zanlib.dev/blog/reliable-signals-of-honest-intent/


It's definitely cleaner than what you will see in the real world. Research-quality repositories written in partial Chinese with key dependencies missing are common.

IMO the assignment('s purpose) could be improved by making the code significantly worse. Then you're testing the important stuff (dealing with ambiguity) that the AI can't do so well. They probably didn't do that because it would make evaluation harder + more costly.


I have some workmates on Steam, and sometimes I come down with a cold right around game releases.

I want to get off MR ALTMANS WILD RIDE.

You say that the connection would be permanently severed, but if the fibre is run through PVC can’t you pull a new run? Easiest way is to use the existing fibre to pull the new cables through.

I believe “permanent magnet” here refers to an object permanently magnetised, not a magnet permanently embedded in the ear canal.


We are well past the point that “videos from the attack” can be trusted, no matter which argument they support. It’s a terrifying state of affairs.


It’s not a special class, but teams of engineers tend to spook together and are more likely to discuss topics “uncomfortable” to the employer (but not to unionise, apparently). If the degradation of fellow humans is too on the nose for the engineers, they will make noise and move.


Hmm, where have I seen this before…

https://en.wikipedia.org/wiki/Accelerando


One of the most interesting things about the book is how it skewers the idea of there being a singular AI.

In Accelerando the VO are a species of trillions of AI beings that are sort of descended from us. They have a civilization of their own.


That description leaves off some of the flavor; it changes the feel once you know the descendants were labeled the Vile Offspring by the main characters, who were still human-ish.


For better or worse, Accelerando comes to mind _a lot_ when trying to figure out how agent platforms should work. Hopefully for better.


Amazing how forward looking that book was.


It's not gonna happen like that, because all the paths leading to it cannot be financially exploited.

Also what a shortsighted scifi book, yet techies readily invest in that particular fantasy because it's not your usual spaceship fare.


> Also what a shortsighted scifi book

It's art not oracle


Some people will call BS on books like that until every detail comes true, down to getting their own AIneko cat, and then start again when the Wunch start eating their uploaded thought space via man-in-the-middle exploits, on the grounds that the protocol spec in the book was inaccurate.


Here's another one: Manna - Two Views of Humanity’s Future, by Marshall Brain. It's a fairly light read, just 8 chapters:

https://marshallbrain.com/manna1

I think I've internalized these stories enough to comfortably say (without giving anything away) that AI is incompatible with capitalism and probably money itself. That's why I consider it to be the last problem in computer science, because once we've solved problem solving, then the (artificial) scarcity of modern capitalism and the social darwinism it relies upon can simply be opted out of. Unless we collectively decide to subjugate ourselves under a Star Wars empire or Star Trek Borg dystopia.

The catch being that I have yet to see a billionaire speak out against the dangers of performative economics once machines surpass human productivity or take any meaningful action to implement UBI before it's too late. So on the current timeline, subjugation under an Iron Heel in the style of Jack London feels inevitable.


> take any meaningful action to implement UBI

I hear this all the time, but to what end? If the input costs to produce most things end up driving towards zero, then why would there be a need for UBI? Wouldn't UBI _be_ the performative economics mentioned?


I think of it like limits in math. The rate at which we'll be out of work is much higher than the rate at which prices will fall towards zero.

A performative/underemployment economy keeps everyone working not out of necessity, but to appease the sentiments of the wealthy. I'd argue that we passed the point at which wages were tied to productivity sometime around 1970, meaning that we're already decades into a second Gilded Age where wealth comes from inheritance, investment and connections (forms of luck) rather than hard work.

And honestly, to call UBI performative when billionaires are trying to become trillionaires as countless people die of starvation every day just doesn't make any sense.


Isn’t that the one where corporate structures become intelligent, self-executing agents and cause a lot of problems? Yet here IRL, the current tech billionaires think it’s a roadmap to follow?

Talk about getting the wrong message. No one show those guys a copy of 1984! Wow, then…


Why would this need to work with Tailscale? It just needs to be running on a machine in your tailnet to be accessible, what other integration is necessary?


Primarily using Tailscale for authentication as well, replacing perkeep's other auth methods.


It appears that it does integrate with Tailscale for auth (but not using tsidp via OIDC like I expected): https://perkeep.org/doc/server-config#simplemode


I'm a co-author of tsidp, btw. You don't need tsidp with a Tailscale-native app: you already know the identity of the peer. tsidp is useful for bridging from Tailscale auth to something that's unaware of Tailscale.
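
For anyone curious what "Tailscale-native" looks like in practice, here's a rough sketch of the tsnet + WhoIs pattern (not Perkeep's actual code; the "myapp" hostname and the handler are made up for illustration):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "tailscale.com/tsnet"
    )

    func main() {
        // Join the tailnet as the app's own node (hostname is a made-up example).
        srv := &tsnet.Server{Hostname: "myapp"}
        defer srv.Close()

        // Listen only on the tailnet; nothing is exposed to the public internet.
        ln, err := srv.Listen("tcp", ":80")
        if err != nil {
            log.Fatal(err)
        }

        lc, err := srv.LocalClient()
        if err != nil {
            log.Fatal(err)
        }

        log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // WhoIs resolves the peer's tailnet address to a user and device,
            // so there's no separate login flow and no tsidp/OIDC round-trip.
            who, err := lc.WhoIs(r.Context(), r.RemoteAddr)
            if err != nil {
                http.Error(w, "unable to identify peer", http.StatusForbidden)
                return
            }
            fmt.Fprintf(w, "hello, %s (from %s)\n", who.UserProfile.LoginName, who.Node.Name)
        })))
    }

Something that's unaware of Tailscale (a stock web app that only speaks OIDC, say) is where tsidp comes in, as described above.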


I use `tsnet` and `tsidp` heavily to safely expose a bunch of services to my client devices, they've been instrumental for my little self-hosted cloud of services. Thanks for building `tsidp` (and Perkeep!) :).


I think @kamranjon means that, before this tailscale-compatible release happened, they had thought about how cool it would be if it worked directly with tailscale.

