> Why must it apply for a job, rather than just DO a job?
Because being able to manage a business relationship is part of the job. If you could show an AI that got a job, wrote a simple script to automate its own work, and then coasted for a year, that would be fine. But your links are just humans doing that; I want an AI that can do it before I consider it intelligent.
But thanks for demonstrating so clearly how AI proponents are moving goalposts backward to make them easy to meet.
Should the AI be able to use a real human's SSN? And resume, to be able to pass a background check? Can a real human show up to interview, and take a drug test? Can we have real humans provide references, or must those be faked too? Must the computer go to high school and college, to have real transcripts to validate?
Do we need to have a computer baby trick doctors into issuing it a birth certificate, so it can get its own SSN, and then the computer baby needs to have a physical body that it can use to trick a drug test with artificial urine, and it also needs to be able to have either computer-generated video and audio meetings, or at least computer-generated audio calls?
Or can you list some jobs that you think require no SSN, no physical embodiment, no drug test, no video or audio teleconferencing?
Since you're accusing me of moving the goalposts backwards to make it "easy," let's have you define exactly where you think the goalposts should be, for your intelligence test.
Or maybe, replacing a human driver (or some other job), 1:1, for a job a human did yesterday, and a computer does today could be enough? If it's capable of replacing a human, do you then not think the human needed intelligence to do their job?
You can use a real person's contact details as long as the AI does all the communication and work. Also, it has to be the same AI: no altering the AI after you see the tasks it needs to perform once it gets the job; it has to figure that out itself.
For teleconferencing it could use text-to-speech and speech-to-text; they are pretty good these days, so as long as the AI can parse what people say and identify when to speak and what to say, it should do fine.
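The loop being described there — transcribe incoming audio, decide whether it is the AI's turn to speak, generate a reply, synthesize it back to audio — can be sketched roughly as below. Every function body here is a hypothetical stand-in (no real STT, LLM, or TTS engine is wired in); the point is only the turn-taking structure such a system would need.

```python
# Sketch of a speech-to-text -> agent -> text-to-speech meeting loop.
# All components are placeholder stubs; a real system would plug in an
# actual STT engine, language model, and TTS engine at the marked points.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder STT: for this sketch, 'audio' is already text."""
    return audio_chunk.decode("utf-8")

def should_speak(utterance: str, agent_name: str = "alex") -> bool:
    """Naive turn-taking: respond only when directly addressed,
    or when the utterance is a question."""
    text = utterance.lower()
    return agent_name in text or text.rstrip().endswith("?")

def generate_reply(utterance: str) -> str:
    """Placeholder for the AI's language model."""
    return "Let me look into that and follow up after the call."

def synthesize(text: str) -> bytes:
    """Placeholder TTS: a real implementation would emit audio samples."""
    return text.encode("utf-8")

def meeting_loop(audio_chunks):
    """Process each incoming utterance; emit audio only when the agent
    decides it is its turn to speak."""
    for chunk in audio_chunks:
        utterance = transcribe(chunk)
        if should_speak(utterance):
            yield synthesize(generate_reply(utterance))

replies = list(meeting_loop([
    b"Good morning everyone.",
    b"Alex, can you give us a status update?",
]))
print(len(replies))  # only the directly-addressed utterance draws a reply
```

Of course, the hard part is everything hidden behind `should_speak` and `generate_reply`; the glue code is trivial by comparison.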
But it might be easier to find a more hacker-friendly job where all you need is somewhere for them to send money, and they just need you to write code and answer emails. There aren't many such jobs, but they exist, and you only need one job to do this.
I find it interesting that you have not put any kind of limit on how much can be spent to operate this AI.
Or on what kinds of resources it would have access to.
Could it, for instance, take its salary, and pay another human to do all or part of the job? [1]
Or how about pay humans to answer questions for it? [2] [3] Helping it understand its assignments, by breaking them down into simpler explanations? Helping it implement a few tricky sub-problems?
Does it have to earn more than its total operational expenses, or could I spend ten or a hundred times its salary on the compute resources needed to implement it?
You also haven't indicated how many attempts I could make, per success. Could I, for instance, make tens of thousands of attempts, and if one holds down a job for a year, is that a success?
Also, just to talk about this a little bit, I'll remind you that not all jobs require getting hired. Some people are entrepreneurs. Here's an example that should be pretty interesting. [4] It sure sounds like an AI could win at online poker, which could earn it more than the fully remote job you're envisioning...
I said it has to manage all communications and do all the work, so no forwarding communications to third-party humans. If it can convince other humans at the job to do all its work and coast that way, though, that is fine.
> Does it have to make more than its total operational expenses, or could I spend ten or hundreds as much as its salary, to afford the compute resources to implement it?
Yes, spend as much as you want on compute; the point is to show some general intelligence, not to make money. So even if this experiment succeeds, there will be a ton of work left to do before the singularity, which is why I chose this kind of work: it is a nice middle ground.
> You also haven't indicated how many attempts I could make, per success. Could I, for instance, make tens of thousands of attempts, and if one holds down a job for a year, is that a success?
If the AI applies to 10,000 jobs, holds one of them for a year, and gets paid, that is fine. Humans do similar things. Sometimes things fall through the cracks, but that is rare enough that I can live with the probability. And if they made a bot that can apply to and get millions of jobs to raise the odds of that happening, then I'd say it is intelligent as well, since that isn't trivial.
> > Why must it apply for a job, rather than just DO a job?
I think the idea here isn't just to create a job-doing machine, but to create a machine that is actually employable, in the sense that it has not just intelligence but also agency.
An AI that can apply for a job, get the job, and do the job well enough to keep the job is demonstrating that it has some sort of theory of mind. But I don't think such a demonstration is very likely except as a stunt, purely on the grounds that an AI capable of the necessary subterfuge is more likely to employ shortcuts that are less taxing. A bit of electronic B&E to plant some ransomware, or have a shell corporation take out a few submarine patents and sue some businesses, perhaps operate a darknet marketplace (I think we can agree that an AGI will likely have better opsec than a human), etc.
Lest you think that the only options here are illegal, there are white-hat alternatives as well, for example it is possible to participate in bug bounty programs anonymously and get payouts in cryptocurrencies.
So why would an AI bother applying for a job in the first place?
But maybe some combination of this [1] and this [2] would do it.
If you want to know about a computer actually DOING a remote job for a year without anyone noticing, I'll conclude with many links [a-i].
[1] : https://thisresumedoesnotexist.com/ (Sorry for the bad certificates.)
[2] : https://www.businessinsider.com/tiktoker-wrote-code-spam-kel...
[a] : An original claim of just that: https://www.reddit.com/r/antiwork/comments/s2igq9/i_automate...
[b] : Coverage of that post: https://www.newsweek.com/no-harm-done-it-employee-goes-viral...
[c] : https://www.reddit.com/r/antiwork/comments/p3wvdy/i_automate...
[d] : https://www.reddit.com/r/AskReddit/comments/jcdad/my_wife_wo...
[e] : https://www.reddit.com/r/talesfromtechsupport/comments/277zi...
[f] : https://www.reddit.com/r/AskReddit/comments/tenoq/reddit_my_...
[g] : https://www.reddit.com/r/AskReddit/comments/vomtn/update_my_...
[h] : https://www.reddit.com/r/AmItheAsshole/comments/ew6gmd/aita_...
[i] : https://www.reddit.com/r/talesfromtechsupport/comments/7tjdk...
I mostly share the last few because of all of the "me, too" comments on them.
There are several instances in there where an employer has no idea they are paying a salary, but a computer is doing the vast majority of the actual work.
I feel like this is a "business world Turing test," like, "would an employer pay money for it, thinking it was a human." And I feel like I've provided evidence that has actually occurred.