Hacker News | adhocmobility's comments

I think it's pretty clear that in the coming decade intelligence and cognitive labor are going to become very cheap. So your kid should develop some skills outside of that to stay competitive in the job market.


General intelligence will become cheap? And so your suggestion is to develop skills outside of skills?


Personally, I think markets will be so different from now that the question of which skills you'll need for jobs matters less than asking what a job will even mean in 2040. But OK, maybe this is still a minority opinion.


I think it's pretty clear that cognitive labor will become even higher value in the future, as our tools get better and allow us to become more productive.


You speak the truth but people here will hate you for it.


Any suggestions?


Stockpile metals and rare earth elements. Natural resources will be the beginning and the end of trade.

(not actually serious, at least not yet)


AI Software Engineering? Baker?


Plumber


Apple has invested a lot in making sure that they STAY in "an order of magnitude better position". They know that the ultimate winner in personalized AIs will be whoever has the best edge hardware. That is why they have been investing so heavily in special purpose on-device chips for running neural networks.


The answer to what they're building is written on Karpathy's twitter - "A kind of Jarvis"


Hi everyone, author of the article here. I'm sorry if the article sounds overly pessimistic. I'm not making any claims with this article. I'm not proposing anything either. I do think technological progress is a good thing, even in this case. But I wrote this blog because I did have an emotional response to this technology, and wanted to pen down my thoughts.

It's one thing to look at a report about the economic impact of a new technology, but another to experience it first-hand. This is just a story about someone who will be impacted. Calling it a "sob" story is very harsh. This story is very real, and the feeling of losing your job to automation is anything but pleasant.


Thank you for penning down your emotional response. HN is mostly in denial about anything ChatGPT-related (most people haven't used GPT4 and keep pasting results from GPT3.5). The thing is, there is nothing other than denial that people can express. The idea that their jobs will become much less relevant is just too hard for almost anyone to swallow. People in this thread keep talking about productivity improvements, not realising that 2x is an improvement, but 10x is a revolution.

There are several important differences between the impact of GPT4 and that of the PC, which is being cited quite a lot as a response in this thread. People talk of other scenarios as well, but even the best-case scenarios (UBI) mean the end of social mobility, which means far fewer humans will have the chance to be ambitious and climb the social ladder. And this is not even mentioning the 2nd- and 3rd-order effects.


This technology lowers the entry barrier to almost any field. It empowers self-education, creative hacking and growth. Why are you making it sound like a disaster? We can scale our ambitions quickly and absorb the new productivity without losing jobs. We have not solved global warming or poverty, or colonised space. We have to scale AI billions of times. We have to survive the demographic crash. There's plenty of scope for AI to fill in without replacing humans.


> This technology lowers the entry barrier to almost any field.

I am not saying it doesn't. But there is some factor n such that, if productivity increases n-fold in a short amount of time, the world will not evolve as rosily as you are thinking.

> It empowers self education, creative hacking and growth.

Again, I never said anything to the contrary. But self-educating your way into making random new products for which there is no market is not a bright prospect for much of humanity.

> Why are you making it sound like a disaster?

You are welcome to counter my points. I am just enumerating my view of the future.

> There's plenty of scope for AI to fill in without replacing humans.

Are you seeing the same pace of improvement I am seeing? One year ago there was no talk of any such thing, and now we have GPT4.

AI might be very good for humanity as a whole over a millennium, but for individual human beings it is hard to say the same.


It's so refreshing to see someone recognizing the widespread denial among the HN commentariat about GPT4.

What the HN commentariat doesn't realize is that many of them will be made redundant.

And "many new jobs are created" is such a bloated, empty statement in the wake of GPT4-like tech. We all know that technological improvements in recent decades have led to more inequality; no question about that.

LLMs and AI will lead to more of that.


The interesting bit is that tech people are used to displacing other people's jobs and then telling them to suck it up so it's no wonder that they're in denial: this is the first time that it is their jobs that are at stake and they seem to be about as agile as a deer frozen in the headlights of an oncoming truck. We'll see how it all plays out. Jokes along the line of 'better behave or I'll replace you with a script' are not nearly as funny as they once seemed to be.


Agreed with both you and GP. The denial is a normal emotional response. It's not strange to cling to your decades of professional experience and skillset. It's just that, now really is not the time for emotional responses. It's time to start running away from the crowd so that you're one of those not made redundant. You can grieve the lost innocence of days past later.


I myself am at a crossroads.

I do Computer Vision research for a company, and I wanted to go into academia (in the US/UK/NA/EU). That's too risky a career choice now, and it always has been. What if I am not as brilliant as I think and cannot meaningfully contribute to science? (Or don't get tenure?) I wanted to do either ML + fundamental science or Edge AI.

I am thinking of going to med school. I am sure I can qualify, so I am considering preparing for that while keeping my industry job.

Another option is going into administration, i.e. government jobs, by clearing an exam called the UPSC (I am in India).

I fully understand what's going on and I am under no denial that many jobs in many sectors will be made redundant and competition will skyrocket. Societal turmoil is inevitable.

I am just 23 and weighing my options. My days are so emotional and full of dilemmas and trilemmas.

I keep myself sane with my job, a side hustle, dogs, family, and friends. I would get depressed if I pondered these things too much.


Med School takes too long though. UPSC will be great if you can pull it off.


Yes, but med school at least has hints of technicality. I have always been a problem-solver/analytical kind of guy, my whole life.

You use your brains to solve interesting problems, at least sometimes.

And even if you are an IAS, after your district posting ends, you are just another government servant, doing repetitive jobs, bound to an office.

Will I even like that life 20 years later?

And the income in UPSC jobs is too low, lower than what doctors or techies make (I make close to an entry-level IAS now).

Another point to consider is:

Once you learn programming, you are always a programmer.

Once you are a doctor, you are always a doctor.

But your status as an IAS is solely tied to your job. You leave or you retire, and you are a nobody again.

Honestly, I don't have enough information to decide. I am postponing making this decision as much as I can.

Thank you for your comment, anyway.


You make some very interesting points. I would be very interested to read about your future deliberations, if you post them anywhere (your bio links to your website).


I don't write personal stuff there. If you leave me an email, I will make sure to let you know if I write something along the lines of our conversation.


I too would love to see your thoughts in more detail.


The thing is, unlike the rise of PC-based tools, with the rise of LLMs it is very hard to see what the safe careers are. Careers that might be safe and have high income potential are mostly not quick to switch into.


Plumber.


Yeah, there is no universe where there is enough demand for plumbers to sustain even the same order of magnitude of jobs as even a small category of knowledge work. And when you unleash millions of new plumbers, the wages won't look better than McDonald's.


There are not that many plumbers, not because the market isn't there, but because people don't want to be plumbers to begin with.


The market can never be big enough for plumbing to absorb even a tiny fraction of knowledge workers. There is not enough plumbing to be done. Also, if a lot of people do try to become plumbers, wages will plummet, as in any job where the supply of labour is high. Your statement would hold true if 2x more people wanted to enter plumbing, not 20x or 200x.


UBI is about as likely to happen as OpenAI deleting GPT4 and never training another model, so if we're picking patently absurd scenarios, I pick that one.


I think the "B" means it will be enough, right? I think that part tends to get ignored, since the $1000/mo figure floated in the US is now no longer enough for anyone here.

$1000/mo in 2016 purchasing power in a city like Dallas seems very unlikely to me, but I think that some meager version of UBI might happen in response to a humanitarian crisis.

I can also see guaranteed jobs rather than guaranteed income.


I am glad you wrote it, please don't be sorry. A lot of us feel the same way.

Here's what I imagine sama and AI apologists would say in response so they can sleep at night:

Have you thought about training Priya to use ChatGPT? You don't need to know how to code well to be skilled at using it, especially if she has the domain knowledge.

Then you will have 10x'd your company's output and Priya keeps her job. At least for a time -- that is, until others start doing it too -- this will be a big competitive advantage. Then you will definitely need her and her colleagues!

/end

But, there are many reasons why laying her off and just using GPT4 is the better business decision, at least short term. The above is a totally naive suggestion stemming from reasoning motivated by the incomprehensibly large profits going to OpenAI and their eventual competitors.

Actually, I think we are about to see massive unemployment (tens of millions of people, if not hundreds of millions globally), even greater inequality, and attendant social unrest. Even where smooth transitions can be made for some of the jobs made redundant by ChatGPT, this will be the exception, not the rule. Something will have to give. UBI? Regulations? Physical destruction of data centers by angry, hungry, desperate people?

Probably all of the above. It's going to be a chaotic time until the world finds a new equilibrium.

On a personal note: at the ripe age of 40, in direct response to GPT4, I've decided to go back to school this fall to become a certified teacher. The poor work conditions and low pay kept me away from it as a full time job until now. However, I believe this is one of the few jobs that will still be around in 25 years when I (hopefully) retire. I'll take low pay and poor work conditions over the desperation of extended unemployment and poverty.

(I like kids and have taught voluntarily in various capacities over the years, so it's not as crazy as it maybe sounds.)


Have you asked Priya for her opinion about this?

You've outlined the pessimistic case.

Priya has qualifications in biotechnology. She currently spends her time doing work that sounds quite repetitive.

If AI tools can help accelerate that work, is there a more optimistic scenario where she gets to do different, related work that isn't automatable?

(I personally really hope the pessimistic case isn't what happens here, and in so many other similar situations. I understand and share your concern!)


In my experience, Indians, more than any other populace, value brands and labels.

So, in a scenario where an LLM automates her job, she will be unemployed along with 10 others doing the same job, and the "creative" job will go to someone who got their degree from an IIT.

This is another fallacy when it comes to AI replacing jobs.

AI will do the menial, repetitive jobs, and only the interesting, creative, hard jobs will be left for humans. The twist is that you WON'T be the human with that job.

You will be unemployed, or on UBI, or in your parents' basement eating ramen, and that job will be done by an MIT gold medalist or a Math Olympiad medalist.


> You will be unemployed, or on UBI, or in your parents' basement eating ramen, and that job will be done by an MIT gold medalist or a Math Olympiad medalist.

Along these lines I recommend the book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies.


He has that covered IMO. He talks at length about how he's not confident about his own career in the long run. So while he starts off talking about the AI doing one person's job, he makes it clear that he doesn't just think that about entry level workers.


He talked about his own fears: I want to hear Priya's opinion.


> He talked about his own fears: I want to hear Priya's opinion.

Why? Either he's right about GPT or he's wrong, and if he's right (which I think he is) and she disagrees, then she's probably just in denial, like so many HNers who aren't worried about their job, let alone worried about protecting themselves from the massive societal disruption this tech is likely to usher in.


It feels like he's using her as a rhetorical device: telling her life story to add emotional weight to his own opinion.

I think his position on this would be a lot more credible if he presented her own opinions.


The way I understood it, this is an entry-level employee who even before GPT-4 could easily be replaced by another human, perhaps by paying a bit more if this type of employee is difficult to find. So that's why her mobility within the company isn't addressed much.

I'm still interested to hear her opinion as well but the point in the article would still be made, because if for some reason she had more mobility at the company, I could just imagine another scenario where the employee didn't.

The author leaves it open as to what she might do next, but makes it clear that at a minimum it would be a huge disappointment to be laid off due to AI after having gotten this job through all her efforts.


I empathize with your message; thanks for writing.


I share some of your concerns, and I'd not thought about this angle: folks outside the West doing this kind of work. So thanks for sharing.

I have attempted to shift my mindset a little, thinking about how I might become an effective user of AI tools. I hope if I can do this that it will keep me employable, or even enable me to start some kind of venture down the line. Maybe there's a path forward for you and your friend on that route. Best of luck.


Thanks for sharing your thoughts.

I don't understand the harsh comments you received here. Denial seems to be how others are coping with a tech that threatens their precious skills.


This attitude can only stem from a society that loves to thrive off subsidies and earn its bread by doing menial and repetitive assignments.

What must be pondered is how to embrace GPT together with human intelligence, instead of without it.


Most people are not practically privileged enough to elevate themselves to a stage where they don't have to do "menial work".

Actually, tens of millions lack the privilege of even knowing what non-menial work is.


You are being given a chance to dispute it. Give an example of a problem that any human would be easily able to solve but GPT4 wouldn't.

>> "good model of a model of reality"

That is just a model of reality. Also, a "model of reality" is what you'd typically call a world model. It's an intuition for how the world works: how people behave, that apples fall from trees, and that orange is more similar to red than it is to grey.

Your last line shows that you still have a superficial understanding of what it's learning. Yes, it is statistics, but even our understanding of the world is statistical. The equations we have in our heads of how the world works are not exact; they're probabilistic. Humans know that "Apples fall from the _____" should be filled with 'tree' with high probability because that's where apples grow. Yes, we have seen them grow there, whereas the AI model has only read about them growing on trees. But that distinction is moot, because both the AI model and humans express their understanding in the same way. The assertion we're making is that to be able to predict the next word well, you need an internal world model. And GPT4 has learnt that world model well, despite not having sensory inputs.


Can ChatGPT ride a bicycle? Can you ride a bicycle? If you'd never ridden a bicycle before, do you think that if you read enough books on bicycle riding, the physics of bicycle riding, the physics of the universe, you would have anywhere near as complete a model of bicycle riding as someone who'd actually ridden one? Sure, you'd be able to talk a great game about riding bicycles, but when it comes to the crunch, you'd fall flat on your face. That's because riding a bicycle involves a large number of incredibly complex emergent control phenomena embedded within the marvel of engineering that is the human body, not just the small part of the brain that handles language. So call me when LLMs can convert their "world models", learned from statistics on human language use, into riding a bicycle on the first try. Until then I feel comfortable in the knowledge that they know virtually nothing of our objective reality.


Could Stephen Hawking ride a bicycle?


Yes; his MND was diagnosed around the age of 21, I believe. And he didn't learn to ride a bicycle from reading books.


If you just want git for large data files, and your files don't get updated too often (e.g. an ML model deployed in production that gets updated every month), then git-lfs is a nice solution. Bitbucket and GitHub both support it.


I've used both extensively. Git-lfs has always been a nightmare. Because each tracked large file can be in one of two states, binary or "pointer", it's super easy for the folder to get all fouled up. It would be unable to "clean" or "smudge", since either would cause some conflict. If you accidentally pushed in the wrong state, you could "infect" the remote and be really hosed. This happened to me numerous times over about 2 years of using lfs, and each time the only solution was some aggressive rewriting of history.

That, combined with re-using the same filename for the metadata files, meant it was common for folks to commit the binary and push it. Again, lots of history rewriting to get git sizes back down.

Maybe there are solutions to my problems, but I spent hours wrestling with it trying to fix these bad states, and it caused me much distress.

Also configuring the backing store was generally more painful, especially if you needed >2GB.

DVC was easy to use from the first moment. The separate meta files meant that it can't get into mixed clean/smudge states. If you aren't in a cloud workflow already, the backing store was a bit tricky, but even without AWS I made it work.


We resolve this in two ways:

1. All git-lfs files are kept in the same folder

2. No one can directly push commits to one of the main branches; they need to raise a PR. This means commits go through review, it's easy to tell if someone has accidentally committed a binary, and we can just delete their branch from the remote, bringing the size back down.
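For reference, confining LFS-tracked files to a single folder is just a pattern rule in .gitattributes, something like this (folder name illustrative):

```
data/** filter=lfs diff=lfs merge=lfs -text
```

Anything committed under data/ then goes through the LFS filter, and a binary landing anywhere else stands out immediately in review.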


I think the one thing that DVC does a bit better than git-lfs is that DVC doesn't keep the files directly in the repo. DVC puts a pointer file with a path and a hash of the file (to detect change). As far as I can tell, git-lfs only keeps them in the .git path of the repo.
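To make that concrete, a DVC pointer file (e.g. model.pkl.dvc) is a tiny YAML stub committed to git in place of the data; roughly like this, with the hash and size made up for illustration:

```yaml
outs:
- md5: d3b07384d113edec49eaa6238ad5ff00
  size: 14445097
  path: model.pkl
```

The actual file lives in the DVC cache or remote, keyed by that hash, so the git history stays small no matter how often the data changes.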

For example, I think CodeOcean might use git-lfs under the hood but handles upload/download separately from the UI. In the sample below, you can clone the repo from the Capsule menu, but data and results are downloadable from a contextual menu available on each, respectively.

https://codeocean.com/capsule/2131051/tree/v1


I do feel like git-lfs is a good solution, but once you have 10s or 100s of GB of files (e.g. a computer vision project), it gets pretty pricey.

Ideally I'd love to use git-lfs on top of S3 directly. I've looked into git-annex and various git-lfs proxies, but I'm not sure they're maintained well enough to be trusted with long-term data storage.

Huggingface datasets are built on git-lfs and it works really well for them for storage of large datasets. Ideally I'd love for AWS to offer this as a hosted thin layer on top of S3, or for some well funded or supported community effort to do the same, and in a performant way.

If you know of any such solution, please let me know!
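For context, the proxies I looked at plug in via a committed .lfsconfig that redirects LFS traffic to your own endpoint, roughly like this (URL hypothetical):

```
[lfs]
	url = https://my-lfs-s3-proxy.example.com/my-org/my-repo
```

So the plumbing exists; the missing piece is a proxy implementation I'd trust with years of data.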


Have you tested Weights & Biases Artifacts[1]?

It comes with a smart versioning approach, checks the Δ based on the checksum and has a feature to visualize the lineage.

You can also use your existing object store and link it for very large / sensitive data.[2]

Disclaimer: I work at W&B.

[1]: https://docs.wandb.ai/guides/data-and-model-versioning/model... [2]: https://docs.wandb.ai/guides/artifacts/track-external-files#...


+1. git-lfs is sufficient for tracking binaries, including a ML model, at that cadence.

Thinking more abstractly, there is a benefit to code and data living "next" to each other when possible: atomically committed to a codebase, with the data loaded and used by the code without connecting to yet another workflow.


It seems to be the solution Hugging Face have picked too.


Waiting around while your car is charging seems like an awful user experience. I've always imagined that a quick battery-swap service would become the standard. I don't know how feasible it is, though; maybe someone can shed more light on that.


We would need to develop a single standard and a limited number of battery sizes for this to ever be feasible. Go look at how many different starter batteries your typical auto parts store has to stock. Now imagine doing that with batteries that are much larger and much riskier if they are stored/handled improperly.


I can't imagine it, which is why I asked. You certainly seem skeptical. Your arguments aren't very convincing though.

>> Too many different start batteries

Okay... that's because there's no standard. But isn't that why standards are developed? Could you explain why developing a standard and a limited number of battery sizes is not feasible?

>> Risk of storing and handling them improperly

Could you explain why batteries are riskier than petrol?

And while we're discussing this, China has decided to try it out [0].

[0] https://www.reuters.com/business/autos-transportation/inside...


The second case is preferable if numbers is an Optional[List], which is usually the case when passing lists around as arguments.

  from typing import List, Optional

  def custom_sum(numbers: Optional[List[int]] = None) -> int:
      if not numbers:
          return 0  # covers both None and []

      return sum(numbers)

This handles both cases: numbers is None, or numbers is [].
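A quick runnable check of why the guard matters (I'm filling the elided branch with `return 0`, which is an assumption):

```python
from typing import List, Optional

def custom_sum(numbers: Optional[List[int]] = None) -> int:
    # `not numbers` is True for both None and [], so one guard covers both.
    if not numbers:
        return 0  # assumed behavior for the elided branch
    return sum(numbers)

print(custom_sum())           # 0
print(custom_sum([]))         # 0
print(custom_sum([1, 2, 3]))  # 6

# Without the guard, passing None blows up inside sum():
try:
    sum(None)
except TypeError:
    print("sum(None) raises TypeError")
```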


I'm a very passive Hacker News reader and don't engage much with posts. I want to break that passivity to say that VisiData is the tool that has given me the greatest joy of any piece of software I've ever used.

