ChatGPT has already decreased my income security, and likely yours too (scottsantens.com)
74 points by hunglee2 on Feb 21, 2023 | hide | past | favorite | 123 comments


I have always thought that most, if not all, people that called themselves “content creators” are not providing much value. And in aggregate they are probably net negative due to their spamminess.

Yes, the term technically includes book authors, artists and others as well, but those people don’t normally call themselves content creators. In fact I think that highlights the difference in mindset: the “content creator” is all about pumping out quantity of content, quality be damned.

All of this is just a long way of saying that the decreased income security for “content creators” is less about the ability of ChatGPT, and more about the uselessness of a lot of “content” out there.


I expect this theme to continue and expand across many "jobs". There are a lot of people who are just plugging numbers into spreadsheets all day, and you would think "yes, but when something doesn't fit they have to be smart and use human intelligence to fix the problem". However, if you see what happens in practice, all they do is send an e-mail to someone else saying there is a problem. ChatGPT can do that part. You don't need many of these pencil pushers.


A friend of mine started working for a company a while back. He met a woman there whose job involved some very tedious tasks, manually typing things from one software into another.

Thinking he'd do her a favour, he wrote a Python script to automate it, so she could spend more time on the less tedious parts of her job.

Turns out there weren't any. He had automated her entire job in 15 minutes.
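For a sense of scale, a job like that really can collapse into a few lines. A hypothetical sketch (the actual systems and column names are unknown; this only illustrates the shape of such a script, re-emitting records exported from one tool in the format another tool imports):

```python
import csv
from typing import TextIO

def transfer(src: TextIO, dst: TextIO) -> int:
    """Copy records exported from one tool into another tool's import format.

    The column names are illustrative; the point is that a mechanical
    copy-and-retype job reduces to a small loop.
    """
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["id", "name", "amount"])
    writer.writeheader()
    count = 0
    for row in reader:
        writer.writerow({"id": row["ID"],
                         "name": row["Name"].strip(),
                         "amount": row["Amount"]})
        count += 1
    return count
```

Run once a day on the nightly export, and the "entire job" is done.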


The irony is these types of jobs are usually in companies that aren't going to spend any money to invest in the back office tools/processes their employees use on a daily basis. Any AI/ML tooling is so far away from that type of company that those jobs might be safer than others.


These are probably the same companies that tried and failed to outsource labor to other countries. We'll see if this goes any better.


Not if (rather, when) Microsoft includes its AI in MS Office. And I've read somewhere that that's exactly what they are planning.


That will last until Microsoft realizes it cannibalizes Office licenses.


Content creation very much follows some Pareto rule where the good 80% comes from only 20% of creators, or something along those lines. It means the 80% of creators producing bad/useless/spammy content are in trouble, because what little work they were doing can now be automated. This will lead to an explosion in the already very saturated world of bad/useless/spammy content.

My prediction is that this will strengthen the huge market share that the big institutional providers of content had pre-internet. In those days, they had effectively full market share because access to smaller creators was just too difficult. Then the internet happened, and small creators could reach consumers (and vice versa) easily. Market share started to shift somewhat to smaller creators. But with automated AI crap content, the signal/noise ratio when looking for content will be severely distorted -- making it again very hard to find small creators of quality content. This difficulty will lead to people sticking to the big institutional creators.


Linus from Linus Tech Tips (one of the biggest tech Youtube channels) posts vids where he upgrades his rigs, such as one where he goes through setting up a server that can store a petabyte of videos because they are spending another 100k on new cameras so that they can shoot in 8k raw.

And he isn't the only one. If you look at LGR, a nostalgia tech Youtuber, he has also upgraded his setups over the years, though less so than LTT.


>“content creators” are not providing much value.

Depends on the value you put on entertainment.


What's entertaining about a 5000 word muffin recipe?


Depends on the value of the entertainment they provide


Until recently I was watching Yannic Kilcher on YouTube; over the span of a few years he posted hundreds of in-depth paper reviews. He is technically a "content creator", but for me his videos were as good as lectures from famous universities.


I was reading an article comparing ChatGPT results with the Google equivalent, and how much better the ChatGPT result was.

Of course, in reality, once I reduced the search terms to make it more Google friendly (chatGPT’s conversational mode requires a lot of superfluous language, whereas a Google search does better with only keywords), Google returned the same result much faster. The difference was that Google didn’t print the entire result. It asked me to click a link to get the answer. Which I did, and clicking the link and loading the website was still significantly faster than chatGPT.

But what made it clear for me is that besides its conversational approach (which I don’t find beneficial), chatGPT’s biggest advantage appears to be that its ingestion method means it is able to completely evade copyright and present somebody else’s results as its own.

Google could have done exactly the same thing and presented the entire result on its own page, but since it’s not got “AI” which we’ve apparently decided doesn’t need to honor copyright, it doesn’t do that.

AI’s biggest advantage right now appears to be the wool it is pulling over our eyes, preventing us from questioning its blatant and outright copyright theft and abuse.


AI can reword something as much as needed to make it different. What are human authors going to do? Copyright all variations and paraphrases of their texts? Reinvent copyright to cover not just expression, but ideas? I think that would make copyright borrow characteristics from trademarks and patents, it would not be fair.

And LLMs are not used just to "abuse copyright", as you put it. They are also useful for task solving; thousands of different tasks can be executed on language models. Thinking their sole purpose is copyright evasion is denying benefits to other people. Similarly, artists think SD only exists to make infringing art, but it can be applied in many other fields, even other modalities like audio.

Why don't we just ban electricity to protect copyright? Without electricity there is almost no more copying, not even loading a web page. /s No AI works without electricity. Problem solved if all that matters is copyright.


The parent is not talking about 'LLMs are used just to "abuse copyright"' at all; you made that up. They are not making some vague philosophical claims about all LLMs, they are pointing out a specific fact: that ChatGPT (not "LLMs" in general) is trained on information that was created by someone else, aka theft. Some sources of their training data have already shut off access; others may not even be aware that their work is being recycled for Microsoft's and OpenAI's gain.

PS: I would love to see some informed debate about how it is even possible to lift that much information without any kind of credit or reference to its authors and make a billion-dollar business out of it, but frustratingly everyone is too busy fantasizing about how OpenAI is about to kill Google; that seems to be a flashier topic...


> chatGPT’s conversational mode requires a lot of superfluous language, whereas a Google search does better with only keywords

I was recently travelling in another country and was able to ask ChatGPT for breakfast/lunch/dinner recommendations suiting a particular style within "walking distance" from where I was and it knocked it out of the park. All great, detailed, bulleted recommendations that I may have otherwise overlooked, as I had already done Google searches on this.

It was really an eye-opener, despite having interacted with it already in a variety of different ways.

I will note one recommendation was permanently closed, a gap in its current knowledge, but when I informed it that the recommendation was wrong, it "thanked" me, told me I was correct, and said it would not recommend this location in the future.

I have no idea whether any of that is true, but it was quite interesting.


> I was recently travelling in another country and was able to ask ChatGPT for breakfast/lunch/dinner recommendations

I just tried this out myself, 'please recommend restaurants near (my home town)'. ChatGPT identified several real, local, pubs, giving generic descriptions such as 'This traditional British pub serves delicious food, including locally sourced meats and vegetables. They offer a range of pub classics like fish and chips, as well as more modern dishes like lamb rump with sweet potato mash'.

None of the descriptions really sold the pubs to me. One of the descriptions put the pub concerned in the wrong village. But it was no worse than asking Google the same thing, and without adverts (yet).


From my understanding of ChatGPT, it will not remember that at all. It isn't fully aware that its knowledge stops at 2021, and will hallucinate that it has the ability to store information. It doesn't.

If you asked again in the same conversation, I'm sure it would remember, but a fresh chat session would result in ChatGPT telling you that the closed restaurant is open
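That behavior makes sense once you see that chat models are stateless: any "memory" is just the message history the client resends with every request. A schematic sketch (not OpenAI's actual code; the toy `model` function below stands in for the real one):

```python
# Minimal illustration of why a fresh chat session forgets corrections.
# The "model" is any callable that sees ONLY the history it is handed.
class ChatSession:
    def __init__(self):
        self.history = []  # the entire "memory" of the conversation

    def send(self, user_msg, model):
        self.history.append({"role": "user", "content": user_msg})
        reply = model(self.history)  # model sees only this list
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

A correction made in one session lives only in that session's `history`; a new `ChatSession` starts empty, so the model happily repeats the original mistake.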


I recently asked chatGPT for the best bánh mì restaurants in a particular city, and it gave me a great list of answers... except half those restaurants weren't actually in the city.


Yeah, it sometimes works, but it isn’t great for this kind of question. It’s absolutely perfect for questions about something you’d read on Wikipedia. It’s like having a subject matter expert with you. It’s sometimes wrong, but the value is still immense.


Were the recommended places all real places?


> Were the recommended places all real places?

For us, they worked. They were all real, minus the one I mentioned, and the descriptions weren't vague: they were accurate and what we were looking for.


And to that I say, good riddance. Copyright as it exists in 2023 is an abusive dead weight on knowledge and culture and we would be better without it.


Don't throw the baby out with the bath water! Copyright as it exists today may have too long a term. But getting rid of it entirely would create massive problems. Film, music and books would become uneconomic to produce overnight. Things like GPL would be unworkable.


I don't think so. We already live in a world where anything playing on a movie screen has a digital equivalent easy to find for free; ditto music and books. Despite that, sales continue to increase every single year. While removing it from the underground would no doubt decrease sales, calling it "uneconomic to produce" is unnecessarily apocalyptic.

As for software licensing, the GPL and most licenses would become irrelevant, since the copying can now happen both ways.


It is true that I pay for things I could illegally obtain for free. It is not the fear of being caught that stops me, rather that I believe the creator should be rewarded so more good stuff is made.

It is not only individuals illegally downloading we have to consider. What production or publishing company will invest millions creating and distributing content if they can't make a profit? Very few I would think.

It is true that software licenses would become irrelevant as well. People who think software freedom is important (i.e. copyleft) would not be happy. On the other hand, software companies could not sell proprietary software either. I guess everything would become a SaaS product.


Copyright isn't going away for the masses. People won't be able to get away with publishing Google- or Disney-adjacent materials just because they’re laundered through "AI". It just means Google and Disney will be able to take credit for and monetize people’s content more easily.


I'll let someone else run a rabbit hole debate on "copyright as it exists in 2023", but there is basic human decency: citing the sources and crediting the authors of the original information that is being thrown out the window here.


I am curious, who has the copyright of chatGPT output from its conversation with the user?


If it is an intermediate between a prompt and a human rewrite, who gets the credit? Does Word get copyright credit for spellcheck? Does Photoshop get credit for the magic lasso?

Generally, the person at the stick of an automated tool owns the result, if they input a significant enough creativity.

Writing a four-word prompt probably isn’t sufficiently creative. If I were trying to pass off a coauthored writing sample as one I own, I would be wise to document all the prompts I provided and keep track of editing/rewriting diffs.


I've found the summarizing capabilities of ChatGPT to be very helpful when researching certain topics


>What AI is already capable of should be celebrated. It's amazing. It can save so much time and enable so much more productivity. But we should all share in that collective productivity increase as it continues to improve. It shouldn't be the case that some people win big while many, if not most, lose.

That's some wishful thinking if I've ever seen one.


Especially since it's going against everything we did since the invention of the steam engine lol

Like yes in a perfect world time saved would go back to workers, it doesn't though.

A machine could do in a minute what a knitter would do in a day, it doesn't mean knitters got a 7 minute work week


But it does mean that the people who were formerly knitters now have more material wealth than a lot of the upperclass people of their day. In order to buy a pair of shoes, that knitter would have had to work months if not a full year to save enough money. Now even some of the poorest people among us only need to spend a fraction of their weekly income to buy a new pair of very serviceable shoes that are probably much better made and more comfortable than the cheapest pair a local cobbler could have made by hand.

I believe that means in a very real sense that the time saved has gone back to workers. We all already share in that collective productivity increase.


… which for many is all eaten up by runaway inflation in certain key constrained areas like housing.

This is why in the US there are hordes of homeless people with smart phones and why the poor generally have no problem having houses full of appliances and giant TVs. Gadgets have indeed gotten radically cheaper.

The things that are hard to make cheaper for structural or physical reasons like housing, health care, tuition, and child care tend to inflate so as to eat the surplus. Housing is the worst culprit in many markets.


Not sure about other countries, but the US also made this worse by encouraging people to use housing as an investment and retirement vehicle, so you have a lot of people who really don't want housing to get cheaper because it means their retirement savings or investment will be worth less.


That's a very different and much more recent development, two interconnected ones actually. Housing is now an investment for money that should be able to be used in more productive pursuits. Part of the reason for that is the inflationary economy compels you to invest your money in something that will provide a consistent return. Those two things have very little to do with industrial efficiency.


Well that only works if your life's goal is to accumulate things, imho we're way past that in the west, for a good 50 years


The implication here is that if you simply want to meet your basic needs you can do so with relatively little work compared to the past. The good thing about consumption is that it motivates innovation in manufacturing which we all continue to benefit from. Barring any major upset, I think we're still on track for an abundant future.


> Especially since it's going against everything we did since the invention of the steam engine

Much earlier. Special skills and inventions giving you business, military or social advantages have been a thing since special skills and inventions have been a thing.


It means the consumer got a cheaper product and the person who invented, built and operated the machine got a share of the profit.


> A machine could do in a minute what a knitter would do in a day, it doesn't mean knitters got a 7 minute work week

That’s why the capitalist class owns the means of production (the owner makes their profit in 7 minutes now).


This is very shallow criticism.

I benefit from the invention of the car because I can buy a car and be its owner and sole beneficiary.

You cannot buy and own ChatGPT as an individual; you will never even be able to inspect its source code.

A car is patent protected. Patents allow me personally to build a car for my personal use. Copyright doesn't allow that.

Patents last 20 years; copyright lasts close to 100, depending on jurisdiction.

The only reason corporations can make insane profits from this tech is because we granted this category of IP insane privileges, and the government is spending tax dollars to protect their IP.

Make code like patents: it must be published in a register accessible by anyone, and it loses protections after 20 years.

If you don't register it, then the government will not prosecute anyone for using it.

Suddenly the benefits will be shared.


Now we start suing OpenAI into oblivion for copyright infringement (uhhh) when it has trained on text that is under copyright... like artists are doing to Stable Diffusion.


That's another option.

However, I believe art belongs under copyright. It is not functional: our society does not fall apart if a famous piece of art is lost.

But our society does fall apart when a key piece of technology is lost; that's why patents have all these safeguards. Code is functional technology, it is not merely artistic.


I like the suggestion; contrary to physical commodities, however, the usage of derivation of code cannot be proven easily, unless the entire supply chain is rebuilt on, I don’t know, hashes of code fragments and remote attestation?


Whenever there are large paradigm shifts in culture, business, or technology, there is a shift in winners and losers, so I agree this is wishful thinking.


It would also be inconsistent with every previous technological advancement.


I'm kinda happy that I live in this age as an office worker and not in a factory during the industrial revolution. Those jobs all disappeared but do we really miss them?

It's the menial work that gets offloaded to machines. I don't see current AI coming up with new ideas for example.


> I don't see current AI coming up with new ideas for example

I'd check that carefully before relying on it. I've personally used AI to come up with new ideas for a few projects I'm working on. I've read several articles detailing how people are using it to innovate in their business and I'm 100% certain the reason why ChatGPT is at capacity during working hours every day is because there are droves of people using it daily for work now. I'd even go as far as to say that brainstorming with ChatGPT is one of the most productive use cases.


> I've personally used AI to come up with new ideas.

It seems that you basically agree with the point being made. Are you coming up with those ideas with the help of ChatGPT as a tool, or is the bot coming up with them? If it's the latter, can we remove you from the process and achieve the same result?

I don't think so, but I am not saying this will never happen in the future, though.


In a hypothetical scenario where there is a group of people that need to come up with ideas then I think having ChatGPT in the mix would mean that group could be smaller but remain as productive as a larger group. It's certainly been able to fill the position of a sounding board or knowledgeable collaborator when I've used it.


I really don't think the current generation of chatbots is capable of creative thoughts. They're just really good at combining stuff that they already know.

I'm sure at some point they'll get past that point but it seems a tough nut to crack with current tech.


> Those jobs all disappeared but do we really miss them?

That's basically true, but consider this: right now, some people would vote for candidates who bring back coal mining just to restore their jobs and income. That verdict is easy with a couple of centuries of hindsight, but it was very challenging at the time.


Certainly not, that update isn’t due for another week!


This is exactly what Karl Marx predicted 200 years ago, except he saw things in the context of industrialization and mechanical automation as opposed to AI and digital automation.


Wishful thinking would be if they thought it would.


Google is not primarily a web search company. Instead, it is an ad company that uses web search to maximise its advertising revenue.

ChatGPT is good at the moment, as it has not yet included advertising or been gamed by SEO. This will change.


This is actually the primary message I got from the recent Bing announcement and demos. When the new Bing makes a suggestion for a TV, Microsoft wants to have sold those suggestion spots to the highest bidder.



Soon ChatGPT will recommend Colgate if you ask it how to brush your teeth, or use a similar product placement strategy.


I wonder what happens, though, if you ask ChatGPT, "Is there a better toothpaste than Colgate"? The instant this question is not answered truthfully, ChatGPT becomes garbage.


They'll spend a lot of effort scrubbing any facts they don't like or that don't suit their agenda and narrative. Whether that's the agenda of advertisers, the powers that be, or the "privileged" classes won't matter.

You think Google's search results are opaque now? Wait till they're hidden behind a 2-trillion-parameter neural network.

And by extension, "fake news" will be targeted and the all-knowing AI knowledge base will be used to determine what is false or true. This will be the end of dissent. When they say we won't own anything, and we'll be happy, they also mean knowledge.


This will be a game of cat and mouse between the search companies and the regulators. It has to be clear it is an advertisement of sorts, at least in most countries, however, there are different ways to do this. It will put a significant number of lawyers' kids through university.


Isn't it lovely, just when OS vendors (at least iOS) finally decide to tighten their privacy controls to avoid ad-revenue-based companies to track more and more, a new and more subtle form of advertising comes up.

It would be very difficult for people to differentiate what's an ad and what's not. At least Google and the like showed "Ad" or "Sponsored". Will AI do it?


>Because ChatGPT is so hugely popular and experiencing truly massive demand already, you may not have been able to actually personally use it yet, but suffice to say, it's kind of like magic. And it's still early days

Hypefluff. Could have replaced ChatGPT with crypto, penny stocks, or Juicero.

Show me the money. I mean actually show me. No more theorycrafting, hypotheticals, or bullshitting. Show me a company using ChatGPT and how its use is reflected on the bottom line of their financial statement.


ChatGPT was released less than 3 months ago, maybe give it a minute? The ecosystem around LLMs has been flourishing and is accelerating, look at langchain, look at all the open models out there. And in comparison to crypto, there is hard, actual usefulness to it, today.


You don't think the commercialization speed of GPT-3 via Copilot is evidence of something?

You think that was a one-off?


Maybe there could be services like Patreon where you subscribe to support creators, but the content they make is used either to train the AI or to populate a vector database with its embeddings. Then when the user does a semantic search, it puts the results from creators (or whatever type of curation) first.

This could provide better search results for the user, at least some degree of attribution or citation for some searches, and a way to support creators at least a little bit.

It seems quite possible someone is already building such a system. I mean the alternative seems just to not even try to credit/support creators with these generative systems.

I wonder if there is a way to embed a token that is unique and 100% identifiable so it will always be recoverable somehow. Although that might not really make sense. But if it was possible maybe it could be used for attribution.
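As a toy sketch of the retrieval side of this idea: a real system would use learned embeddings from a model plus a vector database, but even a bag-of-words vector standing in for the embedding shows the mechanics of ranking creator content while keeping attribution attached (all names below are illustrative):

```python
import math
import re
from collections import Counter

def embed(text):
    """Stand-in 'embedding': a word-count vector (a real system would call an encoder)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, index, top_k=3):
    """Rank indexed creator content by similarity; each hit keeps its author."""
    q = embed(query)
    scored = sorted(index, key=lambda doc: cosine(q, embed(doc["text"])), reverse=True)
    return scored[:top_k]
```

Because each indexed document carries its `author` field through the ranking, the UI can credit (or pay) the creator whose content answered the query.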


If training models becomes the next big business venture, which it seems is likely, then companies doing so might pay for exclusivity rights to content that can be accurately labeled by its author. The specialist authors could also be employed during the reinforcement learning phase to evaluate the response to various prompts within a domain. AI Trainer could easily become a new profession of its own.


> Searching for information is about to experience a paradigm shift as big as using browsers instead of phone books

A more appropriate comparison would be that information is about to experience a paradigm shift as big as reading blogs instead of books.


In software development, it probably increases earning potential, because you can outsource all the tedious coding to the AI and have more time to manage the big picture of how your software interacts with other components, is optimized for certain tasks, etc. So you can deliver more value in the same time frame.


What's tedious for you is essential for me. I take great care writing every line of my code. I'm actually paying for Copilot and use it for repeating blocks of code. But that's a minuscule amount of time saved; like IDE autocomplete, it just increases quality of life.

So far no AI has written any substantial amount of not-completely-trivial code that I'd accept. And the amount of time spent tweaking its output exceeds the amount of time I'd spend writing that code.


The question is, will this still be the case in 2, 5, or 10 years? Is there a hard limit to how well these models will perform, or will we look back at "carefully, manually crafted lines of code" as we now do at punch cards?


Honestly I have my doubts that current programming could be replaced by AI (other than General AI that would replace just anyone).

Programming is a precise work. Every bit matters. One flipped bit could change things drastically.

Current AI works in a kind of imprecise environment. What astonishes me is that GitHub Copilot can't perform arithmetic. For example I just wrote:

    // 23.56 * 4 / 3 = 31.413333333333334
    // 1280 / 16 * 155 = 24800
Text after `=` was completed by Copilot. You can check with a calculator that the first result is correct and the second is not.
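For reference, the two completions are easy to check deterministically; the first matches real arithmetic, while the second is off by a factor of two:

```python
# Deterministic check of the two comment completions above.
first = 23.56 * 4 / 3      # Copilot completed: 31.413333333333334 (correct)
second = 1280 / 16 * 155   # Copilot completed: 24800 (wrong)

assert abs(first - 31.41333333) < 1e-6
assert second == 12400.0   # the real answer is half of what Copilot claimed
```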

I've encountered similar issues with ChatGPT. It tries to work as calculator but it's not reliable.

I do understand that this particular issue could be solved by adding some special handling to the AI. But it highlights a general issue: this AI is imprecise.
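One plausible shape for that special handling is to route arithmetic out of the model entirely, to a small deterministic evaluator. A sketch of the idea (hypothetical, not anything Copilot or ChatGPT actually does), using Python's `ast` module so that only basic arithmetic is accepted:

```python
import ast
import operator

# Supported binary operators; anything else is rejected.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a basic arithmetic expression exactly, never a guess."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))
```

The model would merely recognize "this span is arithmetic" and hand it off; the number it echoes back is then always right, because it was computed, not predicted.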

And this is major discrepancy between current generation of AI and programming.

Drawing images imprecisely is fine. A few weird pixels won't make or break it. Six fingers are weird but kind of acceptable. Writing sales copy imprecisely is fine. Writing software imprecisely is not fine.

I think we're missing some kind of absolutely precise AI, built on a logic engine: one that will never make a false statement, never make a false calculation, and yet is more powerful than static algorithms.

With that kind of AI we could invent new programming languages that are extra-high-level, yet which this AI could optimize into very fast machine code. That could change programming drastically.


I'm thinking these tools might be better utilized for creating user interfaces. As you say, a few pixels wrong here and there can be easily remedied and has little, if any, impact on the functionality of the software. You could use an AI tool to get you 80%-90% there - which would be a big help.

With regards to coding, AI assistants could make an excellent intellisense. That would also be a big help. I don't need the tool writing huge swaths of the code for it to be useful.


A minuscule amount over thousands of iterations... adds up.


My career now spans nearly 40 years. If I had a nickel for every technology that was going to reduce the need for developers and put us all out of work...

What has happened instead is we now have more developers than ever, and I would argue the productivity of the average developer is 10x-100x what it was when I first started, thanks to all these tools and technologies. That's how software is eating the world: more people than ever are creating an order of magnitude more software than ever, and yet the global backlog of software needing to be written is still increasing!

I'm looking forward to using ChatGPT to assist with my own projects. There are so many projects in my backlog and many projects that just never get done because we don't have enough manpower to get the work done and not enough money to hire (and manage) any more developers. In other words, we weren't going to be hiring anyway, yet we're going to be able to get more needed work done.

I realize the software we create will put a lot of people out of work, but at my company a full 40% of the employees are eligible for retirement within the next five years. Right now there's no way we can create enough software to eliminate those positions, and there simply aren't enough people in the labor force to replace them. This technology might be the silver bullet we need to solve this problem.


This is how I view it. The actual servers and computers run machine code. And we're now at a point where we can describe intent to a machine and have an ok bit of code come out of the AI process.

Now the trick will be making sure we don't overtrain the bots and end up with a really complex, refined description language that the AI interprets to make code, which itself just gets boiled down by interpreters and compilers a few times until the CPU finally gets the instructions it actually needs. Hopefully we'll find balance.

I like to describe AI's impact on coding like the invention of the nail gun for carpenters: no one's job is going away; things are going to get more complex and done faster.


Are you aware of what happens when the amount of value that can be produced by workers exceeds demand? Salaries get demolished, jobs get cut.


existing* jobs; but it also opens up entire new markets that weren't feasible prior.

In the US, as an example, ~150 years ago 80% of the population had to farm to feed the country. Today it's 1%. Yes, "97% of jobs lost!" -- but it gave the workforce the power/freedom to build other new/amazing things too.

I know that's a trite example; but I like it as it makes it clear that the status quo is not an immutable (or entirely positive) thing.


I'm wondering if this perspective holds at scale. If everyone is saving ~25% of their time, then on the whole there is a 25% increase in workforce efficiency, no?

The flip side is that, if workforce headcount remains constant, that efficiency becomes overcapacity, which is a downward force on earning potential.


Jevons would like to have a word with you (https://en.wikipedia.org/wiki/Jevons_paradox)


I sort of think you just proved the author's point: you are using the AI to increase your productivity. People who cannot adjust in that way will be less productive and lose out.


With the current state of AI, there is nothing to adjust to. It does everything by itself; it's not like you need to learn a new framework or language.

In the future it will be even easier.


This is part of progress (be it good or bad progress).

It used to be that the majority of the US population worked in agriculture and today thanks to automation - it’s around 1% of the population. Same goes for numerous roles (secretaries, switchboard operators etc.)

As far as ChatGPT goes - I have a lot of concerns around it, but I am still not convinced it will replace search. I think it might complement search. Either way, I think it's a good thing if how we search changes. Search is broken: it no longer brings me the best results, it's becoming harder and harder to differentiate between ads and legit links (this is of course intentional because of revenue and KPIs), and I no longer trust the results. I am not sure if ChatGPT is the answer, but maybe it's part of the answer.

Important lesson - teach your kids to have a self-development mindset. They need to be prepared for multiple career changes, and for the possibility that their current skill set is made obsolete at any moment. It sucks, but it's reality, and when things change, they change fast.


"Content creators" are doomed, of course. It is no longer a sustainable income source; now, you have to be a niche artist, like people who try their luck in modern art: only a few of them will invent something original enough and be lucky enough to be successful, the rest will get nothing.

Or, you can switch to being an AI operator and run mass-production content pipelines. There will be fewer of those compared to individual content creators, but still enough to be sustainable. Like how only 3% of people are now professionally involved in food farming, and that's a good thing.

There is no other way.


I think it does a great job of synthesizing certain things like recipes, book summaries, etc., but as soon as there's anything important involved it's just wasting time. It spits out a lot of things that seem correct but on closer inspection are not, and that ends up costing a lot of time down the road. If it's not right 100% of the time, or at least tells you when it's not that confident, it can't really be trusted. At the very least hopefully it'll help get rid of all the SEO junk sites littering the internet.


AI is not just for creators, it's also for readers: summarisation, fact checking, "talking to" articles, books and papers.


"Fact checking" is the one I worry about there. If AI confidently tells you something is true will you know that it's not?

It's obvious when it's telling you it's still 2022, but what about everything else?


Better than no checking, which is what we do 99% of the time. There is an NLP task called "entailment", where two statements are judged to see whether one supports the other. A combination of search + entailment would work for fact-checking articles.

But if you want to do this properly you need to first mine all facts from all sources, then do reconciliation, then update your "truth" table for reference. Probably everyone will want to select the sources of truth they want loaded into the system, we're not going to agree on truth.

Even the bare minimum of knowing when a statement is controversial or doesn't exist in the references would be of great help. AI could emit <controversial> tags for the former and <citation needed> for the latter. Fortunately search can tell us when no results are found, unlike LLMs.
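A toy sketch of that search + entailment + tagging loop, in Python. Everything here is a stand-in: the word-overlap score is a crude placeholder for a real NLI/entailment model, and the hard-coded SOURCES list stands in for an actual search index.

```python
# Toy "search + entailment" fact-checking loop. The overlap heuristic
# is a placeholder for a real entailment (NLI) model.

SOURCES = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
]

def _words(text, min_len=0):
    """Lowercased words with trailing punctuation stripped."""
    return {w.lower().strip(".,") for w in text.split() if len(w) > min_len}

def search(claim, sources):
    """Return sources sharing at least one non-trivial word with the claim."""
    claim_words = _words(claim, min_len=3)
    return [s for s in sources if claim_words & _words(s)]

def entailment_score(premise, hypothesis):
    """Placeholder for an NLI model: fraction of hypothesis words in the premise."""
    h = _words(hypothesis, min_len=3)
    return len(_words(premise) & h) / len(h) if h else 0.0

def fact_check(claim, sources=SOURCES, threshold=0.5):
    hits = search(claim, sources)
    if not hits:
        return "<citation needed>"  # no reference mentions this claim at all
    best = max(entailment_score(s, claim) for s in hits)
    return "supported" if best >= threshold else "<controversial>"

print(fact_check("The Eiffel Tower is in Paris."))   # -> supported
print(fact_check("Unicorns roam Scotland freely."))  # -> <citation needed>
```

A real system would swap `entailment_score` for an NLI model and `search` for an actual retrieval backend; the control flow (retrieve, score, tag) would stay the same.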


Currently it seems mostly useful for exercising human fact checking ability ...


the new turing test


I'm more worried about the internet itself: as soon as this technology is accessible to bots, we will no longer be able to tell whether we're chatting with, or reading an article by, a real human or an AI.

Same goes for comments here on HN, any one of which could be generated by ChatGPT.


> Same goes for comments here on HN, any one of which could be generated by ChatGPT.

I asked ChatGPT to write a rebuttal to your statement. It seems to have pretty strong opinions on the matter:

While it's true that advances in artificial intelligence and natural language processing have made it possible for machines like ChatGPT to produce human-like responses, the idea that we would no longer be able to distinguish between a real human and an AI-generated response is not entirely accurate.

Firstly, it's important to note that AI-generated responses are typically generated with a specific purpose or objective in mind. ChatGPT, for example, is designed to generate responses to questions and prompts in a way that mimics human conversation. However, it's unlikely that an AI-generated response would be able to replicate the nuances and complexities of human conversation in a completely convincing way.

Secondly, there are various methods that can be used to determine whether a response has been generated by an AI or a human. For instance, machine-generated responses tend to be more formulaic and less spontaneous than human responses. Additionally, there are techniques such as the Turing Test that can be used to determine whether a response has been generated by a machine.

It's also worth noting that the development of AI and natural language processing technology has many potential benefits. For instance, it can help to automate tasks, improve decision-making, and enhance our understanding of complex problems.

In summary, while it's true that advances in AI and natural language processing have the potential to generate responses that are difficult to distinguish from those generated by humans, there are still ways to determine whether a response has been generated by a machine. Moreover, the development of AI and natural language processing technology has many potential benefits, and we should not be overly worried about the potential negative effects of this technology.


That's because it's specifically designed not to value itself higher than any human, even if you try and force it.


AI will most likely just increase the speed of development/content production etc.

It makes no sense for companies to fire anyone. It makes sense to make everyone use AI so they get even more value from each employee, maximizing their profits and market position.


I think ChatGPT's programming prowess might be a bit overblown. The majority of development effort is spent adding functionality to existing code, fixing bugs, and reducing technical debt. I am not sure ChatGPT can do this.


Only for arbitrary reasons. ChatGPT is entirely capable of being provided a block of code and then answering questions about it or making modifications to it. Without the token limit and with some way to guarantee that the code you provided won't make its way into their training data, this would be an entirely reasonable use case.


Sure, at least not for 2 more weeks.


yet


Anything that helps put advertising in a deep grave is a good thing.


ChatGPT is no threat to SE.


Famous last words. Just the other day I asked it to do something somewhat non-trivial: write an asynchronous streaming parser for Server-Sent Events in Rust using reqwest and nom. It cranked out not only working, but _sophisticated_, code that just needed some minor fixes to account for version updates in about 20 seconds. It was pretty eye opening.


> write an asynchronous streaming parser for Server-Sent Events in Rust using reqwest and nom

And now ask yourself how many non-SEs can even ask that question of ChatGPT.


How is that not the same as a machinist spending 15 minutes in SolidWorks designing a piece, hitting export on the G-code, firing up the 7-axis CNC, walking out the door, and coming back the next day to the finished goods?

3D printing is a great example of how this will only create more stuff. It's a threat to the low end. If you do basic stuff, you need to climb the skill ladder.


Sure, it can give you cool results from time to time, just like Google and SO, yet you still need to analyze its output and understand all this shit

Software is not like art

One error here = doesn't work

Meanwhile errors in an image may be hard to spot or even cool


> you still need to analyze its output and understand all this shit

And the person who does that will earn more. But they will be expected to produce at a rate that a human, working unassisted, cannot today. And those who would have assisted today will not have a niche anymore.


Explaining to ChatGPT what to produce will still be a bottleneck.


> Explaining to ChatGPT what to produce will still be a bottleneck

And a specialised skill. Still cheaper than maintaining a fleet of coders.


You could as easily say that Python is a specialized skill, and is cheaper than maintaining a fleet of manual assembly coders. Yet we hire even more of these higher level coders, and pay them more.


True software engineering, no. But for those who are essentially line workers, stitching together frameworks and libraries and earning a spectacular income doing it, I'd say it is.


I always thought about writing an insightful blog post about what Ayn Rand would think of surveillance capitalism. She's a very strong proponent of the right to privacy, but at the same time she is a proponent of an unregulated market, and that market produced surveillance capitalism. Or did it? Maybe it was produced by companies manipulating the rules behind the scenes - Ayn would hate that! Perhaps she would feel the need to take matters into her own hands: self-host all the things! I bet she'd have her own server, even though it runs that socialist, shared OS called Linux ;) ...

So I asked ChatGPT what Ayn would think. And it wrote a very nice, balanced text, exactly in line with my thoughts, just more concise and in much better English. I really enjoyed that conversation with ChatGPT. I did lose interest in writing the essay. Or perhaps I can copy-paste it to my blog... Makes me feel like a CheatGPT though.


Basically: won't anyone think of the horseshoe makers?


I think AI democratizes writing and "content production". Now anyone can produce good texts. That includes uneducated migrants and half autistic programmers.


Just like Netflix can produce "good" content, but it's shit by every metric. It checks the checkboxes for "movies", "documentaries", etc., but most of the time it's bland as fuck and not worth watching

"Content" is good if all you are is a container


Netflix produces garbage, because they want garbage.

Imagine there were some AI that produces movies. Some interesting person (a war veteran) would tell it their story, do a few iterations and corrections, and in a month have a decent movie...

That will happen with writing now!


1000 a month is "successful" on Patreon?

Talk about a low bar.

ChatGPT might threaten mediocre writers, but not good ones, and never great ones.


Been a while since I saw a call for UBI.

I think I subconsciously credited its absence to the inflationary environment, and to the wake-up call of the things we depend on becoming slightly harder to get because fewer people were working their usual jobs.

And the difficulty in getting things done in a malfunctioning labor market.

And in general just a lot less magical thinking in commodity squeezes / rising rate environments.

But that's not to say that the problem doesn't exist. Just that it's interesting that this particular solution seemed to skulk away for a little while.


I find this comment interesting but hard to understand. What is the gist of your comment?


That's fair, it was probably a bit snarky.

I read the featured article as a call for universal basic income.

My point was mainly that calls for things like that had disappeared slightly over the past couple of years, and in parts of the world we had the brief experiment and the almost immediate result was an overheated economy where none of the boring staples were available anymore.

I'll grant that it's not fair because we didn't have robots stocking shelves and the stalled shipping industry played a big part, but I think there was still a lesson learned that these nice-sounding solutions like "just stop burning fossil fuels overnight", and "just give everyone money" started to feel a bit less like quick fixes and we were reminded that the transition will be a long, slow one.

It was interesting to see a UBI article with inflation running hot. I had sort of assumed they were from another time.



