
Maybe if you rewrote it from memory.


This doesn't surprise me; I find LLMs are really good at interpolating and translating. So if I made up a language, gave it the rules, and asked it to translate, I wouldn't expect it to be bad at it.


It shouldn't surprise anyone, but it is clear evidence against the claim I replied to, and clearly a lot of people still hold on to this irrational assumption that they can't produce anything new.


They're not producing anything new... If you give it the answer before asking the question, no wonder it can answer. Prompting works by finding resonance in the patterns extracted from the training data, which is why it fails spectacularly for exotic programming languages.


When you invent a language and tell it to express something in that language, you've not given it the answer before asking the question.

That's an utterly bizarre notion. The answer in question never existed before.

By your definition humans never produce anything new either, because we also always extrapolate from patterns in our previous knowledge.

> it fails spectacularly for exotic programming languages.

My experience is that it not just succeeds for "exotic" languages, but for languages that didn't exist prior to the prompt.

In other words, they can write at least simple programs zero-shot when you explain the semantics of a language, without being given even a single example of a program in that language.

Did you even read the comment you replied to above?

To quote myself: "Invent a programming language that does not exist."

I've had this work both for "from scratch" descriptions of languages by providing grammars, and for "combine feature A from language X, and feature B from language Y". In the latter case you might have at least an argument. In the former case you do not.

Most humans struggle with tasks like this - you're setting a bar for LLMs most humans would fail to meet.


As long as you create the grammar, the language exists. Same if you edit a previous grammar. You're the one creating the language, not the model. It's just generating a specific instance.

If you tell someone that multiplying a number by 2 is adding the number to itself, then if this person knows addition, you can't be surprised when they tell you that 9*2 is 18. A small leap in discovery is when the person can extract the pattern and give you 5*3 as 5+5+5. A much bigger leap is when the person discovers exponentiation.
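A minimal sketch of that analogy, assuming nothing beyond the thread's own example (Python, purely illustrative):

  # Multiplication "explained" in terms of addition, rather than memorized.
  def multiply(a: int, b: int) -> int:
      total = 0
      for _ in range(b):  # add a, b times over
          total += a
      return total

  print(multiply(9, 2))  # 18 - applying the rule as stated
  print(multiply(5, 3))  # 15 - the small leap: 5+5+5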

But if you take the time to explain each concept....


> As long as you create the grammar, the language exists.

Yes, but it didn't exist during training. Nothing in the training data would provide pre-existing content for the model to produce from, so the output would necessarily be new.

> But if you take the time to explain each concept....

Based on the argument you presented, nothing a human does is new, because it is all based on our pre-existing learned rules of language, reasoning, and other subjects.

See the problem here? You're creating a bar for LLMs that nobody would reasonably assign to humans - not least because if you do, then "accusing" LLMs of the same does not distinguish them from humans in any way.

If that is the bar you wish to use, then for there to be any point to this discussion, you will need to give an objectively measurable definition of "creating something new" that a human can meet but that you believe an LLM cannot even in theory meet; otherwise the goalposts will keep moving every time an LLM example is shown to be possible.


See my definition at: https://news.ycombinator.com/item?id=44137201

As mentioned there, I was arguing that without being prompted, there's no way it can add something that is not a combination of the training data. And that combination does not work on the same terms you would expect from someone who had learned the same material.

In linear regression, you can reduce a big amount of data to a small number of factors. Every prediction is then a combination of those factors. According to your definition, those predictions would be new. For me, what's new is when you retrospectively add the input to the training data and find a different set of factors that gives you a bigger set of possible answers (generation) or narrows the definition of correct answers (reliability).
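A hedged sketch of that linear regression point (the data and factor counts here are made up for illustration):

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 3))              # a big amount of data...
  y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

  w, *_ = np.linalg.lstsq(X, y, rcond=None)  # ...reduced to a few factors

  x_new = rng.normal(size=3)
  y_pred = x_new @ w                         # every prediction: a combination of those factors

  # "New" in the sense above: fold the input back into the data and refit,
  # so the factors themselves change.
  w2, *_ = np.linalg.lstsq(np.vstack([X, x_new]), np.append(y, y_pred), rcond=None)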

That narrowing is what people do when programming a computer. You go from something that can do almost anything and restrict it down to the few things you need. What LLMs do is throw the dice: what you get may or may not do what you want, and may not even be possible.


That comment doesn't provide anything resembling a coherent definition.

The rest of what you wrote here is either also true for humans, or not true for machines irrespective of your definitions, unless you can demonstrate that humans can exceed the Turing computable.

You can not.


I wish I could do that in an interview.


I have very limited experience with LLMs, but I've always thought of it as a compounding-errors problem: once you get a small error early on, it can compound and go completely off track later.
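A back-of-the-envelope sketch of that intuition (the per-step accuracy is an assumption, not a measurement): if each step is independently right with probability p, a fully correct n-step chain happens with probability p**n, which decays fast.

  p = 0.99  # assumed per-step accuracy
  for n in (10, 100, 1000):
      print(n, p ** n)
  # 10   ~0.904
  # 100  ~0.366
  # 1000 ~0.00004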


Google makes almost all of its money from search, an extremely lucrative property, which is under threat from all the new AI players.

So while they have a bunch of cool tech on the possibility horizon, the only thing the market cares about is the ability to make money, and there's some uncertainty on that front.


These are the entry-level cards; I imagine the coming higher-end variants will have the option of much more RAM.


Sorry what? Shelter is a necessity for survival.

Is it communism to not allow people to buy up all the water supplies and sell it back to you at an extortionate rate?

The truth about the housing market is that it's not a free market anyway: it's controlled by special interests, bad laws, zoning, local NIMBYs, councils... the list goes on.


Shelter is a necessity but lots of people think they’re entitled to live wherever they want at a price they can afford. They go to SF or NY and complain about rents when there is a whole country they can live in.

I don’t think communities have to allow higher density and the lower quality of life that comes with it. Desirable places are going to be expensive and that’s okay.

Personally I think if we have better rail, people can live in a wider range of places but still access jobs. You see this in Europe, and it lets smaller communities keep their way of life.


Lots of employers think they're entitled to in-office days when remote work suffices. The argument would be stronger if the whole country were available to do all work in.


There is available work in many places. But I do think many want to only do certain kinds of jobs, get paid a certain amount, and live wherever they want. That feels more like entitlement to me. For example many college educated adults look down on trade jobs and want a desk job in whatever field they studied in a coastal urban city. But that’s not reasonable.


Non sequitur. Read carefully: there isn't all work in all places. It's not about how much it pays. It's the demand to come to an office in "a coastal urban city" to work when the work can be completed elsewhere. It seems you not only want to deter people from living in desirable places, but also from working in jobs they were trained for, for no reason at all.


Yes, actually, that is communism and it's awesome.


Surely we could already do 1 kHz displays with OLED, but I assumed the bandwidth required wouldn't be available on any of today's display outputs.
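Rough arithmetic behind that intuition (uncompressed 8-bit-per-channel RGB, ignoring blanking overhead):

  w, h, bits_per_pixel, hz = 1920, 1080, 24, 1000
  gbit_per_s = w * h * bits_per_pixel * hz / 1e9
  print(gbit_per_s)  # ~49.8 Gbit/s for 1080p at 1 kHz

  # For comparison: HDMI 2.1 carries roughly 42 Gbit/s of payload and
  # DisplayPort 2.0 up to roughly 77 Gbit/s, so even 1080p at 1 kHz
  # would need the fastest of today's links, or compression.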


I think they're going to have to adjust a lot more than just that.

Let's see. Bandwidth seems to be the main problem, but so far no one has shipped a 640x480 at 2 kHz or whatever. Who knows.


True, LG is releasing OLED monitors this year that reach 480 Hz, but at 1080p (240 Hz at 4K).


Interesting that 4x the pixels only requires a halving of the refresh rate


Might be due to compression of frames as they're sent over the cable (called Display Stream Compression, iirc).
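Quick arithmetic on the parent's observation: 4x the pixels at half the refresh rate is still 2x the raw pixel rate, so a link maxed out at 1080p/480 Hz would need roughly 2:1 compression for 4K/240 Hz, well within DSC's nominal ~3:1 visually lossless ratio.

  raw_1080p_480hz = 1920 * 1080 * 480  # pixels per second
  raw_4k_240hz = 3840 * 2160 * 240
  print(raw_4k_240hz / raw_1080p_480hz)  # 2.0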


Any idea why South Korea is particularly cost-prohibitive? I would assume Twitch would be very keen to have a presence there.


Per: https://blog.twitch.tv/en/2023/12/05/an-update-on-twitch-in-...

  Ultimately, the cost to operate Twitch in Korea is prohibitively expensive and we have spent significant effort working to reduce these costs so that we could find a way for the Twitch business to remain in Korea. First, we experimented with a peer-to-peer model for source quality. Then, we adjusted source quality to a maximum of 720p. While we have lowered costs from these efforts, our network fees in Korea are still 10 times more expensive than in most other countries. Twitch has been operating in Korea at a significant loss, and unfortunately there is no pathway forward for our business to run more sustainably in that country.
Korea implements a "Sending Party Network Pays" tax.


Would be interesting to understand how far they got with the p2p experiments.


I've never worked at Twitch, but I have experimented with P2P video delivery, and the TL;DR from my experiments is that real-time video delivery can not easily provide a consistent, low latency experience, and video experience problems amplify quite quickly. I wrote up a fair bit about this in the comments of a different HN post a while ago: https://news.ycombinator.com/context?id=33070218


https://carnegieendowment.org/2021/08/17/afterword-korea-s-c...

tl;dr is that SK has some big fees for data users that have a disproportionate impact on non-Korean companies


Wouldn't the workaround then be for Twitch to do a 49-51 split with a South Korean ISP to get favoritism?

These entrenchment laws are there to favor the big ISPs in SK, so it would stand to reason that they would be happy to make money in such an arrangement, and Twitch would get secondary downstream benefits it could reap.


There’s a good chance someone already had this idea, explored it, and found a reason why it can’t work.


I'm not so sure. Amazon doesn't have a history of doing this sort of thing; I don't know that it's in their DNA.

I'm willing to bet there was no real diligence done on an idea like this as a result. I was of course only speculating (and don't know the inner details of Twitch either); however, upon thinking about it further, it seems unlikely it was ever seriously entertained, if at all.


This was discussed here [0]. The tl;dr is that South Korea has the opposite of net neutrality: instead of charging the consumer for excess bandwidth, they charge the service providers. Streaming video is extremely bandwidth-intensive, so Twitch can't afford to keep streaming if they're charged for every GB.

[0] https://news.ycombinator.com/item?id=38539167


They just didn't care about the South Korean market enough. Many blame South Korea's sending-party network fee, but the biggest competitor to Twitch in South Korea, namely AfreecaTV, has been fine operating in South Korea under the same network fee terms. Twitch just didn't want to adapt to the South Korean market, and/or lacked the ability to.


Nick Plott (a longtime American commentator for StarCraft Brood War who lives and works in Korea) has a video that goes into this: https://youtu.be/o9n2PRbwGMY


Not an expert, but I have read that South Korea's ISPs get to charge Twitch for the bandwidth it consumes (no net neutrality). RIP.


The license doesn't transfer if the company is bought.

