Every AI company is doing the same thing; there is nothing special about Microsoft in this instance. If you're using a 3rd-party provider for your queries, you can assume they are going to end up in the training corpus.

Their structural properties are similar to Peano's definition in terms of 0 and a successor operation. ChatGPT does a pretty good job of spelling out the formal structural connection¹ but I doubt anyone other than Church himself knew exactly how he came up with the definition.

¹https://chatgpt.com/share/693f575d-0824-8009-bdca-bf3440a195...
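
For anyone who doesn't want to click through, here is a minimal sketch of the connection in TypeScript (my own illustration, not Church's notation): a Church numeral encodes n as "apply a function n times", so Peano's 0 and successor fall straight out of the encoding.

    // A Church numeral encodes n as "apply a function f to x exactly n times".
    type Church = <A>(f: (a: A) => A) => (x: A) => A;

    const zero: Church = f => x => x;       // Peano's 0: apply f zero times
    const succ = (n: Church): Church =>     // Peano's successor operation
      f => x => f(n(f)(x));                 // one more application of f

    // Read a numeral back out by counting the applications.
    const toNumber = (n: Church) => n((k: number) => k + 1)(0);

    const three = succ(succ(succ(zero)));
    console.log(toNumber(three)); // 3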


Yeah, I've been meaning to send a request to Princeton's libraries, which hold his notes, but I don't know what a good request looks like.

The jump from "there is a successor operator" to "numbers take a successor operator" is interesting to me. I wonder if it was the first computer-science-y "oh, I can use this single thing for two things" moment! Obviously not the first in all of science/math/whatever, but it's a very good idea.


The idea of Church numerals is quite similar to induction. An induction proof extends a treatment of the zero case and the successor case to a treatment of all naturals. Or one can see it as defining the naturals as the numbers reachable by this process. From there, the leap to Church numerals is not too big.
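
To make that concrete, here's a small self-contained sketch (my own, in TypeScript): a Church numeral literally takes a zero case and a successor case and iterates, which is the computational shape of an induction proof.

    // A numeral is its own induction principle: hand it a successor case and
    // a zero case and it folds them over 0, 1, ..., n-1.
    type Nat = <A>(succCase: (a: A) => A) => (zeroCase: A) => A;

    const two: Nat = s => z => s(s(z));

    // Instantiating the two cases is the same move an induction proof makes:
    console.log(two((k: number) => k + 1)(0));    // 2
    console.log(two((s: string) => s + "|")("")); // "||"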

Probably not possible unless you have academic credentials to back up your request, like being a historian writing a book on the history of logic & computability.

TypeScript's type system is Turing complete, so you have access to essentially unlimited expressivity (up to the typechecker's recursion-depth limit): https://news.ycombinator.com/item?id=14905043
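
A minimal sketch of what that looks like in practice (the linked thread has far more elaborate examples): recursive conditional types let the typechecker itself do Peano arithmetic.

    // Type-level Peano arithmetic: the typechecker does the computing.
    type Zero = { tag: "zero" };
    type Succ<N> = { tag: "succ"; pred: N };

    type Add<A, B> = A extends Succ<infer P> ? Succ<Add<P, B>> : B;

    type One = Succ<Zero>;
    type Two = Succ<One>;
    type Three = Succ<Two>;

    // This only typechecks because Add<One, Two> reduces to Three.
    const proof: Add<One, Two> extends Three ? true : false = true;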

Wall Street is based on financial speculation & moving digits in databases; Main Street lives with real material constraints like gas prices at the pump & food prices at the grocery store, not futures contracts for oil barrels. The article talks about concentration of wealth in fewer corporations with deep pockets, but that's again more moving of digits in databases, nothing to do with physical constraints like the declining EROI of fossil fuel deposits.

Meta sells ads in exchange for keeping people hooked on their various digital platforms. Meta will never produce any worthwhile AI progress beyond how to sell more ads & how to better keep people glued to their digital properties; their incentives are not structured for anything else. If you believe they will succeed in the mission their corporate structure incentivizes (I don't), then you should buy their stock & sell it later when they make more money by getting more people addicted to useless engagement spam.

How do they prove that further precision enhancements will maintain the singularity? They're using numerical approximation, which means they don't have analytical expressions to work with.

LLMs cannot "lie", they do not "know" anything, and they certainly cannot "confess" to anything either. What LLMs can do is generate numbers, constructed piecemeal from other input numbers & other sources of data by basic arithmetic operations. The output numbers can then be interpreted as a sequence of letters, which can be imbued with semantics by someone capable of reading and understanding words and sentences (see the sketch below). At no point in the process is there any kind of awareness that can be attributed to any part of the computation or the supporting infrastructure, other than in whoever started the whole chain of arithmetic by pressing some keys on a computer connected to the relevant network of machines.

If you think this is reductionism, you should explain where exactly I have reduced the operations of the computer to something that is not a correct & full-fidelity representation of what is actually happening. Remember, the computer cannot do anything other than boolean algebra, so make sure to let me know where exactly I made an error about the arithmetic in the computer.
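
The "numbers interpreted as letters" step is easy to make concrete (a toy illustration; real tokenizers use much larger lookup tables, but the principle is the same):

    // The final step of the chain: arithmetic produces integers, and a
    // fixed convention -- not the machine -- turns them into text.
    const outputIds = [72, 101, 108, 108, 111]; // what the computation emitted
    const text = outputIds.map(n => String.fromCharCode(n)).join("");
    console.log(text); // "Hello" -- the meaning is supplied by the reader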


These types of semantic conundrums would go away if, when we refer to a given model, we think of it more holistically as the whole entity which produced and manages a given software system. The intention behind and responsibility for the behavior of that system ultimately traces back to the people behind that entity. In that sense, LLMs have intentions, can think, know, be straightforward, deceptive, sycophantic, etc.

In that sense every corporation would be intentional, deceptive, exploitative, motivated, etc. Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.

> In that sense every corporation would be intentional, deceptive, exploitative, motivated, etc.

...and so they are, because the people making up those corporations are themselves, to various degrees, intentional, deceptive, etc.

> Moreover, it does not address the underlying issue: no one knows what computation, if any, is actually performed by a single neuron.

It sidesteps this issue completely; to me the buck stops with the humans, with no need to look inside their brains and reduce further than that.


I see. In that case we don't really have any disagreement. Your position seems coherent to me.

Can't you say the same of the human brain, given a different algorithm? Granted, we don't know the algorithm, but nothing in the laws of physics implies we couldn't simulate it on a computer. Aren't we all programs taking analog inputs and spitting out actions? I don't think what you presented is a good argument for LLMs not "know"ing, in some meaning of the word.

What meaning of "knowing" attributes understanding to a sequence of boolean operations?

Human brains depend on neurons and "neuronal arithmetic". In fact, their statements are merely "neuronal arithmetic" that gets converted to speech or writing, which gets imbued with semantic meaning when interpreted by another brain. And yet we have no problem attributing dishonesty or knowledge to other humans.

Please provide references for formal & programmable specifications of "neuronal arithmetic". I know where I can easily find specifications & implementations of boolean algebra, but I haven't seen anything of the sort for what you're referencing. Remember, if you are going to tell me my argument is analogous to reducing neurons to chemical & atomic dynamics, then you had better back it up w/ actual formal specifications of the relevant reductions.

Well, then you didn't look very hard. Where do you think we got the idea for artificial neurons from?
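
For context, the formalization usually pointed to here is the McCulloch-Pitts neuron (1943): a weighted sum followed by a threshold. A minimal sketch of that model (whether it captures what biological neurons actually compute is exactly what's in dispute):

    // McCulloch-Pitts neuron: a weighted sum followed by a threshold.
    const neuron = (weights: number[], threshold: number) =>
      (inputs: number[]): 0 | 1 => {
        const sum = weights.reduce((acc, w, i) => acc + w * inputs[i], 0);
        return sum >= threshold ? 1 : 0;
      };

    // The original paper builds logic gates out of these, e.g. AND:
    const and = neuron([1, 1], 2);
    console.log(and([1, 1]), and([1, 0])); // 1 0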

You can just admit that you don't have any references & that you do not actually know how neurons work or what type of computation, if any, they actually implement.

I think the problem with your line of reasoning is a category error, not a mistake about arithmetic.

I agree that every step of an LLM’s operation reduces to Boolean logic and arithmetic. That description is correct. Where I disagree is the inference that, because the implementation is purely arithmetic, higher-level concepts like representation, semantics, knowledge, or even lying are therefore meaningless or false.

That inference collapses levels of explanation. Semantics and knowledge are not properties of logic gates, so it is a category error to deny them because they are absent at that level. They are higher-level, functional properties implemented by the arithmetic, not competitors to it. Saying “it’s just numbers” no more eliminates semantics than saying something like “it’s just molecules” eliminates biology.

So I don’t think the reduction itself is wrong. I think the mistake is treating a complete implementation-level account as if it exhausts all legitimate descriptions. That is the category error.


I know you copied & pasted that from an LLM. If I had to guess, I'd say it was from OpenAI. It's lazy & somewhat disrespectful. At the very least, try to do a few rounds of back & forth so you can get a better response¹ by weeding out all the obvious rejoinders.

¹https://chatgpt.com/share/693cdacf-bcdc-8009-97b4-657a851a3c...


I once wrote a parody sci-fi short story about this, which I called "Meeting of the Bobs"¹, but it looks like some people thought of the same idea & took it seriously. It seems obvious enough, so I'm not claiming any originality here.

¹https://drive.proton.me/urls/7D2PX37MJ0#5epBhVuZZMOk


I haven't bought a new phone or computer for more than 5 years now. I don't really feel like I'm missing out on anything.

The answers will soon have ads for vitamins & minerals.

Dr. Nigel West's Medical Elixir
