Hacker News | LPisGood's comments

Really? That is so interesting - which ones? Any ancestors of commonly used ones today?

Off the top of my head: BIX, Prodigy, Compuserve, MCIMail, BBS, Ethernet, Token Ring, $25 Network, AOL, Timeshare, Kermit, Fax.

Anyone with 2+ computers immediately thought about connecting them.


Why do you think it will be increasingly bad? It seems to me like it’s already as bad as it’s capable of getting.

Because it's still relatively new. Gambling has been around forever, and so has addiction. What hasn't been around is gambling your life away on the same device(s) you do everything else in today's modern society on. If you had an unlimited supply of whatever monkey is on your back, right at your fingertips, you'd be dead before the week is out from an overdose. It's the normalization of this level of access to gambling which gives me great fear for the future. Giving drugs to minors is a bigger crime than giving them to adults for a reason. Without regulation and strong cultural pushback, it's gonna get way worse, unless we make huge leaps in addiction treatment (which I am hopeful for; GLP-1s aren't yet scientifically proven to help with that, but there's a large body of anecdotal evidence to suggest they do).

It's only been 8 years; the addicts' lives, and the lives of those they touch, can keep getting worse until their deaths.

That’s the thing here. Software engineering is an intelligence-complete problem. If AI can solve it, then it can solve any sort of knowledge work like accounting, financial analysis, etc.

Only if by "solving it", you mean being able to write any program to do anything.

Software engineering is a hubris-complete problem. Somehow, being able to do so much seems to make us all assume that everyone else is capable of so little. But just because we can write 1000 programs to do 1000 different things, and because AI can write 1000 programs to do 1000 different things, it doesn't mean that we can write the million other programs that do a million other things. That would be like assuming that because someone is a writer and has written 1 book, they are fully capable of writing both War & Peace and an exhaustive manual on tractor repair.

Financial analysis is not easier than programming. You don't feed in numbers, turn a crank, and get out correct answers. Some people do only that, and yeah, AI can probably replace them.

"Computing" as a field only made sense when computers were new. We're going to have to go back to actually accomplishing things, not depending on the fact that computers are involved and making them do anything is hard so anyone who can make them do things is automatically valuable. (Which sucks for me, because I'm pretty good at making computers do things but not so good at much of anything else with economic value.) "What do you do?" "I use computers to do X." "Why didn't you just say you do X, then?" is already kind of a thing; now it's going to move on to "I use AI to do X."

Then again: the AI-dependent generation is losing the ability to think, as a result of leaning on AI to do it for them. So while my generation stuck the previous generation with maintaining COBOL programs, the next generation will stick mine with thinking. I can deal with that. I like thinking.

</end-of-weird-rant>


> Financial analysis is not easier than programming. You don't feed in numbers, turn a crank, and get out correct answers

It’s not, but if software engineering is solved then of course so is financial analysis, because a program could be written to do it. If the program is not good enough, then software engineering is not solved.

I think this is what you were getting at with this part, but it’s not clear to me, because it seems like you were disagreeing with my thesis: “because AI can write 1000 programs to do 1000 different things, it doesn't mean that we can write the million other programs that do a million other things”

I’m not sure if you’re saying that people weren’t using computers to solve problems before, but that’s pretty much everything they do. Some people were specifically trained to make computers solve problems, but if computers can solve X problem without a programmer, then both the computer programmer and the X problem solver are replaced.


I don't think software engineering is ever going to be solved, but financial analysis definitely never will be. It's impossible; the nature of it dictates that whatever changes happen will further change the results. Financial analysis requires novel thinking, and even if you have AGI that can engage in novel thought, it will just be another input into the system.

Just like AI, the winners will (continue to) be the ones with the most access to data and the technical and financial capital to make use of it.

This is the crux of it. The digital world doesn't produce value except when it eases the production of real goods. Software Development as a field is strange: it can only produce value when it is used to make production of real goods more efficient. We can use AI to cut out bureaucratic work, which then means that all that is left is real work: craftsmanship, relationship building, design, leadership.

There are plenty of "human in the loop" jobs still left. I certainly don't want furniture designed by AI, because there is no possible way for an AI to understand my particular fleshly requirements (AI simply doesn't have the wetware required to understand human tactile needs). But the bureaucratic jobs will mostly be automated away, and good riddance. They were killing the human spirit.


> Software Development as a field is strange: it can only produce value when it is used to make production of real goods more efficient. We can use AI to cut out bureaucratic work, which then means that all that is left is real work: craftsmanship, relationship building, design, leadership.

That's a really odd take. Software is merely a way of ingesting data and producing information. And information often has intrinsic value. This can scale from simple things like the minor annoyance of forgetting your umbrella, to avoiding deaths or millions of dollars in losses due to ships sinking in storms.

Now the long term value of software does approach zero, because it can usually be duplicated quite easily.


Extraction and manufacturing are considered the primary and secondary economic sectors. In a closed loop system, tertiary and onward sectors, like services and technology, cannot exist without the primary and secondary.

I value your weird rant. Yes it did go on as a thought stream, but there's sense in there.

I've been thinking a lot about a kind of smart-people paradox: very intellectual arguments all basically plotting a line toward some inevitable conclusion like superintelligence or consciousness. Everything is a raw compute problem.

While at the same time all scientific progress gives us more and more evidence that reality is non-computable, non linear.


> While at the same time all scientific progress gives us more and more evidence that reality is non-computable, non linear.

What scientific problems are non-computable?

ANNs are designed to handle non-linearities, BTW; that's the entire point of activation functions and multi-layer networks.
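
To make that concrete, here is a minimal sketch (my own, not from anyone upthread; it assumes numpy is installed) of why the activation matters: two linear layers with no activation between them collapse into a single linear map, while putting a ReLU between the same two layers lets them represent a non-linear function like |x|. The weights are hand-picked for illustration, not trained.

    import numpy as np

    W1 = np.array([[1.0], [-1.0]])   # layer 1: 1 input -> 2 hidden units (hand-picked weights)
    W2 = np.array([[1.0, 1.0]])      # layer 2: 2 hidden units -> 1 output

    def relu(z):
        return np.maximum(z, 0.0)

    def two_linear_layers(x):
        # No activation: W2 @ (W1 @ x) equals (W2 @ W1) @ x, so it is still linear in x.
        return W2 @ (W1 @ x)

    def two_layers_with_relu(x):
        # With ReLU in between, these exact weights compute |x|, which no single linear map can.
        return W2 @ relu(W1 @ x)

    x = np.array([[-3.0]])
    print(two_linear_layers(x))     # [[0.]]  (here the composed linear map happens to be zero)
    print(two_layers_with_relu(x))  # [[3.]]  == |x|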


Non-computable, non-linear as in: given known input parameters, you can determine the output parameters.

We can't do that for almost any complex physical system, such as living organisms.


> Non-computable, non-linear as in: given known input parameters, you can determine the output parameters.

These two words do not mean the same thing.

Non-linear functions do not mean you cannot determine the output for a given input.

All non-linear means is that the conditions f(x+y) = f(x) + f(y) and f(kx) = kf(x) do not hold for arbitrary x, y, k.

For example, f(x) = x^2 is a non-linear function. Can you determine what f(x) is for arbitrary x?
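
To spell that out, a quick check of my own in Python: f(x) = x^2 fails both linearity conditions, yet its value is trivially determined for any input.

    def f(x):
        return x ** 2

    print(f(1 + 2), f(1) + f(2))   # 9 vs 5: f(x + y) != f(x) + f(y)
    print(f(3 * 2), 3 * f(2))      # 36 vs 12: f(k*x) != k*f(x)
    print(f(12345))                # 152399025: still perfectly determined for any x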

Perhaps you meant what used to be called "chaotic systems": systems that are highly sensitive to initial conditions. Yes, they are non-linear, but they are completely deterministic. A classic example would be the n-body problem in physics under most conditions.
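
If it helps, here is a tiny sketch of my own (the logistic map rather than the n-body problem, purely because it fits in a few lines): every step is exactly determined by x_{n+1} = r*x_n*(1 - x_n), yet with r = 4 two starting points differing by 10^-10 end up nowhere near each other.

    def logistic(x, r=4.0, steps=60):
        # Iterate the logistic map: deterministic, non-linear, and chaotic for r = 4.
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = logistic(0.2)
    b = logistic(0.2 + 1e-10)
    print(a, b, abs(a - b))   # the two trajectories have completely diverged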

And I'm not sure what you understand non-computable to mean. It means that the computation will not halt in a finite amount of time for a general input; for a particular input, it may indeed halt in a finite amount of time.

Most real numbers are non-computable, although familiar constants like the square root of 2 or Pi are in fact computable.

Practically speaking, we can get approximations of those as close as we want. In other cases, such as the Busy Beaver function, we can at least set bounds.
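
As an illustration of "approximations as close as we want" (a sketch of my own, using Python's standard decimal module), here is sqrt(2) to any requested number of digits via Newton's method; the existence of such a procedure is exactly what makes it a computable number.

    from decimal import Decimal, getcontext

    def sqrt2(digits):
        """Approximate sqrt(2) to roughly `digits` significant digits."""
        getcontext().prec = digits + 5                # work with a few guard digits
        x, guess = Decimal(2), Decimal(1)
        for _ in range(digits.bit_length() + 4):      # Newton's method roughly doubles precision each step
            guess = (guess + x / guess) / 2
        getcontext().prec = digits
        return +guess                                 # unary + rounds to the final precision

    print(sqrt2(50))   # 1.4142135623730950488016887242096980785696718753769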


You're correct. I only have a very casual understanding of these things. For the non-linear thing, I just mean that for any advanced system, like a cellular system, there are, say, trillions of parameters, and even if you mapped them all in you couldn't be sure what the output would be.

    > And I'm not sure what you understand what non-computable means. It means that the computation will not halt in a finite amount of time for a general input. For a particular input, it may indeed halt in a finite amount of time.
Sounds familiar; the "halting problem"? I suppose I'm tying concepts together too loosely. Particular vs. general input is the same as simple vs. complex input above: given a complex enough input, the compute involved approaches boundless/infinite.

In practice, yes, as I understand it, modern science is all about stochastic approximations and for all intents and purposes it's quite reliable.

I probably should stop using "non-linear" terminology. I really just mean that it's not 1:1. You mention how systems can be deterministic, and I looked it up, and yes, wave function collapse specifically says:

    > The observable acts as a linear function on the states of the system
We can compute the possible states, but not the exact state. We can't predict the future.

Thanks for the reply, this is much more interesting to me as it approaches philosophy, so admittedly I too loosely throw words-that-mean-things around.


You are right, but I think at the moment a lot of people are confusing "software engineering" with "set up my react boilerplate with tailwind and unit tests", and AI is just way better at that sort of rote thing.

I've never felt comfortable with the devs who just want some Jira ticket with exactly what to do. That's basically what AI/LLMs can do pretty well.


Those people have always annoyed the hell out of me and I would prefer to not work with them.

What’s still not clear to me about this story is whether there was ever live human monitoring of shoppers. Did the low-confidence resolution occur in real time, at some point between the customer grabbing the item and getting their bill?

It wasn't real-time. Recorded events were entered into a queue and latency would vary depending on the size of the queue and the number of annotators.

What about the style bothers you? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?

It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors.

Nothing about this story is complex or interesting enough to require 1000 words to express.


Correct form and relevant citations have been, for generations up until a couple of years ago, mighty strong signals that a work is good and was done by a serious and reliable author. This is no longer the case, and we are worse off for it.

This is basically just the ethical framework philosophers call Contractarianism. One version says that an action is morally permissible if it is in your rational self-interest from behind the “veil of ignorance” (you don't know whether you are the actor or the actee).

This is why I like using mathematical or algorithmic approaches to solve difficult problems. Writing programs that use statistics, mathematics, optimization, analytical geometry, etc. guarantees a certain level of security from the swarms of CRUD merchants flooding the market.


I think it’s a pretty easy principle that machines are not people, and that people learning should be treated differently than machines learning.


You see this principle in privacy laws too.

I can be in a room looking at something with my eyeballs and listening with my ears perfectly legally... But it would not be legal if I replaced myself with a humanoid mannequin with a video camera for a head.


You can even write down what you are looking at and listening to, although in some cases dissemination of, e.g., verbatim copies in your writing could be considered copying.

But it is automatically copying if you use a copier.


Yes, I remember a friend who interned there a couple of times showed me that. One of them was “list comprehension python”, and the Google website would split in two and give you some really fun coding challenges. I did a few, and if you got 4(?) right you got a guaranteed interview, I think. I intended to come back and spend a lot of time on an additional one, but I never did. Oops.


I think I only did three or something, and I didn't hear back from them. Honestly, my view of Google is that they aren't as cool as they think they are. My current position allows me to slack off as much as I want, and it's hard to beat that, even if they offer more money (they won't in the current market).

