MadDemon's comments | Hacker News

Doesn't 50% heritability mean that it's a coin flip in a two-party country? So basically no heritability?


> coin flip

No, you have to consider the non-genetic, environmental factors that also influence the development of political ideology, specifically the households in which children are raised and the schooling and media to which they're exposed, all of which will increasingly become conservative.


That does not explain how mathematically your statement of 40%-60% heritability represents anything other than a coin flip.
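A toy simulation may clarify what the thread is arguing about (an illustrative model with assumed numbers: trait = equal parts additive genetic and environmental noise). Even at exactly 50% heritability, genotype alone predicts which side of the mean a trait falls on far better than a coin flip:

```python
import random

# Toy model: trait = genetic part + environmental part with equal variance,
# i.e. heritability = 0.5. All numbers below are illustrative assumptions.
random.seed(0)
n = 100_000
same = 0
for _ in range(n):
    g = random.gauss(0, 1)   # genetic contribution
    e = random.gauss(0, 1)   # environmental contribution
    y = g + e                # observed trait; genes explain 50% of variance
    if (y > 0) == (g > 0):   # does genotype alone predict the trait's sign?
        same += 1

print(same / n)  # ~0.75: well above a 0.5 coin flip
```

Heritability is a statement about variance explained in a population, not about per-individual prediction accuracy, which is why the two sides here talk past each other.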


Greece took on more debt than it could service. Do you expect taxpayers from other countries to just pay for that without significant changes to how Greece operates? If you can't pay your debts and you can't print your own currency, you lose some sovereignty. But I feel like Greece would have been worse off if it still had the drachma and tried to print its way out of the crisis.


Pretty much every single country in the world has taken on more debt than it can service. And the 2008 crisis wasn't triggered by Greece either. The private creditors should have taken the loss. And that holds true for the rest of the world.


If a debtor can't pay their debt, you don't just get to wipe out the bond holders. There is some kind of negotiation to try to restructure the debt and see how much the bond holders can still get. Simply wiping out the debt would create a terrible precedent with terrible consequences for the credibility of the whole eurozone. Who would want to lend money to an EU country if they just get wiped out when things get bad? It would've also had bad consequences for the financial system and potentially caused some institutions to go belly up. The way that countries typically get rid of their debt is by printing money to service it and thereby inflating it away. But that is obviously not popular among the remaining EU countries. It was always clear that the Euro comes with this constraint that you can't just inflate away your debt.
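The "inflate it away" mechanism is simple compounding arithmetic (toy numbers, not Greece's actual figures):

```python
# Toy numbers: a fixed nominal debt of 100 under steady 5% inflation
# for ten years loses almost 40% of its real value.
debt_nominal = 100.0
inflation = 0.05
years = 10

real_value = debt_nominal / (1 + inflation) ** years
print(round(real_value, 1))  # ~61.4
```

This is exactly the lever a country gives up by adopting a currency it cannot print.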


The high voltage DC transmission lines from north to south are being built right now and for example SuedLink is expected to be operational in 2028. Their transmission capacity will be more than enough. Why would you split Germany into electricity zones now, if in a few years the transmission problem will largely be fixed?


LLMs and their capabilities are very impressive and definitely useful. The productivity gains often seem to be smaller than intuitively expected though. For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it, but the time savings are often not that relevant for your overall productivity. Before LLMs you were usually already able to find the information quickly and even a 10x speed up does not really have too much of an impact on your overall productivity, because the time it took was already negligible.
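The point about negligible time savings is essentially Amdahl's law; a quick sketch with assumed numbers:

```python
def overall_speedup(f, s):
    """Amdahl's law: f = fraction of total time affected, s = local speedup."""
    return 1 / ((1 - f) + f / s)

# If looking things up is only 5% of your working time, a 10x faster
# lookup speeds up the whole day by under 5%:
print(overall_speedup(0.05, 10))  # ~1.047
```

Even an infinite speedup on that 5% slice caps the overall gain at about 1.053x.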


> For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it, but the time savings are often not that relevant for your overall productivity. Before LLMs you were usually already able to find the information quickly and even a 10x speed up does not really have too much of an impact on your overall productivity, because the time it took was already negligible.

I'd even question that. The pre-LLM solutions were in most cases better. Searching a maintained database of curated and checked information is far better than LLM output (which is possibly bullshit).

Ditto for software engineering. In software, we have things called libraries: you write the code once, test it, then you trust it and can use it as many times as you want forever for free. Why use LLM-generated code when you have a library? And if you're asking for anything complex, you're probably just getting a plagiarized and bastardized version of some library anyway.

The only thing where LLMs shine is a kind of simple, lazy "mash this up so I don't have to think about it" case. And sometimes it might be better to just do it yourself and develop your own skills instead of using an LLM.


One advantage with LLMs is that they are often more able to find things that you can roughly explain but don't know the name of. They can be a good take-off point for wider searches.


I was able to find a decade-old video via an LLM with the prompt "YouTube video of a french band on a radio station with a girl wearing orange jumpsuit". I had tried many Google searches without success trying to remember the video, but the LLM came right out with the correct video of The Dø on KEXP on the first try. 99 times out of 100 I prefer normal search though.


Why not have the LLM generate the library?


Because every time you run the LLM it will generate a new library, with new and surprising bugs.

It's better to take an existing, already curated and tested library. Which, yes, may have been generated by an LLM, but has been curated beyond the skill of the LLM.


The value of a library is not in the code it’s in the operations. It’s been curated and tested by multiple people in multiple environments. One of the ways to code at a high level is to delegate the cognitive load.

If you really can one-shot it and it's simple (left-pad), great. But most things aren't that simple; by the third time you have to think about it, it's probably a net loss.


If only search engine AI output didn't constantly hallucinate nonexistent APIs, it might be a net productivity gain for me... but it's not. I've been bitten enough times by their false "example" output for it to be a significant net time loss vs using traditional search results.


Gemini hallucinated a method on a rust crate that it was trying to use and then spent ten minutes googling 'method_name v4l2 examples' and so on. That method doesn't exist and has never existed; there was a property on the object that contained the information it wanted, but it just sat there spinning its wheels convinced that this imagined method was the key to its success.

Eventually it gave up and commented out all the code it was trying to make work. Took me less than two minutes to figure out the solution using only my IDE's autocomplete.

It did save me time overall, but it's definitely not the panacea that people seem to think it is and it definitely has hiccups that will derail your productivity if you trust it too much.


My favorite with ChatGPT is:

"Tell me how to do X" (where X was, for one recent example, creating a Salt stanza to install and configure a service).

I do as it tells me, which seems reasonable on the face of it. But it generates an error.

"When creating X as you described, I get error: Z. Why?"

"You're absolutely correct and you should expect this error because X won't work this way. Do Y instead."

Gah... "I told you to do X, and then I'm going to act like it's not a surprise that X doesn't work and you should do something else."


You're absolutely right


it's not just that you are absolutely correct but you are also absolutely right


It's even worse when an LLM eats documentation for multiple versions of the same library and starts hallucimixing methods from all versions at the same time. Certainly unusable for some libraries which had a big API transition between versions recently.


The library that this happened to me repeatedly on was AWS' CDK, which did have a large delta between v1 to v2, so that may help explain it.


Using ChatGPT and phrasing it like a search seems like a better way? “Can you find documentation about an API that does X?”


It will often literally just make up the documentation.

If you ask for a link, it may hallucinate the link.

And unlike a search engine where someone had to previously think of, and then make some page with the fake content on it, it will happily make it up on the fly so you'll end up with a new/unique bit of fake documentation/url!

At that point, you would have been way better off just... using a search engine?


How is it hallucinating links? The links are direct links to the web pages that were vectorized or whatever as input to the LLM query. In fact, on almost all LLM responses on DuckDuckGo and Google, the links are right there as cited sources that you can click on (I know because I'm almost always clicking on the source link to read the original details, not the made-up one).


I would imagine links can be hallucinated because the original URLs in the training data get broken up into tokens - so it's not hard to come up with a URL that has the right format (say https://arxiv.org/abs/2512.01234, which looks like a real paper URL but I just made it up) and a plausible-sounding title.
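To illustrate that point (a sketch; the helper name is hypothetical): a hallucinated link can be perfectly well-formed, and well-formedness says nothing about whether the page exists.

```python
import re

def looks_like_arxiv_abs(url: str) -> bool:
    """Hypothetical helper: checks only that a URL *looks like* an arXiv
    abstract link (new-style YYMM.NNNNN id); it says nothing about existence."""
    return re.fullmatch(r"https://arxiv\.org/abs/\d{4}\.\d{4,5}", url) is not None

# A made-up URL passes the format check just fine:
print(looks_like_arxiv_abs("https://arxiv.org/abs/2512.01234"))  # True
print(looks_like_arxiv_abs("https://example.com/abs/2512.01234"))  # False
```

Actually verifying a reference means fetching it, which is exactly the step a plain next-token prediction skips.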


Yeah, but the current state of ChatGPT doesn’t really do this. The comment you’re replying to explains why URLs from ChatGPT generally aren’t constructed from raw tokens.


You are absolutely right! The current state of ChatGPT was not in my training data.


How do you explain it then, when it spits out the link, that looks like it surprisingly contains the subject of your question in the URL, but that page simply doesn't exist and there isn't even a blog under that domain at all?


Near as I can tell, people just don't actually check and go off what it looks like it's doing. Or they got lucky, and when they did check once it was right. Then they assume it will always be right.

Which would certainly explain things like hallucinated references in legal docs and papers!

The reality is that for a human to make up that much bullshit requires a decent amount of work, so most humans don’t do it - or can’t do it as convincingly. LLMs can generate nigh infinite amounts of bullshit for cheap (and often more convincing sounding bullshit than a human can do on their own without a lot of work!), making them perfect for fooling people.

Unless someone is really good at double-checking things, it's a recipe for disaster. Even worse, doing the right amount of double-checking often makes it even more exhausting than just doing the work yourself in the first place.


I’ve used Claude code to debug and sometimes it’ll say it knows what the issue is, then when I make it cite a source for its assertions, it will do a web search and sometimes spit out a link whose contents contradict its own claim.

One time I tried to use Gemini to figure out 1950s construction techniques so I could understand how my house was built. It made a dubious sounding claim about the foundation, so I had it give me links and keywords so I could find some primary sources myself. I was unable to find anything to back up what it told me, and then it doubled down and told me that either I was googling wrong or that what it told me was a historical “hack” that wouldn’t have been documented.

These were both recent and with the latest models, so maybe they don’t fully fabricate links, but they do hallucinate the contents frequently.


> maybe they don’t fully fabricate links

Grok certainly will (at least as of a couple months ago). And they weren't just stale links either.


After getting beaten for telling the truth so frequently, who wouldn’t start lying?


I haven't seen this happen in ChatGPT thinking mode. It actually does a bunch of web searches and links to the results.


The real benefit for a search engine is to rework and launder other people's information and make it its own information.

Now instead of the wikipedia article you are reading the exact same thing from google's home page and you don't click on anything.


I think you're underestimating how many people don't know how to properly search on google (i.e. finding the proper keywords, selecting the reputable results, etc etc). Those are probably also the same people that will blindly believe anything a LLM says unfortunately.


True, I do not know how to properly search something on google.com in 2025. I only know how to do it on startpage.com in 2025, kagi.com in 2025 or google.com in 2015.


LLM output is quickly rendering google search unusable, so it's kind of creating its own speedup multiplier.


It really depends on what 'XYZ' is and how many hoops you need to jump through to get to the answer. ChatGPT gets information from various places and gives you the answer as well as the explanation at each step. Without tools like ChatGPT it's definitely not negligible in a lot of cases.


I use ChatGPT “thinking” mode as a way to run multiple searches and summarize the results. It takes some time, but I can do other stuff in another tab and come back.

It’s for queries that are unlikely to be satisfied in a single search. I don’t think it would be a negligible amount of time if you did it yourself.


But for large searches, I then have to spend a lot of time validating the output - which I'd normally do while reading the content etc as I searched (discarding dodgy websites etc).

On the other hand, where I think llms are going to excel, is you roll the dice, trust the output, and don't validate it. If it works out yayy you're ahead of everyone else that did bother to validate it.

I think this is how vibe coded apps are going to go. If the app blows up, shut down the company and start a new one.


I find Gemini to be the most consistent at actually using the search results for this, in "Deep Research" mode


This is the way. I do it the same way for development. The main point is I can run multiple tasks in parallel (myself + LLM(s)).

I let Claude and ChatGPT type out code for me, while I focus on my research


This is partly because Google is past the enshittification hump and ChatGPT is just starting to climb up it - they just announced ads.


This. And the wonderful thing about LLMs is that they can be trained to bend responses in specific directions, say toward using Oracle Cloud solutions. There's fertile ground for commercial value extraction that goes far beyond ads. Think of it as product placement on steroids.


You don't even need training — you can add steering vectors in the middle of the otherwise-unmodified computation. Remember Golden Gate Claude?


> they just announced ads

Wondering how that is going to work when they "search the web" to get the information - are they essentially going to take ad revenue away from the source website?


Not to be a dick, but enshittification is not a hump you get past, it's a constant climb until the product is abandoned. Did you just mean growing pains?


The difference is that in the past that information had to come from what people wrote and are writing about, and now it can come from a derivative of an archive of what people once wrote, upon a time. So if they just stop doing that — whether because they must, or because they no longer have any reason to, or because they are now drowned out in a massive ocean of slop, or simply because they themselves have turned into slopslaves — no new information will be generated, only derivative slop, milled from derivative slop.

I think we all understand that at this point, so I question deeply why anyone acts like they don’t.


That makes me think about the development of much software out there: the development time is often several orders of magnitude smaller than its life cycle.


> For example, using ChatGPT to get a response to a random question like "How do I do XYZ" is much more convenient than googling it

More convenient than traditional search? Maybe. Quicker than traditional search? Maybe not.

Asking random questions is exactly where you run into time-wasting hallucinations since the models don't seem to be very good at deciding when to use a search tool and when just to rely on their training data.

For example, just now I was asking Gemini how to fix a bunch of Ubuntu/Xfce annoyances after a major upgrade, and it was a very mixed bag. One example: the default date and time display is in an unreadably small "date stacked over time" format (using a few-pixel-high font so this fits into the menu bar), and Gemini's advice was to enable the "Display date and time on single line" option ... but there is no such option (it just hallucinated it), and it also hallucinated a bunch of other suggestions until I finally figured out that what you need to do is configure it to display "Time only" rather than "Date and Time", then change the "Time" format to display both date and time! Just to experiment, I then told Gemini about this fix and amusingly the response was basically "Good to know - this'll be useful for anyone reading this later"!

More examples, from yesterday (these are not rare exceptions):

1) I asked Gemini (generally considered one of the smartest models - better than ChatGPT, and rapidly taking away market share from it - a 20% shift in the last month or so) to look at the GitHub codebase for an Anthropic optimization challenge, to summarize and discuss it, etc. It appeared to have looked at the codebase - until I got more into the weeds and questioned it about where it got certain details from (what file), and it became apparent that it had some (search-based?) knowledge of the problem but seemingly hadn't actually looked at the code (wasn't able to?).

2) I was asking Gemini about chemically fingerprinting (via impurities, isotopes) roman silver coins to the mines that produced the silver, and it confidently (as always) came up with a bunch of academic references that it claimed made the connection, but none of the references (which did at least exist) actually contained what it claimed (only partial information), and when I pointed this out it just kept throwing out different references.

So, it's convenient to be able to chat with your "search engine" to drill down and clarify, etc, but a big time waste if a lot of it is hallucination.

Search vs Chat has anyway really become a difference without a difference since Google now gives you the "AI Overview" (a diving-off point into "AI Mode"), or you can just click on "AI Mode" in the first place - which is Gemini.


> I asked Gemini (generally considered one of the smartest models

Everyone is entitled to their own opinion, but I asked ChatGPT and Claude your XFCE question, and they both gave better answers than Gemini did (imo). Why would you blindly believe what someone else tells you over what you observe with your own eyes?


I'm curious what was your Claude prompt? I used to use Claude a lot more, but the free tier usage limits are very low if you use it for coding.


Another reason search vs chat has become a difference without a difference is that search results are full of highly-ranked AI slop. I was searching yesterday for a way to get a Gnome-style hot corner in Windows 11, and the top result falsely asserted that hot corners were a built-in feature, and pointed to non-existing settings to enable them.


You're overestimating the mean person's ability to search the web effectively.


And perhaps both are overestimating the mean person's ability to detect a hallucinated solution vs a genuine one.


I think hallucination is grossly overstated as a problem at this point, most models will actively search the web and reason about the results. You're much more likely to get the incorrect solution browsing stack overflow than you are asking AI.


Gemini hallucinated a method name in a rust crate then spent several minutes googling the method name + 'rust example' trying to find documentation about the method it made up. Unsurprisingly it didn't find any, and then it just gave up and commented out the entire function and called it done.


Comparing the free tier of Gemini to the latest premium coding models will give you drastically different results.


The difference is LLMs let you "run Google" on your own data with copy paste. Which you could not do before.

If you're using ChatGPT like you use Google then I agree with you. But IMO comparing ChatGPT to Google means you haven't had the "aha" moment yet.

As a concrete example, a lot of my work these days involves asking ChatGPT to produce me an obscure micro-app to process my custom data. Which it usually does and renders in one shot. This app could not exist before I asked for it. The productivity gains over coding this myself are immense. And the experience is nothing like using Google.


It's great for you that you were able to create this app that wouldn't otherwise exist, but does that app dramatically increase your overall productivity? And can you imagine that a significant chunk of the population would experience a similar productivity boost? I'm not saying that there is no productivity gain, but big tech has promised MASSIVE productivity gains. I just feel like the productivity gains are more modest for now, similar to other technologies. Maybe one day AGI comes along and changes everything, but I feel like we'll need a few more breakthroughs before that.


there have been various solutions that allow you to "run Google" on your own data for quite a while, what is the "aha" moment related to that?


By "run Google" I don't mean "index your data into a search engine". I mean the experience of being able to semantically extract and process data at "internet scale", in seconds.

It might seem quaint today but one example might be fact checking a piece of text.

Google effectively has a pretty good internal representation of whether any particular document concords with other documents on the internet, on account of massive crawling and indexing over decades. But LLMs let you run the same process nearly instantly on your own data, and that's the difference.


But before I needed to be a programmer or have a team of data analysts analyze the data for me, now I can just process that data on my own and gather my own insights. That was my aha moment.


Liechtenstein is as much a monarchy as Britain is. It probably falls more in the direct democracy bucket. Also, the GDP per capita figures for these tiny countries are very misleading because you can have a situation where more than half the workforce is commuting into the country every day for work. They increase the GDP but don't count in the capita part.
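A toy calculation (assumed numbers, not Liechtenstein's actual statistics) shows how cross-border commuters inflate the per-capita figure:

```python
# Toy numbers: commuters add to GDP but not to the resident population
# that the per-capita figure is divided by.
residents = 40_000
resident_workers = 20_000            # say half of residents work
cross_border_commuters = 25_000      # counted in GDP, not in "per capita"
output_per_worker = 150_000          # identical productivity, for illustration

gdp_without = resident_workers * output_per_worker
gdp_with = gdp_without + cross_border_commuters * output_per_worker

print(gdp_without / residents)  # 75000.0
print(gdp_with / residents)     # 168750.0 - more than double, same residents
```

GDP per worker would be a fairer comparison for such countries, but it is rarely the headline number.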


The food system in Europe works pretty well.


> The food system in Europe works pretty well.

"Suicide Among Farmers in France: Occupational Factors and Recent Trends":

* https://pubmed.ncbi.nlm.nih.gov/27409004/

"Under Pressure: Suicides Among Farmers in Austria and Germany":

* https://www.journalismfund.eu/suicides-farmers-austria-and-g...

"Mental Health Risks to Farmers in the UK":

* https://committees.parliament.uk/writtenevidence/43055/pdf/

And Europe's policies affecting other places, "Stop the Dumping! How EU agricultural subsidies are damaging livelihoods in the developing world":

* https://policy-practice.oxfam.org/resources/stop-the-dumping...


Define "works pretty well".

If your only expectation is that it provides enough calories for your population, you are absolutely right. If you have a look at the bigger picture, the issues are plentiful. On the producer side, farmers are operating at relatively thin margins which encourages consolidation and unsustainable farming practices. This in turn leads to extensive soil degradation and fertilizer use, which is unsustainable - both financially and ecologically.

On the consumer side, people are becoming more overweight (which cannot be exclusively attributed to the food system, but diet of course plays a significant role). Food is becoming more expensive and lower quality. Food waste is also still a major problem.

Many issues are shared between the US and the European food system, although they may not be as extreme as in the US. However, it does not feel like there is actual political will to steer the ship in a different direction.


Las Vegas is actually very efficient with their water use.


Being efficient in watering golf courses in the desert is certainly nice, but maybe it's time to question having over 50 golf courses in the desert with an impending massive water shortage.


That's definitely questionable, but a drop in the bucket. Irrigation for all the farms in the desert is using vastly more water.


Maybe, but the southwest of the US uses more water than it has and can import. With droughts and overconsumption, the water supply is at risk. See https://en.wikipedia.org/wiki/Southwestern_North_American_me...


Nevada only uses a small percentage of the Colorado river water (https://en.wikipedia.org/wiki/Colorado_River_Compact). Most of the water is used for farming in the desert.


LOL. Las Vegas water prices are ridiculously low for the paltry amount of water they have. It's hard to get people to not waste the water when the price is artificially kept low.

Las Vegas water is less expensive than mine, and we have in excess of 10x the precipitation and everything is naturally green.


They seem to be able to survive with the small amount of water that is allocated to them from the Colorado river.


The Brenner base tunnel is still under construction.


Switzerland has such close ties to the EU that I would consider them half in.


Do you have something to back up this claim?

