I wrote this before to another comment like yours:
I thought this part, penalizing mistakes made with the help of LLMs more heavily, was quite ingenious.
If you have this great resource available to you (an LLM) you better show that you read and checked its output. If there's something in the LLM output you do not understand or check to be true, you better remove it.
If you do not use LLMs and just misunderstood something, you will have a (flawed) justification for why you wrote this. If there's something flawed in an LLM answer, the likelihood that you do not have any justification except for "the LLM said so" is quite high and should thus be penalized higher.
One shows a misunderstanding, the other doesn't necessarily show any understanding at all.
I thought this part especially was quite ingenious.
If you have this great resource available to you (an LLM) you better show that you read and checked its output. If there's something in the LLM output you do not understand or check to be true, you better remove it.
If you do not use LLMs and just misunderstood something, you will have a (flawed) justification for why you wrote this. If there's something flawed in an LLM answer, the likelihood that you do not have any justification except for "the LLM said so" is quite high and should thus be penalized higher.
One shows a misunderstanding, the other doesn't necessarily show any understanding at all.
>If you have this great resource available to you (an LLM) you better show that you read and checked its output. If there's something in the LLM output you do not understand or check to be true, you better remove it.
You could say the same about what people find on the web, yet LLMs are penalized more than web search.
>If you do not use LLMs and just misunderstood something, you will have a (flawed) justification for why you wrote this. If there's something flawed in an LLM answer, the likelihood that you do not have any justification except for "the LLM said so" is quite high and should thus be penalized higher.
Swap "LLMs" for "websites" and you could say the exact same thing.
The author has this in their conclusions:
>One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.
This is not true. What is true is that if the students are held more accountable for their use of LLMs than for their use of websites, they prefer using websites. How much "more"? We have no idea; the author doesn't say. It could be that an error from a website or your own mind costs -1 point and an error from an LLM costs -2, so the LLM has to make half as many mistakes as a website or your own reasoning just to break even. It could be -1 and -1.25. It could be -1 and -10.
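A toy calculation makes the asymmetry concrete. The penalty weights and error rates below are invented examples, not numbers from the article:

```python
# Hypothetical illustration: how an asymmetric penalty changes the
# break-even error rate for LLM use. All numbers are made up.

def expected_penalty(error_rate: float, penalty_per_error: float, n_answers: int = 10) -> float:
    """Expected points lost over n_answers."""
    return error_rate * penalty_per_error * n_answers

# Suppose a web-sourced mistake costs 1 point and an LLM-sourced mistake costs 2.
web = expected_penalty(error_rate=0.10, penalty_per_error=1)        # 1.0 points lost
llm_same_accuracy = expected_penalty(0.10, penalty_per_error=2)     # 2.0 points lost
llm_break_even = expected_penalty(0.05, penalty_per_error=2)        # 1.0: the LLM must halve its error rate

print(web, llm_same_accuracy, llm_break_even)
```

Under a -1/-2 weighting the LLM must be twice as accurate just to match the web; under -1/-10 it must be ten times as accurate, which makes "just don't use it" the rational choice.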
The author even says themselves:
>In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.
But they don't note the bias they introduced against LLMs with their notation.
I think the professor here presented them with a "special" case which cannot be generalized outside of the exam context.
If you're presented with the choice of "Don't use AI" and "Use AI, but live with the consequences" (consequences like mistakes being judged harsher when using AI than when not using AI), I do not think chatbots will be a desirable choice if you've properly prepared for the exam.
"Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated."
I think that's a general guideline to identify "propaganda", regardless of the source. I've seen people in person write such statements with their own hands/fingers, and I know many people who speak like that (shockingly, most of them are in management).
Lots of those points seem to converge on the same idea, which seems like a good balance. It's the language itself that is problematic, not how the text came to be, so it makes sense to target the language of the text directly.
Hopefully those guidelines make all text on Wikipedia better, not just LLM-produced text, because they seem like generally good guidelines even outside the context of LLMs.
Wikipedia already has very detailed guidelines on how text on Wikipedia should look, which address many of these problems.[1] For example, take a look at its advice on "puffery"[2]:
"Peacock example:
Bob Dylan is the defining figure of the 1960s counterculture and a brilliant songwriter.
Just the facts:
Dylan was included in Time's 100: The Most Important People of the Century, in which he was called "master poet, caustic social critic and intrepid, guiding spirit of the counterculture generation". By the mid-1970s, his songs had been covered by hundreds of other artists."
Right, but unless you have a specific page about "This is how to treat AI texts", people will (if they haven't already) bombard you with "This text is so obviously AI written, do something". With a specific page to answer those, you can just link it instead of the general "Here's how text on Wikipedia should look" guidelines. Being more specific sometimes helps people understand better :)
A good place to see this pre-2022 (the AI epoch) is articles on lesser-known bands from the late 2000s, when Wikipedia was becoming more popular. Quite a few of them turn out to be copy/paste promo text. I know this because I did webdev work for that industry, and when I look up those bands on Wikipedia I recognize the text as text that I personally had to paste into a bio page 20 years ago. Since the bands aren't well known, nobody reports it (I admit I'm too lazy)
The real tell on those tends to be weirdly time-specific claims that tend to be wildly outdated ("currently touring with XYZ")
To me that suggests we're making a mistake by mixing fiction and non-fiction in AI training data. The "a revolutionary titan of industry" phrasing makes sense if you were reading a novel, where something like 90% of a book is describing the people, locations, objects and circumstances. The author of a novel would want to use exaggeration and more colourful words to underscore a uniquely important person, but "this week in trains" would probably de-emphasize the person and focus on the train-coupler.
Fiction is part of our shared language and culture. We communicate by making analogies, and our stories, especially our old ones, provide a rich basis to draw upon. Neither a person nor an LLM can be a fluent user of human language without spending time learning from both fiction and non-fiction.
The funny thing is that this also appears in bad human writing. We would be better off if vague statements like this were eliminated altogether, or replaced with less fantastical but verifiable statements. If that means nothing of the article is left, then we have killed two birds with one stone.
I'm thinking quite a bit about this at the moment in the context of foundational models and their inherent (?) regression to the mean.
Recently there has been a big push into geospatial foundation models (e.g. Google AlphaEarth, IBM Terramind, Clay).
These take in vast amounts of satellite data and use the usual autoencoder architecture to try to build embedding spaces that contain meaningful semantic features.
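A minimal sketch of that objective: compress patches into a low-dimensional embedding by minimizing reconstruction error. The real models use deep (often transformer-based) encoders on multi-band imagery; this tied-weight linear autoencoder in NumPy on random stand-in data only illustrates the idea:

```python
# Sketch only: a linear autoencoder with tied weights, trained by gradient
# descent on reconstruction error. The data is random noise standing in for
# flattened satellite image patches; nothing here reflects a real model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))   # stand-in for 1000 flattened image patches
X -= X.mean(axis=0)               # center the data

d_embed = 8                                       # embedding dimension
W = rng.normal(scale=0.01, size=(64, d_embed))    # tied encoder/decoder weights

lr = 0.05
for _ in range(200):
    Z = X @ W                # encode: one 8-d embedding per patch
    X_hat = Z @ W.T          # decode: reconstruct the patch
    err = X_hat - X
    # gradient of ||X_hat - X||^2 w.r.t. the tied weight matrix W
    grad = X.T @ err @ W + err.T @ X @ W
    W -= lr * grad / len(X)

Z = X @ W
print(Z.shape)  # (1000, 8)
```

The embedding Z is what downstream tasks (segmentation, classification) would then build on; the benchmarks mentioned below measure how useful those embeddings actually are.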
The issue at the moment is that in the benchmark suites (https://github.com/VMarsocci/pangaea-bench), only a few of these foundation models have recently started to surpass the basic U-Net in some of the tasks.
There's also an observation by one of the authors of Major-TOM (which also provides satellite input data for training such models) that the usual scaling law does not seem to hold for geospatial foundation models: more data does not seem to result in better models.
My (completely unsupported) theory on why that is: unlike writing or coding, with satellite data you are often looking for the needle in the haystack. You do not want what has been done thousands of times before and was proven to work. Segmenting out forests and water? Sure, easy. These models have seen millions of examples of forests and water. But most often we are interested in things that are much, much rarer: flooding, wildfires, earthquakes, landslides, destroyed buildings, new airstrips in the Amazon, etc. As I see it, the currently used frameworks do not support that very well.
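The needle-in-a-haystack point can be made concrete with a toy example: under a pixel-wise objective, a model that never predicts the rare class still scores extremely well. The 0.1% flood frequency below is an invented number for illustration:

```python
# Toy illustration: average-case metrics hide rare-event performance.
# Class frequency is made up; real flood extents vary by region and event.
import numpy as np

rng = np.random.default_rng(1)
n_pixels = 1_000_000
p_flood = 0.001                          # rare event: 0.1% of pixels

labels = rng.random(n_pixels) < p_flood              # True = flooded pixel
always_background = np.zeros(n_pixels, dtype=bool)   # "model" that ignores floods

accuracy = (always_background == labels).mean()
recall_flood = (always_background & labels).sum() / max(labels.sum(), 1)

print(f"accuracy={accuracy:.4f}, flood recall={recall_flood:.1f}")
```

The degenerate model is ~99.9% accurate while detecting zero flooded pixels, which is one way a reconstruction- or accuracy-driven training signal can regress toward the common classes and away from exactly the things we care about.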
But I'd be curious how others see this, who might be more knowledgeable in the area.