
In other words, it disrupts classroom assignments where the student is being asked to produce bullshit but where historically the easiest way to produce that bullshit was via a process that was supposedly valuable to the student's education. The extent to which a teacher cannot distinguish a human-generated satisfactory essay from an essay generated by a bullshit generator is by definition precisely the extent to which the assignment is asking the student to generate bullshit. This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.


The point of school work isn't to generate something of value externally, it's to generate understanding in the student. Much like lifting weights or running are "bullshit" activities. You haven't seen bullshit until you see whatever is produced by a society with a voracious appetite that runs on "gimme what I want" buttons and a dismal foundation of understanding.

The students bullshitting their way through course work are like the lifters using bad form to make the mechanical goal of moving the weight easier in the short term. They completely miss the point.


That's exactly what I'm saying. It's reasonable for teachers to give assignments where the resulting artifact is not itself of direct value to the subject matter, but where the only feasible way for the student to produce the artifact is via a process that is supposed to be of educational value. I'm just saying that when new ways emerge to produce that artifact which don't have the same supposed educational value to the student, the assignment itself needs to change.

I don't think your analogy to lifting weights or running is appropriate. A more appropriate analogy would be a physical education teacher who says "your assignment is to meet me on the other side of town in 30 minutes" because the only feasible way to get there is to run. Obviously the point is to get the physical activity, not to be on the other side of town. Then the bicycle gets invented, and suddenly it requires much less strenuous activity to get across town. All I'm saying is that it's vital for the teacher to realize that being on the other side of town was never actually the point, and to change the assignment so that students still get the intended amount of physical activity.


It's more like "your assignment is to run to the other side of town in 30 minutes", but due to resource constraints, it's not easy or possible to monitor you the entire way. It's also not possible to redesign the task without constant monitoring of everyone.


I would argue that if the student doesn't understand that the value of the exercise is to run, and they take their car instead, then something needs to be explained to them.

There are two ways: you can try to design exercises that students cannot abuse, or you can try to have the students actually understand why they should do the exercise and not abuse it.


Students are often thinking of how to maximise their grades and efficiently use their time in the short term rather than how to best learn the subject matter. Anyone who has had to deal with assignment deadlines can understand why a student may choose to take such a shortcut if it's available, even to their own detriment in the long term.


Implying anything learned in high school is of benefit long term… ha.

Someone recently described their high school experience as “slightly better than prison.” I agreed. If you want to fix education, perhaps start at the root of where we force kids to do bullshit work.

Teaching them how to do their taxes would be infinitely more valuable than memorizing pointless geographical locations (solved by google maps) or names of dead white people in history class (also solved by google) or forcing them to learn Spanish (google translate). The experience was so unpleasant that I didn’t realize until my 20’s that I actually love studying history.


I don't completely agree. I mean it should not feel like prison, and putting pressure with grades is not necessarily good.

But I don't think that school is about learning useful stuff. There is plenty of time to learn a boring job/how to do taxes after school. School should be an opportunity to learn how to learn, and to discover new things.

Just because you don't need it in your job later doesn't mean it's bullshit.


Sure. I'm not a big fan of grades, to be honest. But that's a hard problem: some students will be better with grades, some without, and some won't care.

Still I don't believe that the goal in school is to learn useful stuff. The goal is to learn how to learn, and to discover stuff. There is plenty of time then to learn how to do a boring job.


>It's reasonable for teachers to give assignments where the resulting artifact is not itself of direct value to the subject matter, but where the only feasible way for the student to produce the artifact is via a process that is supposed to be of educational value.

Reminds me of this scene from A River Runs Through It.

https://www.youtube.com/watch?v=gA-sEfXOaEQ&ab_channel=Grady...


I suppose a human teacher not being able to come up with an assignment not easily solvable by computers is a new level of the Turing test.

Therefore the assignment itself must include a requirement not to use a computer.


Agreed, relying on ChatGPT to do work is like using Chegg for answers. We can tell ourselves, "oh yeah, I could write that no problem, or I can just use the same method later on..." until we can't.

It's called learning without a crutch, aka learning. (Obviously I'll do things like use CMake on large projects, but I've forced myself to write Makefiles by hand for exactly this reason.)
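For what it's worth, the "by hand" version is small enough to be worth the exercise. A minimal Makefile for a hypothetical two-file C project might look something like this (all names here are illustrative, not from the original comment):

```make
# Hypothetical two-file C project; file and target names are made up.
CC     = cc
CFLAGS = -Wall -O2
OBJS   = main.o util.o

app: $(OBJS)
	$(CC) $(CFLAGS) -o app $(OBJS)   # recipe lines must start with a tab

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@      # $< = the source file, $@ = the target

clean:
	rm -f app $(OBJS)

.PHONY: clean
```

Writing the targets and dependency rules yourself is exactly the bookkeeping that CMake automates away, which is the point of doing it by hand at least once.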


But most people today aren’t learning the times tables, and long division. They just use a calculator.

Most young people don't know how to write by hand anymore, cursive or whatever. They rely on spellcheck and grammar checks. I'm talking about native US citizens.

So why would they know how to compose an essay? They just text short form things. Even “important” business people fire off misspelled emails with bad grammar as a status symbol, so now ChatGPT will be the status symbol. Putting effort into making a properly worded email actually shows lower status in business negotiations — because if you can’t be bothered to spend time, you are a bigshot with many deals and can walk away at any moment.


"But most people today aren’t learning the times tables, and long division. They just use a calculator."

Let me fix that for you:

Most people learn the times table, and long division. Then they just use a calculator.

The American 5-paragraph essay is an extraordinarily artificial academic construct, completely divorced from the real world. It aims to be systematically taught and systematically graded, during the developing stages of education where it is often difficult to get students to express any written thoughts at all.

Using it outside of that context is a mistake. Composing a real-world essay is an exercise in considering an audience, preparing suitable arguments, making points more or less firmly, and hopefully providing some value.


I can learn long-form division by writing with a pencil. Is the pencil a crutch? Do I have to make my own pencil? Should I learn to divide without needing a writing utensil?

Of course, we can't take ChatGPT for granted like a pencil, because it's controlled by a third party and requires resources out of reach (for now). But it might become trivially cheap one day.


Simply, yes, it is a crutch for your mental math skills.

Should you do any of those follow-ups? Seems like a judgement call. In my experience strengthening those fundamentals is important, but there are many occasions where the point of the exercise (such as learning to write legible long-form algebra) overshadows other possible benefits - making a crutch such as pencil and paper, or a computer typing system, the appropriate call to make.

Please, when you pose 'absurd' hypotheticals, follow them to likely or reasonable conclusions. If you shut your brain off when the conclusion seems obvious, but before evaluating it, you will frequently end up just asking reasonable (if somewhat obvious) questions.


There's clearly some limit to this line of thought though. Maybe it's valuable to know how to write your own make file. But have you built your own microprocessor by soldering together transistors? You have?! That's awesome! Have you built your own soldering iron?


Indeed! If you wish to make an apple pie from scratch, you must first invent the universe.


Yes, thank you! That is what the latter half of the first paragraph is about - we make judgement calls about the tradeoffs involved with a crutch all the time.

The point isn't to accept a heuristic that points one way or another, but to be conscious that it is a choice we can make, with various trade-offs along a spectrum. As usual, figuring out what trade-offs to make is highly dependent on what your goals are.


The point I was aiming for was that learning with a crutch is still learning. I interpreted Avicebron's comment as "real programmers code in binary" and you are "unworthy" if you can build a CMake system without having written a Makefile manually in your life.

But after your comment, a lot of what I see around me are crutches. The keyboard I type on, the heated building I'm in. They really help me perform "normal" functions, and living without them would be a handicap, so they are crutches.

But I see no point in purposely living without them. Live your life, and don't force yourself to do something you don't like unnecessarily.


Yes, they are! Nothing wrong with that. A crutch is a tool, and like other tools we will have varying levels of dependence on it/them.

Where the line for necessary/unnecessary dependence lies is a judgement call.


Plato famously thought writing was a crutch, because people no longer had to memorize everything they learned.

Why bother with the effort of learning anything if you can just be lazy and look it up in a book!


I'm getting pretty burned out on hearing this analogy. It seems to stem from a basic confusion between tools and toys. Tools don't constrain your use by dictating output; they help you speed up solving new problems. Toys create environments that shoehorn you into their prefabricated set of solutions. The feeling of gratification from making something nice with a toy is a placebo for the gratification of solving a real problem, and a big part of the drag we're experiencing on creative problem solving these days is the replacement of free-form solving with puzzle-solving. Puzzles, like AI prompts, have solutions that exist in advance in a latent space, and their discovery promotes a form of gamification but doesn't ever till any fresh soil. Being great at playing a game can never approximate being halfway good at writing a game. Discovering Will Wright's philosophy embedded in the best way to play SimCity should only be a step on the road to writing your own SimCity, not an end in itself. SimCity is a toy, not a tool, despite its being a fun stepping stone for future urban planners. One would never want urban planners to rely on it to make their decisions, because deep assumptions are baked into it. Same with ChatGPT. Many, many assumptions based on a snapshot of the world are baked into its output.

Whereas a pencil has no assumptions whatsoever, and no bias towards what you'll do with it, and offers no suggestions that might funnel you in some direction. That is what good tools are like.

[edit] and I mean, how often do we hear that a tool or framework is too opinionated? What's meant by that is that the tool constrains your ability to solve problems with it based on its pre-baked gestalt. Extrapolating the inherent limiting problem with that to all writing leads to some extremely weird outcomes like whether a student's poor attempt at analyzing a text shouldn't get at least as much credit as the student who fed it into a neural net they neither care about nor comprehend.


> I can learn long-form division by writing with a pencil. Is the pencil a crutch?

Yes. When was the last time you used that algorithm versus breaking down the problem in your head?


Strongly disagree. This is much more analogous to math students using a calculator rather than a slide rule. Yes, it’s faster. No, you aren’t going to pass unless you know the material well enough to validate the outputs.

I’m old. I’ve lived this moral panic so many times. I remember when social media, newsgroups, the internet, word processors, computers, graphing calculators, calculator watches, BBSes, and calculators were all going to destroy our children’s education.

I am highly skeptical that this time is any different.


By using some ML model to spit out text for you, will you learn how to write well? How to write a good essay? None of that. During the act of writing, the student is supposed to put in effort and think of well-worded phrases, using the parts of their brain responsible for language processing. Without actual practice, the exam will come and the student will fail. They won't have a bullshit generator at hand, producing speech that sounds sufficiently meaningful, when they are in an actual discussion "over the table". Even if they did, who would listen, if all they do is get out their phone and read some generated bullshit from it that does not follow along with what was previously said?

Being able to have a good discussion is a life-long valuable skill. Not practicing it will cost sooner or later.


I don't understand your point. If creating understanding is the point, why don't we test for understanding having been created but instead test for "bullshit activities" being successfully performed?

In your weight lifting analogy: why don't we test for outcome based fitness metrics (VO2max for runners, 1RM for weight lifters) and let students learn to choose the best ways to achieve those goals?


You seem to have solved the fundamental challenge with examinations of any kind. Please elaborate; teachers who for millennia have settled for compromises are eager to learn about your watertight solution.


Yeah I guess it's too hard for teachers to actually interact with their students to get an idea of their understanding of a subject. Let's do standardized multiple choice tests that only require you to learn how to pass tests. Who cares about the actual knowledge, right?


How do you test that a kindergartener understands addition without asking them for things a calculator can do?

Or do we just think it’s not important for people to understand addition at this point?


You give them a bag of marbles and tell them to show you 5 marbles and 3 marbles, and then show you how many there are when you put them all together. That's how I taught my kids addition.


Your solution is to take away tools and ask to do a task under constant supervision? People take exams like this every day.


You are missing the parent comment's point. Your question can also be answered using a calculator.


Don't tell my kids.


The point, I believe, is that sometimes generating bullshit requires some understanding, so it is a worthwhile exercise. Just like running: you end up where you started, so it may seem that you did nothing. But obviously you exercised.

Asking a student to write an essay is similar: it is an exercise. Of course that essay won't be published or read by anyone else, but that was not the point. Just like you were not running somewhere, you were just running for running.


> why don't we test for understanding having been created

What it means for understanding to have been created is a fascinatingly complex problem in philosophy: https://plato.stanford.edu/entries/understanding/

If you have a straightforward test for this, you may wish to consider contributing to the field.


For athletes, it's bad form that will limit your ability to reach the top levels of your domain. We then test at the top levels of those domains, like the Olympics or the Tour de France. We don't test VO₂ max since it is boring. In education we don't test understanding because it is hard to write those tests in many domains.


> The students bullshitting their way through course work are like the lifters using bad form to make the mechanical goal of moving the weight easier in the short term. They completely miss the point.

Eh, when your future career prospects hinge on what grades you get it's perfectly rational to optimize for getting those grades over learning "properly". Especially in subjects you know you're already competent enough in or that aren't relevant to your future goals.

It's stupid in weightlifting because nobody is going to pay you a higher salary if you can do 15 reps instead of 10.

(Although there's a related and opposite problem where people massively overestimate how important their highschool grades are and get unnecessarily stressed out over it.)


> Eh, when your future career prospects hinge on what grades you get it's perfectly rational to optimize for getting those grades over learning "properly".

I knew a few people who cheated and gamed the system at every opportunity through high school, all in the name of getting into the best universities.

They got absolutely crushed in college when their old tricks stopped working and they had to actually do the work. One of them got caught cheating and had a wake-up call when the university threatened him for it. He eventually put in the effort to grind it out and graduate a year late, but graduated nonetheless.

He's mentioned that he could have saved himself a lot of pain and suffering by just engaging in school the first time instead of cheating.

This idea that you can cheat your way into a job and then just coast doesn't really match reality. It may work for some people who get into weird jobs that don't fire people, but the idea that it's "optimal" to cheat your way through life until you're handed an easy job doesn't hold up in practice.


In your anecdote, the person who cheated got into a well-regarded university and graduated (albeit a year late). That's likely to have a better long-term outcome than not cheating at high school, and going to a lower-ranked school.

Cheating worked.


For the individual, maybe. For society, no.


Some of them didn't want to make the time commitment, or partied. But a lot of smart students back then had old answers to tests, especially in hard majors like the sciences. I think that counts as cheating; the professors probably figured out they were giving canned answers, or got suspicious when they arrived at the answer right away.


I did this accidentally once at uni. Previous years' tests were posted online (intentionally, by the person running the class), so I was doing those while studying to make sure I knew the material.

I sit down in the actual exam and, lo and behold, the questions were exactly the same as one of the previous years' tests I'd been using to study. I guess the professor got lazy.


The assumption here is that only grades will affect your lifetime earnings, and that other skills you might improve through studying vs cheating won't come into play...

Some people can bullshit their way through life forever, but I've seen a lot more bullshitters stuck doing entry-level crap for a long time because their limits became apparent...


> The assumption here is that only grades will affect your lifetime earnings, and that other skills you might improve through studying vs cheating won't come into play...

Or that perhaps you have a better plan for your own education than a cookie-cutter curriculum that is intended to be applied indiscriminately to millions of people and hasn't been updated in 50 years.


That's shifting the goalposts from the parent poster who was saying "all you need is the grade for your future career prospects," not "you are educating yourself independently."

Let's not pretend most people happy to cheat their way through to the eventual level of their mediocrity are doing so because they're doing a bunch of other learning.


It’s not a shift. I’m suggesting that both could be happening at the same time: you are maximizing your superficial credentials as easily as possible because credentials will help you and you are genuinely educating yourself because that will also help you.


In theory, maybe, but how many cheaters are undertaking their own great books curriculum in the copious spare time they're freeing up?


If you've written yourself off as "competent enough", or decided things "aren't relevant to future goals", you might want to think about the phrase "you don't know what you don't know".


I mean stuff like bullshitting your way through an English assignment when you're going to study computer science at uni.

You can be fairly confident that understanding the intricacies of Catcher in the Rye won't be important in any compsci paper or dev job, but the writing skills from the bullshitting are still useful.


Well, I for one hope I don't have to work with people who only ever spent the minimal effort, up to and including cheating on assignments, because their shortcomings will be what I have to deal with when they cannot pull (or lift, ha!) their own weight. But in reality it is highly unlikely that one will never have to work with such people. I guess more realistically I should hope not to encounter too many of them.


Is asking ChatGPT to generate low-quality code much different from the average npm dependency chain? It would seem disingenuous to berate AI code completion for removing the challenges of development, yet most reach for `import *` in the same breath.


> Much like lifting weights or running are "bullshit" activities.

> like the lifters using bad form to make the mechanical goal of moving the weight easier in the short term.

I don't know about you, but this would actually be the perfect excuse to do a sport like judo that you can't bullshit through but can still get the same end result from.


You do realise that in order to reach a good level in judo, you have to do some of the "bullshit" training like lifting weights, right?


I don't think so. I rarely lift weights but I do just fine in judo, largely because judo largely replaces it.


>I rarely lift weights

So, you do do "bullshit" exercise. What a weird hill to die on.


To get out of reading a long book -- if you want a TLDR or something. I see it as a tool to augment understanding -- bad actors will manipulate it to their advantage.

A teacher can't go around to each student to help with their understanding -- it's not economical, and each student is at a different level of ability.


I agree with some of this (particularly the conclusion that better education methods are required), but let's be a bit generous for a second.

The ability to write well is (or was) an important skill. Being able to use correct grammar, to structure even a simple argument, to incorporate sources to justify one's statements, etc. Even if we're just talking about the level of what GPT 3.5 is capable of, that still corresponds to, let's say, a college freshman level of writing.

Now, perhaps with the advent of LLMs, that's no longer true. Perhaps in the near future, the ability to generate coherent prose "by hand" will be thought of in the same way we think of someone who can do long division in their head: a neat party trick, but not applicable to any real-world use.

It isn't at all clear to me, though, that we're yet at the point where this tech is good enough that we're ready (as a society) to deprecate writing as a skill. And "writing bullshit" may in fact be a necessary element of practice for writing well. So it isn't self-evident that computers being able to write bullshit means that we shouldn't also expect humans to be able to write bullshit (at a minimum; hopefully they can go well beyond that).


The teacher (or developer of the curriculum) needs to decide both 1) what they consider to be cheating and 2) how important it is to establish a certain level of confidence that students aren't cheating. If writing grammatically without any external help is deemed to be extremely important, and it's far too easy for students to cheat on take-home writing assignments, then suck it up and use some of your in-class time for writing exams. Your in-class time is finite anyway, so deciding how to use it is already critically important.

If electronic earpieces become commonly used by students to cheat on in-class exams, and that's important enough to the teacher, I guess they better figure out a way to screen for those devices.

If eventually nearly everyone has some sort of brain link that can whisper grammatical phrases directly into their brain and cannot feasibly be detected in the classroom, well, then either your entire curriculum is on the honor system, or maybe you'll need to update your thoughts about what tasks are important for your students to be able to perform without "external" help.


Oh, I agree. Optimistically, the existence of LLMs will be a forcing function for education to focus on what actually matters. (IMO, this is the ability to critically assess and generate arguments, something GPT isn't particularly good at (yet)).

But your example is actually rather striking. In most highschool and college classes for algebra and calculus, we're asking students to solve/prove problems that computers have been easily able to solve/prove, and have been for decades.

But the educational consensus is that being able to do algebra or differentiate/integrate by hand is valuable (up to a certain complexity.) Which is why calculators are not allowed (at certain levels) and why "show your work" is an important part of grading at every level.

Perhaps this whole discussion is a nothingburger once we figure out what the language arts equivalent to "show your work" is for text generation.


> Now, perhaps with the advent of LLMs, that's no longer true. Perhaps in the near future, the ability to generate coherent prose "by hand" will be thought of in the same way we think of someone who can do long division in their head: a neat party trick, but not applicable to any real-world use.

Except that AI would be hard-pressed to become part of the in-circles of waves of new generations of artists and writers, who create new literature in part through being part of new literary movements. In a way AI is closer to God than to people, in its omni-ness. Unless ChatGPT eats and craps and needs to find a job, find love, find friends, loses things, installs Tinder, gets hurt, and has its life literally threatened (even just once in a lifetime--whatever that means for an LLM), it cannot be a wholesome part of society and thus react to political change by creating new modes of writing. Because of that, AI will forever play catch-up with the new tendencies in human literature and art.

Unless AIs dominate consumption and pay for their Netflix subscription, of course.


> Even if we're just talking about the level of what GPT 3.5 is capable of, that still corresponds to, let's say, a college freshman level of writing.

Indeed - the output of ChatGPT, given a very quickly thrown-together prompt and essay question, was able to easily beat the average of a cohort of freshmen in terms of writing skills. I imagine this is partly due to a decline in writing skills generally (?), but also partly due to an LLM having a significant advantage in being able to roll a die and produce a completely grammatically valid sentence on any topic each time.

I suspect that we will see LLMs reach a point where they pose a threat to weaker, non-straight-A students - at least at freshman level, you can already get GPT to write an essay that's more factually accurate than the weaker students in a cohort. It won't be factually perfect, but it will be more factually accurate than the students' own attempts.

Fundamentally though, I do wonder - if the marginal cost of prose reduces to near-zero (through bullshit generation via AI), what will we move to?

Lengthy prose that you know someone won't read is a perfect situation for LLM generation. Five short, sharp, action-focused bullet points that get to the crux of a situation and how to resolve it are far harder to bullshit, as ultimately people are focused on the substance rather than the style and presentation of your point. This would (I hypothesize) disadvantage both human and AI bullshitters equally.

I tend to see that the best students feel less of a need to pad and write more, as they know they have said what needs to be said, and are finished. Those with a need to bullshit (perhaps through not having as robust an understanding of the subject matter) will pad, flounder, circumlocute, and eventually get to something resembling a point.


If it can generate working code, does that mean that asking students to produce working code was bullshit? Or does it just mean that AI can now do a lot of things we used to ask students to do (for probably solid educational reasons) at an above-average level?


If it's reliably generating working code then that isn't bullshit! (Ignoring, of course, other things about the code that might be relevant to the assignment, like coding style or efficiency.) What I'm saying is that if you are looking at the AI's output and judging that it's bullshit, and if you can't distinguish that output from your students' satisfactory essays, then that by definition means that the assignment was to produce bullshit.


This is a pretty dumb take, and it's repeated throughout this thread. The goal of the assignment is not the result. I mean, the professor can probably write a better essay than some kid who just learned that the subject exists. The point of the exercise is to have the student learn how to do the work, so they can do it when it's not a simulated exercise.


You’re missing the point. Even if you disagree with the point, it’s important to understand it.

If the goal is “to have the student learn how to do the work”, and there is a tool they can use to do so, then using the tool is doing the work.

Your position only makes sense if you define “the work” to also include exactly the process you personally learned. No fewer tools (did you learn on a word processor?), no more tools (is spellcheck OK?).


Even if language models exist that can generate text for you, it is still very useful to be literate yourself. At least it is for me.


I think you're missing the elephant in the room: we learn _to_ learn and be able to adapt, not to "do the work".


Um. So kids adapting to use LLMs and learning how to prompt them to get the desired results is evidence that they aren’t learning or adapting?

Doesn’t that feel a little. . . Odd?


When you are told to write an essay about WW2, it's not because your teacher needs info about WW2, but because they want you to read, parse, search, and filter information and organise it in a logical manner. If all you do is type a question into ChatGPT, you learn none of these things that will be very useful in life, in many situations in which you won't be able to ask your AI overlord for a quick answer.

You go from being a Swiss Army knife to being a butter knife without a handle. It's all fun and games as long as you're asked to cut butter, but when you have to cut a steak or open a beer bottle you'll have a hard time.


That's not how learning works in the slightest. The outputs may be of similarly low quality, but if the student learned a few things in the process of writing that low-quality essay, then it might have been worth it anyway. Maybe the student didn't fully reach the knowledge or understanding needed for the highest grade, but it still improved the way they think about the subject. Maybe after a few days in the back of their head it will click, or maybe it will click after reading something tangentially related, or maybe it will help them find gaps in their knowledge they weren't even aware of before actually trying to complete the assignment (gaps that couldn't have been spotted by reading the homework tasks alone).


The code generated for school assignments is generally bullshit. No professor is going to take the students' assignments and run them in production on critical workloads.

The understanding in the mind of the students (or at least confirmation of that understanding) of how computers/the languages used work is not bullshit.

The problem here is that a lot of schoolwork is basically generating bullshit in the hope that it sparks/reinforces understanding of the material or lets a professor confirm that understanding. A competent bullshit generator makes that style of teaching/evaluation useless because you can just skip the understanding part.


You make it sound bad, but there's nothing inherently wrong with learner outcomes being worthless, or even worse than worthless.

When people learn a new language at a very basic level, they generate dull and simple sentences, and butcher the pronunciation. When they learn martial arts, they kick the air or a punching ball. When they learn to play the violin, they play hellish sounds that can make your ears hurt. But all that is still needed for learning.


> > where the student is being asked to produce bullshit

It depends on the subject? I have only good things to say about my college days. They were life changing. The rigorous training in computer science and maths has been paying dividends all these years. The professor in the writing class didn't just teach me how to write, but also how to understand and appreciate literature. Even classes like Canadian Culture and History taught me so much about the arts, literature, politics and history of Canada. I have a hard time fathoming in what classes we would write bullshit (yeah, I have some ideas, but I guess I was lucky enough that my school taught me how to love and appreciate the wonder of nature and civilization, instead of teaching me hatred towards you know what).


> This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.

Just curious which discipline you have a grudge against here. Because presumably a discipline is only actually a discipline if someone who has worked in the field their entire career can spot BS.


GP did not mention disciplines, and I don't think individual disciplines are to blame.

The approach of evaluating students based on "text generation" is very boring to study for, easy to fool (a parent/guardian can do it, last year's students can pass you an A grade answer, ChatGPT can generate it) and doesn't prepare students for reality (making new things, solving new problems).


How exactly do you teach "making new things, solving new problems"?

How can you solve a math problem if you can't do basic algebra, even if you can run an algebraic statement through wolfram and get a result?

You learn problem solving largely through solving already solved problems in life. They're new problems to the student, not the teacher.


What if the thing you are supposed to be making is written communication in the form of a document, or a book, or a paragraph explaining why production was down? What if the goal of these things was to make a student literate?


Not all of us can be making new things and solving new problems, however.


Not the parent poster, but I suspect this may be pointing towards humanities-based subjects, where I've seen regular assignments in the form of essays on fairly subjective topics.

I do think there will be a challenge in handling the use of GPT in some kinds of essay questions, even in objective, "right/wrong" type disciplines. I asked a friend to grade some answers to an essay-style exam question, and they felt the GPT output was considerably better than that of many students in the class. This wasn't a properly blinded or double-blinded test, but the output from GPT was so much better than the weaker students' that there was little need - you could tell it was better from a glance at the text.

ChatGPT had a far better grasp of the English language, and the language model's output was more coherent, better structured, and flowed better than what a weaker student would write. It had a significantly larger vocabulary than many students, and it appeared to always use words correctly. Sentences were fully formed and coherent, with proper grammatical structure and punctuation, and they advanced a cohesive idea (as the essay asked for). This wasn't true of the weaker (human) students' attempts.

The content that ChatGPT produced was not always factually perfect, but compared with anyone who wasn't top of the class, there were fewer factual errors in the ChatGPT output, since it was mostly regurgitating fairly simple text. A paraphrase of part of the relevant Wikipedia article would probably have beaten many students, though.

Where GPT output falls apart is references. Even when asked to produce references, they are generally non-existent (plausible paper name, plausible author, plausible journal name - but the paper doesn't exist and that author never published anything in that journal), and they do not actually support the statements that cite them. This still seems to be a fairly robust way of detecting non-human output at present.

It would be interesting to see this kind of experiment carried out at scale in a controlled environment, where the examiners are unaware that GPT outputs are interspersed with real students' work, in order to make it a fair assessment.


Why would you learn math when you have calculators?

Why would you learn to write when you have Grammarly?

Why would you read books when you have summaries?

Why would you watch movies when you have reviews?

Sometimes - a lot of the time, actually - it's not about the result but the process. Otherwise the optimal path through life is suicide or Matrix-style coma pods.

Everything we do before a PhD is by definition BS; it's just regurgitating already known facts.


> This will certainly require a lot of reworking of these traditional curriculums that consist heavily of asking the student to generate bullshit, but maybe that wasn't the best way to educate students this whole time.

That process was invaluable for making normal bullshitters into bullshit artists -- how will we train our elite human bullshitters in the modern age of language models?


Bullshitting well is an important skill in life. Sometimes the content doesn't matter, it's how you say it that matters.


The education system is moribund. During Covid, I thought we'd see the flipped classroom shine, and a revolution in education. Instead we got children chained to Zoom or Teams and attempts to replicate traditional classrooms remotely.

I wouldn't be surprised if my grandchildren are still doing the same bullshit in 20 years.


Yeah but it would be nice if that was the floor instead of a ceiling - personally I've never seen a ChatGPT-generated text that couldn't have been generated by asking a high schooler to generate the same thing.


Part of education is learning how to write. Some of that may be bullshitty, but useful nonetheless.


Hahaha, that's the most insightful comment I've read on HN thus far.

