> The parents' case hangs largely on the student handbook's lack of a specific statement about AI, even though that same handbook bans unauthorized use of technology. "They told us our son cheated on a paper, which is not what happened," Jennifer Harris told WCVB last month. "They basically punished him for a rule that doesn't exist."
I'm going out on a limb here, but if this is the viewpoint of the people who raised him, then I'm not surprised he cheated.
If this was my son and the facts were the same, he'd be grounded in addition to whatever consequence the school deems fit.
> I'm going out on a limb here, but if this is the viewpoint of the people who raised him, then I'm not surprised he cheated.
At my child's elementary school, and now middle school, the teachers openly encourage using AI chat for schoolwork, to help the kids understand both the usefulness and the uselessness of it.
I think that's great. AI isn't going away, so pretending students won't be using it at all times, both now and in their lives, is pretty naive and hopeless. Better to embrace it and teach them what it can and can't do.
From the article:
> the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).
They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books". Which would also be a good lesson in why AI is useful but you can't trust it at all without validation.
> They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books".
As far as I am aware, they weren’t punished for using AI specifically; they were punished for straight-up copy-pasting AI output, thus violating cheating policies (in addition to including hallucinated, non-existent citations). Whether it was copy-pasted from AI or from another real person is irrelevant; the AI part just made it easier to prove in this particular case.
The school even affirmed in their statement that AI was allowed to be used for brainstorming topics or identifying topics (quote directly from the article):
“Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application”
The purpose of school is to prepare students for the real world, and in the real world there's quite the delta between copying off AI and plagiarism. Copying off AI may annoy some people, but there's no law against it (at least right now).
> copying off ai may annoy some people, but there is no law against it
There is no law against plagiarizing on assignments by copying off a real human either, what’s your point?
If AI generated an assignment paper for me, and I simply copypasted it and turned it in as my own, I don’t see it being materially different from the same being done with a paper written by another human (rather than AI).
> the purpose of school is to prepare students for the real world
The purpose of each individual class is to teach students the skills that the class is supposed to teach. If the class in the OP was on proper usage of AI, that would be a different story, but it wasn’t.
Similarly, you wouldn’t write your own memory allocator in the real world, you would use an existing library for that. That doesn’t mean you will get away with just simply importing an existing memory allocator library in your assignment code and calling it a day.
>AI isn't going away, so pretending students won't be using it at all times, both now and in their lives, is pretty naive and hopeless. Better to embrace it and teach them what it can and can't do.
I agree with that, but school should focus on fundamentals. The temptation and the ability of AI to do schoolwork for students so they don't need to put in any effort is a real problem; learning should require some effort, and AI could remove that part for students. On the other hand, AI could be used for some types of education, but then people tend to throw the baby out with the bathwater and declare AI the principal tool for learning, further diminishing the role of professors and mentors.
> > the students had indiscriminately copied and pasted text from the AI application
If that's what they were doing, then I agree that the real problem wasn't the use of AI as such. The real problem was that this is clearly plagiarism. Punishing it the same as any other plagiarism sounds appropriate.
Is it clearly plagiarism? I wouldn't say it is that clear-cut, since in a sense the output of an LLM to a prompt you give it could still be seen as something you produced -- albeit with the help of a magical matmul genie.
Yes. It’s clearly plagiarism. Your reply is clearly grasping at the furthest of straws in an attempt to be contrarian and add another “stochastic parrot hehe!” comment to the already overflowing pile. Line up 100 people and the only ones agreeing with you are other wannabe contrarians.
I truly don't understand the tone of your comment.
I'm not grasping at the furthest of straws, I see a distinction between 'verbatim copying someone else's work' and 'verbatim copying the results of a tool that produces text'.
Plagiarism isn’t the copying part, it’s the part where you claim to be the author of something you are not the author of. Hope that helps clear things up. You can plagiarize content that you are both legally and ethically allowed to copy. It doesn’t matter the least bit. If you claim to be the author of content you didn’t author and lack attribution, AI or otherwise, then you’re plagiarizing the content.
> A translation tool like DeepL is presumably trained on a huge amount of 'other people's work'. Is copying its result verbatim into your own work also plagiarism then?
Yes, if you present yourself as its author.
So let's say you are not a native English speaker and write a passage of your paper in your native language, then let DeepL translate this and paste the result into your paper, without a note or citation. Is that plagiarism?
the tool actually produces text… of someone else’s work… that you then copy… verbatim… :)
But the text itself is not someone else's work verbatim.
A translation tool like DeepL is presumably trained on a huge amount of 'other people's work'. Is copying its result verbatim into your own work also plagiarism then?
Plagiarism - by definition - is copying someone else’s work.
The easier definition is “did YOU write this?” If the answer is no, you plagiarised it and should be punished to the full extent.
'Someone else's work' -- exactly. Not 'the output of some tool'.
I'm not saying what the guy did wasn't wrong or dumb, I'm saying: Plagiarism has a strict definition, and I don't think it can be applied to the case of directly copying the output of an LLM -- because plagiarism refers to copying the work of another author, and I don't think LLMs are generally regarded (so far) as being able to have this kind of 'authorhood'.
plagiarism does NOT refer to copying the work of another author, it refers to you submitting work as yours that you didn’t yourself write.
If I copy an entire article from the Economist, did I plagiarize?! There is no author attribution, so we don’t know the author… Many articles in media today are LLM-generated (fully or partially); can I copy those if someone sticks their name on them as the author?!
Bottom line: you didn’t do the work but copied it from elsewhere, so you plagiarized it, period.
I'll just link here to another comment I made that sums up my argument quite well, I think:
It seems clear to me. The student is claiming that he wrote something that he didn't write.
Definition of plagiarism, by the Cambridge Dictionary:
"the process or practice of using another person's ideas or work and pretending that it is your own"
What I am objecting to is the "another person's" part. An LLM is not a person, it is a tool -- a tool that is trained on other people's work, yes.
If you use a different tool like DeepL, which is also trained on other people's work, to produce text purely from an original prompt you give it (i.e. translate something you wrote yourself), and you put that into your paper... is that then plagiarism as well? If not, what if you use an LLM to do the translation instead, instructing it to act strictly as a 'translation tool'?
It seems to me that the mere act of directly copying the output of an LLM into your own work without a reference cannot in every case be considered plagiarism, unless LLMs are considered people.
Of course, you can prompt an LLM in a way that copying its output would _definitely_ be plagiarism (i.e., asking it to quote the Declaration of Independence verbatim, and then simply copying that).
So, all I'm saying is: The distinction is not that clear, has nuances, and depends on the context.
By your argument, since an encyclopedia is not a person, I can copy it with impunity. It's a collection of work built on others' ideas and research, but technically a tool to bring it together. I can assure you that virtually any school would consider the direct use of it, without citation, plagiarism.
Let's assume I used an encyclopedia outside of my native tongue. I took the passage verbatim, used a tool to translate it to my native tongue, and passed it off as my own. The translation tool is clearly not a person, and I've even transformed the original work. I might escape detection, but this is still plagiarism.
Do you not agree?
Let's go to how Cambridge University defines it academically:
> Plagiarism is defined as the unacknowledged use of the work of others as if this were your own original work.
> A student may be found guilty of an act of plagiarism irrespective of intent to deceive.
And let's go to their specific citation for the use of AI in research:
> AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge
> By your argument, since an encyclopedia is not a person, I can copy it with impunity.
I don’t see where they said (or implied) that.
How does “that isn’t plagiarism” imply “I can copy it with impunity”? Copyright infringement is still a thing.
Have you conflated plagiarism with copyright infringement? Neither implies the other. You can plagiarize without committing copyright infringement, and you can violate copyright without plagiarism.
I'm sorry, but this encyclopedia analogy really doesn't say anything at all about the argument I raised. An encyclopedia is the work of individual authors, who compiled the individual facts. It is not a tool that produces text based entirely on the prompt you give it. Using an encyclopedia's entries (translation or not) without citing the source is plagiarism, but that doesn't have any parallel to using an LLM.
(Also, the last quote you included seems to directly support my argument)
The translation software isn't a person. It will necessarily take liberty with the source material, possibly even in a non-deterministic fashion, to translate it. Why would it be any different from a LLM as a tool in our definition of plagiarism?
If I used a Markov chain (arguably a very early predecessor of today's models) trained on relevant data to write the passage, would that be any different? What about an RNN? What would you qualify as the threshold we need to cross for copying the tool's output to not be plagiarism?
when did he imply that a LLM would be different as a tool than a translator in his definition of plagiarism? are you even understanding his points lmao?
There's nuances to the amount of harm dealt to the authors based on what sources you are stealing from, but it's irrelevant here, as the specific incident we're talking about is whether or not the student is the actual author of the work submitted.
It'd be the same as if I had Google Translate do my German 101 exam. I even typed the word "germuse" with my own two thumbs!
What we are talking about in this sub-thread is exclusively the 'this is clearly plagiarism' part.
If you used Google Translate for your German 101 exam, that would be academic dishonesty/cheating, but not plagiarism.
I'm largely uninterested in the specific name you want to give it and more if its worthy of punishment.
> What I am objecting to is the "another person's" part.
Fair enough. We disagree about definitions here. To me, plagiarizing is claiming authorship of a work that you did not author. Where that work came from is irrelevant to the question.
> If not, what if you use an LLM to do the translation instead, instructing it to act strictly as a 'translation tool'?
Translation is an entirely different beast, though. A translation is not claiming to be original authorship. It is transparently the opposite of that. Nobody translating a work would claim that they wrote that work.
> Fair enough. We disagree about definitions here. To me, plagiarizing is claiming authorship of a work that you did not author. Where that work came from is irrelevant to the question.
This is exactly what it is ... the post is taking "another person's" way too literally - especially given that we are in the year of our Lord 2024/2025. One of the author's comments above also discards the encyclopedia argument by stating that encyclopedias are written by people, which cannot ever be factually proven (I can easily ask an LLM to create an encyclopedia and publish it). Who is "another person" on a Wikipedia page?! "A bunch of people" ... and how is an LLM trained? "A bunch of people, a bunch of facts, a bunch of ____"
The crux of this whole "argument" isn't that plagiarism is "another person's work" it is that you are passing work as YOURS that isn't YOURS - it is that simple.
Well, I understand, and I suspect that a lot of people commenting here see the term similarly to you; but there's an official definition regardless of your personal interpretation, and it does include the 'somebody else's work' part.
Why is translation a different beast? It produces text based on a prompt you give it, and it draws from vast amounts of the works of other people to do so. So if a translation tool does not change the 'authorship' of the underlying text (i.e., if it would have been plagiarism to copy the text verbatim before translating it, it would be plagiarism after; and the same for the inverse), then it should also be possible for an LLM to not change the authorship between prompt and output. Which means, copying the output of an LLM verbatim is not necessarily in itself plagiarism.
> but there's an official definition regardless of your personal interpretation, and it does include the 'somebody else's work' part.
No, it doesn't. First of all, dictionaries aren't prescriptive and so all quoting a definition does is clarify what you mean by a word. That can be helpful toward understanding, of course.
That said, the intransitive verb form of the word does not require "somebody else's work" in the sense of that "someone else" being a human.
> to commit literary theft : present as new and original an idea or product derived from an existing source
-- Merriam-Webster, https://www.merriam-webster.com/dictionary/plagiarize
According to this, what it means is taking credit for a work you did not produce. That work did not have to be produced by a human; it merely had to exist.
> Why is translation a different beast?
Because it doesn't produce a new work, it just changes the language that work is expressed in. "Moby Dick" is "Moby Dick" regardless of what language it has been translated to. This is why the translator (human or otherwise) does not become the author of the work. If you were to run someone else's novel through a translator and claimed you wrote that work, you would in every respect be committing plagiarism both by the plain meaning of the word and legally.
> copying the output of an LLM verbatim is not necessarily in itself plagiarism.
Yes, it is. You would be taking credit for something you did not author. You would be doing the same if you took credit for a translation of someone else's work.
Did he write it? Did he write 99% of it? 98%? Less than 5% of it?
Then did he represent it as his own work?
>They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books".
This. We should apply this sort of logic in most situations: penalize the actual 'crime'. Worrying about meaningless details beyond that is just a distraction that confuses the issue.
It was considered cheating to copy and paste my paper from Wikipedia.
Why is it any different in this case?
Agreed.
Only someone trying to cheat would use the excuse that it wasn’t explicitly stated that AI was cheating.
This reminds me of the court case where they asked the court to define child pornography and they said “I can’t define it, but I know it when I see it.”
Imagine saying with a straight face that some pictures you have of a minor are fine because this particular pose, facial expression, and clothing wasn’t specifically designated as child porn. It would instantly make you sound like a pedo, just like he sounds like a cheater.
> This reminds me of the court case where they asked the court to define child pornography and they said “I can can’t define it, but I know it when I see it.”
If you’re referring to the famous statement from Justice Potter Stewart’s concurrence in Jacobellis v. Ohio, that comment was in reference to defining “hardcore pornography,” not child pornography.
Exactly, it wasn't about CP (in particular) at all, just pornography. Which makes it a really horrible ruling: at least with CP you can use age limits, though there are still huge controversies about 1) pictures by parents of their kids doing things like taking a bath and 2) artwork/animation that isn't even photography and never involved any actual children.
Stewart's ruling was ridiculous: how is anyone supposed to know whether something is "pornographic" or not if it can just be determined by some judge arbitrarily and capriciously, and there's no objective standard whatsoever? They could just rule any picture of an unclothed person to be illegal, even if it's something medical in nature. Heck, they could even rule a medical diagram to be porn. Or anything they want, really.
> though there's still huge controversies about 1) pictures by parents of their kids doing stuff like taking a bath
Is this a controversy in the US?
I am fine with people worrying about Zuckerberg or Sundar potentially watching their naked kids, but I guess the angle is that the parents would be the 'perverts', and not that the surveillance state is the problem?
I think Germany has exclusions for pictures of your kids taking baths. That’s just pretty common, and midwives will tell you to take a picture when they show you how to bathe your baby for the first time.
We're talking about an American court ruling here, and America is well-known for being far more prudish than Germany, so I'm sure there's no exclusion there for kids taking baths.
Back in the 70's there was a sex-ed book called "Show Me" which featured very frank photographs of both adults and children, completely naked.
It was subject to litigation at the time it was released and was upheld as not being obscene. But it continued to cause moral panics and was eventually taken out of print by the publisher.
But it was never banned and can still be found as a collector's item. And in the mid 70's it was completely normal to walk into a book store and see it on display.
It's such a stupid argument. That because the handbook didn't list the specific method of cheating it was therefore allowed.
How about just don't cheat?
Highly successful people like Elon Musk and Donald Trump succeed in part because they push the rules really hard, and they win enough in the cases when it matters. So it's a proven strategy, though it doesn't always work.
Criminals push the rules really hard too. It's just that they're only labeled criminals by society if they're caught, prosecuted, and punished.
People who "push the rules" (I call this cheating, if I was playing a tabletop game with someone who did this, I would say they were cheating too) have an unfair advantage over the people who do follow the rules.
It's like sociopaths: smart sociopaths become successful businesspeople and politicians, while stupid sociopaths end up in prison.
People who are good at pushing the rules and have some kind of knack for knowing which rules to push and how far, end up succeeding over the rule-followers, while the people who lack this talent and push the wrong rules, or push too far, end up failing somehow (and maybe in prison).
It's no good cherry-picking success stories to evaluate a strategy. You have to know how often it fails too.
Anyway, "pushing the rules" is so vague, and it could be done intelligently, ethically, or otherwise.
In this particular case the student copy-pasted AI output into an assignment and tried to pass it off as his own work. I mean, come on... the fact this went to court is just absurd.
What's the relevance? Are you going to embark on a "Let's Educate The Users" mission for parenting?
It would be futile. Parents and children are now united in not wanting to be educated.
This has got to be specific to the US, yes? None of my overseas colleagues have this attitude toward education, and my wife (now an American, but an immigrant) certainly doesn't.
It seems to highly correlate with religious fundamentalism, IMO, which has a great many angry adherents worldwide, as well as in the US.
What is unauthorized use of technology? Is the light I need to read not technology? Is using the internet to find more about a topic not technology? Where is the line that makes AI forbidden?
The lack of implicit or explicit authorization. As the school has lights, you may assume they are authorized implicitly.
This is unproductive and unenlightening pedantry.
I think the throwaway actually raises a valid point about the rule being an exceedingly broad catchall. The type primed for selective and weaponized enforcement.
That said, the kids' position is clearly indefensible in this situation, both for the blatant plagiarism and for just being factually incorrect in their report.
> The type primed for selective and weaponized enforcement
Theoretically true, but irrelevant because this particular case isn't that.
Yes, it is broad, and probably a bad rule. That said, there is more than enough beyond that simple rule in this case that points toward intentional academic dishonesty. If he were my son, getting off on a technicality wouldn’t be exoneration in my house.
In education, the goal is internalizing in the individual the knowledge required for them to bootstrap into a useful, contributing member of society. Things like composition of written works, organizing one's thoughts into communicable artifacts, doing basic mathematics, familiarity with the local and encompassing polity and its history, how to navigate and utilize institutions of research (libraries), etc. Any technology employed that prevents or sidesteps that internalization is unauthorized.
It ain't that hard to connect the dots unless you're going out of your way to not connect the dots.
If that is the goal of a specific task, be explicit about that. From my personal experience, it's just what old teachers say because they can't deal with the new reality. The teachers who incorporated AI and other tech into their lessons are much more successful and better rated - by parents as well as children, and the final exams too.
They were explicit about that! There was an AI policy that the student knew about and blatantly ignored. I am not sure how much more explicit you could want.
Here is a relevant quote from the TFA:
> Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).
(And re: better ratings: sadly, making classes less useful usually improves the ratings. I'm sure that if there were a class where students just watched cartoons, with super-simple quizzes at the end that everyone could answer, it'd get the highest ratings from most students, high ratings from many parents, and the best scores on finals. Now, it might not work out so well in the real world, but by that time school would be long over...)
The problem is that the school simply didn't teach them enough about the tool, and they're treating something that should be just another lesson as grounds for disciplinary action.
Why do you expect children to learn math only after months of attempts, but to understand the perils of AI after hearing a one-sentence rule? That's not going to help the kids. You need to spend time practicing with the tool alongside them, showing them the pitfalls in practice, and only after enough time can you start grading them.
The school handed them a gun and is surprised-Pikachu that a kid got shot, and now they're blaming the kid who had it in hand - but the school is to blame. And it's certainly not newsworthy; where is the article about the math exam results?
> they taking something that should be just another lesson as disciplinary action.
Remember the "disciplinary action" here is giving the student a bad grade, with the opportunity to redo the assignment.
Are you seriously asserting they should've gotten a good grade for a paper that cites sources that don't exist? In an advanced placement class no less?
If anything, they're getting off lighter than students who did a bad job on their own without using AI. I know I was never given a chance to redo a paper I phoned in.
I know my paper was never in the national news.
Did your parents sue over it? It's not like the school went running to the media.
Why do you think that it’s in the national news!? Can you please re-state your actual view because I’m really not sure of it anymore.
Something that can be read 10k kilometers away from the school by people who never met the kid.
The students are not expected to understand the perils of AI, they are expected to follow the policies the teacher gave them. And the policies were very clear: they can use AI for inspiration and web search, but they cannot use it to write text to be submitted.
What you are describing might make sense for "ChatGPT class", but that wasn't it, that was AP History.
(And that's how schools work in general: in real life, no one computes integrals by hand; and yet calculus classes do not teach CAS systems or their perils.)
The current trend in education is to blend subjects and methods together and create cohesive interdisciplinary lessons practicing multiple skills at once. "ChatGPT lesson" is the 19th century way.
This is like saying that they have an AI policy, but that using an LLM isn’t in violation since it is just calculus applied to words and not true intelligence.
Courts aren’t stupid for the most part. Most of them are happy to interpret things in terms of what a ‘reasonable person’ would think, for better or worse. Right now, most reasonable people would say that using an LLM to do your work without disclosing it is in violation of the spirit of the student handbook.
I could fine tune an AI, and name it after myself, and put my own name at the top of the paper as an attribution, but no reasonable person would say that was following the spirit of the law if I turned that in as a class assignment.
The part where students were given explicit guidance on the use of AI resources and told how to cite it appropriately. Besides, even aside from the use of technology it’s still a blatant case of plagiarism as he passed off work he did not write as his own.
How could you even cite AI “appropriately”? That makes about as much sense as citing Google..
Like e.g. this:
“Clarity isn’t found in answers; it’s carved from the questions we dare to ask.” - ChatGPT, https://chatgpt.com/share/67439692-b098-8011-b1df-84d3761bba...
I know that being obtuse is sometimes fun..
But no paper (even high school level) which does that should ever be accepted..
No paper?
If you're clear what your sources are, why does it matter who (or what) you quote?
Boris Johnson and GWB are known (for different reasons) for spouting gibberish sentences, yet cite them, and if the quote is appropriate then no foul; all fiction is made up, and that too can be quoted when appropriate.
When I was at school in the UK 24 years back, media studies was denigrated as "mickey mouse studies", but in retrospect the ability to analyse and deconstruct media narratives would have been useful for the country.
Now AI is a new media.
Sometimes it will be the right thing to quote; sometimes it will be as much of an error as citing Wuthering Heights in an essay about how GWB handled the War On Terror.
20 years ago, when I was taking a graduate class in Technical Communications, we were taught how to properly cite online sources. This is far from being new.
Have you actually read the piece? The answers to those are in the written policy the student was given. But even without the policy, it should be pretty clear that passing off others' work as your own (be they people or AI) is academic dishonesty.
As the judge said, "the emergence of generative AI may present some nuanced challenges for educators, the issue here is not particularly nuanced"
Is what I wrote here mine or not? I used the autocorrect suggestions almost exclusively, and wrote only a few letters myself.
Then, no. This isn’t text you generated. No one cares on Internet forums though.
Who came up with the words? If autocorrect is acting as a typist, transferring your words to screen, you are the author.
What if I first asked ChatGPT what should I say? And what's the difference from just copy pasting it?
The question is who comes up with the words. If you re-type a textbook, you are plagiarizing. The same happens if you re-type ChatGPT output.
On the other hand, if you read some text first (be it ChatGPT's output, or a textbook) and then rephrase it yourself, then you are the author.
How much do you have to rephrase? Is changing every other word to a synonym enough? That's actually a gray area, and it depends on the teacher. Most teachers would expect you to at least change the sentence structure. But in this case it's completely irrelevant, as we know the students did copy/paste.
I really don't see why you are trying to present ChatGPT as something special re: plagiarism. Copying others' work is copying. Paying $10 to someone to do your homework and then copying their answer as-is is cheating. So is using ChatGPT to do it for free.
ChatGPT is not someone. It's a tool.
So is a textbook. Still not OK to copy homework from it.
A textbook has an author that you can copy. You can't copy the output of an auto-suggest; it's just yours.
It does not matter whether the author is a human or a computer.
If there is a spelling bee, but student is secretly using spellcheck on the phone, they are cheating.
If there is a math speed competition, but student is using a calculator on the phone, they are cheating.
If it's a calculus exam, but student is using Wolfram Alpha (or TI-89) to calculate integrals and derivatives, it is cheating.
If it's a written exam but student is using ChatGPT to write the text, it is cheating as well. Not that different from previous cases.
There is no difference. They’re not your words.
These are games no one in the real world is interested in playing.
How come they're not my words? I thought of sending them, not the keyboard. Same with ChatGPT, it doesn't do anything on its own - even if it could, it's a tool, not a person.
If he had turned in the prompt he wrote, then this would be his words.
If your position is correct, then I suppose asking another person to write an essay for you is in fact your writing as well. Which is absurd.
This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”. If AIs writing essays for you isn’t plagiarism, then nothing is.
How could it be the same if another person wrote it?
Situation A - a person uses a tool. They are the author.
Situation B - a person contracts another person. They are not the author. The other person might be using a tool, but it doesn't matter.
This is not a theoretical debate. This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit. Change the law if you want the default to be different. This is how it works now.
> This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”.
I'm an expert in the field. I don't need to ask mainstream people about their sci-fi-influenced opinions; they don't matter to me or to the law. This is how it works: it's not magic, it's not sci-fi, it's not a person. It can't be an author, so it can't be plagiarized by the user, nor can the user violate the tool's copyright by using the output.
It's a tool that authors can use to create their works, and even if all they did is push a single button, they are the author. Compare this to me applying a filter on a blank canvas to produce an abstract art wallpaper in Photoshop - am I the author or is Photoshop? Let me tell you, don't try to steal my work. I did it, not Photoshop, that was just a tool that made it easier for me.
Same with ChatGPT - this morning I used it to write an architectural proposal. It's my proposal, there is no way I somehow "plagiarized" something. Doesn't matter that I pushed like 50 buttons in total; it's my work, I am the author.
--
And if schools don't actively teach children to use this technology, they should be reformed. If this school were in my city, I'd vote for the party that would cancel it, for the extremely serious offense of letting children think it's wrong to use tools. At the very least, they should explain that the goal is elsewhere and why a particular tool shouldn't be used for that task. It's not like children will just know, you know.
This is just like when teachers at my own school told us that Wikipedia is a bad source and we can't use it because it can't be trusted. That's total bullshit; just be honest: this assignment is supposed to test your writing skills, not tool-usage skills.
> This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit.
The classroom teacher was explicit in their expectations. The student did not follow the instructions. Did you RTFA?
> It's a tool that authors can use to create their works
This isn't about authorship or ownership. It doesn't matter whether the words are "yours" or not - that you keep making that a point of contention is a semantic sideshow. This isn't a commercial or artistic enterprise. If the student wants to publish their copy-pasted, error-ridden slop (or masterful slop!), then by all means, they can go for it.
Rather, this is a classroom; the goal is to learn, think, and demonstrate understanding. Copy-pasting responses from AI does not do this (even setting aside the explicit instructions not to do so). Similarly, plagiarism isn't bad in high school because it hurts other authors; it's bad because it's a lazy end-run around the process of learning and thinking that undermines those goals and creates habits of avoiding critical thinking.
> a person uses a tool
If you simplify and trivialize everything to an extreme degree, then yes, you might have a point…
> Wikipedia is a bad source and we can't use it because it can't be trusted
No, but you still shouldn’t use it for anything besides research (i.e. you should be citing the sources the Wikipedia article is based on directly).
> just be honest - this assignment is supposed to test your writing skills,
Isn’t that more than perfectly obvious already?
It's important to understand that all ChatGPT knows is what's in its training set and what was in the prompt. I see it all the time in programming: if you are building boring, run-of-the-mill websites or plugging other people's libraries together, then AIs like Copilot have a very high success rate. But try something new or unusual (even a bit unusual, like QNX), and the lack of training data means the number of hallucinations becomes incredibly high, and the resulting code is often outright wrong.
A lot of times that's fine - after all, there are programmers who spend their careers never writing an original algorithm, and they could definitely use the help. And in a lot of cases an original architecture is actually a bad idea - stick to the boring technology, create your architecture proposal based on re-blending previous projects with a tiny bit of changes. Nothing wrong with that.
But school tries to be better than that (it does not always succeed, but it at least tries). That's why students are taught how to multiply numbers by hand, and only then are they allowed calculators. They must be taught how to find primary sources before being allowed to use Wikipedia. And they must be taught how to write their thoughts in their own words before being allowed to use ChatGPT.
Sure, some of them will go on and never multiply numbers by hand, nor ever read a primary source nor ever create a highly original proposal. That's fine and even expected. Still, the school aims high even if not everyone can get there.
Somewhere in there is the problem. ChatGPT & Co should be viewed as tools - they can do research, they can discuss and echo your ideas, they can write FOR YOU. (Or at least that's the idea, when they don't go nuts). And if they are your tool, you are doing the work - you are the author.
If ChatGPT copies the Wikipedia article, you are now in trouble. Your tool caused you to plagiarize without your even noticing, but that won't save you. Same if you include complete nonsense generated by your tool. Still your fault (although some lawyers have been getting away with it).
If you copy the Wikipedia article yourself, at least you know you cheated.
But equating using the tool with plagiarizing is absurd. What is possible is that the problem is now overpowered by the tool; it may now be trivial to answer with a few clicks.
But again, it's a long tradition for schools to outlaw some tools in the effort to teach something specific that the tool would replace: calculator out of bounds, or only a basic calculator and no computer, no books, slide rule only, etc.
Yes, exactly right. If your tool caused you to plagiarize, it's your problem - you should've checked the output. You can't point to ChatGPT, it'd be like pointing to Word. You are the author and thus you are responsible.
Schools need to adapt. Explain why a tool is forbidden; it will improve the efficiency of learning if the kids know the purpose. Use different, more fun methods to teach mental skills, and teach overcoming problems using various ever-evolving hi-tech tools, instead of the rigid lesson/homework style of today. Teach them that change is constant and that they need to adapt. This is not something theoretical: there are state-sponsored schools operating this way where I live. It's great.
Come on, it's not that complicated. The judge determined that the student had copy/pasted the AI output and submitted that. That's not the same as using AI for research, the way you use Google Search for research.
The student was not punished for "using AI", but for plagiarism:
>The incident occurred in December 2023 when RNH was a junior. The school determined that RNH and another student "had cheated on an AP US History project by attempting to pass off, as their own work, material that they had taken from a generative artificial intelligence ('AI') application," Levenson wrote. "Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations)."
So they were caught due to the citations to nonexistent books. That seems fine, and unconnected to all of the controversial news stories about AI detection, which has a high rate of false positives.
The teacher noticed a few irregularities, such as the time the student spent inside the submitted project. The cheating student only had around 55 minutes logged, whereas everyone else had 7-8 hours. That set off alarm bells for the teacher, who then actually looked into the paper more, noticed the fake citations, and saw it flagged as AI-generated when they ran it through a few AI detectors.
Good. Insofar as the point of a school paper is to make the student do the thinking and writing, taking it from ChatGPT is plagiarizing from OpenAI.
Probably an unpopular opinion here, but families, usually wealthy, that use the legal system like this to avoid consequences are parasites. It reveals not only a poor job of raising your children, but also the poor character of the parents.
Glad the courts didn’t grant a similar “affluenza” ruling here. The student plagiarized, short and simple.
> Probably unpopular opinion here but families, usually wealthy, that use the legal system like this to avoid consequences are parasites
100% agree. Plus it diverts school resources -- in other words, taxpayer money -- to fight the court case, meaning they have less to spend on educating children, which is what taxpayers are funding.
The parents should be on the hook for the school's legal fees.
I think the only unpopular part of that is that you'd think it's an unpopular opinion
It depends on the audience. On HN, it's probably a popular opinion. Among typical American parents these days, it's probably not.
+1.
The article hints at a worry about college applications by the student. So perhaps this was an attempt to clean the student's slate of the cheating record. Still, I can't support the approach taken.
Agreed. Ironically it’s parents such as these who are the most loudly self-proclaimed staunch supporters of meritocracy, except when it comes to their children who are somehow not subject to it at all.
What's sad is that the school district spending its limited money to fight frivolous lawsuits like these directly impacts all other students who are just trying to get a good education. Stuff like this won't stop unless the helicopter parents are made to pay the school's legal bills.
This isn’t helicopter parenting, it’s bulldozer parenting. One wonders how they convince themselves that this is good for their child.
When you don’t know the difference between good and bad, and when bullying your way through things on a technicality because you have the money to do so is normal, well I doubt seriously they convince themselves of anything. They know based on pure emotion and the desired outcome that they were wronged — so of course you sue?
Sadly, the parents could still win. What a precedent that would be. Though I'm not sure there's any avoiding a future with mass societal atrophy of reading and writing skills.
In the midst of the perennial "woe the kids today" rant, it's worth considering that this generation is the first generation in the 300,000 year history of homo sapiens sapiens to widely utilize reading and writing as a primary means of daily communication.
Ob XKCD: https://xkcd.com/1414/
The reading/writing part of the brain is a repurposed region otherwise responsible for recognizing faces, facial emotions, and so on. Given the already noticeable decrease in in-person skills among the young "texting" generations, we can speculate about where it may go.
They might all become hyper-nerds interested in D&D?
Sounds cool.
But then, my brother was already into D&D in the late 80s back when I was learning to read from the Commodore 64 user manual, so of course I'd think that :P
Yes, they'll be able to grasp the beauty of Les Misérables or The History of the Decline and Fall of the Roman Empire so long as it is fed to them as a stream of <280-character tweets, dohohoho.
Mr. Munroe is falling victim to the unfortunate phenomenon where people believe their popularity means that their opinions outside of their areas of expertise are well-informed. Whether they can spell better or not, minds weaned on little chunks of text laced with memes and emoji are going to struggle with chapters, let alone full books.
Given the comic is an echo of conversations like this, I think perhaps that if he is guilty of that then so too are you and I and all others here.
Myself, I say that to equate the modern proclivity for tweets with a degradation of intellectual rigor is as fallacious as imagining that the concise elegance of Tacitus foretold a dulling of Roman wit. Or something like that.
Will they (kids these days) like the style of old classics? Of course not, just as few native english speakers alive today wish to speak (or write) in the style of Shakespeare — breaking a long thing up into tweet-sized chunks, that is simply style, no more relevant than choice of paragraph or sentence length.
But to dismiss a generation’s capacity for engagement with monumental works (be they Les Mis, The Decline and Fall, Shakespeare, Dickens, or any other) on the basis of their chosen tools of communication betrays not only an ignorance of how often communication has changed so far — when Edward Gibbon wrote The Decline and Fall, literacy in the UK was somewhere around the 50% mark, but the illiterate could still watch plays and listen to tales — but also of modern attention spans, when binge-watching entire series of shows is a relatable get-to-know-you-better topic on dating apps.
I mean, I'm pretty sure that more people have read Sam Pepys in the last 15 years than in the previous few hundred, due to https://www.pepysdiary.com (it's a bot that tweets Pepys' Diary in real time; no longer available on Twitter due to API changes, but it's still on RSS, Mastodon and Bluesky. It's on its third run through the diaries).
Hopefully this sets a precedent that discourages other similar lawsuits or gets them dismissed out of hand.
What's striking to me is that the parents sued. RNH passed off AI-generated text as their own when they knew how to cite AI generated works and were versed in academic integrity. It wouldn't occur to me to sue the school if this was my kid.
They're not optimizing for the kid's education. They're optimizing for the credentials the kid is able to get.
Filing the lawsuit is an asymmetric bet:
- win, and increase college admissions odds
- lose, and be no worse off than without the suit
> lose, and be no worse off than without the suit
This kid should change his name, given his initials, high school and parents’ names are public record next to a four brain cell cheating attempt.
Do you think college admissions officers follow the news and use what they learn to maintain a naughty list?
Perhaps a business idea?
Unless he has someone who is very sympathetic to his cause, the teacher/counselor recommendation will wreck him.
This guy needs to go to a JuCo that feeds into a decent state school — he’s screwed for competitive schools.
I'm guessing at some point there will be LLMs trawling through news items to put together profiles for people, and as the cost comes down, it won't just be available to three letter agencies and ad platforms, but schools and employers will start to use them like credit scores.
> Do you think college admissions officers follow the news and use what they learn to maintain a naughty list?
College admissions, no. College students and colleagues and employers, being able to use a search engine, absolutely.
If you search the student's name on Google, you probably won't find this lawsuit.
Admissions know his name and the name of the school, which helps find specific students.
It’s easy to miss, but I wouldn’t be surprised if searching “Hingham High School Harris” brings up the relevant info. Further, his parents suing may be a larger issue for a college than his behavior.
Nope. I just replied above with a similar story when I was in school. My classmate got expelled for cheating and sued the school. tv segment, articles about him, etc.
Zero effect on his college outcomes. Got into really good schools.
Understand, though, that the kid is bearing the implications of the parents' decision.
> win and increase college admissions odds + also gain funds for the parents
> lose, and be no worse off than without the suit
If I were in college admissions then I'd probably think twice about admitting the candidate with a widely reported history of trying to sue their school on frivolous grounds when things don't go their way.
> - win, and increase college admissions odds
Wouldn't this decrease them? I wouldn't want to admit a litigious cheating student, whether the suit was won or lost. This was a pure money play by the parents.
Do you think colleges thoroughly check the background of each applicant, such that they would discover this?
> - win, and increase college admissions odds
Will it, though? Like if the college happens to know about this incident?
It does strike me that the purpose in attending college is the credential you get; education is a far second.
It strikes me that this is a foolish take to adopt.
I saw lots of students acting a bit like this but I was grateful that I could dedicate myself primarily to my schooling and took as much advantage as I could to learn as much as I could.
The credential gets used as a heuristic for the learning you do, but if you show up and don't have the knowledge, then everything is harder and your labor more fruitless.
I know some people don't care and that there are degenerate workplaces but you'll still be left with having been a lot less useful in your life than you were capable of being.
So what would you do in the parents' shoes?
Teach my kids good ethics and taking responsibility for their actions, instead of jeopardizing their chances at college for the prospect of money.
> What's striking to me is that the parents sued
And the kid was even offered a redo!
On the other hand, the school caved on National Honor Society after the parents filed. So maybe the best move would have been (tactically, not as a parent) to show the school the draft complaint but never file it.
Almost zero downside. I knew a student who plagiarized 3x, so he got kicked out. His parents sued. It was even on the TV news because they were asking for hundreds of thousands in compensation. He lost and the school kept him expelled.
I was expecting the bad press coverage to hurt his college chances since there were several articles online about him getting kicked out for cheating and then suing.
Nope! Dude got into a really good school. He even ended up texting asking me for past essays I wrote to turn in as his own to his college classes.
And the kicker was he then transferred to one of the prestigious military academies that supposedly upholds honor and integrity.
So. There is almost zero downside for suing even if it gets you tons of negative publicity.
I don't think we can claim zero downside from one anecdote. There are always outliers that can occur from extenuating circumstances.
- The family potentially has the financial resources or possibly connections to 'make things happen'.
- Perhaps the student is especially charismatic and was able to somehow right the situation. Some people have that con-artist mindset where they're able to cheat/commit fraud through their life with seemingly minimal consequences.
- Perhaps they just got lucky and the administration didn't do their due diligence.
Anecdata is data if you do not have other data or anecdotes to back your claims up. They present an example, whereas you present speculation.
> Perhaps they just got lucky and the administration didn't do their due diligence.
Are universities supposed to google every applicant?
I mean I haven't been in academia for a decade, but back when I was I certainly never browsed a 17-year-old girl's instagram before making an admission decision.
Not every applicant, but for the ones in the accepted pool, it strikes me as odd that there isn't some basic amount of vetting.
Instagram? No (although, wouldn't be surprised)... but doing a gut check with the school admin and looking at public records? Sure.
You present a very interesting, specific example of Instagram, which is completely unrelated. Story time?
If I put some kid's name into Google, shouldn't I expect social media to come up?
> His parents sued. ...
> He even ended up texting asking me for past essays I wrote to turn in as his own ...
> he then transferred to one of the prestigious military academies ...
>> There is almost zero downside for suing even if it gets you tons of negative publicity.
Sounds like the caveat here should be, "when your parents/family is connected".
How on earth do you get caught plagiarizing and still pass with a C+?
I once had a student in a U.S. history class that literally copy-pasted almost the entirety of a paper from a Wikipedia article (incidentally, an article that was only tangentially related to what he was supposed to write about, which only made it more glaringly obvious something was wrong). After confronting him he told me he "had no clue" how the copying could have happened! I gave him a 0 on the paper, which caused him to fail the course, and reported the incident. But the school admins changed his grade so that he would pass. This was at a for-profit college that thankfully no longer exists (I quit after that experience).
This is how it works at most universities it seems.
I think it depends. At least at the major public university I went to grad school at, if an undergrad had pulled that there would have been extremely serious repercussions. Failing the class would have been the minimum. The bigger issue then was that students with money could just buy their papers and take-home work, which was often impossible to catch. This was before LLMs started hurting paper mills' bottom lines, and a lot has changed in the past few years though.
I used to pick up pocket money writing essays for dumb rich kids in college. If I cared, there’s enough of a paper trail to invalidate more than one degree, at least by the written rules. In the real world I doubt that the cheaters would face real consequences, and have concerns that I would lose my own credentials.
It's high school, and a public one at that. Cheating can be rampant in some schools or with some individuals.
What I find ridiculous is the parents are suing over a C+ vs B grade and a detention on the record. Like where do you see your cheating kid going in life that you're going to waste your resources and the district resources on this?
I find it completely unsurprising that parents who would sue to change a C+ into a B raised a kid who would cheat.
We elected a felon and frequent financial cheat as President, so the sky is the limit, I suppose.
Presumably, this one assignment wasn't the entire grade and the C+ was for the entire course.
> he received Saturday detention and a grade of 65 out of 100 on the assignment
The student still received a passing grade for the assignment despite some of the assignment being AI hallucinated text. From my experience, plagiarism is an automatic zero for the entire assignment or course, but there are tons of counterexamples when the teacher/professor doesn't want to deal with the academic integrity process.
- "but there are tons of counterexamples when the teacher/professor doesn't want to deal with the academic integrity process"
That's a good point: in this particular case, the teacher of the course was subpoenaed to federal court and compelled to testify about their grading. Incredible burden, for someone else's problem.
I have had the rare privilege to see up close examples of how at several US universities, when professors are presented with irrefutable proof that a student has cheated (well beyond any reasonable doubt) the professor will most often do nothing. In the best case they will meet with the student and give them a stern talking to.
The whole system is set up to disincentivize any effort to actually hold students accountable for cheating in a significant way (fail assignment, fail course, expulsion, etc.)
When we read about cases of students being held accountable it's generally the exception not the rule.
Fail to meaningfully discipline students due to fear of litigious moron parents, get sued by litigious moron parents anyway.
There need to be counter claims to cover these costs instead of it falling on taxpayers.
Last I checked, 65 was a D-, not a C+. So the C+ was for the course.
Is it plagiarizing when you copy stuff that isn't even factual? Merriam-Webster:
> to steal and pass off (the ideas or words of another) as one's own : use (another's production) without crediting the source
I guess so.
No, it's academic dishonesty: representing work that you did not do as your own work.
ETA: "Academic dishonesty" also covers things like falsifying data and willfully misattributing sources, which is a closer approximation to this case.
We wonder why our society is going to shit. This is one of those thousand cuts.
While the parents assert the C+ might keep their kid out of Stanford, the more likely impact is that being known for a nationally notorious lawsuit over a minor infraction is what will keep him out of Stanford.
Curiously, a nationally notorious lawsuit is not enough to keep you out of Stanford [0].
[0] https://www.popehat.com/p/an-anti-slapp-victory ("An Anti-SLAPP Victory")
Also, he's not getting into Stanford with the B grade that the parents are suing for anyway. You can't even get into Stanford with all A's these days.
> Also, he's not getting into Stanford with the B grade that the parents are suing for anyway. You can't even get into Stanford with all A's these days.
None of this is true.
Grades are just one part of the picture.
The folks who think a B is what kept them out of an elite school are just engaging in wishful thinking.
The number of people who get into elite schools like Harvard or Stanford with multiple Bs would surprise you.
I think you might get in with multiple Bs and a good story about your interest in the subject you're pursuing (or suitably connected family)
"good story" probably doesn't include being too uninterested to write your own answers despite parents so committed to you going to Stanford they're prepared to litigate to get you a B...
> "good story" probably doesn't include being too uninterested to write your own answers despite parents so committed to you going to Stanford they're prepared to litigate to get you a B...
So true.
This kid is a persona non grata for elite schools at this point.
As I said in the other thread, his best bet is to go to a JuCo that feeds into a decent state school, and just lay low for two years.
He can go to an elite school for a graduate degree if he wants the club membership.
Of course it's possible, but you have to have something truly extraordinary to make up for it (or be a legacy admit, rich parents who donated to the school, etc.). The B will certainly work against you.
> but you have to have something truly extraordinary to make up for it
Flip that, and you’re closer to correct for everyone.
You have to do something truly extraordinary to get in, with the things you listed as being some of the least common types.
Grades just need to be directionally correct rather than perfect.
Also, a side note about legacy admits…
While the admission rate of legacies is about 33% at Harvard (12% at Yale, 30% at Princeton, and 14% at Stanford), that doesn’t mean that being a legacy was the primary reason they got in.
First, 67% of legacies still get denied — that’s quite a bit.
Second, the folks who get into elite schools often know how to raise their kids in a way that increases their chances to get into an elite school. It’s an advantage, but much more often than not, the applicant put in the effort to make themselves a strong applicant.
The legacy “advantage” comes into play almost purely at the margin, when someone is borderline admit/waitlist or waitlist/deny, and the legacy status will push them to the favorable side. You’re not going to see a substantial difference in the folks who were rated comparably.
People seem to want it to be that legacies are freeloading off of their parents and aren’t really qualified admits, and that largely isn’t true. The exceptions are examples like z-list applicants (which you mentioned) or recruited athletes who also happen to be legacies.
I wanna see how many Asian men get in with B's
> I wanna see how many Asian men get in with B's
Please stop perpetuating this myth.
Asians are not held to a different standard.
Anecdotally (with a truckload of anecdotes), Asian-Americans (to be specific) frequently seem to be held to a widely-known standard that either they aren’t aware of or don’t believe in.
Note that this is not exclusive to Asian-Americans — plenty of upper-middle class white people fall into this category as well — but that was the group you mentioned.
I have made an open offer to HN, and it still holds:
If you show me the application of an Asian that you felt was held to a different standard for elite school admissions, then I will give you the reason why they most likely didn’t get in.
That’s not much of an offer. One can easily find whatever one is looking for, especially when specifically looking for it to prove a point :)
I personally know there is Asian-American bias (and not just Asian-American…) in admissions at at least one elite school, via one of my best friends who works in the admissions office.
> I personally know there is asian-american bias (not just asian-american…) in admissions at least one elite school via one of my best friends who works in admissions office.
Oh, interesting.
What is the specific bias they claim exists?
Fwiw, they did a full body cavity search on Harvard admissions, and the best they could come up with was describing an applicant (accurately) using race-based shorthand — something like “standard Asian reach applicant”, which (iirc) meant something like high grades, high standardized test scores… and almost nothing else. This is a complete nothingburger.
Note that this stereotype exists for a reason. It’s not exclusive to Asians, but it’s much more common with Asian applicants than other races.
Edited to add:
> That’s not much of an offer. One can easily find whatever one is looking for, especially when specifically looking for it to prove a point :)
Almost every time I’ve done this face-to-face, it wasn’t some subtle oversight — it was a glaring omission or weakness in the application.
The times that it wasn’t obvious, the person got into an elite school, just didn’t get into their elite school of choice, and that’s a different issue.
One of the hallucinated authors is literally named "Jane Doe". Our society is about to become powerfully stupid.
"Doe" is actually a real surname, with a few thousand of them in the US. I'd guess that there probably have been people actually named "Jane Doe". I wonder if that causes many problems for them?
What sorts of problems do you imagine this causing?
The name is widely used as a placeholder. Here's how Wikipedia describes it [1]:
> John Doe (male) and Jane Doe (female) are multiple-use placeholder names that are used in the British and US-American legal system and aside generally in the United Kingdom and the United States when the true name of a person is unknown or is being intentionally concealed.
I'd imagine that could lead to some difficulties when someone really named Jane Doe has to deal with some system that uses that name as a placeholder. Similar to the way people whose surname is Null sometimes run into problems because of poorly written computer systems.
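The "surname Null" problem is easy to reproduce. Here's a hedged sketch (the function and the string check are hypothetical, not from any particular system) of the kind of sloppy pipeline that causes it: values get round-tripped through text, and a "cleanup" step compares against the literal placeholder string instead of checking for the real null type.

```python
import json

def load_surname(raw: str):
    """Hypothetical sketch of the classic bug."""
    value = json.loads(raw)  # '"Null"' decodes to the string "Null"
    # Buggy placeholder check: a case-insensitive string comparison
    # conflates the real surname "Null" with missing data.
    if value is None or str(value).lower() == "null":
        return None
    return value

print(load_surname('"Smith"'))  # Smith
print(load_surname('null'))     # None (genuinely missing)
print(load_surname('"Null"'))   # None -- a real Ms. Null just vanished
```

The fix is to test only `value is None` (the decoded null), never the string's spelling, but systems that stringify everything on the way through lose that distinction.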
In legal cases that is how one can choose to remain anonymous.
See, there's stuff even geniuses don't know.
Why do you think the previous poster found that name notable? Just because it's inherently funny sounding or something?
That's not relevant to this. It's a direct quote from the work the students handed in.
AI hallucinated citations to nonexistent publications.
in this case, the ai should publish the cited hallucinated works on amazon to make it real.
not that it would help us, but the ai will have its bases covered.
Then they could train the next generation of models on those works. Nothing to scrape or ingest, since they already have the text on hand!
Discussed before the ruling:
How do you get students to engage in creative writing assignments in age of AI?
How do you get them to dive into a subject and actually learn about it?
I'm thirty something. How did my teachers engage me in doing math? How did they engage me in rote-memorizing the multiplication tables when portable calculators were already a thing, being operated by coin-cells or little solar panels?
Part of teaching is getting kids to learn why and how things are done, even if they can be done better/faster/cheaper with new technology or large-scale industrial facilities. It's not easy, but I think it's the most important part of education: getting kids to understand the underlying abstract ideas behind what they're doing, and learning that there's value in that understanding. I don't want to dichotomize, but otherwise kids will just become non-curious users of magic black boxes (with black boxes being computers, societal systems, buildings, infrastructure, supply chains, etc.).
The same way you did so before LLMs existed - you rely on in-class assignments, or take-home assignments that can't be gamed.
Giving out purely take-home writing assignments with no in-class component (in an age where LLMs exist), is akin to giving out math assignments without a requirement to show your work (in an age where calculators exist).
Many years before LLMs were ever a thing, I recall being required to complete (and turn in) a lot of my research and outlining in class. A plain "go home and write about X topic" was not that common, out of fear of plagiarism.
The same way you did it "in the age of the internet" and "in the age of TV"
You largely don't. Of course you try to motivate them. And it does work on some, mostly out of fear of an authoritarian institution that's setting the course for their life, rather than conveying the intrinsic value of the pursuit. And the rest is just glorified day-care.
Sure, use AI for research, just like using the Internet for research.
But don't copy/paste AI generated content in the same way that you don't copy/paste a chapter from a book and pass it off as your own.
Invert the assignment: provide a prompt to supply to an essay-writing AI of the student's choice, but make the assignment to critique the veracity and effectiveness of the generated essay.
It would seem that what was put into the report is clearly wrong (in this case it came from generative AI, but regardless of where it came from, it would still be wrong), so it is legitimate to mark those parts as wrong. There are other things which can be called wrong too, whether or not the use of this generative AI is permitted (and it probably makes sense to not permit it in the way that it was used in this instance), so there are many reasons why it should be marked wrong.
However, if the punishment is excessively severe, then the punishment would be wrong.
He didn't get detention for hallucinating facts. He got detention for plagiarizing hallucinations without attribution.
HOLD IT! It would be a disservice to the law to not attain all the facts before rendering a judgement. We need to hear testimony from the AI itself!
> the Harris’s lawsuit against the Hingham school committee remains alive
what does this mean if the judge already ruled in the school's favor? parents will appeal?
The parents asked for a preliminary injunction to remove the cheating from the kids record. A judge could do this prior to trial if he believes the suit likely to succeed. The judge refused the injunction because he believes the school district was likely acting in good faith and did nothing illegal.
Oh, I see, so the case is still going to court. What a waste of taxpayer money. Elon wants to cut government waste? Make it more difficult to sue by setting a higher bar by which your suit can even be accepted.
Yep, case is still alive. This just indicates the judge doesn’t see it as a slam dunk for the parents/cheater.
This would have been illegal in Italy, as their 1970 worker-protection law against automated management would kill this use of AI.
Citing nonexistent sources should lower your grade whether you used ai or not.
I am not a lawyer, but the student's defense is akin to "Ain't no rule says a dog can't play basketball" from Air Bud. There are clear rules against plagiarism, and the student copied stuff verbatim from an online source without any citations.
After reading this article, it is hard to say who is in the right here. The court could easily be wrong because they can only judge based on the facts at hand, based on presumptions they've already settled on.
On one hand, the school referenced academic honesty policy in their defense, but there are no international standards for referencing AI, many AI detection measures have false positives, and they both disallow and allow the same behavior seemingly based upon the teacher's discretion.
If you were a malign individual in a position of authority (i.e. the classic teacher being out to get a troublemaker), you could easily set up circumstances that are unprovable under these guidelines.
There is also a vested interest in academia to not create a hostile work environment, where there is no duty to investigate. They are all in it together. This has been an ongoing problem for academia for decades.
There were also several very prejudicial aspects referenced, such as the revision changes, but some people write their stuff out on paper first and then copy what's written into a document from there. This is proof of nothing because it's apples to oranges.
Finally, there are grievances made about lack of due process, and other arbitrary matters which are all too common in academia, but academia makes it very difficult to be caught in such matters short of recording every little thing, which may potentially be against the state's laws.
For example, you may be given written instructions for an assignment that are unclear, and ask for clarification, and the teacher may say something contradictory. Should students be held accountable for a teacher lying verbally (if it happened)?
It is sad that it had to come down to court, but that is just how academia operates with free money from the government. They don't operate under a standard business loss function, nor do they get rid of teachers who don't perform once they reach permanent faculty status. The documentary Waiting for Superman really drives this home, and it's only gotten worse since that documentary came out.
There are plenty of people in the comments who are just rabid against the parents, and that's largely caused by poor journalism seeking to rile people up into irrational fanatic fervor. These people didn't look at the details, they just hopped on the bandwagon.
Be rational, and realize that academia has been broken for decades, and what you read isn't necessarily the whole truth of the matter.
The parents had several valid points which went ignored because there is no smoking gun, and that is how corruption works in centralized systems, and indicates a rule by law rather than a rule of law.
Back when I was in high school CD-ROMs were brand new and you could buy encyclopedias on disc.
I made dozens of dollars selling book reports and history papers to my fellow honors class peers. Every paper was a virtually unaltered copy & paste job from Microsoft Encarta. Copy, paste into word, format using some “fancy font”, add my “customers” name, date and class to the top… print! Boom. Somebody buys me lunch.
I mean how else was I gonna have time to write shitty Visual Basic programs that used every custom control I could download in order to play/pause the CDROM’s music stuff?
A microcosm of society. Helping others cheat for profit.
Nothing modern whatsoever about it. Students at Oxford nearly two thousand years ago sold their talents to other students.
Almost one not two millennia.
Let's see - 1096 to today is ... you're right. Obviously I didn't go to Oxford, not in maths anyway.
It was high school.
Using AI in school today is heresy, yet give it a few years and "yesterday's heresy is today's canon".
I just used ChatGPT to code an HTML/CSS/JavaScript solution in an hour for coworkers who were having trouble. They were like, wow, that was fast; we were trying to figure this out for a few days. I'm skilled / an expert, but that would've taken me many hours vs. a few back-and-forths with GPT.
Overall, I feel my HTML/CSS/JavaScript skills aren't as valuable as they were.
I guess in this instance I cheated too, or is it that my developer peers haven't gotten into using GPT, or that they are more moral? Or maybe this is just the new normal....
The rules for working are very very different from being at school.
No, you were not cheating; you did what was expected of you. But you knew that.
How so? AI is changing the rules everywhere, no? Today it seems not good, yet tomorrow it's just how things are...
The goals are very different. It was like this also before AI.
The goal in school is to learn things. To learn to write you can't just copy an article from a paper and say it is yours. You have not learned.
At work, the goal is to get things done.
In our field you needed / need to learn new things to stay relevant, yet now the new thing does almost all of it for you.
As well, if one generation is using AI to get things done, why wouldn't a younger generation do the same? "Do as I say and not as I do" has never held up well over time.
But you already learned the web stack--school kids haven't. Your mental model is what prepared you to use LLMs well to solve a problem. So if they're going to do as you did, they need to learn the subject first and then learn how to extend their reach with LLMs. Otherwise, they're just cheating in school.
You don't need that knowledge, as I just went to GPT and asked it ...
"I need to create a dropdown for a website can you help me make it?"
And then I asked,
"How do I make what you wrote above work?"
It detailed the things one needs to do: copy/paste each block of code into three separate Notepad files and save each one accordingly (index.html, style.css and script.js), all in one folder. Once that's done, double-click index.html to run the dropdown.
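For the curious, the core of such a script.js is typically just a class toggle; here's a hedged sketch of that logic (the file, class, and function names are my assumptions, not the actual ChatGPT output):

```javascript
// Minimal sketch of the dropdown logic: clicking the button toggles an
// "open" class, and style.css would show the menu only while that class
// is present. A Set stands in for element.classList so the logic runs
// outside a browser; in a real script.js you'd use the DOM APIs directly.
function toggleOpen(classList) {
  if (classList.has("open")) {
    classList.delete("open");
    return false; // menu is now closed
  }
  classList.add("open");
  return true; // menu is now open
}

// In the browser this would be wired up roughly as:
//   button.addEventListener("click", () => toggleOpen(menu.classList));
const fakeClassList = new Set(); // stands in for a DOMTokenList
console.log(toggleOpen(fakeClassList)); // true  (opened)
console.log(toggleOpen(fakeClassList)); // false (closed)
```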
The kids are going to be in a different world than we are. Just like it was useful for us to learn a foreign language (still being taught in schools, but those days are numbered), for kids these days it is a waste of time (I am sure there are many studies that say being bi/tri/… lingual has benefits beyond communication, but you get my point).
I think while we may say "they need to learn the subject first…", do they really? And if they do, why? E.g. someone teaching their kid "web development" in soon-to-be 2025 is insane given the tools we have now… so while there are for sure things kids should learn, it is not easy to figure out what those things actually are.
Yeesh this is full of red flags…
What is? The new normal of using AI to do, or help you get, your job done quicker? The comment above shows it could be the new normal...
No. This attitude of being better than coworkers, coming in and saving the day. It had nothing to do with using AI. It’s about “I am better than you” instead of helping people out, or teaching them these things you know.
It’s just a passing internet comment missing all the context, so what do I know.
My comments are meant to be controversial… to get people to think… What is the future with AI and using it as such? If I told my coworkers how I achieved it, would they think less of me present day? What about in a few years or more, when it's the norm and mine and everyone's HTML, CSS, and JavaScript skills are less valuable? … This example shows that AI will definitely take people's jobs, including my own if I do not ramp up my skills.
Ramping up your skills will do nothing for you if a machine can otherwise be delegated your job, given the overhead of a human worker vs. just owning a machine's output. Not having to negotiate is extremely valuable to a business owner. Mark my words. Until people realize that the whole innovation around AI is to sidestep the labor class, things'll continue getting much darker before they brighten.
And the saddest thing is, the fools think it'll work in their favor, and won't blowback with massive unintended consequences.