> The parents' case hangs largely on the student handbook's lack of a specific statement about AI, even though that same handbook bans unauthorized use of technology. "They told us our son cheated on a paper, which is not what happened," Jennifer Harris told WCVB last month. "They basically punished him for a rule that doesn't exist."
I'm going out on a limb here, but if this is the viewpoint of the people who raised him, then I'm not surprised he cheated.
If this was my son and the facts were the same, he'd be grounded in addition to whatever consequence the school deems fit.
> I'm going out on a limb here, but if this is the viewpoint of the people who raised him, then I'm not surprised he cheated.
At my child's elementary school, and now middle school, the teachers openly encourage using AI chat for schoolwork, to help the kids understand both the usefulness and the uselessness of it.
I think that's great. AI isn't going away, so pretending students won't be using it at all times, both now and in their lives, is pretty naive and hopeless. Better to embrace it and teach them what it can and can't do.
From the article:
> the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).
They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books". Which would also be a good lesson in why AI is useful but you can't trust it at all without validation.
> They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books".
As far as I am aware, they weren’t punished for using AI specifically; they were punished for straight up copypasting AI output, thus violating cheating policies (in addition to including hallucinated, non-existent citations). Whether it was copypasted from AI or from another real person is irrelevant; the AI part just made it easier to prove in this particular case.
The school even affirmed in their statement that AI was allowed to be used for brainstorming topics or identifying topics (quote directly from the article):
“Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application”
the purpose of school is to prepare students for the real world, and in the real world, there's quite the delta between copying off AI and plagiarism. Copying off AI may annoy some people, but there's no law against it (at least right now)
> copying off ai may annoy some people, but there is no law against it
There is no law against plagiarizing on assignments by copying off a real human either, what’s your point?
If AI generated an assignment paper for me, and I simply copypasted it and turned it in as my own, I don’t see it being materially different from the same being done with a paper written by another human (rather than AI).
> the purpose of school is to prepare students for the real world
The purpose of each individual class is to teach students the skills that the class is supposed to teach. If the class in the OP was on proper usage of AI, that would be a different story, but it wasn’t.
Similarly, you wouldn’t write your own memory allocator in the real world, you would use an existing library for that. That doesn’t mean you will get away with just simply importing an existing memory allocator library in your assignment code and calling it a day.
>AI isn't going away, so pretending students won't be using it at all times, both now and in their lives, is pretty naive and hopeless. Better to embrace it and teach them what it can and can't do.
I agree with that, but school should focus on fundamentals. The temptation and the ability of AI to do the schoolwork for students so they don't need to put in any effort is a real problem; learning should require some effort, and AI could remove that part for students. On the other hand, AI could be used for some types of education, but then people tend to throw the baby out with the bathwater and declare AI the principal tool for learning, further diminishing the role of professors and mentors.
> > the students had indiscriminately copied and pasted text from the AI application
If that's what they were doing, then I agree that the real problem wasn't the use of AI as such. The real problem was that this is clearly plagiarism. Punishing it the same as any other plagiarism sounds appropriate.
Is it clearly plagiarism? I wouldn't say it is that clear-cut, since in a sense the output of an LLM to a prompt you give it could still be seen as something you produced -- albeit with the help of a magical matmul genie.
Yes. It’s clearly plagiarism. Your reply is clearly grasping at the furthest of straws in an attempt to be contrarian and add another “stochastic parrot hehe!” comment to the already overflowing pile. Line up 100 people and the only ones agreeing with you are other wannabe contrarians.
I truly don't understand the tone of your comment.
I'm not grasping at the furthest of straws, I see a distinction between 'verbatim copying someone else's work' and 'verbatim copying the results of a tool that produces text'.
Plagiarism isn’t the copying part, it’s the part where you claim to be the author of something you are not the author of. Hope that helps clear things up. You can plagiarize content that you are both legally and ethically allowed to copy. It doesn’t matter the least bit. If you claim to be the author of content you didn’t author and lack attribution, AI or otherwise, then you’re plagiarizing the content.
> A translation tool like DeepL is presumably trained on a huge amount of 'other people's work'. Is copying its result verbatim into your own work also plagiarism then?
Yes, if you present yourself as its author.
So let's say you are not a native English speaker and write a passage of your paper in your native language, then let DeepL translate this and paste the result into your paper, without a note or citation. Is that plagiarism?
the tool actually produces text… of someone else’s work… that you then copy… verbatim… :)
But the text itself is not someone else's work verbatim.
A translation tool like DeepL is presumably trained on a huge amount of 'other people's work'. Is copying its result verbatim into your own work also plagiarism then?
plagiarism - by definition - is copying someone else’s work.
the easier definition is “did YOU write this?” if the answer is no, you plagiarised it and should be punished to the full extent.
'Someone else's work' -- exactly. Not 'the output of some tool'.
I'm not saying what the guy did wasn't wrong or dumb, I'm saying: Plagiarism has a strict definition, and I don't think it can be applied to the case of directly copying the output of an LLM -- because plagiarism refers to copying the work of another author, and I don't think LLMs are generally regarded (so far) as being able to have this kind of 'authorhood'.
plagiarism does NOT refer to copying the work of another author, it refers to you submitting work as yours that you didn’t yourself write.
if I copy an entire article from the Economist, did I plagiarize!? There is no author attribution, so we don’t know the author… Many articles in media today are LLM generated (fully or partially); can I copy those if someone sticks their name on as author?!
bottom line is - you didn’t do the work but copied it from elsewhere, you plagiarized it, period
I'll just link here to another comment I made that sums up my argument quite well, I think:
It seems clear to me. The student is claiming that he wrote something that he didn't write.
Definition of plagiarism, by the Cambridge Dictionary:
"the process or practice of using another person's ideas or work and pretending that it is your own"
What I am objecting to is the "another person's" part. An LLM is not a person, it is a tool -- a tool that is trained on other people's work, yes.
If you use a different tool like DeepL, which is also trained on other people's work, to produce text purely from an original prompt you give it (i.e. translate something you wrote yourself), and you put that into your paper... is that then plagiarism as well? If not, what if you use an LLM to do the translation instead, instructing it to act strictly as a 'translation tool'?
It seems to me the mere act of directly copying the output of an LLM into your own work without a reference cannot, in every case, be considered plagiarism, unless LLMs are considered people.
Of course, you can prompt an LLM in a way that copying its output would _definitely_ be plagiarism (i.e., asking it to quote the Declaration of Independence verbatim, and then simply copying that).
So, all I'm saying is: The distinction is not that clear, has nuances, and depends on the context.
By your argument, since an encyclopedia is not a person, I can copy it with impunity. It's a collection of work built on others' ideas and research, but technically a tool to bring it together. I can assure you that virtually any school would consider the direct use of it, without citation, plagiarism.
Let's assume I used an encyclopedia outside of my native tongue. I took the passage verbatim, used a tool to translate it to my native tongue, and passed it off as my own. The translation tool is clearly not a person, and I've even transformed the original work. I might escape detection, but this is still plagiarism.
Do you not agree?
Let's go to how Cambridge University defines it academically:
> Plagiarism is defined as the unacknowledged use of the work of others as if this were your own original work.
> A student may be found guilty of an act of plagiarism irrespective of intent to deceive.
And let's go to their specific citation for the use of AI in research:
> AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge
> By your argument, since an encyclopedia is not a person, I can copy it with impunity.
I don’t see where they said (or implied) that.
How does “that isn’t plagiarism” imply “I can copy it with impunity”? Copyright infringement is still a thing.
Have you conflated plagiarism with copyright infringement? Neither implies the other. You can plagiarize without committing copyright infringement, and you can violate copyright without plagiarism.
I'm sorry, but this encyclopedia analogy really doesn't say anything at all about the argument I raised. An encyclopedia is the work of individual authors, who compiled the individual facts. It is not a tool that produces text based entirely on the prompt you give it. Using an encyclopedia's entries (translation or not) without citing the source is plagiarism, but that doesn't have any parallel to using an LLM.
(Also, the last quote you included seems to directly support my argument)
The translation software isn't a person. It will necessarily take liberties with the source material, possibly even in a non-deterministic fashion, to translate it. Why would it be any different from an LLM as a tool in our definition of plagiarism?
If I used a Markov chain (arguably a very early predecessor to today's models) trained on relevant data to write the passage, would that be any different? What about an RNN? What would you qualify as the threshold the tool needs to cross for its use to count as plagiarism?
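(As an aside, a word-level Markov chain really is only a few lines of code, which is part of why it's an interesting edge case here. A toy sketch, with a made-up corpus purely for illustration:)

```python
import random
from collections import defaultdict

def train_markov(text):
    # Word-level bigram model: each word maps to the list of
    # words observed to follow it in the training text.
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    # Walk the chain from `start`, sampling one observed
    # successor per step; stop if a word has no successors.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_markov(corpus)
print(generate(model, "the", length=8))
```

Every word pair the generator emits was already present in the training data, which is exactly what makes the "is the output someone else's work?" question murky.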
when did he imply that an LLM would be different as a tool than a translator in his definition of plagiarism? are you even understanding his points lmao?
There's nuances to the amount of harm dealt to the authors based on what sources you are stealing from, but it's irrelevant here, as the specific incident we're talking about is whether or not the student is the actual author of the work submitted.
It'd be the same as if I had Google Translate do my German 101 exam. I even typed the word "germuse" with my own two thumbs!
What we are talking about in this sub-thread is exclusively the 'this is clearly plagiarism' part.
If you used Google Translate for your German 101 exam, that would be academic dishonesty/cheating, but not plagiarism.
I'm largely uninterested in the specific name you want to give it and more if its worthy of punishment.
> What I am objecting to is the "another person's" part.
Fair enough. We disagree about definitions here. To me, plagiarizing is claiming authorship of a work that you did not author. Where that work came from is irrelevant to the question.
> If not, what if you use an LLM to do the translation instead, instructing it to act strictly as a 'translation tool'?
Translation is an entirely different beast, though. A translation is not claiming to be original authorship. It is transparently the opposite of that. Nobody translating a work would claim that they wrote that work.
> Fair enough. We disagree about definitions here. To me, plagiarizing is claiming authorship of a work that you did not author. Where that work came from is irrelevant to the question.
This is exactly what it is ... the post is taking "another person's" way too literally - especially given that we are in the year of our Lord 2024/2025. One of the author's comments above also discards the encyclopedia argument by stating that encyclopedias are written by people, which cannot ever be factually proven (I can easily ask an LLM to create an encyclopedia and publish it). Who is "another person" on a Wikipedia page?! "a bunch of people" ... and how is an LLM trained? "a bunch of people, a bunch of facts, a bunch of ____"
The crux of this whole "argument" isn't that plagiarism is "another person's work" it is that you are passing work as YOURS that isn't YOURS - it is that simple.
Well, I understand, and I suspect that a lot of people commenting here see the term similarly to you; but there's an official definition regardless of your personal interpretation, and it does include the 'somebody else's work' part.
Why is translation a different beast? It produces text based on a prompt you give it, and it draws from vast amounts of the works of other people to do so. So if a translation tool does not change the 'authorship' of the underlying text (i.e., if it would have been plagiarism to copy the text verbatim before translating it, it would be plagiarism after; and the same for the inverse), then it should also be possible for an LLM to not change the authorship between prompt and output. Which means, copying the output of an LLM verbatim is not necessarily in itself plagiarism.
> but there's an official definition regardless of your personal interpretation, and it does include the 'somebody else's work' part.
No, it doesn't. First of all, dictionaries aren't prescriptive and so all quoting a definition does is clarify what you mean by a word. That can be helpful toward understanding, of course.
That said, the intransitive verb form of the word does not require "somebody else's work" in the sense of that "someone else" being a human.
> to commit literary theft : present as new and original an idea or product derived from an existing source
-- Merriam-Webster, https://www.merriam-webster.com/dictionary/plagiarize
According to this, what it means is taking credit for a work you did not produce. That work did not have to be produced by a human, it merely had to exist.
> Why is translation a different beast?
Because it doesn't produce a new work, it just changes the language that work is expressed in. "Moby Dick" is "Moby Dick" regardless of what language it has been translated to. This is why the translator (human or otherwise) does not become the author of the work. If you were to run someone else's novel through a translator and claimed you wrote that work, you would in every respect be committing plagiarism both by the plain meaning of the word and legally.
> copying the output of an LLM verbatim is not necessarily in itself plagiarism.
Yes, it is. You would be taking credit for something you did not author. You would be doing the same if you took credit for a translation of someone else's work.
Did he write it? Did he write 99% of it? 98%? Less than 5% of it?
Then did he represent it as his own work?
>They should not be punished for using AI. They should, however, just get a very bad grade for "including citations to nonexistent books".
This. We should apply this sort of logic in most situations: penalize for the actual 'crime'. Worrying about meaningless details outside of that is just a distraction that confuses the issue.
It was considered cheating to copy and paste my paper from Wikipedia.
Why is it any different in this case?
Agreed.
Only someone trying to cheat would use the excuse that it wasn’t explicitly stated that AI was cheating.
This reminds me of the court case where they asked the court to define child pornography and they said “I can’t define it, but I know it when I see it.”
Imagine saying with a straight face that some pictures you have of a minor are fine because this particular pose, facial expression, and clothing wasn’t specifically designated as child porn. It would instantly make you sound like a pedo, just as he sounds like a cheater.
> This reminds me of the court case where they asked the court to define child pornography and they said “I can’t define it, but I know it when I see it.”
If you’re referring to the famous statement from Justice Potter Stewart’s concurrence in Jacobellis v. Ohio, that comment was in reference to defining “hardcore pornography,” not child pornography.
Exactly, it wasn't about CP (in particular) at all, just pornography. Which makes it a really horrible ruling: at least with CP you can use age limits, though there are still huge controversies about 1) pictures by parents of their kids doing things like taking a bath and 2) artwork/animation that isn't even photography and never involved any actual children.
Stewart's ruling was ridiculous: how is anyone supposed to know whether something is "pornographic" or not if it can just be determined by some judge arbitrarily and capriciously, and there's no objective standard whatsoever? They could just rule any picture of an unclothed person to be illegal, even if it's something medical in nature. Heck, they could even rule a medical diagram to be porn. Or anything they want, really.
> though there's still huge controversies about 1) pictures by parents of their kids doing stuff like taking a bath
Is this a controversy in the US?
I am fine with people worrying about Zuckerberg or Sundar potentially watching their naked kids, but I guess the angle is that the parents would be 'perverts', and not that the surveillance state is a problem?
I think Germany has exclusions for pictures of your kids taking baths. That’s just pretty common, and midwives will tell you to take a picture when they show you how to bathe your baby for the first time.
We're talking about an American court ruling here, and America is well-known for being far more prudish than Germany, so I'm sure there's no exclusion there for kids taking baths.
Back in the 70's there was a sex-ed book called "Show Me" which featured very frank photographs of both adults and children, completely naked.
It was subject to litigation at the time it was released and was upheld as not being obscene. But it continued to cause moral panics and was eventually taken out of print by the publisher.
But it was never banned and can still be found as a collector's item. And in the mid 70's it was completely normal to walk into a book store and see it on display.
It's such a stupid argument. That because the handbook didn't list the specific method of cheating it was therefore allowed.
How about just don't cheat?
Highly successful people like Elon Musk and Donald Trump succeed in part because they push the rules really hard, and they win enough in the cases when it matters. So it's a proven strategy, though it doesn't always work.
Criminals push the rules really hard too. It's just that they're only labeled criminals by society if they're caught, prosecuted, and punished.
People who "push the rules" (I call this cheating, if I was playing a tabletop game with someone who did this, I would say they were cheating too) have an unfair advantage over the people who do follow the rules.
It's like sociopaths: smart sociopaths become successful businesspeople and politicians, while stupid sociopaths end up in prison.
People who are good at pushing the rules and have some kind of knack for knowing which rules to push and how far, end up succeeding over the rule-followers, while the people who lack this talent and push the wrong rules, or push too far, end up failing somehow (and maybe in prison).
It's no good cherry-picking success stories to evaluate a strategy. You have to know how often it fails too.
Anyway, "pushing the rules" is so vague, and it could be done intelligently, ethically, or otherwise.
In this particular case the student copy-pasted AI output into an assignment and tried to pass it off as his own work. I mean, come on... the fact this went to court is just absurd.
What's the relevance? Are you going to embark on a "Let's Educate The Users" mission for parenting?
It would be futile. Parents and children are now united in not wanting to be educated.
This has got to be specific to the US, yes? None of my overseas colleagues have this attitude toward education, and my wife (now an American, but an immigrant) certainly doesn't.
It seems to highly correlate with religious fundamentalism, IMO, which has a great many angry adherents worldwide, as well as in the US.
What is unauthorized use of technology? Is the light I need to read not technology? Is using the internet to find more about a topic not technology? Where is the line that makes AI forbidden?
The lack of implicit or explicit authorization. As the school has lights, you may assume they are authorized implicitly.
This is unproductive and unenlightening pedantry.
I think the throwaway actually raises the valid point about the rule being an exceedingly broad catchall. The type primed for selective and weaponized enforcement.
That said, the kids are clearly defenseless in this situation, for blatant plagiarism as well as for just being factually incorrect in their report.
> The type primed for selective and weaponized enforcement
Theoretically true, but irrelevant because this particular case isn't that.
Yes, it is broad, and probably a bad rule. That said, there is more than enough beyond that simple rule in this case that points toward intentional academic dishonesty. If he were my son, getting off on a technicality wouldn’t be exoneration in my house.
In education, the goal is internalizing in the individual the knowledge required for them to bootstrap into a useful, contributing member of society. Things like composition of written works, organizing one's thoughts into communicable artifacts, doing basic mathematics, familiarity with the local and encompassing polity and its history, how to navigate and utilize institutions of research (libraries), etc. Any technology employed that prevents or sidesteps that internalization is unauthorized.
It ain't that hard to connect the dots unless you're going out of your way to not connect the dots.
If that is the goal of a specific task, be explicit about that. From my personal experience, it's just what old teachers say because they can't deal with the new reality. The teachers who incorporated AI and other tech into their lessons are much more successful and better rated - by parents as well as children, and the final exams too.
They were explicit about that! There was an AI policy that the student knew about and blatantly ignored. I am not sure how much more you could want.
Here is a relevant quote from the TFA:
> Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).
(And re better ratings: sadly, making classes less useful usually improves the rating. I'm sure that if there were a class where they just watched cartoons, with super-simple quizzes at the end that everyone could answer, it'd have the highest ratings from most students, high ratings from many parents, and the best scores on finals. Now, it might not work that well in the real world, but by that time school would be long over...)
The problem is that the school simply didn't teach them enough about the tool, and they're treating something that should be just another lesson as grounds for disciplinary action.
Why do you expect children to know math only after months of attempts, but to understand the perils of AI after hearing a one-sentence regulation? That's not going to help the kids. You need to spend time practicing with the tool alongside them, showing them the pitfalls in practice, and only after enough time can you start grading them.
The school handed them a gun and they're Pikachu-surprised a kid got shot, and now they're blaming the kid that had it in his hands, but the school is to blame. And it's certainly not newsworthy; or where is the article about the math exam results?
> they're treating something that should be just another lesson as disciplinary action.
Remember the "disciplinary action" here is giving the student a bad grade, with the opportunity to redo the assignment.
Are you seriously asserting they should've gotten a good grade for a paper that cites sources that don't exist? In an advanced placement class no less?
If anything, they're getting off lighter than students who did a bad job on their own without using AI. I know I was never given a chance to redo a paper I phoned in.
I know my paper was never in the national news.
Did your parents sue over it? It's not like the school went running to the media.
Why do you think that it’s in the national news!? Can you please re-state your actual view because I’m really not sure of it anymore.
Something that can be read 10k kilometers away from the school by people who never met the kid.
The students are not expected to understand the perils of AI, they are expected to follow the policies the teacher gave them. And the policies were very clear: they can use AI for inspiration and web search, but they cannot use it to write text to be submitted.
What you are describing might make sense for "ChatGPT class", but that wasn't it, that was AP History.
(And that's how schools work in general: in real life, no one computes integrals by hand, and yet calculus classes do not teach CAS tools or their perils.)
The current trend in education is to blend subjects and methods together and create cohesive interdisciplinary lessons practicing multiple skills at once. "ChatGPT lesson" is the 19th century way.
This is like saying that they have an AI policy, but that using an LLM isn’t in violation since it is just calculus applied to words and not true intelligence.
Courts aren’t stupid for the most part. Most of them are happy to interpret things in terms of what a ‘reasonable person’ would think, for better or worse. Right now, most reasonable people would say that using an LLM to do your work without disclosing it is in violation of the spirit of the student handbook.
I could fine tune an AI, and name it after myself, and put my own name at the top of the paper as an attribution, but no reasonable person would say that was following the spirit of the law if I turned that in as a class assignment.
The part where students were given explicit guidance on the use of AI resources and told how to cite it appropriately. Besides, even aside from the use of technology it’s still a blatant case of plagiarism as he passed off work he did not write as his own.
How could you even cite AI “appropriately”? That makes about as much sense as citing Google..
Like e.g. this:
“Clarity isn’t found in answers; it’s carved from the questions we dare to ask.” - ChatGPT, https://chatgpt.com/share/67439692-b098-8011-b1df-84d3761bba...
I know that being obtuse is sometimes fun..
But no paper (even high school level) which does that should ever be accepted..
No paper?
If you're clear what your sources are, why does it matter who (or what) you quote?
Boris Johnson and GWB are known (for different reasons) for spouting gibberish sentences, yet cite them and, if the quote was appropriate, then no foul; all fiction is made up and that too can be quoted when appropriate.
When I was at school in the UK 24 years back, media studies was denigrated as "mickey mouse studies", but in retrospect the ability to analyse and deconstruct media narratives would have been useful for the country.
Now AI is a new media.
Sometimes it will be the right thing to quote; sometimes it will be as much of an error as citing Wuthering Heights in an essay about how GWB handled the War On Terror.
20 years ago, when I was taking a graduate class in Technical Communications, we were taught how to properly cite online sources. This is far from being new.
Have you actually read the piece? The answers to those is in the written policy the student was given. But even without the policy, it should be pretty clear that passing others' work as your own (be they people or AI) is academic dishonesty.
As the judge said, "the emergence of generative AI may present some nuanced challenges for educators, [but] the issue here is not particularly nuanced".
Is what I wrote here mine or not? I used the autocorrect suggestions almost exclusively, and typed only a few letters myself.
Then, no. This isn’t text you generated. No one cares on Internet forums though.
Who came up with the words? If autocorrect is acting as a typist, transferring your words to screen, you are the author.
What if I first asked ChatGPT what should I say? And what's the difference from just copy pasting it?
The question is who comes up with the words. If you re-type a textbook, you are plagiarizing. Same happens if you re-type ChatGPT output.
On the other hand, if you read some text first (be it ChatGPT's output, or a textbook) and then rephrase it yourself, then you are the author.
How much do you have to rephrase? Is changing every other word to a synonym enough? That's actually a gray area, and it depends on the teacher. Most teachers would expect you to at least change the sentence structure. But in this case it's completely irrelevant, as we know the students did copy/paste.
I really don't see why you are trying to present ChatGPT as something special re plagiarism. Copying others' work is copying. Paying $10 to someone to do your homework and then copying their answer as-is is cheating. So is using ChatGPT to do it for free.
ChatGPT is not someone. It's a tool.
So is a textbook. Still not OK to copy homework from it.
A textbook has an author that you can copy from. You can't copy the output of an autosuggest; it's just yours.
It does not matter if the author is human or a computer.
If there is a spelling bee, but a student is secretly using spellcheck on their phone, they are cheating.
If there is a math speed competition, but a student is using a calculator on their phone, they are cheating.
If it's a calculus exam, but a student is using Wolfram Alpha (or a TI-89) to calculate integrals and derivatives, it is cheating.
If it's a written exam but a student is using ChatGPT to write the text, it is cheating as well. Not that different from the previous cases.
There is no difference. They’re not your words.
These are games no one in the real world is interested in playing.
How come they're not my words? I thought of sending them, not the keyboard. Same with ChatGPT, it doesn't do anything on its own - even if it could, it's a tool, not a person.
If he had turned in the prompt he wrote, then this would be his words.
If your position is correct, then I suppose asking another person to write an essay for you is in fact your writing as well. Which is absurd.
This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”. If AIs writing essays for you isn’t plagiarism, then nothing is.
How could it be the same if another person wrote it?
Situation A - a person uses a tool. They are the author.
Situation B - a person contracts another person. They are not the author. The other person might be using a tool, but it doesn't matter.
This is not a theoretical debate. This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit. Change the law if you want the default to be different. This is how it works now.
> This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”.
I'm an expert in the field. I don't need to ask mainstream people about their sci-fi-influenced opinions; they don't matter to me or to the law. This is how it works: it's not magic, it's not sci-fi, it's not a person. It can't be an author, so it can't be plagiarized by the user, nor can the user violate the tool's copyright by using its output.
It's a tool that authors can use to create their works, and even if all they did is push a single button, they are the author. Compare this to me applying a filter on a blank canvas to produce an abstract art wallpaper in Photoshop - am I the author or is Photoshop? Let me tell you, don't try to steal my work. I did it, not Photoshop, that was just a tool that made it easier for me.
Same with ChatGPT - this morning I used it to write an architectural proposal. It's my proposal, there is no way I somehow "plagiarized" something. Doesn't matter that I pushed like 50 buttons in total; it's my work, I am the author.
--
And if schools don't actively teach children to use this technology, they should be reformed. If this school was in my city, I'd vote for the party that will cancel it - for the extremely serious offense of letting the children think it's wrong to use tools. At least they should explain that the goal is elsewhere and why a particular tool shouldn't be used for that task. It's not like children will just know, you know.
This is just like when teachers at my own school told us that Wikipedia is a bad source and we can't use it because it can't be trusted. That's total bullshit, just be honest - this assignment is supposed to test your writing skills, not tool usage skills.
> This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit.
The classroom teacher was explicit in their expectations. The student did not follow the instructions. Did you RTFA?
> It's a tool that authors can use to create their works
This isn't about authorship or ownership. It doesn't matter whether the words are "yours" or not - that you keep making that a point of contention is a semantic sideshow. This isn't a commercial or artistic enterprise. If the student wants to publish their copy-pasted, error-ridden slop (or masterful slop!), then by all means, they can go for it.
Rather, this is a classroom: the goal is to learn, think, and demonstrate understanding. Copy-pasting responses from AI does not do that (even setting aside the explicit instructions not to do so). Similarly, plagiarism isn't bad in high school because it hurts other authors; it's bad because it's a lazy end-run around the process of learning and thinking that undermines those goals and builds habits of avoiding critical thinking.
> a person uses a tool
If you simplify and trivialize everything to an extreme degree, then yes, you might have a point…
> Wikipedia is a bad source and we can't use it because it can't be trusted
No, but you still shouldn’t use it for anything besides research (i.e. you should be citing the sources the Wikipedia article is based on directly).
> just be honest - this assignment is supposed to test your writing skills,
Isn’t that more than perfectly obvious already?
It's important to understand that all ChatGPT knows is what's in its training set and what was in the prompt. I see it all the time in programming - if you are building boring, run-of-the-mill websites or plugging other people's libraries together, then AIs like Copilot have a very high success rate. But try something new or unusual (even a bit unusual, like QNX), and the lack of training data means the number of hallucinations becomes incredibly high, and the resulting code is often outright wrong.
A lot of times that's fine - after all, there are programmers who spend their careers never writing an original algorithm, and they could definitely use the help. And in a lot of cases an original architecture is actually a bad idea - stick to the boring technology, create your architecture proposal based on re-blending previous projects with a tiny bit of changes. Nothing wrong with that.
But school tries to be better than that (it does not always succeed, but at least it tries). That's why students are taught how to multiply numbers by hand, and only then are they allowed calculators. They must be taught how to find primary sources before being allowed to use Wikipedia. And they must be taught how to write their thoughts in their own words before being allowed to use ChatGPT.
Sure, some of them will go on and never multiply numbers by hand, nor ever read a primary source nor ever create a highly original proposal. That's fine and even expected. Still, the school aims high even if not everyone can get there.
Somewhere in there is the problem. ChatGPT & Co should be viewed as tools - they can do research, they can discuss and echo your ideas, they can write FOR YOU. (Or at least that's the idea, when they don't go nuts). And if they are your tool, you are doing the work - you are the author.
If ChatGPT copies the Wikipedia article, you are now in trouble. Your tool caused you to plagiarize without you even noticing, but that won't save you. Same if you include complete nonsense generated by your tool. Still your fault (although some lawyers have been getting away with it).
If you copy the Wikipedia article yourself, at least you know you cheated.
But equating the tool with plagiarism is absurd. What is possible is that the problem is now outperformed / overpowered by the tool - it may now be trivial to answer. A few clicks?
Then again, it's a long tradition for schools to outlaw certain tools in order to teach the specific skill the tool would replace: calculator out of bounds, or only a basic calculator and no computer, no books, slide rule only, etc.
Yes, exactly right. If your tool caused you to plagiarize, it's your problem - you should've checked the output. You can't point to ChatGPT, it'd be like pointing to Word. You are the author and thus you are responsible.
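To make "you should've checked the output" concrete: here's a minimal sketch (my own illustration, not anything from the case) of the crudest possible verbatim-overlap check, flagging a submission that shares long word sequences with a known source. The function names, the 6-word window, and the sample texts are all made up for the example; real plagiarism detection is far more sophisticated.

```python
# Crude check for verbatim copying: measure how many 6-word sequences
# (6-grams) of a submission also appear in a known source text.
def ngrams(text, n=6):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, source, n=6):
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    # Fraction of the submission's n-grams found verbatim in the source.
    return len(sub & ngrams(source, n)) / len(sub)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "the quick brown fox jumps over the lazy dog near the river bank"
original = "a fast auburn fox leapt across a sleepy hound by the water"

print(overlap_ratio(copied, source))    # verbatim copy: ratio is 1.0
print(overlap_ratio(original, source))  # rephrased text: ratio is 0.0
```

Even this toy check would catch a straight copy/paste, which is the point: the responsibility to run some check, any check, sits with the person submitting the work.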
Schools need to adapt. Explain why a tool is forbidden - learning is more efficient when kids know the purpose. Use different, more engaging methods to teach mental skills, and teach overcoming problems with various always-evolving hi-tech tools, instead of today's rigid lesson/homework style. Teach them that change is constant and that they need to adapt. This is not theoretical - there are state-sponsored schools operating this way where I live. It's great.
Come on, it's not that complicated. The judge determined that the student copy/pasted the AI output and submitted it. That's not the same as using AI for research the way you'd use Google Search for research.