What is unauthorized use of technology? Is the light I need to read not technology? Is using the internet to find more about a topic not technology? Where is the line that makes AI forbidden?
The lack of implicit or explicit authorization. Since the school provides the lights, you may assume they are implicitly authorized.
This is unproductive and unenlightening pedantry.
I think the throwaway actually raises a valid point: the rule is an exceedingly broad catch-all, the type primed for selective and weaponized enforcement.
That said, the kids are clearly defenseless in this situation, both for the blatant plagiarism and for simply being factually incorrect in their report.
> The type primed for selective and weaponized enforcement
Theoretically true, but irrelevant because this particular case isn't that.
Yes, it is broad, and probably a bad rule. That said, there is more than enough in this case beyond that simple rule that points toward intentional academic dishonesty. If he were my son, getting off on a technicality wouldn't count as exoneration in my house.
In education, the goal is for the individual to internalize the knowledge required to bootstrap into a useful, contributing member of society: things like composing written works, organizing one's thoughts into communicable artifacts, doing basic mathematics, familiarity with the local and encompassing polity and its history, knowing how to navigate and use institutions of research (libraries), etc. Any technology employed that prevents or sidesteps that internalization is unauthorized.
It ain't that hard to connect the dots unless you're going out of your way to not connect the dots.
If that is the goal of a specific task, be explicit about it. In my personal experience, it's just what older teachers say because they can't deal with the new reality. The teachers who incorporated AI and other tech into their lessons are much more successful and better rated - by parents as well as children - and their students score better on final exams too.
They were explicit about that! There was an AI policy that the student knew about and blatantly ignored. I'm not sure how much more explicit you wanted them to be.
Here is a relevant quote from TFA:
> Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations).
(And re: better ratings: sadly, making classes less useful usually improves the rating. I'm sure that if there were a class where students just watched cartoons, with super-simple quizzes at the end that everyone could answer, it'd have the highest ratings from most students, high ratings from many parents, and the best scores on finals. Now, it might not work out so hot in the real world, but by then school would be long over...)
The problem is that the school simply didn't teach them enough about the tool, and they're treating something that should be just another lesson as grounds for disciplinary action.
Why do you expect children to learn math only after months of attempts, but to understand the perils of AI after hearing a one-sentence regulation? That's not going to help the kids. You need to spend time practicing with the tool alongside them, showing them the pitfalls in practice, and only after enough time can you start grading them.
The school handed them a gun, and now they're Pikachu-surprised that a kid got shot and are blaming the kid who was holding it - but the school is to blame. And it's certainly not newsworthy; where is the article about the math exam results?
> they're treating something that should be just another lesson as grounds for disciplinary action.
Remember the "disciplinary action" here is giving the student a bad grade, with the opportunity to redo the assignment.
Are you seriously asserting they should have gotten a good grade for a paper that cites sources that don't exist? In an Advanced Placement class, no less?
If anything, they're getting off lighter than students who did a bad job on their own without using AI. I know I was never given a chance to redo a paper I phoned in.
I know my paper was never in the national news.
Did your parents sue over it? It's not like the school went running to the media.
Why do you think it's in the national news!? Can you please restate your actual view, because I'm really not sure of it anymore.
Something that can be read 10k kilometers away from the school by people who never met the kid.
The students are not expected to understand the perils of AI, they are expected to follow the policies the teacher gave them. And the policies were very clear: they can use AI for inspiration and web search, but they cannot use it to write text to be submitted.
What you are describing might make sense for "ChatGPT class", but that wasn't it, that was AP History.
(And that's how schools work in general: in real life, no one computes integrals by hand, and yet calculus classes don't teach CAS systems or their perils.)
The current trend in education is to blend subjects and methods together and create cohesive interdisciplinary lessons practicing multiple skills at once. "ChatGPT lesson" is the 19th century way.
This is like saying that they have an AI policy, but that using an LLM isn’t in violation since it is just calculus applied to words and not true intelligence.
Courts aren’t stupid for the most part. Most of them are happy to interpret things in terms of what a ‘reasonable person’ would think, for better or worse. Right now, most reasonable people would say that using an LLM to do your work without disclosing it is in violation of the spirit of the student handbook.
I could fine tune an AI, and name it after myself, and put my own name at the top of the paper as an attribution, but no reasonable person would say that was following the spirit of the law if I turned that in as a class assignment.
The part where students were given explicit guidance on the use of AI resources and told how to cite it appropriately. Besides, even aside from the use of technology it’s still a blatant case of plagiarism as he passed off work he did not write as his own.
How could you even cite AI "appropriately"? That makes about as much sense as citing Google.
Like e.g. this:
“Clarity isn’t found in answers; it’s carved from the questions we dare to ask.” - ChatGPT, https://chatgpt.com/share/67439692-b098-8011-b1df-84d3761bba...
I know that being obtuse is sometimes fun...
But no paper (even at the high school level) that does that should ever be accepted.
No paper?
If you're clear what your sources are, why does it matter who (or what) you quote?
Boris Johnson and GWB are known (for different reasons) for spouting gibberish sentences, yet cite them, and if the quote is appropriate then there's no foul; all fiction is made up, and that too can be quoted when appropriate.
When I was at school in the UK 24 years back, media studies was denigrated as "mickey mouse studies", but in retrospect the ability to analyse and deconstruct media narratives would have been useful for the country.
Now AI is a new medium.
Sometimes it will be the right thing to quote; sometimes it will be as much of an error as citing Wuthering Heights in an essay about how GWB handled the War On Terror.
20 years ago, when I was taking a graduate class in Technical Communications, we were taught how to properly cite online sources. This is far from being new.
Have you actually read the piece? The answers to those questions are in the written policy the student was given. But even without the policy, it should be pretty clear that passing off others' work as your own (be they people or AI) is academic dishonesty.
As the judge said, "the emergence of generative AI may present some nuanced challenges for educators, the issue here is not particularly nuanced"
Is what I wrote here mine or not? I used the autocorrect suggestions almost exclusively, typing only a few letters myself.
Then, no. This isn’t text you generated. No one cares on Internet forums though.
Who came up with the words? If autocorrect is acting as a typist, transferring your words to screen, you are the author.
What if I first asked ChatGPT what I should say? And what's the difference from just copy-pasting it?
The question is who comes up with the words. If you re-type a textbook, you are plagiarizing. The same happens if you re-type ChatGPT's output.
On the other hand, if you read some text first (be it ChatGPT's output, or a textbook) and then rephrase it yourself, then you are the author.
How much do you have to rephrase? Is changing every other word to a synonym enough? That's actually a gray area, and it depends on the teacher. Most teachers would expect you to at least change the sentence structure. But in this case it's completely irrelevant, as we know the students did copy/paste.
I really don't see why you are trying to present ChatGPT as something special re: plagiarism. Copying others' work is copying. Paying $10 to someone to do your homework and then copying their answer as-is is cheating. So is using ChatGPT to do it for free.
ChatGPT is not someone. It's a tool.
So is a textbook. Still not OK to copy homework from it.
A textbook has an author you can copy from. You can't copy the output of an auto-suggest; it's just yours.
It does not matter whether the author is human or computer.
If there is a spelling bee, but a student is secretly using spellcheck on their phone, they are cheating.
If there is a math speed competition, but a student is using a calculator on their phone, they are cheating.
If it's a calculus exam, but a student is using Wolfram Alpha (or a TI-89) to calculate integrals and derivatives, it is cheating.
If it's a written exam, but a student is using ChatGPT to write the text, it is cheating as well. Not that different from the previous cases.
There is no difference. They’re not your words.
These are games no one in the real world is interested in playing.
How come they're not my words? I thought of sending them, not the keyboard. Same with ChatGPT - it doesn't do anything on its own, and even if it could, it's a tool, not a person.
If he had turned in the prompt he wrote, then those would be his words.
If your position is correct, then I suppose asking another person to write an essay for you is in fact your writing as well. Which is absurd.
This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”. If AIs writing essays for you isn’t plagiarism, then nothing is.
How could it be the same if another person wrote it?
Situation A - a person uses a tool. They are the author.
Situation B - a person contracts another person. They are not the author. The other person might be using a tool, but it doesn't matter.
This is not a theoretical debate. This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit. Change the law if you want the default to be different. This is how it works now.
> This pedantry is useless. Ask any person if an essay produced by AI was written by the person writing the prompt and I think a majority will say “no”.
I'm an expert in the field. I don't need to ask mainstream people about their sci-fi-influenced opinions; they don't matter to me or to the law. This is how it works: it's not magic, it's not sci-fi, it's not a person, it can't be an author, and thus it can't be plagiarized by the user, nor can the user violate the tool's copyright by using the output.
It's a tool that authors can use to create their works, and even if all they did was push a single button, they are the author. Compare this to me applying a filter to a blank canvas to produce an abstract-art wallpaper in Photoshop - am I the author, or is Photoshop? Let me tell you, don't try to steal my work. I did it, not Photoshop; that was just a tool that made it easier for me.
Same with ChatGPT - this morning I used it to write an architectural proposal. It's my proposal, there is no way I somehow "plagiarized" something. Doesn't matter that I pushed like 50 buttons in total; it's my work, I am the author.
--
And if schools don't actively teach children to use this technology, they should be reformed. If this school were in my city, I'd vote for the party that would cancel it - for the extremely serious offense of letting children think it's wrong to use tools. At the very least they should explain that the goal lies elsewhere and why a particular tool shouldn't be used for that task. It's not like children will just know, you know.
This is just like when teachers at my own school told us that Wikipedia is a bad source and we can't use it because it can't be trusted. That's total bullshit, just be honest - this assignment is supposed to test your writing skills, not tool usage skills.
> This is how the current legal framework works, and if you expect someone to behave differently, you have to be explicit.
The classroom teacher was explicit in their expectations. The student did not follow the instructions. Did you RTFA?
> It's a tool that authors can use to create their works
This isn't about authorship or ownership. It doesn't matter whether the words are "yours" or not - that you keep making that a point of contention is a semantic sideshow. This isn't a commercial or artistic enterprise. If the student wants to publish their copy-pasted, error-ridden slop (or masterful slop!), then by all means, they can go for it.
Rather, this is a classroom, and the goal is to learn, think, and demonstrate understanding. Copy-pasting responses from AI does not do this (even setting aside the explicit instructions not to do so). Similarly, plagiarism isn't bad in high school because it hurts other authors; it's bad because it's a lazy end-run around the process of learning and thinking that undermines those goals and creates habits of avoiding critical thinking.
> a person uses a tool
If you simplify and trivialize everything to an extreme degree, then yes, you might have a point…
> Wikipedia is a bad source and we can't use it because it can't be trusted
No, but you still shouldn't use it for anything beyond research (i.e., you should cite the sources the Wikipedia article is based on directly).
> just be honest - this assignment is supposed to test your writing skills,
Isn’t that more than perfectly obvious already?
It's important to understand that all ChatGPT knows is what's in its training set and what was in the prompt. I see it all the time in programming: if you are building boring, run-of-the-mill websites or plugging other people's libraries together, then AIs like Copilot have a very high success rate. But try something new or unusual (even a bit unusual, like QNX), and the lack of training data means the number of hallucinations becomes incredibly high, and the resulting code is often outright wrong.
A lot of the time that's fine - after all, there are programmers who spend their careers never writing an original algorithm, and they could definitely use the help. And in many cases an original architecture is actually a bad idea: stick to boring technology, and create your architecture proposal by re-blending previous projects with a few changes. Nothing wrong with that.
But school tries to be better than that (it does not always succeed, but at least it tries). That's why students are taught how to multiply numbers by hand, and only then are they allowed calculators. They must be taught how to find primary sources before being allowed to use Wikipedia. And they must be taught how to write their thoughts in their own words before being allowed to use ChatGPT.
Sure, some of them will go on and never multiply numbers by hand, nor ever read a primary source nor ever create a highly original proposal. That's fine and even expected. Still, the school aims high even if not everyone can get there.
Somewhere in there is the problem. ChatGPT & Co. should be viewed as tools - they can do research, they can discuss and echo your ideas, they can write FOR YOU (or at least that's the idea, when they don't go nuts). And if they are your tool, you are doing the work - you are the author.
If ChatGPT copies the Wikipedia article, you are now in trouble. Your tool caused you to plagiarize without your even noticing, but that won't save you. The same goes if you include complete nonsense generated by your tool. Still your fault (although some lawyers have been getting away with it).
If you copy the Wikipedia article yourself, at least you know you cheated.
But equating use of the tool with plagiarism itself is absurd. What is possible is that the assignment is now outperformed / overpowered by the tool - it may now be trivial to answer the problem. A few clicks?
But then, it's a long tradition for schools to outlaw certain tools in an effort to teach the specific skill the tool would replace: calculator out of bounds, or only a basic calculator and no computer, no books, slide rule only, etc.
Yes, exactly right. If your tool caused you to plagiarize, it's your problem - you should have checked the output. You can't point at ChatGPT; that'd be like pointing at Word. You are the author, and thus you are responsible.
Schools need to adapt. Explain why a tool is forbidden - learning is more efficient when kids know the purpose. Use different, more engaging methods to teach thinking skills, and teach overcoming problems with constantly evolving high-tech tools - instead of today's rigid lesson/homework style. Teach them that change is constant and that they need to adapt. This is not theoretical - there are state-sponsored schools operating this way where I live. It's great.
Come on, it's not that complicated. The judge determined that the student copy/pasted the AI output and submitted it. That's not the same as using AI for research, just as you would use Google Search for research.