
Posted by PretzelFisch 5 hours ago

Training students to prove they're not robots is pushing them to use more AI (www.techdirt.com)
131 points | 132 comments
etempleton 3 hours ago|
When I was in high school, I was a better writer when I had time (versus in class), and generally a better writer than I was a student. The net result was being accused of plagiarism fairly often. Not because the teacher had proof (I never plagiarized), but because the teacher couldn't believe I could write at the level I sometimes reached on take-home assignments. Admittedly, I was a wildly inconsistent student.

This reminds me a bit of that. AI writing is—in many ways—objectively very good, but that doesn’t matter if no one thinks you wrote it. AI writing is boring exactly because it is consistent and like any art form people want to see something original.

jupp0r 3 hours ago||
Sounds like a great opportunity for kids in high school to learn how to feed back the AI detection results into the model and have this process be automated. Next level would be fine tuning the model via reinforcement learning and sharing it with your friends via Hugging Face.
carcabob 3 hours ago||
A few times in some Discord communities, I've been accused of being A.I. because of how I write. Kind of sad and a bit annoying. I also quite like em dashes, but have felt the need to reduce how much I use them.

Glad to see some schools and teachers teach how to use them well, rather than ban them outright.

ghaff 3 hours ago|
em-dashes have been house style everywhere I've worked for over two decades. If people don't like it, F them. I'm not going to change how I write because people may think it makes me sound more AI-like.
themafia 4 hours ago||
If you're just going to use software to judge the output of students, then why don't we just keep them at home? I have a computer at home, and it seems like everyone from the teachers to the school board has just abdicated their responsibility. This doesn't sound like a system that needs to be maintained.
johnvanommen 3 hours ago|
Right? It’s the obvious question:

why are they using software to detect software?

I can detect AI written prose in less than five seconds; I would expect a trained teacher to be able to do that as well.

mold_aid 3 hours ago||
You know you can't just say "I detect AI written prose" and then do whatever you want about it, right? It's not difficult, sure, to detect it. It's difficult to prove that it's true and then punish the student for it.
tliltocatl 2 hours ago||
Define "worse". I absolutely hated this formal essay style even before LLMs were a thing. All these "on the other hand", "in conclusion" patterns, loaded with generic filler that doesn't convey anything useful. And they make it really hard to tell whether the writer is pretending to know something, or actually knows their shit but doesn't know how to write in a way that doesn't sound like an essay assignment. Good riddance.

On a side note: the fixed-pattern essay thing seems to be an American invention, or at least popularized by the American education system.

with 2 hours ago||
Nobody's asking who profits from false positives. These AI detection vendors have a direct financial incentive to flag aggressively: more flags = "more value" = more school contracts renewed. Same playbook as selling antivirus to your grandma. Sell fear, charge per seat, and make the false positive rate someone else's problem.
ipcress_file 2 hours ago|
Do you have any evidence to back this up, or is it speculative?

My institution subscribes to TurnItIn's AI detector. The documentation is quite clear that the system is tuned in a manner that produces a significant number of false negatives and minimizes false positives. They also state that they don't report anything under "20% AI-generated" content.

So the marketing I've seen is intended to reassure skittish administrators that the software is not going to generate false accusations.

That being said, I have no idea whether the marketing claims are true. The software is a black box.

with 1 hour ago||
Fair point, the "tuned to flag aggressively" claim was speculative on my part. Turnitin's own documentation says they favor false negatives over false positives.

That said, their accuracy claims have been disputed before. Inside Higher Ed [1] reported that Turnitin's real-world false positive rate was higher than originally asserted, and the company declined to disclose the updated number. And USD also noted that while Turnitin claimed <1% false positives, a Washington Post investigation found a 50% rate on a smaller sample, and that non-native English speakers / neurodivergent students get flagged at higher rates [2].

Now, those are from 2023 and the product (and AI in general) has been updated drastically since. But the broader incentive problem holds even if the detector itself is conservatively tuned. The product is a black box. And the downstream cost of errors falls entirely on students, not on Turnitin's renewal rate. You don't need aggressive tuning for the incentive structure to be broken.

[1] https://www.insidehighered.com/news/quick-takes/2023/06/01/t...

[2] https://lawlibguides.sandiego.edu/c.php?g=1443311&p=10721367

Someone1234 4 hours ago||
I've started doing this on social media. I got "called out" after using big words or using a dash in a sentence. So now I write less good on purpose, so whatever I comment doesn't get drawn into an off-topic witch-hunt sidetrack.

As soon as someone yells "witch," you cannot prove you're not one. I've even had people put my handwritten comments through "AI detector" websites that "proved" they were AI (they weren't); the detector literally just highlighted two popular English phrases.

LLMs were trained on sites like HN and Reddit, so now if you write like an HN or Reddit commenter, you sound like AI...

jjmarr 4 hours ago||
AI only uses big words to engage in elegant variation, not to compress information.

If someone calls an article like this a "jeremiad" I know they're a human.

zahlman 3 hours ago|||
Oh, well chosen. I keep forgetting that word and lamenting that "diatribe" (or, er, "lament") doesn't quite fit in some situations.
ipcress_file 2 hours ago|||
Interesting. I'll have to keep an eye out for this!
heddycrow 3 hours ago|||
Here's one vote for just being the witch, if that's what people need from you.

Just make it be what you want to say and how you want to say it. And when they come after you, shame them to the best of your ability or treat them like they are not there.

mitthrowaway2 3 hours ago||
That strategy didn't work out well for the witches of the past...
heddycrow 2 hours ago||
So what? There are all kinds of things that didn't work in the past that at some point began to work.

It wasn't someone who was primarily motivated by fear of the past that made it work the first time.

Frost1x 4 hours ago|||
I don’t think this is a good long term solution. LLMs can do easy language substitutions and you can even force them to add errors. So relying on that alone won’t work as people intentionally make things look more “human.”
Someone1234 3 hours ago||
Right, but the problem here is other humans yelling "witch," not LLMs. You're combating people's terrible witch-detectors, not anything factual or real.
teo_zero 3 hours ago|||
> now I write less good on purpose, so whatever I comment doesn't get drawn into an off-topic witch-hunt sidetrack.

I've begun downvoting each and every entry that questions the authenticity of a comment or article.

I don't even bother checking whether the claim is true or not. A text can be AI-generated and interesting, or human-written and dumb.

zahlman 4 hours ago|||
I have never really gotten the impression that HN or Reddit commentators write in any particular way overall.

LinkedIn, OTOH....

Kye 4 hours ago|||
I put a piece of text in one and the only line it flagged is the one line I actually wrote.
lich_king 3 hours ago||
[flagged]
Someone1234 2 hours ago||
No, I do not post a lot about AI. I am talking about normal social media comments on various topics. Perhaps your writing style doesn't lend itself to such accusations.
j45 4 hours ago||
The more students read, and the more variety they read, the better they will write.

This will likely be valuable for AI skills too.

ramon156 4 hours ago|
This is true. I know someone who has read multiple versions of the Bible, and their writing style became very similar to it. There's a term for that, I just forget what it is.
theptip 3 hours ago||
This is what terrifies me about the public school system. A revolution has occurred, but it’s unevenly distributed.

The schools simply don’t have the flexibility, agility, or frankly it seems motivation to adapt to what has already happened.

The ship has sailed; essay writing is no longer a viable form of assessment.

The idea to try to build a reliable AI detector is asinine, and fundamentally misunderstands how any of this works now, let alone the very obvious trend-lines.

Stop with the lazy half-baked solutions, get your head out of the sand, rethink the whole curriculum. This is an emergency, we needed to be urgently attending to this years ago.

heddycrow 2 hours ago||
Public schools. I think the terror there is built in as a feature, not a bug. So be afraid.

But keep in mind, it may have always been this way. God bless those few cool teachers in each school who are aware of this and work to rescue a few who need it.

Love changes everything. Good teachers matter.

georgebcrawford 3 hours ago|||
> essay writing is no longer a viable form of assessment.

Of course it is. In person, with an unseen prompt/question. By hand or not doesn't really matter, as we can airgap or just monitor via software when in class.

Someone1234 3 hours ago||
> This is what terrifies me about the public school system.

This has nothing to do with Public School in particular. This is impacting private and university education too.

theptip 1 hour ago||
From what I have seen, (some) private schools are moving faster here; not to say private primary/secondary schools are unaffected, rather that it's worst in public schools.
throw73838 3 hours ago|
> The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels

Didn't this self-censorship process start decades ago? There are certain answers expected in academia; arguing for anything else would get you in trouble. Not using “devoid” seems like a pretty minor inconvenience.

For me the biggest wtf is why students are still expected to write graded essays, and why we keep up the make-believe that it is somehow a useful and applicable skill.

ipcress_file 2 hours ago||
Avoid the theory-heavy disciplines. You won't be told what to think (as often) if you take History and Geography rather than Sociology and Gender Studies.
georgebcrawford 3 hours ago||
An essay is a good gauge of how one can organise their thoughts, argue a position, and respond to a stimulus.

In short, it’s a good way to measure thinking.

ipcress_file 2 hours ago||
This -- and you might just learn how to conduct some research along the way.