Posted by usefulposter 1 day ago
I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.
https://xkcd.com/386/ "Duty Calls"
I think, in the end, it is less about the tool you use and more about the purpose you use it for. Put another way: when you use certain tools, you should be cautious about whether you are using them for the right purpose.
Are there any places in life where conversation is _not_ intended to be between humans?
To be clear, I'm neither proud of nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.
I'm not sure how I feel about this new rule.
If you think your writing could use improvement, write your comment and let it sit for a few minutes. Then re-read both it and the comment you are replying to, make your edits, and post. That gives your brain time to reset and maybe spot something you didn't earlier.
Perhaps whether or not we see value in that "learning experience" is the basis of our disagreement?
But here's the funny thing. I'm pretty sure the frontier models are now smarter than I am, more eloquent, and definitely more knowledgeable, especially the paid versions with built-in search/research capability. I'm also fairly certain that the number of original thoughts in a given discourse on the Internet is fairly small; I know that's certainly the case for me.
So whither humans now?
If I'm looking for human engagement, forums make sense. But for an informed discussion, I'm less certain that it's wise to be exclusionary. There is a case to be made that lower quality comments should be hidden or higher quality comments should be surfaced, but that's true regardless of the source, innit?
The rest of us want the benefit of lived experience and genuine curiosity in discussions. LLMs are fundamentally incapable of both.
Because I want to know what you think, because putting our thoughts into words and sharing them is an important part of thinking, because we'll lose these skills if we don't use them, because in thinking for yourself you might come up with something interesting that nobody has ever thought before.
Of course, writers are allowed to reference and use other people's writing: with proper attribution. I don't have a problem with people sharing quality AI-generated content when it's labelled as such. The issue is that most people writing AI comments don't do this, which is itself probably the strongest indictment of the practice.
One could argue that it should be, but it's just not the same standard to which students, papers, and Wikipedia materials are held :)
Good news then, you're currently on a forum! So we all agree that humans > AI, regardless of your thoughts on the intelligence behind it.
I made the post to specifically disagree with that notion: I think that excluding top-quality AI output from the discussion will reduce the overall quality of forums, because it's now the case that top-tier LLMs > average human.
How do we assess top-quality output? The moderation tools for that already exist. Doesn't scale well? I'm guessing the days when AI can do it cheaper and faster are nigh.
If it helps, my friends and family tend to have at least a master's, and the majority have PhDs.
> Would you hang out with a friend over coffee or something who, rather than conversing with you, recorded your side of the conversation directly into an LLM and then played you back the result?
I think the difference is that you're imagining the LLM replaces the conversationalist, but as I said above, my lived experience is that the LLM provides grounding to the discussion, effectively having replaced internet search as a better, faster, broader, smarter library. It doesn't kill the conversation, it makes it better.
Those aren't super rare these days, and I don't know why arbitrary credentials would matter for this purpose. Incidentally, though, the notion that they would matter in conversation at all speaks to the type of engagement you might be having, which may indeed be different from what I care about.
Personally, I don't find people all that engaging the more inclined they are to go looking up answers; to me it signals a discomfort with uncertainty, and an ego, that get in the way of a fun conversation. If someone has an answer because of their experience, great; otherwise it's OK to not know in the moment and continue on.
In one case, I had a friendship fizzle out because we'd be hanging out and I'd express some curiosity that I hoped he'd build on with his own experience or his own sense of wonder. But because he only cared about authoritative facts, he'd google the answer and get frustrated that I only cared about his opinion on what the answer might be. The actual fact was incidental, and this conflict regularly led to an impasse where I'd clarify that I don't care what the internet says, etc. I'm fine with that; he wasn't really interested in thought exercises.
A concrete, mundane hypothetical: I might pose "How do you think the Iran war might impact gas prices here?" and he'd just look up the history and trends, then stop there. Dull. I want a human response: speculate, build on it, let yourself be wrong.
It's an indicator that that demographic isn't opposed to using AI as a conversational tool and finds it useful for that purpose - an instant, "smarter" library, if you will.
> The actual fact was incidental, and this conflict regularly led to an impasse where I'd clarify that I don't care what the internet says, etc. I'm fine with that; he wasn't really interested in thought exercises.
Thought exercises are better, imho, when they're grounded in facts. Why wouldn't you care what the facts are? Can one have the same level of discourse about space with someone who thinks the Earth is flat?
> A concrete, mundane hypothetical: I might pose "How do you think the Iran war might impact gas prices here?" and he'd just look up the history and trends, then stop there. Dull. I want a human response: speculate, build on it, let yourself be wrong.
Color me confused. Are you looking for a panic or doomsday response, or what? What does "human response" even mean? A human looked at the history and trends; that's that human's response to the question!
Looking up the history and trends, and building on those facts, could be a deeper dive into the wonders of economics: an exploration of the interconnectedness of the various parts of the economy that depend on oil and gas (fertilizer, plastics, and their downstream industries), where the fractionating plants are, where they get their raw materials, how tied into futures contracts those are, who's got long-term contracts insulating them from the impact, what percentage of folks are insulated for 3 months, 6 months, 12 months, etc.
I have to say, asking me to speculate and build on a topic I know nothing about would invite a 'lookup' response from me as well; that's just (imho) a critical-thinker style. Once the lookup is done, may I suggest that, as the questioner, you ask probing questions to move the conversation forward - that's what I do.
Just out of curiosity, are you a D&D player, or a fantasy or adjacent creative? I'm wondering what sort of person would want to elicit an ungrounded speculative response, and I can imagine an enjoyer or creator of fantasy looking for a creative, speculative thought exercise with a real-world question as a starting point.
(Reinforcement learning from human feedback)