Posted by mpweiher 3 days ago
[0] https://www.techradar.com/pro/security/the-south-korean-gove...
> Having worked extensively with battery systems, I think the grid storage potential of second-life EV batteries is more complex than it appears
> Having worked extensively with computer vision models for our interview analysis system,
> Having dealt with similar high-stakes outages in travel tech, the root cause here seems to be their dependency
> Having gone through S3 cost optimization ourselves,
> The surgical metaphor really resonates with my experience building real-time interview analysis systems
The sad news is that very soon it will be slightly less obvious, and then when I call them out, just like now, I'll be slapped by dang et al. with accusations that such callouts are against the HN guidelines. Right now most, like this one, don't care enough, so it's still beyond doubt to an extent where that doesn't happen.
Unfortunately they're clearly already good enough to fool you and many here.
But yeah this won’t make it any easier.
This is also the reason I toned it down a bit: although I've never received a formal reprimand from dang, he's often dropped by my threads containing such callouts when the original poster of the LLM comment disagreed with my assessment.
I understand that people encounter discrimination based on English skill, and it makes sense that people will use LLMs to help with that, especially in a professional context. On the other hand, I’d instinctively be more trusting of the authenticity of a comment with some language errors than one that reads like it was generated by ChatGPT.
Personally I would recommend including a note that English is not your native language and you had an LLM clean things up. I think people are willing to give quite a bit of grace, if it’s disclosed.
Personally, I’d rather see a response in your native language with a translation, but I’m fairly certain I’m the odd one out in that situation XD
What I've found useful is using LLMs as a fuzzy near-synonym search engine: "sad, but with a note of nostalgia", for example. It's a slower process, which in itself isn't bad.
Firstly, they have a remarkably consistent style; everything reads like this. There aren't very many examples to choose from, so that's perhaps to be expected, or perhaps it's simply their personality.
I worry, having been accused myself, that there is perhaps something in the style the accuser dislikes or finds off-putting, and nowadays the suspected cause will be an LLM.
Secondly, they have "extensive experience" in various areas of technology that don't seem to be especially related to each other. I too have extensive experience in several areas of technology, but there is something of a connector between them.
Perhaps it is just their high level of technical expertise that has let them move between these areas and gain this extensive experience. And because of that expertise, and their interest in only ever saying very technical things, their communications seem less varied and human, and more LLM-like.
> and nowadays the suspected cause will be LLM.
It's very unlike the original person, who is indeed a bot.
https://news.ycombinator.com/threads?id=bryanrasmussen
I have 4 comments of more than 3 sentences, 3 comments of 2 or 3 sentences, and 5 comments of 1 sentence.
The sentences were generally pretty short.
> We found that implementing proper data durability (3+ replicas, corruption detection, automatic repair)
> The engineering time spent building and maintaining custom tooling for multi-region replication, access controls, and monitoring ended
And so on. On top of this, a 5-second look at the profile confirms that it's a bot.
They're using a very structured and detailed prompt. The upside for them is that their comments look much more "HN-natural" than 99% of LLM comments on here. The downside is that their comments look even more similar to each other than those of other bots, which display more variety. That's the tradeoff they're making. Other bots' comments are much more obviously sloppy, but there's more variety across their different comments.
I just spent 5 minutes reading this over and over, but it still doesn't make any sense to me. First it says that high throughput = S3, low throughput = self-hosted. Then it says low throughput = S3 (and therefore high throughput = self-hosted).
That said, the article seems to be more about an optimization of their pipeline to reduce S3 usage by holding some objects in memory instead. That's very different from trying to build your own object store to replace S3.
Still, S3 seems like a really odd fit for their workload, and their dependency on Lifecycle rules seems utterly bizarre.
> Storage was a secondary tax. Even when processing finished in ~2 s, Lifecycle deletes meant paying for ~24 h of storage.
They decided not to implement the deletion logic in their service, so they'd just leave files sitting around for hours instead, needlessly paying that storage cost? I wonder how much money they'd have saved if they had just added that deletion logic.
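A quick back-of-the-envelope check on that quote: with S3 Standard's published rate of roughly $0.023 per GB-month, letting an object linger ~24 h until a daily Lifecycle sweep, instead of deleting it right after ~2 s of processing, multiplies the storage bill by tens of thousands. A minimal sketch (the 24 h and 2 s figures come from the quote; the per-GB arithmetic and the 730-hour billing month are my own assumptions):

```python
# Compare storage billed per GB when an object lingers ~24 h under a
# Lifecycle rule vs. being deleted right after ~2 s of processing.
PRICE_PER_GB_MONTH = 0.023   # S3 Standard, first 50 TB tier
HOURS_PER_MONTH = 730        # common AWS billing convention

price_per_gb_hour = PRICE_PER_GB_MONTH / HOURS_PER_MONTH

lifecycle_hours = 24              # waits for the daily Lifecycle sweep
explicit_delete_hours = 2 / 3600  # deleted after ~2 s of processing

cost_lifecycle = lifecycle_hours * price_per_gb_hour
cost_explicit = explicit_delete_hours * price_per_gb_hour

print(f"per GB: lifecycle ~${cost_lifecycle:.6f}, explicit ~${cost_explicit:.9f}")
# 24 h vs. 2 s is a factor of 86400/2 = 43,200x on the storage line item
print(f"overpay factor: {cost_lifecycle / cost_explicit:,.0f}x")
```

Even if storage was "a secondary tax", a ~43,200x multiplier on it suggests an explicit delete-after-processing call would have been worth the few lines of code.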