Posted by __rito__ 10 hours ago
It would be very interesting to apply this year after year to see whether people's judgments get more or less accurate over time.
It would also be interesting to correlate accuracy with scores, but I kind of doubt that can be done. Between comments that just express popular sentiment, and early posters getting more votes than later posters for the same comment, it probably wouldn't be very useful data.
It's a shame that maintaining the web is so hard that only a few websites are "good citizens". I wish the web were a -bit- way more like git. It should be easier to crawl the web and serve it.
Say, you browse and get things cached and shared, but only your "local bookmarks" persist. I guess it's like pinning in IPFS.
It is not possible right now to make hosting democratized/distributed/robust, because there's no seamless way for people to donate their own resources toward keeping things published. In an ideal world, the Internet Archive would seamlessly drop in to serve any content that goes down, in a fashion transparent to the user.
If you make it possible for people to donate bandwidth you might just discover no one wants to donate bandwidth.
It's not hard actually. There is a lack of will and forethought on the part of most maintainers. I suspect that monetization also plays a role.
Keeps the spotlight on carefully protected communities like this one.
This only manipulates the children references though, never the item ID itself. So if you have the item ID of an item (submission, comment, poll, pollItem), it'll be available there as long as moderators don't remove it, which happens very seldom.
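For reference, individual items can be fetched by ID from the public Hacker News Firebase API. A minimal sketch (the helper names are mine, not part of the API; ID 8863 is the example used in the API's own documentation):

```python
import json
import urllib.request

API_BASE = "https://hacker-news.firebaseio.com/v0"

def item_url(item_id: int) -> str:
    """URL for a single item (story, comment, poll, pollopt)."""
    return f"{API_BASE}/item/{item_id}.json"

def fetch_item(item_id: int) -> dict:
    """Fetch one item by ID; its 'kids' field lists child-comment IDs."""
    with urllib.request.urlopen(item_url(item_id)) as resp:
        return json.load(resp)
```

Because only the parent's `kids` list is manipulated, a detached item generally remains fetchable this way by its own ID.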
What do you mean?
I suppose they want to make the comments seem "fresh" but it's a deliberate misrepresentation. You could probably even contrive a situation where it could be damaging, e.g. somebody says something before some relevant incident, but the website claims they said it afterwards.
But I'm just guessing here based on my own refactoring experience over the years; it may be a completely different reason, or even a mistake. Who knows? :)
I would grade this article B-, but then again, nobody wrote it... ;)
For instance, one of the unfortunate aspects of social media, which has become so unsustainable and destructive to modern society, is that it exposes us to far more people and hot takes than we have the ability to adequately judge. We're overwhelmed. This has led to conversation being dominated by really shitty takes and really shitty people, who rarely if ever suffer reputational consequences.
If we build our mediums of discourse with more reputational awareness using approaches like this, we can better explore the frontier of sustainable positive-sum conversation at scale.
Implementation-wise, the key question is how we grade the grader and ensure it is predictable and accurate.
* ignore comments that do not speculate on something that was unknown or had not achieved consensus as of the date of yyyy-mm-dd
* at the same time, exclude speculations for which there still isn’t a definitive answer or consensus today
* ignore comments that speculate on minor details or are stating a preference/opinion on a subjective matter
* it is ok to generate an empty list of users for a thread if there are no comments meeting the speculation requirements laid out above
* etc
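The rules above amount to a filter over comments. A minimal sketch, assuming each comment has already been annotated (by the grader or by hand) with boolean flags for each criterion; the field and function names here are illustrative, not from the original post:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    user: str
    was_open_question: bool  # unknown / no consensus as of the thread's date
    resolved_today: bool     # a definitive answer or consensus exists now
    substantive: bool        # not a minor detail or a subjective preference

def eligible_speculators(comments: list[Comment]) -> list[str]:
    """Users whose comments meet all speculation requirements.
    An empty list is a valid result for a thread."""
    return [
        c.user for c in comments
        if c.was_open_question and c.resolved_today and c.substantive
    ]
```

The hard part, of course, is producing those annotations reliably; this only shows how the criteria compose once you have them.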
But it reminds me that I miss Manishearth's comments! Whatever happened to him? I recall him being a big Rust contributor. I'd think he'd be all over the place, given Rust's adoption since then. I also liked tokenadult. Interesting blast from the past.
Forecasting and the meta-analysis of forecasters is fairly well studied. [1] is a good place to start.
>In February 2023, Superforecasters made better forecasts than readers of the Financial Times on eight out of nine questions that were resolved at the end of the year.[19] In July 2024, the Financial Times reported that Superforecasters "have consistently outperformed financial markets in predicting the Fed's next move"
>In particular, a 2015 study found that key predictors of forecasting accuracy were "cognitive ability [IQ], political knowledge, and open-mindedness".[23] Superforecasters "were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness".
I'm really not sure what you want me to take from this article. Do you contend that everyone has the same competency at forecasting stock movements?