Great explanation for what I see when I mess around with coding LLMs. The natural human instinct of “this feels complicated, let me think about it some more” is suppressed. So far, all the gains from the stunning initial speed have been cancelled out later in the project by the over-engineered complexity baked into the code.
Day 400: Having thoroughly described a universal theory of everything, we set out to build an experimental apparatus in orbit at a Lagrange point capable of detecting a universal particle which acts as a mediator for all observable forces in the known universe.
Like, if you stay focused, is it even really a side project?
Which is why my 2d top-down sprite-based RPG now has a 3d procedural animation engine, a procedural 3d character generator with automagic rigging, a population simulator that would put Europa Universalis to shame if I ever get around to finishing it (ha!), a pixel art editor, a 2d procedural animation engine using active ragdolls...
You might wonder why a 2d game needs 3d procedural animation, well...
The scope creeps in mysterious ways
You could achieve things yourself if you tried!
Any advice on how to mitigate this?
If it helps at all: it's normal. At this point, you've already proven you're smart and knowledgeable. Now the universe wants to see if you can also finish what you've started. That's the main thing a PhD proves: that you can take an incredibly interesting topic and then do all the boring stuff required to be formally compliant with arbitrary rules.
Focus on finishing. Reduce the scope again, as much as possible, down to your core message (or 3-4 core messages, I guess, for paper-based dissertations).
Listen to the feedback you get from your advisor.
You got this!
When I did my MSc thesis he told me it was a pretty good PhD. (Before giving me a month's work in corrections.) I didn’t understand back then, but I understand now. It was small, replicable and novel (still is)! Just replicate three times and be done with it. You’ve proven your mastery. Now start something serious.
My professor once told me he presented at a small conference where the whole audience had PhDs in mathematics, and maybe 2 of the 50 or so people could follow along. The point he was trying to make is that at some point the audience stops being interested in what is being presented, because it is difficult to just follow along with a really niche topic.
He discussed this topic and how it's generally left to those who are more notable in a field to ask the 'dumb' questions everyone else is afraid to ask. Such questions often need to be asked to get the audience on board and open the floodgates in niche areas of research; the speaker themself is often too far down the rabbit hole to discern the difference between opaque and obvious.
So it stands to reason that at smaller conferences this would be a bigger problem, with fewer thought leaders in attendance whose reputations are intact enough that they wouldn't mind looking foolish.
In my field this would be terrible advice. Instead, you need to be doing something that your audience will actually give a shit about.
But there are some things to remember that are incredibly important:
- a paper doesn't *prove* something, it suggests it is *probably* right
- under the conditions of the paper's settings, which aren't yours
- just because someone had X outcome before doesn't mean you won't get Y outcome
- those small details usually dominate success
- sometimes a seemingly throwaway one-liner is what you're missing
- sometimes the authors don't know and the answer is 5 papers back that they've been building on
- DO NOT TREAT PAPERS AS *ABSOLUTE* TRUTH
- no one is *absolutely* right, everyone is *some* degree of wrong
- other researchers are just like you, writing papers just like you
- they also look back at their old papers and say "I'm glad I'm not that bad anymore"
- a paper demonstrating your idea is a positive signal, you're thinking in the right direction
As soon as you start treating papers as "this is fact", you tend to over-generalize the results. But the details dominate, so you just kill your own creativity. You kill your own ideas before you know whether they're right or wrong. More importantly, you don't know how right or how wrong. Combine that with the publish-or-perish paradigm and I think we've covered a significant part of the problem. People don't even consider diving deeper into things and are encouraged to take the route of "assume the paper is correct", because that's the fastest way to push out research. But if the foundation is shaky, then everything built on it is shaky too.
That distinction is handled better in the harder, more formal fields like math and physics. They have no issue pushing out papers that may have errors in them, because the process is to attack works as hard as possible; whatever is left standing is where you build next. You definitely have people take advantage of this, like Avi Loeb publishing about aliens, but it is realistically a small price to pay. And hey, even Loeb's work still contributes. If at some point it actually is aliens, there's existing work to build upon. And when it continues to not be aliens, there's still existing work to build on; really, his problem is more that the papers just end up concluding "and this is why we can't rule out aliens!" (-__-)
Anyways, long story short, my advice is to just remember that you, and everybody else, are blubbering idiots, and it is an absolute fucking miracle a bunch of mostly hairless apes can even communicate, let alone postulate about the cosmos. At the end of the day we're all on the same team, seeking truth. Truth matters more than our egos, and if we start to forget how dumb we are, we'll only hinder our pursuit of truth.
Acknowledge it is normal? Attempt to buy deeper into the delusion ("Yeah, my work is awesome and unique!")? Use stimulants to force enthusiastic days every once in a while?
Uhh... unless you plan to stay in academia? Then, this is a terrible idea.
You’ll almost never see a PhD thesis that has anything particularly interesting, novel or directly applicable to the sciences.
This is definitely the wrong way of going about a research project, and I have rarely seen anyone approach research projects this way. You should read two or at most three papers and build upon them. You only do a deep review of the research literature later in the project, once you have some results and you have started writing them down.
Replicating existing results doesn't meet those criteria, so unknowingly repeating someone's work is an existential crisis for PhD students. It can mean that you worked for 4-6 years on something that the committee then can't or won't grant a doctorate for, effectively forcing you to start over.
Theoretically, your advisor is supposed to help prevent this as well by guiding you in good directions, but not all advisors are created equal.
It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”
Moreover, I am not suggesting you don't look at other papers at all. But Google Scholar and some quick skimming of abstracts and papers you find should suffice to check whether someone has already done the work. If you start fully reading more than a handful of papers, your ideas become locked in by what others have done, and it becomes much harder to produce something novel.
"impediment to action advances action. what stands in the way, becomes the way".
I had a coworker who would always be diplomatic about code changes he felt could be improved, but when he felt he was nitpicking, he would say: "It's better than it was." It allowed him to provide criticism while also giving permission to go ahead, even if there were minor things that weren't perfect. I strongly endorse this kind of attitude.
nit: this could be changed to XYZ
vs we should use XYZ here
where it was understood that nits could be ignored if you didn't feel it was urgent, as opposed to a preference. What I am describing would be something higher level, more like a comment on approach, or an observation that there is some high-level redundancy or an opportunity to refactor. Something like "in an ideal world we would offload some of this to an external cache server instead of an in-memory store, but this is better than hitting the DB on every request".
That kind of observation may come up in a top-level comment on a code review, but it might also come up in a tech review long before a line of code has been written. It is about extending that attitude to all aspects of dev.
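To make that example concrete, here's a minimal sketch of the "better than it was" intermediate step, assuming a hypothetical get_user/fetch_user_from_db pair: a simple in-memory read-through cache in front of a database call. Everything here (the names, the TTL) is made up for illustration; the "ideal world" version of the review comment would swap the dict for a shared cache server like Redis so all app instances see the same entries.

    # Illustrative sketch only: an in-memory read-through cache
    # that avoids hitting the DB on every request.
    import time

    _cache: dict[str, tuple[float, dict]] = {}
    _TTL_SECONDS = 60.0  # arbitrary freshness window for this sketch

    def fetch_user_from_db(user_id: str) -> dict:
        # Stand-in for the expensive query we want to avoid repeating.
        return {"id": user_id}

    def get_user(user_id: str) -> dict:
        entry = _cache.get(user_id)
        if entry is not None:
            stored_at, value = entry
            if time.monotonic() - stored_at < _TTL_SECONDS:
                return value  # cache hit: no DB round trip
        value = fetch_user_from_db(user_id)  # cache miss: one DB round trip
        _cache[user_id] = (time.monotonic(), value)
        return value

The point of the review style is that this version is acceptable to ship as-is, with the cache-server refactor noted as a future improvement rather than a blocker.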
The trick to overcoming this is not to aim for "clean" but for "cleaner than before".
Just keep chipping away at it, whether it is a messy codebase or a messy kitchen.
The other saying I use is "completion, not perfection". That helps me with yard work especially. I'm not going for the cover shot of "Better Homes and Gardens"; I just need the lawn to be cut.
The sand blows in endlessly. You don’t aim for a pristine, sandless land. But you can’t ignore it or it takes over.
I’ll just pick up a few things and ferry them towards their “home.” Or go do a small amount of yard work. Etc.
I always thought perfectionism meant extremely high achievement (at too great a cost). But it can also be quitting without any progress because you can't accept anything less than perfect (which may or may not be achievable). Perfectionism can be someone procrastinating on a large task.
I don't think it holds in 100% of situations but I do think if you're going to make an error one way or the other, I'd rather do something smaller and release too early than do something bigger and waste time.
It's in a field that I have little experience with (Information Retrieval). So there is obviously prior art that I could learn from or even integrate with.
This article motivates me further to learn things by focusing on building my own and peeking into prior art as I go, when I'm stuck or need ideas.
Recently a Clojure documentary came out, and Rich Hickey's approach was seemingly the opposite: deep research of prior art, papers, and other languages over a long period of time.
However, he also mentioned that he had made other languages before. So the larger story starts earlier, with making things and learning from practice.
Maybe that's also the bigger lesson: don't overthink, start by making the thing. But later, once you've learned a bunch of practical lessons and maybe hit a wall or two, you might need that deeper research to push further.
That was also on my mind thanks to the documentary. Then I followed up with "Simple Made Easy" and "Hammock Driven Development", and it makes me want to learn Clojure.
Clojure documentary on CultRepo channel: https://www.youtube.com/watch?v=Y24vK_QDLFg
Simple Made Easy: https://www.youtube.com/watch?v=SxdOUGdseq4
Hammock Driven Development: https://www.youtube.com/watch?v=f84n5oFoZBc
Sometimes you just want to button-mash through, rushing about carefree.
Other times, you want to go entirely stealth, wandering around, trying to find the best path, wasting an hour or more on a level you could have button-mashed in 5 minutes.
Both are fine.
See also "Why does the C++ standard ship every three years" (as opposed to shipping when the desired features are ready):
https://news.ycombinator.com/item?id=20428703 (2019-07-13, 220 comments)