When you do a project from scratch, if you work on it enough, you end up wishing you had started differently, and you refactor pieces of it. While using a framework, I sometimes have moments where I suddenly get the underlying reasons and advantages of doing things a certain way, but that comes once you become more of a power user, not at the start, and only if you put in the effort to question things. And other times the framework is just bad and you have to switch...
But ya, I hate when people say they don't like "magic." It's not magic, it's programming.
Yes, it's not magic as in Merlin or Penn and Teller. But it is magic in the aforementioned sense, which is also what people complain about.
Sorry for the snark but why is this such a problem?
https://pomb.us/build-your-own-react/
Certain frameworks were so useful they arguably caused an explosion in productivity. Rails seems like one. React might be too.
// Create an <h1> element and set its content and title attribute
const element = document.createElement("h1");
element.innerHTML = "Hello";
element.setAttribute("title", "foo");
// Mount it into the page at #root
const container = document.getElementById("root");
container.appendChild(element);
I now have even less interest in ever touching a React codebase, and will henceforth consider the usage of React a code smell at best.

Maybe nobody needs React; I'm not a fan. But a trivial stateless injection of DOM content is no argument at all.
<h1 title="foo">Hello</h1>
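For comparison, here's a sketch of what the React side of the same example looks like (not from the thread; the exact lines are assumptions based on React's standard JSX transform and the legacy pre-18 ReactDOM.render API):

// The JSX form of the element above:
const element = <h1 title="foo">Hello</h1>;
// A JSX compiler rewrites that line into a plain function call:
const element2 = React.createElement("h1", { title: "foo" }, "Hello");
// React then mounts it into the page:
ReactDOM.render(element2, document.getElementById("root"));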
I have even less interest in touching any of your codebases! Don't get me wrong, I don't think you need or should need a degree to program, but if your standard of what abstractions you should trust is "all of them; it's perfectly fine to use a bunch of random stuff from anywhere that you haven't the first clue how it works or who made it," then I don't trust you to build stuff for me.
The big thing here is that the transformations maintain clearly and rigorously defined semantics, such that even if an engineer can't say precisely what code is being emitted, they can say with total confidence what the output of that code will be.
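To make that concrete in JavaScript terms, here's a sketch (the "emitted" form below is an assumption; real output varies by tool and configuration):

// A transpiler targeting ES5 might turn this...
const double = (x) => x * 2;
// ...into something like this (renamed here only so the snippet runs):
var doubleES5 = function (x) { return x * 2; };
// You may not know exactly which form ships, but the transform's defined
// semantics let you say with confidence that both return 42 for input 21.
console.log(double(21), doubleES5(21)); // 42 42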
Sure, obviously, we will not understand every single little thing down to the tiniest atoms of our universe. There are philosophical assumptions underlying everything, and you can question them (quite validly!) if you so please.
However, there are plenty of intermediate mental models (or explicit contracts, like assembly, ELF, etc.) to open up, both in "engineering" land and "theory" land, if you so choose.
Part of good engineering is also deciding exactly where the boundary between "cares" and "don't cares" lies, and how you allow people to easily navigate the abstraction hierarchy.
That is my impression of what people mean when they don't like "magic".
In other words, why is one particular abstraction (e.g. JavaScript, or the web browser) OK, but another abstraction (e.g. React) not? This attitude doesn't make sense to me.
As far as x86, the 8086 (1978) through the Pentium (1993) used microcode. The Pentium Pro (1995) introduced an out-of-order, speculative architecture with micro-ops instead of microcode. Micro-ops are kind of like microcode, but different. With microcode, the CPU executes an instruction by sequentially running a microcode routine, made up of strange micro-instructions. With micro-ops, an instruction is broken up into "RISC-like" micro-ops, which are tossed into the out-of-order engine, which runs the micro-ops in whatever order it wants, sorting things out at the end so you get the right answer. Thus, micro-ops provide a whole new layer of abstraction, since you don't know what the processor is doing.
My personal view is that if you're running C code on a non-superscalar processor, the abstractions are fairly transparent; the CPU is doing what you tell it to. But once you get to C++ or a processor with speculative execution, one loses sight of what's really going on under the abstractions.
Yeah, JavaScript is an illusion (to be exact, a concept). But it’s the one that we accept as fundamental. People need fundamentals to rely upon.
Sure you can; why couldn't you? Even if it's deprecated in 20 years, you can still run it and use it, even fork it to expand upon it, because it's still JS at the end of the day, which, based on your earlier statement, you can code with for life.
If this is true, why have more than one abstraction?
But it is not quite the case. The hand-coded solution may be quicker than AI at reaching the business goal.
If there is an elegantly crafted solution that stays in prod for 10 years and just works, it is better than an initially quicker AI-coded solution that needs more maintenance and demands a team to maintain it.
If AI (and especially bad operators of AI) codes you a city tower when you need a shed, the tower works and looks great, but now you have 500k/y in maintenance costs.
Anything that can be automated can be automated poorly, but we accept that trained operators can use looms effectively.
Programming is famously non-linear. Small teams build billion-dollar companies thanks to tech choices that avoid needing to scale up headcount.
Yes, you need marketing, strategy, investment, sales, etc. But on the engineering side, good choices mean big savings and scalability with few people.
The loom doesn't have these choices. There is no "make a billion t-shirts a day" setting for a well-configured loom.
Now AI might end up on either side of this. It may be too sloppy to compete with very smart engineers, or it may become so good that, like chess, no one can beat it. At that point, let it do everything and run the company.
Electricity is magic. TCP is magic. Browsers are hall-of-mirrors magic. You’ll never understand 1% of what Chromium does, and yet we all ship code on top of it every day without reading the source.
Drawing the line at React or LLMs feels arbitrary. The world keeps moving up the abstraction ladder because that's how progress works; we stand on layers we don't fully understand so we can build the next ones. And yes, LLM outputs are probabilistic, but that's how random CSS rendering bugs felt to me before React took care of them.
The cost isn’t magic; the cost is using magic you don’t document or operationalize.
If you've only been in a world with React & co, you will probably have a more difficult time understanding the point they're contrasting against.
(I'm not even saying that they're right)
Autovectorization is not a programming model. This still rings true day after day.
LLMs are vastly more complicated, and unlike compilers we didn't get a long, slow ramp-up in complexity, but it seems possible we'll eventually develop better intuition and rules of thumb to separate appropriate usage from inappropriate.