Posted by nilirl 19 hours ago
The average, unaware person believes that anything can be put into words, that once the words are said they mean to the reader what the speaker meant, and that the only difficulty could come from not knowing the words or stumbling over ambiguities. The request to take a dev and "communicate" their expertise to another rests on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields, surprisingly often, surprisingly correct insights into what further knowledge probably looks like. That mysterious ability to be right more often comes from the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
However, I've seen developers who were in this field for decades, and they still followed just recipes without understanding them.
So I'm not entirely sure that the distinction is this clear. But of course it depends on how we define "senior". "Senior" could mean developers who try to understand the underlying reasons and have coded for a while. But companies seem to disagree.
Btw regarding functional programming. When I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing's for sure: when I refactor something, my first to-do is to make the data flow as "functional" as possible, then the real refactoring. It helps a lot to prevent bugs.
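As a loose illustration of that first refactoring step (the function names and data shapes here are made up, not from the comment), here is how a mutable accumulation loop can be straightened into a "functional" data flow in Python before any deeper restructuring:

```python
def summarize_imperative(orders):
    # Mutable accumulators updated inside a loop: state changes
    # are interleaved with the filtering logic.
    total = 0
    names = []
    for o in orders:
        if o["paid"]:
            total += o["amount"]
            names.append(o["customer"])
    return total, names

def summarize_functional(orders):
    # "Functional" data flow: each value is derived once from its
    # inputs and never mutated afterwards.
    paid = [o for o in orders if o["paid"]]
    total = sum(o["amount"] for o in paid)
    names = [o["customer"] for o in paid]
    return total, names
```

Making the intermediate values explicit and immutable like this tends to expose bugs (e.g. accidental double-counting) before the real refactoring even starts.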
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World level things, at least compared to Haskell for example.
No real value in this comment, I'm just happy to share a moment over the brain-fuck that is Prolog (ironically Brainfuck made a whole lot more sense).
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
Then I met software and computer science abstractions, and they all seemed so arbitrary to me; I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I have not developed a "physics"-level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with handling completely abstract information and processes and when they clash with the real world I have a harder time reconciling.
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java. Or, for example, Python's list comprehensions. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.
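As one concrete instance of the kind of design decision worth challenging (my example, not the commenter's): a frequently criticized corner of Python's list comprehensions is that nested clauses read in the same order as nested for-loops, which surprises people who expect the expression to read inside-out:

```python
matrix = [[1, 2], [3, 4]]

# Flattening: the OUTER loop clause comes first, even though the
# expression on the left mentions only the inner variable.
flat = [x for row in matrix for x in row]

# Equivalent explicit loops, for comparison:
flat2 = []
for row in matrix:
    for x in row:
        flat2.append(x)

assert flat == flat2 == [1, 2, 3, 4]
```

Asking why the language chose this ordering, and what the alternatives would cost, is exactly the kind of exercise the comment recommends.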
The same goes for "principles" or rules of thumb. Read about the reasons, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. And not just at Hello World level, but really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The nice thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks on edge cases, and they are well documented.
Also, Effective C++ and Effective Modern C++ were eye-openers for me more than a decade ago. I can recommend them for these purposes. They definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
I've been doing this for coming up on thirty years now, mostly at one large company, and I spent a significant number of hours every week fielding questions from people who are newer at it who are having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to represent your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader which is similar to your own. In other words you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on here makes a lot clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
In practice, it isn't.
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
Similarly, by siloing the world model in one or two heads, you disable the team dynamics from contributing to building a better solution: eg. a product manager/designer might think the right solution is an "offline mode" for a privacy need without communicating the need, the engineering might decide to build it with an eventual consistency model — sync-when-reconnected — as that might be easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from anyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
If the programmer got to intimately understand the user's experience, software would be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
Or maybe I'm just a little bit insane. Or both.
Everyone should subscribe to the Future of Coding (recently renamed the Feeling of Computing) podcast if they haven't already: https://feelingof.com/
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem from one perspective might be conceived as a non-problem in another, and be unrepresentable in yet another.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree that it needs to be done. But I cannot really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
"Transmissionism" is a term I've seen to describe this
complexity is
not what you believe it is
please try listening
Very cool
that can only be moved around,
not eliminated.
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice works almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
My guy LeCun believes in deterministic systems describing reality even more than LLMs. He is literally a symbolic logic die hard.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this all ethics thing out (took courses in university, read some books...)
I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Meanwhile, secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activity that covers only the descriptive side, with no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify it: a novice programmer who listened to every story told by a senior, memorized and internalized them, but still can't touch-type will be worse at the everyday tasks of their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more but less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute. For example, there are cues we learn to pick up by reading product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, whether the company will go out of business soon, or whether they will try to bait-and-switch, etc.
When children learn to do addition, it's not enough to describe the method to them (start counting with the first summand, count up the number of times of the second summand, the last count is the result); they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand the mechanism.
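The counting method described above can be transcribed literally into code (a toy sketch, purely for illustration; the function name is my own):

```python
def add_by_counting(a, b):
    # Start from the first summand and count up once per unit of
    # the second summand; the last count is the result.
    count = a
    for _ in range(b):
        count += 1
    return count
```

Understanding these five lines takes seconds; what the child needs, and what the code can't convey, is the fluency that comes only from running the procedure dozens of times by hand.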
This is also why average people with little time to commit find it hard to grasp the importance and depth of AI. Exploring those takes a full-on university education.
Great thread.
Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.
Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation, there is a lot of self-censorship, so actually people tend to maintain very different world models even though the surface-level ideas which they communicate appear to be similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just write shit down even if the first draft isn't great. Write it down, check it into the codebase.
> “Do we really need that?”

> “What happens if we don’t do this?”

> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways to have eager/very open seniors drive systems into hard to get out corners. But then there are people that claim PHP5 is all you need.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise which is which.
This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another one.
Sometimes minimalism is the way, and you need to ask whether the pain, workload, or lacking capabilities and features are actually problematic. Or sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner, and it enables learning and accumulating information about what we actually need.
Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full job on its own.
And sometimes adding something bigger is the way, if the effort of maintaining it is less than the effort and pain incurred by not having it. Or if we can ramp up the effort incrementally, while reaping benefits along the way. This can often be validated by doing a small thing first.
What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking together CRUD code in a web server most likely won't be enough for the average development job in a year or two.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt then use Jira (etc) to your advantage and talk about the number of tickets you can close of this sprint due to this engineering initiative.
If the development team and product teams goals are largely aligned then the problem with engineering initiatives is just how you explain them to the product team.
preventing the unnecessary changes can help you get the political capital in your org to push through the changes that really need to happen.
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
That's because it's much, MUCH faster to do it that way. Though if you can trade certain types of latency for throughput, something like turbopuffer can do wonders for your costs.
> why would you not want to index?
Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.
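The trade-off can be sketched in a few lines of Python (the table and field names are made up, and a real database index such as a B-tree is far more involved, but the shape of the cost is the same):

```python
import sys

rows = [{"id": i, "name": f"user{i}"} for i in range(10_000)]

def find_scan(name):
    # Without an index: every lookup is a linear scan, but there is
    # no extra memory and nothing to maintain on writes.
    return next((r for r in rows if r["name"] == name), None)

# With an index: O(1) lookups, but the index structure itself
# occupies RAM and must be updated on every insert/update/delete.
index = {r["name"]: r for r in rows}

def find_indexed(name):
    return index.get(name)

# Pure overhead for any key you never actually query by
# (shallow size of the dict alone, excluding the rows it points to).
index_overhead = sys.getsizeof(index)
```

Multiply that overhead by every indexed field on every collection, and "index only what you need" stops being abstract advice.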
In the sense of the blog post: A senior with decent DB experience would have told you. ;)
I am not experienced with MongoDB, so I don't know whether the problems reported in the previous comment were the user's fault or MongoDB's. But one thing is clear to me: complaining that it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect it to just magically work. DBs are complicated beasts; you have to know how to deal with them.
I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scaling until your app is on a trajectory to become one. Putting your database on an SSD already lets you handle several thousand concurrent users with ease.
Of course everything depends on use case and constraints. I highlight the extremes here, the initial confusion was why DBs require so much RAM. Traditional DBs are optimized around RAM, that's where they perform best. You can abuse that, but it's not the best they can be in terms of latency, predictability and stability.
At some point they added the docValues configuration option per-field to do the transformation during indexing and store it to disk instead, so none of it had to be stored in the heap. Instead what you're supposed to do is rely on the OS disk cache, which handles eviction automatically, so you can run with significantly less memory but get performance improvements by adding memory without having to change any configuration further.
This does not mean we don't develop new products and services; it just means that when we do so, we find the path of least overall entropy. The same applies to operations and tech-debt reduction.
Premature optimization is the root of all evil.
The qualities were highlighted because they can all lead to better stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
Complete stability is death.
> Yes, yes, of course this is simplistic.
It's an example, put to the extreme, to clearly communicate the ideas. As all things, the golden mean applies, as I understand the article argues for:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
One of my favorite .sigs was:
I hate code, and want as little of it as possible in my software.
I don't remember where I saw it, but it was a while ago. It's possible the author has an HN account.

One of the things that happens to "avoiders" is that they get attacked for being "negative." It can get career-ending when the management chain is the "Move fast and break things" type.
I just stopped offering suggestions, after encountering that crap a few times, and learned to just quietly make preparations for when the wheels fall off.
I have spent my entire adult life, shipping, and shipping means lots of "not-shiny," boring stuff. But it gets onto shelves, and into end-users' hands. I was originally trained in hardware development, where mistakes can't be fixed with an OTA update. It taught me to "play the tape through," and make sure that I do a good job on every part of the project; which includes a lot of anticipating problems, and designing mitigations and prevention.
A rewrite?
I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than you sell it for.
The loop on the right. There are companies, two of them are very popular these days, they took it to an extreme. You ship something fast, and since it only scales linearly you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks, how's that sustainable, what's the way out of this? Those are fired, so whoever's left is a believer.
Old quote: "There is nothing so permanent as a temporary hack."
On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
At this point the Zig implementation of Bun seems like one written to be thrown away. And that happened only thanks to AI.
We were a highly autonomous team, though, and hardly had any cadence complaints. But that was mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.
I work with a bunch of teams that complain that they're mired in tech debt all the time, and complain that it's a huge risk and it slows them down. Except I can see our incidents log and there aren't many incidents and none that can be attributed to running risky code in prod, I have our risk register that has no 'this code is old and rubbish and has past-EOL dependencies on it', and no team has ever managed to articulate how or even how much the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.
I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like their own work. And obviously the leadership team were pissed off about that, and now there's very little trust left.
There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
In a world where people (stakeholders, Product, and dev teams alike) want the prototype to be the full set of MVP features, this is not true.
IMO it is a bit arrogant to assume it is more important to engineer a better version of a thing than to make money quicker by cutting corners. In essence, it is better to have the problem of scaling a new product because it got traction than the problem of selling more copies of an already scalable thing.
Rewrites require an existential-level threat to pursue and should never be taken lightly. They must address a real, verifiable need, backed by real-world data. Rewrites for their own sake, or for some lofty, nebulous goal of "better" or "more maintainable" code, are doomed to fail and a waste of resources.
I've seen the worst of it, from your average monoliths with no separation of concerns to 1000s of lines of self-modifying assembly in dead architectures with no code comments containing critical business logic, etc.
The main rule is not to bite off more than you can chew, which, if I'm being honest, you really only learn from fucking up or watching others fuck up.
Hackathon and overnight oncall fixes ABSOLUTELY should be rewritten or production-hardened, but they very often are not.
That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.
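For what it's worth, a cached fuzzy-match validator along those lines can be tiny. A minimal sketch, assuming a hostname cache prepopulated offline (the cache contents, function name, and 0.9 cutoff are all made up for illustration, not from the actual system):

```python
import difflib
from urllib.parse import urlparse

# Hypothetical cache of known-good hostnames, prepopulated offline
# (e.g. by a one-off AI batch job) instead of calling a model per request.
KNOWN_GOOD = {"example.com", "api.example.com", "docs.example.com"}

def is_probably_valid(url: str, cutoff: float = 0.9) -> bool:
    """Exact cache hit first; otherwise accept a close fuzzy match,
    which catches minor typos like transposed letters."""
    host = urlparse(url).netloc.lower()
    if host in KNOWN_GOOD:
        return True
    # get_close_matches returns the best candidates above the cutoff ratio
    return bool(difflib.get_close_matches(host, KNOWN_GOOD, n=1, cutoff=cutoff))
```

The point being: no model in the request path, so nothing breaks when the model gets turned down; only the offline prepopulation step would need replacing.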
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more of a threat it is to less experienced developers. Things mostly work, so my manager doesn't care. There are probably better ways I could have handled things, but it's so exhausting fighting the nonsense, and I just want to write good code.
Looking deeper into it: these people don't understand the underlying foundations anymore. Just keep building fast, without building proper mental models (that would take time).
Our work is largely very difficult for outsiders to understand; we need to write docs and have meetings to show what we have done. It's part of the job, and yes, if you don't do that, it doesn't matter how fantastic the software you wrote is (sadly).
Companies have outlandish hiring practices. They want juniors who already know everything. That's why, in a junior's eyes, admitting you don't know something looks like showing weakness to the company. Also, not knowing things will actively keep you from getting promoted.
I'm sure it's not like that everywhere but it's juniors playing the corpo game.
Seriously. It kills me to have so much knowledge and expertise that few people appear to care about, if they don't downright hate me for wanting to pass it on to others. Institutional knowledge does not seem to have any value these days.
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
Whereas juniors are eager to chat, have lunch with you, and share what they’re working on, the seniors are guarded and solitary.
Maybe that’s just my workplace though!
And yes, the office is important.
Orgs get what they measure. If your team values that sort of interactivity and support, it will observe it, measure it, and hire for that sort of person. I've seen groups evolve towards that, and they've been great, but it doesn't seem to be a default - most groups/orgs have to work towards it and keep working at it.
That said, I completely agree. I learned most of what I know from being in the same room with senior developers and asking questions. Something that just isn't happening these days.
I also believe that some of seniors' experience is flesh-level resilience. I'm no smarter than when I joined the industry; I just got used to being in the trenches: how to handle my own psychology, how the easy-looking things aren't easy, and how the horrible-looking ones aren't horrible either. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
Honestly, I have the feeling that this is often insecurity. It's easy to feel uncomfortable if you think you can't follow along.
Another issue is that juniors usually experience culture shock at their first jobs. So they more or less isolate and do things the way they learned them.
I've been a mentor off and on for the last few decades, and I've been really lucky to have some strong mentees. Some I've followed for the better part of a decade, and they're crushing it out there. All I can really say is that they're out there; sorry I don't have anything more helpful to say about how to find them. I'll mull on that for a bit.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts, and none of the history, politics, communication, etc.
I think a lot of it is that an AI or a Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen.
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
On the internet you can learn from, and sometimes interact with, the best of the best, so the bar for what constitutes an "expert" is raised much higher.
I still vividly remember reading a Z80 instruction set manual on a rainy day during summer vacation by a lake as a kid (maybe 14?)--writing my own assembly by hand in the margins for fun. TBH I probably still have that exact manual in storage somewhere. Had a green stripe down the front edge/binding iirc.
Back then I easily met folks like myself out there on the net, including many kids younger and smarter than me. It was awesome.
I do hope that some form of that 'net lives on in spirit somehow, given that the Internet I knew has largely fallen to corporate interests.
Now that I have my own kids, it's been painful to watch them have such an utterly different experience than I did.
Their Internet is based entirely on consumption and dark patterns designed to capture their attention, while providing nothing (to them) in return besides a dopamine addiction and body dysmorphia.
It's simply the case that the supply of "experts" wanting to share "expertise" eclipses the demand by several orders of magnitude.
I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.
So if people don't perceive you as an "expert" and don't go to you for answers, you simply do not register as one. Or they have a rather high bar that requires observable, undeniable artifacts (and I don't mean credentials, I mean software), and the competition is fierce: there's simply an overproduction of people who think they are "experts", so you have to show unmistakable symptoms of being one to register.
"It takes two to tango" i.e. junior developers must first put in some effort and then proactively seek out seniors with expertise.
It may be a cliche, but it is a truism nevertheless: the juniors are simply not interested in putting in the necessary time/effort to gain knowledge systematically. They want everything to be quick, easy, and handed to them on a platter.
I think the main reason for this is that there is just too much out there to learn, and everything is being propagandized as the most important and most indispensable. This swamps the juniors; they feel lost and try to keep up with everything, which is a fool's errand.
Juniors need to keep the following in mind:
1) Change their learning mindset as follows: browse a lot, read a subset, and study an even smaller subset.
2) Always focus on the essentials and not on the frills. This is determined by your specific goals/needs.
3) Be okay with not knowing everything. Do not base your self-worth on others' evaluation of you.
4) Do not compete with others. Do the best you can and always improve on your yesterday's self. As the adage goes "drops of water falling, if they fall continuously, can bore through iron and stone".
5) Be confident in your own intelligence. As Sherlock Holmes said "what one man can invent another can discover". What might seem impenetrable in the beginning will over time become clearer and easier when studied regularly.
6) Everything is dependent on Self-Effort modulated by Timing, Context, Means Employed and finally Random Chance (i.e. lady luck). Manage the last by factoring in its payoffs as part of your self-effort itself (i.e. hedging). Focusing on the above five parameters before starting on anything will guarantee success.
7) You can always short-circuit your studies and gain knowledge quickly by asking seniors with expertise to teach you. Your attitude and way of approach is very important here i.e. you must be sincere and committed.
> Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Pretty much word for word. It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their part; just build it and figure out the user persona and utility later... if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
The problem I see is that management plays games by not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
> What if we had one system just for speed?
Like a beta? It would take incredible discipline from the business and customers not to consider that production software and demand 99.99% uptime and bug-free operation.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of the things I consider a differentiator between a senior and a staff+ developer.
“Complexity”, understood as whatever will make development on this system easy and fast for the next 10 man-years, de facto means taking side steps where naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
I have also seen projects go badly because the eng was trying to be perfect upfront. Whereas quickly getting to an MVP and then iterating tends to go better.
Well said. Kent Beck’s Tidy First explores the slow process that can be summarized by this excerpt from his substack [0]:
“Valuable” lives on 2 axes:
Features—what the code does now.
Futures—what we can get the code to do once we learn the lessons of this set of features.
While there might be a component of time to get features out, it’s rarely urgent enough to forget about being flexible and having a somewhat constant velocity. And the Cynefine Framework defines “complexity” a bit differently than the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefine labels as “complicated”.
The Cynefine complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefine complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefine is understanding that each of the domains has its own way of solving things and that often times, people use solutions and methods from one domain to solve problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
The use of “complexity” in terms of systems theory in comparison to “complicated”, is often misunderstood.
I also agree that it’s a really good framework for evaluating problems and then making decisions on potential solutions because each has its own set of approaches.
Small nitpick. It’s “Cynefin”, not “Cynefine”. The word is Welsh (Cymraeg), roughly pronounced ke-ne-fin.
> "Software systems tend not to be complicated, not complex, until you start getting into distributed systems."
These days so much software is "distributed systems".
For example, at a certain level of scale, Kubernetes starts having emergent behavior.
On the other hand, it doesn’t take much to produce a complex system. The Boids simulation is a complex adaptive system in the form of a flock, yet each member of the flock concurrently follows only three basic rules.
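The three rules really do fit in a few lines, which is what makes Boids such a good illustration of emergence. A minimal, unoptimized Python sketch (the radii and weights here are arbitrary illustrative choices, not from any particular Boids paper):

```python
import math

def step(boids, sep_r=1.0, neigh_r=3.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One update of the classic Boids rules. Each boid is [x, y, vx, vy]."""
    updated = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep = [0.0, 0.0]; ali = [0.0, 0.0]; coh = [0.0, 0.0]; n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if 0 < d < sep_r:                  # rule 1: steer away from crowding
                sep[0] += (x - ox) / d
                sep[1] += (y - oy) / d
            if d < neigh_r:
                ali[0] += ovx; ali[1] += ovy   # rule 2: match neighbours' heading
                coh[0] += ox;  coh[1] += oy    # rule 3: steer toward neighbours' centre
                n += 1
        if n:
            ali = [ali[0] / n - vx, ali[1] / n - vy]
            coh = [coh[0] / n - x,  coh[1] / n - y]
        vx += w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0]
        vy += w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1]
        updated.append([x + vx, y + vy, vx, vy])
    return updated
```

Nothing in the code says "form a flock", yet iterating `step` on a handful of boids produces flocking; the behavior lives in the interactions, not in any single rule, which is exactly what "complex" means here.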
I think complexity is a byword for 'unintentionally complicated' here.