Posted by andrewdutton 1 day ago
At the end of the day we had this weird error where the test suites would randomly fail. Always a different test, different time of day, different engineer running into the issue. This caused “bad days” for all the feature developers.
I kept investigating and pushing back that it wasn’t my implementation. Lo and behold, months later it was revealed that any subsequent PR would cancel the Docker image build for the active PR. The tests would fail because the image was getting trashed. This kept happening because the QA env was never actually set up to mirror the production env, a cost-saving measure taken much, much before my time.
But since engineering and infrastructure had refused to address the issue months earlier, the dev org had built up ill will against me. Everyone blamed me, and instead of sitting down to fix the issue properly and stand up an environment parallel to production, they laid me off and removed all my work.
I still have friends who work there, and they still fight daily about testing deployments and rolling back, because they removed the testing that was in place.
As senior QA, this alone is going to end me one day.
I also love the “we’ll do the meta-work that improves work velocity after the work is finished. Not before! We’re too busy for that now.”
Primary factor is when my manager and his manager are out of the office.
We do an hour-long project update every morning and afternoon so that both managers can poke at our progress and make sure it's "meeting the bar". And my direct manager isn't trusted by his manager, so my skip is in there too. They squabble over task prioritization, and tasks get reassigned, half done, between developers whenever they don't feel enough was done on the task in the last 3 hours.
There's plenty of good stuff at this job, but that part drives me insane.
If it makes you feel any better the director is doing exactly the same thing to them.
This is a typical mode of failure in management and it far predates remote work.
* Management has lost faith in the manager and/or their team. (e.g., handholding to try to fix the manager/team, or to mitigate temporarily whatever turns out can't be fixed)
* The company, product, or some management role is in crisis, enough for management to be firefighting in desperation mode. (e.g., task re-triage up to once or twice a day can be a legitimate tactic, and I've seen a company saved that way; but two hour-long meetings of the entire team a day consumes huge time&energy, so that would need additional justification)
* Management is operating outside their experience, or is overextended, and not adjusting fast enough.
If you trust your manager or the skip, you could go talk to them. But if they are crazy or cruddy, you should be prepared to leave, and not necessarily on your own schedule.
I always assume that one meeting eliminates 4 hours of productivity, so it looks like you are all set.
Companies that stack-rank an inherently team activity will never learn. You can't be a team when everyone's core incentive is to fuck someone over before the musical chairs end.
The causes will sound familiar to most developers here. The most significant causes are usually outside the person's control, as expected. Interestingly, "interruptions / randomness" was only the sixth most significant, while the most significant cause was "engineering system friction."
The average number of "bad days" per month is 3-5. Accounting for weekends and vacation days, that's nearly 20% of the time! And some report 9+ bad days per month.
So, if you happen to work in corp-infra or another tooling/support role, and you occasionally lament that your work is not on the critical path: you actually have a rather large impact if you think of it in terms of helping reduce the number of bad developer days.
This, 1,000x.
- CI/CD pipeline needs attention: Bad day
- Dev env down: Bad day
- Infrastructure via ServiceNow: Bad day
- Change review board: Kill me now
> It’s a group of people from the project team that meets regularly to consider changes to the project. Through this process of detailed examination (…) decides on the viability of the change request or makes recommendations accordingly.
https://www.projectmanager.com/blog/change-control-board-rol...
I notice that if you add the "lack of focus time" responses to that, you'd get the 3rd most significant factor, which feels closer to right to me...
But small, 2-5 minute interruptions for a quick question never bothered me the way they seem to bother a lot of people here on HN. I can get back to my focus pretty much instantly after a “quick question” interruption.
Scheduled and unscheduled interruptions?
Meanwhile if I have too many meetings or have to deal with dumb people or get criticised by someone that doesn’t understand what I’m doing that’s actually a bad day and has big negative effects beyond just that one interaction.
Needless to say, I later found out from my manager that the scrum coach thought I was spending too much time on particular tickets and causing bad Jira metrics for the whole team. My manager explained to the scrum coach that those tasks were large architectural changes or research duties, so they couldn't be compared to regular 10-LOC bug fixes or bug triage tickets. And some work can't be broken down into smaller tasks anyway, since research topics need more investigation before you even know what needs to be done.
From then on, I have learned to safely (with caution) ignore criticism from non-technical people, and my day quality has improved.
"I started work on this item and realized that I'm actually missing a lot of context and information in the requirements. I need your help clarifying them"
The scrum master knew who to contact to get that information. They would set up a meeting, have it without me if they thought they could handle it, or schedule it and include me so I could ask the questions I needed; mark the ticket as blocked for me; fill in the info after the meeting; and communicate the risk and reason for the delay to stakeholders.
Basically I could say "There's a problem!" and the scrum master would get it taken care of or find someone who could. Probably the most valuable person on that team.
On top of that the one I worked with who was good at this also took on small cases because otherwise he'd regularly have nothing to do.
1. Bad or no documentation on tools or technology being used
2. Defects in tools being used
These are so bad now that I just don't even want to be in the field anymore much of the time. For either of them, you're often reduced to spraying all the usual forums with a question (and it takes ages to prepare a reproducible case that you can actually share, if that's even possible) and then waiting and hoping. Oh, and in the meantime doing the same searches over and over to see if some previously hidden nugget will turn up and reveal a solution.
But if you're not building another Web SPA, it's as if you don't exist for a lot of these frameworks. And doing simple stuff like deploying your own certificates is undocumented. Also they have a users table whose columns are undocumented and behave in unexpected ways. For example, they have columns to record whether and when a user has been validated (confirmed via E-mail), and for some reason these are set upon new-user creation... when they certainly have not been validated. Why? Who knows.
Another example: I was tasked with defining a REST-style API for a line of products. After learning about OpenAPI, I thought great, I'll design it in an OpenAPI tool and that'll be the source of truth for both front- and back-end code generation.
Fat chance. The OpenAPI ecosystem turned out to be a dysfunctional shitshow. First, the current version (as of years ago) is 3.1. But to this day, almost no tools support it. Version 3.0 was profoundly flawed in several ways. And even tolerating that, the 3.0 code-gen tools just straight-up don't work. Plus, the design tool I was using (Stoplight Studio) has been pulled off the market, and nothing has emerged to replace it. The whole thing was a huge time-suck. I talked to some developers about it after the fact and they said yeah, the whole thing is so bad that even mentioning you were using it is a professional liability.
Defect example: iOS 18 broke trusted certificates (still not fixed in 18.1), so currently you can't develop a network-dependent app on iOS on your own system. When I tried to work around that by targeting my Mac and using HTTP to localhost, another Apple bug caused the app to crash on launch before even getting to my code. So... dead halt to development for a week, despite my opening a paid support incident with Apple. They did finally get back to me and gave me a workaround to the crash-on-launch bug (no charge because confirmed bug), but damn.
More days lost. I had stretches like this before, and then the logjam breaks and I really start making headway. But this shit has been a slog for months.
So if you have a data structure called User, and you use it to represent users in different roles in your API somehow, you can't annotate the individual usages of User in your document to say what kind of user you're talking about. You can annotate the elements inside the User model (strings, numbers, or whatever) but not usages of the model itself. Before version 3.1, this blunder rendered OpenAPI half-useless for one of its core purposes: documenting your API.
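A minimal sketch of the problem (the Order/User schema and its fields are made up for illustration): in OpenAPI 3.0, `$ref` is a plain JSON Reference, so any sibling keywords next to it are silently ignored; OpenAPI 3.1 adopts JSON Schema 2020-12, where `$ref` may carry siblings like `description`.

```yaml
components:
  schemas:
    Order:
      type: object
      properties:
        buyer:
          $ref: '#/components/schemas/User'
          description: The user placing the order    # silently dropped by 3.0 tools
        seller:
          $ref: '#/components/schemas/User'
          description: The user fulfilling the order # silently dropped by 3.0 tools
```

Under 3.1 the two descriptions are honored; the common 3.0 workaround was wrapping each `$ref` in an `allOf` and putting the `description` alongside it, which many tools rendered inconsistently.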
API-Fiddle acts as though that's still true. There is no Description button anywhere in your schema alongside any use of a model.
Personally my worst days are "I did a thing, rolled it out, found it was seriously broken, rolled it back, and now it's 6pm and that is what I did today". I couldn't see anything in their report that would cover that failure mode (it's worse than "couldn't get anything done", because I was demonstrably incompetent at what I did do).
"Poor productivity" ranked 3rd in figure 4.
-Thomas Edison, on failure and inventing the light bulb.
* formal meetings: where I worked, 30 hours of meetings a week.
* Users wanting us to read their minds
* bureaucracy, need to ask permissions to do one little thing
* Jira
I call it Jira diarrhea
They had a person who'd been trained by Atlassian and worked full time on configuring Jira (and Confluence), in a workforce of maybe 25 developers, 5 designers, and 4 QA/testers (out of around 100 people, the rest heavily top-loaded with project management and $1000-haircut account managers).
She was _so_ good at setting up Jira so that it worked super well for the developers, and enforced decent requirements and requirement change management on account managers and project managers. The dev and QA all loved her.
Senior management flew that company into the ground about 8 months after she started, leaving 100+ people unpaid for their final month, and stiffed everybody on outstanding leave and 3 months of superannuation (retirement).
She did get 3 of my best devs work at Atlassian within days. She is very, very near the top of my list of "people I'd go and poach from whatever they're doing if I landed the right project that needed, and was prepared to pay highly for, their skillset."