Posted by codesuki 1 day ago

GitHub Actions is slowly killing engineering teams (www.iankduncan.com)
378 points | 198 comments
CSSer 1 day ago|
I hate to say this. I can't even believe I am saying it, but this article feels like it was written in a different universe where LLMs don't exist. I understand they don't magically solve all of these problems, and I'm not suggesting that it's as simple as "make the robot do it for you" either.

However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the bogeyman it used to be a few years ago. It completely fits in context. LLMs are excellent at bash. With a bit of critical thinking when it hits a wall, LLM agents are even great at GitHub Actions.

The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.

otterley 1 day ago||
The flip side of your argument is that it no longer matters how obtuse, complicated, baroque, brittle, underspecified, or poorly documented software is anymore. If we can slap an LLM on top of it to paper over those aspects, it’s fine. Maybe efficiency still counts, but only when it meaningfully impacts individual spend.
radarsat1 1 day ago||
Additionally, it's not like you're constrained to write it in bash. You could use Python or any other language. The author talks about how you're now redeveloping a shitty CI system with no tests? Well, add some tests for it! It's not rocket science. Yes, your CI system is part of your project and something you should be including in your work. I drew this conclusion back when I was writing C and C++ and had days where I spent more time on the build system than on the actual code. It's frustrating, but at the end of the day having a reliable way to build and test your code is no less important than the code itself. Treat it like a real project.
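
To make that concrete: once the CI logic lives in an ordinary script, it can be tested like any other code. A rough sketch in Python with pytest, where the file and function names are invented:

    # ci_version.py -- hypothetical helper a pipeline step would call;
    # the tests below run locally or in CI with plain `pytest ci_version.py`.

    def compute_tag(branch: str, sha: str) -> str:
        """Compute the container tag the pipeline will push."""
        short = sha[:7]
        if branch == "main":
            return f"release-{short}"
        return f"dev-{branch.replace('/', '-')}-{short}"

    def test_main_branch_gets_release_prefix():
        assert compute_tag("main", "0123456789abcdef") == "release-0123456"

    def test_feature_branch_is_sanitized():
        assert compute_tag("feat/login", "0123456789abcdef") == "dev-feat-login-0123456"

Nothing fancy, but it means the tag logic has a test before it ever touches a runner.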
khbnr 1 day ago||
I don't understand the love for Buildkite around here at all. And I find the author's arguments inconsistent. It definitely feels like an ad for Buildkite.

I have to admit, I have limited experience with GitHub Actions though. My benchmark is GitLab mainly.

> With Buildkite, the agent is a single binary that runs on your machines.

Yes, and so it is for most other established CI systems, with varying orchestrator tooling to spawn agents on demand on cloud providers or Kubernetes. Isn't that the default? Am I spoiled?

> Buildkite has YAML too, but the difference is that Buildkite’s YAML is just describing a pipeline. Steps, commands, plugins. It’s a data structure, not a programming language cosplaying as a config format. When you need actual logic? You write a script. In a real language. That you can run locally. Like a human being with dignity and a will to live.

Again, isn't that the default with modern CI tools? The YAML definition is a declarative data structure that lets me represent which steps to execute under which conditions. That's what I want from my CI tooling, right? That's why declarative pipelines are what everyone's doing right now, and I haven't really heard of many people wanting to implement the orchestration of their entire pipeline imperatively instead and run it on a single machine.

But that's where you'll run into limitations pretty soon with Buildkite. You have `if` conditionals, but they're quite limited. You finally have `if_changed` as of a few months ago, which you can use to run steps only if the commit / PR / tag contains changes to certain file globs, but it's again quite rudimentary. Also, you can't combine it with `if` conditionals, so you can't implement a full rebuild independent of file changes - which should be a valid use case, e.g. nightly or on main branches.

The recommended solution to all that:

> Dynamic Pipelines > In Buildkite, pipeline steps are just data. You can generate them.

To me, that's the cursed thing about Buildkite. You start your pipeline declaratively, but as soon as you branch out of the most trivial pipelines, you'll have to upload your next steps imperatively if a certain condition is met. Suddenly you'll end up with a Frankensteinian mess that looks like a declarative pipeline initially, but when you look deeper you'll find a bunch of 20+ bash scripts that conditionally upload more pipeline fragments from heredocs or other YAML files and even run templating logic on top of them. You want to have a mental model of what's happening in your pipeline upfront? You want to model dependencies between steps that are uploaded under different conditions somewhere scattered through bash scripts? Good luck with that.
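
For the record, a typical generator step ends up looking roughly like this - a sketch with made-up labels, paths and branch logic, before it grows into the mess I'm describing:

    #!/usr/bin/env python3
    # generate_pipeline.py -- run as the first step, output piped into
    # `buildkite-agent pipeline upload`. Everything here is illustrative.
    import json
    import os
    import subprocess

    branch = os.environ.get("BUILDKITE_BRANCH", "")
    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    steps = [{"label": "lint", "command": "make lint"}]  # always runs

    # The case the static YAML can't express: full rebuild on main,
    # otherwise only when the relevant files changed.
    if branch == "main" or any(f.startswith("backend/") for f in changed):
        steps.append({"label": "backend tests", "command": "make test-backend"})

    # JSON is also valid YAML, so this can be piped straight to the agent.
    print(json.dumps({"steps": steps}))

Multiply that by every conditional in the pipeline, scatter it across twenty scripts, and you have the Frankenstein.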

I really don't see how you can market it as a feature that you make me re-implement CI basics other tools just have, and even make me pay for it.

And I also don't see how that is more testable locally than a pipeline that's completely declared in YAML. Especially when your scripts need to interact with the buildkite-agent CLI to download or upload artifacts and metadata, and to upload more pipelines.

> I’ll be honest: Buildkite’s plugin system is structurally pretty similar to the GitHub Actions Marketplace. You’re still pulling in third-party code from a repo. You’re still trusting someone else’s work. I won’t pretend there’s some magic architectural difference that makes this safe.

Yep, it is, and I don't like either. I prefer GitLab's approach of sharing functionality and logic via references to other YAML files checked into a VCS. It's way easier to find out what's actually happening instead of tracing down third-party code in a certain version from an opaque marketplace.

But yes, the log experience and the possibility to upload annotations to the pipeline are quite nice compared to other tools I've used. That doesn't outweigh the disadvantages and headaches I've had with it so far, though.

---

I think many of the critique points the author had on GitHub Actions can be avoided by just using common sense when implementing your CI pipelines. No one forces you to use every feature you can declare in your pipelines. You can still declare larger groups of work as steps in your pipeline and implement the details imperatively in a language of your choice. But to me, it's nice to not have to implement most pipeline orchestration features myself and just use them - resulting in a clear separation of concerns between orchestration logic and actual CI work logic.
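
In practice that can be as simple as a pipeline step whose command is just `python3 ci/run_checks.py`, with all the actual logic in an ordinary script you can also run locally - a rough sketch with invented file and tool names:

    #!/usr/bin/env python3
    # ci/run_checks.py -- hypothetical "work logic" script; the pipeline only
    # knows it runs this one command, and locally it's the exact same entry point.
    import subprocess
    import sys

    def run(cmd: list[str]) -> int:
        print(f"--- {' '.join(cmd)}", flush=True)
        return subprocess.run(cmd).returncode

    def main() -> int:
        # The tools are just examples; the orchestrator doesn't know or care.
        exit_codes = [run(["ruff", "check", "."]), run(["pytest", "-q"])]
        return 1 if any(exit_codes) else 0

    if __name__ == "__main__":
        sys.exit(main())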

mitchjj 18 hours ago|
Yeah, not an ad. Most folks haven't heard of Buildkite; the ones who have used it are, more often than not, pretty enthusiastic.
xyst 1 day ago||
> this is a product made by one of the richest companies on earth.

nit: no, it was made by a group of engineers who loved git and wanted to make a distributed remote git repository. But it was acquired and subsequently enshittified by the richest/worst company on earth.

Otherwise the rest of this piece vibes with me.

slackfan 1 day ago||
All CI is just various levels of bullshit over a bash script anyway.
tedk-42 1 day ago|
Yes, but no need for the attitude.

Linux powers the world in this area and bash is the glue which executes all these commands on servers.

Any program or language you write to try and 'revolutionise CI' and be this glue will ultimately make a child process call to a bash/sh shell anyhow, and you'll need to read stdout, stderr and the exit codes to figure out the next steps.

Or you can just use bash.
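
For illustration, the non-bash version of that glue ends up as roughly this sketch in Python (the command is made up) - at which point you may as well have written the bash:

    import subprocess

    # Run one command, capture everything, decide what to do next.
    result = subprocess.run(["bash", "-c", "make test"],
                            capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr)
        # retry, notify, mark the step as failed, etc.
        raise SystemExit(result.returncode)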

slackfan 1 day ago||
>no need for the attitude

Why? We've spent years upon years building systems that enshittify processes. We've spent years losing talent in the industry, and the trends aren't going to reverse. We are our own worst enemy, and are directly responsible for the state of the industry, and to an extent, the world.

To not call out bullshit where one sees it, is violence.

wheatbond 1 day ago||
[dead]
_rwo 1 day ago||
The only way this title could be any better is this: Github Actions is slowly KILLING engineering teams /s

That said - every CI sucks one way or another. GitHub Actions is just good enough to fire up a simple job/automation, which seems to be the majority of use cases anyway?

I think full production CI pipelines will always be complicated in one way or another (proper caching alone is a challenge of its own); I really need to check out woodpeckerci (drone ci fork) though, as I have good memories of droneci, but possibly that's because I was younger back then xd

glub103011 1 day ago||
[dead]
black_13 22 hours ago||
[dead]
onyx_writes 1 day ago|
[flagged]
mikepurvis 1 day ago||
The cost of the one-line CI config is that you miss out on integrations with the infrastructure, GUI, etc. You can't command runners of different architectures, or save artifacts, or prompt the user to authorize a deploy, or register test results, or ingest secrets, or show separate logs for parallel tasks, or any number of other similar things.

The real answer here is to put hooks in task-running systems like Nix, Bazel, Docker Bake, CMake, and so on that permit them to expose this kind of status back to a supervising system in an agnostic way, and develop standardized calls for things like artifacts.
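
Purely hypothetically, the hook could be as small as the task runner emitting structured status events that any supervising system tails and interprets - a made-up sketch, not any existing protocol:

    # Made-up sketch: one JSON status event per line on a dedicated stream;
    # the supervising CI system tails this and renders status, logs, artifacts.
    import json
    import sys
    import time

    def emit(event: str, **fields) -> None:
        print(json.dumps({"event": event, "ts": time.time(), **fields}), file=sys.stderr)

    emit("task_started", name="//app:unit_tests")
    emit("artifact", name="coverage.xml", path="out/coverage.xml")
    emit("task_finished", name="//app:unit_tests", status="passed")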

It's just... who would actually build this? On the task runner side, it's a chicken and egg issue, and for the platform owners, the lock-in is the point. The challenge is more political than technical.

doctoboggan 1 day ago|||
This is an AI-written comment (as admitted on the profile page).

Please keep HN comments for humans.

deepsun 1 day ago|||
That's why I like Maven -- it's declarative, and it's HARD to do non-trivial things in it. But it's super easy to write your own module (using code) and make Maven call it.

Also, another point about build scripts and CI/CD -- you usually touch them rarely, and the more rarely you touch something, the more verbose it should be. That's why there's zero sense in shortening build/CI/CD commands and inventing operators to make them "more concise" -- you'll have to remember the operator each time you touch it again (like next year).

samtheprogram 1 day ago||
This is by choice, no? In most cases where I see stuff like this, it could've been a bash script. That said, the environments in different CIs are different, so it won't be totally portable, but the point still applies.