Posted by codesuki 1 day ago
However, there are very real things LLMs can do that greatly reduce the pain here. Understanding 800 lines of bash is simply not the bogeyman it was a few years ago. It fits entirely in context, and LLMs are excellent at bash. With a bit of critical thinking when they hit a wall, LLM agents are even great at GitHub Actions.
The scariest thing about this article is the number of things it's right about. Yet my uncharacteristic response to that is one big shrug, because frankly I'm not afraid of it anymore. This stuff has never been hard, or maybe it has. Maybe it still is for people/companies who have super complex needs. I guess we're not them. LLMs are not solving my most complex problems, but they're killing the pain of glue left and right.
I have to admit, I have limited experience with GitHub Actions though. My benchmark is GitLab mainly.
> With Buildkite, the agent is a single binary that runs on your machines.
Yes, and so it is for most other established CI systems, with varying orchestrator tooling to spawn agents on demand on cloud providers or Kubernetes. Isn't that the default? Am I spoiled?
> Buildkite has YAML too, but the difference is that Buildkite’s YAML is just describing a pipeline. Steps, commands, plugins. It’s a data structure, not a programming language cosplaying as a config format. When you need actual logic? You write a script. In a real language. That you can run locally. Like a human being with dignity and a will to live.
Again, isn't that the default with modern CI tools? The YAML definition is a declarative data structure that lets me describe which steps to execute under which conditions. That's what I want from my CI tooling, right? That's why declarative pipelines are what everyone's doing right now, and I haven't heard many people asking to implement the orchestration of their entire pipeline imperatively and run it on a single machine instead.
But that's where you'll run into limitations pretty soon with Buildkite. You have `if` conditionals, but they're quite limited. Since a few months ago there's finally `if_changed`, which lets you run steps only if the commit / PR / tag touches certain file globs, but it's again quite rudimentary. You also can't combine it with `if` conditionals, so you can't implement a full rebuild independent of file changes - which should be a valid use case, e.g. nightly or on main branches.
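A minimal sketch of what I mean - the exact `if_changed` attribute syntax here is my assumption from the docs, and the labels / script paths are made up:

```yaml
steps:
  - label: "build frontend"
    command: "./scripts/build-frontend.sh"
    # only runs when matching files changed
    if_changed: "frontend/**"

  - label: "full rebuild"
    command: "./scripts/build-all.sh"
    # `if` conditions on build metadata work, but can't be combined with if_changed
    if: build.branch == "main" || build.source == "schedule"
```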
The recommended solution to all that:
> Dynamic Pipelines > In Buildkite, pipeline steps are just data. You can generate them.
To me, that's the cursed thing about Buildkite. You start your pipeline declaratively, but as soon as you move beyond the most trivial pipelines, you have to upload your next steps imperatively whenever a certain condition is met. Suddenly you end up with a Frankensteinian mess: it looks like a declarative pipeline definition at first, but look deeper and you'll find 20+ bash scripts that conditionally upload more pipeline fragments from heredocs or other YAML files, and even run templating logic on top of them. You want an upfront mental model of what's happening in your pipeline? You want to model dependencies between steps that are uploaded under different conditions, scattered across bash scripts? Good luck with that.
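For anyone who hasn't seen the pattern, a stripped-down sketch of what those scripts tend to look like (script name, branch check and steps are illustrative):

```bash
#!/usr/bin/env bash
# .buildkite/maybe-deploy.sh - conditionally generate and upload more steps at runtime
set -euo pipefail

if [[ "${BUILDKITE_BRANCH}" == "main" ]]; then
  buildkite-agent pipeline upload <<'YAML'
steps:
  - label: ":rocket: deploy"
    command: "./scripts/deploy.sh"
YAML
fi
```

Now multiply that by 20+ scripts, sprinkle templating on top, and try to reconstruct the step graph in your head.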
I really don't see how you can market it as a feature that I have to re-implement CI basics other tools just have - and even pay for the privilege.
And I also don't see how that is more testable locally than a pipeline that's completely declared in YAML - especially when your scripts need the buildkite-agent CLI to download or upload artifacts and meta-data, and to upload more pipelines.
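Concretely, the moment a script contains calls like these (keys and globs made up), "just run it locally" stops working without stubbing out the agent:

```bash
# fetch an artifact produced by an earlier step
buildkite-agent artifact download "dist/*.tar.gz" .

# read a value another step stored on the build
VERSION="$(buildkite-agent meta-data get "release-version")"
```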
> I’ll be honest: Buildkite’s plugin system is structurally pretty similar to the GitHub Actions Marketplace. You’re still pulling in third-party code from a repo. You’re still trusting someone else’s work. I won’t pretend there’s some magic architectural difference that makes this safe.
Yep, it is, and I don't like either one. I prefer GitLab's approach of sharing functionality and logic via references to other YAML files checked into a VCS. It's way easier to find out what's actually happening than tracing down third-party code in a certain version from an opaque marketplace.
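For reference, a sketch of the GitLab style I mean (project path, ref and file are placeholders):

```yaml
include:
  - project: "my-group/ci-templates"
    ref: "v1.2.0"
    file: "/templates/build.yml"
```

The ref pins you to reviewable YAML in a repo you can read, rather than to a marketplace listing.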
But yes, the log experience and the possibility to upload annotations to the pipeline is quite nice compared to other tools I've used. Doesn't outweigh the disadvantages and headaches I had with it so far though.
---
I think many of the author's critique points about GitHub Actions can be avoided by just using common sense when implementing your CI pipelines. No one forces you to use every feature you can declare in your pipelines. You can still declare larger groups of work as steps in your pipeline and implement the details imperatively in a language of your choice - roughly as sketched below. But to me, it's nice to not have to implement most pipeline orchestration features myself and to just use them - resulting in a clear separation of concerns between orchestration logic and actual CI work logic.
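Rough shape of what I mean, in GitLab terms (stage names and script paths are placeholders):

```yaml
build:
  stage: build
  script:
    - ./ci/build.sh

integration-test:
  stage: test
  needs: ["build"]
  script:
    - ./ci/integration-test.sh
```

The YAML stays a thin map of what runs when; the scripts hold the actual logic and run the same on a laptop.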
nit: no, it was made by a group of engineers that loved git and wanted to make a distributed remote git repository. But it was acquired/bought out then subsequently enshittified by the richest/worst company on earth.
Otherwise the rest of this piece vibes with me.
Linux powers the world in this area and bash is the glue which executes all these commands on servers.
Any program or language you write to try and 'revolutionise CI' and be this glue will ultimately spawn a child bash/sh process anyway, and you'll need to read stdout, stderr and the exit code to figure out the next steps.
Or you can just use bash.
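Which is all any of these tools end up doing under the hood - something like this, where the build script is made up:

```bash
# run a step, capture its output and exit code, decide what happens next
out="$(./build.sh 2>&1)"
status=$?

if [[ $status -ne 0 ]]; then
  echo "step failed with exit code $status"
  echo "$out"
  exit "$status"
fi
```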
Why? We've spent years upon years building systems that enshittify processes. We've spent years losing talent in the industry, and the trends aren't going to reverse. We are our own worst enemy, and are directly responsible for the state of the industry, and to an extent, the world.
To not call out bullshit where one sees it, is violence.
That said - every CI sucks one way or another; GitHub Actions is just good enough to fire up a simple job/automation, which seems to be the majority of use cases anyway?
I think full production CI pipelines will always be complicated in one way or another (proper caching alone is a challenge in its own right). I really need to check out Woodpecker CI (a Drone CI fork) though, as I have good memories of Drone CI - but possibly that's just because I was younger back then xd
The real answer here is to put hooks in task-running systems like Nix, Bazel, Docker Bake, CMake, and so on that permit them to expose this kind of status back to a supervising system in an agnostic way, and develop standardized calls for things like artifacts.
It's just... who would actually build this? On the task runner side, it's a chicken and egg issue, and for the platform owners, the lock-in is the point. The challenge is more political than technical.
Please keep HN comments for humans.
Also, another point about build scripts and CI/CD -- you usually touch them rarely, and the rarer you touch something, the more verbose it should be. That's why there's zero sense in shortening build/CI/CD commands and inventing operators to make them "more concise" -- you'll have to re-remember the operator each time you touch it again (like next year).