Posted by bsgeraci 23 hours ago
Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
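As a rough sketch of what "talk directly to it using their native protocols" could mean in practice: the host and port below match the quickstart later in the post, but the per-format paths are my assumptions, not documented endpoints.

```shell
# Hypothetical per-format paths; only host:port comes from the quickstart.

# pip: use the registry as a PyPI-style index.
pip config set global.index-url http://localhost:30080/pypi/simple/

# npm: use the registry as the package mirror.
npm config set registry http://localhost:30080/npm/

# docker: pull an image through the registry's OCI/Docker endpoint.
docker pull localhost:30080/library/alpine:latest
```

The point is that each client keeps speaking its own wire protocol; no wrapper CLI sits in between.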
Why I built it:
Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s were some Compaq Pentium IIs my dad brought home after his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is software that's open for everyone to see, use, and that actually runs well on whatever you've got.
Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.
The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
The AI story (I'm going to be honest about this):
I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.
AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.
Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.
Try it:
git clone https://github.com/artifact-keeper/artifact-keeper.git
cd artifact-keeper
docker compose up -d
Then visit http://localhost:30080

Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/
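Once the stack is up, a first smoke test against the running instance might look like this. The repository name "sandbox" is a guess on my part; check the docs for the real layout.

```shell
# Verify the web endpoint responds before pushing anything.
curl -fsS http://localhost:30080/ >/dev/null && echo "registry is up"

# Tag and push an image through the Docker endpoint
# ("sandbox" is a hypothetical repository name).
docker tag alpine:latest localhost:30080/sandbox/alpine:latest
docker push localhost:30080/sandbox/alpine:latest
```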
I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.
I made a discussion here :) I can keep you posted, but if you have a GitHub account, just leave a comment there and follow it so you stay up to date with that change.
Would be cool if this could also support the existing Artifactory S3 backend format, so you could just point this at your existing Artifactory S3 bucket and migrate your DB over.
Congrats on launching!
I made this discussion here. Please jump on GitHub and add some comments, and maybe we can get this added :)
I think it’s cool that the OSS version has everything, but I hope you’re considering adding an actual enterprise tier for paid support, because in my experience that’s the killer feature large enterprises care about.
If your OSS service becomes a mission-critical service (which an artifact repository usually is), a large org will have to invest in a team that can operate and own it anyway.
If throwing some money at the vendor takes away some of the responsibility (= less time spent by in-house team on ops) then paying for an enterprise support SLA is a feature, not a bug.
It would be great to see more competition in the space even though my current team isn’t working with this problem!
My recommendation when testing out hands-free agentic coding: know that it is not fully hands-free. I find myself babysitting a lot of terminals going at once, like managing a bunch of interns or junior developers.
It is important to plan plan plan.
I want to eventually switch and play with self-hosted models, but for most agentic stuff Claude is killing it in terms of results.
These tools can’t architect clean solutions that cut out massive chunks of code, and they can’t talk to users and decide whether what they’re building makes sense. For that, we need a human touch.
But coding agents grant insane leverage if they’re just told when they got it wrong and given a chance to get it right.
I think if you follow a few rules whenever making changes, and use all the latest tools (linters, security checkers, end-to-end test frameworks, and any other helpful tooling), you can really make stuff happen.
Will your stuff be really open source?
There are 7 or 8 repositories in this org now. Feel free to take, use, or help improve the code. MIT open source.
I've read the main readme, so excuse me if these comments are covered already, but key features and/or opportunities:
- backend supporting Azure (Nexus has this under Pro, though community at least does support S3)
- a clear, navigable S3 structure that could be sorted by a human if needed, like the on-disk backend of Nexus 2 used to have, not like Nexus' current organisation/obfuscation (which would be understandable but for...)
- maintenance routines that actually work (Nexus' are a joke, with very limited features for both cleanup and the task set, leaving ever-growing detritus)
- automatically taking the latest from upstreams is a big problem in the npm world; it would be a perfect fit to introduce this kind of staging concept and a window on upstream (proxied) repos
- RESTful APIs and deep links to artifacts for ease of integration
- we end up proxying other sources of files in a web proxy, since there's no easy "pass through" via Nexus where we don't want to copy the current files into our DB or S3 but just want to pass the latest to the consumer; a direct proxy feature with URL remapping would be cool
Things I'd have to play around with to understand what it does currently:
- whether it has proper proxy and group support; composition is completely essential
- whether the caching is sensible (Nexus does a poor job when bad states get cached, though it's a hard problem)
- efficient (Maven) metadata generation (Nexus is abysmally slow)
- whether RBAC is clear over the repo structures (Nexus does OK here, except everything is repo-level AND the initial setup is very painful)
- P2 consumption looks to be a supported format, but P2 hosting I think was nerfed after Nexus v2.11, and some clients still use that
- RPMs added ("yum" in Nexus), but as with repo hierarchies I'd need to be assured they can be nested and will correctly produce a merged repomd.xml and the like so they function properly
Other comments:
- having the security scanning in an open source tool would be amazing
- it would be very hard to get clients to trust this without either a community and review process or a company (one that "can be sued") behind it. I know it's very early days, but it's a bit chicken-and-egg: if I can't use this with clients, I wouldn't use it for anything. Not that I am a valuable customer by myself, but I influence the decisions of clients who then need that support
Their security comes from transparency and years of public audit, not obscurity. The same principle applies to software. I see the legal argument for wanting a vendor to sue, and I've thought about something like Canonical's model for Ubuntu — offering paid support around a free product. But I don't have years of production use behind this yet. We all start somewhere. So for now, this stays open and free for everyone to use, and for me and others to maintain.
I've spent quite a long time looking at artifact storage, both for work and for personal use and this project literally scratches that itch. So featureful (assuming they're not placeholders ;) ) and yes, Claude Code, but still - the proof will be in whether it works (and how clean the codebase feels - you're making it sound promising :D ).
Very excited to try this - well done :)
CLI with journal of instructions, TUI?
Basically I am exposing endpoints, so automation can be done with just curl requests. But as a Linux nerd I do enjoy my share of CLIs/TUIs.
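To illustrate the curl-only automation described above, here is a sketch under assumed routes. The paths and the bearer-token header are my guesses, not confirmed API; the real routes would come from the OpenAPI 3.1 spec mentioned in the post.

```shell
# All paths below are hypothetical; substitute the real routes from the OpenAPI spec.
TOKEN="<api-token>"   # however the registry issues tokens

# Search for an artifact (hypothetical path).
curl -fsS -H "Authorization: Bearer $TOKEN" \
  "http://localhost:30080/api/v1/search?q=alpine"

# Upload a generic artifact into a repository (hypothetical path).
curl -fsS -H "Authorization: Bearer $TOKEN" \
  -T ./dist/output.tar.gz \
  "http://localhost:30080/api/v1/repos/my-repo/output.tar.gz"
```

The same two calls dropped into a CI script are all the "SDK" a pipeline strictly needs, which is presumably the appeal of the endpoint-first approach.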