Posted by bsgeraci 21 hours ago

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust (github.com)
I'm a software engineer who keeps getting pulled into DevOps no matter how hard I try to escape it. I recently moved into a Lead DevOps Engineer role writing tooling to automate a lot of the pain away. On my own time outside of work, I built Artifact Keeper — a self-hosted artifact registry that supports 45+ package formats. Security scanning, SSO, replication, WASM plugins — it's all in the MIT-licensed release. No enterprise tier. No feature gates. No surprise invoices.

Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
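
To make the plugin side concrete, here is a minimal host-side sketch using Wasmtime (simplified, and the parse_metadata export is a stand-in I made up for illustration, not the real plugin ABI):

  use anyhow::Result;
  use wasmtime::{Engine, Instance, Module, Store};

  fn main() -> Result<()> {
      // Load a format plugin compiled to WASM. Hot reload is
      // essentially re-running this load when the file changes.
      let engine = Engine::default();
      let module = Module::from_file(&engine, "plugins/my_format.wasm")?;
      let mut store = Store::new(&engine, ());
      let instance = Instance::new(&mut store, &module, &[])?;

      // Hypothetical export: the plugin reads artifact bytes from its
      // linear memory at (ptr, len) and returns a status code.
      let parse = instance
          .get_typed_func::<(i32, i32), i32>(&mut store, "parse_metadata")?;
      let status = parse.call(&mut store, (0, 0))?;
      println!("plugin returned {status}");
      Ok(())
  }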

Why I built it:

Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s was some Compaq Pentium IIs my dad brought home when his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is software that's open for everyone to see and use, and that actually runs well on whatever you've got.

Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.

The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
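
If it helps to picture the mesh, here is a toy sketch of the chunked-transfer idea (illustrative only; not the actual wire format or field names):

  /// Toy sketch of coordinator-less, chunked replication (illustrative
  /// only; not the project's actual wire format or field names).
  struct Peer { url: String }

  /// Split a blob into fixed-size chunks. Each (index, bytes) pair is
  /// sent and retried independently, so a flaky peer never forces a
  /// restart of the whole transfer.
  fn chunks(blob: &[u8], chunk_size: usize) -> impl Iterator<Item = (usize, &[u8])> {
      blob.chunks(chunk_size).enumerate()
  }

  fn main() {
      // Every node holds its own peer list; no central coordinator.
      let peers = [Peer { url: "https://node-b.internal".into() }];
      let blob = vec![0u8; 10 * 1024 * 1024]; // a 10 MiB artifact
      for peer in &peers {
          for (i, chunk) in chunks(&blob, 4 * 1024 * 1024) {
              // In the real system: an authenticated HTTP PUT per chunk.
              println!("send chunk {i} ({} bytes) to {}", chunk.len(), peer.url);
          }
      }
  }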

The AI story (I'm going to be honest about this):

I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.

AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.

Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.

Try it:

  git clone https://github.com/artifact-keeper/artifact-keeper.git
  cd artifact-keeper
  docker compose up -d
Then visit http://localhost:30080

Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/

I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.

https://github.com/artifact-keeper

145 points | 60 comments
antonyh 17 hours ago|
I appreciate the honesty about using Claude and the time it took to build this, and it shows how things can look when guided by someone who knows what they are doing.

On the other hand, it also shows that it took three weeks, so why should I use this instead of building a custom toolchain myself that is optimised for what I need and actually use? Trimming away the 45+ formats to the 5 or so that matter to my project. It raises the question - is 'enterprise' software doomed in favour of a proliferation of custom built services where everybody has something unique, or is the real value in the 'support' packages and SLAs? Will devs adopt this and put 'Artifact Keeper' on their CV, or will they put 'built an artifact toolchain with Claude'?

But then again, kudos to you for building something that can (and probably should) eat the lunch of the enterprise-grade tools that are simply unaffordable to small business, individual contractors, and underfunded teams. Truth be told, I'm not going to build my own, so this is certainly something I want to put in a sandbox and try out, and also this is inspirational and may finally convince me that I should give Claude a fair go if it's capable of being guided to create high quality output.

raphinou 17 hours ago||
I'm impressed with the speed of development. I didn't take a look at the quality of the code, though. I'm using GLM and Kimi K2.5, and I have a lot of corrections to apply to the code. Is Claude that much better? Or is my process bad? OP: what's your development process?
antonyh 16 hours ago|||
I've not done enough Rust to truly know, but it looks reasonable from looking at the tests, a few models, and some implementation code.

It doesn't use the 'unsafe' keyword anywhere, but that's not necessarily an indicator. It uses unsafe-libyaml, which is what it sounds like (a hacky port of libyaml) but is no longer maintained (archived on GH in March 2024), and there may be better choices. An SBOM would highlight these dependencies better than me doing random searches through the code.
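
For anyone who wants to repeat the exercise, cargo-audit checks Cargo.lock against the RustSec advisory database, which also carries informational advisories for unmaintained crates:

  cargo install cargo-audit
  cargo audit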

I'm not sure I'd have put a default in the OIDC callback pointing to localhost; that's about the only thing I've seen in a quick 5-minute skim through. I do like the comments and the lack of emojis :-)

I too would like to know the process, if OP is willing to share.

bsgeraci 15 hours ago||
I have had Claude go back and forth with a code-simplifier agent (one they developed) and a security agent.

I think adding this to your workflow helps, but you have to keep end-to-end testing in mind, because some changes can break things real fast.

My process is pretty plain outside of paying Anthropic too much money a month. The only extra thing I am using currently is beads. I was using speckit and ralph-loop, but as of last week they don't seem to be needed. I think Anthropic is baking some of these tools into Claude Code.

antonyh 14 hours ago||
Sounds really clean and simple, combined with classic developer diligence and hard effort to get it built right. Thanks for sharing.
bsgeraci 15 hours ago|||
Claude is... unfortunately... that much better. They really know how to build the tools that integrate into the CLI, which just makes the flow so much better.

The only extra thing I am doing now is beads: https://github.com/steveyegge/beads

I was using speckit and ralph-loop, but I think Anthropic baked ralph-loop in. It's basically a dumb while-true loop that runs until you hit the break condition.

0x457 8 hours ago|||
Coding agents changed "build vs buy" dynamics in my opinion. Hopefully it will result in SaaS vendors dropping pay-gated SSO.
bsgeraci 16 hours ago|||
I would say don't trust it yet, but use it and try it. Hopefully over time I can build trust through people using it.
antonyh 14 hours ago||
Trust it to proxy artifacts from the web? Yes I think so.

Trust it not to leak credentials? No, that's something that is never taken for granted.

Trust it to hold a full history of uploaded binaries? That depends on the value of the releases. For incubator work, or web projects, or even App Store apps where the release is handed to those stores to manage, maybe there's enough trust. I just wouldn't use it for code where I want access to many stable versions, and I wouldn't put it publicly on the web either - not that I would do so with Sonatype Nexus without vendor support and many safeguards. I think it'll earn trust over time, once folk are convinced to use it for real workloads.

There are a lot of forms of trust.

esafak 9 hours ago||
Why would you re-invent the wheel? Are the existing options that bad?
bsgeraci 6 hours ago||
There is no existing option :) unless you know where one is. Artifactory OSS is a joke, and no other product is out there. Trust me, I hate reinventing the wheel... I'd rather take a nice wheel and use it.

If you find an existing full-blown Artifactory alternative that is open source, let me know.

edoceo 6 hours ago||
I was working on one, for similar reasons, but I may just adopt yours - same wheel reasons.
stroebs 18 hours ago||
I’m a fairly heavy user of the JFrog platform with Enterprise+, Xray, their new Curation license, and my org is spending in excess of $500k/year on artifact storage. Not including my time babysitting it. I’d love to see the end of it, and I hope you manage to build a community around this.

Part of the reason we pay the big license fee is so we have someone to turn to when it inevitably breaks because we’ve used it in a way nobody has before. In Jan last year we were using 30TB of artifact storage in S3. That’s 140TB today.

Where do you get your CVE data? Would built artifacts have their CVEs updated after the fact? Do you have blocking policies on artifacts based on CVEs, licenses, artifact age, etc?

bsgeraci 16 hours ago||
I am using OpenSCAP and Trivy. Can you add a discussion to my GitHub about some of this? I would love some of your feedback on what you need at your level. I need to check the update mechanism so we are keeping the vulnerability database up to date. I also want a way to keep it up to date when it is air-gapped - not everyone's use case, but one I have dealt with at my jobs.

I still need to put some e2e testing on those policies. https://demo.artifactkeeper.com/security/policies is a demo where you can add a policy. Again, I need to build a series of end-to-end tests for that one, but it was designed with this in mind :) I really want a staging area and promotion of packages after scans.

On my list of things to do.
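
If it helps, this is the rough shape of the quarantine-then-promote idea (a sketch with made-up names, not the shipped engine):

  // Sketch of the quarantine idea (not the shipped engine; names are
  // illustrative). An artifact only leaves staging once every scanner
  // verdict passes policy.
  #[derive(PartialEq)]
  enum Verdict { Pass, Fail }

  struct ScanResult {
      scanner: &'static str, // "trivy", "grype", "openscap"
      max_cvss: f32,
  }

  fn evaluate(results: &[ScanResult], cvss_threshold: f32) -> Verdict {
      for r in results {
          if r.max_cvss >= cvss_threshold {
              // Stays quarantined in the staging repo.
              println!("blocked: {} reported CVSS {}", r.scanner, r.max_cvss);
              return Verdict::Fail;
          }
      }
      Verdict::Pass // eligible for promotion to the stable repo
  }

  fn main() {
      let results = [
          ScanResult { scanner: "trivy", max_cvss: 5.0 },
          ScanResult { scanner: "grype", max_cvss: 9.8 },
      ];
      println!("promote: {}", evaluate(&results, 9.0) == Verdict::Pass);
  }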

stroebs 14 hours ago||
I'll carve out some time to add a discussion as I've become quite passionate about artifact storage in the last 18 months as a result of having to look after this behemoth. Air-gapping is also pretty important - JFrog supports granular proxy specification by repo.

It's a great start. What I can say is that granularity of CVEs in policies will become important for larger consumers. We have about 4.5mn artifacts, so even getting CVSSv3 10s blocked was a challenge, let alone 9.8s.

raphinou 18 hours ago|||
I looked at your profile but didn't see any contact info, hence this comment. I'm working on a fully open source multisig solution for artifact authentication. I would be interested in your opinion, and whether you see opportunities for such a project in companies like the one you work for now, to make the project financially sustainable. Can you contact me? (Email in my profile)

Edit: the project, if anyone reading this is interested: http://github.com/asfaload/asfaload (looking for feedback!)

eyeris 18 hours ago|||
Since the CVE data is from Trivy/Grype, that should be osv.dev
M0r13n 18 hours ago|||
JFrog's platform is fairly robust. Only time will tell if this project can keep up. I highly doubt it's more than a fancy-looking prototype at this stage.
gjvc 16 hours ago||
tell me mr armchair general, what have you done that's worth talking about?
M0r13n 8 hours ago|||
My comment was not intended to be any criticism or to downplay the performance - quite the opposite :)
bsgeraci 6 hours ago||
I did not take your original comment as criticism. Feel free to follow the repository and see how long I can keep it alive :)
bsgeraci 16 hours ago|||
I think it is right to be skeptical, and I hope this project can prove people wrong.
moezd 18 hours ago||
Unfortunately I'm also in the same camp, with SBOM generation, Xray, Curation, the whole shebang. I couldn't find these in the docs either, which would matter in my case.
bsgeraci 15 hours ago|||
OK, updated the docs: https://artifactkeeper.com/docs/security/scanning/

It should have info on the CVEs. Please leave some issues on the repository if you want to see more information on the actual dashboard/UI :)

Thanks for the feedback!

bsgeraci 16 hours ago|||
I will add some todos for myself. I know how important this is.
j1elo 8 hours ago||
Question about the license choice: we're now past so many projects that started out as FOSS with a longer-term plan of monetization and/or corporate-tier support, and whose choice of license let bigger players just take the code and run a competing service with proprietary extensions (which is what something as exceedingly open as MIT allows). Isn't there any worry that this could happen again here?

I'm curious whether AGPL shouldn't be more common (even though it's not a silver bullet), but MIT projects with foreseeable needs of some monetization to survive long term never cease to show up, despite so much FOSS drama in the last couple of years.

bsgeraci 6 hours ago|
Great question and I think about this a lot. I chose MIT deliberately and I'll explain why.

My graduate research focused on common computer security misconceptions — one of the biggest being that open source is inherently insecure. The algorithms and systems we trust most are the ones open to public scrutiny. AES was selected through an open competition where every candidate was published for the world to attack. TLS, SHA-256, RSA — their security comes from transparency, not obscurity. I believe the same applies to software infrastructure.

Could a bigger player take this and run a competing service? Sure, MIT allows that. But I'd rather have the code out there being used, audited, and improved than restrict it to protect a business model I don't even have yet. If someone like AWS wraps this in a managed service, that honestly means I built something worth wrapping — and the open version still exists for anyone who wants to self-host.

I've thought about the Canonical model — paid support around a free product — and I might go there someday. But I don't have years of production use behind this yet. We all start somewhere. Right now I'd rather focus on making the software good and building a community around it than optimizing a license for a monetization strategy that doesn't exist.

AGPL is a valid choice and I respect projects that use it. But for me, MIT is a statement about what I actually care about — the code being out there for everyone.

j1elo 5 hours ago||
Yours is truly an informed and well thought out decision! I appreciate it, and enjoyed reading your reasoning; thanks for the clarity, and props for the whole effort of this project!

I agree that the permissive extreme is indeed the most likely to attract major usage. On the other hand, its freedom is more fragile. All is well with each project striking its preferred balance on that axis.

ashishbijlani 5 hours ago||
This is a great initiative. Thanks for sharing! I will use it to create my personal cache of package registries (beyond the obvious advantages of caching, it can also mitigate typo-squatting attacks).

BTW, if there's interest, I'd love to collaborate and integrate Packj [1] audit for malware scans.

1. Packj (https://github.com/ossillate-inc/packj) detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static+dynamic code analysis to scan for indicators of compromise (e.g., spawning of shell, use of SSH keys, network communication, use of decode+eval, etc). It also checks for several metadata attributes to detect bad actors (e.g., typo squatting).

kamma4434 18 hours ago||
I have been looking for ways to only use local packages for our software builds. I am looking for something that can act as a local cache for Java and NPM packages. The idea would be that developers can only use packages belonging to the allowed set for development, and there is a vetting process where packages are added to the allowed set (or removed).

I have been playing with the idea of using a single git repository to host them, Java packages as an Ivy repository and JavaScript packages as simply the contents of node_modules.
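
The gate I have in mind is basically just this check at resolve time (a sketch; names made up):

  use std::collections::HashSet;

  /// Sketch of the vetting gate (names made up): builds can only
  /// resolve (package, version) pairs that are in the vetted set.
  fn is_allowed(vetted: &HashSet<(&str, &str)>, pkg: &str, version: &str) -> bool {
      vetted.contains(&(pkg, version))
  }

  fn main() {
      let mut vetted = HashSet::new();
      vetted.insert(("com.google.guava:guava", "33.0.0-jre")); // passed review
      assert!(is_allowed(&vetted, "com.google.guava:guava", "33.0.0-jre"));
      assert!(!is_allowed(&vetted, "left-pad", "1.3.0")); // not vetted: blocked
      println!("ok");
  }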

Does anybody do something similar?

mqus 5 hours ago||
I just counted the list of supported formats and landed at 35. This is still impressive, but if even this simple fact is wrong in the project description ("40+ formats!!!"), I have no faith at all that any of this actually works.
bsgeraci 23 minutes ago||
Yea, so technically some of the formats are aliased :)

Check this: https://artifactkeeper.com/docs/package-formats/ You should see around 35, plus 9 aliases that map to the other ones.

awakeasleep 5 hours ago||
Well, the enterprise alternatives don't really work either.
figmert 16 hours ago||
I've been wanting something like this that isn't Artifactory (I've run it at previous companies; it's not a great experience), so I had been thinking of doing it myself, but never bothered. One idea I had is to write a proxy that essentially translates the various package manager endpoints into OCI registry calls, so that everything is stored on any OCI backend. My thinking was that this way you could in theory use any OCI backend (including readily available, battle-tested self-hosted applications), but the proxy would never need its own state, thus making it (hopefully) easier to run.

Now that you've implemented it: was there a reason you didn't go for such an approach, so that someone hosting something like this would have less to worry about?
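
To sketch the mapping I mean (made-up names; real metadata endpoints like npm's package document would need more thought):

  /// Sketch of the stateless-proxy idea: map each package manager's
  /// native download request onto an OCI distribution-spec path, so
  /// all bytes live in any standard OCI registry. (Illustrative only;
  /// names and repo layout are made up.)
  fn npm_to_oci(base: &str, pkg: &str, version: &str) -> String {
      // npm: GET /{pkg}/-/{pkg}-{version}.tgz  ->  OCI manifest pull
      format!("{base}/v2/npm/{pkg}/manifests/{version}")
  }

  fn cargo_to_oci(base: &str, krate: &str, version: &str) -> String {
      // cargo: GET /api/v1/crates/{crate}/{version}/download
      format!("{base}/v2/cargo/{krate}/manifests/{version}")
  }

  fn main() {
      let base = "https://registry.internal";
      println!("{}", npm_to_oci(base, "left-pad", "1.3.0"));
      println!("{}", cargo_to_oci(base, "serde", "1.0.200"));
  }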

bsgeraci 14 hours ago|
I have used ORAS, and that might be an interesting approach, but I'm not sure how it would work with search, version listing, dependency resolution, metadata queries, permissions, and audit logs.

Are you suggesting some hybrid approach?

no_circuit 12 hours ago||
Impressive-looking project generated with AI help. I have similar goals of building an artifacts system myself.

I think the approach of multi-format, multi-UI, and new (to you) programming language isn't optimal even with AI help. Any mistake that is made in the API design or internal architecture will impact time and cost since everything will need to be refactored and tested.

The approach I'm trying to take for my own projects is to create a polished vertical slice and then ask the AI to replicate it for other formats / vertical slices. Are there any immediate use cases to even use and maintain a UI?

So a few comments on the code:

- the feature list claims rate limiting, but the code seems unused other than in unit tests... if so, why wasn't this dead code detected?

- should probably follow Google/Buf style guide on protos and directory structure for them

- besides protos, we probably need to rely more on the OpenAPI spec as well for code generation to save on AI costs; I see the OpenAPI spec was only used as task input for the AI?

- if the AI isn't writing a Postgres replacement for us, why have it write anything to do with auth? Perhaps have setup instructions to use something like Keycloak or the Ory system?

geauxvirtual 6 hours ago||
How much of this code was actually reviewed? Taking a quick glance through some of the features being touted around SSO, there are a few vulnerabilities, and I wonder if these actually work and have been tested with different providers.

* I say this as an engineer who has supported an authentication platform for years at a SaaS company, and I know that no two IdPs implement SAML the same way.

bsgeraci 57 minutes ago|
That is interesting. I have a good idea about setting up some red-team and blue-team agents with Claude Code and seeing if we can improve security by testing things. In the pen-test world these AI tools have basically beaten all humans who break into systems without AI assistance.

If you have any more details, I would love to hear about your experience and what you think would be useful to look at. I will make a ticket based on this concern. I really want to make this as secure as we can and have people poke at it and do code reviews. :)

Or analyze the code base. I am using security agents to harden the code base and building end-to-end tests based on that.

Adding some security agents to the loop is a great idea!

the_harpia_io 13 hours ago|
The Trivy + Grype combo is interesting - in my experience they catch different things, especially on container scanning vs dependencies. Do you see them disagree much on severity?

Re: the vibe coding angle - the thing I keep running into is that standard scanners are tuned for human-written code patterns. Claude Code output is structurally different: more verbose, and weirdly sparse on the explicit error handling that would normally trigger SAST rules. Auth code especially - it looks textbook correct and passes static analysis fine, but edge cases are where it falls apart. Token validation that works great except for malformed inputs, auth checks that miss specific header combinations, that kind of thing.
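
A hypothetical example of what I mean (not from this repo): a Bearer header parse where the correctness all lives in the malformed-input handling:

  // Hypothetical illustration (not from this repo). A naive
  // strip_prefix("Bearer ") rejects the lowercase scheme (valid per
  // RFC 7235) and accepts empty tokens; the edge cases are the point.
  fn bearer_token(header: &str) -> Option<&str> {
      let (scheme, token) = header.split_once(' ')?;
      // The auth scheme is case-insensitive per RFC 7235.
      if !scheme.eq_ignore_ascii_case("bearer") {
          return None;
      }
      let token = token.trim();
      // Explicitly reject empty tokens and embedded whitespace.
      if token.is_empty() || token.contains(char::is_whitespace) {
          return None;
      }
      Some(token)
  }

  fn main() {
      assert_eq!(bearer_token("Bearer abc123"), Some("abc123"));
      assert_eq!(bearer_token("bearer abc123"), Some("abc123")); // scheme case
      assert_eq!(bearer_token("Bearer "), None);      // empty token
      assert_eq!(bearer_token("Basic abc123"), None); // wrong scheme
      println!("ok");
  }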

The policy engine sounds flexible enough that people could add custom rules for AI-specific patterns? That'd be the killer feature tbh.

bsgeraci 6 hours ago|
I am totally thinking about adding this, so you can connect to an API or use self-hosted models that run in a container if you have the resources!!!! You are spot on.