
Posted by gonzalovargas 21 hours ago

Google Workspace CLI(github.com)
879 points | 273 comments
ksri 13 hours ago|
I have been working on extrasuite (https://github.com/think41/extrasuite). This is like terraform, but for google drive files.

It provides a git like pull/push workflow to edit sheets/docs/slides. `pull` converts the google file into a local folder with agent friendly files. For example, a google sheet becomes a folder with a .tsv, a formula.json and so on. The agent simply edits these files and `push`es the changes. Similarly, a google doc becomes an XML file that is pure content. The agent edits it and calls push - the tool figures out the right batchUpdate API calls to bring the document in sync.

None of the existing tools allow you to edit documents. Invoking batchUpdate directly is error prone and token inefficient. Extrasuite solves these issues.

In addition, Extrasuite also uses a unique service token that is 1:1 mapped to the user. This means that edits show up as "Alice's agent" in google drive version history. This is secure - agents can only access the specific files or folders you explicitly share with the agent.

This is still very much alpha - but we have been using this internally for our 100 member team. Google sheets, docs, forms and app scripts work great - all using the same pull/push metaphor. Google slides needs some work.
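The push step described above (diff the edited local file, then emit the right batchUpdate calls) can be sketched roughly like this. This is a minimal illustration, not extrasuite's actual implementation: the request shapes follow the Google Docs `batchUpdate` API (`insertText`, `deleteContentRange`), and the 1-based indexing is an assumption that only holds for a plain-text body.

```python
import difflib

def text_diff_to_requests(old: str, new: str) -> list[dict]:
    """Turn a plain-text diff into Docs-style batchUpdate requests.

    Google Docs body indices are 1-based. Opcodes are emitted in
    reverse document order so that earlier indices stay valid as
    each request is applied.
    """
    matcher = difflib.SequenceMatcher(a=old, b=new)
    requests = []
    for tag, i1, i2, j1, j2 in reversed(matcher.get_opcodes()):
        if tag in ("delete", "replace"):
            requests.append({"deleteContentRange": {
                "range": {"startIndex": i1 + 1, "endIndex": i2 + 1}}})
        if tag in ("insert", "replace"):
            requests.append({"insertText": {
                "location": {"index": i1 + 1}, "text": new[j1:j2]}})
    return requests
```

The tool then only has to ship the resulting request list to the API, rather than having the agent hand-author batchUpdate JSON.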

lewisjoe 1 hour ago||
Excellent project! I see that the agent modifies the google docs using an interesting technique: convert doc to html, AI operates over the HTML and then diff the original html with ai-modified html, send the diff as batchUpdate to gdocs.

IMO, this is a better approach than the one used by Anthropic's docx editing skill.

1. Did you compare this one with other document editing agents? Did you have any other ideas on how to make AI see and make edits to documents?

2. What happens if the document is a big book? How do you manage context when loading big documents?

PS: I'm working on an AI agent for Zoho Writer (a gdocs alternative) and I've landed on a similar HTML-based approach. The difference is I ask the AI to use my minimal commands (addnode, replacenode, removenode) to operate over the HTML, and I convert them into ops.

This works pretty well for me.
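The minimal-commands idea above can be sketched over a plain XML tree. The op names come from the comment; their exact semantics here are my guess for illustration.

```python
import xml.etree.ElementTree as ET

def apply_commands(root: ET.Element, commands: list[dict]) -> None:
    """Apply minimal editing ops to an XML/HTML tree.

    Op names follow the comment above (addnode, replacenode,
    removenode); the payload shape is a made-up illustration.
    """
    for cmd in commands:
        parent = root.find(cmd["parent"]) if cmd.get("parent") else root
        if cmd["op"] == "addnode":
            node = ET.SubElement(parent, cmd["tag"])
            node.text = cmd.get("text", "")
        elif cmd["op"] == "removenode":
            parent.remove(parent.find(cmd["target"]))
        elif cmd["op"] == "replacenode":
            target = parent.find(cmd["target"])
            target.text = cmd.get("text", target.text)
```

Constraining the model to a tiny command vocabulary like this makes its output much easier to validate and convert into editor ops than free-form HTML diffs.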

sothatsit 11 hours ago||
We have been using something similar for editing Confluence pages. Download XML, edit, upload. It is very effective, much better than direct edit commands. It’s a great pattern.
holmb 11 hours ago|||
I would be very interested in this if you could share it. Maintaining a knowledge base without a Git workflow is a pain currently.
Jagerbizzle 7 hours ago|||
You can use the Copilot CLI with the atlassian mcp to super easily edit/create confluence pages. After having the agent complete a meaningful amount of work, I have it go create a confluence page documenting what has been done. Super useful.
reachableceo 1 hour ago||||
Edit the markdown using a GitHub workflow. Then use Confluence's "Insert markup" dialog (picking Markdown) to get it into the page.

Works wonderfully!

sothatsit 10 hours ago|||
I'm afraid I can't easily share this, as we have embedded a lot of company-specific information in our setup, particularly for cross-linking between Confluence/Jira/Zendesk and other systems. I can try to explain it though, and Claude Code is great at implementing these simple CLI tools and writing the skills.

We wrote CLIs for Confluence, Jira, and Zendesk, with skills to match. We use a simple OAuth flow for users to login (e.g., they would run jira login). Then confluence/jira/zendesk each have REST APIs to query pages/issues/tickets and submit changes, which is what our CLIs would use. Claude Code was exceptional at finding the documentation for these and implementing them. Only took a couple days to set these up and Claude Code is now remarkably good at loading the skills and using the CLIs. We use the skills to embed a lot of domain-specific information about projects, organisation of pages, conventions, standard workflows, etc.

Being able to embed company-specific links between services has been remarkably useful. For example, we look for specific patterns in pages like AIT-553 or zd124132 and then can provide richer cross-links to Jira or Zendesk that help agents navigate between services. This has made agents really efficient at finding information, and it makes them much more likely to actually read from multiple systems. Before we made changes like this, they would often rabbit-hole only looking at confluence pages, or only looking at jira issues, even when there was a lot of very relevant information in other systems.
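That pattern-matching trick is a few lines in any language; a sketch, where the regexes and base URLs below are made up for illustration (real deployments would use their own hostnames and ID formats):

```python
import re

# Hypothetical base URLs; the real ones are deployment-specific.
PATTERNS = {
    re.compile(r"\b([A-Z]{2,5}-\d+)\b"): "https://example.atlassian.net/browse/{}",
    re.compile(r"\bzd(\d+)\b"): "https://example.zendesk.com/agent/tickets/{}",
}

def cross_links(text: str) -> list[str]:
    """Scan free text for ticket-style IDs and emit cross-service links."""
    links = []
    for pattern, template in PATTERNS.items():
        for match in pattern.finditer(text):
            links.append(template.format(match.group(1)))
    return links
```

Surfacing these links in CLI output is what nudges the agent to actually hop between systems instead of staying inside one.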

My favourite is the confluence integration though, as I like to record a lot of worklog-style information in there that I would previously write down as markdown files. It's nicer to have these in Confluence as then they are accessible no matter what repo I am working in, what region I am working in, or what branch or feature I'm working on. I've been meaning to try to set something similar up for my personal projects using the new Obsidian CLI.

holmb 6 hours ago|||
Thanks for the insights!

We have been doing something similar, but it sounds like you have come further along this way of working. We (with help from Claude) have built a tool like the one you describe to interface with our task- and project-management system, and use it together with the GitLab and GitHub CLI tools to let agents read tickets, formulate a plan, create solutions, and open MRs/PRs against the relevant repos. Most of our knowledge base is in Markdown, but some of it is tied up in Confluence, which is why I'm interested in that part. And some workflows even live in Google Docs, which makes the OP's tool interesting as well -- currently our tool outputs Markdown and we just "paste from markdown" into Gdocs. We might be able to revise and improve that too.

graeme 8 hours ago|||
Thank you! Sounds like a fantastic setup. Are the Claude Code agents acting autonomously based on trigger conditions, or is this all manual work with them? And how do you manage write permissions for documents among team members/agents? Presumably multiple people have access to this system.

(Not OP, but have been looking into setting up a system for a similar use case)

neuronexmachina 6 hours ago|||
I've found that usually works ok, but currently tends to timeout with the Atlassian MCP when trying to do updates on large Confluence pages: https://github.com/atlassian/atlassian-mcp-server/issues/59
d4rkp4ttern 9 hours ago||
Related, I often work with markdown docs (usually created via CLI agents like Claude Code) and need to collaborate with others in google docs, which is extremely markdown-unfriendly[1], so I built small quality-of-life CLI tools to convert Gdocs -> md and vice versa, called gdoc2md and md2gdoc:

https://pchalasani.github.io/claude-code-tools/integrations/...

They handle embedded images in both directions. There are similar gsheet2csv and csv2gsheet tools in the same repo.

Similar to the posted tool, there is a first-time setup involving creating an app, which is documented at the link above.

[1] in the sense there are multiple annoying clicks/steps to get a markdown doc to look good in Gdocs. You'd know the pain if you've tried it.

greymalik 8 hours ago||
Paste from markdown (Chrome only) works _really_ well for me. What are the extra steps you’re running into?
d4rkp4ttern 8 hours ago||
Interesting. In my Arc browser, I just tried File -> open -> upload -> blah.md and it does seem to render fine. This exact thing did not work a few weeks ago; the various header markers showed up as raw "##" and so on, and I had to further select something like "open as new doc" to finally make it look good.
z3ugma 7 hours ago||
Right click > "Paste from Markdown" instead of just straight up pasting in
d4rkp4ttern 6 hours ago||
Images wouldn't work though, right? I'd be amazed if that worked. My CLI tools handle those.
chrisweekly 6 hours ago||
Obsidian has become almost an operating system for working with markdown. Its Live View / Edit mode is excellent (WYSIWYG) and its ability to accept pasted content and handle it appropriately is good and getting better. Its plugin/extension ecosystem is robust (and has a low barrier to entry), and now that it has a CLI I expect to see an acceleration of clever workflows and integrations.

No affiliation, just a very happy ~early adopter and daily user.

Lord_Zero 3 hours ago|||
BUT the main supported sync module is cloud-only; they won't let you self-host for free, which is really shitty and lame.
chrisweekly 2 hours ago|||
Wow, that's a strong opinion and harsh words that come across as really entitled, and probably unfair. From my PoV, they're a tiny, scrappy, transparent and likeable company who built and maintain a fantastic software application that radically improved ~everything about my daily workflow and PKM. I get more value out of Obsidian in a day than most other apps in their entire lifespan. The core app is free! They have to eat. I'd probably throw $ at them even if they didn't charge a few bucks / month for Sync. (Which works flawlessly.) Sure it'd be cool if you could self-host their Sync module -- but many Obsidian users use other DIY approaches for sync; in the end it's markdown files on a local disk, do with it what you will.
d4rkp4ttern 6 hours ago|||
I’m intrigued by their recent CLI release as well. I’ll have to check out the markdown edit support too, thanks
ritzaco 6 hours ago||
Interesting, we have a very similar internal flow: we like working in markdown but our customers want to leave feedback in Google Docs, so we also have an md -> gdoc tool. We don't do the reverse; we ask them to only leave comments/suggested changes, and we apply those directly to the markdown and re-export.

I ran into similar issues as you with the image handling. The workaround I use is pandoc to convert to docx as a first step, then import that as a Google Doc using the API, as Google Docs seems to handle docx much better than markdown from what I've seen.

tclancy 19 hours ago||
Interesting post from the main contributor about this (at least I assume it’s what he’s referencing) https://justin.poehnelt.com/posts/rewrite-your-cli-for-ai-ag...
dang 16 hours ago||
Thanks! Looks like he submitted it here, judging by the username:

You need to rewrite your CLI for AI agents - https://news.ycombinator.com/item?id=47252459.

I think that's pretty cool so I put the post in the SCP (https://news.ycombinator.com/item?id=26998308).

Barbing 15 hours ago||
TIL Second Chance Pool, great idea
juanre 11 hours ago|||
This is really interesting: "Humans hate writing nested JSON in the terminal. Agents prefer it." Are others seeing the same thing? I've just moved away from JSON-by-default because agents were always using jq to convert it to what I could have been producing anyway.
lostmsu 4 hours ago||
In my experience agents struggle with escape sequence nesting as much as humans do. IMHO that is one well-paved road to RCE via code injection.
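A concrete illustration of that quoting problem in Python: rather than hand-escaping nested strings, let `shlex.quote` neutralise whatever the agent produced. The `grep` command here is just an arbitrary example, not from any of the tools discussed.

```python
import shlex

def build_grep(pattern: str, path: str) -> str:
    """Compose a shell command line without hand-rolled escaping.

    shlex.quote wraps each value so shell metacharacters in it
    (quotes, semicolons, $(), backticks) stay inert.
    """
    return f"grep -c {shlex.quote(pattern)} {shlex.quote(path)}"
```

Round-tripping through `shlex.split` shows that even a hostile-looking pattern stays a single argument instead of becoming injected commands.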
albert_e 16 hours ago|||
Looks like I am hitting some Cloudflare Block when accessing this URL
abustamam 8 hours ago||
Probably because he built his site for agents not humans
hamasho 7 hours ago||
lol, but it's definitely happening. Some services are solely for LLM consumption, and humans are not welcome customers.
winwang 19 hours ago|||
Really interesting. I was thinking about something similar regarding the shape of code. I have no qualms recommending my agents take static analysis to the extreme, though it would be cumbersome for most people.
blks 8 hours ago||
No, we won't in fact be doing that. Machine-parsable, readable by other tools: yes.
jillesvangurp 13 hours ago||
Generating a good cli isn't all that hard for agentic coding tools. When you do it manually it's highly repetitive work. But all you are doing is low level plumbing. Given some parsed arguments, call a function, return the result (with some formatting, prettying, etc.). In the end it's just a facade for an API, library, or whatever else you want to have a cli for. Easy to write. Easy to test. But manually going through your API resource by resource, parameter by parameter, etc. takes a long time. An LLM just blazes through that in a few minutes. Generate some tests, tweak as needed, and you are good to go.

I did a few CLIs with codex in the last few weeks. I do simple ops with this stuff. I've had a few use cases for new features where previously I would have had to build some kind of quick and dirty admin UI just to use and test a new API feature before being able to integrate it into our product. With a generated cli, I can just play with it from the command line. Or make codex do that for me.

A good cli with a modern command line argument parser, well documented options, bash/zsh auto complete, pretty colors, etc. is generally nice to have. I mapped resources to commands and sub commands, made it add parameters with sensible defaults or optional ones. Then I got lazy and just asked it what else it thought it was missing, it made some suggestions and I gave it the thumbs up and it all got added. I even generated a simple interactive TUI at some point. Because why not? I also made it generate a md skill file explaining how to use the cli that you can just drop in your skills directory.

codeulike 13 hours ago||
> But manually going through your API resource by resource, parameter by parameter, etc. takes a long time.

This CLI dynamically generates itself at runtime though:

> gws doesn't ship a static list of commands. It reads Google's own Discovery Service at runtime and builds its entire command surface dynamically

qalmakka 12 hours ago||
> gws doesn't ship a static list of commands. It reads Google's own Discovery Service at runtime and builds its entire command surface dynamically

You're not exactly describing rocket science. This is basically how websites work; there's never been anything stopping anyone from doing dynamic UI in TUIs except the fact that TUI frameworks were dog poop until a few years ago (and there was no Windows Terminal, so no Windows support). Try doing that in ncurses instead of Ratatui or whatever; it's horrendous.
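For a sense of scale, generating a command surface at runtime really is only a few lines. The discovery dict below is a made-up stand-in, nowhere near the richness of the real Discovery document format:

```python
import argparse

# Toy stand-in for a Discovery-style document; the real Google
# Discovery format also carries schemas, auth scopes, docs, etc.
DISCOVERY = {
    "drive": {
        "files.list": {"params": ["q", "pageSize"]},
        "files.get": {"params": ["fileId"]},
    },
}

def build_cli(doc: dict) -> argparse.ArgumentParser:
    """Generate the entire command surface at runtime from the doc."""
    parser = argparse.ArgumentParser(prog="toy-gws")
    sub = parser.add_subparsers(dest="method", required=True)
    for api, methods in doc.items():
        for method, spec in methods.items():
            p = sub.add_parser(f"{api}.{method}")
            for param in spec["params"]:
                p.add_argument(f"--{param}")
    return parser
```

The interesting engineering in the real tool is keeping this fast and coherent across hundreds of APIs, not the dynamic generation itself.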

hrmtst93837 13 hours ago||
> Disclaimer

> This is not an officially supported Google product.

It looked like an official Google product at first glance.

qmarchi 12 hours ago||
Generally, this disclaimer is required for products that are released under the "Google" name but without any kind of support guarantees for enterprise customers.

That or it's a personal project that IARC decided could live in the workspace project.

Disc: Former Googler

bandrami 6 hours ago||
> but without any kind of support guarantees for enterprise customers

Also known as every single Google product

ivanjermakov 13 hours ago|||
I'm still confused, @googleworkspace is not affiliated with Google?

Seems like it was made by Google employee: https://justin.poehnelt.com/posts/rewrite-your-cli-for-ai-ag...

tuckerman 3 hours ago|||
Google operates across so many verticals that it's difficult to argue a side project is outside the scope of Google's business, so Google could claim copyright over the work. To make it easier for engineers to keep contributing to open source, there's a fairly straightforward path to release code through a Google-owned repository (if you look at github.com/google, it is full of personal projects alongside official ones).

There is an official process where an engineer can apply to a committee to have Google waive any copyright claim. That requires additional work so if your goal is simply to publish the code as open source and you do not mind it living under the Google org, using the Google repo path is usually much faster.

Disclaimer: ex-googler, not a lawyer, not arguing whether or not the situation with copyright assignment is legally enforceable or not/good or bad/etc.

hrmtst93837 13 hours ago|||
I think an official project from Google would be hosted under https://github.com/google, a GitHub org which contains 2,800 repositories and has more than 500 Google employees as members.

googleworkspace/cli appears to be more of a hobby project developed by a single Google employee.

jsnell 10 hours ago||
Most projects under the "google" org will have exactly the same disclaimer about not being official Google products.
krzyk 8 hours ago|||
Crazy.

And this project uses "google" in its org name, so I would assume it is official, or at least that lawyers are running toward the owner with lawsuits.

hrmtst93837 9 hours ago|||
But at least they are under the Google organization. The thing is, anyone could create an organization, name it something like "googlesomething", use Google logos, and design it in a way that some users might believe it has an official connection.
abustamam 8 hours ago||
Couldn't Google do a cease and desist for that kind of impersonation?
hrmtst93837 7 hours ago||
I think so, but it could be enough for someone to create such an organization, share it on HN for malicious purposes, such as infecting devices, and have it taken down only afterward. I'm not saying that's what happened here, but it does illustrate a potential attack vector.
decimalenough 10 hours ago|||
It's by Google, but it's open source and comes with no SLAs.
whizzter 13 hours ago||
Yeah that github name made my spider senses tingle, large scale credentials harvesting?
yenepho 13 hours ago||
Also the use of the google logo.

Edit: Oh, I think this actually is an official account. Very confusing

udioron 8 hours ago||
What a shame Google Photos has no decent API or CLI. Photos could have been the best SaaS, but changes in the API have made it terribly unusable.

I wish I could use an API/CLI to query/geoquery my photos.

mogili1 15 hours ago||
I was excited to see this, but all of that went away when I realized you need to create an app in GCP to use it. You can't really expect non-technical users to set this up across the company.
heinrichhartman 14 hours ago||
Can someone explain to me, why Google can't (or does not want to) implement the same auth flow that any other SaaS company uses:

# API Keys in Settings

1. Go to Settings -> API Keys Page

2. Create Token (set scope and expiration date)

# OAuth flow

1. `gws login` shows url to visit

2. Login with Google profile & select data you want to share

3. Redirect to localhost page confirms authentication

I get that I need to configure a project and OAuth screens if I want to develop an application for other users that uses GCP services. This is fine. But I am trying to access my own data over a (/another) HTTP API. This should not be hard.

AJRF 13 hours ago||
Can you name a service you think works like that?

Google have over a billion very non-technical users.

The friction of not having this in the account page that everyone has access to probably saves both parties lots of heartbreak.

krzyk 8 hours ago||
GitHub? I just do some click here, click there, copy-paste, and the gh CLI is ready.

For Google I need a PhD to set up any kind of API access to my own data. And it frequently blocks you: you can set it up as a test product and add test accounts (but the owner account can't be one (WTF?)), etc.

I gave up on using a Google Calendar CLI project because of all that lack of normal UX.

The UX for Google APIs looks like it was designed by an accountant.

gws auth setup looks promising, but it won't work yet for personal accounts.

aantix 4 hours ago|||
It's an un-invite. A hollow gesture.

Google's Gemini can read Google Docs directly.

They really don't want you to use another LLM product.

So they make the setup as difficult as possible.

tomashubelbauer 9 hours ago|||
Same story here. I installed it and ran `gws auth setup` only to find I needed to install the `gcloud` CLI by hand. That led me to this link with install instructions: https://cloud.google.com/sdk/docs/install. Unmistakable Google DX strikes again.
fermisea 14 hours ago|||
https://www.supyagent.com

We’re trying to create a single unified cli to every service on the planet, and make sure that everything can be set up with 3 clicks

justinwp 14 hours ago|||
Yeah, still no way around this unfortunately.
virgildotcodes 18 hours ago||
God, getting this set up is frustrating. I've spent 45 minutes trying to get this to work, just following their defaults the whole way through.

Multiple errors and issues along the way. Now I'm on `gws auth login`, trying to pick the OAuth scopes. I go ahead and trust their defaults and select `recommended`, only to get a warning that this is too many scopes and may error out (then why is it the recommended setting??), and then, yeah, it errors out when trying to authenticate in the browser.

The error tells me I need to verify my app, so I go to the app settings in my cloud console and try to verify and there's no streamlined way to do this. It seems the intended approach is for me to manually add, one by one, each of the 85 scopes that are on the "recommended" list, and then go through the actual verification.

Have the people that built and released this actually tried to install and run this, just a single time, purely following their own happy path?

varenc 18 hours ago||
Similar frustrations. I was only able to auth using some Google app I created for an old project years ago that happened to have the right bits.

It's wild that this process is still so challenging. There's got to be some safe, streamlined way to set up an app identity you own that can only be used to access your own account.

My guess is that, organizationally within Google, the developer app authorization process must have many teams involved in its implementation and many other outside stakeholders. A single unified team wouldn't be responsible for this confusion and complexity. I get why... it's a huge source of bad actors. But there's got to be a better way.

m8s 18 hours ago|||
I’ve been really unhappy with pretty much every Google product I’ve used except their consumer productivity tools — Gmail, Calendar, and Meet. Diving into Google Cloud has been extremely unsatisfactory
brightball 18 hours ago|||
I ran a project for a company on Google Cloud a few years ago and enjoyed it once I got used to everything. I’d use it more now if they had better low end pricing to start projects there.

It’s a very different experience than AWS though and takes some getting used to.

julianozen 9 hours ago|||
Same. I was using Gemini and firebase for a work project and I was stunned how hard it was for me to use
SamDc73 17 hours ago|||
I find https://github.com/steipete/gogcli a bit easier (but still confusing to setup)

Google Workspace API keys and roles were always confusing to me on so many levels, and they just seem to keep topping that confusion; no one is addressing the core (honestly, not sure that's even possible at this point).

semenko 3 hours ago|||
I have "Advanced Protection" turned on, so I just can't use this at all, because my newly created Google Cloud GCP app isn't trusted (even though I own it and I'm requesting read-only scopes). What a mess.

  Access blocked: [app name] is not approved by Advanced Protection. Error 400: policy_enforced
upcoming-sesame 15 hours ago|||
had the same frustration trying to set up Google analytics MCP server: https://github.com/googleanalytics/google-analytics-mcp

Getting the authentication to work is a real pain, and it's basically preventing people from accessing an otherwise really good and useful MCP.

Imagine a marketing person trying to set it up...

justinwp 14 hours ago|||
There are many gotchas in this process and unfortunately there is no easy way to deal with the OAuth setup.
jitl 18 hours ago|||
I had to do all that the last time I wanted to do a little JS in my Google Sheets. When I saw their quick start required gcloud already set up, I decided not to bother trying this out. Idk why Google makes something that should take 15s (clicking "ok" in an OAuth popup) take tens of minutes to hours of head-scratching.
sagarpatil 15 hours ago||
I used Claude in chrome and Claude Code. It did everything for me.
KerrickStaley 2 hours ago||
Tried this out today and it feels half-baked unfortunately. I can't get auth working (https://github.com/googleworkspace/cli/issues/198).

The decision to pass all params as a JSON string to --params makes it unfriendly for humans to experiment with, although Claude Code managed to one-shot the right command for me, so I guess this is fine. This is an intentional design per https://justin.poehnelt.com/posts/rewrite-your-cli-for-ai-ag...
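The `--params`-as-JSON pattern is trivial to replicate in your own tools. A sketch (not gws's actual implementation, and the method name is invented):

```python
import argparse
import json

# One opaque JSON blob instead of dozens of typed flags:
# awkward for a human to type, easy for an agent to emit.
parser = argparse.ArgumentParser(prog="toy-cli")
parser.add_argument("method")
parser.add_argument("--params", type=json.loads, default={},
                    help="all method arguments as a single JSON object")

args = parser.parse_args(
    ["drive.files.list", "--params",
     '{"q": "name contains \'report\'", "pageSize": 5}'])
```

Because `type=json.loads` runs at parse time, malformed JSON fails fast with a normal argparse error rather than deep inside the command handler.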

betaby 19 hours ago|
I'm curious why `npm` is used to install a `rust` binary?
cobbal 19 hours ago||
They're not doing so here, but shipping a wasm-compiled binary with npm that uses node's WASI API is a really easy way to ship a cross-platform CLI utility. Just needs ~20 lines of JS wrapping it to set up the args and file system.
mountainriver 18 hours ago|||
Doesn’t this seem excessive over just using rust’s cross platform builds?
csomar 17 hours ago||
There's no such thing as a truly "cross-platform" build. Depending on what you use, you might have to target specific combinations of OS and processor architecture. That's actually why WASM (though they went with WASI) is a better choice; especially for libraries, since anyone can drop it into their environment without worrying about compatibility.
jitl 17 hours ago||
There are 3 OSes and 2 architectures, minus darwin-amd64, so you just need to do 5 builds to avoid the WASM performance tax.

(freebsd runs linux binaries and the openbsd people probably want to build from source anyways)

Lord_Zero 18 hours ago|||
Can you link to a sample of how I can do this?
taskylizard 14 hours ago||
https://axodotdev.github.io/cargo-dist/
varenc 19 hours ago|||
I found that strange as well. My guess is that `npm` is just the package manager people are most likely to already have installed and doing it this way makes it easy. They might think asking people to install Cargo is too much effort. Wonder if the pattern of using npm to install non-node tools will keep gaining traction.
m000 11 hours ago|||
It's still weird. Why not just use an effing install.sh script like everybody else? And don't tell me "security". Because after installation you will be running an unknown binary anyway.
bigstrat2003 18 hours ago||||
Most people aren't going to have npm installed though. Nobody outside of web devs uses it.
patates 14 hours ago|||
A lot of people who are not web devs use it, that's what I see. I even saw some mainframe developers use npx to call some tool on some data dump.

Also, this is a web project anyway. Google Workspace is web based, so while there is a good chance that the users aren't web developers, it's a better chance that they have npm than anything else.

In the case that they don't, releases can be downloaded directly too: https://github.com/googleworkspace/cli/releases

gempir 14 hours ago||||
If you had to pick one package manager that was most likely installed across all the different user machines in the world, I'd say npm is a pretty good bet.
wiseowise 11 hours ago||
Pip.
sankalpmukim 17 hours ago|||
"Most people" are webdevs

Bracing for getting cancelled

freakynit 19 hours ago|||
Why not just downloadable binary then?
varenc 19 hours ago|||
For many, installing something with npm is still easier. It chooses the right binary for your OS/architecture, puts it on your PATH, and streamlines upgrades.

Their Github releases provides the binaries, as well as a `curl ... | sh` install method and a guide to use github releases attestation which I liked.

krzyk 7 hours ago|||
I feel better with `curl ... | sh` than with npm.

npm suggests a project written in JS, which is not something I'm comfortable with.

It is nice to see that this is not JS, but Rust.

freakynit 18 hours ago|||
Hmm, that's right... thanks..
patates 14 hours ago|||
They have them: https://github.com/googleworkspace/cli/releases
brunoborges 19 hours ago|||
NPM as a cross platform package distribution system works really well.

The install script checks the OS and Arch, and pulls the right Rust binary.

Then, they get upgrade mechanism out of the box too, and an uninstall mechanism.

NPM has become the de facto standard for installing any software these days, because it is present on every OS.
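The OS/arch probe such an install script performs is a few lines in any language. Here is the shape of it in Python, with made-up artifact names (real release naming varies per project):

```python
import platform

# Hypothetical artifact names for illustration; real install
# scripts map the same probes onto their own release scheme.
TARGETS = {
    ("Linux", "x86_64"): "tool-x86_64-unknown-linux-gnu",
    ("Darwin", "arm64"): "tool-aarch64-apple-darwin",
    ("Windows", "AMD64"): "tool-x86_64-pc-windows-msvc.exe",
}

def pick_binary(system=None, machine=None) -> str:
    """Choose the release artifact matching the current platform."""
    key = (system or platform.system(), machine or platform.machine())
    try:
        return TARGETS[key]
    except KeyError:
        raise SystemExit(f"unsupported platform: {key}")
```

npm's contribution on top of this is mostly the distribution plumbing: PATH shims, versioned upgrades, and uninstall.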

danpalmer 19 hours ago|||
To my knowledge NPM isn't shipped in _any_ major OSes. It's available to install on all, just like most package managers, but I'm not sure it's in the default distributions of macOS, Windows, or the major Linux distros?
anilgulecha 19 hours ago||
No package manager is. But of the ones that are installed by users, npm is probably the most popular.
spinagon 19 hours ago|||
What about pip? It's either installed or immediately available on many OSes
Fabricio20 17 hours ago|||
pip might be, but it was historically super inconsistent (at least in my experience). Is it `pip install`? `python3 -m pip install`? Maybe `pip3 install`? Yeah, Ubuntu did a lot of damage to pip here. npm always worked because you had to install it, and it didn't have a transition phase from python2 being in the OS by default.
piperswe 19 hours ago||||
`pip install` either doesn’t work out of the box or has the chance to clobber system files though
jitl 18 hours ago||||
system pip w/ sudo usually unleashes Zalgo, i’d rather curl | bash but npm is fine too. it’s just about meeting people where they’re at, and in the ai age many devs have npm

if you build for the web, no matter what your backend is (python, go, rust, java, c#), your frontend will almost certainly have some js, so likely you need npm.

nikanj 18 hours ago|||
This is about eight years old. The python situation has mostly gotten worse since https://xkcd.com/1987/
jitl 17 hours ago|||
Python packaging/envs is solved now by uv. It's not merely promising or used only by people in the know, like the last 2 trendy Python package managers. I was a big-time Python hater since it was a PITA to support as a devtools guy, but now it's trivial. uv just works; it won.
abustamam 7 hours ago||
I'm not a python dev, but I see a bit of its ecosystem. How does uv compare with conda or venv? I thought JS had the monopoly on competing package managers.
a_t48 17 hours ago|||
What? It’s much much better now, you can just use uv. Yeah, it’s yet another package manager, but it does it well.
chrisweekly 6 hours ago||
Or go up a rung or two on the abstraction ladder, and use mise to manage all the things (node, npm, python, etc).
oefrha 19 hours ago||||
> The install script checks the OS and Arch, and pulls the right Rust binary.

That's the arbitrary code execution at install time aspect of npm that developers should be extra wary of in this day and age. Saner node package managers like pnpm ignore the build script and you have to explicitly approve it on a case-by-case basis.

That said, you can execute code with build.rs with cargo too. Cargo is just not a build artifact distribution mechanism.

mcmcmc 16 hours ago||||
More of a de facto standard for supply chain attacks tbh
mountainriver 18 hours ago||||
Yeah except you need to install NPM, whereas with a rust binary, which can easily compile cross platform, you don’t.

Honestly I’m shocked to see so many people supporting this

bigstrat2003 17 hours ago||||
> NPM has become the de facto standard for installing any software these days, because it is present on every OS.

That's not remotely true. If there is a standard (which I wouldn't say there is), it's either docker or curl|bash. Nobody is out there using npm to install packages except web devs, this is absolutely ridiculous on Google's part.

abustamam 7 hours ago|||
I agree but this isn't a Google project, it's one Google employee.
jitl 17 hours ago|||
they offer npm for the large market of cli users who have it, and curl|bash to those who don’t. ¯\_(ツ)_/¯
koakuma-chan 16 hours ago||||
I think there has been an influx of people vibe coding in Rust because it's "fast", but otherwise they have no idea about Rust.
gck1 15 hours ago||
Not because it's fast, but because of its compiler. It acts as a very good guardrail and feedback mechanism for LLMs.
abustamam 7 hours ago||
Typescript has surpassed Python and JS as most used on Github for a similar reason

https://xcancel.com/github/status/2029277638934839605?s=20

koakuma-chan 6 hours ago||
> making strict typing an advantage, not a chore

It's crazy that people think strict typing is a chore. Says a lot about our society.

abustamam 4 hours ago||
I learned TS after a few years with JS. I thought having strict types was cool. Many of my colleagues with much more (JS) experience than me thought it was a hassle. Not sure if they meant the setup or TS or what but I always thought it was weird.
xarope 16 hours ago|||
"NPM has become the de facto standard for installing any software these days, because it is present on every OS."

What?!? Must not be in any OS I've ever installed.

Now tar, on the other hand, exists even in windows.

r2champloo 18 hours ago|||
Interesting fact: because cargo builds every tool it downloads from source, you can't actually run cargo install on Google laptops internally.
efreak 13 hours ago||
I use cargo-binstall, which supports quick install and a couple of other methods for downloading binaries for Rust packages.
jamesmishra 18 hours ago||
Why should the package's original language matter?

When I use apt-get, I have no idea what languages the packages were written in.

hahn-kev 16 hours ago|||
Because npm is not an os package manager, it's a nodejs package manager
nazgul17 16 hours ago|||
Not everyone has or wants yet another package manager in their system.