Posted by speckx 12/10/2025
1. You can access those models via three APIs: the Gemini API (which it turns out is only for prototyping and returned errors 30% of the time), the Vertex API (much more stable but lacking in some functionality), and the TTS API (which performed very poorly despite offering the same models). They also have separate keys (at least, Gemini vs Vertex).
2. Each of those APIs supports different parameters (things like language, whether you can pass a style prompt separate from the words you want spoken, etc). None of them offered the full combination we wanted.
3. To learn this, you have to spend a couple of hours reading API docs, or alternatively, just have Claude Code read the docs, then try all the different combinations and figure out what works and what doesn't (with the added risk that it might hallucinate something).
- The models perform differently when called via the API vs in the Gemini UI.
- The Gemini API will randomly fail about 1% of the time; retry logic is basically mandatory (a retry sketch follows below).
- API performance is heavily influenced by the whims of Google; we've observed spreads between 30 seconds and 4 minutes for the same query depending on how Google is feeling that day.
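A minimal sketch of that kind of retry logic, assuming the current google-genai Python SDK; the model name is a placeholder and the exception handling is left deliberately broad (real code should narrow it to the SDK's transient error types):

    import random
    import time

    from google import genai  # assumes the current google-genai SDK

    client = genai.Client()  # reads the API key from the environment

    def generate_with_retry(prompt: str, retries: int = 5) -> str:
        """Call generate_content, retrying transient failures with exponential backoff."""
        for attempt in range(retries):
            try:
                resp = client.models.generate_content(
                    model="gemini-2.0-flash",  # placeholder model name
                    contents=prompt,
                )
                return resp.text
            except Exception:  # ideally narrow to the SDK's transient error classes
                if attempt == retries - 1:
                    raise
                # Exponential backoff with jitter: ~1s, 2s, 4s, 8s, ...
                time.sleep((2 ** attempt) + random.random())

    print(generate_with_retry("Say hello."))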
That is sadly true across the board for AI inference API providers. OpenAI and Anthropic API stability usually suffers around launch events. Azure OpenAI/Foundry serving regularly returns 500 errors during certain time periods.
For any production feature with high uptime guarantees, I would right now strongly advise picking a model you can get from multiple providers and building failover between clouds.
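A rough sketch of what that failover can look like; the provider wrapper functions here are hypothetical placeholders standing in for whatever SDK calls you use for the same model on each cloud:

    from typing import Callable

    # Hypothetical wrappers: each takes a prompt and returns text, raising on failure.
    # In practice these would wrap the Vertex, Bedrock, Azure, etc. calls for the
    # same underlying model.
    def call_vertex(prompt: str) -> str: ...
    def call_bedrock(prompt: str) -> str: ...
    def call_direct_api(prompt: str) -> str: ...

    PROVIDERS: list[Callable[[str], str]] = [call_vertex, call_bedrock, call_direct_api]

    def generate_with_failover(prompt: str) -> str:
        """Try each provider in priority order; fall through to the next on any error."""
        last_error: Exception | None = None
        for provider in PROVIDERS:
            try:
                return provider(prompt)
            except Exception as exc:
                last_error = exc  # log and move on to the next cloud
        raise RuntimeError("all providers failed") from last_error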
I'm passing docs for bulk inference via Vertex, and a small number of returned results will include gibberish in Japanese.
This shouldn't be surprising: the model != the product. It's the same way GPT-4o via the API behaves differently than the ChatGPT product, even when ChatGPT is using GPT-4o.
This difference between API and UI responses is common across all the big players (Claude, GPT models, etc.).
The consumer chat interfaces are designed for a different experience than a direct API call, even if pinging the same model.
Was really curious about that when I saw this in the posted article:
> I had some spare cash to burn on this experiment,
Hopefully the article's author is fully aware of the real risk of giving Alphabet his CC details on a project which has no billing caps.
We are working on billing caps along with credits right now. Billing caps will land first in Jan!
4. If you read about a new Gemini model, you might want to use it - but are you using @google/genai, @google/generative-ai (wow, finally deprecated) or @google-ai/generativelanguage? Silly mistake, but when nano banana dropped, it was highly confusing that image gen was available only through one of these.
5. Gemini supports video! But that video first has to be uploaded to "Google GenAI Drive", which then splices it into 1 FPS images and feeds them to the LLM. No option to improve the FPS, so if you want anything properly done, you'll have to splice it yourself and upload it to generativelanguage.googleapis.com, which is only accessible using their GenAI SDK. Don't ask which one, I'm still not sure.
6. Nice, it works. Let's try using live video. Open the docs, you get it mentioned a bunch of times but 0 documentation on how to actually do it. Only suggestions for using 3rd party services. When you actually find it in the docs, it says "To see an example of how to use the Live API in a streaming audio and video format, run the "Live API - Get Started" file in the cookbooks repository". Oh well, time to read badly written python.
7. How about we try generating a video - open up AI Studio, see only Veo 2 available from the video models. But open up the "Build" section, and I can have Gemini 3 build me a video generation tool that will use Veo 3 via API by clicking on the example. But wait, why can't we use Veo 3 in AI Studio with the same API key?
8. Every Veo 3 extended video has absolutely garbled sound and there is nothing you can do about it, or maybe there is, but by this point I'm out of willpower to chase down edgy edge cases in their docs.
9. Let's just mention one semi-related thing - some things in the Cloud come with default policies that are just absurdly limiting, which means you have to create a resource/account and update the policies related to whatever you want to do, at which point you're told these are _old policies_ and you should edit the new ones, but those are impossible to properly find.
10. Now that we've set up our accounts, our AI tooling, and our permissions, we write the code, which takes less time than all of the previous actions in the list. Now, you want to test it on Android? Well, you can:
- A. Test it with your account by manually signing in on emulators, local or cloud, which means passing 2FA every time if you want to automate this and constantly risking your account's security/a ban.
- B. Create a Google account for testing, add it to Licensed Testers on the Play Store, invite it to internal testing, wait 24-48 hours to be able to use it, and then, if you try to automate testing, struggle with mocking the whole Google account login process, which uses some non-deterministic logic to show a random pop-up every time. Then do the same thing for the purchase process, ending up with a giant script clicking through the options.
11. Congratulations, you made it this far and are able to deploy your app to Beta. Now, find 12 testers to actively use your app for free, continuously for 14 days, to prove it's not a bad app.
At this point, Google is actively preventing you from shipping at every step, causing more and more issues the deeper down the stack you go.
13. Get your whole google account banned.
Yeah, I hear you, open to suggestions to make this more clear, but it is google/genai going forward. Switching packages sucks.
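For anyone landing here confused by the package churn: the consolidated SDK is google-genai on the Python side (@google/genai in JS), and basic usage looks roughly like this; the model name is a placeholder:

    # pip install google-genai   (replaces the deprecated google-generativeai package)
    from google import genai

    client = genai.Client()  # picks up the API key from the environment

    resp = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder; swap in whatever model you're on
        contents="One sentence on why SDK churn is painful.",
    )
    print(resp.text)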
> Gemini supports video! But that video first has to be uploaded to "Google GenAI Drive", which then splices it into 1 FPS images and feeds them to the LLM. No option to improve the FPS, so if you want anything properly done, you'll have to splice it yourself and upload it to generativelanguage.googleapis.com, which is only accessible using their GenAI SDK. Don't ask which one, I'm still not sure.
We have some work ongoing (should launch in the next 3-4 weeks) which will let you reference files (video included) from links directly so you don't need to upload to the File API. We do also support custom FPS: https://ai.google.dev/gemini-api/docs/video-understanding#cu...
> 6. Nice, it works. Let's try using live video. Open the docs, you get it mentioned a bunch of times but 0 documentation on how to actually do it. Only suggestions for using 3rd party services. When you actually find it in the docs, it says "To see an example of how to use the Live API in a streaming audio and video format, run the "Live API - Get Started" file in the cookbooks repository". Oh well, time to read badly written python.
Just pinged the team, we will get a live video example added here: https://ai.google.dev/gemini-api/docs/live?example=mic-strea... should have it live Monday, not sure why that isn't there, sorry for the miss!
> 7. How about we try generating a video - open up AI Studio, see only Veo 2 available from the video models. But open up the "Build" section, and I can have Gemini 3 build me a video generation tool that will use Veo 3 via API by clicking on the example. But wait, why can't we use Veo 3 in AI Studio with the same API key?
We are working on adding Veo 3.1 into the drop down, I think it is being tested by QA right now, pinged the team to get ETA, should be rolling out ASAP though, sorry for the confusing experience. Hoping this is fixed by Monday EOD!
> 8. Every Veo 3 extended video has absolutely garbled sound and there is nothing you can do about it, or maybe there is, but by this point I'm out of willpower to chase down edgy edge cases in their docs.
Checking on this, haven't used extend a lot but will see if there is something missing we can clarify.
On some of the later points, I don't have enough domain expertise to weigh in, but I will forward them to folks on the Android / Play side to see what we can do to streamline things!
Thank you for taking the time to write up this feedback : ) hoping we can make the product better based on this.
Trying to split all videos into frames was a PITA, mostly due to weird inputs from different Android phones requiring handling of all kinds of edge cases; uploading each to the Upload API with retries also added lag + complexity, so doing it all in one go will save me both time and nerves (and tokens).
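For anyone going the splice-it-yourself route, the manual path basically boils down to shelling out to ffmpeg; a rough sketch, with the paths and the 5 FPS value as placeholders:

    import subprocess
    from pathlib import Path

    def extract_frames(video: Path, out_dir: Path, fps: int = 5) -> list[Path]:
        """Split a video into JPEG frames at the given FPS using ffmpeg."""
        out_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run(
            [
                "ffmpeg",
                "-i", str(video),
                "-vf", f"fps={fps}",          # sample N frames per second
                str(out_dir / "frame_%05d.jpg"),
            ],
            check=True,
        )
        return sorted(out_dir.glob("frame_*.jpg"))

    frames = extract_frames(Path("clip.mp4"), Path("frames/"), fps=5)
    print(f"extracted {len(frames)} frames")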
Thanks for listening and all the great work you do, since you came in the experience improved by an immeasurable amount.
I have a friend who says Google's decline came when they bought DoubleClick in 2008 and suffered a reverse takeover: their customers shifted from being Internet users to other, similarly sized corporations.
I know part of it is that sales wants to be able to price discriminate and wants to be able to use their sales skills on a customer, but I am never going to sign up for anything that makes me talk to someone before I can buy.
1. Never make it hard for people to give you money.
Customer service is unable to acknowledge why that feature is offered and can only assert that if you park, you gotta pay. After threatening to complain to the BBB and my state AG, they have graciously offered to drop the ticket to $25.
thank you for listening to me vent :)
Edit: On second thought, there is a perverse incentive at work (and probably one of the "lowest friction" ways to get money), which is issuing government enforced fines.
Start app, wait for gps, turn time wheel, press start.
These days most paid parking has registration (license plate) cameras, and it just starts and stops parking for you automatically. However, there are like 3 or so apps that compete here, so you need a profile with all of them for this to work, and you also need to enable this in all the apps.
Well you can extend the parking time while not at your car. That is a big plus.
Without the app you have to find the meter, pay, print the receipt, and get back to your car to put it in the window. Remember the time and get back.
With the app you can just start walking towards your destination while you start the metering.
There's no such thing as a monopoly when it comes to parking. If there is -- if every single parking spot within walking distance is locked behind a shitty app -- then you need to spend some quality time at your next city council meeting making yourself a royal PIA.
https://en.wikipedia.org/wiki/Chicago_Parking_Meters
This doesn't apply to private pay lots though, so there's still some amount of "choice".
Also, I only have time for so many hills on which to die. I’m not sure parking reform, while worthy, makes the cut.
I raised the issue with my local city council rep. She didn't care.
Sales is so focused on their experience that they completely discount what the customer wants. Senior management wants what's best for sales & the bottom line, so they go along with it. Meanwhile, as a prospective customer I would never spend a minute evaluating our product if it means having to call sales to get a demo & a price quote.
My team was focused on an effort to implement self-service onboarding -- that is, allowing users to demo our SaaS product (with various limitations in place) & buy it (if so desired) without the involvement of sales. We made a lot of progress in the year that I was there, but ultimately our team got shut down & the company was ready to revert back to sales-led onboarding. Last I heard, the CEO "left" & 25% of the company was laid off; teams had been "pivoting" every which way in the year since I'd been let go, as senior management tried to figure out what might help them get more traction in their market.
You say that as if it isn’t the entire reason why these interactions should be avoided at all costs. Dynamic pricing should be a crime.
Does segmentation also count as dynamic pricing?
--
The IT guy at Podunk Lutheran College has no money: Gratis.
The IT guy at a medium-sized real estate agency has some money: $500.
The IT guy at a Fortune 100 company has tons of money: $50,000.
https://blog.codinghorror.com/oh-you-wanted-awesome-edition/
If everybody can see the prices that would be quoted in other circumstances, that exerts a strong moderating force against abuse.
It won't help you if there's a monopoly, but I consider that a separate problem needing separate solutions.
All of their products, however realistically commoditized, will require a drawn-out engagement with a rep who knows how much money you've received recently and even has an outline of what research you plan to do over the next few years, since even the detailed applications often get published alongside funding allocations.
The exact same piece of equipment, consumables required to use it, and service agreements might be anywhere from X to 10X depending on what they (as a result of asymmetrically available knowledge) know you need and how much you could theoretically spend.
Getting just the University of California should be enough critical mass.
It's not uncommon, though, for e.g. departments to have common equipment that they negotiate together.
In your example, you’re paying extra for additional capabilities. Doesn’t really matter if it’s a nonlinear increase in cost with the number of seats. Two companies buy 500 seats and pay the same price.
What I object to is some sales bro deciding I should pay 5x more for those same licenses because of who I am, what I look like, where I’m from, etc. It’s absolutely repulsive. Why can’t you simply provide a fair service at a fair price and stop playing these fuck-fuck games? You’re making a profit on this sale either way. Stop trying to steal my profit margin.
Instead of trying to scam me by abusing information asymmetry, why not use your sales talents to upsell me on additional or custom services, once you’ve demonstrated value? Honest and reliable vendors generally get continued (and increasing) business.
Conversely, these Broadcom/private-equity/mafia tactics generally have me running for the exits ASAP. Spite is one hell of a motivator.
My guy saved a lot of people from making dumb mistakes. Then again he's good at his job, and if he was not I would wipe his business. Aligning incentives was very important for me. Most brokers are just bad.
Finding that human is also hard because of the perverse incentives to sell more lucrative products.
A simpler product would be better for consumers, but won't happen because there are industries (and a lot of lobbying) built up around keeping the money train rolling.
Anyway, long story short: I now require the price and details before I'll even consider talking to a salesperson, not the other way around. Might actually be a good job for an AI agent; they can talk to these sales bozos (respectfully) for me.
Someone who works in finance or compliance might want a demo, or views those things as signals that the product is for serious use cases.
About the only time you’ll be asked to evaluate such a product as an IC is when someone wants an opinion about API support or something equivalent. And if you refuse to do it, the decision-makers will just find the next guy down the hall who won’t be so cranky.
In these conversations, I never ever see the buyers justifying or requesting a sales process involving people and meetings and opaque pricing.
It’s true that complicated software needs more talking, but there is a LOT of software that could be bought without a meeting. The sales department won’t stand for it though.
Not really. Even if we keep the conversation in the realm of startups (which are not representative of anything other than chaos), ICs have essentially no ability to take unilateral financial risk. The Github “direct to developer” sales model worked for Github at that place and time, but even they make most of their money on custom contracts now.
You’re basically picking the (very) few services that are most likely to be acquired directly by end users. Slack is like an org-wide bike-shedding exercise, and Github is a developer tool. But once the org gets big enough, the contracts are all mediated by sales.
Outside of these few examples, SaaS software is almost universally sold to non-technical business leaders. Engineers have this weird, massive blind spot for the importance of sales, even if their own paycheck depends on it.
Also, it isn’t just ICs. I have worked as a senior director, with a few dozen people reporting into me… and I still never want to talk to a sales person on the phone about a product. I want to be able to read the docs, try it out myself, maybe sign up for a small plan. Look, if you want to put the extras (support contracts, bulk discounts, contracting help, etc) behind a sales call, fine. But I need to be able to use your product at a basic level before I would ever do a sales call.
> talk to people
There will clearly be a gap in understanding when their whole job is to talk to people and you come to them to argue that clients shouldn't have to do that.
As you point out, it's not that black and white; most companies will have tiers of clients they want to spend less or more time with, etc., but sales wanting direct contact with clients is, I think, a fundamental bit.
But what do the clients want? Your business should not be structured to make sales people happy.
If a platform is designed in a way that users can sign up and go, it can work well.
If an application is complicated or it's a tool that the whole business runs on, oftentimes the company will discover their customers have more success with training and a point of contact/account manager to help with onboarding.
"Give access now, cancel if validation fails" doesn't work either - so long as attackers can extract more than 0 value in that duration they'll flood you with bad accounts.
If you give me a form where I can upload my passport or enter a random number from a charge on my card, that counts as "instant" enough. On the other hand, if you really need to make me wait several days while you manually review my info, fine, just tell me upfront so I can stop wasting my time. And be consistent in your UI as to whether I'm verified yet. It's all about managing expectations.
Besides, Amazon hands out reasonable quotas to newly created accounts without much hassle, and they seem to be doing okay. I won't believe for a second that trillion-dollar companies like Google don't know how to keep abuse at a manageable level without making people run in circles.
Boy oh boy are they going to be surprised when they learn what AI can replace.
[1]: https://adstransparency.google.com/advertiser/AR129387695568...
When Google has a bad/empty profile of you, advertisers don't bid on you, so it goes to the bottom feeders. Average (typically tech illiterate) people wandering through the internet mostly get ads for Tide, Chevy, and [big brand], because they pay Google much more for those well profiled users. These scam advertisers really don't pay much, but are willing to be shown to mostly anyone. They are a bit like the advertiser of last resort.
All of that is to say, if you are getting malware/scam ads from Google, it's probably because (ironically) you know what you are doing.
One of my co-workers left with an active account and active card but no passwords noted. The company gave up and just had to cancel + create a new account for the next adwords specialist.
Look how quaint this seems now: https://www.cnet.com/tech/services-and-software/consumer-gro...
The only way we could get it resolved was to (somehow) get a real human at google on the phone because we're in some startup program or something and have some connection there. Then he put in a manual request to bump our quota up.
Google cloud is the most kafkaesque insane system I've ever had the misfortune of dealing with. Every time I use it I can tell the org chart is leaking.
Calling it kafkaesque is giving it too much credit.
So I got no idea what to do to address it. I feel my best option is wait for it to get disabled and try to address it afterwards.
But just closing the bank account will stop auto billing (it's considered a decline). So if you closed the account, it would just stop paying for whatever it is, and then Cloud may lock the GCP account until it's paid. (I'm not 100% sure what Cloud does with unpaid invoices.)
There’s a quote for your general class of query, and there’s a quota for how many can be in flight on a given server. It’s not necessarily about you specifically.
It just leaves a bad taste, and the second a competitor comes along that has an acceptable offering, then I'll move. Just ridiculous gaslighting behavior.
pls send feedback if this is helpful!
I hope they figure out a lot of the issues, but at the same time, I hope Gemini just disappears back into products rather than being at the forefront, because I think that's when Google does its best work.
It does make you wonder: why not just be a lot smaller? It's not like most of these teams actually generate any revenue. It seems like a weird structural decision that maybe made sense when hoovering up available talent was its own defensive moat, but now that that strategy is no longer plausible, it should be rethought.
100% agree
You can walk into a McDonalds without being able to read, write, or speak English, and the order touchscreen UI is so good (er, "good") that you can successfully order a hamburger in about 60 seconds. Why can't Google (of all companies) figure this out?
It makes sense for IBM, seems like google is just reaching that stage?
I made a free Chrome extension that uses a Fal API key, if you want a UI instead of code.
https://chromewebstore.google.com/detail/ai-slop-canvas/dogg...
I google `gemini API key` and the first result* is this docs page: https://ai.google.dev/gemini-api/docs/api-key
That docs page has a link in the first primary section on the page. Sure, it could be a huge CTA, but this is a docs page, so it's kinda nice that it hasn't gone through a marketing makeover.
* besides sponsored result for AI Studio
(Maybe I misunderstood and all the complaints are about billing. I don't remember having issues when I added my card to GCP in the past, but maybe I did)
If you bring it up to Logan he'll just brush it off — I honestly don't know if they test these UX flows with their own personal accounts, or if something is buggy with my account.
But somehow personally even though I'm a paying Google One subscriber and have a GCP billing account with a credit card, I get confusing errors when trying to use the Gemini API
I feel his team is really hitting a wall now in terms of improvements, because they involve Google teams/products outside of their control, or require deep collaboration.
But also the (theoretical) production platform for Gemini is Vertex AI, not AI Studio.
And until pretty recently using that took figuring out service accounts, and none of Google's docs would demonstrate production usage.
Instead they'd use the gcloud CLI to authenticate, and you'd have to figure out how each SDK consumed a credentials file.
-
Now there's "express mode" for Vertex which uses an API Key, so things are better, but the complaints were well earned.
At one point there were even features (like using a model you finetuned) that didn't work without gcloud depending on if you used Vertex or AI Studio: https://discuss.ai.google.dev/t/how-can-i-use-fine-tuned-mod...
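To make the two auth paths concrete, here's a minimal sketch with the current google-genai SDK; project, location, and file paths are placeholders, and the express-mode details may differ by account setup:

    import os
    from google import genai

    # Path 1: classic Vertex AI auth via Application Default Credentials,
    # typically a service-account JSON picked up from this env var.
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"
    vertex_client = genai.Client(vertexai=True, project="my-project", location="us-central1")

    # Path 2: plain API-key auth (AI Studio keys; Vertex "express mode" is the
    # similar API-key-based flow on the Vertex side).
    api_client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    resp = api_client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents="Hello from either auth path.",
    )
    print(resp.text)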
I did edit my message to mention I had GCP billing set up already. I'm guessing that's one of the differences between those having trouble and those not.
I've been using the AI Studio with my personal Workspace account. I can generate an API key. That worked for a while, but now Gemini CLI won't accept it. Why? No clue. It just says that I'm "not allowed" to use Gemini Pro 3 with the CLI tool. No reason given, no recourse, just a hand in your face flatly rejecting access to something I am paying for and can use elsewhere.
Simultaneously, I'm trying to convince my company to pay for a corporate account of some sort so that I can use API keys with custom tools and run up a bill of potentially thousands of dollars that we can charge back to the customer.
My manager tried to follow the instructions and... followed the wrong ones. They all look the same. They all talk about "Gemini" and "Enterprise". He ended up signing up for Google's equivalent of Copilot for business use, not something that provides API keys to developers. Bzzt... start over from the beginning!
I did eventually find the instructions by (ironically) asking Gemini Pro, which provided the convenient 27 step process for signing up to three different services in a chain before you can do anything. Oh, and if any of them trigger any kind of heuristic, again, you get a hand in face telling you firmly and not-so-politely to take a hike.
PS: Azure's whatever-it-is-called-today is just as bad if not worse. We have a corporate account and can't access GPT 5 because... I dunno. We just can't. Not worthy enough for access to Sam Altman's baby, apparently.
Passing along this feedback to the CLI team, no clue why this would be the case.
Excuse me? If you mean AI Studio, are you talking about the product where you can’t even switch which logged in account you’re using without agreeing to its terms under whatever random account it selected, where the ability to turn off training on your data does not obviously exist, and where it’s extremely unclear how an organization is supposed to pay for it?
Hint: you can often avoid some of this mess by adding the authuser=user@domain to the URL.
Def use multiple Chrome profiles if you aren't. You can color-code them to make visual identification a breeze.
I don't want my history, bookmarks, open tabs and login sessions at every website divided among my 5 GSuite workspace accounts and my 1 personal Gmail. That adds a bunch of hassle for what? The removal of a minor annoyance when I use these specific Google apps? That is taking a sledge hammer to a slightly bent nail.
If it works for you, great, that's why it's there. But doing this for anything more than the basic happy path setup of "I have one personal account and 1 GSuite work account" is nuts in my opinion.
Python is the primary implementation, Java is there, and Go is relatively new and aiming for parity. They could have contributed the TypeScript implementation and built on a common, solid foundation, but alas, the hydra's heads are not communicating well.
These other "frameworks" are (1) built by people who need to sell something, so they are often tied to their current thinking and paid features (2) sit at the wrong level. ADK gives me building blocks for generalized agents, whereas most of these frameworks are tied to coding and some peculiarities you see there (like forcing you to deal with studio, no thanks). They also have too much abstraction and I want to be able to control the lower level knobs and levers
ADK is the closest to what I've been looking for, an analog to kubernetes in the agentic space. Deal with the bs, give me great abstractions and building blocks to set me free. So many of the other frameworks want to box you into how they do things, today, given current understanding. ADK is minimal and easy to adjust as we learn things
ADK has an option to use LiteLLM (an OpenRouter alternative), among many options; sketch below:
https://google.github.io/adk-docs/agents/models/#using-cloud...
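A rough sketch of that, based on the ADK docs' LiteLLM integration; module paths and parameter names are from memory and may have shifted between ADK versions:

    # pip install google-adk litellm
    # Assumption: ADK exposes LlmAgent and a LiteLlm model wrapper at these paths.
    from google.adk.agents import LlmAgent
    from google.adk.models.lite_llm import LiteLlm

    # Route the agent through LiteLLM so the underlying model can be OpenAI,
    # Anthropic, a local server, etc., instead of only Gemini.
    agent = LlmAgent(
        name="helper",
        model=LiteLlm(model="openai/gpt-4o-mini"),  # any LiteLLM-supported model string
        instruction="Answer questions concisely.",
    )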
Like the OP and others, I didn't use the API for Gemini, and it was not obvious how to do that -- that said, it's not cost effective to develop on API pay-as-you-go versus a subscription, so I don't know why you would. Sure, you need the API for any application with built-in LLM features, but not for developing with the LLM-assisted CLI tools.
I think the issue with CLI tools for many is that you need to be competent with the CLI, like an actual *nix user, not a Mac-first user, etc. Personally, I have over 30 years of daily shell use as a sysadmin and developer. I started with korn and csh and have used every one you can think of since.
For me, any sort of a GUI slows me down so much it's not feasible. To say nothing of the physical ailments associated with excessive mousing.
Having put approaching thousands of hours working with LLM coding tools so far, for me claude-code is the best, gemini is very close and might have a better interface, and codex is unusable and fights me the whole time.
My spend is lower, so I conclude otherwise
> I think the issue with cli tools for many is...
Came from that world, vim, nvim, my dev box is remote, homelab
The issue is not that it is a CLI; it's that you are trying to develop software through the limited portal of a CLI. How do you look at multiple files at the same time? How do you scroll through that file?
1. You cannot through a tool like gemini-cli
2. You are using another tool to look at files / diffs
3. You aren't looking at the code and vibe coding your way to future regret
> For me, any sort of a GUI slows me down so much it's not feasible.
vim is a "gui" (tui), vs code has keyboard shortcuts, associating GUI with mouse work
> Having put approaching thousands of hours working with LLM coding tools so far, for me claude-code is the best, gemini is very close and might have a better interface, and codex is unusable and fights me the whole time.
Anecdotal "vibe" opinions are not useful. We need to do some real evals because people are telling stories like they do about their stock wins, i.e. they don't tell you about the losses.
Thousands of hours sounds like you're into the vibe coding / churning / outsourcing paradigm. There are better ways to leverage these tools. Also, if you have 1000+ hours of LLM time, how have you not gone below the prepackaged experience Big AI is selling you?
Paying is hard. And it is confusing how to set it up: you have to create a Vertex billing account and go through a cumbersome process to connect your AI Studio to it and bring over a "project", which then disconnects all the time and which you have to re-select to use Nano Banana Pro or Gemini 3. It's a very bad process.
It's easy to miss this because they are very generous with the free tier, but Gemini 3 is not free.
I often see coworkers offload the work of critical thinking to an AI to give them answers instead of doing the grunt work necessary to find the answers on their own.
> [They seemingly] can't think on their own without an AI [moderating]
They _literally_ can think on their own, and they _literally_ did think up a handful of prompts.
A more constructive way to make what I assume to be your point would be highlighting why this shift is meaningful and leaving the appeal to ego for yourself.
I've edited my post to be more charitable
Low energy afternoons you might be able to come up with a prompt but not the actual solution.
There are people offloading all their thoughts into prompts instead of doing the research themselves, and some have reached a point where they've lost the ability to do things because of AI overuse.
I assume it has something to do with the underlying constraint grammar/token masks becoming too long/taking too long to compute. But as end users we have no way of figuring out what the actual limits are.
OpenAI has more generous limits on the schemas and clearer docs. https://platform.openai.com/docs/guides/structured-outputs#s....
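The main workaround seems to be keeping the response schema small and flat; a minimal sketch with the google-genai SDK (config field names per the current docs, and they may drift between versions):

    from pydantic import BaseModel

    from google import genai

    class LineItem(BaseModel):
        name: str
        price: float

    client = genai.Client()  # API key from the environment

    resp = client.models.generate_content(
        model="gemini-2.0-flash",  # placeholder model name
        contents="Extract the line items: 2x coffee 3.50 each, 1x bagel 2.00",
        config={
            "response_mime_type": "application/json",
            "response_schema": list[LineItem],  # keep schemas small and flat
        },
    )
    print(resp.parsed)  # parsed into LineItem objects when a schema is supplied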
You guys closed this issue for no reason: https://github.com/googleapis/python-genai/issues/660
Other than that, good work! I love how fast the Gemini models are. The current API is significantly less of a shitshow compared to last year with property ordering etc.
Will follow up on some of the other threads in here!
Does this mean I can finally use premium features without onboarding my entire google workspace? I made the mistake of getting a good chunk of my family on my google app domain back in ~2007. For the last few years, I spend $80 per month just to host their email because the cost is easier to deal with than the human beings themselves. But I want to use the latest premium Google AI tooling and as far as I can tell the only way to do that is to upgrade my google workspace to the next tier and blow even more $$$ away each month. Suffice to say I have not done this, but it is a blocker from using things like Nano Banana with a non-gmail account.
I want to release a service using computer-use but am worried about 429 quota errors if I have actual users.
1. cart out in front of the horse a bit on this one, lame hype building at best
2. Not at all what I want the team focusing on, they don't seem to have a clear mission
Generally Google PMs and leaders have not been impressive or in touch for many years, since about the time all the good ones cashed out and started their own companies
hmmm
> 2. Not at all what I want the team focusing on, they don't seem to have a clear mission
allow anyone to build with Google's latest AI models, be the fastest path from prompt to production with Gemini
They'll get to it when it becomes strategically important to.
Why making it easier to pay them isn't always strategically important, I'm not sure.