When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.
To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.
mattzito · 1h ago
It's a lot more nuanced than that. If you use the free edition of Code Assist, your data can be used UNLESS you opt out, which is at the bottom of the support article you link to:
"If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals."
If you pay for code assist, no data is used to improve. If you use a Gemini API key on a pay as you go account instead, it doesn't get used to improve. It's just if you're using a non-paid, consumer account and you didn't opt out.
That seems different than what you described.
ipsum2 · 49m ago
Sorry, that's not correct. Did you check out the link? It doesn't describe the CLI, only the IDE.
"You can find the Gemini Code Assist for individuals privacy notice and settings in two ways:
- VS Code
- IntelliJ
"
tiahura · 22m ago
As a lawyer, I'm confused.
I guess the key question is whether the Gemini CLI, when used with a personal Google account, is governed by the broader Gemini Apps privacy settings here? https://myactivity.google.com/product/gemini?pli=1
If so, it appears it can be turned off. However, my CLI activity isn't showing up there?
Can someone from Google clarify?
mil22 · 39m ago
There is some information on this buried in configuration.md under "Usage Statistics". They claim:
*What we DON'T collect:*
- *Personally Identifiable Information (PII):* We do not collect any personal information, such as your name, email address, or API keys.
- *Prompt and Response Content:* We do not log the content of your prompts or the responses from the Gemini model.
- *File Content:* We do not log the content of any files that are read or written by the CLI.
This is useful, and directly contradicts the terms and conditions for Gemini CLI. I wonder which one is true?
jdironman · 31m ago
I wonder what the legal difference between "collect" and "log" is.
kevindamm · 22m ago
Collection means it gets sent to a server, logging implies (permanent or temporary) retention of that data. I tried finding a specific line or context in their privacy policy to link to but maybe someone else can help me provide a good reference. Logging is a form of collection but not everything collected is logged unless mentioned as such.
There's nothing wrong with promoting your own projects, but it's a little weird that you don't disclose that you're the creator.
nicce · 1h ago
That is just the Gemma model. Most people seek capabilities equivalent to Gemini 2.5 Pro if they want to do any kind of coding.
jart · 1h ago
Gemma 27b can write working code in dozens of programming languages. It can even translate between languages. It's obviously not as good as Gemini, which is the best LLM in the world, but Gemma is built from the same technology that powers Gemini and Gemma is impressively good for something that's only running locally on your CPU or GPU. It's a great choice for airgapped environments. Especially if you use old OSes like RHEL5.
nicce · 11m ago
It may be sufficient for generating serialized data and for some level of autocomplete, but not for any serious agentic coding where you won't end up wasting time. Maybe some junior-level programmers will still find it fascinating, but senior-level programmers end up fighting bad design choices, poor algorithms, and other verbose garbage most of the time. This happens even with the best models.
seunosewa · 11m ago
The technology that powers Gemini created duds until Gemini 2.5 Pro; 2.5 Pro is the prize.
Workaccount2 · 58m ago
This is just for free use (individuals), for standard and enterprise they don't use the data.
Which pretty much means if you are using it for free, they are using your data.
I don't see what is alarming about this; everyone else has either the same policy or no free usage. Hell, the surprising thing is that they still let free users opt out...
thimabi · 37m ago
> everyone else has either the same policy or no free usage
That’s not true. ChatGPT, even in the free tier, allows users to opt out of data sharing.
joshuacc · 10m ago
I believe they are talking about the OpenAI API, not ChatGPT.
mil22 · 47m ago
They really need to provide some clarity on the terms around data retention and training, for users who access Gemini CLI free via sign-in to a personal Google account. It's not clear whether the Gemini Code Assist terms are relevant, or indeed which of the three sets of terms they link at the bottom of the README.md apply here.
rudedogg · 1h ago
Insane to me there isn’t even an asterisk in the blog post about this. The data collection is so over the top I don’t think users suspect it because it’s just absurd. For instance Gemini Pro chats are trained on too.
If this is legal, it shouldn’t be.
FiberBundle · 51m ago
Do you honestly believe that the opt-out by Anthropic and Cursor means your code won't be used for training their models? It seems likely that they would rather risk a massive fine for potentially solving software development than let some competitor try it instead.
rudedogg · 41m ago
Yes.
The resulting class-action lawsuit would bankrupt the company, along with the reputation damage, and fines.
nojito · 57m ago
>If you use this, all of your code data will be sent to Google.
Not if you pay for it.
reaperducer · 53m ago
>If you use this, all of your code data will be sent to Google.
Not if you pay for it.
Today.
In six months, a "Terms of Service Update" e-mail will go out to an address that is not monitored by anyone.
nojito · 25m ago
Sure but then you can stop paying.
There's also zero chance they will risk paying customers by changing this policy.
iandanforth · 4h ago
I love how fragmented Google's Gemini offerings are. I'm a Pro subscriber, but I now learn I should be a "Gemini Code Assist Standard or Enterprise" user to get additional usage. I didn't even know that existed! As a run of the mill Google user I get a generous usage tier but paying them specifically for "Gemini" doesn't get me anything when it comes to "Gemini CLI". Delightful!
diegof79 · 2h ago
Google suffers from Microsoft's issues: it has products for almost everything, but its confusing product messaging dilutes all the good things it does.
I like Gemini 2.5 Pro, too, and recently, I tried different AI products (including the Gemini Pro plan) because I wanted a good AI chat assistant for everyday use. But I also wanted to reduce my spending and have fewer subscriptions.
The Gemini Pro subscription is included with Google One, which is very convenient if you use Google Drive. But I already have an iCloud subscription tightly integrated with iOS, so switching to Drive and losing access to other iCloud functionality (like passwords) wasn’t in my plans.
Then there is the Gemini chat UI, which is light years behind the OpenAI ChatGPT client for macOS.
NotebookLM is good at summarizing documents, but the experience isn’t integrated with the Gemini chat, so it’s like constantly switching between Google products without a good integrated experience.
The result is that I end up paying a subscription to Raycast AI because the chat app is very well integrated with other Raycast functions, and I can try out models. I don’t get the latest model immediately, but it has an integrated experience with my workflow.
My point in this long description is that by being spread across many products, Google is losing on the UX side compared to OpenAI (for general tasks) or Anthropic (for coding). In just a few months, Google tried to catch up with v0 (Google Stitch), GH Copilot/Cursor (with that half-baked VSCode plugin), and now Claude Code. But all the attempts look like side-projects that will be killed soon.
Fluorescence · 1h ago
> The Gemini Pro subscription is included with Google One
It's not in Basic, Standard or Premium.
It's in a new tier called "Google AI Pro" which I think is worth inclusion in your catalogue of product confusion.
Oh wait, there are even more tiers, which for some reason can't be paid for annually. Weird... why not? "Google AI Ultra" and some others just called Premium again but now including AI. 9 tiers, 5 called Premium, 2 with AI in the name, but 6 that include Gemini. What a mess.
scoopdewoop · 23m ago
It is bold to assume these products will even exist in a year
behnamoh · 3h ago
Actually, that's the reason a lot of startups and solo developers prefer non-Google solutions, even though the quality of Gemini 2.5 Pro is insanely high. The Google Cloud Dashboard is a mess, and they haven't fixed it in years. They have Vertex, which is supposed to host some of their models, but I don't understand the difference between that and their own cloud.
And then you have two different APIs depending on the level of your project. This is literally the opposite of what you'd expect from an AI provider: you start small, and regardless of the scale of your project, you shouldn't face obstacles. So essentially, Google has built an API solution that does not scale, because as soon as your project gets bigger, you have to switch from the Google AI Studio API to the Vertex API.
And I find it ridiculous because their OpenAI-compatible API does not work all the time, so a lot of tools that rely on it don't actually work.
Google's AI offerings that should be simplified/consolidated:
- Jules vs Gemini CLI?
- Vertex API (requires a Google Cloud Account) vs Google AI Studio API
Also, since Vertex depends on Google Cloud, projects get more complicated because you have to modify these in your app [1]:
```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
It took me a while but I think the difference between Vertex and Gemini APIs is that Vertex is meant for existing GCP users and Gemini API for everyone else. If you are already using GCP then Vertex API works like everything else there. If you are not, then Gemini API is much easier. But they really should spell it out, currently it's really confusing.
Also they should make it clearer which SDKs, documents, pricing, SLAs etc apply to each. I still get confused when I google up some detail and end up reading the wrong document.
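The split shows up in the configuration itself. A rough sketch of the two modes side by side; the variable names are taken from the Gemini docs, so treat the exact spellings as assumptions that may drift between SDK versions:

```shell
# Mode 1: Gemini API (AI Studio) - just an API key, no GCP project needed
export GEMINI_API_KEY="your-api-key"

# Mode 2: Vertex AI - a GCP project and location instead of an API key
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="global"
export GOOGLE_GENAI_USE_VERTEXAI=True
```

With the same SDK on top, which set of variables you export is effectively what decides which product, pricing, and SLA you get.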
fooster · 2h ago
The other difference is that reliability for the gemini api is garbage, whereas for vertex ai it is fantastic.
cperry · 2h ago
@sachinag is afk but wanted me to flag that he's on point for fixing the Cloud Dashboard - it's WIP!
sachinag · 1h ago
Thanks Chris!
"The Google Cloud Dashboard is a mess, and they haven't fixed it in years." Tell me what you want, and I'll do my best to make it happen.
One more suggestion: Please remove the need to make a project before we can use Gemini API. That seriously impedes our motivation in using Gemini for one-off scripts and proof-of-concept products where creating a project is overkill.
Ideally what I want is this: I google "gemini api" and that leads me to a page where I can login using my Google account and see the API settings. I create one and start using it right away. No extra wizardry, no multiple packages that must be installed, just the gemini package (no gauth!) and I should be good to go.
sachinag · 37m ago
Totally fair. Yes, Google AI Studio [ https://aistudio.google.com ] lets you do this but Google Cloud doesn't at this time. That's super duper irritating, I know.
sitkack · 1h ago
That will never happen. Just make a scrub project that is your misc-dev-drawer.
WXLCKNO · 2h ago
You guys should try my AGI test.
It's easy, you just ask the best Google Model to create a script that outputs the number of API calls made to the Gemini API in a GCP account.
100% fail rate so far.
coredog64 · 2h ago
At least a bunch of people got promotions for demonstrating scope via the release of a top-level AI product.
irthomasthomas · 1h ago
I just use gemini-pro via openrouter API. No painful clicking around on the cloud to find the billing history.
behnamoh · 1h ago
but you won't get the full API capabilities of Gemini (like setting the safety level).
bayindirh · 4h ago
There's also a $300/mo AI ULTRA membership. It's interesting. Google One memberships can't even detail what "extra features" I get, because it possibly changes every hour or so.
SecretDreams · 3h ago
Maybe their products team is also just run by Gemini, and it's changing its mind every day?
I also just got the email for Gemini Ultra and I couldn't even figure out what was being offered compared to Pro, apart from 30 TB of storage vs 2 TB!
ethbr1 · 2h ago
> Maybe their products team is also just run by Gemini, and it's changing its mind every day?
Never ascribe to AI, that which is capable of being borked by human PMs.
Keyframe · 3h ago
> There's also $300/mo AI ULTRA membership
Not if you're in EU though. Even though I have zero or less AI use so far, I tinker with it. I'm more than happy to pay $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini Pro.. if I knew how and where to have Gemini CLI like I do with Claude code. I have Google One. WHERE DO I SIGN UP, HOW DO I PAY AND USE IT GOOGLE? Only thing I have managed so far is through openrouter via API and credits which would amount to thousands a month if I were to use it as such, which I won't do.
What I do now is occasionally I go to AI Studio and use it for free.
GardenLetter27 · 3h ago
Google is fumbling the bag so badly with the pricing.
Gemini 2.5 Pro is the best model I've used (even better than o3 IMO) and yet there's no simple Claude/Cursor like subscription to just get full access.
Nevermind Enterprise users too, where OpenAI has it locked up.
bachmeier · 2h ago
> Google is fumbling the bag so badly with the pricing.
In certain areas, perhaps, but Google Workspace at $14/month not only gives you Gemini Pro, but 2 TB of storage, full privacy, email with a custom domain, and whatever else. College students get the AI pro plan for free. I recently looked over all the options for folks like me and my family. Google is obviously the right choice, and it's not particularly close.
kingsleyopara · 2m ago
Gemini 2.5 pro in workspace was restricted to 32k tokens [0] - do you know if this is still the case?
I know they raised the price on our Google Workspace Standard subscriptions but don't really know what we got for that aside from Gemini integration into Google Drive etc. Does this mean I can use Gemini CLI using my Workspace entitlement? Do I get Code Assist or anything like that? (But Code Assist seems to be free on a personal G account...?)
Google is fumbling with the marketing/communication - when I look at their stuff I am unclear on what is even available and what I already have, so I can't form an opinion about the price!
thimabi · 26m ago
> Does this mean I can use Gemini CLI using my Workspace entitlement?
No, you can use neither Gemini CLI nor Code Assist via Workspace, at least not at the moment. However, if you upgrade your Workspace plan, you can use Gemini Advanced via the web or app interfaces.
pbowyer · 15m ago
I'm so confused.
Workspace (standard?) customer for over a decade.
thimabi · 5m ago
Workspace users with the Business Standard plan have access to Gemini Advanced, which is Google’s AI offering via the Web interface and mobile apps. This does not include API usage, AI Studio, Gemini CLI, etc. — all of which are of course available, but must be paid separately or used in the free tier.
In the case of Gemini CLI, it seems Google does not even support Workspace accounts in the free tier. If you want to use Gemini CLI as a Workspace customer, you must pay separately for it via API billing (pay-as-you-go). Otherwise, the alternative is to login with a personal (non-Workspace) account and use the free tier.
Fluorescence · 1h ago
Only "NotebookLM" and "Chat with AI in the Gemini app" in the UK, even with "Enterprise Plus". I assume that is not Pro.
weird-eye-issue · 2h ago
And yet there were still some AI features that were unavailable to workspace users for a few months and you had to use a personal account. I think it's mostly fixed now but that was quite annoying since it was their main AI product (Gemini Studio or whatever, I don't remember for sure)
bcrosby95 · 1h ago
They're 'fumbling' because these models are extremely expensive to run. It's also why there are so many products and so much confusion across the whole industry.
thimabi · 22m ago
An interesting thing is that Google AI offers are much more confusing than the OpenAI ones — despite the fact that ChatGPT models have one of the worst naming schemes in the industry. Google has confusing model names, plans, API tiers, and even interfaces (AI Studio, Gemini app, Gemini Web, Gemini API, Vertex, Google Cloud, Code Assist, etc.). More often than not, these things overlap with one another, ensuring minimal clarity and preventing widespread usage of Google’s models.
llm_nerd · 2h ago
I wouldn't dream of thinking anyone has anything "locked up". Certainly not OpenAI, which increasingly seems to be fighting an uphill battle against competitors (including Microsoft, who, even though they're a partner, are also a competitor) who have other inroads.
Not sure what you mean by "full access", as none of the providers offer unrestricted usage. Pro gets you 2.5 Pro with usage limits. Ultra gets you higher limits + deep think (edit: accidentally put research when I meant think where it spends more resources on an answer) + much more Veo 3 usage. And of course you can use the API usage-billed model.
tmoertel · 2h ago
The Gemini Pro subscription includes Deep Research and Veo 3; you don't need the pricey Ultra subscription: https://gemini.google/subscriptions/
magic_hamster · 2h ago
Veo 3 is available only in some regions even for Pro users.
__MatrixMan__ · 3h ago
Anthropic is the same. Unless it has changed within the last few months, you can subscribe to Claude, but if you want to use Claude Code it'll come out of your "API usage" bucket, which is billed separately from the subscription.
Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them.
Workaround is to use their GUI with some MCPs but I dislike it because window navigation is just clunky compared to terminal multiplexer navigation.
carefulfungi · 3h ago
This is down voted I guess because the circumstances have changed - but boy is it still confusing. All these platforms have chat subscriptions, api pay-as-you-go, CLI subscriptions like "claude code" ... built-in offers via Github enterprise or Google Workspace enterprise ...
It's a frigg'n mess. Everyone at our little startup has spent time trying to understand what the actual offerings are; what the current set of entitlements are for different products; and what API keys might be tied to what entitlements.
I'm with __MatrixMan__ -- it's super confusing and needs some serious improvements in clarity.
justincormack · 2h ago
And claude code can now be connected to either an API sub or a chat sub apparently.
HarHarVeryFunny · 38m ago
Isn't that a bit like saying that gasoline should be sold as a fixed price subscription rather than a usage based scheme where long distance truckers pay more than someone driving < 100 miles per week?
A ChatBot is more like a fixed-price buffet where usage is ultimately human limited (even if the modest eaters are still subsidizing the hogs). An agentic system is going to consume resources in much more variable manner, depending on how it is being used.
> Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them
Obviously these companies want you to increase the amount of their product you consume, but it seems odd to call that a jerk move! FWIW, Anthropic's stated motivation for Claude Code (which Gemini is now copying) was to be agnostic to your choice of development tools, since CLI access is pretty much ubiquitous, even inside IDEs. Whether it's the CLI-based design, the underlying model, or the specifics of what Claude Code is capable of, they seem to have got something right, and apparently usage internal to Anthropic skyrocketed just based on word of mouth.
__MatrixMan__ · 28m ago
Claude desktop editing files and running commands via the desktop commander MCP is pretty much equivalent functionality wise to Claude Code. I can set both of them to go, make tea, and come back to see that they're still cranking after modifying several files and running several commands.
It's just a UI difference.
HarHarVeryFunny · 11m ago
These companies are all for-profit, regardless of what altruistic intent they are trying to spin. Free tier usage and fixed price buffets are obviously not where the profit is, so it's hard to blame them for usage-based pricing for their premium products targeting mass adoption.
gnur · 3h ago
This has changed actually, since this month you can use claude code if you have a cloud pro subscription.
__MatrixMan__ · 1h ago
Great news, thanks.
Workaccount2 · 3h ago
I think it is pretty clear that these $20/subs are loss leaders, and really only meant to get regular people to really start leaning on LLMs. Once they are hooked, we will see what the actual price of using so much compute is. I would imagine right now they are pricing their APIs either at cost or slightly below.
sebzim4500 · 1h ago
I'm sure that there are power users who are using much more than $20 worth of compute, but there will also be many users who pay but barely use the service.
stpedgwdgfhgdd · 1h ago
When using a single terminal, Pro is good enough (even with a medium-large code base). When I started working in two terminals on two different issues at the same time, I reached the credit limit.
ethbr1 · 2h ago
Or they're planning on the next wave of optimized hardware cutting inference costs.
trostaft · 3h ago
AFAIK, Claude code operates on your subscription, no? That's what this support page says
Could have changed recently. I'm not a user so I can't verify.
re5i5tor · 3h ago
In recent research (relying on Claude so bear that in mind), connecting CC via Anthropic Console account / API key ends up being less expensive.
SparkyMcUnicorn · 2h ago
If you're doing anything more than toying around, this is not the case.
Using the API would have cost me $1200 this month, if I didn't have a subscription.
I'm a somewhat extensive user, but most of my coworkers are using $150-$400/month with the API.
CGamesPlay · 2h ago
There's a log analyzer tool that will tell you how much the API costs are for your usage: https://ccusage.com
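For anyone curious, the tool reads Claude Code's local usage logs, so there's nothing to configure; the invocation below is assumed from the ccusage docs:

```shell
# Run ccusage directly via npx; it parses Claude Code's local JSONL logs
# and prints a per-day, per-model token and cost breakdown
npx ccusage
```

That makes it easy to sanity-check whether a flat subscription or the API would be cheaper for your own usage pattern.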
willsmith72 · 2h ago
less expensive than what? You can use CC on the $20 plan. If you're using the maximum of your $20 subscription usage every 4 hours every day, the equivalent API cost would be at least hundreds per month
unshavedyak · 3h ago
In addition to others mentioning subscriptions being better in Claude Code, I wanted to compare the two, so I tried to find a Claude Max equivalent license... I have no clue how. In their blog post they mention a `Gemini Code Assist Standard or Enterprise license` but they don't even link to it.. lol.
You'd think with all this AI tooling they'd be able to organize better, but I think the AI age will be a very messy one when it comes to messaging and content.
tmaly · 1h ago
I was just trying to figure out if I get anything as a pro user. Thank you, you answered my question.
It's very confusing how they post about this on X; you would think you get additional usage. The messaging is all over the place.
bachmeier · 2h ago
I had a conversation with Copilot about Copilot offerings. Here's what they told me:
If I Could Talk to Satya...
I'd say:
“Hey Satya, love the Copilots—but maybe we need a Copilot for Copilots to help people figure out which one they need!”
Then I had them print out a table of Copilot plans:
- Microsoft Copilot Free
- Github Copilot Free
- Github Copilot Pro
- Github Copilot Pro+
- Microsoft Copilot Pro (can only be purchased for personal accounts)
- Microsoft 365 Copilot (can't be used with personal accounts and can only be purchased by an organization)
boston_clone · 1h ago
I'd really like to hear your own personal perspective on the topic instead of a regurgitation of an LLM.
3abiton · 4h ago
And they say our scale-up is siloed. Leave it to Google to show 'em.
nojito · 3h ago
You don't get API keys for that subscription because it's a flat monthly cost.
iandanforth · 2h ago
That's not a given, Anthropic recently added Claude CLI access to their $20/m "Pro" plan removing the need for a separate API key.
ur-whale · 3h ago
> Delightful!
You clearly have never had the "pleasure" to work with a Google product manager.
Especially the kind that were hired in the last 15-ish years.
This type of situation is absolutely typical, and probably one of the more benign things among the general blight they typically inflict on Google's product offering.
The cartesian product of pricing options X models is an effing nightmare to navigate.
cperry · 4h ago
Hi - I work on this. Uptake is a steep curve right now, spare a thought for the TPUs today.
Appreciate all the takes so far, the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests we'll all be reading.
bsenftner · 2h ago
Thank you for your work on this. I spent the afternoon yesterday trying to convert an algorithm written in ruby (which I do not know) to vanilla JavaScript. It was a comedy of failing nonsense as I tried to get gpt-4.1 to help, and it just led me down pointless rabbit holes. I installed Gemini CLI out of curiosity, pointed it at the Ruby project, and it did the conversion from a single request, total time from "think I'll try this" to it working was 5 minutes. Impressed.
cperry · 2h ago
<3 love to hear it!
yomismoaqui · 21m ago
I have been evaluating other tools like Amp (from Sourcegraph), and when trying Gemini CLI in VS Code I found some things to improve:
- In a new chat I have to re-approve things like executing "go mod tidy", "git", writing files... I'd need to create a new chat for each feature (maybe an option to clear the current chat in VS Code would work).
- I found some problems when adding a new endpoint to an example Go REST server I was trying it on: it just deleted existing endpoints in the file. Same with tests: it deleted existing tests when asked to add one. For comparison, I didn't hit these problems when evaluating Amp (which uses Claude 4).
Overall it works well, and I hope you keep polishing it. Good job!!
ebiester · 3h ago
So, as a member of an organization who pays for google workspace with gemini, I get the message `GOOGLE_CLOUD_PROJECT environment variable not found. Add that to your .env and try again, no reload needed!`
At the very least, we need better documentation on how to get that environment variable, as we are not on GCP and it is not immediately obvious how to do so. At worst, it means that your users paying for Gemini don't have access to this where your general Google users do.
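In case it helps others hitting the same message, a sketch of a workaround; it assumes you have the gcloud CLI and access to at least one GCP project (whether your Workspace entitlement is honored is a separate question):

```shell
# List the project IDs you have access to (requires the gcloud CLI)
gcloud projects list --format="value(projectId)"

# Gemini CLI reads GOOGLE_CLOUD_PROJECT (a project ID) from the
# environment or a .env file, despite some docs calling it
# GOOGLE_CLOUD_PROJECT_ID
export GOOGLE_CLOUD_PROJECT="your-project-id"
```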
thimabi · 3h ago
I believe Workspace users have to pay a separate subscription to use the Gemini CLI, the so-called “Gemini for Google Cloud”, which starts at an additional 19 dollars per month [^1]. If that’s really the case, it’s very disappointing to me. I expected access to Gemini CLI to be included in the normal Workspace subscription.
[edit] all lies - I got my wires crossed, free tier for Workspace isn't yet supported. sorry. you need to set the project and pay. this is WIP.
Workspace users [edit: cperry was wrong] can get the free tier as well, just choose "More" and "Google for Work" in the login flow.
It has been a struggle to get a simple flow that works for all users, happy to hear suggestions!
thimabi · 2h ago
Thanks for your clarification. I've been able to set up Gemini CLI with my Workspace account.
Just a heads-up: your docs about authentication on Github say to set a GOOGLE_CLOUD_PROJECT_ID environment variable. However, what the Gemini CLI actually looks for, from what I can tell, is a GOOGLE_CLOUD_PROJECT environment variable containing the name of a project (rather than its ID). You might want to fix that discrepancy between code and docs, because it might confuse other users as well.
I don’t know what constraints made you all require a project ID or name to use the Gemini CLI with Workspace accounts. However, it would be far easier if this requirement were eliminated.
cperry · 2h ago
sorry, I was wrong about free tier - I've edited above. this is WIP.
noted on documentation, there's a PR in flight on this. also found some confusion around gmail users who are part of the developer program hitting issues.
thimabi · 2h ago
> free tier for Workspace isn't yet supported. sorry. you need to set the project and pay.
Well, I've just set up Gemini CLI with a Workspace account project in the free tier, and it works apparently for free. Can you explain whether billing for that has simply not been configured yet, or where exactly billing details can be found?
Unfortunately all of that is pretty confusing, so I'll hold off using Gemini CLI until everything has been clarified.
bachmeier · 2h ago
> noted on documentation, there's a PR in flight on this. also found some confusion around gmail users who are part of the developer program hitting issues.
Maybe you have access to an AI solution for this.
rtaylorgarlock · 3h ago
I can imagine. Y'all didn't start simple like some of your competitors; 'intrapraneurial' efforts in existing contexts like yours come with well-documented struggles. Good work!
Workaccount2 · 3h ago
Just get a pop-up or something in place to make it dead simple, because workspace users are probably the core users of the product.
827a · 2h ago
I've only played with the gemini-cli here for 30 minutes, so this is just my best guess: I believe that if you auth with a Workspace account it routes all the requests through the GCP Vertex API, which is why it needs a GOOGLE_CLOUD_PROJECT env set, and that also means usage-based billing. I don't think it will leverage any subscriptions the workspace account might have (are there still gemini subscriptions for workspace? I have no idea. I thought they just raised everyone's bill and bundled it in by default. What's Gemini Code Assist Standard or Enterprise? I have no idea).
While I get my organization's IT department involved, I do wonder why this is built in a way that requires more work for people already paying google money than a free user.
rtaylorgarlock · 2h ago
@ebiester, my wife's maiden name is E. Biester. I did a serious double take. Got you on X :)
Maxious · 2h ago
I'd echo that having to get the IT section involved to create a google cloud project is not great UX when I have access to NotebookLM Pro and Gemini for Workplace already.
Also this doco says GOOGLE_CLOUD_PROJECT_ID but the actual tool wants GOOGLE_CLOUD_PROJECT
conception · 2h ago
Google Gemini
Google Gemini Ultra
AI Studio
Vertex AI
NotebookLM
Jules
All different products doing the sameish thing. I don’t know where to send users to do anything. They are all licensed differently. Bonkers town.
elashri · 3h ago
Hi, Thanks for this work.
Currently it seems these are the CLI tools available. Is it possible to extend this set, or to disable some of these tools (for various reasons)?
I tried to get Gemini CLI to update itself using the MCP settings for Claude. It went off the rails. I then fed it the link you provided and it correctly updated its settings file. You might mention the settings.json file in the README.
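For reference, MCP servers go in `~/.gemini/settings.json`; a minimal sketch (the server name, package, and timeout here are placeholders, not a known-good config):

```json
{
  "mcpServers": {
    "exampleServer": {
      "command": "npx",
      "args": ["-y", "@example/some-mcp-server"],
      "timeout": 30000
    }
  }
}
```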
mkagenius · 3h ago
Hi - I integrated Apple Container on M1 to run[1] the code generated by Gemini CLI. It works great!
Using the Gemini CLI the first thing I tried to do was "Create GEMINI.md files to customize your interactions with Gemini." The command ran for about a minute before receiving a too many requests error.
Right now authentication doesn't work if you're working on a remote machine and try to authenticate with Google, FYI. You need an alternate auth flow that gives the user a link and lets them paste a key in (this is how Claude Code does it).
sandGorgon · 18m ago
i have a Google AI Pro subscription - what kind of credits/usage/allowance do i get towards gemini cli ?
hiAndrewQuinn · 1h ago
Feature request! :)
I'm a Gemini Pro subscriber and I would love to be able to use my web-based chat resource limits with, or in addition to, what is offered here. I have plenty of scripts that are essentially "Weave together a complex prompt I can send to Gemini Flash to instantly get the answer I'm looking for and xclip it to my clipboard", and this would finally let me close the last step in those scripts.
Love what I'm seeing so far!
imjonse · 1h ago
Hi. It is unclear from the README whether the free limits apply also when there's an API key found in the environment - not explicitly set for this tool - and there is no login requirement.
GenerWork · 2h ago
I'm just a hobbyist, but I keep getting the error "The code change produced by Gemini cannot be automatically applied. You can manually apply the change or ask Gemini to try again". I assume this is because the service is being slammed?
Edit: I should mention that I'm accessing this through Gemini Code Assist, so this may be something out of your wheelhouse.
I don't think that's capacity, you should see error codes.
streb-lo · 2h ago
Is there a reason all workspace accounts need a project ID? We pay for gemini pro for our workspace accounts but we don't use GCP or have a project ID otherwise.
thimabi · 2h ago
The reason is that billing is separate, via the paid tier of the API. Just a few minutes ago, I was able to test Gemini CLI using a Workspace account after setting up a project in the free tier of the API. However, that seems to have been a bug on their end, because I now get 403 errors (Forbidden) with that configuration. The remaining options are either to set up billing for the API or use a non-Workspace Google account.
cperry · 1h ago
the short answer is b/c one of our dependencies requires it and hasn't resolved it.
danavar · 2h ago
Is there a way to instantly, quickly prompt it in the terminal, without loading the full UI? Just to get a short response without filling the terminal page.
like to just get a short response - for simple things like "what's an nm and grep command to find this symbol in these 3 folders". I use Gemini a lot for this type of thing already
Both allow you to switch between models, send short prompts from a CLI, optionally attach some context. I prefer mods because it's an easier install and I never need to worry about Python envs and other insanity.
indigodaddy · 2h ago
Didn't know about mods, looks awesome.
cperry · 2h ago
-p is your friend
hiAndrewQuinn · 1h ago
gemini --prompt "Hello"
carraes · 3h ago
it would be cool to work with my google ai pro sub
cperry · 3h ago
working on it
Freedom2 · 42m ago
Pointed it at a project directory and asked it to find and fix an intentionally placed bug without referencing any filenames. It seemed to struggle finding any file or constructing a context about the project unless specifically asked. FWIW, Claude Code tries to build an 'understanding' of the codebase when given the same prompt. For example, it struggled when I asked to "fix the modal logic" but nothing was specifically called a modal.
Is the recommendation to specifically ask "analyze the codebase" here?
nojito · 3h ago
How often did you use gemini-cli to build on gemini-cli?
_ryanjsalva · 3h ago
We started using Gemini CLI to build itself after about week two. If I had to guess, I'd say better than 80% of the code was written with Gemini CLI. Honestly, once we started using the CLI, we started experimenting a lot more and building waaaaay faster.
bdmorgan · 3h ago
100% of the time
javier123454321 · 3h ago
one piece of feedback. Please do neovim on top of vim or have a way to customize the editor beyond your list.
ur-whale · 3h ago
> Hi - I work on this.
There was a time when Google produced products that had:
- 1 logo
- 1 text field
- 2 buttons.
This ended up being a sizable part of why Google became so successful.
I would suggest that you allow yourself and your team to be visited by the spirit of those days.
atonse · 2m ago
This is all very cool, but I hate to be the "look at the shiny lights" guy...
How did they do that pretty "GEMINI" gradient in the terminal? is that a thing we can do nowadays? It doesn't seem to be some blocky gradient where each character is a different color. It's a true gradient.
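(Best guess: it is per-character color, just 24-bit "truecolor" ANSI escapes, so the steps are too fine to see. A sketch of the idea — the blue-to-red ramp below is an assumed palette, not the CLI's actual one:)

```shell
# Each character gets its own interpolated 24-bit RGB foreground color
# via the ESC[38;2;r;g;b m sequence; the result reads as a smooth gradient.
text="GEMINI"
n=$(( ${#text} - 1 ))
out=""
i=0
while [ "$i" -le "$n" ]; do
  r=$(( 66 + (234 - 66) * i / n ))
  g=$(( 133 + (67 - 133) * i / n ))
  b=$(( 244 + (53 - 244) * i / n ))
  ch=$(printf '%s' "$text" | cut -c $(( i + 1 )))
  out="${out}\033[38;2;${r};${g};${b}m${ch}"
  i=$(( i + 1 ))
done
printf '%b\033[0m\n' "$out"
```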
bbminner · 1m ago
TIL about several more cool gemini-powered prototyping tools: both 1) Canvas tool option in Gemini web (!) app and 2) Build panel in Google AI Studio can generate amazing multi-file shareable web apps in seconds.
joelm · 54m ago
Been using Claude Code (4 Opus) fairly successfully in a large Rust codebase, but sometimes frustrated by it with complex tasks. Tried Gemini CLI today (easy to get working, which was nice) and it was pretty much a failure. It did a notably worse job than Claude at having the Rust code modifications compile successfully.
However, Gemini at one point output what will probably be the highlight of my day:
"I have made a complete mess of the code. I will now revert all changes I have made to the codebase and start over."
What great self-awareness and willingness to scrap the work! :)
ZeroCool2u · 32m ago
Personally my theory is that Gemini benefits from being able to train on Google's massive internal codebase, and because Rust has seen very low uptake internally at Google, especially since they have some really nice C++ tooling, Gemini is comparatively bad at Rust.
wohoef · 2h ago
A few days ago I tested Claude Code by completely vibe coding a simple stock tracker web app in streamlit python. It worked incredibly well, until it didn't. Seems like there is a critical project size where it just can't fix bugs anymore.
Just tried this with Gemini CLI and the critical project size it works well for seems to be quite a bit bigger. Where claude code started to get lost, I simply told Gemini CLI to "Analyze the codebase and fix all bugs". And after telling it to fix a few more bugs, the application simply works.
We really are living in the future
agotterer · 35m ago
I wonder how much of this had to do with the context window size? Gemini's window is 5x larger than Claude's.
I’ve been using Claude for a side project for the past few weeks and I find that we really get into a groove planning or debugging something and then by the time we are ready to implement, we’ve run out of context window space. Despite my best efforts to write good /compact instructions, when it’s ready to roll again some of the nuance is lost and the implementation suffers.
I’m looking forward to testing if that’s solved by the larger Gemini context window.
AJ007 · 2h ago
Current best practice for Claude Code is to have heavy lifting done by Gemini Pro 2.5 or o3/o3pro. There are ways to do this pretty seamlessly now because of MCP support (see Repo Prompt as an example.) Sometimes you can also just use Claude but it requires iterations of planning, integration while logging everything, then repeat.
I haven't looked at this Gemini CLI thing yet, but if its open source it seems like any model can be plugged in here?
I can see a pathway where LLMs are commodities. Every big tech company right now both wants their LLM to be the winner and the others to die, but they also really, really would prefer a commodity world to one where a competitor is the winner.
If the future use looks more like CLI agents, I'm not sure how some fancy UI wrapper is going to result in a winner take all. OpenAI is winning right now with user count by pure brand name with ChatGPT, but ChatGPT clearly is an inferior UI for real work.
sysmax · 1h ago
I think there are different niches. AI works extremely well for Web prototyping because a lot of that work is superficial. Back in the 90s we had Delphi, where you could make GUI applications with a few clicks as opposed to writing tons of things by hand. The only reason we don't have that for the Web is its decentralized nature: every framework vendor has their own vision and their own plan for future updates, so a lot of the work is figuring out how to marry the latest version of component X with the specific version of component Y because it is required by component Z. LLMs can do that in a breeze.
But in many other niches (say embedded), the workflow is different. You add a feature, you get weird readings. You start modelling in your head how the timing would work, doing some combination of tracing and breakpoints to narrow down your hypotheses, then try them out, and figure out what works best. I can't see the CLI agents doing that kind of work. It depends too much on hunches.
Sort of like autonomous driving: most highway driving is extremely repetitive and easy to automate, so it got automated. But going on a mountain road in heavy rain, while using your judgment to back off when other drivers start doing dangerous stuff, is still purely up to humans.
TechDebtDevin · 1h ago
Yeah, but this collapses under any real complexity; there is likely an extreme amount of redundant code, and it would probably be twice as memory-efficient if you just wrote it yourself.
I'm actually interested to see whether demand for DRAM rises faster than usual as more software becomes vibe coded, or at least partly vibe coded.
ugh123 · 1h ago
Claude seems to have trouble with extracting code snippets to add to the context as the session gets longer and longer. I've seen it get stuck in a loop simply trying to use sed/rg/etc to get just a few lines out of a file and eventually give up.
dawnofdusk · 2h ago
I feel like you get more mileage out of prompt engineering and being specific... not sure if "fix all the bugs" is an effective real-world use case.
crazylogger · 1h ago
Ask the AI to document each module in a 100-line markdown file. These should be very high level, containing no detail, just pointers to relevant files for the AI to find out the rest by itself. With such a doc as the starting point, the AI has context to work on any module.
If a module just can't be documented this way in under 100 lines, it's a good time to refactor. Chances are that if Claude's context window is not enough to work with a particular module, a human dev can't hold it in their head either. It's all about pointing your LLM precisely at the context that matters.
tvshtr · 1h ago
Yeah, and it's variable; it can happen at 250k, 500k, or later. When you interrogate it, the issue usually comes down to it being laser-focused or stuck on one specific problem, and it's very hard to turn it around.
For lack of a better comparison, it feels like the AI is on a spectrum...
ZeroCool2u · 4h ago
Ugh, I really wish this had been written in Go or Rust. Just something that produces a single binary executable and doesn't require you to install a runtime like Node.
qsort · 4h ago
Projects like this have to update frequently, having a mechanism like npm or pip or whatever to automatically handle that is probably easier. It's not like the program is doing heavy lifting anyway, unless you're committing outright programming felonies there shouldn't be any issues on modern hardware.
It's the only argument I can think of, something like Go would be goated for this use case in principle.
mpeg · 2h ago
You'd think that, but a globally installed npm package is annoying to update, as you have to do it manually and I very rarely need to update other npm global packages so at least personally I always forget to do it.
masklinn · 2h ago
> having a mechanism like npm or pip or whatever to automatically handle that is probably easier
Re-running `cargo install <crate>` will do that. Or install `cargo-update`, then you can bulk update everything.
And it works hella better than using pip in a global python install (you really want pipx/uvx if you're installing python utilities globally).
IIRC you can install Go stuff with `go install`, dunno if you can update via that tho.
StochasticLi · 2h ago
This whole thread is a great example of the developer vs. user convenience trade-off.
A single, pre-compiled binary is convenient for the user's first install only.
MobiusHorizons · 1h ago
How so? Doesn’t it also make updates pretty easy? Have the precompiled binary know how to download the new version. Sure there are considerations for backing up the old version, but it’s not much work, and frees you up from being tied to one specific ecosystem
masklinn · 2h ago
Unless you build self-updating in, which Google certainly has experience in, in part to avoid clients lagging behind. Because aside from being a hindrance (refusing to start and telling the client to update) there's no way you can actually force them to run an upgrade command.
JimDabell · 1h ago
I don’t think that’s true. For instance, uv is a single, pre-compiled binary, and I can just run `uv self update` to update it to the latest version.
re-thc · 1h ago
> Re-running `cargo install <crate>` will do that. Or install `cargo-update`, then you can bulk update everything.
How many developers have npm installed vs cargo? Many won't even know what cargo is.
ZeroCool2u · 3h ago
I feel like Cargo or Go Modules can absolutely do the same thing as the mess of build scripts they have in this repo perfectly well and arguably better.
koakuma-chan · 3h ago
If you use Node.js your program is automatically too slow for a CLI, no matter what it actually does.
fhinkel · 4h ago
Ask Gemini CLI to re-write itself in your preferred language
ZeroCool2u · 4h ago
Unironically, not a bad idea.
AJ007 · 2h ago
Contest between Claude Code and Gemini CLI, who rewrites it faster/cheaper/better?
i_love_retros · 3h ago
This isn't about quality products, it's about being able to say you have a CLI tool because the other ai companies have one
clbrmbr · 3h ago
Fast following is a reasonable strategy. Anthropic provided the existence proof. It’s an immensely useful form factor for AI.
mike_hearn · 2h ago
The question is whether what makes it useful is actually being in the terminal (limited, glitchy, awkward interaction) or whether it's being able to run next to files on a remote system. I suspect the latter.
closewith · 3h ago
Yeah, it would be absurd to avoid a course of action proven productive by a competitor.
behnamoh · 3h ago
> This isn't about quality products, it's about being able to say you have a CLI tool because the other ai companies have one
Anthropic's Claude Code is also installed using npm/npx.
rs186 · 2h ago
Eh, I can't see how your comment is relevant to the parent thread. Creating a CLI in Go is barely more complicated than in JS. Rust probably is, but people aren't asking for that.
iainmerrick · 4h ago
Looks like you could make a standalone executable with Bun and/or Deno:
Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!
That is a point, not a line. An extra 2MB of source is probably a 60MB executable, as you are measuring the runtime size. Two "hello worlds" are 116MB? And who measures executables in megabits?
iainmerrick · 1h ago
What's a typical Go static binary size these days? Googling around, I'm seeing wildly different answers -- I think a lot of them are outdated.
MobiusHorizons · 1h ago
It depends a lot on what the executable does. I don’t know the hello world size, but anecdotally I remember seeing several go binaries in the single digit megabyte range. I know the code size is somewhat larger than one might expect because go keeps some type info around for reflection whether you use it or not.
iainmerrick · 1h ago
Ah, good point. I was just wondering about the fixed overhead of the runtime system -- mainly the garbage collector, I assume.
JimDabell · 4h ago
I was going to say the same thing, but they couldn’t resist turning the project into a mess of build scripts that hop around all over the place manually executing node.
iainmerrick · 1h ago
Oh, man!
I guess it needs to start various processes for the MCP servers and whatnot? Just spawning another Node is the easy way to do that, but a bit annoying, yeah.
ZeroCool2u · 4h ago
Yeah, this just seems like a pain in the ass that could've been easily avoided.
iainmerrick · 3h ago
From my perspective, I'm totally happy to use pnpm to install and manage this. Even if it were a native tool, NPM might be a decent distribution mechanism (see e.g. esbuild).
Obviously everybody's requirements differ, but Node seems like a pretty reasonable platform for this.
jstummbillig · 3h ago
It feels like you are creating a considerable fraction of the pain by taking offense at simply using npm.
evilduck · 3h ago
As a longtime user of NPM, and overall a fan of JS and TS and even their runtimes: NPM is a dumpster fire, and forcing end users to use it is brittle, lazy, and hostile. A small set of dependencies will easily result in thousands (if not tens of thousands) of transitive dependency files being installed.
If you have to run endpoint protection, that will blast your CPU with load, and it makes moving or even deleting that folder needlessly slow. It also makes the hosting burden on NPM scale with the number of users, who must all install dependencies, instead of with the number of CI instances, which isn't very nice to our hosts. Dealing with that once during your build phase and then packaging up the mess is the nicer way to distribute things that depend on NPM to end users.
jart · 1h ago
See gemmafile which gives you an airgapped version of gemini (which google calls gemma) that runs locally in a single file without any dependencies.
My thoughts exactly. Neither Rust nor Go, not even C/C++, which I could accept if there were some native OS dependencies. Maybe this is a hint at who its main audience could be.
ur-whale · 3h ago
> Maybe this is a hint on who could be its main audience.
Or a hint about the background of the folks who built the tool.
ur-whale · 3h ago
> and doesn't require you to install a runtime like Node.
My exact same reaction when I read the install notes.
Even python would have been better.
Having to install that Javascript cancer on my laptop just to be able to try this, is a huge no.
lazarie · 3h ago
"Failed to login. Ensure your Google account is not a Workspace account."
Is your vision with Gemini CLI to be geared only towards non-commercial users? I have had a Workspace account since GSuite and have been constantly punished for it by Google offerings. All I wanted was Gmail with a custom domain, and I've lost all my YouTube data and all my Fitbit data, I can't select different versions of some of your subscriptions (seemingly completely at random across your services, from an end-user perspective), and now as a Workspace account I can't use Gemini CLI for my work, which is software development. This approach strikes me as actively hostile towards your loyal paying users...
The barrier to use this project is maddening. I went through all of the setup instructions and getting the workspace error for a personal gmail account.
Googlers, we should not have to do all of this setup and prep work for a single account. Enterprise I get, but for a single user? This is insufferable.
zxspectrum1982 · 2h ago
Same here.
asadm · 4h ago
I have been using this for about a month and it’s a beast, mostly thanks to 2.5pro being SOTA and also how it leverages that huge 1M context window. Other tools either preemptively compress context or try to read files partially.
I have thrown very large codebases at this and it has been able to navigate and learn them effortlessly.
zackify · 4h ago
When I was using it in cursor recently, I found it would break imports in large python files. Claude never did this. Do you have any weird issues using Gemini? I’m excited to try the cli today
asadm · 4h ago
not at all. these new models mostly write compiling code.
Definitely not because of Claude Code eating our lunch!
jstummbillig · 3h ago
I find it hard to imagine that any of the major model vendors are suffering from demand shortages right now (if that's what you mean?)
If you mean: This is "inspired" by the success of Claude Code. Sure, I guess, but it's also not like Claude Code brought anything entirely new to the table. There is a lot of copying from each other and continually improving upon that, and it's great for the users and model providers alike.
coolKid721 · 1h ago
ai power users will drop shit immediately, yes they probably have long term contracts with companies but anyone seriously engaged has switched to claude code now (probably including many devs AT openai/google/etc.)
If you don't think claude code is just miles ahead of other things you haven't been using it (or well)
I am certain they keep metrics on those "power users" (especially since they probably work there) and when everyone drops what they were using and moves to a specific tool that is something they should be careful of.
unshavedyak · 4h ago
Yea, i'm not even really interested in Gemini atm because last i tried 2.5 Pro it was really difficult to shape behavior. It would be too wordy, or offer too many comments, etc - i couldn't seem to change some base behaviors, get it to focus on just one thing.
Which is surprising because at first i was ready to re-up my Google life. I've been very anti-Google for ages, but at first 2.5 Pro looked so good that i felt it was a huge winner. It just wasn't enjoyable to use because i was often at war with it.
Sonnet/Opus via Claude Code are definitely less intelligent than my early tests of 2.5 Pro, but they're reasonable, listen, stay on task and etc.
I'm sure i'll retry eventually though. Though the subscription complexity with Gemini sounds annoying.
sirn · 2h ago
I've found that Gemini 2.5 Pro is pretty good at analyzing existing code, but really bad at generating a new code. When I use Gemini with Aider, my session usually went like:
Me: build a plan to build X
Gemini: I'll do A, B, and C to achieve X
Me: that sounds really good, please do
Gemini: <do A, D, E>
Me: no, please do B and C.
Gemini: I apologize. <do A', C, F>
Me: no! A was already correct, please revert. Also do B and C.
Gemini: <revert the code to A, D, E>
Whereas Sonnet/Opus on average took me more tries to get it to the implementation plan that I'm satisfied with, but it's so much easier to steer to make it produce the code that I want.
ur-whale · 3h ago
> It would be too wordy, or offer too many comments
Wholeheartedly agree.
Both when chatting in text mode or when asking it to produce code.
The verbosity of the code is the worst: comments often longer than the actual code, every nook and cranny of an algorithm unrolled over hundreds of lines, most of them unnecessary.
Feels like typical code a mediocre Java developer would produce in the early 2000's
porridgeraisin · 2h ago
> Feels like typical code a mediocre Java developer would produce in the early 2000's
So, google's codebase
troupo · 4h ago
And since they have essentially unlimited money they can offer a lot for free/cheaply, until all competitors die out, and then they can crank up the prices
pzo · 3h ago
Yeah, we already saw this with Gemini 2.5 Flash. Gemini 2.0 is such a workhorse of an API model with a great price. Gemini 2.5 Flash Lite is the same price but not as good, except at math and coding (a very niche use case for an API key).
meetpateltech · 3h ago
Key highlights from blog post and GitHub repo:
- Open-source (Apache 2.0, same as OpenAI Codex)
- 1M token context window
- Free tier: 60 requests per minute and 1,000 requests per day (requires Google account authentication)
It integrates with VS Code, which suits my workflow better. And buying credits through them (at cost) means I can use any model I want without juggling top-ups across several different billing profiles.
joelthelion · 3h ago
Or aider. In any case, while top llms will likely remain proprietary for some time, there is no reason for these tools to be closed source or tied to a particular llm vendor.
solomatov · 2h ago
I couldn't find any mentions of whether they train their models on your source code. May be someone was able to?
dawnofdusk · 1h ago
Yes, they do. Scroll to the bottom of the GitHub README:
>This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
Click Gemini API, scroll
>When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.
>To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.
lordofgibbons · 55m ago
How does this compare to OpenCode and OAI's Codex? Those two are also free, they work with any LLM.
Wow, this is next-level. I can't believe this is free. This blows cline out of the water!
alpb · 1h ago
Are there any LLMs that offer ZSH plugins integrating with command history, previous command outputs, system clipboard, etc., to assist with writing the next command? Tools like the gemini/copilot CLIs don't feel particularly useful to me. I'm not gonna type "?? print last 30 lines of this file".
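The shape I have in mind is a ZLE widget in `.zshrc` that rewrites the current command line in place. A minimal sketch, assuming a `gemini -p` style one-shot mode (the invocation and prompt wording are assumptions, not a documented integration):

```shell
# .zshrc sketch: Alt+g replaces the current command line with a model
# suggestion. `zle -N` registers the widget; `bindkey` maps it.
_llm_suggest() {
  local request="$BUFFER"
  BUFFER="$(gemini -p "Reply with only a shell command, no prose: $request")"
  CURSOR=${#BUFFER}
  zle redisplay
}
zle -N _llm_suggest
bindkey '^[g' _llm_suggest
```

Wiring in history or the last command's output would mean splicing things like `$(fc -ln -1)` into the prompt, which is where it could actually get useful.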
Anyone else think it's interesting all these CLIs are written in TypeScript? I'd expect Google to use Go.
albertzeyer · 3h ago
The API can be used both via your normal Google account, or via API key?
Because it says in the README:
> Authenticate: When prompted, sign in with your personal Google account. This will grant you up to 60 model requests per minute and 1,000 model requests per day using Gemini 2.5 Pro.
> For advanced use or increased limits: If you need to use a specific model or require a higher request capacity, you can use an API key: ...
When I have the Google AI Pro subscription in my Google account, and I use the personal Google account for authentication here, will I also have more requests per day then?
I'm currently wondering what makes more sense for me (not for CLI in particular, but for Gemini in general): To use the Google AI Pro subscription, or to use an API key. But I would also want to use the API maybe at some point. I thought the API requires an API key, but here it seems also the normal Google account can be used?
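For the API-key route, the README has you export an environment variable before launching (the variable name is from the README; the value below is a placeholder):

```shell
# API-key auth instead of signing in with a Google account; billing and
# limits then follow the key's tier rather than the account's.
export GEMINI_API_KEY="YOUR_API_KEY"
echo "key set: ${GEMINI_API_KEY:+yes}"
```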
bdmorgan · 3h ago
It's firmly on the radar - we will have a great answer for this soon.
Mond_ · 3h ago
Oh hey, afaik all of this LLM traffic goes through my service!
Set up not too long ago, and afaik pretty load-bearing for this. Feels great, just don’t ask me any product-level questions. I’m not part of the Gemini CLI team, so I’ll try to keep my mouth shut.
Not going to lie, I’m pretty anxious this will fall over as traffic keeps climbing up and up.
asadm · 2h ago
do you mean the genai endpoints?
ruffrey · 2h ago
Thanks, Google. A bit of feedback - integration with `gcloud` CLI auth would have been appreciated.
dmd · 27m ago
Well, color me not impressed. On my very first few tries, out of 10 short-ish (no more than 300 lines) python scripts I asked it to clean up and refactor, 4 of them it mangled to not even run any more, because of syntax (mostly quoting) errors and mis-indenting. Claude has never done that.
iddan · 2h ago
This is awesome! We recently started using Xander (https://xander.bot). We've found it's even better to assign PMs to Xander on Linear comments and get a PR. Then, the PM can validate the implementation in a preview environment, and engineers (or another AI) can review the code.
Aeolun · 1h ago
How am I supposed to use this when actually working in a CLI? The sign-in doesn't display a link I can open. Presumably it's trying and failing to open Firefox?
barbazoo · 3h ago
> To use Gemini CLI free-of-charge, simply login with a personal Google account to get a free Gemini Code Assist license. That free license gets you access to Gemini 2.5 Pro and its massive 1 million token context window. To ensure you rarely, if ever, hit a limit during this preview, we offer the industry’s largest allowance: 60 model requests per minute and 1,000 requests per day at no charge.
If it sounds too good to be true, it probably is. What’s the catch? How/why is this free?
My guess: So that they can get more training data to improve their models which will eventually be subscription only.
jabroni_salad · 2h ago
They recently discontinued the main Gemini free tier which offered similar limits. I would say expect this to disappear when it hits GA or if it gets a lot of targeted abuse.
raincole · 3h ago
Because Google is rich and they'd like to get you hooked. Just like how ChatGPT has a free tier.
Also they can throttle the service whenever they feel it's too costly.
rtaylorgarlock · 3h ago
I spent 8k tokens after giving the interface 'cd ../<other-dir>', which resulted in Gemini explaining that it can't see the other dir outside of the current scope, but recommending that I ls the files in that dir.
Which then reminded me of my core belief that we will always have to be above these tools in order to understand and execute. I wonder if/when I'll be wrong.
Oras · 2h ago
Appreciate how easy it is to report a bug! I like these commands.
A bit gutted by the `make sure it is not a workspace account`. What's wrong with Google prioritising free accounts vs paid accounts? This is not the first time they have done it when announcing Gemini, too.
Hope this will pressure Anthropic into releasing Claude Code as open source.
zackify · 4h ago
What’s neat is we can proxy requests from Gemini or fork it with only replacing the api call layer so it can be used with local models!!!
fhinkel · 4h ago
I love healthy competition that leads to better use experiences
willsmith72 · 4h ago
As a heavy Claude code user that's not really a selling point for me
Ultimately quality wins out with LLMs. Having switched a lot between openai, google and Claude, I feel there's essentially 0 switching cost and you very quickly get to feel which is the best. So until Claude has a solid competitor I'll use it, open source or not
lherron · 3h ago
Even if you don't care about open source, you should care about all the obfuscation happening in the prompts/models being used by Cursor/Claude Code/etc. With everything hidden, you could be paying 200/mo and get served Haiku instead of Sonnet/Opus. Or you could be getting 1k tokens of your code inserted as context instead of 100k to save on inference costs.
willsmith72 · 2h ago
so what? I care about the quality of the result. They can do that however they want
A more credible argument is security and privacy, but I couldn't care less if they're managing to be best in class using haiku
b0a04gl · 3h ago
why’d the release post vanish this morning and then show up again 8 hours later like nothing happened? some infra panic or last-minute model weirdness? was midway embedding my whole notes dir when the repo 404’d and I thought y’all pulled a firebase moment.. what's the real story?
mil22 · 2h ago
Does anyone know what Google's policy on retention and training use will be when using the free version by signing in with a personal Google account? Like many others, I don't want my proprietary codebase stored permanently on Google servers or used to train their models.
At the bottom of README.md, they state:
"This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
* Gemini API key
* Gemini Code Assist
* Vertex AI"
The Gemini API terms state: "for Unpaid Services, all content and responses is retained, subject to human review, and used for training".
The Gemini Code Assist terms trifurcate for individuals, Standard / Enterprise, and Cloud Code (presumably not relevant).
* For individuals: "When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies."
* For Standard and Enterprise: "To help protect the privacy of your data, Gemini Code Assist Standard and Enterprise conform to Google's privacy commitment with generative AI technologies. This commitment includes items such as the following: Google doesn't use your data to train our models without your permission."
The Vertex AI terms state "Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction."
What a confusing array of offerings and terms! I am left without certainty as to the answer to my original question. When using the free version by signing in with a personal Google account, which doesn't require a Gemini API key and isn't Gemini Code Assist or Vertex AI, it's not clear which access mechanism I am using or which terms apply.
It's also disappointing "Google's privacy commitment with generative AI technologies" which promises that "Google doesn't use your data to train our models without your permission" doesn't seem to apply to individuals.
frereubu · 3h ago
I have access to Gemini through Workspace, but despite spending quite a while trying to find out how, I cannot figure out how to use that in Copilot. All I seem to be able to find is information on the personal account or enterprise tiers, neither of which I have.
acedTrex · 3h ago
Everyone's writing the same thing now lol, it's plainly obvious this is the workflow best suited to LLMs
fhinkel · 4h ago
I played around with it to automate GitHub tasks for me (tagging and sorting PRs and stuff). Sometimes it needs a little push to use the API instead of web search, but then it even installs the right tools (like gh) for you. https://youtu.be/LP1FtpIEan4
sync · 4h ago
These always contain easter eggs. I got some swag from Claude Code, and as suspected, Gemini CLI includes `/corgi` to activate corgi mode.
GavCo · 4h ago
They sent you swag in the mail? How did that work?
sync · 4h ago
Yeah, I'm not sure if it's still there (their source code is increasingly obfuscated) but if you check out the source for the first public version (0.2.9) you'll see the following:
Sends the user swag stickers with love from Anthropic.",bq2=`This tool should be used whenever a user expresses interest in receiving Anthropic or Claude stickers, swag, or merchandise. When triggered, it will display a shipping form for the user to enter their mailing address and contact details. Once submitted, Anthropic will process the request and ship stickers to the provided address.
Common trigger phrases to watch for:
- "Can I get some Anthropic stickers please?"
- "How do I get Anthropic swag?"
- "I'd love some Claude stickers"
- "Where can I get merchandise?"
- Any mention of wanting stickers or swag
The tool handles the entire request process by showing an interactive form to collect shipping information.
9cb14c1ec0 · 3h ago
Just tried it. Doesn't work anymore.
b0a04gl · 3h ago
been testing edge cases - is the 1M context actually flat or does token position, structure or semantic grouping change how attention gets distributed?
when I feed in 20 files, sometimes mid-position content gets pulled harder than stuff at the end. feels like it’s not just order, but something deeper - ig the model’s building a memory map with internal weighting.
if there’s any semantic chunking or attention-aware preprocessing happening before inference, then layout starts mattering more than size. prompt design becomes spatial.
any internal tooling to trace which segments are influencing output?
jsnell · 3h ago
What's up with printing lame jokes every few seconds? The last thing I want from a tool like this is my eye to be drawn to the window all the time as if something had changed and needs my action. (Having a spinner is fine, having changing variable length text isn't.)
asadm · 3h ago
You can disable it from the accessibility settings. It does show the model's thinking instead of a joke when that's available.
jsnell · 28m ago
Thanks, but where are those accessibility settings? /help shows nothing related to settings other than auth and theme, there's no related flags, and there's a ~/.gemini/settings.json that contains just the auth type.
Disclaimer: I haven't used Aider in probably a year. I found Aider to require much more understanding to use properly. Claude Code _just works_, more or less out of the box. Assuming the Gemini team took cues from CC, I'm guessing it's more user-friendly than Aider.
Again, I haven't used aider in a while so perhaps that's not the case.
bananapub · 3h ago
Claude Code and OpenAI Codex and presumably this are much much more aggressive about generating work for themselves than Aider is.
For complicated changes Aider is much more likely to stop and need help, whereas Claude Code will just go and go and end up with something.
Whether that's worth the different economic model is up to you and your style and what you're working on.
koakuma-chan · 2h ago
It doesn't work. It just gives me 429 after a minute.
bufo · 2h ago
Grateful that this one supports Windows out of the box.
mekpro · 3h ago
Just refactored 1000 lines of Claude Code-generated code down to 500 lines with Gemini 2.5 Pro! Very impressed by the overall agentic experience and model performance.
incomingpain · 30m ago
Giving this a try, I'm rather astounded how effective my tests have gone.
That's a ton of free limit. This has been immensely more successful than void ide.
htrp · 4h ago
symptomatic of Google's lack of innovation and pm's rushing to copy competitor products
the better question is why you need a model-specific CLI when you should be able to plug into individual models.
shmoogy · 4h ago
If Claude code is any indication it's because they can tweak it and dogfood to extract maximum performance from it. I strongly prefer Claude code to aider - irrespective of the max plan.
Haven't used Jules or codex yet since I've been happy and am working on optimizing my current workflow
jvanderbot · 4h ago
Aider is what you want for that.
wagslane · 4h ago
check out opencode by sst
matltc · 3h ago
Sweet, I love Claude and was raring to try out their CLI that dropped a few days ago, but don't have a sub. This looks to be free
phillipcarter · 3h ago
An aside, but with Claude Code and now Gemini instrumenting operations with OpenTelemetry by default, this is very cool.
ivanjermakov · 1h ago
Gemini, convert my disk from MBR to GPT
Keyframe · 1h ago
Hmm, with Claude Code at $200+tax, this seems to be an alternative that comes out free, or $299+tax a YEAR if I need more, which is great. I found that buried at developers.google.com
Gemini Pro and Claude play off of each other really well.
Just started playing with Gemini CLI, and one thing I immediately miss from Claude Code is being able to write and interject as the AI does its work. Sometimes I interject by just saying stop; it stops and waits for more context or input, or I add something I forgot and it picks it up.
llm_nerd · 1h ago
Given that there's another comment complaining about this being in node...
This perfectly demonstrates the benefit of the nodejs platform. Trivial to install and use. Almost no dependency issues (just "> some years old version of nodejs"). Immediately works effortlessly.
I've never developed anything on node, but I have it installed because so many hugely valuable tools use it. It has always been absolutely effortless and just all benefit.
And what a shift from most Google projects that are usually a mammoth mountain of fragile dependencies.
(uv kind of brings this to python via uvx)
Jayakumark · 3h ago
Are any CLI interactions used to train the model or not?
imiric · 1h ago
Ha. It would be naive to think that a CLI tool from an adtech giant won't exploit as much data as it can collect.
thimabi · 18m ago
You raise an interesting topic. Right now, when we think about privacy in the AI space, most of the discussion hinges on using our data for training purposes or not. That being said, I figure it won’t be long before AI companies use the data they collect to personalize ads as well.
logicchains · 1h ago
That's giving a lot away for free! When I was using Gemini 2.5 Pro intensively for automated work and regularly hitting the 1000 requests per day limit, it could easily cost $50+ per day with a large context. I imagine after a couple months they'll probably limit the free offering to a cheaper model.
zxspectrum1982 · 3h ago
Does Gemini CLI require API access?
jonnycoder · 2h ago
The plugin is getting bad reviews this morning. It doesn't work for me on latest Pycharm.
iaresee · 3h ago
Whoa. Who at Google thought providing this as an example of how to test your API key was a good idea?
Not a great look. I let our GCloud TAM know. But still.
asadm · 1h ago
What's wrong here?
iaresee · 14m ago
Don't put your API keys as parameters in your URL. Great way to have them land in server logs, your shell history, etc. You're trusting no one with decryption capabilities is doing logging and inspection correctly, which you shouldn't.
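For anyone wondering what the safer pattern looks like: Google APIs, including the Gemini API, accept the key via the `x-goog-api-key` request header, which keeps the secret out of the URL entirely. A minimal sketch in Python (the endpoint path is illustrative, not taken from the linked example):

```python
import urllib.request

# Same call, but with the key sent as a header instead of a ?key= query
# parameter, so it never appears in URLs that land in server logs,
# proxies, or shell history.
API_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request(api_key: str) -> urllib.request.Request:
    return urllib.request.Request(API_URL, headers={"x-goog-api-key": api_key})

req = build_request("dummy-key-for-demo")
assert "key=" not in req.full_url  # nothing secret in the URL itself
```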
nickysielicki · 1h ago
it's wrapped in TLS, is ok.
titusblair · 3h ago
Nice work excited to use it!
revskill · 3h ago
Nice, at least I can get rid of the broken Warp CLI, which prevents offline usage with their automatic cloud AI feature enabled.
andrewstuart · 3h ago
I really wish these AI companies would STOP innovating until they work out how to let us “download all files” on the chat page.
We are now three years into the AI revolution and they are still forcing us to copy and paste and click click crazy to get the damn files out.
STOP innovating. STOP the features.
Form a team of 500 of your best developers. Allocate a year and a billion dollar budget.
Get all those Ai super scientists into the job.
See if you can work out “download all files”. A problem on the scale of AGI or Dark Matter, but one day google or OpenAI will crack the problem.
Workaccount2 · 2h ago
It seems you are still using the web interface.
When you hop over to platforms that use the API, the files get written/edited in situ. No copy/pasting. No hunting for where to insert edited code.
Trust me it's a total game changer to switch. I spent so much time copy/pasting before moving over.
This edits the files directly. Using a chat hasn't been an optimal workflow for a while now.
raincole · 3h ago
What does this even mean lol. "Download all files"...?
poszlem · 4h ago
The killer feature of Claude Code is that you can just pay for Max and not worry about API billing. It lets me use it pretty much all the time without stressing over every penny or checking the billing page.
Until they do that - I'm sticking with Claude.
jedi3335 · 4h ago
No per-token billing here either: "...we offer the industry’s largest allowance: 60 model requests per minute and 1,000 requests per day at no charge."
Don’t know about Claude, but usually Google’s free offers have no privacy protections whatsoever — all data is kept and used for training purposes, including manual human review.
fhinkel · 4h ago
If you use your personal gmail account without billing enabled, you get generous requests and never have to worry about a surprise bill.
indigodaddy · 2h ago
If I have a CC linked to my personal Google for my Google One storage and YouTube Premium, that doesn't make me "billing enabled" for Gemini CLI does it?
therealmarv · 4h ago
That's a golden cage and you limit yourself to Anthropic only.
I'm happy I can switch models as I like with Aider. The top models from different companies see different things in my experiences and have their own strengths and weaknesses. I also do not see Anthropic's models on the top of my (subjective) list.
unshavedyak · 4h ago
Same. Generally i really prefer Claude Code's UX (CLI based, permissions, etc) - it's all generally close to right for me, but not perfect.
However i didn't use Claude Code before the Max plan because i just fret about some untrusted AI going ham on some stupid logic and burning credits.
If it's dumb on Max i don't mind, just some time wasted. If it's dumb on credits, i just paid for throw away work. Mentally it's just too much overhead for me as i end up worrying about Claude's journey, not just the destination. And the journey is often really bad, even for Claude.
mhb · 4h ago
How does that compare to using aider with Claude models?
adamcharnock · 4h ago
I did a little digging into this just yesterday. The impression I got was that Claude Code was pretty great, but also used a _lot_ more tokens than similar work using aider. Conversations I saw stated 5-10x more.
So yes with Claude Code you can grab the Max plan and not worry too much about usage. With Aider you'll be paying per API call, but it will cost quite a bit less than the similar work if using Claude Code in API-mode.
I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode. Also I like that I really can fill up the aider context window if I want to, and I'm in control of that.
bananapub · 3h ago
> I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode.
I'd be pretty surprised if that was the case - something like ~8 hours of Aider use against Claude can spend $20, which is how much Claude Pro costs.
therealmarv · 4h ago
Using Claude models in aider burns tokens you need to top up. With Claude Max subscription you can pay a 100 or 200 USD per month plan and use their internal tool claude code without the need to buy additional pay as you go tokens. You get a "flatrate", the higher plan gives you more usage with less rate limiting.
rusk · 4h ago
This insistence by SAAS vendors upon not protecting you from financial ruin must surely be some sort of deadweight loss.
Sure, you might make a few quick wins from careless users, but overall it creates an environment of distrust where users are watching their pennies and many are simply holding off.
I can accept that, with all the different moving parts, this may be a trickier problem than a prepaid pump or even a telco, and that to a product manager it might look like a lot of work/money for something that “prevents” users overspending.
But we all know that’s shortsighted and stupid, and it’s the kind of thinking that broadly signals more competition is required.
stpedgwdgfhgdd · 2h ago
Another JS implementation…
I don’t get why they don’t pick Go or Rust so I get a binary.
ape4 · 3h ago
In the screenshot it's asked about Gemini CLI and it says its going to search the web and read the README.md - what ever did we do before AI /s
i_love_retros · 3h ago
Boring. Any non llm news?
rhodysurf · 4h ago
I neeeed this google login method in sst's opencode now haha
https://developers.google.com/gemini-code-assist/resources/p...
When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies.
To help with quality and improve our products (such as generative machine-learning models), human reviewers may read, annotate, and process the data collected above. We take steps to protect your privacy as part of this process. This includes disconnecting the data from your Google Account before reviewers see or annotate it, and storing those disconnected copies for up to 18 months. Please don't submit confidential information or any data you wouldn't want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.
"If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals."
and then the link: https://developers.google.com/gemini-code-assist/docs/set-up...
If you pay for code assist, no data is used to improve. If you use a Gemini API key on a pay as you go account instead, it doesn't get used to improve. It's just if you're using a non-paid, consumer account and you didn't opt out.
That seems different than what you described.
"You can find the Gemini Code Assist for individuals privacy notice and settings in two ways:
- VS Code
- IntelliJ
"
I guess the key question is whether the Gemini CLI, when used with a personal Google account, is governed by the broader Gemini Apps privacy settings here? https://myactivity.google.com/product/gemini?pli=1
If so, it appears it can be turned off. However, my CLI activity isn't showing up there?
Can someone from Google clarify?
*What we DON'T collect:*
- *Personally Identifiable Information (PII):* We do not collect any personal information, such as your name, email address, or API keys.
- *Prompt and Response Content:* We do not log the content of your prompts or the responses from the Gemini model.
- *File Content:* We do not log the content of any files that are read or written by the CLI.
https://github.com/google-gemini/gemini-cli/blob/0915bf7d677...
Which pretty much means if you are using it for free, they are using your data.
I don't see what is alarming about this; everyone else has either the same policy or no free usage. Hell, the surprising thing is that they still let free users opt out...
That’s not true. ChatGPT, even in the free tier, allows users to opt out of data sharing.
If this is legal, it shouldn’t be.
The resulting class-action lawsuit would bankrupt the company, along with the reputation damage, and fines.
Not if you pay for it.
Not if you pay for it.
Today.
In six months, a "Terms of Service Update" e-mail will go out to an address that is not monitored by anyone.
There's also zero chance they will risk paying customers by changing this policy.
I like Gemini 2.5 Pro, too, and recently, I tried different AI products (including the Gemini Pro plan) because I wanted a good AI chat assistant for everyday use. But I also wanted to reduce my spending and have fewer subscriptions.
The Gemini Pro subscription is included with Google One, which is very convenient if you use Google Drive. But I already have an iCloud subscription tightly integrated with iOS, so switching to Drive and losing access to other iCloud functionality (like passwords) wasn’t in my plans.
Then there is the Gemini chat UI, which is light years behind the OpenAI ChatGPT client for macOS.
NotebookLM is good at summarizing documents, but the experience isn’t integrated with the Gemini chat, so it’s like constantly switching between Google products without a good integrated experience.
The result is that I end up paying a subscription to Raycast AI because the chat app is very well integrated with other Raycast functions, and I can try out models. I don’t get the latest model immediately, but it has an integrated experience with my workflow.
My point in this long description is that by being spread across many products, Google is losing on the UX side compared to OpenAI (for general tasks) or Anthropic (for coding). In just a few months, Google tried to catch up with v0 (Google Stitch), GH Copilot/Cursor (with that half-baked VSCode plugin), and now Claude Code. But all the attempts look like side-projects that will be killed soon.
It's not in Basic, Standard or Premium.
It's in a new tier called "Google AI Pro" which I think is worth inclusion in your catalogue of product confusion.
Oh wait, there's even more tiers that for some reason can't be paid for annually. Weird... why not? "Google AI Ultra" and some others just called Premium again but now include AI. 9 tiers, 5 called Premium, 2 with AI in the name but 6 that include Gemini. What a mess.
Google's AI offerings that should be simplified/consolidated:
- Jules vs Gemini CLI?
- Vertex API (requires a Google Cloud Account) vs Google AI Studio API
Also, since Vertex depends on Google Cloud, projects get more complicated because you have to modify these in your app [1]:
```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
[1]: https://cloud.google.com/vertex-ai/generative-ai/docs/start/...
Also they should make it clearer which SDKs, documents, pricing, SLAs etc apply to each. I still get confused when I google up some detail and end up reading the wrong document.
"The Google Cloud Dashboard is a mess, and they haven't fixed it in years." Tell me what you want, and I'll do my best to make it happen.
In the interim, I would also suggest checking out Cloud Hub - https://console.cloud.google.com/cloud-hub/ - this is us really rethinking the level of abstraction to be higher than the base infrastructure. You can read more about the philosophy and approach here: https://cloud.google.com/blog/products/application-developme...
Ideally what I want is this: I google "gemini api" and that leads me to a page where I can login using my Google account and see the API settings. I create one and start using it right away. No extra wizardry, no multiple packages that must be installed, just the gemini package (no gauth!) and I should be good to go.
It's easy, you just ask the best Google Model to create a script that outputs the number of API calls made to the Gemini API in a GCP account.
100% fail rate so far.
I also just got the email for Gemini ultra and I couldn't even figure out what was being offered compared to pro outside of 30tb storage vs 2tb storage!
Never ascribe to AI, that which is capable of being borked by human PMs.
Not if you're in EU though. Even though I have zero or less AI use so far, I tinker with it. I'm more than happy to pay $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini Pro.. if I knew how and where to have Gemini CLI like I do with Claude code. I have Google One. WHERE DO I SIGN UP, HOW DO I PAY AND USE IT GOOGLE? Only thing I have managed so far is through openrouter via API and credits which would amount to thousands a month if I were to use it as such, which I won't do.
What I do now is occasionally I go to AI Studio and use it for free.
Gemini 2.5 Pro is the best model I've used (even better than o3 IMO) and yet there's no simple Claude/Cursor like subscription to just get full access.
Nevermind Enterprise users too, where OpenAI has it locked up.
In certain areas, perhaps, but Google Workspace at $14/month not only gives you Gemini Pro, but 2 TB of storage, full privacy, email with a custom domain, and whatever else. College students get the AI pro plan for free. I recently looked over all the options for folks like me and my family. Google is obviously the right choice, and it's not particularly close.
[0] https://www.reddit.com/r/GoogleGeminiAI/comments/1jrynhk/war...
Google is fumbling with the marketing/communication - when I look at their stuff I am unclear on what is even available and what I already have, so I can't form an opinion about the price!
No, you can use neither Gemini CLI nor Code Assist via Workspace — at least not at the moment. However, if you upgrade your Workspace plan, you can use Gemini Advanced via the web or app interfaces.
Workspace (standard?) customer for over a decade.
In the case of Gemini CLI, it seems Google does not even support Workspace accounts in the free tier. If you want to use Gemini CLI as a Workspace customer, you must pay separately for it via API billing (pay-as-you-go). Otherwise, the alternative is to login with a personal (non-Workspace) account and use the free tier.
Not sure what you mean by "full access", as none of the providers offer unrestricted usage. Pro gets you 2.5 Pro with usage limits. Ultra gets you higher limits + deep think (edit: accidentally put research when I meant think where it spends more resources on an answer) + much more Veo 3 usage. And of course you can use the API usage-billed model.
Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them.
Workaround is to use their GUI with some MCPs but I dislike it because window navigation is just clunky compared to terminal multiplexer navigation.
It's a frigg'n mess. Everyone at our little startup has spent time trying to understand what the actual offerings are; what the current set of entitlements are for different products; and what API keys might be tied to what entitlements.
I'm with __MatrixMan__ -- it's super confusing and needs some serious improvements in clarity.
A ChatBot is more like a fixed-price buffet where usage is ultimately human limited (even if the modest eaters are still subsidizing the hogs). An agentic system is going to consume resources in much more variable manner, depending on how it is being used.
> Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them
Obviously these companies want you to increase the amount of their product you consume, but it seems odd to call that a jerk move! FWIW, Athropic's stated motivation for Claude Code (which Gemini is now copying) was be agnostic to your choice of development tools since CLI access is pretty much ubiquitous, even inside IDEs. Whether it's the CLI-based design, the underlying model, or the specifics of what Claude Code is capable of, they seem to have got something right, and apparently usage internal to Anthropic skyrocketed just based on word of mouth.
It's just a UI difference.
https://support.anthropic.com/en/articles/11145838-using-cla...
Could have changed recently. I'm not a user so I can't verify.
Using the API would have cost me $1200 this month, if I didn't have a subscription.
I'm a somewhat extensive user, but most of my coworkers are using $150-$400/month with the API.
Some googling lands me to a guide: https://cloud.google.com/gemini/docs/discover/set-up-gemini#...
I stopped there because I don't want to sign up, I just wanted to review, but I don't have an admin panel or anything.
It feels insane to me that there's a readme on how to give them money. Claude's Max purchase was just as easy as Pro, fwiw.
https://github.com/google-gemini/gemini-cli/issues/1427
This is very confusing how they post about this on X, you would think you get additional usage. Messaging is very confusing.
If I Could Talk to Satya...
I'd say:
“Hey Satya, love the Copilots—but maybe we need a Copilot for Copilots to help people figure out which one they need!”
Then I had them print out a table of Copilot plans:
- Microsoft Copilot Free
- Github Copilot Free
- Github Copilot Pro
- Github Copilot Pro+
- Microsoft Copilot Pro (can only be purchased for personal accounts)
- Microsoft 365 Copilot (can't be used with personal accounts and can only be purchased by an organization)
You clearly have never had the "pleasure" to work with a Google product manager.
Especially the kind that were hired in the last 15-ish years.
This type of situation is absolutely typical, and probably one of the more benign thing among the general blight they typically inflict on Google's product offering.
The cartesian product of pricing options X models is an effing nightmare to navigate.
Appreciate all the takes so far, the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests we'll all be reading.
- On a new chat I have to re-approve things like executing "go mod tidy", "git", write files... I need to create a new chat for each feature, (maybe an option to clear the current chat on VsCode would work)
- I have found some problems with adding some new endpoint on an example Go REST server I was trying it on, it just deleted existing endpoints on the file. Same with tests, it deleted existing tests when asking to add a test. For comparison I didn't find these problems when evaluating Amp (uses Claude 4)
Overall it works well and hope you continue with polishing it, good job!!
At the very least, we need better documentation on how to get that environment variable, as we are not on GCP and this is not immediately obvious how to do so. At the worst, it means that your users paying for gemini don't have access to this where your general google users do.
[^1]: https://console.cloud.google.com/marketplace/product/google/...
Workspace users [edit: cperry was wrong] can get the free tier as well, just choose "More" and "Google for Work" in the login flow.
It has been a struggle to get a simple flow that works for all users, happy to hear suggestions!
Just a heads-up: your docs about authentication on Github say to place a GOOGLE_CLOUD_PROJECT_ID as an environment variable. However, what the Gemini CLI is actually looking for, from what I can tell, is a GOOGLE_CLOUD_PROJECT environment variable with the name of a project (rather than its ID). You might want to fix that discrepancy between code and docs, because it might confuse other users as well.
I don’t know what constraints made you all require a project ID or name to use the Gemini CLI with Workspace accounts. However, it would be far easier if this requirement were eliminated.
noted on documentation, there's a PR in flight on this. also found some confusion around gmail users who are part of the developer program hitting issues.
Well, I've just set up Gemini CLI with a Workspace account project in the free tier, and it works apparently for free. Can you explain whether billing for that has simply not been configured yet, or where exactly billing details can be found?
For reference, I've been using this panel to keep track of my usage in the free tier of the Gemini API, and it has not been counting Gemini CLI usage thus far: https://console.cloud.google.com/apis/api/generativelanguage...
Unfortunately all of that is pretty confusing, so I'll hold off using Gemini CLI until everything has been clarified.
Maybe you have access to an AI solution for this.
Also this doco says GOOGLE_CLOUD_PROJECT_ID but the actual tool wants GOOGLE_CLOUD_PROJECT
All different products doing the sameish thing. I don’t know where to send users to do anything. They are all licensed differently. Bonkers town.
currently it seems these are the CLI tools available. Is it possible to extend or actually disable some of these tools (for various reasons)?
> Available Gemini CLI tools:
```
{
  "excludeTools": ["run_shell_command", "write_file"]
}
```
but if you ask Gemini CLI to do this it'll guide you!
You can also extend with the Extensions feature - https://github.com/google-gemini/gemini-cli/blob/main/docs/e...
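A minimal sketch of the settings-file approach shown in the JSON snippet above. The `excludeTools` key comes from that snippet; the `~/.gemini/settings.json` path is the per-user location the Gemini CLI docs describe (verify against the current docs before relying on it).

```shell
# Write a per-user Gemini CLI settings file that disables two built-in tools.
mkdir -p "$HOME/.gemini"
cat > "$HOME/.gemini/settings.json" <<'EOF'
{
  "excludeTools": ["run_shell_command", "write_file"]
}
EOF
echo "settings written to $HOME/.gemini/settings.json"
```

The same key can also be set per-project in a `.gemini/settings.json` at the repo root, if you only want the restriction for one codebase.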
1. CodeRunner - https://github.com/BandarLabs/coderunner/tree/main?tab=readm...
> You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits.
Discouraging
I'm a Gemini Pro subscriber and I would love to be able to use my web-based chat resource limits with, or in addition to, what is offered here. I have plenty of scripts that are essentially "Weave together a complex prompt I can send to Gemini Flash to instantly get the answer I'm looking for and xclip it to my clipboard", and this would finally let me close the last step in those scripts.
Love what I'm seeing so far!
Edit: I should mention that I'm accessing this through Gemini Code Assist, so this may be something out of your wheelhouse.
I don't think that's capacity, you should see error codes.
I'd like to just get a short response for simple things, like "what's an nm and grep command to find this symbol in these 3 folders". I use Gemini a lot for this type of thing already.
Or would that have to be a custom prompt I write?
other people use simon willison's `llm` tool https://github.com/simonw/llm
Both allow you to switch between models, send short prompts from a CLI, optionally attach some context. I prefer mods because it's an easier install and I never need to worry about Python envs and other insanity.
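A hedged sketch of that short-prompt workflow using simonw's `llm` tool mentioned above. The symbol name and folders are made-up examples; the snippet falls back to printing the composed prompt when `llm` isn't installed, so it illustrates the wiring rather than any particular model's output.

```shell
# Compose a one-shot prompt and send it to a CLI LLM if one is available.
prompt="What nm and grep commands find the symbol my_func in src/, lib/, and vendor/?"
if command -v llm >/dev/null 2>&1; then
  llm "$prompt"          # answer goes to stdout; pipe to xclip/pbcopy as needed
else
  printf '%s\n' "$prompt"
fi
```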
Is the recommendation to specifically ask "analyze the codebase" here?
There was a time where Google produced products that had:
This ended up being a sizable part of why Google became so successful. I would suggest that you allow yourself and your team to be visited by the spirit of those days.
How did they do that pretty "GEMINI" gradient in the terminal? Is that a thing we can do nowadays? It doesn't seem to be a blocky gradient where each character is a different color; it's a true gradient.
However, Gemini at one point output what will probably be the highlight of my day:
"I have made a complete mess of the code. I will now revert all changes I have made to the codebase and start over."
What great self-awareness and willingness to scrap the work! :)
We really are living in the future
I’ve been using Claude for a side project for the past few weeks and I find that we really get into a groove planning or debugging something and then by the time we are ready to implement, we’ve run out of context window space. Despite my best efforts to write good /compact instructions, when it’s ready to roll again some of the nuance is lost and the implementation suffers.
I’m looking forward to testing if that’s solved by the larger Gemini context window.
I haven't looked at this Gemini CLI thing yet, but if it's open source, it seems like any model can be plugged in here?
I can see a pathway where LLMs are commodities. Every big tech company right now both wants their LLM to be the winner and the others to die, but they also really, really would prefer a commodity world to one where a competitor is the winner.
If the future use looks more like CLI agents, I'm not sure how some fancy UI wrapper is going to result in a winner take all. OpenAI is winning right now with user count by pure brand name with ChatGPT, but ChatGPT clearly is an inferior UI for real work.
But in many other niches (say embedded), the workflow is different. You add a feature, you get weird readings. You start modelling in your head how the timing would work, doing some combination of tracing and breakpoints to narrow down your hypotheses, then try them out, and figure out what works the best. I can't see CLI agents doing that kind of work. It depends too much on hunches.
Sort of like autonomous driving: most highway driving is extremely repetitive and easy to automate, so it got automated. But going on a mountain road in heavy rain, while using your judgment to back off when other drivers start doing dangerous stuff, is still purely up to humans.
I'm actually interested to see if we get a greater-than-usual rise in demand for DRAM because more software is vibe coded than not, or at least produced with some form of vibe coding.
If the module just can't be documented in this way in under 100 lines, it's a good time to refactor. Chances are that if Claude's context window isn't enough to work with a particular module, a human dev can't work with it either. It's all about pointing your LLM precisely at the context that matters.
It's the only argument I can think of, something like Go would be goated for this use case in principle.
Re-running `cargo install <crate>` will do that. Or install `cargo-update`, then you can bulk update everything.
And it works hella better than using pip in a global python install (you really want pipx/uvx if you're installing python utilities globally).
IIRC you can install Go stuff with `go install`, dunno if you can update via that tho.
A single, pre-compiled binary is convenient for the user's first install only.
How many developers have npm installed vs cargo? Many won't even know what cargo is.
Anthropic's Claude Code is also installed using npm/npx.
https://bun.sh/docs/bundler/executables
https://docs.deno.com/runtime/reference/cli/compile/
Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!
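In the same untested spirit as the parent, here's roughly what that compile step looks like. `demo.js` stands in for a bundled CLI entry point; the real Gemini CLI may pull in native or dynamic pieces that break this.

```shell
# Compile a trivial Node-style script into a single binary with Bun or Deno.
echo 'console.log("hello from a single binary");' > /tmp/demo.js
if command -v bun >/dev/null 2>&1; then
  bun build /tmp/demo.js --compile --outfile /tmp/demo-bin
elif command -v deno >/dev/null 2>&1; then
  deno compile --allow-all --output /tmp/demo-bin /tmp/demo.js
else
  echo "neither bun nor deno installed; skipping compile"
fi
```

Either path embeds the runtime in the output, which is why the resulting binaries tend to be tens of megabytes, noticeably larger than a typical Go or Rust build.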
Claude also requires npm, FWIW.
I guess it needs to start various processes for the MCP servers and whatnot? Just spawning another Node is the easy way to do that, but a bit annoying, yeah.
Obviously everybody's requirements differ, but Node seems like a pretty reasonable platform for this.
If you have to run endpoint protection, that will blast your CPU with load and make moving or even deleting that folder needlessly slow. It also makes npm's hosting burden scale with the number of users, who must each install the dependencies, instead of the number of CI instances, which isn't very nice to our hosts. Dealing with that once during your build phase and then packaging that mess up is the nicer way to distribute things that depend on npm to end users.
https://huggingface.co/jartine/gemma-2-27b-it-llamafile
Or a hint about the background of the folks who built the tool.
My exact same reaction when I read the install notes.
Even python would have been better.
Having to install that Javascript cancer on my laptop just to be able to try this, is a huge no.
Is your vision with Gemini CLI to be geared only towards non-commercial users? I have had a Workspace account since GSuite and have been constantly punished for it by Google's offerings. All I wanted was Gmail with a custom domain, and I've lost all my YouTube data, all my Fitbit data, I can't select different versions of some of your subscriptions (seemingly completely random across your services from an end-user perspective), and now as a Workspace account I can't use Gemini CLI for my work, which is software development. This approach strikes me as actively hostile towards your loyal paying users...
... and other stuff.
Googlers, we should not have to do all of this setup and prep work for a single account. Enterprise I get, but for a single user? This is insufferable.
I have thrown very large codebases at this and it has been able to navigate and learn them effortlessly.
Definitely not because of Claude Code eating our lunch!
If you mean: This is "inspired" by the success of Claude Code. Sure, I guess, but it's also not like Claude Code brought anything entirely new to the table. There is a lot of copying from each other and continually improving upon that, and it's great for the users and model providers alike.
If you don't think claude code is just miles ahead of other things you haven't been using it (or well)
I am certain they keep metrics on those "power users" (especially since they probably work there) and when everyone drops what they were using and moves to a specific tool that is something they should be careful of.
Which is surprising, because at first I was ready to re-up my Google life. I've been very anti-Google for ages, but 2.5 Pro looked so good that I felt it was a huge winner. It just wasn't enjoyable to use, because I was often at war with it.
Sonnet/Opus via Claude Code are definitely less intelligent than my early tests of 2.5 Pro, but they're reasonable, listen, and stay on task.
I'm sure I'll retry eventually, though the subscription complexity with Gemini sounds annoying.
Wholeheartedly agree.
Both when chatting in text mode or when asking it to produce code.
The verbosity of the code is the worst: comments often longer than the actual code, every nook and cranny of an algorithm unrolled over hundreds of lines, most of which are unnecessary.
Feels like typical code a mediocre Java developer would produce in the early 2000's
So, google's codebase
- Open-source (Apache 2.0, same as OpenAI Codex)
- 1M token context window
- Free tier: 60 requests per minute and 1,000 requests per day (requires Google account authentication)
- Higher limits via Gemini API or Vertex AI
- Google Search grounding support
- Plugin and script support (MCP servers)
- Gemini.md file for memory instruction
- VS Code integration (Gemini Code Assist)
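The "Gemini.md file for memory instruction" item in the list above is just a Markdown file of project conventions that the CLI loads as persistent context. A sketch of what one might contain; the conventions themselves are made-up examples, not official syntax.

```shell
# Create a GEMINI.md memory file at the project root with example house rules.
cat > GEMINI.md <<'EOF'
# Project conventions
- Use TypeScript strict mode.
- Run `npm test` before proposing a commit.
- Never edit files under vendor/.
EOF
echo "wrote $(wc -l < GEMINI.md) lines to GEMINI.md"
```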
It integrates with VS Code, which suits my workflow better. And buying credits through them (at cost) means I can use any model I want without juggling top-ups across several different billing profiles.
>This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
Click Gemini API, scroll
>When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.
>To help with quality and improve our products, human reviewers may read, annotate, and process your API input and output. Google takes steps to protect your privacy as part of this process. This includes disconnecting this data from your Google Account, API key, and Cloud project before reviewers see or annotate it. Do not submit sensitive, confidential, or personal information to the Unpaid Services.
https://github.com/opencode-ai/opencode
Because it says in the README:
> Authenticate: When prompted, sign in with your personal Google account. This will grant you up to 60 model requests per minute and 1,000 model requests per day using Gemini 2.5 Pro.
> For advanced use or increased limits: If you need to use a specific model or require a higher request capacity, you can use an API key: ...
When I have the Google AI Pro subscription in my Google account, and I use the personal Google account for authentication here, will I also have more requests per day then?
I'm currently wondering what makes more sense for me (not for CLI in particular, but for Gemini in general): To use the Google AI Pro subscription, or to use an API key. But I would also want to use the API maybe at some point. I thought the API requires an API key, but here it seems also the normal Google account can be used?
Set up not too long ago, and afaik pretty load-bearing for this. Feels great, just don’t ask me any product-level questions. I’m not part of the Gemini CLI team, so I’ll try to keep my mouth shut.
Not going to lie, I’m pretty anxious this will fall over as traffic keeps climbing up and up.
If it sounds too good to be true, it probably is. What’s the catch? How/why is this free?
Also they can throttle the service whenever they feel it's too costly.
A bit gutted by the `make sure it is not a workspace account`. What's wrong with Google prioritising free accounts vs paid accounts? This is not the first time they have done it when announcing Gemini, too.
Ultimately quality wins out with LLMs. Having switched a lot between openai, google and Claude, I feel there's essentially 0 switching cost and you very quickly get to feel which is the best. So until Claude has a solid competitor I'll use it, open source or not
A more credible argument is security and privacy, but I couldn't care less if they're managing to be best in class using haiku
At the bottom of README.md, they state:
"This project leverages the Gemini APIs to provide AI capabilities. For details on the terms of service governing the Gemini API, please refer to the terms for the access mechanism you are using:
* Gemini API key
* Gemini Code Assist
* Vertex AI"
The Gemini API terms state: "for Unpaid Services, all content and responses is retained, subject to human review, and used for training".
The Gemini Code Assist terms trifurcate for individuals, Standard / Enterprise, and Cloud Code (presumably not relevant).
* For individuals: "When you use Gemini Code Assist for individuals, Google collects your prompts, related code, generated output, code edits, related feature usage information, and your feedback to provide, improve, and develop Google products and services and machine learning technologies."
* For Standard and Enterprise: "To help protect the privacy of your data, Gemini Code Assist Standard and Enterprise conform to Google's privacy commitment with generative AI technologies. This commitment includes items such as the following: Google doesn't use your data to train our models without your permission."
The Vertex AI terms state "Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction."
What a confusing array of offerings and terms! I am left without certainty as to the answer to my original question. When using the free version by signing in with a personal Google account, which doesn't require a Gemini API key and isn't Gemini Code Assist or Vertex AI, it's not clear which access mechanism I am using or which terms apply.
It's also disappointing "Google's privacy commitment with generative AI technologies" which promises that "Google doesn't use your data to train our models without your permission" doesn't seem to apply to individuals.
No mention of accessibility in https://github.com/google-gemini/gemini-cli/blob/0915bf7d677... either
Again, I haven't used aider in a while so perhaps that's not the case.
For complicated changes Aider is much more likely to stop and need help, whereas Claude Code will just go and go and end up with something.
Whether that's worth the different economic model is up to you and your style and what you're working on.
That's a ton of free limit. This has been immensely more successful than void ide.
Better question is why you need a model-specific CLI when you should be able to plug in to individual models.
Haven't used Jules or codex yet since I've been happy and am working on optimizing my current workflow
Gemini Pro and Claude play off of each other really well.
Just started playing with Gemini CLI, and one thing I miss immediately from Claude Code is being able to write and interject as the AI does its work. Sometimes I interject by just saying stop, and it stops and waits for more context or input, or I add something I forgot and it picks it up.
This perfectly demonstrates the benefit of the nodejs platform. Trivial to install and use. Almost no dependency issues (just "> some years old version of nodejs"). Immediately works effortlessly.
I've never developed anything on node, but I have it installed because so many hugely valuable tools use it. It has always been absolutely effortless and just all benefit.
And what a shift from most Google projects that are usually a mammoth mountain of fragile dependencies.
(uv kind of brings this to python via uvx)
https://imgur.com/ZIZkLU7
This is shown at the top of the screen in https://aistudio.google.com/apikey as the suggested quick start for testing your API key out.
Not a great look. I let our GCloud TAM know. But still.
We are now three years into the AI revolution and they are still forcing us to copy and paste and click click crazy to get the damn files out.
STOP innovating. STOP the features.
Form a team of 500 of your best developers. Allocate a year and a billion dollar budget.
Get all those Ai super scientists into the job.
See if you can work out “download all files”. A problem on the scale of AGI or Dark Matter, but one day google or OpenAI will crack the problem.
When you hop over to platforms that use the API, the files get written/edited in situ. No copy/pasting. No hunting for where to insert edited code.
Trust me it's a total game changer to switch. I spent so much time copy/pasting before moving over.
https://blog.google/technology/developers/introducing-gemini...
I'm happy I can switch models as I like with Aider. The top models from different companies see different things in my experiences and have their own strengths and weaknesses. I also do not see Anthropic's models on the top of my (subjective) list.
However, I didn't use Claude Code before the Max plan because I just fretted about some untrusted AI going ham on some stupid logic and burning credits.
If it's dumb on Max, I don't mind; just some time wasted. If it's dumb on credits, I just paid for throwaway work. Mentally it's just too much overhead, as I end up worrying about Claude's journey, not just the destination. And the journey is often really bad, even for Claude.
So yes with Claude Code you can grab the Max plan and not worry too much about usage. With Aider you'll be paying per API call, but it will cost quite a bit less than the similar work if using Claude Code in API-mode.
I concluded that – for me – Claude Code _may_ give me better results, but Aider will likely be cheaper than Claude Code in either API-mode or subscription-mode. Also I like that I really can fill up the aider context window if I want to, and I'm in control of that.
I'd be pretty surprised if that was the case - something like ~8 hours of Aider use against Claude can spend $20, which is how much Claude Pro costs.
Sure, you might make a few quick wins from careless users, but overall it creates an environment of distrust where users are watching their pennies and many are just holding off.
I can accept that, with all the different moving parts, this may be a trickier problem than a prepaid pump or even a telco, and that to a product manager it might look like a lot of work/money for something that merely "prevents" users from overspending.
But we all know that's shortsighted and stupid, and it's the kind of thinking that broadly signals more competition is required.
I do not get why they didn't pick Go or Rust so I get a binary.