> Get answers about any cell in seconds: Navigate complex models instantly. Ask Claude about specific formulas, entire worksheets, or calculation flows across tabs. Every explanation includes cell-level citations so you can verify the logic.
This might just be an excellent tool for refactoring Excel sheets into something more robust and maintainable. And for making a bunch of suits redundant.
Here's my use case: I have a set of responses from a survey and want to perform sentiment analysis on them, classify them, etc. Ideally, I'd like to feed them one at a time to a local LLM with a prompt like: "Classify this survey response as positive, negative, or off-topic...etc".
If I dump the whole spreadsheet into ChatGPT, I've found that because of the context window it can get "lazy"; with a local LLM, I could literally prompt it one row at a time to accomplish my goal, even if it takes a little longer in GPU and wall-clock time.
However, I can't find anything that works off the shelf like this. It seems like a prime use case for local models.
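An off-the-shelf tool may not exist, but the loop itself is small. A minimal sketch, assuming an Ollama-style local server on localhost:11434; the model name, endpoint, and the `response` CSV column are placeholders:

```python
# Feed survey responses one row at a time to a local LLM for classification.
# Assumes an Ollama-style HTTP API; adjust model/endpoint for your setup.
import csv
import json
import urllib.request

PROMPT = ("Classify this survey response as positive, negative, or off-topic. "
          "Reply with one word only.\n\nResponse: {text}")

def classify(text, model="llama3", url="http://localhost:11434/api/generate"):
    body = json.dumps({"model": model,
                       "prompt": PROMPT.format(text=text),
                       "stream": False}).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

def classify_file(path):
    """Yield each CSV row with a 'sentiment' field added, one call per row."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["sentiment"] = classify(row["response"])
            yield row
```

Because each call carries exactly one row, the context window never fills up and the model can't get "lazy" about later rows.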
I've worked at MULTIPLE million-dollar firms whose entire business relies on 10 Excel workbooks created 30 years ago by a person who has either passed on or retired.
Give users who aren't intimately familiar with their source material AI, and you're asking for trouble.
The undo function has a history limit.
The real issue is: at what point are we going to stop chasing efficiency and profit at the expense of humanity?
Claude and OpenAI are built on stretched truths, stolen creativity and what-if statements.
LLMs work best when they can call tools (edit the sheet) and test their results in a loop.
It's like the Goal Seek feature Excel has had since forever: "adjust this value until this cell is X".
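That Goal Seek loop is just root-finding. A toy sketch, with bisection standing in for Excel's solver and a made-up revenue function standing in for the spreadsheet's calculation chain:

```python
# Goal-Seek-style loop: adjust an input until an output hits a target.
# f stands in for the spreadsheet's calc chain; assumes f is monotonic
# on [lo, hi] and the target lies between f(lo) and f(hi).
def goal_seek(f, target, lo, hi, tol=1e-9):
    for _ in range(200):
        mid = (lo + hi) / 2
        if abs(f(mid) - target) < tol:
            return mid
        # Keep the half of the interval that still brackets the target.
        if (f(mid) < target) == (f(lo) < target):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. what unit price makes revenue hit 500 when 40 units are sold?
price = goal_seek(lambda p: 40 * p, target=500, lo=0, hi=100)  # -> 12.5
```

An LLM agent does the same thing less numerically: edit the sheet, re-evaluate, compare against the goal, repeat.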
Excel doesn't have any way to verify that every formula in that 60k-row sheet is correct and that someone hasn't accidentally replaced one with a static number, for example.
I suspect similar tools could be made for Claude and other LLMs except that it wouldn't be plagued by the mind-numbing tedium of doing this sort of audit.
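The check itself is mechanical, which is why it's such tedious audit work. A sketch of that check, assuming you've already pulled a formula column out of the workbook (e.g. via openpyxl with formulas preserved) into a plain dict:

```python
# Flag cells in a formula column that hold a literal value instead of a
# formula -- the "someone pasted a static number over it" audit.
def find_hardcoded(column_cells):
    """column_cells: dict mapping cell ref -> stored content,
    e.g. {"C2": "=A2*B2", "C3": 40}. Formulas are strings starting '='."""
    return [(ref, val) for ref, val in column_cells.items()
            if not (isinstance(val, str) and val.startswith("="))]

cells = {"C2": "=A2*B2", "C3": "=A3*B3", "C4": 40}  # C4 was overwritten
find_hardcoded(cells)  # -> [("C4", 40)]
```

A model could run a check like this across every sheet and then explain *why* each flagged cell looks wrong, which is the part scripts can't do.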
Disclosure: My company builds ingestion pipelines for large multi-tab Excel files, PDFs, and CSVs.
https://www.anthropic.com/news/advancing-claude-for-financia...
item, date, price
abc, 01/01/2023, $30
cde, 02/01/2023, $40
... 100k rows ...
subtotal. $1000
def, 03/01,2023, $20
"Hey Claude, what's the total from this file?" > greps for headers > "Ah, I see column 3 is the price value" > SUM(C2:C) -> $2020 > "Great! I found your total!" — having silently counted the subtotal row on top of the rows it summarizes.
If you can find me an example of tech that can solve this at scale on large, diverse Excel formats, then I'll concede, but I haven't found something actually trustworthy for important data sets
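To make the failure concrete: on a toy version of that file, a naive sum of the price column double-counts the embedded subtotal, while a guard on the item field gets the right answer. The guard is a heuristic, and exactly the kind that breaks across diverse real-world formats:

```python
# Toy version of the file above: a subtotal row is embedded mid-data.
rows = [
    ("abc", "01/01/2023", 30),
    ("cde", "02/01/2023", 40),
    ("subtotal", "", 70),        # summarizes the rows above it
    ("def", "03/01/2023", 20),
]

naive = sum(price for _, _, price in rows)    # 160: subtotal counted twice
safe = sum(price for item, _, price in rows
           if item.lower() != "subtotal")     # 90: line items only
```

Real files hide this worse: "Subtotal", "TOTAL", blank item cells, merged cells, a grand total at the bottom. A filter tuned for one export format quietly miscounts the next one.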
If LLMs are a 6/10 right now at basic coding, then they're a 3/10 at Excel in my experience.
I would love to learn more about their challenges, as I have been working on an Excel AI add-in for quite some time and have followed Ask Rosie almost from their start.
That they've now gone through the whole cycle worries me that I'm too slow as a solo founder building on the side in these fast-paced times.
Not sure it's binary like that, but as startups we will probably be left collecting the scraps instead.
Git LFS for the workbook + the following prompt:
“Create a commit message explaining what has changed in the workbook since the last commit. Be brief, but explain the change in business terms as well as code-change terms.”
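The setup is a couple of commands. A sketch, assuming git-lfs is installed and a `claude -p`-style CLI that reads stdin; the CLI name, workbook filename, and the diff-to-prompt plumbing are all assumptions:

```shell
git lfs install
git lfs track "*.xlsx"               # store workbook binaries via LFS
git add .gitattributes Budget.xlsx

# Ask the model for the message, then commit with it
msg=$(git diff --cached --stat | claude -p "Create a commit message explaining \
what has changed in the workbook since the last commit. Be brief, but explain \
the change in business terms as well as code-change terms.")
git commit -m "$msg"
```

One caveat: with LFS the diff only shows pointer-file changes, so a real version would need to extract the sheet contents first (e.g. dump cells to CSV per commit) and diff those before prompting.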