Posted by trq_ 10/24/2024

Claude Computer Use – Is Vision the Ultimate API? (www.thariq.io)
113 points | 90 comments
CharlieDigital 10/24/2024|
Vision is the ultimate API.

The historical progression from text to still images to audio to moving images will hold true for AI as well.

Just look at OpenAI's progression as well from LLM to multi-modal to the realtime API.

A co-worker almost 20 years ago said something interesting to me as we were discussing Al Gore's Current TV project: the history of information is constrained by "bandwidth". He mentioned how broadcast television went from 72 hours of "bandwidth" (3 channels x 24h) per day to now having so much bandwidth that we could have a channel with citizen journalists. Of course, this was also the same time that YouTube was taking off.

The pattern holds true for AI.

AI is going to create "infinite bandwidth".

swatcoder 10/24/2024||
> The historical progression from text to still images to audio to moving images will hold true for AI as well.

You'll have to explain what you mean by this. Direct speech, text, illustrations, photos, abstract sounds, music, recordings, videos, circuits, programs, cells... these are all just different mediums with different characteristics. There is no "progression" apparent among them. Why should there be? They each fulfill different ends and suit different occasions.

We seem to have discovered a new family of tools that help lossily transform content or intent from one of these mediums to some others, which is sure to be useful in its own ways. But it's not a medium like the above in the first place, and with none of them representing a progression, it certainly doesn't either.

CharlieDigital 10/24/2024|||

    > You'll have to explain what you mean by this
The progression of distribution. Printing press, photos, radio, movies, television. The early web was text, then came images, then audio (Napster age), and then video (remember that Netflix used to ship DVDs?).

The flip side of that is production and the ratio of producers to consumers. As the bandwidth for distribution increases, there is a decrease in the cost and complexity for producers and naturally, we see the same progression with producers on each new platform and distribution technology: text, still images, audio, moving images.

swatcoder 10/24/2024|||
But it's not a progression. There's no transitioning. It's just the introduction of new media alongside prior ones.

And as a sibling commenter noted, the individual histories of these media are disjoint and not really in the sequence you suggest.

Regardless, generative AI isn't a medium like any of these. It's a means to transform media from one type to another, at some expense and with the introduction of loss/noise. There's something revolutionary about how easy it makes it to perform those transitions, and how generally it can perform them, but it's fundamentally more like a screwdriver than a video.

llm_trw 10/25/2024||
The progression is in the information density we can carry across a medium.

Pretty much all of them have followed the same pattern:

Text, images, audio, video, and maybe hologram as the end goal.

We are getting the same with AI today.

margalabargala 10/24/2024|||
> The progression of distribution. Printing press, photos, radio, movies, television.

Your history is incorrect, though. Still images predate text, by a lot.

Cave paintings came before writing. Woodcuts came before the printing press.

earthnail 10/24/2024|||
Not in the information age. This cascade just corresponds to how much data and processing power is needed for each.

It is entirely logical to say that AI development will follow the same progression as the early internet or as broadcast since they all fall under the same data constraints.

margalabargala 10/24/2024||
In the information age we've seen inconsistencies as well.

Ever since the release of Whisper and others, text-to-speech and speech-to-text have been more or less solved, while image generation seems to still sometimes have trouble. Earlier this week was a thread about how no image model could draw a crocodile without a tail.

Meanwhile, the first photographs predate the first sound recordings. And moving images without sound, of course, predate moving images with sound.

The original poster was trying to sound profound as though there was some set sequence of things that always happens through human development. But the reality is a much more mundane "less complex things tend to be easier than more complex things".

CharlieDigital 10/25/2024||

    > The original poster was trying to sound profound
I'm just here trying to justify why NVDA is still a growth stock; we're nowhere near peak gen AI.
CharlieDigital 10/24/2024||||
Cave paintings are not distribution; you can't produce and distribute them like a copy of a text or a photograph.
TacticalCoder 10/25/2024|||
> Cave paintings came before writing. Woodcuts came before the printing press.

Toddlers also learn to recognize drawings and to make simple drawings themselves way before they learn to read or write.

Image definitely predates text.

rhdunn 10/24/2024||||
I'd argue that multimodal analysis can improve uni/bimodal models.

There is overlap between text-to-image and text-to-video -- image models would help video models animate interesting or complex prompts; video models would help image models learn to differentiate features, as there are additional clues in how the image changes and remains the same over time.

There's overlap with audio, text transcripts, and video around learning to animate speech, e.g. by learning how faces move with the corresponding audio/text.

There's overlap with sound and video -- e.g. being able to associate sounds like a dog barking with their source, without direct labelling of either.

ogogmad 10/24/2024||||
> We seem to have discovered a new family of tools that help lossily transform content or intent from one of these mediums to some others

That's not what LLMs do. More like AI art.

skydhash 10/24/2024|||
It is not. Text is very dense information-wise, it's recursive, and you can formalize it. It's easily coupled with interaction methods and more apt for automation. You can easily see this with software like AutoCAD, which has both. There's a reason all protocols are text.

Vision and audio play a nice role, but that's because of humans and reality. A real world <-> vision|audio <-> processing pipeline makes sense. But a processing <-> data <-> vision|audio <-> data <-> processing cycle is just nonsense and a waste of resources.

ricardo81 10/24/2024|||
>Text is very dense information wise and recursive and you can formalize it.

There have been a lot of attempts over the years, with varying degrees of accuracy, but I don't know if you can go as far as to "formalize" it. Beyond the syntax (tokenising, syntactic chunking, and beyond) there is the intent, and that is super hard to measure. And possibly the problem with these prompts is that they get things right a lot of the time but wrong, say, 5% of the time, purely because they couldn't formalize it. My web hosting has 99.99% uptime, which is a bit more reassuring than 95%.

cooper_ganglia 10/24/2024|||
Not a waste of resources, just an increase in use. This is why we need more resources.
skydhash 10/24/2024||
It's a waste, because you could just cut out the middleman and have a data <-> processing cycle. When you increase resource use, some other metric should increase by a higher factor (car vs. carriage and horses, computer vs. doing computation by hand); otherwise it's a waste.
ricardo81 10/24/2024|||
You could call it bandwidth, or call it entropy. I'd lean towards the more physical definition.

I think of how the USA had cable TV and hundreds of channels projecting all kinds of whatever in the 80s while here in the UK we were limited to our finite channels. To be fair those finite channels gave people something to talk about the next day, because millions of people saw the same thing. Surely a lot of what mankind has done is to tame entropy, like steam engines etc.

With AI and everyone having a prompt, it's surely a game changer. How it works out, we'll see.

ToDougie 10/24/2024|||
So long as the spectrum is open for infinity, yes.

Was listening to a Seth Godin interview where he pointed out that there was a time when you had to purchase a slice of spectrum to share your voice on the radio. Nowadays you can put your thoughts on a platform, but that platform is owned by corporations who can and will put their thumb on thoughtcrime or challenges.

I really do love your comment. Cheers.

CharlieDigital 10/24/2024||
Thanks!

There's a related concept as well, which is that as "bandwidth" increases, the ratio of producers to consumers pushes upwards towards 1. My take is that generative AI will accelerate this.

I write a bit more in depth about it here: https://chrlschn.dev/blog/2024/10/im-a-gen-ai-maximalist-and...

wwweston 10/24/2024|||
The information coursing through the world around us already exceeds our ability to grasp it by high orders of magnitude.

Three channels of television over 8 hours was already more than anyone had time to take in.

AI might be able to create summarizing layers and relays that help manage that.

AI isn't going to create infinite bandwidth. It's as likely to increase entropy and introduce noise.

CharlieDigital 10/25/2024||
It's not that it makes more channels for all of us; it creates a channel for each of us.
mathgeek 10/26/2024||
There are already more channels than that, as bots training off the media produced by bots will just continue to grow.
layer8 10/24/2024|||
Text is still a predominant medium of communication and information processing, and I don’t see that changing substantially. TFA was an article, not a video, and you wouldn’t want the HN comment section to be composed of videos or images. Similarly, video calls haven’t replaced texting.
CharlieDigital 10/24/2024||
It's not that it will be replaced, but there's a natural progression in the types of media that are available on a given platform of distribution.

RF: text (telegraphy), audio, still images (fax), moving images

The web had the same progression: text, still images (inverted here), audio (the MP3/Napster age), video (Netflix, YouTube)

AI: text, images, audio (realtime API), ...?

Vision is the obvious next medium.

wwweston 10/24/2024|||
Lately I've been realizing that as much as I value YouTube, much of the content is distorted by various incentives favoring length and frequency and a tech culture which overfocuses on elaborating steps as an equivalent to an explanation. This contributes to more duplicate content (often even within the same channel) and less in terms of refined conceptualization. I find myself often wishing I had outlines, summaries, and hyperlinks to greater details.

Of course, I can use AI tools to get approximations of such things, and it'll probably get better, which means we will now be using this increased bandwidth or progression to produce more video to be pushed out through the pipes and distilled by an AI tool into shorter video or something like hypertext.

Progress!

layer8 10/24/2024|||
That progression is obviously a consequence of being able to tackle increasingly large volumes of data, but that doesn’t make video an “ultimate API”, unless you just mean “the most demanding one in terms of data volume”.
CharlieDigital 10/24/2024||
That's the point: as bandwidth and capacity increases, there's a natural progression. So it's obvious that vision is next after audio.
diffeomorphism 10/25/2024|||
Vision fails the discoverability test: Buttons that don't look like buttons. Fake download buttons. Long press to do something. Swipe in the shape of a hexagon. Grey text on light grey background.

Also: thanks for tuning in, Raid Shadow Legends, many people ask, but how... Anyway, you need these two lines of text (a 20-minute YouTube video could have been half a page of text).

Finally: Huge output of bad quality and very, very limited input capacity. So "infinite bandwidth in" and then horrible traffic jam out.

IAmGraydon 10/25/2024|||
The 'bandwidth' analogy breaks down because it assumes AI's value lies in processing more information, rather than processing information more intelligently. Increasing broadcast bandwidth added linear value, but AI's advancements come from complexity, nuance, and understanding - not just sheer volume of data. 'Infinite bandwidth' doesn't guarantee better insights or decision-making; it may even lead to information overload and decreased relevance.
CharlieDigital 10/25/2024||
Gen AI's value lies in producing more variants of information. Infinitely many.

If you and I prompt OpenAI to generate an image of a woman holding a candle, we'll get two totally novel instances.

slowmovintarget 10/25/2024|||
> AI is going to create "infinite bandwidth".

For whom?

If you mean infinite outpouring, then yes, but it will drown us in a sea of noise. We've constructed a Chinese Room for the mind. The computer was a bicycle, but this is something different.

Bandwidth is carrying ability, and the current incarnation of "AI" does not increase signal. It takes vastly more resources to produce something close enough, but not quite... it.

CharlieDigital 10/25/2024|||
Just as there are thousands of videos on YouTube for making pancakes, there will one day be infinite videos for making pancakes.

That visual interface will watch as you prep your pancakes and give you tips, suggest a substitute if you are missing an ingredient. Your experience with that recipe will be one of "infinitely" many.

pixl97 10/25/2024|||
Eh, even current AI, when it makes a summary of your preferences, is ever so slightly increasing signal. I don't expect this ability to diminish over time, but instead to increase, which would lead to more personalized signaling.
croes 10/24/2024|||
Vision, especially GUIs, is a pretty limited API.
abirch 10/24/2024|||
It reminded me of the old Unix koans of Master Foo

https://prirai.github.io/books/unix-koans/#master-foo-discou...

CharlieDigital 10/24/2024|||
I mean vision in the most general sense, not just a GUI.

Imagine OpenAI can not only read the inflection in your voice, but also nuances in your facial expressions and how you're using your hands to understand your state of mind.

And instead of merely responding as an audio stream, a real-time avatar.

corobo 10/24/2024|||
Sweet! I'll have my own Holly from Red Dwarf!

or maybe my own HAL9000..

A little bit ambivalent on this haha, looking forward to seeing what comes of it either way though :)

croes 10/24/2024|||
Imagine how this could be abused.
CharlieDigital 10/25/2024||
Or how it could be used.
pixl97 10/25/2024||
Used and abused. That is the nature of intelligence.
robotresearcher 10/25/2024||
> The pattern holds true for AI. AI is going to create "infinite bandwidth".

I’ve worked in AI for more than 30 years and I have no idea what you mean by this. Can you explain?

CharlieDigital 10/25/2024||
There is a ratio of producers to consumers in all media.

There are two ways to think about bandwidth. One is the physical capacity. The other is the content that can be produced and distributed.

We once had 3 channels equating to a maximum of 72h of content in a 24h period. Now we have YouTube which is orders of magnitude more content and bandwidth. The constraint now is the ratio of producers to consumers. Some creator had to create the exact content that you want.

What if gen AI can create the exact content and media experience that you want, effectively pushing the ratio of producers to consumers towards 1 so that every experience is unique? It is effectively as if there were infinite bandwidth to create and distribute content. You are no longer constrained by physical bandwidth, and no longer constrained by production bandwidth (actual creators making the content).

You want your AI generated news reel delivered by Walter Cronkite. I want mine delivered by Barbara Walters wearing a fake mustache while standing on one hand on the moon. It is as if there are infinite producers.

I write a bit more on this topic here: https://chrlschn.dev/blog/2024/10/im-a-gen-ai-maximalist-and...

pixl97 10/25/2024||
What happens after the effective end of society?
viraptor 10/24/2024||
Some time ago I made a prediction that accessibility is the ultimate API for UI agents, but unfortunately multimodal capabilities went the other way. We can still change course, though:

This is a great place for people to start caring about accessibility annotations. All serious UI toolkits allow you to tell the computer what's on the screen. This allows things like Windows Automation https://learn.microsoft.com/en-us/windows/win32/winauto/entr... to see a tree of controls with labels and descriptions without any vision/OCR. It can be inspected by apps like FlaUInspect https://github.com/FlaUI/FlaUInspect?tab=readme-ov-file#main... But see how the example shows a statusbar with (Text "UIA3" "")? It could've been (Text "UIA3" "Current automation interface") instead for both a good tooltip and an accessibility label.

Now we can kill two birds with one stone: actually improve the accessibility of everything (and make sure custom controls adhere to the framework as well), and provide the same data to the coming automation agents. A text description will be much cheaper to process than a screenshot. It will also help my work with manually coded app automation, so that's a win-win-win.

As a side effect, it would also solve issues with UI weirdness. Have you ever had windows open something on a screen which is not connected anymore? Or under another window? Or minimised? Screenshots won't give enough information here to progress.
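
To give a concrete taste, here's a minimal sketch reading that UIA tree from Python with pywinauto (Windows only; the Notepad matcher is just an example, not something from the linked docs):

    # Minimal sketch: read the UI Automation (UIA) control tree instead of pixels.
    # Assumes Windows and `pip install pywinauto`.
    from pywinauto import Desktop

    desktop = Desktop(backend="uia")

    # Every top-level window with its accessibility name and control type --
    # no vision or OCR involved.
    for window in desktop.windows():
        print(window.window_text(), "-", window.element_info.control_type)

    # Dump the full labelled control tree of a single app:
    notepad = desktop.window(title_re=".*Notepad.*")
    notepad.print_control_identifiers()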

simonw 10/24/2024||
If you want to try out Computer Use (awful name) in a relatively safe environment, the Docker container Anthropic provide here is very easy to start running (provided you have Docker set up; I used it with Docker Desktop for Mac): https://github.com/anthropics/anthropic-quickstarts/tree/mai...
trq_ 10/24/2024||
Yes, that's a good point! To be honest, I wanted to try it on the machine I use every day, but it's definitely a bit risky. Let me link that in the article.
danielbln 10/24/2024||
I for one appreciate the name Computer Use: no flashy marketing name, it just describes what it is. An LLM using a computer.
croes 10/24/2024|||
It's hard to ask (or search for) questions about something called "Computer Use", though.
swyx 10/24/2024|||
Also, it contrasts nicely with Tool Use, which is about calling APIs rather than clicking on things.
lostmsu 10/24/2024||
It is literally a specialized subset of Tool Use both in concept and how it appears to be actually implemented.
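
You can see it in the SDK: the computer tool goes into the same tools list as any user-defined tool. A rough sketch against the October 2024 beta of the Python SDK (model and beta identifiers as of then; treat the details as approximate):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        # The built-in computer tool sits alongside ordinary custom tools.
        tools=[{
            "type": "computer_20241022",
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }],
        messages=[{"role": "user", "content": "Open the calculator."}],
        betas=["computer-use-2024-10-22"],
    )

    # The reply contains tool_use blocks (screenshot, left_click, type, ...)
    # that your agent loop executes and reports back -- the Tool Use protocol.
    print(response.content)
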
unglaublich 10/24/2024||
Vision here means "2d pixel space".

The ultimate API is "all the raw data you can acquire from your environment".

layer8 10/24/2024|
For a typical GUI, the “mental model” actually needs to be 2.5D, due to stacked windows, popups, menus, modals, and so on. The article mentions that the model has difficulties with those.
pabe 10/24/2024||
I don't think vision is the ultimate API. It wasn't with "traditional" RPA and it won't be with more advanced AI-RPA. It's inefficient. If you want something to be used by a bot, write an interface for the bot. I'd make an exception for end-to-end testing.
Veen 10/24/2024|
You're looking at it from a developer's perspective. For non-developers, vision opens up all sorts of new capabilities. And they won't have to rely on the software creator's view of what should be automated and what should not.
skydhash 10/24/2024|||
Most non-developers won't bother. You have Shortcuts on iOS and macOS, which is like Scratch for automation, and still only power users use it. Everyone else just downloads the shortcut they want.
croes 10/24/2024|||
If a GUI is confusing for humans, AI will have problems too.

So you still rely on developers to make reasonable GUIs

downWidOutaFite 10/24/2024||
Vision is a crappy interface for computers but I think it could be a useful weapon against all the extremely "secure" platforms that refuse to give you access to your own data and refuse to interoperate with anything outside their militarized walled gardens.
tomatohs 10/24/2024||
> It is very helpful to give it things like:

- A list of applications that are open
- Which application has active focus
- What is focused inside the application
- Function calls to specifically navigate those applications, as many as possible

We’ve found the same thing while building the client for testdriver.ai. This info is in every request.
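
Roughly, the extra context looks something like this (a simplified sketch; the field names are illustrative, not our actual schema):

    import json

    # Illustrative sketch: desktop state gathered before each model call,
    # e.g. via platform APIs (UIA on Windows, AX on macOS, AT-SPI on Linux).
    def build_context() -> str:
        state = {
            "open_applications": ["Finder", "Safari", "Terminal"],
            "focused_application": "Safari",
            "focused_element": "address bar",
            "available_functions": ["focus_app", "click_element", "type_text"],
        }
        return json.dumps(state, indent=2)

    system_prompt = (
        "You are operating this computer.\n"
        "Current desktop state:\n" + build_context()
    )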

sharpshadow 10/24/2024||
In this context, Windows Recall makes total sense now from an AI learning perspective for them.

It's actually a super cool development, and I'm already very excited to let my computer use any software like a pro in front of me. Paint me a canvas of a savanna sunset with animal silhouettes, produce me a UK garage house track, etc., everything with all the layers and elements in the software, not just a finished output.

croes 10/24/2024||
Lots of energy consumption just to create a remix of something that already exists.
sharpshadow 10/24/2024||
Absolutely we need much much more energy and many many more powerful chips. Energy is a resource and we need to harvest more of it.

I don't understand why people make a point about energy consumption as if it were something bad.

wwweston 10/24/2024|||
https://dothemath.ucsd.edu/2012/04/economist-meets-physicist...

"the Earth has only one mechanism for releasing heat to space, and that’s via (infrared) radiation. We understand the phenomenon perfectly well, and can predict the surface temperature of the planet as a function of how much energy the human race produces. The upshot is that at a 2.3% growth rate, we would reach boiling temperature in about 400 years. And this statement is independent of technology. Even if we don’t have a name for the energy source yet, as long as it obeys thermodynamics, we cook ourselves with perpetual energy increase."

viraptor 10/24/2024|||
Come on, the trolling is too obvious.
sharpshadow 10/24/2024||
Absolutely not, I'm serious, and that is exactly what is going on. It would be trolling to pretend the opposite or to accept the status quo as final.

Obviously we need a way to not harm and destroy our environment further, and we are on a good path there. But technically we need much, much more energy.

croes 10/24/2024||
We are not on a good path; we are far from our goals, and at the moment AI is much the same as Bitcoin.

We do things fast and expensive that could be done slow but cheap.

The problem is we are running out of time.

If you want more energy, you first build clean energy sources, then you can pump up consumption, not the other way around.

sharpshadow 10/24/2024|||
Those goals you are referring to are made up, as is the illusion that we are running out of time.

It's usually the other way around. If we only did things when the resources were already there, we wouldn't have the progress we do.

flemhans 10/24/2024|||
Agreed, let's go!
throwup238 10/24/2024||
Vision plus accessibility metadata is the ultimate API. I see little reason that poorly designed flat UIs are going to confuse LLMs any less than humans, especially when they're missing from the training data (like most internal apps) or the documentation on the web is out of date. Even a basic dump of ARIA attributes or the hierarchy from OS accessibility APIs can help a lot.
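
On the web side, something like this Playwright sketch already produces that dump (accessibility.snapshot() is deprecated in newer releases but still works; the URL is a placeholder):

    # Sketch: hand the model the browser's accessibility tree instead of pixels.
    # Assumes `pip install playwright` and `playwright install chromium`.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")  # placeholder URL

        # Roles, names, and states Chromium derives from the DOM + ARIA --
        # far cheaper for a model to consume than a screenshot.
        print(page.accessibility.snapshot())

        browser.close()
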
dbish 10/24/2024|
The problem is that accessibility data and APIs are very bad across the board.
echoangle 10/24/2024|
Am I the only one thinking this is an awful way for AI to do useful stuff for you? Why would I train an AI to use a GUI? Wouldn’t it be better to just have the AI learn API docs and use that? I don’t want the AI to open my browser, open google maps and search for Shawarma, I want the AI to call a google api and give me the result.
famouswaffles 10/24/2024||
The vast majority of applications cannot be used through anything other than a GUI.

We built computers to be used by humans and humans overwhelmingly operate computers with GUIs. So if you want a machine that can potentially operate computers as well as humans then you're going to have to stick to GUIs.

It's the same reason we're trying to build general purpose robots in a human form factor.

The fact that a car is about as wide as a two horse drawn carriage is also no coincidence. You can't ignore existing infrastructure.

echoangle 10/24/2024||
But I don’t want an AI to „operate a computer“… maybe I’m missing the point of this, but I just can’t imagine a use case where this is a good solution. For everything browser-based, the burden of making an API is probably relatively small, and if the page is simple enough, you could maybe even get away with training the AI on the page HTML and generating a response to send. And for everything that’s not browser-based, I would either want the AI embedded in the software (image editors, IDEs…) or not there at all.
dragonwriter 10/24/2024|||
I think the two big applications for programmatically (AI or otherwise) operating a computer via this kind of UI automation are:

(1) Automated testing of apps that use traditional UIs, and

(2) Automating legacy apps that it is not practical or cost effective to update.

famouswaffles 10/24/2024||||
>But I don’t want an AI to „operate a computer“…

You don't and that's fine but certainly many people are interested in such a thing.

>maybe I’m missing the point of this but I just can’t imagine a usecase where this is a good solution.

If it could operate computers robustly and reliably, then why wouldn't you? Plenty of what someone does on a computer is a task they would like to automate away but can't with current technology.

>For everything browser based, the burden of making an API is probably relatively small

It's definitely not less effort than sticking to a GUI.

>and if the page is simple enough, you could maybe even get away with training the AI on the page HTML and generating a response to send.

Sure, in special circumstances it may be a good idea to use something else.

>And for everything that’s not browser-based, I would either want the AI embedded in the software (image editors, IDEs…) or not there are all.

AI embedded in software and AI operating the computer itself are entirely different things. The former is not necessarily a substitute for the latter.

Having access to Sora is not at all the same thing as an AI that can expertly operate Blender. And right now at least, studios would actually much prefer the latter.

Even if they were equivalent (they're not), you wouldn't be able to operate most applications without developers explicitly supporting and maintaining it first. That's infeasible.

simonw 10/24/2024||||
Google really don't want to provide a useful API to their search results.
layer8 10/24/2024|||
A general-purpose assistant should be able to perform general-purpose operations, meaning the same things people do on their computers, and without having to supply a special-purpose AI-compatible interface for every single function the AI might need to operate. The AI should be able to operate any interface a human can operate.
Workaccount2 10/24/2024|||
Anthropic is selling a product to people, not software engineers.
bubaumba 10/25/2024|||
How about this: GUI and API are just interfaces; the rest is the same. They can coexist in the same model or setup. The same functionality can usually be implemented by one or the other, and in advanced products by both. But the model's thinking is probably the same: it operates with concepts arranged in a sort of graph. Just my guess; I have no proof, and likely it's not always true.
voiper1 10/24/2024||
Sure, it's more efficient to have it use an API. And people have been integrating those for a while now.

But there are tons of applications locked behind a website or desktop GUI with no API that are accessible via vision.
