Posted by anamhira 10 hours ago

We built a tool that makes it easy to create LLM UI automations for iOS and Android.

If you've ever tried to create tests that run with an LLM on mobile, you've probably run into context problems: the accessibility view is too long to fit in context, and sending just a screenshot to the LLM gives low accuracy.

CogniSim:

1. Turns the accessibility tree into a more LLM-parseable form for your agents, and uses set-of-mark prompting to combine the screenshot and text for better accuracy. We create a grid of the screen so the text representation also has some visual structure (see the first sketch after this list).

2. Implements a simple API for performing interactions on iOS and Android (see the second sketch below).

3. Maps LLM responses back to accessibility components via a dictionary that maps id -> accessibility component (also covered in the second sketch).
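To make item 1 concrete, here is a minimal sketch of what flattening an accessibility tree into a numbered, grid-annotated text block might look like. The names (`AXNode`, `flatten_tree`, `GRID_COLS`, the assumed screen resolution) are illustrative, not CogniSim's actual API:

```python
# Hypothetical sketch: accessibility tree -> set-of-mark style text + id map.
from dataclasses import dataclass, field

GRID_COLS, GRID_ROWS = 4, 8          # screen divided into a coarse 4x8 grid
SCREEN_W, SCREEN_H = 1080, 2340      # assumed device resolution

@dataclass
class AXNode:
    role: str                        # e.g. "Button", "TextField"
    label: str                       # accessible name
    x: int; y: int; w: int; h: int   # bounding box in pixels
    children: list = field(default_factory=list)

def grid_cell(node: AXNode) -> str:
    """Map the node's center point to a grid cell like 'B3' for visual structure."""
    cx, cy = node.x + node.w // 2, node.y + node.h // 2
    col = chr(ord("A") + min(cx * GRID_COLS // SCREEN_W, GRID_COLS - 1))
    row = min(cy * GRID_ROWS // SCREEN_H, GRID_ROWS - 1) + 1
    return f"{col}{row}"

def flatten_tree(root: AXNode) -> tuple[str, dict[int, AXNode]]:
    """Walk the tree, assign each labeled node a numeric mark, and return
    the text block sent to the LLM plus the id -> node dictionary used later
    to resolve the model's answer back to a component."""
    lines, id_map, next_id = [], {}, 0
    stack = [root]
    while stack:
        node = stack.pop()
        if node.label:                               # skip unlabeled containers
            id_map[next_id] = node
            lines.append(f"[{next_id}] {node.role} '{node.label}' @ {grid_cell(node)}")
            next_id += 1
        stack.extend(reversed(node.children))
    return "\n".join(lines), id_map
```

The same numeric marks get drawn onto the screenshot, so the model sees matching ids in both the image and the text.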
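And for items 2 and 3, a sketch of the interaction side: the LLM replies with an id, the dictionary from above resolves it to a component, and a thin driver performs the gesture. `Driver` and `tap_at` are hypothetical stand-ins for whatever backend (e.g. Appium/WebDriver) actually talks to the device:

```python
# Hypothetical sketch: resolve the LLM's chosen id and execute the action.
import json

class Driver:
    """Placeholder for a platform driver exposing tap/type primitives."""
    def tap_at(self, x: int, y: int) -> None:
        print(f"tap at ({x}, {y})")   # real impl would call the device backend

def execute_llm_action(raw_response: str, id_map: dict, driver: Driver) -> None:
    """Parse an LLM reply like '{"action": "tap", "id": 3}' and act on it."""
    action = json.loads(raw_response)
    node = id_map[action["id"]]                       # id -> accessibility component
    if action["action"] == "tap":
        driver.tap_at(node.x + node.w // 2, node.y + node.h // 2)
    else:
        raise ValueError(f"unsupported action: {action['action']}")
```

So a single test step is: flattened text + marked screenshot go to the model, its JSON reply is resolved through the id map, and the driver taps the real element.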

Pretty simple, but we hope you get some value from it :)

We run CogniSim in production for thousands of mobile UI tests a day at Revyl. If you are interested in proactive observability - combining resilient end-to-end tests with OpenTelemetry traces - give us a shout: revyl.ai
