Posted by darkrishabh 4 hours ago

Show HN: Agent-skills-eval – Test whether Agent Skills improve outputs (github.com)
24 points | 5 comments
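[The eval idea in the title can be sketched roughly as follows — run the same task with and without a skill injected, then compare outputs with a judge. All names below are illustrative stand-ins, not the repo's actual API:]

```python
# Hypothetical sketch of a with-skill vs. baseline comparison.
# run_model and judge are stand-ins, not real model calls.

def run_model(prompt: str) -> str:
    # Stand-in for a real model invocation.
    return f"answer to: {prompt}"

def judge(with_skill: str, baseline: str) -> str:
    # Stand-in for an LLM judge; here it trivially prefers the longer answer.
    return "with_skill" if len(with_skill) >= len(baseline) else "baseline"

task = "Summarize this design doc."
skill = "Always list key assumptions first.\n"

with_skill_out = run_model(skill + task)   # skill prepended by the harness
baseline_out = run_model(task)             # same task, no skill

verdict = judge(with_skill_out, baseline_out)  # -> "with_skill"
```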
ssgodderidge 1 hour ago
The example model in the documentation is 4o-mini, you might want to update that to a more recent model.

As an aside, 4o-mini came out months before agent skills were released… I’m curious how it performs with choosing to load skills in the first place?

stingraycharles 38 minutes ago
It’s an artifact of the documentation being AI-generated; the models usually pick GPT-4-era model names without giving it further thought.

For Gemini they seem to always pick 2.5 despite 3.1 being the latest; for Claude, the 3.5-era models.

Not sure what’s preventing AI labs from ensuring this stuff is refreshed during training.

block_dagger 44 minutes ago
The skill is deterministically added to the prompt by the harness before the target model is invoked. There is no “choosing” to load a skill. You might be confusing skills with tools (MCP etc).
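[The "deterministic injection" point above can be sketched like this — hypothetical harness names, not the repo's actual code. The harness builds the prompt before the model is ever invoked, so the model makes no loading decision:]

```python
# Minimal sketch: a harness that deterministically prepends a skill's
# instructions to the user message. Contrast with tools/MCP, where the
# model itself decides at inference time whether to call something.

def build_prompt(skill_text: str, user_message: str) -> str:
    """Inject the skill unconditionally; the model never 'chooses' it."""
    return f"<skill>\n{skill_text}\n</skill>\n\n{user_message}"

prompt = build_prompt(
    skill_text="When summarizing, list key assumptions first.",
    user_message="Summarize this design doc.",
)
# The skill is present in every invocation, regardless of the model.
```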
egeozcan 1 hour ago
Are there any published results gathered using this?
ianhxu 1 hour ago
How do you iterate on the judge prompt? Is there an auto-rater?
bixxie09 8 minutes ago
[dead]
huflungdung 3 hours ago
[dead]