Posted by zdw 1 day ago
I typically use python ML libraries like lightgbm, sklearn, xgboost etc.
I also use numpy for large correlation matrices, covariance etc.
Are these operations accelerated? Is there a simple way to benchmark?
I see a lot of benchmarks on what look like C functions, but in my day job I rely on higher-level libraries. I don't know whether they perform any better on Apple HW, and unless they expose a flag like use_ane I'm inclined to doubt they do.
Of course chatgpt suggested I benchmark an Intel Mac vs. newer apple silicon. Thanks chatgpt, there's a reason people still hate AI.
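For what it's worth, you don't need two machines to get a first answer: numpy's linear algebra goes through whatever BLAS it was built against, and on Apple Silicon wheels that's often the Accelerate framework. A crude self-contained check (the GEMM timing below is just a sketch; whether `show_config` reports "accelerate" depends on your build, and none of this touches the ANE — it's CPU/AMX only):

```python
import time
import numpy as np

# Which BLAS is numpy linked against? Apple Silicon wheels often
# report Accelerate here (build-dependent).
np.show_config()

# Crude GEMM benchmark: matmul dominates the cost of correlation
# and covariance computations like np.cov / np.corrcoef.
n = 2000
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))

start = time.perf_counter()
c = a @ a.T  # same BLAS code path the higher-level routines hit
elapsed = time.perf_counter() - start

gflops = 2 * n**3 / elapsed / 1e9
print(f"{elapsed:.3f} s, ~{gflops:.1f} GFLOP/s")
```

Libraries like lightgbm and xgboost ship their own compiled kernels, so this only tells you about the numpy side; for those you'd time your actual training calls.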
I just wanted to say that you’ve done an excellent job and am looking forward to the 3rd installment.
Why did you guys remove the ability to detach the console and move it to another window?
6.6 FLOPS/W, plus the ability to completely turn off when not in use, so 0W at idle.
Sure, "collaboratively." Why would I ever trust a vibe-coded analysis? How do I, a non-expert in this niche, know that Opus isn't pulling a fast one on both of us? LLMs write convincing bullshit that even fools experts. Have you manually verified each fact in this piece? I doubt it. Thanks for the disclaimer, it saved me from having to read it.
Humans also make mistakes and assumptions while reverse engineering, so you'll always need more engineers to go through the results and test things.
in big hardware companies, things start getting siloed, but that probably has more to do with big companies (seemingly invariably) operating as a union of fiefdoms (dunbar-number-ification?)
no it's not insane - it's completely mundane policy. that's my point - that you're calling something out as insane with exactly zero experience (which is the actually insane thing...).
> the company is also planning a few other software-based AI upgrades, including a new framework called Core AI. The idea is to replace the long-existing Core ML with something a bit more modern.
https://www.bloomberg.com/news/newsletters/2026-03-01/apple-...
- The key insight - [CoreML] doesn't XXX. It YYY.
With that being said, this is a highly informative article that I enjoyed thoroughly! :)
The article links to their own GitHub repo: https://github.com/maderix/ANE
It's not my subject, but it reads as a list of things. There's little exposition.
People seem to be going around pointing out that people talk like parrots, when in reality it's that parrots talk like people.
Did you develop your own whole language at any point to describe the entire world? No: you, me, and society all mimic what is around us.
Humans have the advantage, at least at this point, of being continuous-learning devices, so we adapt and change with the language use around us.
Here is why you are correct:
- I see what you did there.
- You are always right.
> hollance/neural-engine — Matthijs Hollemans’ comprehensive community documentation of ANE behavior, performance characteristics, and supported operations. The single best existing resource on ANE.
> mdaiter/ane — Early reverse engineering with working Python and Objective-C samples, documenting the ANECompiler framework and IOKit dispatch.
> eiln/ane — A reverse-engineered Linux driver for ANE (Asahi Linux project), providing insight into the kernel-level interface.
> apple/ml-ane-transformers — Apple’s own reference implementation of transformers optimized for ANE, confirming design patterns like channel-first layout and 1×1 conv preference.
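The channel-first layout and 1×1-conv preference mentioned for apple/ml-ane-transformers boil down to expressing a linear layer as a pointwise convolution over a (batch, channels, 1, seq) tensor. A minimal numpy sketch of that equivalence (shapes and names here are illustrative, not Apple's API):

```python
import numpy as np

rng = np.random.default_rng(0)
B, C_in, C_out, S = 2, 8, 16, 5   # batch, in/out channels, sequence length

x_seq = rng.standard_normal((B, S, C_in))   # usual (batch, seq, dim) layout
w = rng.standard_normal((C_out, C_in))      # linear-layer weight

# Plain linear layer applied independently to each token.
y_linear = x_seq @ w.T                      # (B, S, C_out)

# ANE-friendly layout: channels first, sequence as the last spatial axis.
x_cf = x_seq.transpose(0, 2, 1)[:, :, None, :]   # (B, C_in, 1, S)

# A 1x1 convolution is just a per-position matmul over the channel axis.
y_conv = np.einsum('oc,bchs->bohs', w, x_cf)     # (B, C_out, 1, S)

# Both paths compute the same values, modulo layout.
assert np.allclose(y_linear, y_conv[:, :, 0, :].transpose(0, 2, 1))
```

The point of the reshuffle is that the hardware's conv pipeline sees a channels-first 4D tensor it can process natively, while the math stays identical to the transformer's linear projections.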