Posted by felipemesquita 8/30/2025

Magic Lantern Is Back(www.magiclantern.fm)
500 points | 165 comments
joatmon-snoo 9/3/2025|
For folks who don't know what Magic Lantern is:

> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.

It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.

bprater 9/3/2025||
More backstory: before the modern generation of digital cameras, Magic Lantern was one of the early ways to "juice" more power out of early generations of Canon cameras, including RAW video recording.

Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.

nottorp 9/3/2025|||
Funny, when I saw it uses a .fm TLD I thought it was some online radio station.
names_r_hard 9/3/2025|||
They were trendy at the time :D

I think possibly someone thought it sounded a bit like firmware?

xwowsersx 9/3/2025||||
Same :) I had in mind Groove Salad from soma.fm
esafak 9/3/2025|||
last.fm
barnas2 9/3/2025|||
"Scrobbles" will always be a funny word to me.
mxmilkiib 9/3/2025|||
sub.fm
t0bia_s 9/3/2025|||
I wish there were similar projects for other camera brands like Fujifilm. Given what ML can do with old Canon cameras, we know there is a lot of potential in those old machines across other brands. It is also an "eco"-friendly approach that should be supported.
bhickey 9/3/2025|||
I just switched from Canon to Fujifilm due to enshittification. Canon started charging $5/mo to get clean video out of their cameras. We're plenty screwed if manufacturers decide that cameras are subscriptions and not glass.
petee 9/3/2025||
Fujis are great, but the ecosystem is definitely smaller, and I've found some software still doesn't support debayering X-Trans.
kjkjadksj 9/3/2025||
Yeah, like Adobe. Whatever demosaicing method they use has been peak "worm" creation for over 10 years. Capture One and dcraw are head and shoulders better.
Ccecil 9/4/2025|||
https://fujihack.org/
hypercube33 9/3/2025||
it also has a scripting system and is damn fun to mess with.
names_r_hard 9/3/2025||
Thanks to all who are sharing their appreciation for this niche but cool project.

I'm the current lead dev, so please ask questions.

Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.

GranPC 9/3/2025||
What's the situation re: running on actual hardware these days? I was experimenting around with my 4000D but when it came to trying to actually run my code on the camera rather than the emulator, a1ex told me I needed some sort of key or similar. He told me he'd sign it for me or something but he got busy and I never heard back.

Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)

names_r_hard 9/3/2025||
That must have been a few years back. I think you're talking about enabling "camera bootflag". We provide an automated way to do this for new installs on release builds, but don't like to make this too easy before we have stable builds ready. People do the weirdest stuff, including trying to flash firmware that's not for their cam, in order to run an ML build for that different cam...

Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/

Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.

GranPC 9/3/2025||
Yes, this was in September 2020 according to my records. All I remember is that I could run the ROM dumper just fine, then I could run my firmware in QEMU, and then I just had to locate a bunch of function pointers to make it do anything useful. Worked in QEMU but that's where I got stuck - no way to run it on hardware.

I'll definitely keep this in mind and hit you up whenever I have a buncha hours to spare. :)

names_r_hard 9/3/2025||
That would have been only a little before a1ex left. Getting code running on real hardware is easy, maybe I'll talk to you in discord in a few months when you find this fabled free time we are all looking for ;)

The 4000D is an interesting cam, we've had a few people start ports then give up. It has a mix of old and new parts in the software. Canon used an old CPU / ASIC: https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...

So it has hardware from 2008, but they did update the OS to a recent build. This is not what the ML code expects to find, so it's been a confusing test of our assumptions. Normally the OS stays in sync with the hardware changes; here it didn't, so when we're reversing, it's hard to tell which changes come from the hardware and which from the OS.

That said, 4000D is probably a relatively easy port.

grep_name 9/3/2025|||
Wow, newly supported models are super exciting to see! I have a 5d mk iii which I got specifically to play around with ML. I haven't done much videography in my life, but I do plan to get some b-roll at the very least with my mk iii, or maybe record some friends' live events sometime.

> I'm the current lead dev, so please ask questions.

Well, you asked for it!

One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where ML future compatibility becomes a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?

Also, as someone from the outside looking in who would be down to spend $100 to see if this something I can do or am interested in, which (cheap) model would be the easiest to grab and load up as dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official canon technical docs?

edit: Found the RE guide on the website, gonna take a look at this later tonight

names_r_hard 9/3/2025||
5D3 is perhaps the best currently supported ML cam for video. It's very capable - good choice. Using both CF and SD cards simultaneously, it can record at about 145MB/s, so you can get very high quality footage.

Re what we can support - it's a reverse engineering project, we can support anything with enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but they don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port (ML GUI but no features) is not hard when you have experience.

What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)

The first cam I ported to was the 200D, unsupported at the time. It took me a few months to get the ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar for the core OS. It's the peripherals that change the most as hardware improves, so they take the most time. And the newer the camera, the more the hw and sw have diverged from the best supported cams.

The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate a full Canon and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.

https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v...

Re docs - they're not in a great shape. It's scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there, it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:

https://github.com/reticulatedpines/magiclantern_simplified/...

If you want physical hardware to play with (it is more fun after all), you might be able to find a 650d or 700d for about $100. Anything that's Digic 5 green here is a capable target:

https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...

Digic 4 stuff is also easy to support, and will be cheaper, but it's less capable and will be showing its age generally - depends if that bothers you.

Vagantem 9/3/2025|||
Just wanted to say thanks for keeping this alive! I used magic lantern in 2014 to unlock 4K video recording on my Canon. It was how students back then could start recording professional video without super expensive gear
dylan604 9/3/2025|||
I still shoot a 5Dmkii solely due to the ML firmware. It's primarily a timelapse camera at this point. The ETTR functionality is one of my absolute favorites. The biggest drawback I have is trying to shoot with an interval of less than 5 seconds. The ML software gets confused and shoots at irregular intervals. Anything over 5 seconds, and it's great. No external timers necessary for the majority of my shooting. I do still have an external one for when <5s intervals are necessary. I'm just waiting for the shutter to die, but I'm confident I'll just have it replaced and continue using the body+ML rather than buy yet another body.

Thanks for your work keeping it going, and for those that have worked on it before.

names_r_hard 9/3/2025||
Strange, it certainly can do sub 5s on some bodies. But I don't have a 5d2 to test with.

Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.

When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!

dylan604 9/3/2025||
> Could this be a conflict with long exposures?

I've done lots of 1/2 second exposures with a 3s interval, and it shoots some at a much shorter interval than 3s and some at 3s+? At one point, the docs said 5s was a barrier; maybe it was the 5dmkii specifically. All of my cards are rated higher than the 5D can write (which makes DIT much faster), so I doubt it is write speed interfering. What makes me think it is not the camera is that a cheap external timer works without skipping a beat.

names_r_hard 9/3/2025||
Yeah, the external timer behaviour is fairly strong evidence. Curious though. These cams all seem to have a milli- and micro-second hw clock, and can both schedule and sleep against either. But it's also true that every cam has some weird quirks. And I don't know the 5d2 internals well.

From what I've seen, the image capture process is state machine based and tries to avoid sleeps and delays. Which makes sense for RTOS and professional photography.

If you care enough to debug it, pop into the discord and I can make you some tests to run.

pixelmonkey 9/3/2025|||
I just want to say "thank you." I run Magic Lantern on my Canon 5D Mark III (5d3) and it is such awesome software.

I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.

More here:

https://amontalenti.com/photos

When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.

names_r_hard 9/3/2025||
You're a better photographer than I am. I'm glad if ML helped you.

Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember for the R7, but from the age and tier, this may be one of the new gen quad core AArch64.

I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.

pixelmonkey 9/3/2025||
I've always wanted to work on Magic Lantern myself (I am in the Discord) but just haven't found the time yet! Thanks again!
ASlave2Gravity 9/3/2025|||
Hey just want to say a massive thank you for everything you've done with this project. I've shot so much (short films, music videos, even a TV pilot!) on my pair of 600Ds and ML has given these cams such an extended life.

It’s been a huge blessing!

fooker 9/3/2025|||
I recently obtained an astro converted 6D. Have played around with CHDK a long time ago as a teenager but never magic lantern.

I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)

I have a newer R62 as well, but would rather not try anything with it yet.

names_r_hard 9/3/2025||
Ah I'd love an astro conversion.

I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D

The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?

It's slow work, but hopefully you don't mind that too much, compilers have a similar reputation.

fooker 9/3/2025||
> turn the cam itself into a star tracker

Hmm, that's a neat idea. The better term for it is 'auto guider'. Auto guiding is basically supplying correction information to the mount when it drifts off.

Most mounts support guiding input and virtually all astrophotographers set up a separate tiny camera, a small scope, and a laptop to auto guide the mount. It would be neat for the main camera to do it. The caveat is that this live view sampling would add extra noise to the main images (more heat, etc). But in my opinion, the huge boost in convenience would make that worth it, given that modern post processing is pretty good for mitigating noise.

The signals that have to be sent to the mount are pretty simple too, so I'll look at this at some point in the future. The bottleneck for me is that I have never got 'real' auto guiding to work reliably with my mount, so if I run into issues it would be tricky, as there's no baseline working version.

> Maybe an auto image upload service?

This sounds pretty useful, even uploading seamlessly to a phone or laptop would be a huge time saver for most people! I'll set up ML on my 6D and try out some of the demo stuff that use the network stack.

Is there a sorted list of things that people want and no one has got around to implementing yet?

names_r_hard 9/3/2025||
I am definitely an astro noob :) LV sampling was just the first idea I thought of. We could also load the last image while the next was being taken, and extract guide points from that (assuming an individual frame has enough distinct bright points... which it might not... you could of course sum a few in software). It's a larger image, but your time constraints shouldn't be tight. That way you're not getting any extra sensor heat. Some CPU heat though, dunno if that would be noticeable.

For networking, this module demonstrates the principles: https://github.com/reticulatedpines/magiclantern_simplified/...

A simple python server, that accepts image data from the cam, does some processing, sends data back. The network protocol is dirt simple. The config file format for holding network creds, IP addr etc is really very ugly. It was written for convenience of writing the code, not convenience of making the config file.

You would need to find the equivalent networking functions (our jargon is "stubs"). You will likely want help with this, unless you're already familiar with Ghidra or IDA Pro, and have both a 6D and 200D rom dump :) Pop in the discord when you get to that stage, it's too much detail for here.

There's no real list of things people want (well, they want everything...). The issues on the repo will have some good ideas. In the early days of setting that up I tagged a few things as Good First Issue, but gave up since it was just me working on them.

I would say it's more important to find something you're personally motivated by, that way you're more likely to stick with it. It gets a lot easier, but it doesn't have a friendly learning curve.

fooker 9/3/2025||
Does LV sampling work when ..say.. a 120 second image is being captured?
names_r_hard 9/4/2025|||
I don't know of a way to do that. I don't think the cam will ever display an image on LV while a capture is in progress. The readout process from the sensor is fundamentally decoupled from the capture. You could probably interleave long exposures with short ones at greatly boosted ISO, and display only the short ones on LV.

I was assuming it would be possible to quite accurately model the drift over time, and adjust the model based on the last image. The model continuously guides the mount, and the lag in updates hopefully wouldn't matter - so you can use saved images, not LV. In fact, we can trigger actions to occur on the in memory image just before writing out.

fooker 9/4/2025||
> quite accurately model the drift over time

This indeed seems like something someone would have written software for!

CarVac 9/3/2025|||
I would love to add it to my 1Ds3. I recall reading that once upon a time Canon wrote ML devs a strongly worded letter telling them not to touch a 1D, but a camera that old is long obsolete.

(I literally only want a raw histogram)

(I also have a 1Dx2 but that's probably a harder port)

dylan604 9/3/2025|||
I have been toying with the idea of picking up an old 1D. I can't remember the guy's name that I saw do this, but he had his 1D modified to use a PL mount instead of an EF mount. Something about the 1D body (being thicker I guess) allowed for the flange distances to work out. He then mounted a $35,000 17mm wide angle to it. That lens was huge and could just suck in photons. With that lens, he could expose the night sky in 1/3 second exposures what would take multiple seconds on my gear. He mounted the camera to the front of his boat floating down river using night vision goggles to see where he was going. The images were fantastic. I always wanted to do something crazy like that
names_r_hard 9/3/2025||||
Canon have never had any contact with the ML project for any reason, to the best of my knowledge. The decision to stay away from the 1D series was made by the ML team, I would say out of an abundance of caution, to try not to annoy them.
omegacharlie 9/4/2025||
Might be time to reconsider. Canon are (supposedly) not planning any further flagship DSLRs and I see little wrong with modifying your own property.

Independent of that: how dangerous is ML dev to the cameras themselves (in terms of brick potential)? Permanently bricking a camera in the price range of the 1DX is not exactly my idea of a good time. :-)

names_r_hard 9/4/2025||
Over the years, a few devs have temporarily soft-bricked cams, requiring various non-standard methods to restore them to working order. I think all attempts have succeeded so far! I don't think any permanent physical damage has been triggered by devs. It is definitely a real risk, but we try to work with the OS when possible, and the OS was written to try to make these things hard.

I don't think I'd want to learn the ropes on a cam too expensive to psychologically say goodbye to. Maybe save that for the second port.

omegacharlie 9/4/2025||
Thank you for the insight. Best of luck to you and the future of the project!
dingaling 9/3/2025|||
The 1Ds3 still renders wonderful images but the UI feels so limited now. ML would transform it.
CarVac 14 hours ago||
Actually when I go back to the 1Ds3 from my 1Dx2 the only thing I miss is the front buttons near the lens mount, which I use for playback and magnification.
archerx 9/3/2025||
I use magic lantern on my canon 650D to get a clean feed for my blackmagic ATEM. The installation was easy and everything works well.

Thank you and the magic lantern team!

IshKebab 9/3/2025||
> The main thing you need is knowledge of C, which is a small language that has good tutorials.

Heh, a little like saying "the main thing you need is to be able to play the violin, which is a small instrument with good tutorials".

names_r_hard 9/3/2025|
I stand by my statement! Compare the length of the C standard to JS / ECMAScript, or C++! :)

Maaaaybe I'm hiding a tradeoff around complexity vs built-in features, but volunteers can work that out themselves later on.

You honestly don't need much knowledge of C to get started in some areas. The ML GUI is easy to modify if you stay within the lines. Other areas, e.g., porting a complex feature to a new camera, are much harder. But that's the life of a reverse engineer.

Etheryte 9/3/2025|||
Conversely, the terseness of the C standard also means there are many more footguns and undefined behaviors. There are many things C is, but easy to pick up is not one of them. I loved C all the way up until I graduated uni, but it would be a very hard sell to get me to pick it for a project these days. To me, working with C is akin to working with assembly: you feel like you're doing real programming, but realistically there are better options for most scenarios these days.
names_r_hard 9/3/2025|||
I agree with some of what you're saying; some of the well known risks of working in C are because it's a small standard. But much of the undefined behaviour was deliberately made that way to support the hardware of the time - it's hard to be cross-platform on different architectures as a low-level language.

C genuinely is easy to pick up. It is harder to master. And you're right, for many domains there are better options now, so it may not be worthwhile mastering it.

Because it's an old language, what it lacks in built-in safety features, is provided by decades of very good surrounding tooling. You do of course need to learn that tooling, and choose to use it!

In the context of Magic Lantern, C is the natural fit. We are working with very tight memory limitations, due to the OS. We support single core 200Mhz targets (ARMv5, no out-of-order or other fancy tricks). We don't include C stdlib, a small test binary can be < 1kB. Normal builds are around 400kB (this includes a full GUI, debug capabilities, all strings and assets, etc).

Canon code is probably mostly C, some C++. We have to call their code directly (casting reverse engineered addresses to function pointers, basically). We don't know what safety guarantees their code makes, or what the API is. Most of our work is interacting with OS or hardware. So we wouldn't gain much by using a safe language for our half.

chownie 9/3/2025||
> C genuinely is easy to pick up.

I feel like this is a bit of an https://xkcd.com/2501/ situation.

C is considered easy to pick up by the average user posting HN comments because we have the benefit of years -- the average comp sci student, who has been exposed to JavaScript and Python and might not know what "pass by reference" even means... I'm not sure they're going to consider C easy.

names_r_hard 9/3/2025|||
I've taught several different languages to both 1st year uni students, and new joiners to a technical company, where they had no programming background.

Honestly, C seems to be one of the easier languages to teach the basics of. It's certainly easier than Java or C++, which have many more concepts.

C has some concepts that confuse the hell out of beginners, and it will let you shoot yourself in the foot very thoroughly with them (much more than say, Java). But you don't tend to encounter them till later on.

I have never said getting good at C is easy. Just that it's easy to pick up.

wkjagt 9/3/2025|||
C made a lot more sense to me after having done assembly (6502 in my case, but it probably doesn't matter). Things like passing a reference suddenly just made sense.
F3nd0 9/3/2025|||
I agree. For me as a beginner, C was relatively easy to learn the basics of. Sure, I never went on to get familiar with all the details and become proficient in it, but the basic concepts really aren’t that hard to understand. There’s just not too much you need to wrap your head around.
pixelmonkey 9/3/2025||||
C is taught as the introduction to programming in CS50x, Harvard's wildly popular MOOC for teaching programming to first-year college students and lifelong learners via the internet. Using the clang toolchain gives you much better error messages than old versions of gcc used to give. And I bet AI/LLM/copilot tools are pretty good at C given how much F/OSS is written in C.

Just to provide another data point here... that C is a little easier to pick up, today, than it was in the 1990s or 2000s, when all you had was the K&R C book and a Linux shell. I regularly recommend CS50x to newcomers to programming via a guide I wrote up as a GitHub gist. I took the CS50x course myself in 2020 (just to refresh my own memory of C after years of not using it that much), and it is very high quality.

See this comment for more info:

https://news.ycombinator.com/item?id=40690760

jibal 9/3/2025||||
Everything is passed by reference in Python. Everything is passed by value in C.
__mharrison__ 9/3/2025||
Not quite true for Python but a close approximation.
jcelerier 9/3/2025|||
depends on which school you went? the one I've been to started with C and LISP in the 2010s and then moved on to C++ and java with some python
ptero 9/3/2025|||
Undefined behaviors -- yes. But being able to trigger undefined behavior is not a huge footgun by itself. Starting with good code examples means you are much less likely to trigger it.

Having a good, logical description of supported features, with a warning that if you do unsupported stuff things may break, is much more important than trying to define every possible action in a predictable way.

The latter approach often leads to explosion of spec volume and gives way more opportunities for writing bad code: predictable in execution, but instead with problems in design and logic which are harder to understand, maintain and fix. My 2c.

BiteCode_dev 9/3/2025|||
I stand by my statement! Compare the number of strings a violin has to the keys on a piano! :)
chrisweekly 9/3/2025||
I know it's all at least semi- tongue-in-cheek, but IRL a piano's discrete, sequential keys are what make it almost inarguably the easiest instrument to learn.
IshKebab 9/3/2025||
That's exactly his point. Languages aren't easier to learn simply because their specification is short, any more than instruments are easier to play because they have fewer strings.
jibal 9/3/2025||
The analogy is completely invalid. Languages with small specifications are easier to learn.

It's sad that the dev, who has done great work, has to spend time defending the C language from critters living under a bridge when it's a fixed element that isn't going to change.

chrisweekly 9/3/2025|||
Accusing people who disagree w/ you of being trolls doesn't bolster your argument.
jibal 9/3/2025||
Speaking of weak arguments: that wasn't the basis of the accusation.
thijson 9/3/2025||||
People don't argue with a carpenter over what tools were used to build a piece of furniture. It feels like a religious debate.
IshKebab 9/3/2025|||
> Languages with small specifications are easier to learn.

Only if all other things are equal, which they never are.

aorth 9/3/2025||
> We're using Git now. We build on modern OSes with modern tooling. We compile clean, no warnings. This was a lot of work, and invisible to users, but very useful for devs. It's easier than ever to join as a dev.

Very impressive! Thankless work. A reminder to myself to chase down some warnings in projects I am a part of...

ChrisMarshallNY 9/3/2025|
It’s not too difficult, if you do it from the start, and by habit.

I have an xcconfig file[0] that I add to all my projects, which turns on treat-warnings-as-errors and enables all warnings. In C, I used to compile with -Wall.

I also use SwiftLint[1].

But these days, I almost never trigger any warnings, because I’ve developed the habit of good coding.

Since Magic Lantern is firmware, I’m surprised that this was not already the case. Firmware needs to be as close to perfect as possible (I used to write firmware. It’s one of the reasons I’m so anal about Quality).

[0] https://github.com/RiftValleySoftware/RVS_Checkbox/blob/main... (I need to switch the header to MIT license, to match the rest of the project. It’s been a long time, since I used GPL, but I’ve been using this file, forever).

[1] https://littlegreenviper.com/swiftlint/

names_r_hard 9/3/2025|||
It's not firmware :) We use what is probably engineering functionality, built into the OS, to load and execute a file from disk. We run as a (mostly) normal program on the cam's normal OS.

We build with: -Wall -Wextra -Werror-implicit-function-declaration -Wdouble-promotion -Winline -Wundef -Wno-unused-parameter -Wno-unused-function -Wno-format

Warnings are treated as errors for release builds.

ChrisMarshallNY 9/3/2025||
Awesome!

Great work, and good luck!

names_r_hard 9/3/2025||
Thanks, and for what it's worth, I didn't downvote you (account is too new to even do so :D ), and I agree with your main point - it's not that hard to avoid all compiler warnings if you do it from the start, and make sure it's highly visible.

You only add one at a time, so you only need to fix one at a time, and you understand what you're trying to do.

It is, however, a real bitch to fix all compiler warnings in decade old code that targets a set of undocumented hardware platforms with which you are unfamiliar. And you just updated the toolchain from gcc 5 to 12.

ChrisMarshallNY 9/3/2025||
Oh, don't worry about the downvotes. Happens every time someone starts talking about improving software Quality around here.

Unpopular topic. I talk about it anyway, as it's one of my casus belli. I can afford the dings.

BTW: I used to work for Canon's main [photography] competitor, and Magic Lantern was an example of the kind of thing I wanted them to enable, but they were not particularly open to the idea - control freaks.

Also, it's a bit "nit-picky," I know, but I feel that any software that runs on-device is "firmware," and should be held to the same standards as the OS. I know that Magic Lantern has always been good. We used to hear customers telling us how good it was, and asking us to do similar.

I think RED had something like that, as well. I wonder how that's going?

names_r_hard 9/3/2025||
Okay, good, just making sure :) Fun to hear that at least some photo gear places are aware of ML!

I have done a stint in QA, as well as highly aggressive security testing against a big C codebase, so I too care a lot about quality. And you can do it in C, you just have to put in the effort.

I'd like to get Valgrind or ASAN working with our code, but that's quite a big task on an RTOS. It would be more practical in Qemu, but still a lot of effort. The OS has multiple allocators, and we don't include stdlib.

Re firmware / software, doesn't all software run on a device? So I suppose it depends what you mean by a device. Is a Windows exe on a desktop PC firmware? Is an app from your phone's store firmware? We support cams that are much more powerful than low end Android devices. Here the cam OS, which is on flash ROM, brings the hardware up, then loads our code from removable storage, which can even be a spinning rust drive. It feels like they're firmware, and we're software, to me. It's not a clearly defined term.

The main reason I make the distinction is because we get a lot of users who think ML is like a phone rom flash, because that's what firmware is to most people. Thus they assume it's a risky process, and that the Canon menus etc will be gone. But we don't work that way.

ChrisMarshallNY 9/3/2025||
Good point, and really just semantics. I guess you could say native mobile apps are “firmware,” using my criteria.

But I put as much effort into my mobile apps as I did into my firmware projects (it’s been decades since I wrote firmware, BTW. The landscape is quite different these days - this is my first ever shipped engineering project[0]. Back then, we could still use an ICE to debug our software).

It just taught me to be very circumspect about Quality.

I do feel that any software (in any part of the stack) I write that affects moving parts, needs to be quite well-tested. I never had issues with firmware, but drivers are another matter. I've fried stuff that cost a lot.

[0] https://littlegreenviper.com/TF30194/TF30194-Manual-1987.pdf

names_r_hard 9/3/2025||
Yes, it gets a bit blurry, especially given how fast solid-state storage is these days.

I think IoT has seen a resurgence in firmware devs... but regrettably not so much in quality. Too cheap to be worth it, I suppose. I can imagine a microwave could be quite a concerning product to design - there's some fairly obvious risks there!

Certainly, whatever you class ML as, we could damage the hardware. The shutter in particular is quite vulnerable, and Canon made an unusual design choice: the cam flashes an important ROM with settings at every power off. Leaving those settings in an inconsistent state can prevent the cam from booting. We do try to think hard about contingencies, and program defensively, at least for anything we release. I've done some very stupid tests on my own cams, and only needed to recover with UART access once ;)

I haven't used an ICE, but I have used SoftICE. Oh, and we had a breakthrough on locating JTAG pinouts very recently, so we might end up being able to do something similar.

ChrisMarshallNY 9/3/2025||
You do need to be careful with the shutter. It is possible to do damage (and add dirt) from it.

We had to add software dust removal, because the shutter kicked dirt onto the sensor.

I’m assuming that, at some point, the sensor technology will progress to where mechanical shutters are no longer necessary.

aorth 9/3/2025|||
Great, thanks for sharing the links.

By the way, Rift Valley Software? I'm writing to you from Kenya, one of the homes of the Great Rift Valley. It is truly remarkable to drive down the escarpment just north of Nairobi!

ChrisMarshallNY 9/3/2025||
I used to live in Uganda.

Visiting the Rift Valley in Southwest Uganda was one of the most awesome experiences of my childhood. My other company, Little Green Viper, riffs on that, too.

I was born in Africa, and spent the first eleven years of my life, there.

Had to leave Uganda in a hurry, though (1973).

heliographe 9/3/2025||
Yes! As a software developer in the photography space, we are deeply in need of projects like this.

The photography world is mired in proprietary software/formats and locked-down hardware; and while it has always been true that a digital camera is “just” a computer, now more than ever it is painful just how limited and archaic on-board camera software is compared to what we’ve grown accustomed to in the mobile phone era.

If I compare photography to another creative discipline I am somewhat familiar with, music production - the latter has way more open software/hardware initiatives, and freedom of not having to tether yourself to large, slow, user-abusing companies when choosing gear to work with.

Long live Magic Lantern!

waz0wski 9/3/2025|
Agreed

cries in .x3f & Sigma Photo Pro

shrinks99 9/3/2025||
If you don't know about it already and are a macOS user, you may appreciate https://x3fuse.com/
privatelypublic 9/3/2025||
Unfortunately, they're not using a GitHub organization - leaving it to fail again if that account disappears. Continuity is hard.

> git clone https://github.com/reticulatedpines/magiclantern_simplified

ekianjo 9/3/2025|
Why would it fail if the code is available?
privatelypublic 9/3/2025||
If it's github.com/magiclantern/magiclantern, ownership can change hands via organizational user changes.
teamonkey 9/3/2025||
An alternative to Magic Lantern is CHDK. Unfortunately that also feels somewhat abandoned and at the best of times held together with string* so I’m glad ML is back.

*No judgement, maintaining a niche and complex reverse-engineering project must be a thankless task

https://chdk.fandom.com/wiki/CHDK

fitsumbelay 9/3/2025||
This is good news

One of those projects I wanted to take on but always backlogged. Wild that they've been on a 5-year hiatus -- https://www.newsshooter.com/2025/06/21/the-genie-is-out-of-t... -- that's the not-so-happy side of cool freeware.

names_r_hard 9/3/2025|
No time like the present :)

It is actually easier to get started now, as I spent several months updating the dev infrastructure so it all works on modern platforms with modern tooling.

Plus Ghidra exists now, which was a massive help for us.

We didn't really go on hiatus - the prior lead dev left the project, and the target hardware changed significantly. So everything slowed down. Now we are back to a more normal speed. Of course, we still need more devs; currently we have 3.

nobleach 9/3/2025||
For a look at some of the amazing output from an "ancient" EOS, you can look at Magic Lantern's Discord. It's rather shocking how far this little camera could be pushed. It is definitely a fun hobby project to fool around with these things. After a while I stopped having the time and moved over to Sony APS-C with vintage lenses. I was able to maintain some of the aesthetic without getting frustrated by stuttering video. Still, it's really a cool project.
ZiiS 9/3/2025|
This news is probably my excuse to buy my fourth EOS; the first three were 100% only because of Magic Lantern. Can't understand why manufacturers make this hard, as it sells hardware.
Ballas 9/3/2025||
> Can't understand why manufacturers make this hard as it sells hardware.

Because a lot of features that cost a lot of money are only software limitations. With many of the cheaper cameras the max shutter speed and video capabilities are limited by software to make the distinction with the more expensive cameras bigger. So they do sell hardware - but opening up the software will make their higher-end offerings less compelling.

i_am_proteus 9/3/2025||
Magic Lantern is fantastic software that makes EOS cameras even better, but I understand why manufacturers make it hard:

Camera manufacturers live and die on their reputation for making tools that deliver for the professional users of those tools. On a modern camera, the firmware and software needs to 100% Just Work and completely get out of the photographer's way, and a photographer needs to be able to grab a (camera) body out of the locker and know exactly what it's going to do for given settings.

The more cameras out there running customized firmware, the more likely someone misses a shot because "shutter priority is different on this specific 5d4" or similar.

I'm sure Canon is quietly pleased that Magic Lantern has kept up the resale value of their older bodies. I'm happy that Magic Lantern exists-- I no longer need an external intervalometer! It does make sense, though, that camera manufacturers don't deliberately ship cameras as openly-programmable computational photography tools.

mcdeltat 9/3/2025||
You have an interesting point about consistency and I'd like to provide a counterargument. While control consistency is very important, the actual image you get from a camera varies significantly between models as the manufacturers change tone curves, colour models, etc. JPGs from the camera are basically arbitrary and RAWs are not much better. The manufacturers don't provide many guarantees, it's just up to you and downstream software to figure out what looks good. Funny that so much thought goes into designing the feel of a camera yet the photo output is basically undefined...

Also, another thing: Magic Lantern adds optional features which are arbitrarily(?) not present on some models. Perhaps Canon doesn't think you're "pro enough" (e.g. haven't spent enough money), so they don't switch on focus peaking or whatever on your model.

i_am_proteus 9/3/2025||
If you want JPGs to look different, you can change them in the camera, and RAW files are just that: raw. They will vary between cameras slightly because the cameras have different sensors. Editing RAWs from 5d3 vs. 5d4 vs. 6d (my only experience) is not very different.

Ultimately, the workflow that matters is a photographer capturing the image and getting the output to the studio quickly, in high quality. Event photographers often tether via ethernet or USB and the studio can post-process the RAW in minutes (or even seconds). The part of this that is most sensitive and hardest to recover from error is the photographer capturing the image, which is why consistency and usability of camera controls is so important.

IIRC none of the EOS DSLRs had focus peaking from the factory, you need Magic Lantern -- Canon didn't program it at all.

mcdeltat 9/3/2025||
My point about JPGs is they will look different between cameras anyway because of software differences, even with the "same" settings, so they're already inconsistent from the user perspective. Editing RAW is not necessarily different, but from what I've heard that's because RAW editing software busts its ass to correct for all manner of arbitrary differences between camera models. It's in spite of camera design that we have consistency, not really because of it.
More comments...