Posted by thawawaycold 3 days ago
There already exist fantastic open source tools such as Yosys, Nextpnr, iverilog, OpenFPGALoader, ... that together implement most features a typical hardware dev would want. But chip support is unfortunately limited, so relatively few people use these tools.
We decided to build a VSCode extension that wraps these open source tools (https://edacation.github.io for the interested) to combat this problem. Students are already using it during our course and are generally very positive about the experience. It's by no means a full IDE, but if you're just getting started with HDL, it's a great way to get familiar with the tools. Instead of a mess of a toolchain that nobody truly knows how to use, you now get a few buttons to visualize a design and (soon) program it onto an FPGA.
There's also Lushay Code for slightly more advanced users. But we need more initiatives like these to really get the ball rolling and make an impact, so I'd highly recommend checking out and contributing to projects like this.
Sometimes some configuration is wrong and it behaves wrongly, but you don't know which configuration.
Sometimes it relies on other software installed on the system, and if you've installed an incompatible version it malfunctions without telling you about the incompatibility.
Sometimes the IDE itself has random bugs.
A lot of time is spent working around IDE issues.
VHDL and Verilog are used because they are excellent languages to describe hardware. The tools don't really hold anyone back. Lack of training or understanding might.
For many years, the consistent issue with FPGA development was that by the time you could get your hands on the latest devices, general-purpose CPUs were good enough. The reality is that if you are going to build a custom piece of hardware, then you are going to have to write the drivers and code yourself. It's achievable; however, it requires more skill than pure software programming.
Again, thanks to low-power and low-cost ARM processors, a class of problems previously handled by FPGAs has been picked up by cheap but fast processors.
The reality is that for major markets, custom hardware tends to win, as you can make it smaller, faster, and cheaper. In all probability, someone will have built and tested it on an FPGA first.
Maybe they were in the '80s. In 2025, language design has moved ahead quite a lot; you can't be saying that seriously.
Have a look at how clash-lang does it. It uses the functional paradigm, which is much more suitable for circuits than Verilog's pseudo-procedural style. You can also parameterize modules by modules, not just by bit width. Take a functional programmer, give him Clash, and he'll have no problems doing things in parallel.
Back when I was a systems programmer, I tried learning SystemVerilog. I had zero conceptual difficulty, but I just couldn't justify to myself why I should spend my time on something so outdated and badly designed. Hardware designers at my company at the time were, on the other hand, OK with Verilog, because they had never seen any programming languages other than C and Python and had no expectations.
The issue isn't the languages, it's the horrible tooling around them. I'm not going to install a multi-GB proprietary IDE that needs a GUI for everything and doesn't interoperate with any of my existing tools. An IDE that costs money, even though I already bought the hardware. Or requires an NDA. F** that.
I want to be able to do `cargo add risc-v` if I need a small cpu IP, and not sacrifice a goat.
I'm tooting my own horn with this, as I'm building my own language for doing the actual designing. It's called SUS.
Simple things look pretty much like C:
    module add :
        int#(FROM: -8, TO: 8) a,
        int#(FROM: 2, TO: 20) b ->
        int c {
        c = a + b
    }
It automatically compensates for pipelining registers you add, and lets you use this pipelining information in the type system. It's a very young language, but a few of my colleagues, some researchers at another university, and I are already using it. Check it out => https://github.com/pc2/sus-compiler
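For comparison, here is roughly what the same adder looks like hand-pipelined in plain Verilog. This is a sketch of my own (the bit widths and the two-stage split are arbitrary choices, not from the SUS example); note that every consumer of c now has to know the result shows up two cycles late:

    module add_pipelined (
        input  wire              clk,
        input  wire signed [4:0] a,   // holds -8..8
        input  wire signed [5:0] b,   // holds 2..20
        output reg  signed [5:0] c    // -6..28 fits in 6 signed bits
    );
        reg signed [4:0] a_r;
        reg signed [5:0] b_r;
        always @(posedge clk) begin
            a_r <= a;          // stage 1: register the inputs
            b_r <= b;
            c   <= a_r + b_r;  // stage 2: sum arrives two cycles after a/b
        end
    endmodule

That latency bookkeeping is exactly what SUS tracks for you in the type system.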
Also, modern Verilog (AKA SystemVerilog) fixes a bunch of the issues you might have had. There isn't much advantage to VHDL these days unless perhaps you are in Europe or work at certain US defense companies.
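To give one example (my own sketch, not the parent's list): the logic type plus the intent-declaring always_ff/always_comb blocks remove the old reg/wire confusion and let tools reject code that doesn't match the declared intent:

    module counter (
        input  logic       clk,
        input  logic       rst_n,
        output logic [7:0] count
    );
        // always_ff declares intent: the tool errors out if this block
        // can't be implemented as a clocked register
        always_ff @(posedge clk or negedge rst_n) begin
            if (!rst_n) count <= '0;
            else        count <= count + 8'd1;
        end
    endmodule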
# Vivado non-project flow: runs entirely in memory, no project on disk
create_project -in_memory -part ${PART}
set_property target_language VHDL [current_project]
read_vhdl "my_hdl_file.vhd"
# Synthesize, then run the implementation steps in sequence
synth_design -top my_hdl_top_module_name -part ${PART}
opt_design
place_design
route_design
# Reports, a checkpoint for later analysis, and the final bitstream
check_timing -file my_timing.txt
report_utilization -file my_util.txt
write_checkpoint my_routed_design.dcp
write_bitstream my_bitfile.bit
So if you learn VHDL first, you'll be on a solid footing.
If only hardware people would stop stereotyping. Also, do you guys not use formal tools (BMC, etc.) now? Who do you think wrote those tools? Heck, all the EDA stuff was designed by software people.
I just can't with the gatekeeping.
(Btw, this frustration isn't just pointed at you. I find this sentiment being parroted all over /r/FPGA on Reddit and elsewhere. It's damn frustrating, to say the least. Also, the worst thing is that all the hardware folks only know C, so they think all programming is imperative. VHDL is Ada, for crying out loud.)
As soon as they reach the point where it's as easy to put down an FPGA as it is an old STM32 or whatever, they'll get a lot more interesting.
Building bitstreams is IMO not complicated. (I just copy a Makefile from a previous project and go from there.)
Loading them is a matter of plugging in a JTAG cable and typing “make program”.
I don’t know what you mean with the “manage SW projects for”?
Yes? Yes it is? 9 times out of 10, my entire board is LVCMOS33. I would love to have the option to drop all of the power rail complexity in a simplified series of parts.
Sometimes you need maximum I/O speed. Sometimes you need maximum I/O flexibility. Sometimes you need processing horsepower. And sometimes you need the certainty of hardware timing, which you get on a gate array and don't get any time there's a processor involved. Or, often, what I actually need is just a little bit of weird logic that's asynchronous, but too hard to do with the remnants of 74-series or 4000-series logic that are still available.
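As a sketch of the kind of glue logic I mean (the address-decode scenario here is hypothetical), the whole thing is a clockless combinational module:

    module glue (
        input  wire [3:0] addr,
        input  wire       cs_n,
        input  wire       wr_n,
        output wire       dev_sel_n
    );
        // purely combinational, no clock anywhere: active-low select
        // when the (made-up) address matches and both strobes are low
        assign dev_sel_n = ~((addr == 4'hA) && !cs_n && !wr_n);
    endmodule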
> Building bitstreams is IMO not complicated. (I just copy a Makefile from a previous project and go from there.)
It is not complicated for people who have spent a long time learning and who have past designs they can copy from. (I have a few of those myself.) It is nasty to explain to a new person and very nasty to explain well enough to reproduce in the future without me around.
> Loading them is a matter of plugging in a JTAG cable and typing “make program”.
Yes, for you on the bench. Now program them into a product on an assembly line. Of course it is possible. It is still a giant headache, and quite a bit worse than just dealing with an MCU.
> I don’t know what you mean with the “manage SW projects for”?
Two words: Xilinx ISE.
As a concrete example of this: two weeks ago I wanted a 21-input OR gate. It would have been wonderful if I could spend a little bit of money, buy a programmable thing in a 24-pin package, put it down, figure out some way to get the bitstream in (this is never pleasant in medium-volume manufacturing, so it's not like we're going to solve it now), and get my gate function that is literally one line of HDL. One. Line.
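In Verilog, for instance, that one line is just a reduction OR; a minimal sketch:

    module or21 (
        input  wire [20:0] in,
        output wire        y
    );
        assign y = |in;  // reduction OR across all 21 inputs
    endmodule

Everything outside that assign is port boilerplate.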
As it was, a 21-input OR gate is so much work in 74-series logic that I abandoned that whole thing and we did the bigger-picture job in a different, worse, way.
Anyway, just tie the outputs of 21 emitter followers together, add a resistor, and - tadaaa - a 21-input OR!
But please don't complain when I give concrete examples of things I'd like to do but couldn't. (And please do recognize that there was a lot more context to the mess than just "I need an OR gate", but no one cares about the real gory details.)
Unless you're using some kind of USB DFU mode (which is annoying on assembly lines), SWD-based flashing of an MCU is substantially more complicated than the JTAG sequences that some internal-flash FPGAs use for programming.
These chips are just as easy or easier to program than any ARM MCU. Raw SPI NOR flash isn't "easy" to program if you've never done it before, either.
Oh look, the factory screwed up and isn't flashing the MCU this week! Does the board survive?
Oh look, the factory screwed up and isn't flashing the PLD this week! Does the board survive?
Oh look, the factory... wait, what is the factory doing and why are they putting that sticker on that....
You get the idea. Yes, yes, it is all solvable. I have never claimed it isn't. I am just claiming it is a giant pain in the ass and limits use of these things. I will bend over backwards to keep boards at one binary that needs to be loaded.
I should check my spam folder.
And sometimes you need support for multiple IO standards.
I don’t understand what point you’re trying to get across.
But if all you need is LVCMOS33, why do you not use a MAX10 FPGA with built-in voltage regulator? Or a similar FPGA device from GoWin that is positioned as a MAX10 alternative? What is wrong with those?
> JTAG
On our production line, we used JTAG to program the FPGA. We literally used the same “make program” command for development and production. That was for production volumes considerably larger than 100k.
> ISE
ISE was end-of-lifed when I started using FPGAs professionally. That was in 2012. The only reason it still exists is that some hold-outs are still using Spartan 6.
My point is twofold:
1. There are many niches. Your main needs are not the same as my main needs. And my needs are poorly met by existing products, so I want to see something better. (And I do buy chips.)
2. All of this is way, way harder than it needs to be. It could be easy, but it isn't. Everything is possible right now. But it wasn't random that I used the dreaded A-word ("Arduino"). Arduino is a kind of horrible product that did not make anything newly possible and did not really invent anything. It did not make anything really hard suddenly become easy. Hard things before Arduino were still hard after Arduino. It "just" made some things that used to be medium-hard pains-in-the-butt actually really quick and easy (at a little backend complexity cost: now you've got the Arduino IDE around, hope it doesn't break!).
It turns out that is very valuable.
And that is what I would like to see happen with FPGAs: make them easy to drop in instead of pains in the butt. All the pieces for this exist, nothing is new tech, no major revolutions need to happen. "Just" ease of use.
It did: onboarding people into embedded programming.
You just ran it, wrote a few lines, and you had a working blinky. Write some more and you had a useful toy. You could even technically make products with it, but going from this to C++ was easier because you already knew what you could do; you just needed to go through the pain of switching the toolchain once you were already invested.
Compare that to "you need to set up a compiler, toolchain, and SDK, figure out how to program the resulting binary, map the registers to your devboard pins, etc."
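To be fair, the FPGA equivalent of blinky is also only a few lines; it's everything around it that's heavy. A minimal Verilog sketch (assuming a 12 MHz board clock, and the pin mapping still lives in a separate constraints file):

    module blinky (
        input  wire clk,  // assumed: 12 MHz board clock
        output wire led
    );
        reg [23:0] counter = 24'd0;
        always @(posedge clk)
            counter <= counter + 24'd1;
        // bit 23 toggles every 2^23 cycles, roughly every 0.7 s at 12 MHz
        assign led = counter[23];
    endmodule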
How much easier does it need to be than putting down a single 1mm^2 LDO and a QFN IC? Is this really that difficult?
https://gowinsemi.com/en/product/detail/46/
- Requires just 1V2 + 3V3
- Available in QFN
- Bitstream is saved in internal flash or programmed to SRAM via a basic JTAG sequence
> Request Sample
> Please login to download the document.
I mean, yeah. My argument isn't that anything is impossible. My argument is that all of this is harder than it needs to be and this is not countering me!
The Gowin FPGAs are available (at a massive premium) from Mouser, just like whatever MCU you are already using. Many are available for $1-2 in China. Efinix parts are available from DigiKey, with some SKUs under $10.
All of the Gowin documentation is available on their site with a free, approval-less email login and no NDA, or via Google directly (PDFs, just like Xilinx, even numbered similarly.)
The industry draws a distinction between CPLDs and FPGAs, and rightly so, but most "Arduino-level" hobbyists think "I want something I can program so that it acts like such-and-such a circuit, I know, I need an FPGA!" when what they probably want is what the professional world would call a CPLD - and the distinction in terminology between the two does more to confuse than to clarify.
I don't know how to fix this; it'd be lovely if the two followed convergent paths, with FPGAs gaining on-board storage and the line between them blurring. Or maybe we need a common term that encompasses both. ("Programmable logic device" is technically that, but no one knows it.)
Anyway. CPLDs are neat.
But, really, no one cares what's inside the box. CPLD or FPGA, they're all about the same. The available PLDs are still not really acceptable. There's a bunch of 5V dinosaurs that the manufacturers would obviously love to axe, and a few tiny little micro-BGA things where you've got to be buying 100k to even submit a documentation bug report. Not much for stuff in the middle.
It's like FPGA companies don't want people using them, much like other vendors. Take the PixArt sensor I wanted to use: it needed an NDA, because some parasite dipshit executive or manager thinks that register layouts are extremely sensitive information.
I've had dozens of uses for an FPGA... but every single time I just can't be bothered. Why bother, when they make it a pain in the ass on purpose?
This is exactly it. Why hasn't some expert group produced a very simple open design board with a simple Arduino-like IDE for FPGAs? Make it easy to access and use, get it into the hands of makers/hobbyists and watch the apps/ecosystem explode.
As an example, one could provide soft-cores for 8051/RISC-V etc. right out of the box with a menu of peripherals to mix and match. Provide a simple language library wrapper say over SystemVerilog (or whatever the community settles on) just like Arduino did (with C++) that makes it "easy" to program the FPGA.
For apps, one good example would be putting TinyML (or other ML/LLM models) on an FPGA. This would ride the current technology wave and help make the project a success.
PS: Folks might find the book FPGAs for Software Programmers by Dirk Koch et al. (https://link.springer.com/book/10.1007/978-3-319-26408-0) useful.
Xilinx, Altera, and Lattice are culturally incapable of doing this. For Lattice especially it seems like a no-brainer, but they still don't understand the appeal of open source.
HDL isn't getting any easier, though, and that's where most of the complexity is.
Sadly, this doesn't seem to be panning out because the Chinese domestic market has perfectly functional Xilinx and Altera clones for a fraction of the price. Consequently, they don't care about anything else.
It irritates me to no end that Gowin won't open their bitstream format because they'd displace a bunch of the low end almost immediately.
"engineers are stuck using outdated languages inside proprietary IDEs that feel like time capsules from another century.". The article misses that Vivado was developed in the 2010's and released around 2013. It's a huge step-up from ISE if you know how to drive it properly and THIS is the main point that the original author misses. You need to have a different mindset when writing hardware and it's not easy to find training that shows how to do it right.
If you venture into the world of digital logic design without a guide or mentor, then you're going to encounter all the pitfalls and get frustrated.
My daily Vivado experience involves typing "make", then waiting for the result and analysing from there (if necessary). It takes experience to set up a hardware project like this, but once you get there it's compatible with standard version control, CI tools, regression tests, and the other nice things you expect from a modern development environment.
Exactly my experience with Quartus as well.
One really can't help but wonder whether those who always whine about the IDE/GUI just don't know any better.
The GUI interfaces are what newcomers tend to aim for straight away, but they're not good for any long-term "repeatable" build flows and they're no use for CI. I think this is where a lot of the frustration comes from.
Or how to use an LLM properly.
Unlikely this will ever happen but one can always dream.
The three SKUs between Xilinx and Altera that had HBM are no longer manufactured because Samsung Aquabolt was discontinued.
Secondly, integration with consumer devices and OSes is almost non-existent. It should really be simpler to interact with, a la a GPU or network chip, and there should be more mainboards with low-cost integrated FPGAs, even if they only have a couple hundred logic cells.
The real argument for open source toolchains is much narrower in scope, and implying that they're required to fix a nonexistent tooling problem is absurd.
The vendor tools are still a barrier to the high-end FPGAs' hardened IP.
The placement and routing flow for these devices is an NP-complete problem and is relatively non-deterministic (the exact same HDL will typically produce identical results, but even slightly different HDL can produce radically different results).
All of these use cases you've mentioned (AV1 decoders, NN layers, but especially a JS runtime) require phenomenal amounts of physical die area, even on modern processes. CPUs will run circles around the practical die area you can afford to spare - at massively higher clock speeds - for all but the most niche of problems.