Spade Hardware Description Language
118 points by spmcl | 61 comments | 5/12/2025, 12:19:50 PM | spade-lang.org
But. The focus on "CPU" examples on the landing page (A 3 stage CPU supporting Add, Sub, Set and Jump, "You can easily build an ALU") is immediately discouraging. I implement and verify FPGA designs for a living, and the vast, vast majority of my work is nothing like designing a CPU. So my fear is that this new hardware description language hasn't been created by a veteran who has many years of experience in using HDLs and therefore knows what and where the real pain points are, but by someone who's ultimately a software developer – even a highly skilled and motivated software developer – and who has never designed, implemented, and verified a large FPGA design. And skilled, motivated software developers without a lot of domain-specific experience tend to solve the wrong problems.
I would be happy to be proven wrong, though.
That's a good reminder that I need to update the example on the website; I must have written that example almost 3 years ago at this point :)
For more up-to-date motivation, my talk from LatchUp last year is probably the best one I have: https://www.youtube.com/watch?v=_EdOHbY2dlg&t=277s
> So my fear is that this new hardware description language hasn't been created by a veteran who has many years of experience in using HDLs
That's annoyingly quite close to the truth :D But I think I have enough experience now to not be completely stumbling around in the dark
My probably controversial opinion on output code quality is that if you have to look at the generated Verilog, I've done something wrong; needing to go down to that level usually means there's a compiler bug.
Of course, you could just be looking at tool output like timing reports, and then, as someone else commented, it is a bit of a tooling issue. Spade does emit (* src = *) attributes which yosys and friends accept to show the original Spade source instead of Verilog, but it is still kind of leaky in some cases.
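For the curious, the emitted code ends up looking roughly like the sketch below; the module, file name and positions are invented, but the attribute shape is the kind of thing yosys-style tools pick up for source tracking.

    // Illustrative sketch only -- module, file name and positions are made up.
    module counter_gen(input clk, output reg [7:0] count);
      (* src = "counter.spade:14.5-14.32" *)
      wire [7:0] count_next = count + 8'd1;

      always @(posedge clk) count <= count_next;
    endmodule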
But that's because the tools debug at the C level. That isn't the case for SV at all - all the tools operate at the SV level.
Unless you're going to create commercial grade simulators, debuggers, synthesis tools etc. then users are going to be debugging SV.
My day job is debugging generated SV and even though it isn't nearly as bad as the code smallpipe posted it still suuucks. It costs me a lot of time reverse engineering it.
If anything is going to replace SV (and I really hope it does) it really really needs to focus on producing clean debuggable output. That includes things like using interfaces, structs and so on.
https://en.wikipedia.org/wiki/Computer-aided_software_engine...
Also, "just a tooling issue" is a pretty big problem when you're talking about something that wants to be adopted as part of the toolchain.
The word "just" is carrying the weight of the world on its shoulders…
FPGA toolchains are infamous for being among the worst and most cursed toolchains in the world. Writing tcl scripts to imperatively connect blocks together in a block diagram that will integrate all the Verilog code is not just normal, but encouraged, because their internal block diagram description file is git-hostile.
The challenge of a HDL over a regular sequential programming (software) language is that a software language is programmed in time, whereas a HDL is programmed in both space and time. As one HDL theory expert once told me "Too many high level HDLs try to abstract out time, when what they really need to do is expose time."
> The challenge of a HDL over a regular sequential programming (software) language is that a software language is programmed in time, whereas a HDL is programmed in both space and time. As one HDL theory expert once told me "Too many high level HDLs try to abstract out time, when what they really need to do is expose time."
That's an excellent quote, I might steal it :D In general, I think good abstractions are the ones that make important details explicit rather than ones that hide "uninteresting" details.
Exactly! It's astounding how often the documentation of some vendor component says: "data_out is valid 2 cycles after read_enable is asserted", while NOTHING in the actual module definition makes any mention of this. There's so much dumb and error-prone mental arithmetic designers have to do to synchronize such latencies.
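To make that concrete, here's a sketch of the usual situation (everything below is invented, but it should look familiar): the two-cycle latency exists only in prose, and the only machine-checkable form of it is something you have to bolt on yourself.

    // Typical vendor-style interface: nothing in the ports says "2 cycles".
    module vendor_ram (
        input  logic        clk,
        input  logic        read_enable,
        input  logic [9:0]  addr,
        output logic [31:0] data_out   // "valid 2 cycles after read_enable" -- datasheet only
    );
        // ... vendor implementation ...
    endmodule

    // The closest you get to a checkable contract is an assertion like:
    //   assert property (@(posedge clk) read_enable |-> ##2 !$isunknown(data_out));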
Spade does make a nod at this with its pipelining notation. The issue I have with it is that it takes too simplistic an approach to said port timings. In a Spade pipeline you separate "pipeline stages" by adding a "reg;" statement on its own line. (It's an approach shared by another language called TL-Verilog.) A consequence of this style is that all inputs arrive at the same time (say cycle 0), and all results are produced at a second time (say cycle 5), irrespective of whether an input is actually only ever needed in a final addition in cycle 4. It'll insert the 4 extra registers regardless. Likewise, it leads to unnatural expression of subpipelines, where syntactically you can already see a value, but can only _access_ it 3 pipeline stages later.
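For anyone who hasn't written in that style, it corresponds roughly to the hand-written SystemVerilog below (a toy 3-stage version, names made up): c is only needed in the last stage, but it still gets dragged through every earlier one.

    module pipe3 (
        input  logic        clk,
        input  logic [7:0]  a, b, c,
        output logic [16:0] out
    );
        logic [15:0] prod_s1, prod_s2;
        logic [7:0]  c_s1, c_s2;     // c just riding along until the final add

        always_ff @(posedge clk) begin
            // stage 1
            prod_s1 <= a * b;
            c_s1    <= c;
            // stage 2
            prod_s2 <= prod_s1;
            c_s2    <= c_s1;
            // stage 3
            out     <= prod_s2 + c_s2;
        end
    endmodule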
With SUS, I have a solution to this: Latency Counting. (Intro here: https://m.youtube.com/watch?v=jJvtZvcimyM&t=937). A similar philosophy is also followed by Filament, though they go a step further by adding validity intervals too.
The gist of Latency Counting is that instead of explicitly marking and naming "pipeline stages", you annotate a statement to say that it "takes one cycle", and through exploring the dependency graph between your wires, the compiler assigns a unique "absolute latency" to every wire and places registers accordingly. (And now it can even infer submodule parameters based on this pipelining, in a bid to do HLS-style design in an RTL language).
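Hand-written, the kind of thing per-wire latencies let you express is roughly the toy sketch below: c is simply expected two cycles later at the interface, so no delay registers are inserted for it. (Written by hand, that timing contract once again lives only in a comment, which is exactly what an annotation in the language fixes.)

    module pipe3_skewed (
        input  logic        clk,
        input  logic [7:0]  a, b,
        input  logic [7:0]  c,      // expected 2 cycles after a/b
        output logic [16:0] out
    );
        logic [15:0] prod_s1, prod_s2;

        always_ff @(posedge clk) begin
            prod_s1 <= a * b;
            prod_s2 <= prod_s1;
            out     <= prod_s2 + c;
        end
    endmodule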
I have looked at TL-Verilog, and I love the language. I am on the fence about the syntax, which is arguably a nearly inconsequential nit given how far languages and tools need to progress.
https://github.com/TL-X-org/TL-V_Projects
Wow, a TL-Verilog video! https://www.youtube.com/watch?v=o2epusH-fXI
The hard part has never been writing HDL; it's verifying HDL and making the verification as organized and as easy as possible. Teams allegedly spend something like 20% of their time on design and 80%+ on verification (definitely true at my shop).
Edit: I see it's tightly integrated with cocotb, which is good. But someone needs to take a verification-first approach to writing a new language for HDL. It shouldn't be a fun afterthought; it's the bulk of the job.
That is a good point. Sadly I'm not experienced enough with verification to know what is actually needed from a language design perspective, which is why I just offload to cocotb. There are a few interesting HDLs that do focus more on verification: ReWire, PDVL, Silver Oak, and Kôika are the ones I know about, if you're interested in looking into them.
Also, nitpick, but Amaranth does have its own simulator as far as I know.
Also, great work with Spade. I love to hate, but the hardware industry needs folks like you pushing it forward. I just fear most people are making toys or focusing a ton of effort on the wrong issues (how to write HDL in a different way) instead of solving industry issues like verification, wrangling hand-written modules with enormous I/O, stitching IP together, targeting real FPGAs, auto-generating memory maps, etc. Some of that is a tough solve because it's proprietary.
> wrangling hand written modules with enormous I/O, stitching IP together
This is something where I'm confident a good type system can help significantly; part of the problem imo is that module interfaces are often communicated with prefixes on variable names. The Spade type system bundles them together as one interface, and with methods on that interface you can start to transform things in a predictable way.
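In SystemVerilog terms, the difference I mean is roughly the made-up sketch below: the prefix style only exists as a naming convention, while the bundled style is something the tools can actually check and transform.

    // Prefix-soup style: the grouping is purely a convention.
    module consumer_flat (
        input  logic        clk,
        input  logic        fifo_valid,
        input  logic [31:0] fifo_data,
        output logic        fifo_ready
    );
    endmodule

    // Bundled style: the grouping is a first-class object.
    interface stream_if;
        logic        valid;
        logic [31:0] data;
        logic        ready;
        modport sink (input valid, data, output ready);
    endinterface

    module consumer_bundled (
        input logic clk,
        stream_if.sink s
    );
    endmodule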
Generating memory maps is also an obvious problem to solve with a language that attaches more semantics to things. I haven't looked into it with Spade, but I believe the Clash people are working on something there
If there is something worth checking, but it's only possible to check it in simulation (think UBSan), then you should add it anyway, just so that it can get triggered by a counterexample. (Think debug-only signals/wires/record fields/inputs/outputs/components.) You don't want people to write lengthy exhaustive tests or stare at waveforms all day.
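In SystemVerilog land that's the sort of thing below (toy example, names made up): the check costs nothing in synthesis, but any test, fuzz run, or formal counterexample that violates it will fire.

    module fifo_checks #(parameter int DEPTH = 16) (
        input logic                   clk,
        input logic                   push,
        input logic                   pop,
        input logic [$clog2(DEPTH):0] count
    );
    `ifndef SYNTHESIS
        // Never push into a full FIFO, never pop an empty one.
        assert property (@(posedge clk) push |-> count < DEPTH);
        assert property (@(posedge clk) pop  |-> count > 0);
    `endif
    endmodule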
Note that the point of formal verification here isn't to be uptight about writing perfect software in a vacuum. It's in fact the opposite. It's about being lazy and getting away with it. If you fuzz Rust code merely to make sure that you're not triggering panics, you've already made a huge improvement in software correctness, even though you haven't defined any application specific contracts yet!
- https://danluu.com/why-hardware-development-is-hard/
- https://danluu.com/pl-troll/
That said, I'm all for better languages, if they really are better and as expressive.
DSLs are great and elegant and beautiful for expressing a domain solution. Once. But real solutions evolve, and as they do they get messy, and when that happens you need to address that with software tools. And DSLs are, intentionally, inadequate to the tasks of large-scale software design. So they add features[1], and we get stuff like SystemC that looks almost like real programming. Except that it's walled off from updates in the broader community, so no tools like MSAN or whatnot, and you're working from decades-stale language standards, and everything is proprietary...
Honestly my sense is that it's just time to rip the bandaid off and generate synthesizable hardware from Python or Rust or whatnot. More syntax isn't what's needed.
[1] In what can be seen as an inevitable corollary of Greenspun's Tenth Rule, I guess.
> Honestly my sense is that it's just time to rip the bandaid off and generate synthesizable hardware from Python or Rust or whatnot. More syntax isn't what's needed.
People who think the problem is that they can't synthesize a program in hardware from something like Python completely misunderstand the purpose of an HDL. You do not write a program and press a button to get that program on hardware. You write a program to generate the design and verify its correctness. It is much less like writing an imperative or functional program and more like writing macros or code generation.
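Even in plain SystemVerilog the "program" is really a generator. A trivial made-up example: the for loop below never runs on the device; it runs in the tool at elaboration time and stamps out WIDTH parallel XOR gates.

    module parity #(parameter int WIDTH = 8) (
        input  logic [WIDTH-1:0] bits,
        output logic             p
    );
        logic [WIDTH:0] partial;
        assign partial[0] = 1'b0;

        genvar i;
        generate
            for (i = 0; i < WIDTH; i = i + 1) begin : g_xor
                assign partial[i+1] = partial[i] ^ bits[i];
            end
        endgenerate

        assign p = partial[WIDTH];
    endmodule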
Now if you want to write a Python library for generating the underlying data for programming an FPGA or taping out circuits that's actually a good idea that people have tried out - the problem you run into though are network effects. Generating designs is easy, verifying and debugging them is very hard. All the money is in the tooling, and that tooling speaks HDLs.
The major HDLs (i.e., Verilog/SystemVerilog and VHDL) are not DSLs in any meaningful sense of the word. There exist HDLs which actually are DSLs, but they're mostly used by hobbyists and aren't gaining any significant traction in the industry.
But my point upthread is that even though these are "general purpose", they're still extremely limited in Practical Expressive Power for Large Scale Development, simply by being weird things that most people don't learn.
Python and Rust and even C++ projects can draw on decades of community experience and best practices and tools and tutorials that tell you how to get stuff done in their environments (and importantly how not to do things).
Literally the smartest people in software are trying to help you write Python et al. With e.g. SystemVerilog you're limited to whatever the yahoos at Synopsys thought was a good idea. It's not the same.
Tasks which are also best done by premier software development environments and not ad hoc copies of ideas from other areas.
I worked a bit with VHDL and the parallelism aspect is - to me - so fundamentally different from what our sequential programming languages can express that I'm not sure I can picture a layer of abstraction between this and that. How would that work?
You don't need to run Python/whatever to simulate and you don't need (and probably don't want) your semantic constraints and checks to be expressed in python/whatever syntax. But the process of moving from a parametrized design through the inevitable cross-team-design-madness and decade-stale-design-mistake-workarounds needs to be managed in a development environment that can handle it.
Modern VHDL isn't too far off what we need. I'd rather see more improvements to that. But most crucially, we need tooling that actually supports the improvements and new features. We don't have that today, it's an absolute mess trying to use VHDL '19 with the industry's standard tools. We even avoid using '08 for fear of issues. I can't speak to how far off SV is.
TL;DR: The hardware modules you're generating are represented as first-class objects that can be constructed from a DSL embedded within Python or explicitly from a list of primitives.
[1]: https://m-labs.hk/gateware/migen/
Although the last official release mentioned on the website is from 2021, it is still actively developed on GitHub [2]. See also contranomy [3] for a non-pipelined RV32I RISC-V core written in Clash.
[1] https://clash-lang.org/
[2] https://github.com/clash-lang/clash-compiler
[3] https://github.com/christiaanb/contranomy
That said, Clash is great and I know quite a few people at QBay. They don't seem to be slowing down any time soon!
It'll be a long while before either gets enough traction to be serious competition to SystemVerilog, even if SV is, compared to modern software languages, outdated.
Not to mention: existing designs already done in Verilog, VHDL or whatever. Converting such a design from one HDL to another may not be easy.
So as always: use the best tool for the job.
Hopefully one day we'll break open the hardware design ecosystem. Verilog & VHDL still being the de facto industry standard is pathetic. And IMO the only reason is the white-knuckle grip Intel (Altera again?) and Xilinx have over what languages are accepted by their respective proprietary design tools.
I'm a big Bluespec booster, and beyond the nice typing and functional programming you get I think the big advance it brings to the table is the Guarded Atomic Action paradigm, which simplifies reasoning about what the code is doing and means that it's usually not too painful to poke at the generated HDL too since there's a clear connection between the two halves. At $WORK$ we've been using Bluespec very successfully in a small team to quickly iterate on hardware designs.
I don't want to denigrate the Spade developers since it's clearly a labor of love and nicely done, but I feel that unless the underlying mental model changes there's not much benefit to any of these neo-HDLs compared to SV or VHDL.
With Spade my goal is to build new abstractions on top of RTL. That should allow you to operate at a higher abstraction level with minimal overhead most of the time, and dive down to regular RTL when necessary