In the presentation below, only its USB capabilities are discussed, but it was able to simulate PCI devices too.
https://download.microsoft.com/download/5/b/9/5b97017b-e28a-...
There is interest in getting 9front running on the Octeon chips. Since Plan 9 treats cross-platform support as first class, this would let you run anything you want on an Octeon card: boot the card from the host's root file system, write and test a program on the host, change the objtype env variable to mips/arm, build the binary for the Octeon, and then run it on the Octeon using rcpu (like running a command remotely via ssh). All you need is a working kernel on the Octeon and a host kernel driver; the rest works out of the box.
That fascinates me. Intel deserves a lot of credit for PCI. They built in future proofing for use cases that wouldn't emerge for years, when their bread and butter was PC processors and peripheral PC chips, and they could have done far less. The platform independence and general openness (PCI-SIG) are also notable for something that came from 1990 Intel.
or https://github.com/sora/wireshark-pcie/blob/master/plugins/p...
(The PCIe wire format consists of TLPs and DLLPs. Context: https://xillybus.com/tutorials/pci-express-tlp-pcie-primer-t... )
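For a rough feel of what's inside a TLP, here's a hedged sketch of the 3-DW memory-request header (field positions are summarized in comments rather than bitfields to dodge portability traps; treat the widths as a guide and verify against the spec or the primer above):

    #include <stdint.h>

    /* 3-DW PCIe memory-request TLP header, one uint32_t per dword.
     * Field packing noted from my reading of the spec; not normative. */
    struct tlp_mem_req_3dw {
        uint32_t dw0;  /* fmt/type, traffic class, attributes, length[9:0] */
        uint32_t dw1;  /* requester ID[31:16], tag[15:8], last BE[7:4], first BE[3:0] */
        uint32_t dw2;  /* address[31:2]; low two bits reserved */
    };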
As in, PCIem is going to populate the bus with virtually the same card (at least in terms of capabilities, vendor/product ID, and whatnot), so I don't see how you'd then add another layer of indirection that somehow transparently relays the unfiltered transaction stream PCIem provides to an actual PCIe card on the bus. I feel like there are many colliding responsibilities in this.
I would instead suggest having some sort of behavioural model (as in, a predefined set of data to feed from/to) and having PCIem log all the accesses your real driver does. That way the driver would have enough infrastructure not to crash, and at the same time you'd get the transport-layer information (roughly sketched below).
Ideally, the setup might be generic enough to apply to all (most?) PCIe device/driver pairs...
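For what it's worth, a minimal sketch of what such a behavioural model could look like, assuming a simple offset-to-value table with logged fallbacks (everything here is hypothetical, not PCIem API):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical table-driven register model: canned values to feed the
     * driver, plus a log of every access, per the suggestion above. */
    struct reg_entry { uint64_t offset; uint64_t value; };

    static const struct reg_entry model[] = {
        { 0x00, 0x12345678 },  /* e.g. an identification register */
        { 0x08, 0x00000001 },  /* e.g. a "device ready" status bit */
    };

    static uint64_t model_read(uint64_t offset)
    {
        for (size_t i = 0; i < sizeof model / sizeof model[0]; i++)
            if (model[i].offset == offset) {
                fprintf(stderr, "READ  0x%llx -> 0x%llx\n",
                        (unsigned long long)offset,
                        (unsigned long long)model[i].value);
                return model[i].value;
            }
        fprintf(stderr, "READ  0x%llx -> unmodelled, returning 0\n",
                (unsigned long long)offset);
        return 0;  /* usually enough to keep the driver from crashing */
    }

    static void model_write(uint64_t offset, uint64_t value)
    {
        fprintf(stderr, "WRITE 0x%llx <- 0x%llx\n",
                (unsigned long long)offset, (unsigned long long)value);
    }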
For "carving up" there are technologies like SR-IOV (Single Root I/O Virtualization).[2]
For advanced usage, like prototyping new hardware (host driver), you could use PCIem to emulate a not-yet-existing SR-IOV-capable GPU. This would allow you to develop and test the host-side driver (the one that manages the VFs) in QEMU without needing the actual hardware.
Another advanced use case could be a custom vGPU solution: instead of SR-IOV, you could try to build a custom paravirtualized GPU from scratch. PCIem would let you design the low-level PCIe interface for this new device while you write a corresponding driver on the guest. This would require significant effort, but it'd give you complete control.
[1] https://qemu.readthedocs.io/en/v8.2.10/system/devices/virtio...
[2] https://en.wikipedia.org/wiki/Single-root_input/output_virtu...
Passthru or time sharing? The latter is difficult because you need something to manage the timeslices and enforce process isolation. I'm no expert but I understand it to be somewhere between nontrivial and not realistic without GPU vendor cooperation.
Note that the GPU vendors all deliberately include this feature as part of their market segmentation.
Serious work, detail intense, but not so different in design from e.g. Carmack's Trinity engine. Doable.
The other existing solution to this is FPGA cards: https://www.fpgadeveloper.com/list-of-fpga-dev-boards-for-pc... - note the wide spread in price. You then also have to deal with FPGA tooling. The benefit is much better timing.
IME, PCIe prototyping is usually not straightforward unless you're willing to pay hefty sums.
Seems unlikely you'd emulate a real PCIe card in software because PCIe is pretty high-speed.
https://mikrotik.com/product/ccr2004_1g_2xs_pcie
and G-RAID
PCIem kinda does that, but it's down a level: it basically pops the device onto your host PCI bus, which lets real, unmodified drivers interact with the userspace implementation of your card; no QEMU, no VM, no hypervisors.
Not to mention that you can then, for instance, forward all the accesses to QEMU (some people/orgs already have their cards defined in QEMU, so it'd be a bit pointless to redefine the same stuff over and over, right?), so they're free to basically glue their QEMU stuff to PCIem in case they want to try the driver directly on the host while keeping the functional emulation in QEMU. PCIem takes care of abstracting the accesses and whatnot with an API that tries to mimic what the cool people over at KVM do.
Something like just a single BAR with a register that printfs whatever is written
Hopefully this is what you're searching for!
PCIEM_EVENT_MMIO_READ is defined but not used anywhere in the codebase
You basically have the kernel eventfd notify you about any access triggered (based on your configuration); so, from userspace, you hold the eventfd and mmap the shared lock-less ring buffer that actually contains the events PCIem publishes (so you don't end up busy polling).
You basically mmap a struct pciem_shared_ring where you'll have your usual head/tail pointers.
From then on, in your main, you'd have a select() or a poll() on the eventfd; when PCIem notifies userspace, you check head != tail (which means there are events to process), and you can basically do:
    struct pciem_event *event = &event_ring->events[head];
    atomic_thread_fence(memory_order_acquire);  /* pairs with the kernel's release on tail */
    if (event->type == PCIEM_EVENT_MMIO_WRITE)
        handle_mmio_write(...);
And that's it, don't forget to update the head pointer!
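Putting it together, here's a minimal sketch of that loop (only the names pciem_shared_ring, pciem_event, and PCIEM_EVENT_MMIO_WRITE come from this thread; the struct layouts, ring size, and constant values are my assumptions, so check them against the real headers). It doubles as the "single BAR register that printfs whatever is written" from upthread:

    #include <poll.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <stdatomic.h>

    #define RING_SIZE 256              /* assumed; check PCIem's header */
    #define PCIEM_EVENT_MMIO_WRITE 1   /* placeholder; real value in PCIem's uapi header */

    struct pciem_event {               /* layout assumed for illustration */
        uint32_t type;
        uint64_t addr;
        uint64_t data;
    };

    struct pciem_shared_ring {         /* layout assumed for illustration */
        _Atomic uint32_t head;         /* consumer index (userspace) */
        _Atomic uint32_t tail;         /* producer index (kernel) */
        struct pciem_event events[RING_SIZE];
    };

    static void event_loop(int evtfd, struct pciem_shared_ring *ring)
    {
        struct pollfd pfd = { .fd = evtfd, .events = POLLIN };
        for (;;) {
            poll(&pfd, 1, -1);         /* block until PCIem signals the eventfd */
            uint64_t n;
            if (read(evtfd, &n, sizeof n) != sizeof n)
                continue;              /* spurious wakeup; try again */

            uint32_t head = atomic_load(&ring->head);
            uint32_t tail = atomic_load_explicit(&ring->tail,
                                                 memory_order_acquire);
            while (head != tail) {     /* events pending */
                struct pciem_event *ev = &ring->events[head % RING_SIZE];
                if (ev->type == PCIEM_EVENT_MMIO_WRITE)   /* the "printf register" */
                    printf("BAR write @0x%llx = 0x%llx\n",
                           (unsigned long long)ev->addr,
                           (unsigned long long)ev->data);
                head++;
            }
            atomic_store_explicit(&ring->head, head, memory_order_release);
        }
    }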
I'll go and update the docs now. Hopefully this clears stuff up!
Usually, without actual silicon, you are pretty limited in how well you can anticipate the software that'll run on it.
What if you want to write a driver for it without having to buy auxiliary boards that act as your card? What if you already have a driver and want to do some security testing on it, but don't have the card or don't want to use a physical one for some specific reason (maybe some UB in the driver pokes at a register that kills the card? Just making up disastrous scenarios to prove the point, hah)?
What if you want to inject explicit failures into the card so that you can try to make the driver as tamper-proof and as fault-tolerant as possible (think: yanking the PCI card out of the bus without switching the computer off)?
Testing your driver functionally and/or behaviourally in CI/CD on any server (not requiring the actual card!)?
There's quite a bunch of stuff you can do with it; being in userspace means you can get as hacky-wacky as you want (heck, I have a dumb-framebuffer-esque, OpenGL 1.x-capable QEMU device I wanted to write a driver for, just for fun, and I used PCIem to forward the accesses to it).
In fact, the "zeroth generation" of Thunderbolt used an optical link, too. Also, both Thunderbolt and DisplayPort reuse a lot of common elements from PCIe.
I wouldn't expect that to be mainstream until after optical networking becomes more common, and for consumer hardware that's very rare (apart from their modem).