They don't use oligo pools - "This capacity may be adapted to use large oligo pools to substantially reduce the cost per construct [45] but requires further engineering to account for the formation of the unintended Sidewinder heteroduplexes before assembly and the higher truncation rate of pooled oligos"
This absolutely destroys any unit economics when it comes to DNA synthesis. Oligo pool synthesis isn't 10x cheaper, it's 100x to 1000x cheaper than individual oligo synthesis.
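To put rough numbers on that, here's a back-of-the-envelope sketch; both per-base prices are assumptions for illustration, not vendor quotes:

```python
# Back-of-the-envelope cost of sourcing the oligos for one multi-part assembly.
# Both per-base prices are ASSUMED round numbers, not vendor quotes.

n_oligos = 40    # oligos needed for one construct
oligo_len = 60   # bases per oligo

individual_per_base = 0.10   # assumed: individually synthesized oligos
pool_per_base = 0.0005       # assumed: array-based oligo pool, amortized

individual_cost = n_oligos * oligo_len * individual_per_base
pool_cost = n_oligos * oligo_len * pool_per_base

print(f"individual oligos: ${individual_cost:.2f}")          # $240.00
print(f"oligo pool share:  ${pool_cost:.2f}")                # $1.20
print(f"ratio: {individual_cost / pool_cost:.0f}x cheaper")  # 200x cheaper
```

Even with these made-up numbers the pool wins by ~200x, which is why losing pool compatibility hurts the economics so much.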
So what they really have is a good way to do DNA assembly from synthesized oligos; fair. But we have that: GoldenGate can do 40 part assemblies, hell it can do 52 part assemblies, and you CAN use oligo pools - https://pmc.ncbi.nlm.nih.gov/articles/PMC10949349/ (there are a couple enzymatic properties which allow this, mainly that you can use full doublestranded DNA, which you can make with a PCR. Can't make these overhang guys with a PCR).
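For a sense of the bookkeeping that any overhang-directed multi-part assembly needs, here's a generic sketch of the overhang-compatibility check (not specific to the linked paper; the example overhangs are made up):

```python
# Sanity-check a set of 4-nt overhangs for overhang-directed assembly:
# each must be unique, non-palindromic, and not the reverse complement of
# any other, or fragments can ligate in the wrong order.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def check_overhangs(overhangs: list[str]) -> list[str]:
    problems = []
    seen = set()
    for oh in overhangs:
        if oh in seen:
            problems.append(f"{oh}: duplicate")
        seen.add(oh)
        if oh == revcomp(oh):
            problems.append(f"{oh}: palindromic (can self-ligate)")
    for i, a in enumerate(overhangs):
        for b in overhangs[i + 1:]:
            if a == revcomp(b):
                problems.append(f"{a}/{b}: reverse complements (cross-ligation)")
    return problems

print(check_overhangs(["AATG", "GCTT", "CATT"]))
# ['AATG/CATT: reverse complements (cross-ligation)']
```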
We've even found that with some GoldenGate enzymes, the biology somehow breaks the current models of the physics of ligation by being so efficient - https://www.biorxiv.org/content/10.64898/2026.01.31.702778v1
Their gels do look really good, I'll admit. I can imagine circumstances (exception cases) where this would be better. But for 99% of cases, this kind of thing has already been available for many years while being orders of magnitude cheaper (plural).
It is super clever and exciting. Note that people have been able to assemble short (<100 bases) fragments of synthetic DNA into longer fragments using "splint" oligos since forever. But in that approach, each splint has to be custom engineered to only bind to the junction of interest (in practice this is pretty tricky and expensive to do). These guys figured out a way to use engineered sequences to make the match, and used a clever (but also more or less standard) way to chew up the engineered stuff, leaving behind only the desired long assembly with no scars at the end of the process.
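For anyone curious what "custom engineered to only bind the junction" looks like in practice, here is a minimal sketch of classic splint design (the sequences and arm length are made up for illustration):

```python
# Classic splint design: the splint is the reverse complement of the junction,
# so it base-pairs across the nick between fragment A's 3' end and fragment
# B's 5' end, holding both in place for ligase to seal.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

def design_splint(frag_a: str, frag_b: str, arm: int = 12) -> str:
    """Splint spanning the A->B junction with `arm` bases on each side."""
    junction = frag_a[-arm:] + frag_b[:arm]
    return revcomp(junction)

a = "ATGGCTAGCAAGGGCGAGGAGCTGTTCACC"
b = "GGGGTGGTGCCCATCCTGGTCGAGCTGGAC"
print(design_splint(a, b))  # GGGCACCACCCCGGTGAACAGCTC
```

The catch the comment describes: every junction needs its own splint, and a splint that cross-hybridizes to the wrong junction silently gives you the wrong assembly.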
All sorts of ambiguity and hilarity would ensue; to be a good writer, you needed to ensure that words didn't bleed together and form incorrect meanings in unintended combinations. If you lost your place when reading, you'd have to know generally where you were in a scroll, and restart from a place you remembered.
Kinda crazy to think how difficult it would be to cross-reference things and do collaborative research with no spaces or pages.
The whole context of written words had so much implicit process and knowledge and institutional memory, compared to now, when we have petabytes of throwaway logs and trivial scratchpads for software running on a "just in case I might need to figure something out" basis. I'd love to see a graph of written words over time, starting ~4k BC to now. And the complexity and diversity of those automated words have been going up like crazy since LLMs.
It's been built up over centuries; new innovations and shifts in perspective often create new kinds of notation, but those most frequently just get tacked onto whatever else is already standard, and the new notations almost never actually supplant the old.
AFAICT we haven't really had a big shift in fundamental mathematical notation in Europe (and its colonies) since Roman Numerals (CXXIII) gave way to Arabic (123) numerals four hundred years ago. 8I
Your history is a little confused. Arabic numerals came into use in Europe as early as the 13th century (introduced by Leonardo Fibonacci), but most other mathematical notation like "=" or "√" didn't show up until the 16th or 17th century.
https://commons.wikimedia.org/wiki/File:Pantheon_Rom_1_cropp...
The parent article mentions that binding the pages of the first Bibles in the correct order, in the absence of page numbers, was extremely tedious work.
That is why page numbers were invented many years later, exactly as you say, "to help printers not mix up pages".
https://en.wikipedia.org/wiki/Gutenberg_Bible#/media/File:Gu...
Hindsight is 20/20, lol. There are so many obvious, effective constructs and functions in modern English, we kinda miss the absolute janky mess of hacking and tradition and arbitrary rules and facepalm moments that went into the last 1500+ years of development, let alone the tens of thousands of years prior.
It's an interesting idea. Remember they printed large sheets containing many 'pages', I think even in different orientations, which were then folded and the ends cut to produce a nice orderly codex for the reader. They were printing in a different order than the one you read in.
I do think they numbered the large sheets or similar, and you can find old books that retain that number, but I don't recall what it is called.
The main thing to keep in mind is that all the stuff that involves analogies between software and biology is almost universally a bullshit oversimplification that you can safely ignore. It's just that software is so profitable and there's so much VC money in it that there's a ton of pressure to be like "oh we can program biology like we program computers." We can't - we invented computers but didn't invent biology. Biology is the end result of 4 billion years of unchecked entropy - it's a chaotic system, non-deterministic in the wildest ways, impossibly complicated, and yet something we are getting astonishingly good at understanding and engineering.
Basically, all the biologists that started companies that were like "we can program biology like we can program computers" are bankrupt now.
On the other hand, the computer scientists who respected the nature of biology and pushed the limits of computing to develop AlphaFold - giant models trained on the full complexity of biological data - finally created computer systems that could handle biological problems like protein folding at an extraordinary level of capability. They won a Nobel.
I'm wondering if I could find a fun weekend project in alphafold just to see what it's like.
Are they really? Is this just limited to some very specific areas with an active biotech scene?
It's not uncommon that adults do something similar and run a community workshop with whatever the members are interested in.
https://zulko.github.io/bricks_and_scissors/posts/overhangs/
https://web.archive.org/web/20260121201045if_/https://www.na...
They don't give much detail on how the barcode duplex is removed, though. I guess ultimately the barcode duplex strands can just be melted off, and the ligated strand can then be used as a template.
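If melting is indeed how removal works, it helps that a short barcode duplex dissociates far below the temperature needed to denature the full-length product. A toy Wallace-rule estimate (the 8-nt barcode here is hypothetical; the real barcodes may be longer):

```python
# Wallace rule: Tm ~ 2 C per A/T pair + 4 C per G/C pair.
# Only a ballpark for short duplexes (<~14 nt), but enough to show the gap
# between a short barcode duplex and a kilobase assembled product.

def wallace_tm(seq: str) -> int:
    at = sum(seq.count(b) for b in "AT")
    gc = sum(seq.count(b) for b in "GC")
    return 2 * at + 4 * gc

barcode = "ATCGGATA"        # hypothetical 8-nt barcode
print(wallace_tm(barcode))  # 22 (C): comes off with gentle heating
```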
If this can be made into an easy-to-use kit, it could really make vector generation much easier and hopefully avoid lock-in to proprietary systems.
I can imagine a company that bioinformatically generates libraries of common long oligos with corresponding barcodes and allows end-users to select oligos to modularly ligate together in a one-pot reaction. Cool stuff.
But branched DNA is really interesting. It’s a bit hard to get my head around. We spend so much time thinking about DNA in the 2D sequence sense, it’s easy to forget that it exists in 3D space.
I’m honestly not sure how different this really is from the traditional ways of doing this (with custom oligos). The common set of large self-hybridizing oligos is definitely easier, but you still have to have compatible tag overhangs between your two fragments. Meaning, it isn’t quite as universal, and you’ll still need to do some work to pair the fragments together. But where I think it might be useful is if you have a set of common hybridizing pairs that can be easily added onto the custom flanking oligos. You’ll still need some sequence analysis to get your custom oligos, but it would make the process more “standardized”.
I think the main bonus here is the self-correcting selection… that you only end up with matching pairs linking together, so you could really have a mix in a one-tube reaction that links many kilobase fragments together. That’s quite nice. And useful. And still cool.
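A toy model of that self-selecting behavior (fragment names and tag sequences are invented; the point is just that complementary tags alone dictate the join order in one pot):

```python
# Tag-directed one-pot assembly: fragments join only where one fragment's
# right tag is the reverse complement of another fragment's left tag.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    return seq.translate(COMP)[::-1]

# (name, left_tag, right_tag) -- deliberately listed out of order
fragments = [
    ("terminator", "TTGC", "CCCC"),
    ("gene", "ATGC", "GCAA"),
    ("promoter", "AAAA", "GCAT"),
]

by_left = {f[1]: f for f in fragments}
paired_lefts = {revcomp(f[2]) for f in fragments}

# Start from the fragment whose left tag no right tag pairs with,
# then follow the complementary tags downstream.
chain = [next(f for f in fragments if f[1] not in paired_lefts)]
while (nxt := by_left.get(revcomp(chain[-1][2]))) and nxt not in chain:
    chain.append(nxt)

print(" -> ".join(name for name, _, _ in chain))  # promoter -> gene -> terminator
```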
One thing that is interesting is that this is another step towards getting better at the “writing” side of DNA. For the past 50+ years, we’ve developed all sorts of tools for reading DNA. It’s only really been the past 20-ish years or so that we’ve had tools for writing. And now we can write longer chunks. That’s all a good thing.
Not sure I think it’s revolutionary (yet), but that’s a university PR release for you! I’m still thinking about the paper.
At first I thought this was about Olympic figure skating, but after a bit of googling I think:
Complementary overhang - https://en.wikipedia.org/wiki/Sticky_and_blunt_ends
Toehold sequences: https://en.wikipedia.org/wiki/Toehold_mediated_strand_displa...
Ligate (ligase?) knick (nick?) - https://en.wikipedia.org/wiki/Nick_(DNA)
Barcode - https://en.wikipedia.org/wiki/DNA_barcoding
Heteroduplex - https://en.wikipedia.org/wiki/Heteroduplex
“Guided by the removable DNA page numbers, Sidewinder achieves an incredibly high fidelity in DNA construction with a measured misconnection rate of just one in one million, a four to five magnitude improvement over all prior techniques whose misconnection rates range from 1-in-10 to 1-in-30.”
I wonder if this is even a problem, since you could amplify the correct sequence with PCR afterward.
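It matters for how much cleanup you need, though. If p is the per-junction misconnection rate and errors are independent, the fraction of molecules with every junction correct in an N-part assembly is (1-p)^(N-1). Plugging in the rates from the press release for a hypothetical 40-part build:

```python
# Fraction of fully correct molecules across J junctions at per-junction
# misconnection rate p. Rates are those quoted in the press release; the
# 40-part build and independence of errors are simplifying assumptions.

J = 39  # junctions in a 40-part assembly

for p in (1 / 10, 1 / 30, 1e-6):
    correct = (1 - p) ** J
    print(f"p = {p:.0e}: {correct:.4%} fully correct")

# roughly 1.64%, 26.7%, and 99.996% respectively
```

At 1-in-10 you'd be fishing a ~2% correct species out of a sea of mis-assemblies before any PCR; at 1-in-a-million essentially every molecule is right.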
I didn’t see this technique as involving DNA modification per se, but as a novel way of managing the hybridization process. It’s stock (well-engineered) oligos, if I read it correctly.
You're correct that PCR has a limited max length, but it is longer and cheaper than vanilla DNA synthesis.
The Polymerase Chain Reaction
https://www.nobelprize.org/prizes/chemistry/1993/mullis/lect...