Posted by giuliomagnifico 6 days ago

Scientists create ultra fast memory using light (www.isi.edu)
69 points | 13 comments

cycomanic 4 hours ago
People have done these sorts of "optical computing" demonstrations for decades, despite David Miller showing that fundamentally digital computing with optical photons will be immensely power hungry (I say digital here because there are some applications where analog computing can make sense, but those almost never rely on memory for bits).

Specifically, this paper is based on simulations. I've only skimmed it, but the power-efficiency numbers sound great because they quote 40 GHz read/write speeds, yet these devices consume a comparatively large amount of power even when not reading or writing (the lasers have to be running constantly). I also don't think they included the contributions of the modulators and the required drivers (typically you need quite large voltages). Somebody already pointed out that the size of these devices is massive, and that again is fundamental.
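
A quick back-of-envelope on the static-power point (the 10 mW of continuous laser power below is my own illustrative assumption, not a number from the paper):

    # Energy per bit contributed by an always-on laser alone,
    # ignoring modulators, drivers and detectors entirely
    laser_power = 10e-3   # W, assumed ~10 mW of continuous laser power (illustrative)
    bit_rate = 40e9       # bit/s, the quoted 40 GHz read/write speed
    print(laser_power / bit_rate)  # 2.5e-13 J, i.e. 0.25 pJ/bit before counting anything else

On-chip SRAM access energies are typically quoted in the femtojoule-per-bit range, so an always-on laser alone can dominate the energy budget.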

As someone working in the broader field, I really wish people would stop with this type of publication. While these numbers might sound impressive at first glance, they really are completely unrealistic. There are lots of legitimate applications of optics and photonics; we don't need to resort to this sort of stuff.

embedding-shape 3 hours ago
> showing that fundamentally digital computing with optical photons will be immensely power hungry

> they really are completely unrealistic

Unrealistic only because they're power hungry? That sounds like a temporary problem, kind of like when we came up with a bunch of ML approaches in the '80s/'90s that we couldn't actually run because of the hardware resources required, but that work fine today.

Maybe even if these solutions aren't useful today, they could be useful in the future? Or maybe these results will inspire more people to work on solutions specifically for the power usage?

"we don't need to resort to this sort of stuff" makes it sound like this is all so beneath you and not deserving of attention, but why are you then paying attention to it?

adrian_b 4 hours ago
Free version of the research paper:

https://arxiv.org/abs/2503.19544v1

The memory cell is huge in comparison with semiconductor memories, but it is very fast, with a 40 GHz read/write speed.

There are important applications for a small, very high-speed memory, e.g. digital signal processing in radar and other such devices, but this will never replace general-purpose computer memory, where much higher bit densities are needed.

ilaksh 5 hours ago
MRAM and MRAM-CIM are like 10 years ahead of this and are going to make a huge impact on efficiency and performance in the next few years, right? Or so I thought I heard.

Memristors are also probably coming after MRAM-CIM and before photonic computing.

cs702 5 hours ago
Cool. Memory bandwidth is a major bottleneck for many important applications today, including AI. Maybe this kind of memory "at the speed of light" can help alleviate the bottleneck?

For a second, I thought the headline was copied & pasted from the hallucinated 10-years-from-now HN frontpage that recently made the HN front page:

https://news.ycombinator.com/item?id=46205632

lebuffon 5 hours ago
Wow 300mm chips. They must be huge!

(I am sure they meant nm, but nobody is checking the AI output)

KK7NIL 5 hours ago
It almost certainly refers to 300 mm wafers, which are the largest size in use right now. They offer significantly better economics than the older 200 mm wafers, let alone the even smaller (e.g. 100 mm) wafers used for lab experiments.

The text in the article supports this:

> This is a commercial 300mm monolithic silicon photonics platform, meaning the technology is ready to scale today, rather than being limited to laboratory experiments.
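
On the economics point, the raw area advantage is simple geometry (a rough illustration only; it ignores per-wafer cost, edge loss and yield):

    import math
    area_300mm = math.pi * (300 / 2) ** 2  # wafer area in mm^2 for a 300 mm wafer
    area_200mm = math.pi * (200 / 2) ** 2  # wafer area in mm^2 for a 200 mm wafer
    print(area_300mm / area_200mm)         # 2.25x the area (and roughly the dies) per wafer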

vlovich123 5 hours ago
From the paper:

> footprint of 330 × 290 µm² using the GlobalFoundries 45SPCLO

That’s a 45 nm process, but the unit for the chip size probably should have been 330 µm? However, I’m not well versed enough in the details to parse it out.

https://arxiv.org/abs/2503.19544

bgnn 5 hours ago
I'm very familiar with this process as I use it regularly.

The area is massive: 330 µm × 290 µm are the X and Y dimensions, so the area is roughly 0.1 mm². You can see the comparison in Table 1. This is roughly 50,000 times larger than an SRAM cell in a 45 nm process.
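
Spelling out the arithmetic (the per-bit SRAM footprint at the end is just what the ~50,000x factor implies, not a number I'm quoting from the paper):

    cell_area_um2 = 330 * 290          # 95,700 um^2, i.e. ~0.096 mm^2 ("roughly 0.1 mm2")
    size_ratio = 50_000                # the rough factor quoted above
    print(cell_area_um2 / size_ratio)  # ~1.9 um^2, the implied per-bit SRAM footprint at 45 nm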

This is the problem with photonic circuits. They are massive compared to electronics.

pezezin 1 hour ago
Would it be possible to use something similar to DWDM to store/process multiple bits in parallel in the same circuit?

bun_at_work 5 hours ago
Is it prohibitively larger? And is the size a fundamental constraint of the technology, or is it possible to reduce the size?

adrian_b 4 hours ago
The size is a fundamental constraint of optical technologies, because it is related to the wavelength of light, which is much bigger than the sizes of semiconductor devices.
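
To put rough numbers on that (assuming the usual ~1550 nm telecom band for silicon photonics; I'm not quoting the paper's actual operating wavelength):

    wavelength_vacuum_nm = 1550              # typical telecom wavelength (assumed)
    n_silicon = 3.48                         # refractive index of silicon near 1550 nm
    print(wavelength_vacuum_nm / n_silicon)  # ~445 nm inside a silicon waveguide
    # Waveguides and resonators are stuck around this scale or larger, while transistors
    # in the same 45 nm-class process are an order of magnitude smaller.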

This is why modern semiconductor devices no longer use lithography with visible light or even with near ultraviolet, but they must use extreme ultraviolet.

The advantages of such optical devices are speed and low power consumption in the optical device itself (ignoring the power consumption of the lasers, which might be shared by many devices).

Such memories have special purposes in various instruments; they are not suitable as computer memories.

xienze 5 hours ago
This just in, OpenAI has already committed to buying the entire world’s supply once it becomes available.