Posted by nnx 10 hours ago

A better streams API is possible for JavaScript (blog.cloudflare.com)
336 points | 112 comments | page 2
rhodey 5 hours ago|
The pull-stream module and its ecosystem are relevant here.

The idea is basically: just use functions. No classes and very little statefulness.

https://www.npmjs.com/package/pull-stream
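For readers unfamiliar with the pattern, here is a minimal sketch of the pull-stream idea without the npm package itself: a source is just a function `(abort, cb)`, and a sink pulls from it until it signals the end. (The function names here are illustrative, not the library's API.)

```javascript
// A source: a plain function (abort, cb) => cb(end, value).
function values(arr) {
  let i = 0;
  return function source(abort, cb) {
    if (abort) return cb(abort);
    if (i >= arr.length) return cb(true); // `true` signals end-of-stream
    cb(null, arr[i++]);
  };
}

// A sink: keeps pulling from the source until it reports end or error.
function collect(source, done) {
  const out = [];
  source(null, function next(end, value) {
    if (end) return done(end === true ? null : end, out);
    out.push(value);
    source(null, next);
  });
}

collect(values([1, 2, 3]), (err, result) => {
  console.log(result); // [1, 2, 3]
});
```

No classes, no shared mutable state beyond the closure: backpressure falls out of the fact that the sink decides when to call the source again.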

notnullorvoid 6 hours ago||
There's a lot I like about this API, mainly the pull-based iterator approach. I don't really see what the value of the sync APIs is, though. What's the difference from just using iterators directly for sync streams?
jonkoops 6 hours ago|
It avoids the overhead of Promises, so I can imagine that this would be quite useful if you know that blocking the thread is fine for a little while (e.g. in a worker).
notnullorvoid 5 hours ago||
I mean that for APIs like `Stream.pullSync`, you could do the same with a regular (non-async) iterator/generator.
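A sketch of the commenter's point, assuming `Stream.pullSync` refers to the synchronous pull API from the blog post: a plain (non-async) generator already gives you synchronous pull semantics with no Promise machinery involved.

```javascript
// Synchronous chunked pulls via a plain generator: each next() call
// does the work inline, blocking the thread, with no Promise allocation.
function* chunks(data, size) {
  for (let i = 0; i < data.length; i += size) {
    yield data.slice(i, i + size);
  }
}

const it = chunks([1, 2, 3, 4, 5], 2);
console.log(it.next().value); // [1, 2]
console.log(it.next().value); // [3, 4]
console.log(it.next().value); // [5]
```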
halfmatthalfcat 7 hours ago||
The Observables spec should just get merged and implemented.

https://github.com/tc39/proposal-observable

bakkoting 6 hours ago|
Observables has moved to WHATWG [1] and been implemented in Chrome, although I don't know if the other browsers have expressed any interest (and there are still some issues [2] to be worked through).

But Observables really do not solve the problems being talked about in this post.

[1] https://github.com/WICG/observable [2] https://github.com/WICG/observable/issues/216

shevy-java 9 hours ago||
We deserve a better language than JavaScript.

Sadly it will never happen. WebAssembly failed to keep some of its promises here.

gejose 9 hours ago||
There's always a comment like this in most discussions about javascript.
krashidov 7 hours ago|||
> WebAssembly failed to keep some of its promises here

classic case of not using an await before your promise

teaearlgraycold 5 hours ago|||
As wonky as JS is I really like it. Typescript has done such a good job at making it fun to use.
postalrat 9 hours ago||
Where can I find these not kept promises?
nindalf 9 hours ago||
They haven't yet made languages other than JavaScript first-class languages for the web - https://hacks.mozilla.org/2026/02/making-webassembly-a-first.... I wouldn't call this a broken promise, but it was something people were hoping would take less than a decade.
nottorp 5 hours ago||
Well, it's also possible to replace JavaScript with a better language, it's just too late for it...
kg 9 hours ago||
It's a real shame that BYOB (bring your own buffer) reads are so complex and such a pain in the neck because for large reads they make a huge difference in terms of GC traffic (for allocating temporary buffers) and CPU time (for the copies).

In an ideal world you could just ask the host to stream 100MB of stuff into a byte array or slice of the wasm heap. Alas.
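For context, this is what the existing BYOB path in the WHATWG Streams API looks like. The buffer handed to `read()` is transferred (detached), so the caller has to recover it from the returned view to reuse it, which is part of the awkwardness being described. This sketch assumes a runtime with byte-stream support (e.g. Node 18+).

```javascript
// Reusing one buffer across BYOB reads instead of letting the stream
// allocate a fresh chunk each time.
async function main() {
  const stream = new ReadableStream({
    type: 'bytes',
    start(controller) {
      controller.enqueue(new Uint8Array([1, 2, 3, 4]));
      controller.close();
    },
  });

  const reader = stream.getReader({ mode: 'byob' });
  let buffer = new ArrayBuffer(4);

  const { value } = await reader.read(new Uint8Array(buffer));
  // The original `buffer` is now detached; recover it from the result
  // view so it can be passed to the next read() call.
  buffer = value.buffer;
  console.log(Array.from(value)); // [1, 2, 3, 4]
  return Array.from(value);
}

main();
```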

amluto 9 hours ago|
I wonder if you can get most of the benefit BYOB with a much simpler API:

    for await (const chunk of stream) {
        // process the chunk
        stream.returnChunk(chunk);
    }
This would be entirely optional. If you don’t return the chunk and instead let GC free it, you get the normal behavior. If you do return it, then the stream is permitted to return it again later.

(Lately I’ve been thinking that a really nice stream or receive API would return an object with a linear type so that you must consume it and possibly even return it. This would make it impossible to write code where task cancellation causes you to lose received data. Sadly, mainstream languages can’t do this directly.)
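A sketch of how the opt-in `returnChunk` idea could work internally. None of this is a real API; `RecyclingSource`, `nextChunk`, and `returnChunk` are hypothetical names for the shape being proposed: returned chunks go on a free list, unreturned ones are simply collected by GC.

```javascript
// Hypothetical buffer-recycling source: consumers that return chunks
// get buffer reuse; consumers that don't get the normal GC behavior.
class RecyclingSource {
  constructor(chunkSize) {
    this.chunkSize = chunkSize;
    this.pool = [];
    this.allocations = 0;
  }
  // Hand out a buffer: reuse a returned one if available, else allocate.
  nextChunk() {
    if (this.pool.length > 0) return this.pool.pop();
    this.allocations++;
    return new Uint8Array(this.chunkSize);
  }
  // The optional "give it back" call; skipping it just means GC frees it.
  returnChunk(chunk) {
    this.pool.push(chunk);
  }
}

const src = new RecyclingSource(65536);
for (let i = 0; i < 100; i++) {
  const chunk = src.nextChunk();
  // ...process the chunk...
  src.returnChunk(chunk); // opt in to reuse
}
console.log(src.allocations); // 1: every iteration reused the same buffer
```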

dilap 9 hours ago||
> The problems aren't bugs; they're consequences of design decisions that may have made sense a decade ago, but don't align with how JavaScript developers write code today.

> I'm not here to disparage the work that came before — I'm here to start a conversation about what can potentially come next.

Terrible LLM-slop style. Is Mr Snell letting an LLM write the article for him or has he just appropriated the style?

jasnell 8 hours ago||
Heh, I was using emdashes and tricolons long before LLMs appropriated the style, but I did let the agent handle some of the details on this. Honestly, it really is just easier sometimes... Especially for blog posts like this when I've also got a book I'm writing, code to maintain, etc. Use the tools available to make life easier.
dilap 8 hours ago|||
I think you'd be much better served by writing something rough that maintains your own voice!
silisili 8 hours ago||||
I'm not sure it's any em-dash use at all that people are typically calling out (maybe it is?); it's more the sheer number of them that's typical in LLM-written stuff.

Just ctrl-f'ing through previous public posts, I think there were a total of 7 used across about that many posts. This one for example had 57. I'm not good enough in proper English to know what the normal number is supposed to be, just pointing that out.

hackrmn 1 hour ago||||
Just want to raise my hand and say I too have been using em dashes for considerably longer than LLMs have been on every hacker's lips. It's obviously not great being accused of being an AI just because one has a particular style of writing...
n_e 8 hours ago||||
I found your article both interesting and readable.

It doesn't really matter what tools are used if the result is good

eis 8 hours ago|||
People are understandably a bit sensitized and sceptical after the last AI-generated blog post (and code slop!) by Cloudflare blew up. Personally I'm fine with using AI to help write stuff as long as everything is proof-read and actually represents the author's thoughts. I would have opted to be a bit more careful and not use AI for a few blog posts after the last incident, though, if I were working at Cloudflare...
azangru 8 hours ago|||
What was it specifically about the style that stood out as incongruous, or that hindered comprehension? What was it that made you stumble and start paying close attention to the style rather than to the message? I am looking at the two examples, and I can't see anything wrong with them, especially in the context of the article. They both employ the same rhetorical technique of antithesis, a juxtaposition of contrasting ideas. Surely people wrote like this before? Surely no-one complained?
jsheard 8 hours ago||
The problem is less with the style itself and more that it's strongly associated with low-effort content which is going to waste the reader's time. It would be nice to be able to give everything the benefit of the doubt, but humans have finite time and LLMs have infinite capacity for producing trite or inaccurate drivel, so readers end up reflexively using LLM tells as a litmus test for (lack of) quality in order to cut through the noise.

You might say well, it's on the Cloudflare blog so it must have some merit, but after the Matrix incident...

guntars 6 hours ago|||
These AI signals will die out soon. The models are overusing actual human writing patterns, the humans are noticing and changing how they write, the models are updated, new patterns emerge, etc, etc. The best signal for the quality of writing will always be the source, even if they are "just" prompting the model. I think we can let one incident slide, but they are on notice.
azangru 7 hours ago|||
> You might say well, it's on the Cloudflare blog so it must have some merit

I would instead say that it is written by James Snell, who is one of the central figures in the Node community; and therefore it must have some merit.

nebezb 8 hours ago|||
The idea is well articulated and comes across clearly. What's the issue? Taking a magnifying glass to the whole article to find sentence structure you think is "LLM-slop" is an odd way to dismiss the article entirely.

I’ve read my fair share of LLM slop. This doesn’t qualify.

jitl 9 hours ago|||
cloudflare does seem to love ai written everything
lapcat 9 hours ago||
You’ve got it backwards: LLMs were trained on human writing and appropriated our style.
have_faith 8 hours ago||
Partially true. They've been trained and then aligned towards a preferred style. They don't use em-dashes because em-dashes are over-represented in the training material; the majority of people don't use them.
lapcat 8 hours ago||
It seems likely that with the written word, as with most things, a minority of people produce the majority of content. Most people publish relatively few words compared to professional writers.

Possibly the LLM vendors could bias the models more toward nonprofessional content, but then the quality and utility of the output would suffer. Skip the scientific articles and books, focus on rando internet comments, and you’ll end up with a lot more crap than you already get.

adamnemecek 6 hours ago||
It might be a good idea to look into the research on streams as coalgebras, there is quite a bit, for example here https://cs.ru.nl/~jrot/CTC20/.

Coalgebras might seem too academic but so were monads at some point and now they are everywhere.
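For the curious, the coalgebraic view of streams is less exotic than it sounds. A stream coalgebra is just a state type together with one "observe" function mapping a state to (current element, next state); the stream is never materialized, only unfolded on demand. A small illustrative sketch:

```javascript
// A stream coalgebra: a state plus a step function S -> [head, nextState].
const naturals = {
  state: 0,
  step: (n) => [n, n + 1], // observe: current element and successor state
};

// Generic consumer: unfold any such coalgebra for its first k elements.
function take(coalg, k) {
  const out = [];
  let s = coalg.state;
  for (let i = 0; i < k; i++) {
    const [head, next] = coalg.step(s);
    out.push(head);
    s = next;
  }
  return out;
}

console.log(take(naturals, 5)); // [0, 1, 2, 3, 4]
```

Pull-based iterators are essentially this shape with the state hidden inside a closure, which is why the research maps onto stream API design at all.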

ralusek 8 hours ago||
I tinkered with an alternative to stream interfaces:

https://github.com/ralusek/streamie

allows you to do things like

    infiniteRecords
    .map(item => doSomeAsyncThing(item), { concurrency: 5 });
And then because I found that I often want to switch between batching items vs dealing with single items:

    infiniteRecords
    .map(item => doSomeAsyncSingularThing(item), { concurrency: 5 })
    .map(groupOf10 => doSomeBatchThing(groupOf10), { batchSize: 10 })
    // Can flatten back to single items
    .map(item => backToSingleItem(item), { flatten: true });
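For comparison, here is what the batching step looks like over plain async iterables; a minimal helper similar in spirit to streamie's `{ batchSize }` option (the `batch` function here is hypothetical, not part of the library):

```javascript
// Group items from any async iterable into arrays of `size`,
// flushing a final partial batch at the end.
async function* batch(iterable, size) {
  let group = [];
  for await (const item of iterable) {
    group.push(item);
    if (group.length === size) {
      yield group;
      group = [];
    }
  }
  if (group.length > 0) yield group; // flush the final partial batch
}

async function* range(n) {
  for (let i = 0; i < n; i++) yield i;
}

(async () => {
  for await (const group of batch(range(5), 2)) {
    console.log(group); // [0, 1] then [2, 3] then [4]
  }
})();
```

What libraries like streamie add on top of this is the concurrency control, which plain async generators don't give you for free.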
murmansk 9 hours ago|
For god's sake, finally, somebody has said this!