Posted by teleforce 1 day ago
(My comment is slightly off-topic to the article but on-topic to the title.)
The nice thing about Prolog is that you write logical rules, and they can get used in whatever order and direction is needed. By direction, I mean that if you define "a grandparent is the parent of a parent", you can use that rule not just to evaluate whether one person is the grandparent of another (or to find all grandparents), but also to conclude that if you know someone is a grandparent, and they are the parent of someone, then that person is someone's parent. It can also do recursion: if you define an ancestor as a parent or the parent of an ancestor, it will recurse all the way up the family tree. Neat.
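To make the bidirectionality concrete, here's a rough plain-Python sketch (the facts and names are made up): `None` plays the role of an unbound Prolog variable, so the same grandparent rule answers both "list all grandparent pairs" and "who is ann's grandparent?":

```python
# Facts plus one derived rule, queryable in both directions.
# None plays the role of an unbound Prolog variable.
PARENT = [("tom", "bob"), ("bob", "ann"), ("bob", "joe")]

def parent(x=None, y=None):
    # Yield every (parent, child) pair consistent with the bindings given.
    for (p, c) in PARENT:
        if (x is None or x == p) and (y is None or y == c):
            yield (p, c)

def grandparent(x=None, z=None):
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    for (p, mid) in parent(x=x):
        for (_, c) in parent(x=mid, y=z):
            yield (p, c)

print(list(grandparent()))           # every (grandparent, grandchild) pair
print(list(grandparent(z="ann")))    # who is ann's grandparent?
```

Of course this enumerates rather than unifies, which is exactly why a real Prolog engine (indexing, backtracking, unification) is more than a library-sized feature.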
You could write some kind of runtime that takes C code and brute-forces its way from outputs to inputs, except that regular imperative code allows for all kinds of things that make this impossible (e.g. side effects). So then, you'd be limited to some subset, essentially ending up with a domain-specific language again, albeit with the same syntax as your regular code, rather than those silly :- symbols (although LISP looks much sillier than Prolog IMHO).
What the article is getting at is that if you use some features specific to a language, it's hard to embed your code as a library in another language. But is it? I mean, DLLs don't need to be written in the same language, there's stuff like JNI, and famously there's stuff like pytorch and tensorflow that runs CUDA code from python.
Not necessarily.
This generalizes!
Prolog: declare relations. Engine figures out how to satisfy them. Bidirectional -- same rules answer "is X a grandparent?" and "find all grandparents."
LLMs do something similar but fuzzier. Declare intent. Model figures out how to satisfy it. No parse -> AST -> evaluate. Just: understand, act.
@tannhaeuser is right that Prolog's power comes from what the engine does -- variables that "range over potential values," WAM optimization, automatic pruning. You can't get that from a library bolted onto an imperative language. The execution model is different.
Same argument applies to LLMs. You can't library your way into semantic understanding. The model IS the execution model. Skills aren't code the LLM runs -- they're context that shapes how it thinks.
Prolog showed that declarative beats imperative for problems where you can formalize the rules. LLMs extend that to problems where you can't.
I've been playing with and testing this: Directories of YAML files as a world model -- The Sims meets TinyMUD -- with the LLM as the inference engine. Seven architectural extensions to Anthropic Skills. 50+ skills. 33 turns of a card game, 10 characters, one LLM call. No round trips. It just works.
https://github.com/SimHacker/moollm/blob/main/designs/stanza...
The integration of a Prolog backend into a mainstream stack is typically achieved via Prolog code generation (including code generation via LLMs), or by running Prolog as a "service", since Prolog also has excellent support for parsing DSLs or requests/responses of any kind; you can actually implement a JSON parser in a single line of code.
As they say, if Prolog fits your application, it fits really well: planning, constraint solving, theorem proving, verification/combinatoric test case enumeration, pricing models, legal/strategic case differentiation, complex configuration and the like, the latter merely leveraging the modularity of logic clauses in composing complex programs from independent units.
So I don't know how much you've worked hands-on with Prolog, but I think you actually managed to pick one of the worst rather than best examples ;)
Seems more like an interesting research project than something I'd ever deploy in an application serving millions of users
You mean like the kinds of problems digital computing was originally invented to solve?
You know that still exists, right? There are many people using computers to advance the state of Mathematics & related subjects.
Sometimes, having a language with a distinct syntax is nicer.
For example, Prolog isn't a general-purpose functional or imperative language: you can assert, retract and query facts in the automatically managed database, risking only incorrect formulas, inefficiencies and non-monotonicity accidents, but you can't express functions, types, loops, etc., which could have far more general bugs.
I am so glad LLMs eliminate all of that and just call functions in the right order.
When LLMs do something, it's always because everybody was already doing it.
The noise you see online about it exists exactly because most people don't understand and can't use DI.
There is nothing magical about topological sort and calling constructors in the right order, which is all DI is.
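For what it's worth, "topological sort plus calling constructors" really does fit in a few lines. Here's a toy Python sketch (class names are invented; real containers also handle scopes, lifetimes, and cycle detection):

```python
import inspect

class Db:
    def __init__(self): pass

class Repo:
    def __init__(self, db: Db): self.db = db

class Service:
    def __init__(self, repo: Repo): self.repo = repo

def build(cls, cache=None):
    # The recursion does the topological sort: each annotated dependency
    # is constructed before the class that needs it, and cached so every
    # consumer shares the same instance.
    cache = {} if cache is None else cache
    if cls not in cache:
        deps = {name: build(p.annotation, cache)
                for name, p in inspect.signature(cls).parameters.items()
                if p.annotation is not inspect.Parameter.empty}
        cache[cls] = cls(**deps)
    return cache[cls]

svc = build(Service)
print(type(svc.repo.db).__name__)  # Db
```

The value frameworks add on top of this core is configuration, interception, and lifecycle management, not the wiring itself.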
I dislike it a lot, it is exactly like any other construct that allows you to throw code at anything in a way that sucks (Haskell lens, optics, monad transformers).
It allows people to granularize the whole codebase to the point where you can’t do much about it. For most, they should just stick with functions, no one can build 100 levels deep function callstacks without it being cumbersome, but DI makes it a breeze.
Then I got into Python and people were building useful server APIs in a day.
Both have their place, but I think the problem with the first route is that EVERYTHING ends up with Spring or CDI and complexity overload even if only 1 thing will ever be "implemented".
Lol... That's exactly the kind of thing we call "magic" in software development.
Anyway, if your framework is entirely based on DI, everybody that uses the framework will use DI, and the LLMs will generate code for it that uses DI. That does not contest my point in any way.
I was talking to Bob Harper about this specific issue (context was why macro systems are important to me) and his answer was “you can just write a separate programming language”. Which I get.
But all of this is just to say that doing relational-programming-as-a-library has a ton of issues unless your language supports certain things.
(Select the "Using Datalog..." example in the code sample dropdown)
The Rust code looks completely "procedural"... it's like building a DOM document using `node.addElement(...)` instead of, say, writing HTML. People universally prefer the declarative alternative given the choice.
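The contrast is visible even inside Python's standard library: building a tiny document node by node versus just writing the markup (xml.etree here is only a stand-in for any DOM-style API):

```python
import xml.etree.ElementTree as ET

# The procedural route: build the document node by node.
doc = ET.Element("ul")
for name in ["prolog", "datalog"]:
    li = ET.SubElement(doc, "li")
    li.text = name
procedural = ET.tostring(doc, encoding="unicode")

# The declarative route: just write what you want.
declarative = "<ul><li>prolog</li><li>datalog</li></ul>"

print(procedural == declarative)  # True
```

Same output, but one version says *how* and the other says *what*.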
I don't have real work I need Prolog for, but I find it an interesting subject. My personal learning goal, the point where I can say I know Prolog reasonably well, is when I can get it to solve an MIT puzzle I found, a sort of variant of sudoku. I found a clever Prolog solver for sudoku that I thought could teach me more in this domain, but it was almost too clever: super optimized for sudoku (it exploited geometric features to build its relationships), and I was still left with no idea how to build the more generic relationships I need for my puzzle (a specific example: if sudoku cells were not in a grid, how could they be specified?). In fact, I can find very little information on how to specify moderately complex, ad hoc relationships. One that particularly flummoxed me was that some rules (but you don't know which) are wrong.
All the other books that I looked at were pretty awful, including the usual recommendations.
If you want to learn LP concepts in general, Tarski's World is a great resource as well.
But I have heard repeatedly that the good thing about Prolog is the compiler, which takes information and queries that would be awfully inefficient and converts them into something that actually works. So I'm not sure... Of course, you can turn virtually any language into a kind of library with some API that basically accepts source code, but I'm pretty sure that's not what you meant.
Sure you can implement OOP as a library in pretty much any language, but you’ll probably sacrifice ergonomics, performance and/or safety I guess.
I haven't looked into the implementation. But taking a brief glance now, it looks interesting. They appear to be translating Prolog to Java via a WAM representation[3]. The compiler (prolog-cafe) is written in prolog and bootstrapped into Java via swi-prolog.
I don't know why compilation is necessary, it seems like an interpreter would be fast enough for that use case, but I'd love to take it apart and see how it works.
[1]: https://www.gerritcodereview.com/
[2]: https://gerrit-documentation.storage.googleapis.com/Document...
[3]: https://gerrit.googlesource.com/prolog-cafe/+/refs/heads/mas...
References were Racket with the Racklog library¹. There's also Datalog² and MiniKanren, picat, flix. There were tons of good comments there which you should check out, but PySwip seemed like "the right thing" when I was looking at it: https://github.com/yuce/pyswip/
...documentation is extremely sparse, and assumes you already know Prolog, but here's a slightly better example of the utility of it:
https://eugeneasahara.com/2024/08/12/playing-with-prolog-pro...
...i.e.:
# setup (missing from the original snippet); assumes SWI-Prolog
# and pyswip are installed
from pyswip import Prolog
prolog = Prolog()
# ya don't really care how this works
prolog.consult("diabetes_risk.pl")
# ...but you can query into it!
query = "at_risk_for_diabetes(Person)"
results = list(prolog.query(query))
...the point being there's sometimes some sort of "logic calculation that you wish could be some sort of regex", and I always think of Prolog as "regexes for logic". One time I wished I could use Prolog was trying to figure out the best match between video file, format, bitrate, browser, playback plugin... or if you've seen https://pcpartpicker.com/list/ ... being able to "just" encode all the constraints, and say something like:
valid_config = consult("rules.pl")
+ consult("parts_data.pl")
+ python.choice_so_far(...)
rules.pl: only_one_cpu, total_watts < power_supply(watts)
parts_data.pl: cpu_xyz: ...; power_supply_abc: watts=1000
choices: cpu(xyz), power_supply(abc), ...
...this is a terribly syntactically incorrect example, but you could imagine that this would be horrific code to maintain in Python (and sqrt(horrific) to maintain in Prolog), but _that's_ the benefit! You can take a well-defined portion and kind of sqrt(...) the maintenance cost, at the expense of 1.5x'ing the number of programming languages you expect people to know.

Here is a sample of how to read a file:
https://lpn.swi-prolog.org/lpnpage.php?pagetype=html&pageid=...
How to make syscalls,
Now try to produce a library that adds compile-time features: static types, lifetimes, the notion of const and constexpr, etc. You can, of course, write external tools like mypy, or use some limited mechanism like Java annotations. But you have a really hard time implementing that in an ergonomic way (unless your language is its own metalanguage, like Lisp or Forth, and even then).
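To make the "you can, but it isn't ergonomic" point concrete, here's a toy Python decorator that bolts on type checks as a library (the decorator name is invented). Note everything happens at call time, which is exactly the gap: a library can approximate the checks but not the compile-time guarantee.

```python
import functools
import inspect

def typechecked(fn):
    # Check annotated arguments against their declared types at call time.
    sig = inspect.signature(fn)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                raise TypeError(f"{name} must be {ann.__name__}, "
                                f"got {type(value).__name__}")
        return fn(*args, **kwargs)
    return wrapper

@typechecked
def scale(x: int, factor: int) -> int:
    return x * factor

print(scale(3, 4))  # 12
# scale("3", 4) raises TypeError -- but only when the bad call runs,
# not when the program is compiled.
```

A real static checker like mypy catches the bad call without running it, which is precisely what this library approach cannot do.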
Creating a library that alters the way the runtime works, e.g. adding async, is not entirely impossible, but usually involves some surgery (see Python Twisted, or various C async libs) that results in a number of surprising footguns to avoid.
Frankly, even adding something by altering a language, but not reworking it enough to make the new feature cohesive, results in footguns that the source language did not have. See C#'s LINQ and exceptions.
You might be able to hack on some of the datatype semantics into JS prototype-based inheritance (I'd rather start with TypeScript at that point, but then we're back at the "why isn't it a library" debate) to keep those ontologies from being semantically separate, but that's an uphill battle with some of JS's implicit value conversions.
I consider Logic Programming languages to be the go-to counterargument to TFA, but yeah, anything with lazy eval and a mature type system is a strong counterexample too.
false.
https://www.j-paine.org/dobbs/prolog_lightbulb.html
I always wanted to write a compiler whose front-end consumes Prolog and back-end emits PostScript, and call it "PrologueToPostscript".
prologue: a separate introductory section of a literary, dramatic, or musical work.
postscript: an additional remark at the end of a letter, after the signature and introduced by ‘PS’.
I’m hoping more recent developments, like WASM or Graal, provide a route for more flexibility when selecting languages. It’s nice to see Rust slowly become a serious choice for web development. Most of the time JS is fine, but it’s good to have the option to pull out a stricter low-level language when needed.
I'm sure there are good use cases for it - one impressive example at the time was using functional programming to create Hadoop map/reduce jobs; a one-liner in Scala was five different files/classes in Java. But for most programming tasks it's overkill.
You can write boring code in Scala, but in my (limited) experience, Scala developers don't want to write boring code. They picked Scala not because it was the best tool for the job, but because they were bored and wanted to flex their skills. Disregarding the other 95% of programmers that would have to work with it.
(And since these were consultants, they left within a year to become CTOs and the like and ten years on the companies they sold Scala to are still dealing with the fallout)
That is AFAIK the "curse of Lisp": because it is so easy (and needed and encouraged) to write DSLs, any ecosystem grows many languages in a hurry. So suddenly that elegant, minimalistic, beautiful, pure language becomes 1000 beautiful clean languages. Now you have to learn them all...
Interesting observation.
So basically Scala is to the JVM what Perl is to scripting?
Scala was designed from the beginning to support classical Java-style OOP code and also Haskell-like functional code. These are 2 very different styles in one language. And then Scala supports defining DSLs which give you even more flexibility.
> They picked Scala not because it was the best tool for the job, but because they were bored and wanted to flex their skills.
Guilty as charged!
> Disregarding the other 95% of programmers that would have to work with it.
No. Your coworkers end up being the other 5% of programmers that have the same taste as you. Interviewers ask about monads and lenses. It's fine, as long as everyone is on the same page. Which... they kind of have to be.
It's not the whole community, not by a long shot. Don't judge Scala by the Scala subreddit.
Most new things you'll see written about Scala are about solving difficult problems with types, because those problems are inexhaustible and some people enjoy them, for one reason or another. Honestly I think this shows how easy and ergonomic everything else is with Scala, that the difficulties people write about are all about deep type magic and how to handle errors in monadic code. You can always avoid that stuff when it isn't worth it to you.
The type poindexters will tell you that you're giving up all the benefit of Scala the moment you accept any impurity or step back from the challenge of solving everything with types, and you might as well write Java instead, but they're just being jerks and gatekeepers. Scala is a wonderful language, and Scala-as-a-better-Java is a huge step up from Java for writing simple and readable code. It lets you enjoy the lowest hanging fruit of functional programming, the cases where simple functional code outshines OO-heavy imperative code, in a way that Java doesn't and probably never will.
I agree the usual lens, optics, machines, pipes or other higher-kinded libs are completely unnecessary: they solve problems you do not want to have and have dire performance implications, but they are at least correct and allow you to throw code at problems quickly, even though that code sucks in all ways except correctness.
Pipes also don’t necessarily have “dire performance implications”, but it depends a lot on the implementation. Haskell libraries don’t always emphasize real world performance as a top criterion. E.g. see https://github.com/composewell/streaming-benchmarks for some truly wild variations in performance across libraries (disclaimer: I haven’t investigated or verified those numbers.)
Or, as Paul Graham put it in his 1993 book On Lisp: "a bottom-up style in which a program is written as a series of layers, each one acting as a sort of programming language for the one above"
https://paulgraham.com/progbot.html
https://www.paulgraham.com/onlisptext.html
Here is a talk that explains the concept in Clojure, titled Bottom Up vs Top Down Design in Clojure:
https://www.contalks.com/talks/1692/bottom-up-vs-top-down-de...
Also see, for instance, Java. There's Java, the language that keeps improving, and then the Spring ecosystem, which is what 95% of programmers end up having to use professionally, with its heavy "magic" component. Writing services avoiding Spring is going against the grain. It might as well be part of the language as far as professional Java use is concerned.
Communities matter more than language features: Java is all Spring, and Scala is now really a choice between ZIO and Cats.
I have a problem.
Right, I'll design a DSL.
Hmm. Now I have two problems.
I used Scala a few times when it was semi popular, just seemed like Java but with lots of redundant features added. Not sure what the aim was.
But ultimately using Scala at the place I worked at the time was a failure. A couple of my co-workers had introduced it, and I joined the bandwagon at some point, but it just didn't work out.
Many Java developers inside the company didn't want to learn, and it was really hard to hire good Scala developers. The ones who did learn (myself included) wrote terrible Scala for at least the first 6 months, and that technical debt lingered for a long time. When other people outside the team (who didn't know Scala) needed to make changes to our code, they had a lot of trouble figuring things out, and even when they could, the code they wrote was -- quite understandably -- bad, creating extra work for us to review it and get it into shape.
I also feel like Scala suffers from similar complexity/ways-to-do-things problems as C++. I often hear people say things like "C++ can be a safe, consistent language if you just use a subset of it!", and then of course everyone has a subtly (or not-so-subtly) different subset that they consider as The One True Subset. With Scala, you can write some very complex, type-heavy code that's incredibly hard to read if you are not well-versed in type/category/etc. theory (especially if you are using a library like cats or scalaz). Sure, you could perhaps try to come up with some rules around what things are acceptable and what aren't, but I think in this case it's a hard thing to specify, and different people will naturally disagree on what should be allowed.
I really wanted Scala to succeed at our company, but I think that's hard to do. I feel like the ideal case is a small company with just a few tens of developers, all of whom were hired specifically for their Scala expertise, with a product/business that is going to keep the number-of-developers requirement roughly static. But that's probably very rare.
Blub is a great language!
Java may not be the pinnacle of programming languages, but since Java 8, pretty much every feature it's added has been absolutely excellently done.
I had to work on a Scala codebase at some point, and I thought it horrible. I judge a language on how easy it allows you to create an unreadable mess. Scala makes it incredibly easy. And the people that enjoy Scala seem to like "unreadable messiness" as a feature.
I found it fun to learn the basics, and it was interesting to think of problems from a FP approach, but it is never something I would use in the real world.
I vastly prefer Java. The features it imported from Scala were fine, made the language better. It doesn't need to import everything.
And the most important thing Java was always missing until recently, virtual threads, were lacking in Scala too.
(And I'd disagree that virtual threads were all that important compared to language features.)
Records/sealed interfaces (ADTs) are quite clean.
Text Blocks are better in Java IMO. The margin junk in Scala is silly.
Java was the language where "write libraries instead" happened, and it became an absolute burden. So many ugly libraries, frameworks and patterns built to overcome the limitations of a simple language.
Scala unified the tried-and-tested design patterns and library features used in the Java ecosystem into the core of its language, and we're better off for it.
In Java we needed Spring (urghh) for dependency injection. In Scala we have the "given" keyword.
In Java we needed Guava to do anything interesting with functional programming. FP features were slowly added to the Java core, but the power and expressivity of Java FP is woeful compared what's available at the core of Scala and its collections libraries.
In Java we needed Lombok and builder patterns. In Scala we have case classes, named and default parameters and immutability by default.
In the Java ecosystem, optionality comes through a mixture of nulls (yuck) and the crude and inconsistently-used "Optional". In Scala, Option is in the core, and composes naturally.
In Java, checked exceptions infect method signatures. In Scala we have Try, Either and Validated. Errors are values. It's so much more composable.
There's so much more - but hopefully I've made the point that there's a legitimate benefit in taking the best from a mature ecosystem and simple language like Java and creating a new, more elegant and complete language like Scala.
So you don't actually disagree with the article.
It helps to actually read it. The title is in quotes because the point of the article is to refute it.
What comes close is:
#! /usr/bin/env elixir
Mix.install([:jason])
defmodule JsonPrettyPrinter do
  def get_stdin_data do
    :stdio
    |> IO.read(:all)
    |> Jason.decode()
    |> case do
      {:ok, json_data} -> json_data
      _ -> raise "Invalid JSON payload was provided"
    end
  end

  # This was missing from the original snippet; Jason's pretty: true
  # option does the actual pretty-printing.
  def pretty_print_json(data) do
    Jason.encode!(data, pretty: true)
  end
end

JsonPrettyPrinter.get_stdin_data()
|> JsonPrettyPrinter.pretty_print_json()
|> IO.puts()

Contrived example:
ls | where type == 'file' | sort-by size | take 4 | each {|f| {n: $f.name, s: ($f.size | format filesize MB) }} | to json
outputs {
"n": "clippy.toml",
"s": "0.000001 MB"
},
{
"n": "README.md",
"s": "0.000009 MB"
},
{
"n": "rustfmt.toml",
"s": "0.000052 MB"
},
{
"n": "typos.toml",
"s": "0.00009 MB"
}

E = Struct.new(:name, :size, :type)
def ls = Dir.children('.').map{ s=File::Stat.new(_1); E.new(_1, s.size, s.file? ? 'file' : 'dir') }
This becomes valid Ruby: ls.find_all{_1.type == 'file'}.sort_by(&:size).take(4).map{ {n: _1.name, s: _1.size } }.each { puts JSON.pretty_generate(_1) }
(drops your size formatting, so not strictly equivalent)

Which isn't meant to "compete" - nushell looks nice - but to show that the lower-threshold option for those of us who don't want to switch shells is to throw together a few helpers in a language... (you can get much closer to your example with another helper or two and a few more "evil" abuses of Ruby's darker corners, but I'm not sure it'd be worth it; I might add a wrapper for the above to my bin/ though)
ls -l --sort=size | head -n 5 | tail -n 4 | awk '{print $5 " = " $9}' | numfmt --to iec | jq --raw-input --null-input 'inputs | gsub("\r$"; "") | split(" = "; "") | select(length == 2) | {"s": (.[0]), "n": .[1]}'

I think it doesn't even work correctly. ls lists files and directories and then picks the first 4 (it should only select files).
And this also uses awk and jq, which are not just simple "one purpose" tools, but pretty much complete programming languages. jq is not even part of most standard installations, it has to be installed first.
In a way that exactly illustrates the GGP's point: why learn a new language (nushell's) when you can learn awk or jq, which are arguably more generally- and widely-applicable than nushell. Or if awk and jq are too esoteric, you could even pipe the output of `find` into the python or ruby interpreters (one of which you may already know, and are much more generally applicable than nushell, awk, or jq), with a short in-line script on the command line.
That is backwards. I know I said "complete programming languages", but to be fair, awk only shines when it comes to "records processing", jq only shines for JSON processing. nushell is more like a general scripting language — much more flexible.
find -maxdepth 1 -type f -printf '%s %f\n' | sort -n | head -n 5
For the latter part, I'd tend to think that if you're going to use awk and jq, you might as well use Ruby. ruby -rjson -nae 'puts(JSON.pretty_generate({n: $F[1], s: "%.5f MB" % ($F[0].to_i / 1e6)}))'
("-nae" effectively takes an expression on the command line (-e), wraps it in "while gets; ... end" (-n), and adds the equivalent of "$F = $_.split" before the first line of your expression (-a)).

It's still ugly, so still no competition for nushell.
I'd be inclined to drop a little wrapper in my bin with a few lines of helpers (see my other comment) and do it all in Ruby if I wanted to get closer without having to change shells...
https://lucasoshiro.github.io/posts-en/2024-06-17-ruby-shell...
Unfortunately, I don't think Nushell brings much benefit for folks who already know Bash enough to change directories and launch executables, and who already know Python enough to use more complicated data structures/control flow/IDE features.
I'm still rooting for Nushell as I think it's a really cool idea.
I agree that bash sucks, but I really have no motivation to learn something like nushell. I can get by with bash for simpler things, and when I get frustrated with bash, I switch to python, which is default-available everywhere I personally need it to be.
Back to text, though... I'm honestly not sure objects are strictly better than dumb text. Objects means higher cognitive overhead; text is... well, text. You can see it right there in front of you, count lines and characters, see its delimiters and structure, and come up with code to manipulate it. And, again, if I need objects, I have python.
About objects vs. text: I'm convinced that objects are vastly superior. There was a comment about this here with good arguments: https://news.ycombinator.com/item?id=45907248
gci -file | sort-object size | select name, size -first 4 | % { $_.size /= 1MB; $_ } | ConvertTo-Json

The last command is properly cased, because I pressed tab (it auto-completes and fixes the case). The other commands I typed without tab completion. You can write however you want; PS is not case sensitive.
C#: https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-...
Java: https://dev.to/toliyansky/scripting-with-java-3i9k
Go: https://golangcookbook.com/chapters/running/shebang/
https://github.com/igor-petruk/scriptisto will let you generate shebang scripts for pretty much any language
Though, I don't think it has the capability for single-file scripts to declare 3rd-party dependencies to be automatically installed.
The best option I've found for this use case (ad-hoc scripting with third party dependencies) is Deno.
I'm hoping Rust will get there in the end too.
Discussion: https://news.ycombinator.com/item?id=46431028
Sure, for "applications", the ecosystem can be frustrating at times, but I don't think that's what we're talking about here.
I still work on projects that were written under 3.6.
If you care enough, you can also use something like asdf to install an older Python alongside the system one.
Python lets you dynamically import from anywhere. The syntax is a bit funky, but that's what LLMs are for.
With Deno you can just import by relative file path and it just works like you'd expect and the tools support it. I wish more languages worked like that.
....no.
import keyword uses importlib under the hood. It just does a lot of things for you like setting up namespace. But importlib has all the functionality to add the code in any python file cleanly.
My custom agent that I use basically has the functionality to wrap every piece of code it writes as a tool and stores it into python files. During tool calls, it pretty much dynamically imports that code as part of a module within the project. Works perfectly fine.
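As a concrete sketch of that pattern, here's the standard importlib recipe for loading a module from an arbitrary file path (the helper and file names here are invented for the example):

```python
import importlib.util
import pathlib
import sys

def import_from_path(name, path):
    # Build a module spec from a file path, register the module, and
    # execute it -- the same machinery the import statement uses.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register so nested imports resolve
    spec.loader.exec_module(module)
    return module

# Example: write a tiny module to disk, then load it by path.
pathlib.Path("helper_tool.py").write_text("def double(x):\n    return 2 * x\n")
tool = import_from_path("helper_tool", "helper_tool.py")
print(tool.double(21))  # 42
```

It's wordier than a plain import, but it works for any file, anywhere, with no package layout required.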
https://docs.python.org/3/reference/import.html#relativeimpo...
You'd use:
import ...foo.bar

Everyone wants that to just mean "import relative to this file", but it doesn't.
https://taonexus.com/publicfiles/jan2026/171toy-browser.py.t...
it doesn't look like it would be easily derived from Chromium or Firefox, because this code is Python and those don't use Python this way.
By the way is there any feature you'd like to see added to the toy browser? The goal is that one day it's a replacement for Chrome, Firefox, etc. It's being built by ChatGPT and Claude at the moment. Let me know if there are any feature ideas you have that would be cool to add.
Great questions. 1. Yes, for the moment. Like the title of this article suggests - we're using a library! :)
It's great to iterate in Python, which has a large ecosystem of libraries. Believe it or not, there is a chance that in the future it would be able to translate the language into a different one (for example, C++) while using C++ bindings for the same gui libraries. This would speed up its actions by 40x. However, not all of the libraries used have C++ bindings so it could be harder than it looks.
2. Here's the current version of the source code:
https://taonexus.com/publicfiles/jan2026/171toy-browser.py.t...
you can have a quick read through. Originally it was using tkinter for the GUI toolkit. I believe it is still using tkinter, but the AI might be leaning on some other library. As you read it, is it using anything but tkinter for the GUI toolkit?
These libraries are doing a lot of heavy lifting, but I think it is still ending up drawing in tkinter (not handing off rendering to any other library.)
#!/usr/bin/dotnet run
https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-...
https://andrewlock.net/exploring-dotnet-10-preview-features-...
I’m taking the GP seriously instead of dismissing it. Raku looks like more fun than nushell tbh.
print 42 + 99; # 141
print &print.file ; # ...src/core.c/io_operators.rakumod
print &infix:<+>.file; # ...src/core.c/Numeric.rakumod
print ?CORE::<&print>; # True
I barely understood these four example lines.Also I think a Python script is reasonable if you use a type-checker with full type annotations, although they are not a silver bullet. For most scripts I use fish, which is my preferred interactive shell too.
[1]: https://hackage.haskell.org/package/shh
[2]: https://docs.haskellstack.org/en/v3.9.1/topics/scripts/
[3]: https://wiki.nixos.org/wiki/Nix-shell_shebang

On a side note, if you were to use nix's shebang for Haskell scripts with dependencies, you should be using https://github.com/tomberek/- instead of impure inputs, because it allows for cached evaluation. I personally cloned the repo to my personal GitLab account, since it's small and should never change.
LLMs are eval(). Skills are programs. YAML is the motherboard.
@unkulunkulu nails it -- "library as the final language", languages all the way down. Exactly. Skills ARE languages. They teach the interpreter what to understand. When the interpreter understands intent, the distinction dissolves.
@conartist6: "DSL is fuzzy... languages and libraries don't have to be opposing" -- yes. Traditional DSL: parse -> AST -> evaluate. LLM "DSL": read intent -> understand -> act. All one step. You can code-switch mid-sentence and it doesn't care.
The problem with opinionated frameworks like ROR and their BDFLs like DHH is that one opinion is the WRONG number!
The key insight nobody's mentioned: SPEED OF LIGHT vs CARRIER PIGEON.
Carrier pigeon: call LLM, get response, parse it, call LLM again, repeat. Slow. Noisy. Every round-trip destroys precision through tokenization.
Speed of light: ONE call. I ran 33 turns of Stoner Fluxx -- 10 characters, many opinions, game state, hands, rules, dialogue, jokes -- in a single LLM invocation. The LLM simulates internally at the speed of thought. No serialization overhead. No context-destroying round trips.
@jakkos, @PaulHoule: nushell and Python are fine. But you're still writing syntax for a parser. What if you wrote intent for an understander?
Bash is a tragedy -- quoting footguns, jq gymnastics, write-only syntax. Our pattern: write intent in YAML, let the LLM "uplift" to clean Python when you need real code.
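To make the jq-gymnastics point concrete, here is the kind of extraction that takes a pipeline of quoting in bash, done in plain Python (the JSON payload and field names here are made up for illustration):

```python
import json

# Hypothetical payload of the sort usually wrangled with jq.
raw = '{"users": [{"name": "ada", "active": true}, {"name": "bob", "active": false}]}'

# Roughly the jq equivalent:  jq -r '.users[] | select(.active) | .name'
data = json.loads(raw)
active = [u["name"] for u in data["users"] if u["active"]]
print(active)  # ['ada']
```

No quoting footguns, and the intermediate values are real data structures you can inspect or type-check.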
Postel's Law as type system: liberal in what you accept. Semantic understanding catches nonsense because it knows what you MEANT, not just what you TYPED.
Proof and philosophy: https://github.com/SimHacker/moollm/blob/main/designs/stanza...
So do you disagree with any of my points, or my direct replies to other people's points, or is that all you can think of to say, instead of engaging?
Do you prefer to use bash directly? Why? If not, then what is your alternative?
What do you think of Anthropic Skills? Have you used or made any yourself, or can you suggest any improvements? I've created 50+ skills, and I've suggested, implemented, and tested seven architectural extensions -- do you have any criticism of those?
https://github.com/SimHacker/moollm/tree/main/skills
Obviously you use llms yourself, so you're not a complete luddite, and you must have some deeper more substantial understanding and criticism than those two words from your own experience.
How do your own ideas that you blogged about in "My LLM System Prompt" compare to my ideas and experience, in your own "professional, no bullshit, scientific" opinion?
https://mahesh-hegde.github.io/posts/llm_system_prompt/
Your entire blog post on LLM prompts is "I don't like verbiage" in five sentences. Ironic, then, that your entire contribution here is two empty words. I made specific technical points, replied to real people, linked proof. 'Slop' is the new 'TL;DR' -- a confession of laziness dressed as critique. Calling substance slop while contributing nothing? That's actual slop.
Also often, the language doesn't live isolated from its implementation (compiler or interpreter). While theory looks at languages via its semantics, in practice as the OP notes it is about the quality of the implementation and what can be reasonably done with the language.
A recent [1] case is Julia. I think it has hit a kind of sweet spot for language design where new performant code tends to get written in Julia rather than in some other language and bound to it. At its core, it is a simple "call functions, passing data in and getting results out" kind of language, but what the functions ("methods") mean and how the compiler does just-ahead-of-time compilation with deep type specialized code means you can write high level code that optimizes very well. Those mechanics operate under the hood though, which makes for a pleasant programming experience ... and there are loads of cutting edge packages being written in Julia. It is less interesting to look at Julia as "just the language".
[1] recent in programming languages is perhaps anything <= 15 years? .. because it takes time to discover a language's potential.
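Julia picks a method from the runtime types of all arguments and then compiles a specialized native version per concrete type combination. A loose analogue of the dispatch half (single dispatch only, no specialization) can be sketched in Python with `functools.singledispatch`; the `describe` function here is hypothetical:

```python
from functools import singledispatch

# Which implementation runs depends on the argument's runtime type --
# loosely like Julia choosing a method, though Julia dispatches on ALL
# arguments and compiles type-specialized code under the hood.
@singledispatch
def describe(x):
    return f"something generic: {x!r}"

@describe.register
def _(x: int):
    return f"an integer: {x}"

@describe.register
def _(x: list):
    return f"a list of {len(x)} items"

print(describe(42))      # an integer: 42
print(describe([1, 2]))  # a list of 2 items
```

The difference is that Julia does this selection at compile time where it can, which is where the "high level code that optimizes very well" claim comes from.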
Wasn't that the point of the article? That you need both?
my $a of Int = 42;
say $a; # 42
or my $a of Int = "foo";
# Type check failed in assignment to $a; expected Int but got Str ("foo")
int number;
… you choose Pascal style
number : Integer;
years ago a senior developer close to me said "when screening interviews, if i see rails i throw the resume in the trash"
so ironic how trivial/stupid these language-based judgements are
What was the senior's stack?
Not as easy to find in my vicinity, at least good ones, which is of course true for any language and profession in general.
I have RoR on my resume and very fond of it.
But shouldn't the check just be that the candidate has used more than one different stack? It's pretty hard for anyone with real experience to stick to one, and even if they do, that's not a good sign either. Or are you saying those bootcamp people end up learning another stack but still not being very good?
If you had another filtering mechanism, perhaps you could do that. But what other arbitrary, legally acceptable, filter are you going to use to further narrow the search? Can't realistically throw out all the resumes with female-sounding names, for example. What is going to keep you out of trouble is quite limited.
Why not throw out all the "Rails" resumes? If you had all the time in the world you would interview every last person, of course, but in the real world, with real world constraints, you have to pick a few to interview and live with your choice.
To use the internet's favourite analogy: It's like buying a car. Most people would never find it reasonable to test-drive every single one of them. It is just too time consuming to do that. So, instead, one normally looks at signals to try and distill the choice down to a few cars to test drive. You very well might miss out on what is actually your perfect car by doing that, but if you find one that is good enough, who cares?
On the other hand, you only have so much time in the day. It'd take me 3-6 months to give phone screens to every resume that comes in the door for any one engineering role, 8x that for a full 4-hour interview. I have to filter through them somehow if it's my job to hire several people in a month.
You'll obviously start with things that are less controversial: Half of resumes are bot-spam in obvious ways [0]. Half of the remainder can easily be tossed in the circular filing bin by not having anything at all in their resume even remotely related to the core job functions [1].
You're still left with a lot of resumes, more than you're able to phone screen. What do you choose to screen on?
- "Good" schools? I personally see far too much variance in performance to want to use this as a filter, not to mention that you'd be competing even more than normal on salary with FAANG.
- Good grades? This is a better indicator IME for early-career roles, but it's still a fairly weak signal, and you also punish people who had to take time off as a caretaker or who started before they were mature enough or whatever.
- Highest degree attained? I don't know what selection bias causes this since I know a ton of extremely capable PhDs, but if anything I'd just use this to filter out PhDs at the resume screening stage given how many perform poorly in the interviews and then at work if we choose to hire them.
- Gender? Age? ... I know this happens, but please stop.
If there's a strong GitHub profile or something then you can easily pass a person forward to a screen, but it's not fair to just toss the rest of the resumes. They have a list of jobs, skills, and accomplishments, and it's your job to use those as best as possible to figure out if they're likely to come out on top after a round of interviews.
I don't have any comment on rails in particular, but for a low-level ML role there are absolutely skills I don't want to see emphasized too heavily -- not because they're bad, but because there exists some large class of people who have learned those skills and nothing else, and they dominate the candidate pool. I used to give those resumes a chance, and I can't accept 100:1 odds anymore on the phone screen turning into a full interview and hopefully an offer. It's not fair to the candidates, and I don't have time for it either.
And that's ... bad, right? I have some things I do to make it better in some ways (worse in others, but on average trying to save people time and not reject too many qualified candidates) -- pass resumes on to a (brief) written screen instead of outright rejecting them if I think they might have a chance, always give people a phone screen if they write back that I've made a mistake, revisit those filtering rules I've built up from time to time and offer phone screens anyway, etc -- hiring still sucks on both sides of the fence though.
[0] One of my favorites is when their "experience" includes things like how they've apparently done some hyper-specific task they copy-pasted from the job description (which exists not as a skills requirement but as a description of what their future day-to-day looks like), they did it before we pioneered whatever the tech in question was, they did it at several FAANG companies, and using languages and tools those companies don't use and which didn't exist during their FAANG tenure. Maybe they just used an LLM incorrectly to touch up their resume, but when the only evidence I should interview you is a pack of bold-faced lies I'm not going to give the benefit of the doubt.
[1] And I'm not even talking about requiring specific languages or frameworks, or even having interacted with a database for a database-adjacent role. Those sorts of restrictions can often be too overbearing. I mean just the basics of "I need you to do complicated math and program some things that won't wake me up at night" -- and resumes come in without anything suggesting they've ever done either at any level of proficiency (or even a forward or a cover letter stating why their resume appears bare-bones and they deserve a shot anyway).
People, on the other hand, work with ideas, metaphors, expressions of intent, etc. If a language/library makes the communication of those things easier/better/faster; if it can be "written down" clearly, and "read" clearly by a person, then does it really matter into which taxonomic category it fits? We pick horses for courses. That seems about right.
If Rails works for you, is complementary with what you want to achieve, is an accelerator, and is generally well-understood by the people with whom you work, then use it. Alternatively, if the answer to all the previous is Stanza then go with that. There's less "right" and "wrong" in those decisions than there is "advance", or "struggle". It sounds trite. But, use what works. If something doesn't work make something that does, iff that's the most efficient approach.
I think about programming/design as languages/translation in a lot of ways: it's languages all the way down.
It's true, you couldn't really do Express in Java, at least not back then.
But Java's problem is not the mechanics; it's that the community doesn't want nice things.
Anyway, libraries like this were only really feasible after Java 8 because of the reliance on lambdas. Having to instantiate anonymous nested classes for every "function" was a total pain before that.