There might be a way to get Standard ML to output Lua or something, but I'm not that familiar with it. I think it would be an incredible fit for a third backend for Gleam, but they say they aren't adding any more beyond Erlang and JS.
[1]: https://github.com/Grazfather/dotfiles/blob/master/nvim/fnl/...
[1]: https://git.sr.ht/~xerool/fennel-ls/
[2]: https://github.com/rydesun/fennel-language-server
[3]: https://github.com/LuaLS/lua-language-server
A good and comprehensive LSP, for example. It really needs to be on par with other languages to increase adoption.
Niche languages often serve specific purposes that don't require broad tooling. They also sometimes can leverage tooling from more popular languages. Less popular doesn't necessarily mean less capable - some niche languages have excellent tooling in their domain. Not to mention that not all developers need the same type of tooling or prioritize a certain set of tools equally. For example, for me personally, homoiconicity and REPL connectivity in a language are far more important than an LSP or some other things.
Additionally, in my workflow I open my editor across many many terminals, the startup time alone prevented this. With emacsclient it was faster, but I specifically DON'T want to have a window that shows all my other open files.
There are multiple different ways of separating project/workspace/etc. contexts in Emacs - tab-bar mode, persp.el, etc.
I appreciate customization; that's why I moved to Neovim and not to a full-blown GUI IDE (and I'll never use those). But that editor you speak of practically requires me to fix the idiocies of plugin authors that can't be bothered to run 7 tests on their code.
I get it, a lot of programmers are cult followers. I was as well. At one point though, pragmatism should prevail. The editor should ideally be non-laggy and not spit warnings after I move from one buffer to another. Just embarrassing.
Before anyone yells "skill issue!", I'll say that if any plugin author can make the editor crap the bed, then the editor is not good.
There are degrees to freedom. Complete freedom in such environments is a liability, not an asset.
Say hi to the others in the ivory tower. And remind them not to make selective omissions in their supposed counter-arguments to "win" an argument, and that it's a cheap tactic.
As a community of practitioners we should embrace the idea that not all tools have to be “ideal” for all users. Some people like hacking their editor, and some don’t. If software tools sink to the lowest common denominator, like the vast majority of commercial software, we’ll all be worse for it.
2. The ivory tower thing is dedicated to the parent poster sounding a bit elitist and trying to imply I am doing it wrong and he's doing it right -- which I did not deny by the way (which is the really funny part), as my central point was "too much freedom is not good".
3. I completely agree with the notion that not all tools are ideal for all users. I used this sub-thread to express a strong opinion that Emacs allows the "too much freedom" thing that actually becomes much more of a hurdle for those of us who just want to get on with it. I was sure it was going to ruffle feathers, which makes my commenting on it fairly stupid, come to think of it, because I was not looking to pick fights, but just to broadcast an apparently unpopular opinion and never engage with replies. Which I failed spectacularly. :D
> If software tools sink to the lowest common denominator, like the vast majority of commercial software, we’ll all be worse for it.
Here's the part where you and I will disagree. Your statement is correct on the face of it, but I take issue with it because I take it as a hint that Emacs > all other editors. Which cannot be stated as a fact, ever, not for any editor, not just Emacs.
It's their right. ¯\_(ツ)_/¯
With that being said, I also struggled with Emacs. Again, too much freedom. I don't want to care where I'm supposed to put `.el` files. I want a plugin manager and to be able to tell it: install / update / delete. Neovim's Lazy and Mason do that, and that's why I love them. Just earlier today some configuration of a plugin broke; it used git submodules and something got moved and the new revision used another mechanism. I literally fixed it in 5 seconds: deleted the plugin, installed it again, and the configuration persisted so I didn't have to set it up again. Reopen a file (just input `:e` in the command bar), and everything worked right away. Is this too much to ask of Emacs?
> They just seem to make breaking changes a lot more often.
That was my experience as well. Always some warning in the command bar or a full-blown stack trace. No thanks. I don't associate with amateur work.
I've never had to declare Emacs bankruptcy.
10 years of good times with Emacs+evil and 35 years of vim.
Fwiw.
I had a soft spot for Emacs for a _long_ time (almost two decades)... FWIW. ;)
Key word: had.
NOT looking forward to it. But I suppose there's no other way.
I still view Neovim as a huge improvement over Emacs though.
Neovim, unlike Emacs, is an editor. Emacs is not a mere code editor, nor an IDE, text processor, or web browser. Emacs, first and foremost, is a Lisp REPL with a built-in text editor. Without deeply understanding that aspect, one can never truly appreciate the incredible power it grants you.
Do you use your editor to read and annotate pdfs? Or watch videos? Or manage the library of your ebooks? Or track your expenses? Or control project management like Jira? Or keep your knowledge base and note-taking? Or interact with LLMs? Or explore APIs like Postman? Or keep your spaced repetition flash cards like Anki? Or use it for chat over platforms like Telegram and Slack? Or find and read RFCs and manpages? Or to perform web-search, search through your browser history, Wikipedia. Do you have etymology lookup, thesaurus, dictionaries, translation? Or to order pizza? Or measure distances between coordinates on a map? Automate things based on solar calendar or moon phases? Manage all your configs, aka dotfiles? OCR images with text? List, browse and code review Pull Requests, etc., etc.
In what sense exactly is Neovim/VSCode/IntelliJ/whatever a "huge improvement", please tell us?
All of my professional experience, which is at this point substantial (though of course I make no claims that quality stems from quantity!), has shown me that smaller, specialized tools always perform better -- in every meaning of the word.
To me Neovim wins because it's extremely snappy, it has no visual noise, and doesn't show me a warning from a random plugin down there in the command bar almost every minute (something which Emacs apparently will always do). And is configurable in a way I find intuitive. I never cared about all the directories where I am supposed to install my own Elisp files, and I still don't. I want a plugin manager and I want to issue commands to it: install, update, delete. Neovim's Lazy and Mason do exactly what I expect.
I was never, in almost two decades, able to look at how Emacs does things and think to myself "oh but of course it will work like that".
So no, I haven't used Emacs for anything except coding. And I don't intend to use Neovim for anything else as well (with the possible exception of lazygit integration, that one works great). Though I am also working towards having everything except my web browsers be in the terminal.
Again, many will say "skill issue" or "it's just not for you" (the smarter ones). Which, again again, I never denied. But I don't plan to mince words and I am tired of the (to me) unjustified praise for Emacs. It absolutely is not, not just for everyone, but not for most even, IMO.
If you have tamed it and find it intuitive, I am sure it empowers you. I never got to that point and I regret trying to fit in for such an extremely long time. But oh well, we live and learn. We live for sure. :)
I don't care what editor you use or like or moved on to. I use Neovim myself. But like I said, Emacs is not just an editor. Come back when you find a better replacement for a "Lisp REPL with a built-in editor"; maybe then the conversation will start making sense.
The distinction you're making is only technically correct. Emacs is still (also) an editor. I judged its editor abilities and found them lacking. Finally I woke up, understood it's not for me, and moved on.
You can stop arguing now.
My brain mercifully deleted all details. At one point I really had enough (and waiting for 19 years for an editor to get better is IMO having an angelic patience that Emacs did not deserve) and just moved on and forgot all about it.
> *I envy your brain's ability to excise editor-induced trauma.*
That's a really funny way of putting it, thanks. I am simply one of the people who never truly settles and if something irks me for long enough, I ultimately cut the toxic element. And yeah it's often painful.
But that also gave me my amazing wife. If I stuck with my toxic ex I would be absolutely nowhere in life right now.
Also by the same author.
But it does come with some design decisions that I'm a bit ambivalent about and for which I haven't found a good explanation:
- No persistent data structures. I guess this has something to do with limitations of the GC?
- Unhygienic macros combined with a lack of namespaces. Either of those two choices alone (XOR) would be fine, but the combination is janky
- Somewhat peculiar choices in syntax. It's neither Scheme, nor is it Clojure. # starts comments, ; is splice, @ marks literals as mutable...
[0] (example config): https://github.com/TheBlob42/love2d-fennel-neovim
https://git.sr.ht/~technomancy/fennel-lang.org
Janet looks like it is by Calvin Rose (bakpakin): https://github.com/janet-lang/janet/graphs/contributors
% git remote -v
origin https://git.sr.ht/~technomancy/fennel (fetch)
origin https://git.sr.ht/~technomancy/fennel (push)
% git log --reverse
commit 9afe4338ed1816a759cb4b120f89e8f67159ce16
Author: Calvin Rose <calsrose@gmail.com>
Date: Sun Aug 7 18:50:34 2016
First commit.
commit b62a24853e24662f76932a2a81bb77cc25704491
Author: Calvin Rose <calsrose@gmail.com>
Date: Sun Aug 7 19:05:40 2016
Add some examples.
commit 5e3e0fe11e4f56007b3523e54d81d55280ef2204
Author: Calvin Rose <calsrose@gmail.com>
Date: Tue Aug 9 08:27:43 2016
Update README.md
What? People are just creating new languages these days as if they were JavaScript libraries?
Let's say I wanted to make my own programming language. What's the easiest way to prototype it in a way I can share it with the world? Are there programming language development toolkits that come with a tokenizer library and things like that? Should I write my own program to output machine code? Or maybe it's easier to just transpile to JavaScript?
I don't think the average programming language enthusiast is maintaining multiple well-known languages.
I really want to try making a language that is imperative, like really imperative, where every line must start with a verb, just to see what it would look like.
It would look like Tcl.
One way I think I can get rid of that is like this:

32 = foo;

But why do we even need variables? I think the perfect language design would be if you could just do this:

pow(x, 2);
pow(y, 2);
sqrt() = result;

And maybe you could do this:

{
    pow(x, 2);
    pow(y, 2);
    sqrt();
} + 1;
pow(2) = result;

Instead of result = pow(sqrt(pow(x, 2), pow(y, 2)) + 1, 2); that we have today.

The author posts on HN as 'munificent', I think.
You can make a simple language very easily if you design the syntax carefully and restrict its capabilities. It all depends on what you need it for.
In my case I needed a way to create reports from a Turbo Pascal program (TP3 for DOS I think) without having to edit the program and ship a new version. So I made a simple report generating language. It had simple for loops, all variables were global, tokens were all separated by white-space, no user defined sub-routines or functions, a set of predefined procedures specifically designed for the report generating function, arithmetic expressions that were only allowed in assignment statements, interpreted not compiled.
It was actually quite easy to do but of course was not a general purpose language. These days it might be simpler to embed Lua.
Writing code like this is cumbersome and unnecessarily symbol heavy, and reading it isn't really nice either.
I'd rather have the language add that extra complexity into the parser than have me stare down these endless parentheses. Parsing something C-like is not that hard, trust me, I've done it.
Even if I didn't use the full power of a Lisp macro system, it is an absolute joy to manipulate programs written in s-expressions. Being able to cut/copy/paste/jump-[forward/back] by sexpr is really convenient, and is often done nowhere near as well in other languages. I think this is because until the invention of tree-sitter and LSPs (and the former isn't yet widely adopted in editor tech), most editors had regex-based syntax highlighting and some kind of ad-hoc "parser" for a language. This makes them less aware of the language the developer is editing, but it was probably a pragmatic design decision by editor implementers: it's easier than writing a full parser, and it means the editor can still assist even if a program is syntactically ill-formed.
On your other point, I've programmed in many languages over many years, and mostly I did so in an environment with an IDE or powerful language-specific tooling (not tree-sitter) that had a properly good understanding of the syntax and semantics of the language used.
Oh... and I think we can't mention SICP without referencing this (relatively recent) video about why MIT moved from Scheme to Python for intro classes: https://youtu.be/OgRFOjVzvm0
I have gotten much farther (and accordingly learned more from HtDP). It is accurate to think of it as an on ramp for SICP.
What's the best way to learn programming in general? For me, it is to try to build something. Find a problem, pick a Lisp, start building.
Just make sure to have two things: structural editing and the REPL. Without these two, Lisp may feel awkward. But when you have the ability to quickly move any expression around, transpose them, etc., writing programs becomes like composing haikus or something. You basically will be moving some "lego pieces" around. With a connected REPL, you will be able to eval any expression in place, right from where you're writing your code.
I started without these and indeed, that was challenging. Having to balance the parentheses by hand, counting them - omg, I was so obtuse, but I'm glad I didn't give up. You don't have to go too crazy - in the beginning, something that automatically balances the parens, or highlights them when they are unbalanced, and lets you grab an expression and paste it somewhere else would be good enough.
And the REPL. Shit, I didn't know any better, I thought I was supposed to be copy-pasting or typing things into it. That is not the way! Find a way to eval expressions in place. Some editors even show you the result of the computations right where the cursor is.
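To make that "eval in place" idea concrete, here is a minimal sketch in Fennel (any Lisp with a connected REPL works the same way); the names are purely illustrative:

(fn square [x] (* x x)) ;; put the cursor on this form and evaluate it alone to (re)define it

(local xs [1 2 3 4])

;; evaluate just this form to see the result right where you are
(icollect [_ x (ipairs xs)]
  (square x))
;; => [1 4 9 16]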
I have done years of programming prior to discovering Lisp, and I don't really understand how I was okay without it. I wish someone had insisted I try it out. Today I don't even understand how anyone can identify as a programmer and "hate" Lisp just because they have stared at some obscure Lisp code for like two minutes at some point.
And back in my day you couldn't get a CS degree without a class on parsing and interpreting structured data. Usually it was the compilers class. Now we don't require kids to take a compilers class so an entire generation doesn't understand why regexes don't work in all cases. When kids in my organization try to "parse" HTML or XML with regexes, I hand them a copy of the O'Reilly Lex and Yacc book. And when they come back saying they can't understand it, I hand them the Dragon book. I guess I just think we should all feel that particular pain. Sorry for the digression, but I was triggered by "regex" and "parser" used in the same sentence.
Writing code like this is cumbersome and unnecessarily symbol heavy, and reading it isn't really nice either.
I'd rather have the language add those extra parens into the parser than have me stare down these endless semicolons, line breaks, or indentation. Parsing something Lisp-like is not that hard, trust me, I've done it.
> Writing code like this is cumbersome and unnecessarily symbol heavy
Does not make sense in this context, as it mainly applies to Lisp-like languages that use parentheses heavily.
I had read, some years back, that someone did an actual calculation / demonstration that showed that the number of symbol / punctuation characters in Lisp is actually less than in C-based languages, for a block of code with equal functionality in both languages.
I don't have the reference handy. Someone here may know of it, and post it.
(f x y)
vs f(x, y);
Note the extra comma and semicolon. The only place this breaks down is for simple arithmetic expressions like (+ a b) vs a + b, which is trivial enough to ignore (and also goes back in favor of Lisp when you start having more operands).I’m not a Lisp hater, but there’s a reason people make this criticism. I think most people find the infix version easier to read.
(if (<= 0 (+ x m) n) ...)
There is one additional set of parentheses due to the simple addition, which I already mentioned.

The core expression in your example will parse as you have written it, except we have to add ifx:
1> (ifx (if (0 <= x + m && x + m <= n) (prinl 'do-this)))
** expr-1:1: warning: unbound variable x
** expr-1:1: warning: unbound variable m
** expr-1:1: warning: unbound variable x
** expr-1:1: warning: unbound variable m
** expr-1:1: warning: unbound variable n
** expr-1:1: unbound variable x
Though it doesn't work since it's not a complete example with defined variables, we can quote it and expand it to see what the expansion looks like, which answers your question of how you write it in the underlying Lisp:

1> (expand '(ifx (if (0 <= x + m && x + m <= n) (prinl 'do-this))))
(if (and (<= 0 (+ x m))
(<= (+ x m) n))
(prinl 'do-this))
The ifx macro doesn't have to be used for every expression. Inside ifx, infix expressions are detected at any nesting depth. You can put it around an entire file:

(ifx
(defstruct user ()
id name)
(defun add (x y) (x + y))
...)
Autodetection of infix adds some complexity and overhead to the code-walking process that expands macros (not to mention that it's newly introduced), so we wouldn't want to have it globally enabled everywhere, all the time.

It's a compromise; infix syntax in the context of Lisp is just something we can support for a specific benefit in specific use-case scenarios. It's mainly for our colleagues who find it a barrier not to be able to use infix.
In this particular infix implementation, a full infix expression not contained inside another infix expression is still parenthesized (because it is a compound form, represented as a list). You can see from the following large example that it still looks like Lisp; it is a compromise.
I took an example FFT routine from the book Numerical Recipes in C and transliterated it, to get a feel for what it's like to write a realistic numerical routine that contains imperative programming "warts" like using assignments to initialize variables and such:
(defun fft (data nn isign)
(ifx
(let (n nmax m j istep i
wtemp wpr wpi wr wi theta
tempr tempi)
(n := nn << 1)
(j := 1)
(for ((i 1)) ((i < n)) ((i += 2))
(when (j > i)
(swap (data[j]) (data[i]))
(swap (data[j + 1]) (data[i + 1])))
(m := nn)
(while (m >= 2 && j > m)
(j -= m)
(m >>= 1))
(j += m))
(nmax := 2)
(while (n > nmax)
(istep := nmax << 1)
(theta := isign * ((2 * %pi%) / nmax))
(wtemp := sin 0.5 * theta)
(wpr := - 2.0 * wtemp * wtemp)
(wpi := sin theta)
(wr := 1.0)
(wi := 0.0)
(for ((m 1)) ((m < nmax)) ((m += 2))
(for ((i m)) ((i <= n)) ((i += istep))
(j := i + nmax)
(tempr := wr * data[j] - wi * data[j + 1])
(tempi := wr * data[j + 1] + wi * data[j])
(data[j] := data[i] - tempr)
(data[j + 1] := data[i + 1] - tempi)
(data[i] += tempr)
(data[i + 1] += tempi))
(wr := (wtemp := wr) * wpr - wi * wpi + wr)
(wi := wi * wpr + wtemp * wpi + wi))
(nmax := istep)))))
It was very easy to transliterate the C code into the above, because of the infix. The remaining outer parentheses are trivial.

A smaller example is a quadratic-roots calculation, from the test suite. This one has the ifx outside the defun:
(ifx
(defun quadratic-roots (a b c)
(let ((d (sqrt b * b - 4 * a * c)))
(list ((- b + d) / 2 * a)
((- b - d) / 2 * a)))))
sqrt is a function, but treated as a prefix operator with a low precedence, so parentheses are not required.

- it is used everywhere
- carries very little information

Look at the example on the Janet page transcribed to Python syntax [0]. Several differences:

- in Janet nearly every line starts and ends with ( and ), which is just noise
- in Janet there are several ))))
- in Janet there is no special syntax for: function definition, variable definition, collections, statements
- while Python is the opposite: it reads like a mix of English and Mathematics. Also it has special syntax for the ~5 things that you can do with the language, so you can just look at the Python code from afar, without reading it, and you'll have a clue what's going on. It is also helpful when you search for something with your eyes. Also nested parentheses are different-shaped parentheses, so you know which ones match.
Also in theory you could manipulate the Python AST the same way you do in Lisps, both in your editor and at the program level. In practice you can't do that.

for (int i = 0; i < 5; i++) {
    printf("%d\n", i);
}

vs

(loop for i from 0 below 5
  do (format t "~A~%" i))

C has the same number of parentheses and also has curly brackets. C additionally has a bunch of semicolons and a comma not present in the Lisp version. The C version is also omitting the required function encapsulation to actually run this, while the Lisp version can be run as a top-level expression.

The comparison really isn't even close. If you really want, you can always put the trailing )) on new lines like people do with the trailing }}}} for their nested if in a for in a method in a class in C-like languages. The Lisp community has just decided that putting single characters on new lines like that is needlessly noisy and emphasizes characters that are easily ignored by humans and instead manipulated using structural editing.
IMO it's close. Lisp isn't much worse than other languages. Tho it needs special syntax for some common constructs. It helps the human.
Regarding the for loop, if you only loop one statement, you do
for (int i = 0; i < 5; i++) printf("%d\n", i);
If you loop several statements, you do

(loop for i from 0 below 5
  do (progn
       (format t "~A~%" i)
       (format t "~A~%" i)))
Again, Lisp lacks the syntax to group several statements together. In C it ends with );}, indicating a function call inside a control-flow construct or a block. In Lisp ))) can be anything from a function definition to assigning variables or arrays/collections or anything.

And I understand it doesn't resonate with most; I just wanted to highlight how the initial parent comment was very subjective and not very substantive. Some people didn't take the joke so well, I guess it could've sounded a bit passive-aggressive. I personally enjoy both C-like and Lisp-like syntaxes and languages, though I do have a sweet spot for Forth.
But back on topic, Fennel is a great language, and working with LÖVE and Fennel is really nice. And if the parentheses seem off-putting for some, I'd highly encourage you to give it a shot, as you can quickly get past it and see how comfy it feels to have your statements properly demarcated.
S-exprs shine the most when working with XML-like structures. Spinneret[1] was the most fun I ever had working with HTML; every other templating engine feels subpar now that I have tasted that sweet nectar.
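To illustrate the idea (not Spinneret itself, which is a Common Lisp library - just a toy sketch in Fennel with made-up names): nested s-expressions/tables mirror the shape of the markup, so "templating" is just writing the tree directly.

(fn render [node]
  ;; strings are leaf text; tables are [tag child1 child2 ...]
  (if (= (type node) "string")
      node
      (let [[tag & children] node]
        (.. "<" tag ">"
            (table.concat (icollect [_ c (ipairs children)] (render c)))
            "</" tag ">"))))

(print (render ["ul" ["li" "one"] ["li" "two"]]))
;; => <ul><li>one</li><li>two</li></ul>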
This is so wrong. Lisp does not use parentheses heavily. It doesn't even use more parens than any C-like language. I just don't understand the fixation with parentheses. The power of Lisps comes from the fact that everything is an expression. Someone can correct me if I am wrong, since I only have experience with the functional Lisp Clojure, but I believe other Lisps are more or less similar.
So if everything can be evaluated, then you can have a really great REPL experience. You can be inside your favorite editor and have the tightest feedback loop possible. So the absence of statements is actually a great feature of the language which improves ergonomics.
Perhaps because you just haven't used one? I'm not talking about "a REPL" in langs like Python, where you typically have to type stuff into it. Lisp REPLs allow evaluating code directly in the source buffer, by sending it into the REPL. That REPL can be remote. Like seriously remote - NASA once did it on a spacecraft 150 million miles away from Earth. We run ours in a kubernetes cluster. A Lisp REPL allows you to evaluate specific expressions, functions, or regions directly from where you are - without saving, linting, linking, compiling, etc.
> because you don't understand it by yourself.
REPL is not about "understanding your code", it's about interactive development and exploration - it allows you to test your ideas and assumptions in real-time. Imagine being able to send http request, pull some good chunk of data, and then sort it, group it, slice it, dice it, filter it, visualize it - do whatever you want to do with it, directly from your editor. It's incredibly empowering and extremely liberating.
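A hand-wavy Fennel sketch of that workflow (fetch-json is a hypothetical stand-in for whatever HTTP client your environment provides); the point is that each form below can be evaluated on its own, in order, while you look at the intermediate results:

(local orders (fetch-json "https://example.com/api/orders")) ;; eval, inspect

(local paid (icollect [_ o (ipairs orders)]
              (when (= o.status "paid") o)))                 ;; eval, inspect again

(accumulate [sum 0 _ o (ipairs paid)]                        ;; eval to get the total
  (+ sum o.amount))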
The only downsides - after getting used to that, using non-lispy languages sucks all the joy out of programming and you also have to explain to people what it is like.
It's like moving to an island where people have never discovered salt, pepper, and other spices, and though their cuisine looks beautiful - you just can't describe to them what they all are missing. Some even may say - "Hey, I've tried pepper once - but I don't understand your fixation with it" and you'd be like: "have you ever tried putting it into your food?"
Take this JavaScript example:

function addFive(x) { return x + 5; }

or

const addFive = (x) => x + 5;

And its counterpart in ClojureScript:

(defn add-five [x] (+ x 5))

In a JavaScript REPL you can eval the entire thing, but try doing it piecemeal - it makes little sense; i.e., what is (x) or => from the JS perspective, semantically? The Clojure variant, on the other hand, already is a list - a native data structure - and the argument vector is just a vector - another native thing, etc.

So that infamous code-is-data & data-is-code mantra may not seem like a big deal in a trivial example like this, but in practice it's very nice; it allows you to do things otherwise difficult to achieve. Watch this video https://www.youtube.com/watch?v=nEt06LLQaBY and give it some thought: how does one build something like that, and how does using a Lispy language help there, while a more traditional PL would make things much more difficult?
That doesn't even make any remote sense. "I don't get the fixation with the JVM. It sounds like you write code as a black box, then compile and run it to see what it does because you don't understand it by yourself."
LISP is only better when you add source code transformation (which is way easier with its source code looking like the transformed code). But then you introduce "everyone can write their own syntax" which is good and bad given the history of DSLs...
This isn't really true. Most non-Lisp languages I work in, like JS or Ruby or Python, have things like long_expression =\n long_other_expression, or long_expression\n.another_long_expression\n.another_long_expression.
"Sometimes I need multiple lines" is fine, exceptions happen.
But again I ask, visually are those lines super different?
Ditto for things like for loops which have multiple statements in a line.
Why the exact repeat of the earlier post under a different name - or are some bots at play here?
If it is not suspicious then why has the comment been deleted, causing this comment to rise to the top level?
If you can understand the appeal of having JSON in JavaScript, you can understand some of the appeal of Lisp.
I can see how one might have a taste preference for it, but I’m struggling to understand what the tangible benefits are.
An s-expression is a list. An s-expression like (list (list 1 2) (list 3 4) (list 5 6)) is a list of lists. An s-expression like (hash "a" 1 "b" 2) is a hash table/dictionary.
> I also don’t see why representing data in a language’s data structure is inherently better than representing it in a language-agnostic format like JSON
You don't see why having a language data structure like Date is better than having a date-string stored in a JSON value that you need to provide parsing and other functions for? If you have a language data structure like Date, you can add days to a date, extract the month, convert it to a DateTime, etc. If you just have a JSON value, you either need to provide those functions or convert your JSON value to the Date language data structure. It seems like you see the value in using the language's data structure because you then say:
> having libraries to parse and convert data from that format into a language data structure
Also, JSON is just as "language agnostic" as s-expressions. JSON happens to be a first class component of JavaScript, as s-expressions are a first class component of Lisp; if libraries exist to help you deal with JSON in other languages, so, too, can libraries exist to help you deal with s-expressions.
> I’m struggling to understand what the tangible benefits are.
I think you understand the tangible benefits of JSON? It is a human readable/writable data serialization format. It is integrated in JavaScript in such a way that you can easily serialize, parse, and extract data from it without reaching for a library. S-expressions within Lisp do that, but they don't limit you to strings, floats, arrays, and unsorted maps. You don't need to write conversion functions or use them from a library because reading and writing s-expressions are core parts of Lisp.
I don't get why Lisp's s-expressions are much better than using arrays/tables in another language, such that they are a justification for using the language. Are they only significantly superior over a language with only arrays, like C? What's something that is made significantly easier by an s-expression than by arrays/tables?
To make s-expressions language-agnostic, wouldn't you need libraries in the languages to convert between the s-expression as it exists in some specification, and the language's native data structures? This doesn't sound all that different from JSON at this point, or a much more complex specification that defines the representation of all kinds of types, like dates.
It was meant for C specifically. Take a JSON document. Define it in C. Here's what I see on a random cJSON GitHub project:
https://github.com/DaveGamble/cJSON/blob/master/README.md#ex...
Now do that in JavaScript:
const jsonDoc = { "name": "Awesome 4K" ... }
Now make the equivalent jsonDoc in Lisp with s-expressions:

(define json-doc
(hash "name" "Awesome 4K"
"resolutions" (list (hash "width" 1280
"height" 720)
(hash "width" 1920
"height" 1080)
(hash "width" 3840
"height" 2160))))
The C approach is the approach you'd similarly take in many languages, where you create the HashMap, then create the Array, then populate them. Of course, you could "cheat" in many languages by first making a string and then calling the JSON library's `parse` on the string. But this is different from JavaScript, where you can directly create the JSON document. In Lisp, you are always writing s-expressions, both for data and code.

> What's something that is made significantly easier by an s-expression than by arrays/tables?
An s-expression is a form of syntax. Even though it is a "list", the s-expression (list 1 2 3) is an actual list. It's not like you're taking the idea of arrays and tables and replacing it with a list. It's like you're taking the idea:
// Create a List in Java
var l = new ArrayList();
l.add(1);
l.add(2);
l.add(3);
// Print the List in Java
System.out.println(l);
// Prints [1, 2, 3];
// Can we construct a List from the String "[1, 2, 3]"? Is there a fromString() or similar for a Java Object?
And replacing it with the idea:

// Create a List in Lisp
(define l (list 1 2 3))
// Print the List in Lisp
(println l)
// Prints (list 1 2 3)
// Can we construct a List from the String "(list 1 2 3)"?
(eval (read "(list 1 2 3)"))
What about dates? What if we want rationals? What if we want to use a binary-search-tree map instead of a hash?

// Create a Date in JavaScript
const date = new Date()
// Print the Date in JavaScript
console.log(date);
// Prints Wed Apr 16 2025 00:00:00 GMT ...
// Can I JSON.parse that string and receive a date?
Lisp:

// Create a Date in Lisp
(define d (today))
// Print the Date in Lisp
(println d)
// Prints (date 2025 4 16)
// Construct a Date from the String "(date 2025 4 16)"
(eval (read "(date 2025 4 16)"))
The Lisp examples are simplified, but that is the idea.

> To make s-expressions language-agnostic, wouldn't you need libraries in the languages to convert between the s-expression as it exists in some specification, and the language's native data structures?
Yes.
> This doesn't sound all that different from JSON at this point, or a much more complex specification that defines the representation of all kinds of types, like dates.
Correct. It would just be like JSON and whatever bits are standardized are what would be handled.
Some Lisp dialects do not have printed-representations for these; that is a bug, and needs no further discussion.
Common Lisp and Scheme have vectors: they are notated as #(...). That is an S-expression.
CLISP has a #<N>A(...) notation for multidimensional arrays, which it can print and read:
[8]> (make-array '(2 2) :initial-element 0)
#2A((0 0) (0 0))
There is a little bit of a restriction in that backquote doesn't support multi-dimensional arrays:

[2]> (let ((x 42)) `#2A((,x 0) (0 ,x)))
*** - READ: unquotes may not occur in arrays
But this does work for vectors (as required by ANSI CL, I think):

[11]> (let ((x 42)) `#(0 ,x 0))
#(0 42 0)
You might think that #(0 42 0) is just some list variation, but in fact it is a bona-fide vector object, with fast numeric indexing, and without the structural flexibility of lists. Vectors are typically implemented as flat arrays in memory. (Though they could use something else, particularly if large, like radix trees.)

Hash literals look like this in TXR Lisp:
1> #H(() (a 1) (b 2) (c 3))
#H(() (a 1) (c 3) (b 2))
You can see the order of the keys changed when it was echoed back. The first element in the #H syntax, (), gives properties. It's empty for the most general form of hash, which uses equal comparison and doesn't have weak keys or values.

Binary search trees are likewise printable. The #T notation gives a tree (concealing the nodes). The #N notation is for individual tree nodes:
1> (tree)
#T(())
2> (tree-insert *1 1) ; *1 means value from repl line 1
#N(1 nil nil)
3> (tree-insert *1 3)
#N(3 nil nil)
4> (tree-insert *1 7)
#N(7 nil nil)
5> (tree-insert *1 4)
#N(4 nil nil)
6> (tree-insert *1 2)
#N(2 nil nil)
7> (tree-insert *1 8)
#N(8 nil nil)
8> (tree-insert *1 0)
#N(0 nil nil)
9> *1
#T(() 0 1 2 3 4 7 8)
10> (tree-root *1)
#N(3 #N(1 #N(0 nil nil) #N(2 nil nil)) #N(7 #N(4 nil nil) #N(8 nil nil)))
The #T notation is readable. It gives the values in order, but they don't have to be specified in order when you write a literal:

11> #T(() 3 2 1)
#T(() 1 2 3)
If we ask for the tree root, we see the actual nodes, and how they are linked together:

12> (tree-root *11)
#N(2 #N(1 nil nil) #N(3 nil nil))
All these notations are something we can casually use as data in a file or network stream between TXR Lisp programs, or anything else that cares to read and write the notation.

The printed notation of any object in a Lisp is an S-expression. S-expression syntax usually strives for print-read consistency: when an object's printed notation is read by the machine, a similar object is recovered. (In some cases, the original object itself, as with interned symbols.)
In other systems, you can just read the object, and the structure comes with it.
check out Lean 4 then. Its syntax system is based on Racket but —instead of parens— implements stuff like [JSX syntax](https://github.com/leanprover-community/ProofWidgets4/blob/d...) and a [maze](https://github.com/dwrensha/lean4-maze)
The ideal part of it comes down to the language being able to manipulate itself. Make the tokens an array that you can shift, inject, and/or mould into what you need (see the small sketch below).
That being said, that power isn't isolated to just Lisp-y. A few stack languages have it, like Forth, or to plug myself [1]. However, stack languages are a bit harder to optimise.
It isn't that they don't want a complicated parser. It's that you want to be able to easily modify that parser as it's running, without hitting TeX levels of performance slowdowns.
[0] https://srfi.schemers.org/srfi-119/srfi-119.html
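To make the "mould the tokens" point a bit more concrete, here is a tiny Fennel-flavoured sketch (the macro name unless2 is made up to avoid clashing with anything built in): a macro receives its arguments as plain lists and symbols that it can rearrange before the compiler ever evaluates them.

(macro unless2 [condition ...]
  ;; rewrite (unless2 c body...) into (if (not c) (do body...))
  `(if (not ,condition)
       (do ,...)))

(unless2 (= 1 2)
  (print "1 and 2 differ"))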
Code is a list and the main structure is a list. This is genius.
(defun transpose (matrix) (apply #'mapcar #'list matrix))
Based on my own experience, I think I can say that it isn't until one has acquired a reasonable amount of experience with the language that they can fully appreciate its power.
I'm forever indebted to Lisp for giving JS its saving graces (closures and functions as first-class citizens), but I think we need some honesty about what the end experience really is.
Batch-processing-style programming (write to file, save, run, stop, repeat) in Lisp removes some 3/4 of what the language(s) offer. It's especially so if one is navigating code by text rather than structure, e.g. jump-to-next-word instead of jump-to-next-expression.
What is magical about Lisps, esp. the ones that are fully featured in this sense (Common Lisp, Clojure), is that you're programming a running program without losing state when you re-evaluate code.
That is magic. Instead of doing the save-compile-run-setup-conditions-wait-for-result-repeat hundreds of times per session, you run and write the code for your given problem when it arises during gameplay.
On top of that you throw on things like paredit and condition systems, and you're an order of magnitude less frustrated and, dare I say it, more productive than when you constantly have to churn through transitions between the code-in-file and game-in-memory disparity.
For games especially, things like test driven development make no sense because state is so insanely tangled. So you either hope for the best, or you program against a live game. I prefer the second option.
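As a rough Fennel/LÖVE-flavoured sketch of what "programming against a live game" looks like (assuming a REPL connected to the running LÖVE process, as setups like the love2d-fennel config linked earlier provide): the state table survives while individual fn forms are re-evaluated.

(local state {:x 100 :y 100})  ;; holds game state; not re-evaluated on reload

(fn love.update [dt]
  (tset state :x (+ state.x (* 20 dt))))

(fn love.draw []
  ;; tweak and re-evaluate just this form from the REPL; the game keeps
  ;; running and `state` keeps its current values
  (love.graphics.circle "fill" state.x state.y 10))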
Have you used a Lisp with a connected REPL? Not the one that you have to type instructions into - the one that allows you to send any expression at point, with virtually zero ceremony, basically letting you evaluate any part of the program on the fly. And that REPL can even be remote - at work we use one running in a kubernetes cluster; we can change our APIs and experiment not only without redeploying things, we don't even have to save our changes. Can you imagine being able to try your code without saving, linting, linking, compiling, deploying - all of that on the fly? It is a truly mind-opening experience. It's so fucking nice, it's like playing a video game. I do understand why it's appealing to build actual video games that way.
Using a Lisp without structural editing tools and without a REPL is like having a Ferrari without a working engine - you'd have to pedal it to move around, it's ridiculous.
Common Lisp, Racket, or Scheme with a good REPL and editor integration are light years ahead of Fennel, which is little more than Lua-with-parens.
Totally. And Clojure is Java-with-parens; Janet is C-with-parens; LFE is Erlang-with-parens; Elisp is Stallman's erotic fantasy, neatly wrapped in parens. Only Common Lisp is an "actual Lisp for immortal souls" - the rest is for peasants.
I wish it had gradual typing support, though, or at least allowed type annotations for static tooling. Not that dynamic typing isn't a valid choice, but with more and more languages getting gradual typing support it is hard to go back.
I guess we could build something like Coalton but for Lua.
https://github.com/codr7/eli?tab=readme-ov-file#type-checkin...
It's mostly addition by now, rare that something disappears or changes significantly.
No prod use by anyone afaik.
Maybe a static system can be built upon it.
So anything that requires C libs would automatically rule out fennel for a lot of projects that are essentially using someone's lua api as the target platform. Roblox, mud client scripting, openresty, that sort of thing.
And these environments usually have so much added to them - pcre, stdlib extensions, class systems, etc. - that Fennel works best not making any assumptions about any of that. It's just straight-up the Lua semantics, and so anywhere Lua works, it works. I've used it a lot and originally recoiled from this decision, but now I think it is genius.
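For a feel of how thin that layer is, here is a rough sketch of a small Fennel function and, approximately, the Lua the compiler produces for it; the exact output varies between compiler versions, but there is no runtime, class system, or stdlib assumption layered on top:

(fn greet [name]
  (print (.. "hello, " name)))

(greet "world")

;; compiles to roughly:
;;   local function greet(name)
;;     return print("hello, " .. name)
;;   end
;;   return greet("world")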
In contrast, Clojure is intended as the language Rich Hickey wanted for writing the sort of applications he wrote, and the JVM happened to be a powerful (and already existing) platform that was suitable for doing that.