If I programmed enough in Lisp I think my brain would adjust to this, but it's almost like I can't fully appreciate the language because it reads in the "wrong order".
I’m not certain how true that really is. This:
    foo(bar(x), quux(y), z);

looks pretty much identical to:

    (foo (bar x) (quux y) z)

And of course if you want to assign them all to variables:

    int bar_x = bar(x);
    char quux_y = quux(y);
    return foo(bar_x, quux_y, z);

is pretty much the same as:

    (let ((bar-x (bar x))
          (quux-y (quux y)))
      (foo bar-x quux-y z))
FWIW, ‘per se’ comes from the Latin for ‘by itself.’

One of the things that sucks about LISP is - master it and every programming language is nothing more than an AST[0].
:-D
can you imagine saying something like
> The fradlis language encourages your average reader to think of essays as syntax [instead of content].
and thinking it reflects well on the language................
A reciprocating saw[0] is a great tool to have. It can be used to manipulate drywall, cut holes in various materials, and generally allow "freehand cutting." But it is not the right tool for making measured, repeatable cuts. It is not the right tool for making perfect right-angle cuts, such as what is needed for framing walls.
In other words, use the right tool for the job.
If a problem is not best expressed with an AST mindset, LISP might not be the right tool for that job. But this is a statement about the job, not about the tool.
The AST aspect of Lisps is absolutely an advantage. It obviates the need for the vast majority of syntax and enables very easy metaprogramming.
(let (bar-x (bar x))
(quux-y (quux y)))
(foo bar-x quux-y z)
Why is the second set of parens necessary? The nesting makes sense to an interpreter, I'm sure, but it doesn't make sense to me.
Is each top-level set of parens a 'statement' that executes? Or does everything have to be embedded in a single list?
This is all semantics, but for my python-addled brain these are the things I get stuck on.
    (let variable-bindings statement1 statement2 ... statementN)

If statementN is reached and evaluates to completion, then its value(s) will be the result value(s) of let. The variable-bindings occupy one argument position in let. This argument position has to be a list, so we can have multiple variables:
(let (...) ...)
Within the list we have about two design choices: just interleave the variables and their initializing expressions:

    (let (var1 value1
          var2 value2
          var3 value3)
      ...)
Or pair them together:

    (let ((var1 value1)
          (var2 value2)
          (var3 value3))
      ...)
There is some value in pairing them together in that if something is missing, you know what. Like, where is the error here?

    (let (a b c d e) ...)

We can't tell at a glance which variable is missing its initializer. Another aspect to this is that Common Lisp allows a variable binding to be expressed in three ways:
    var
    (var)
    (var init-form)
For instance

    (let (i j k (l) (m 9)) ...)

binds i, j, k, and l to an initial value of nil, and m to 9. Interleaved vars and initforms would make initforms mandatory. Which is not a bad thing.
Now suppose we have a form of let which evaluates only one expression (let variable-bindings expr), which is mandatory. Then there is no ambiguity; we know that the last item is the expr, and everything before that is variables. We can contemplate the following syntax:
    (let a 2 b 3 (+ a b)) -> 5
This is doable with a macro. If you would prefer to write your Lisp code like this, you can have that today and never look back. (Just don't call it let; pick another name like le!) If I have to work with your code, I will grok that instantly and not have any problems.
In the wild, I've seen a let1 macro which binds one variable:
    (let1 var init-form statement1 statement2 ... statementn)
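For illustration, minimal sketches of both macros in Common Lisp (the definitions below are my own; only the names le and let1 come from the comments above):

    ;; One-variable let: (let1 x 5 (princ x)) binds x to 5 for the body.
    (defmacro let1 (var init-form &body body)
      `(let ((,var ,init-form)) ,@body))

    ;; Interleaved bindings followed by exactly one expression:
    ;; (le a 2 b 3 (+ a b)) => 5
    (defmacro le (&rest args)
      (let ((bindings (butlast args))
            (expr (car (last args))))
        `(let ,(loop for (var val) on bindings by #'cddr
                     collect (list var val))
           ,expr)))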
1. Just for the sake of other readers, we agree that the code you quoted does not compile, right?
2. `let` is analogous to a scope in other languages (an extra set of {} in C), I like using it to keep my variables in the local scope.
3. `let` is structured much like other function calls. Here the first argument is a list of assignments, hence the first double parenthesis (you can declare without assigning, in which case the double parenthesis disappears since it's a list of variables, or `(variable value)` pairs).
4. The rest of the `let` arguments can be seen as the body of the scope; you can put any number of statements there. Usually these are function calls, so (func args), and it is parenthesis time again.
I get that the parentheses can get confusing, especially at first. One adjusts quickly though; using proper indentation helps.
I mostly know Lisp through Guix, and... SKILL, which is a proprietary derivative from Cadence; they added a few things like inline math, SI suffixes (I like that one), and... a C "calling convention", which I just find weird: the compiler interprets foo(something) as (foo something). As I understand it, this just moves the opening parenthesis before the preceding word prior to evaluation, if there is no space before it.
I don't particularly like it, as that messes with my C instincts, especially when it comes to spotting the scope. I find the syntax more convoluted with it, so harder to parse (not everything is a function, so parenthesis placement becomes arbitrary):
    let( (bar-x(bar(x))
          quux-y(quux(y)))
      foo(bar-x quux-y z)
    )
It distinguishes the bindings from the body.

Strictly speaking, there's a more direct translation using `setq`, which is more analogous to variable assignment in C/Python than the `let` binding, but `let` is idiomatic in Lisps, and closures in C/Python aren't really distinguished from functions.
    (let (bar-x quux-y)
      (setq bar-x (bar x)
            quux-y (quux y))
      (foo bar-x quux-y z))
I just wouldn’t normally write it that way.
    let bar_x = x.bar()
    let quux_y = y.quux()
    return (bar_x, quux_y, z).foo()
As a bit of a digression:
The ML languages, as with most things, get this (mostly) right, in that by convention types are encapsulated in modules that know how to operate on them - although I can't help but think there ought to be more than convention enforcing that, at the language level.
There is the problem that it's unclear - if you can Frobnicate a Foo and a Baz together to make a Bar, is that an operation on Foos, on Bazes, or on Bars? Or maybe you want a separate Frobnicator to do it? (Pure) OOP languages force you to make an arbitrary choice, Lisp and co. just kind of shrug, the ML languages let you take your pick, for better or worse.
People don't work in postfix notation either, even though it would be more direct to parse. What people feel is clearer is much more important.
Rather than obj.f(a, b), we have obj.(f a b).
    1> (defstruct dog ()
         (:method bark (self) (put-line "Woof!")))
    #<struct-type dog>
    2> (let ((d (new dog)))
         d.(bark))
    Woof!
    t
The dot notation is more restricted than in mainstream languages, and has a strict correspondence to underlying Lisp syntax, with read-print consistency.

    3> '(qref a b c (d) e f)
    a.b.c.(d).e.f
Cannot have a number in there; that won't go to dot notation:

    4> '(qref a b 3 (d) e f)
    (qref a b 3 (d)
      e f)
Chains of dot method calls work, by the way:

    1> (defstruct circular ()
         val
         (:method next (self) self))
    #<struct-type circular>
    2> (new circular val 42)
    #S(circular val 42)
    3> *2.(next).(next).(next).(next).val
    42
There must not be whitespace around the dot, though; you simply cannot split this across lines. In other words:

    *2.(next)
      .(next) ;; nope!
      .(next) ;; what did I say?
The "null safe" dot is .? The following check obj for nil; if so, they yield nil rather than trying to access the object or call a method: obj.?slot
obj.?(method arg ...)
    (progn
      (do-something)
      (do-something-else)
      (do-a-third-thing))
The only case where it's a bit different and took some time for me to adjust was that adding bindings adds an indent level.

    (let ((a 12)
          (b 14))
      (do-something a)
      (do-something-else b)
      (setf b (do-third-thing a b)))
It's still mostly top-to-bottom, left-to-right. Clojure is quite a bit different, but it's not a property of Lisps itself, I'd say. I have a hard time coming up with examples usually, so I'm open to examples of being wrong here.

    (define (start request)
      (define a-blog
        (cond [(can-parse-post? (request-bindings request))
               (cons (parse-post (request-bindings request))
                     BLOG)]
              [else
               BLOG]))
      (render-blog-page a-blog request))
https://docs.racket-lang.org/continue/index.html

Here's an example that mixes in a decent amount of procedural code that I'd consider idiomatic: https://github.com/ghollisjr/cl-ana/blob/master/hdf-table/hd...
https://github.com/hipeta/arrow-macros
The common complaint that Common Lisp lacks some feature is often addressed by noting how easy it is to add that feature.
I don't understand why you think this. Can you give an example?
(log (sqrt (sin (* 2 pi x)))
log (sqrt (sin (2 * pi * x)))
Seems as much right-to-left to me as the original one. And just 2 deletions (you missed closing the opening parenthesis) and 2 insertions. The ergonomic problem people face is that the chaining of functions appears in other contexts, like basic OOP.
Some kids trained on banana.monkey().vine().jungle() go into a tizzy when they see (jungle (vine (monkey banana))).
    (-> (* 2 PI x) sin sqrt log)
Also, while `comp` in Clojure is right to left, it is easy to define one that is left to right. And if anything, it even uses fewer parentheses than the OOP example, O(1) vs O(n). Plus, syntax errors can easily take several minutes to fix, because if the syntax is wrong, auto-format doesn't work right, and then you have to read a wall of text to find out where the missing close paren should have been.
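For what it's worth, a left-to-right compose is only a few lines; here is a sketch in Common Lisp rather than Clojure (the name comp-> is invented):

    ;; ((comp-> #'sin #'sqrt #'log) x) computes (log (sqrt (sin x))),
    ;; reading left to right.
    (defun comp-> (&rest fns)
      (lambda (x)
        (reduce (lambda (acc f) (funcall f acc))
                fns :initial-value x)))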
The parentheses really do disappear, just like the hieroglyphics of C-influenced languages; it is a matter of habit.
At least it was for me.
Language shapes the way we think, and determines what we can think about.
- Benjamin Lee Whorf[0]
From the comments in the post:

Ask a C programmer to write factorial and you will likely get something like this (excuse the underbars, they are there because blogger doesn't format code in comments):
    int factorial (int x) {
      if (x == 0)
        return 1;
      else
        return x * factorial (x - 1);
    }
And the Lisp programmer will give you:
    (defun factorial (x)
      (if (zerop x)
          1
          (* x (factorial (- x 1)))))
Let's see how we can get from the LISP version to something akin to the C version. First, let's "modernize" the LISP version by replacing parentheses with "curly braces" and adding some commas and newlines just for fun:
    {
      defun factorial { x },
      {
        if { zerop x },
        1 {
          *,
          x {
            factorial {
              - { x, 1 }
            }
          }
        }
      }
    }
This kinda looks like a JSON object. Let's make it into one and add some assumed labels while we're at it.

    {
      "defun" : {
        "factorial" : { "argument" : "x" },
        "body" : {
          "if" : { "zerop" : "x" },
          "then" : "1",
          "else" : {
            "*" : {
              "lhs" : "x",
              "rhs" : {
                "factorial" : {
                  "-" : {
                    "lhs" : "x",
                    "rhs" : "1"
                  }
                }
              }
            }
          }
        }
      }
    }
Now, if we replace "defun" with the return type, replace some of the curlies with parentheses, get rid of the labels we added, use infix operator notation, and not worry about it being a valid JSON object, we get:

    int
    factorial ( x )
    {
      if ( zerop ( x ) )
        1
      else
        x * factorial ( x - 1 )
    }
Reformat this a bit, add some C keywords and statement delimiters, and Bob's your uncle.

0 - https://www.goodreads.com/quotes/573737-language-shapes-the-...
The citation is relevant to this topic, therefore its use and attribution are warranted.
Batch programs are easy to fit in this model generally. A compiler is pretty clearly a pure function f(program source code) -> list of instructions, with just a very thin layer to read/write the input/output to files.
Web servers can often fit this model well too: a web server is an f(request, database snapshot) -> (response, database update). Making that work well is going to be gnarly in the impure side of things, but it's going to be quite doable for a lot of basic CRUD servers--probably every web server I've ever written (which is a lot of tiny stuff, to be fair) could be done purely functional without much issue.
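As a minimal sketch of that shape in Common Lisp (the request/response/database representations here are invented plists, not any real framework):

    ;; A pure handler: f(request, db-snapshot) -> (response, db-update).
    ;; Inputs and outputs are plain data; no socket or database is touched.
    (defun handle-request (request db-snapshot)
      (let ((name (getf db-snapshot :user-name)))
        (values (list :status 200
                      :body (format nil "Hello, ~a!" name))
                (list :last-seen (getf request :timestamp)))))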
Display can also be made to work: it's f(input event, state) -> (display frame, new state). Building the display frame here is something like an immediate-mode GUI, where instead of mutating the state of widgets, you're building the entire widget tree from scratch each time.
In many cases, the limitation of purely functional code isn't that somebody somewhere has to do I/O, but rather the impracticality of faking immutability if the state is too complicated.
I have respect for OCaml, but that's mostly because it allows you to write mutable code fairly easily.
Roc codifies the world vs core split, but I'm skeptical how much of the world logic can be actually reused across multiple instances of FP applications.
(I'm biased though as I am immersed in Clojure and have never coded in Haskell. But the creator of Clojure has gone out of his way to praise Haskell a bunch and openly admits where he looked at or borrowed ideas from it.)
This is exactly why I'm so aggressive in splitting IO from non-IO.
A pure function generally has no need to raise an exception, so if you see one, you know you need to fix your algorithm not handle the exception.
Whereas every IO action can succeed or fail, so those exceptions need to be handled, not fixed.
> You have to fake all these issues.
You've hit the nail on the head. Every programmer at some point writes code that depends on a clock, and tries to write a test for it. Those tests should not take seconds to run!
In some code bases, the test really does take the full time:

    handle <- startProcess
    while handle.notDone
      sleep 1000ms
    check handle.result

In other code bases, some refactoring is done, and a fake clock is invented:

    fakeClock <- new FakeClock(10:00am)
    handle <- startProcess(fakeClock);
    fakeClock.setTime(10:05am)
    waitForProcess handle
Why not go even further and just pass in a time, not a clock?

    let result = process(start=10:00am, stop=10:05am)

Typically my colleagues are pretty accepting of doing the work to fake clocks, but don't generalise that solution to faking other things, or even to skipping the fakes and operating directly on the inputs or outputs. Does your algorithm need to upload a file to S3? No it doesn't: it needs to produce some bytes and a URL where those bytes should go. That can be done in unit-test land without any IO or even a mocking framework. Then some trivial one-liner higher up the call chain can call your algorithm and do the real S3 upload.
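A sketch of that split in Common Lisp (every name here is hypothetical: render-report-bytes, report-id, upload-to-s3):

    ;; Pure part: compute the bytes and their destination URL.
    ;; Unit-testable with no IO and no mocks.
    (defun plan-upload (report)
      (values (render-report-bytes report)          ; hypothetical
              (format nil "s3://reports/~a.csv" (report-id report))))

    ;; Imperative shell: the trivial one-liner that does the real upload.
    (defun upload-report (report)
      (multiple-value-bind (bytes url) (plan-upload report)
        (upload-to-s3 url bytes)))                  ; hypothetical client call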
* Encapsulation? What's the point of having it if it's perfectly sealed off from the world? Just dead-code eliminate it.
* Private? It's not really private if I can Get() to it. I want access to that variable, so why hide it from myself? Private adds nothing because I can just choose not to use that variable.
* Const? A constant variable is an oxymoron. All the programs I write change variables. If I want a variable to remain the same, I just won't update it.
Of course I don't believe in any of the framings above, but it's how arguments against FP typically sound.
Anyway, the above features are small potatoes compared to the big hammer that is functional purity: you (and the compiler) will know and agree upon whether the same input will yield the same output.
Where am I using it right now?
I'm doing some record linkage - matching old transactions with new transactions, where some details may have shifted. I say "shifted", but what really happened was that upstream decided to mutate its data in-place. If they'd had an FPer on the team, they would not have mutated shared state, and I wouldn't even need to do this work. But I digress.
Now I'm trying out Dijkstra's algorithm, to most efficiently match pairs of transactions. It's a search algorithm, which tries out different alternatives, so it can never mutate things in-place - mutating inside one alternative will silently break another alternative. I'm in C#, and was pleasantly surprised that ImmutableList etc actually exist. But I wish I didn't have to be so vigilant. I really miss Haskell doing that part of my carefulness for me.
C# has introduced many functional concepts. Records, pattern matching, lambda functions, LINQ.
The only thing I am missing, which will come later, is discriminated unions.
Of course, F# is more fitted for the job if you want a mostly functional workflow.
Back when I was more into pushing Haskell on my team (10+ years ago), I pitched the idea something like:
You get: the knowledge that your function's output will only depend on its input.
You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
Those higher-order functions are a tough sell for programmers who only ever want to do things the way they've always done them. But 5 years after that, in Java-land everyone was using maps, folds and filters like crazy (or, in C# land, Selects and Wheres and SelectManys, etc.), with some half-thought-out bullshit reasoning like "it's functional, so it must be good!"
So we paid the price, but didn't get the reward.
The main problem with Monads is you're almost always the only programmer on a team who even knows what a Monad is.
You can say that again!
Right now I'm working in C#, so I wished my C# managed effects, but it doesn't. It's all left to the programmer.
You could I guess have a “before” step that iterates your data stream and logs all the before values, and then an “after” step that iterates after and logs all the after and get something like:
    (->> (map log-before data)
         (map transform-data)
         (map log-after-data))
But doesn’t that cause you to iterate your data 2x more times than you “need” to and also split your logging into 2x as many statements (and thus 2x as much IO)
    for i in 0 to arr.len() {
      new_val = f(arr[i]);
      log("Changing {arr[i]} to {new_val}.\n");
      arr[i] = new_val;
    }
I haven't used Haskell in a long time, but here's a kind of pure way you might do it in that language, which I got after tinkering in the GHCi REPL for a bit. In Haskell, since you want to separate IO from pure logic as much as possible, functions that would do logging return instead a tuple of the log to print at the end, and the pure value. But because that's annoying and would require rewriting a lot of code manipulating tuples, there's a monad called the Writer monad which does it for you, and you extract it at the end with the `runWriter` function, which gives you back the tuple after you're done doing the computation you want to log.

You shouldn't use Text or String as the log type, because using the Writer involves appending a lot of strings, which is really inefficient. You should use a Text Builder, because it's efficient to append Builder types together, and because they become Text at the end, which is the string type you're supposed to use for Unicode text in Haskell.
So, this is it:
    import qualified Data.Text.Lazy as T
    import qualified Data.Text.Lazy.Builder as B
    import qualified Data.Text.Lazy.IO as TIO
    import Control.Monad.Writer

    mapWithLog :: (Traversable t, Show a, Show b) => (a -> b) -> t a -> Writer B.Builder (t b)
    mapWithLog f = mapM helper
      where
        helper x = do
          let x' = f x
          tell (make x <> B.fromString " becomes " <> make x' <> B.fromString ". ")
          pure x'
        make x = B.fromString (show x)

    theActualIOFunction list = do
      let (newList, logBuilder) = runWriter (mapWithLog negate list)
      let log = B.toLazyText logBuilder
      TIO.putStrLn log
      -- do something with the new list...
So "theActualIOFunction [1,2,3]" would print: 1 becomes -1. 2 becomes -2. 3 becomes -3.
And then it does something with the new list, which has been negated now.In the case above, where I constructed a really long string, it depends on the type of string you use. I used lazy Text, which is internally a lazy list of strict chunks of text, so that won't ever have to be in memory all at once to print it, but if I had used the strict version of Text, then it would have just been a really long string that had to be evaluated and loaded into memory all at once before being printed.
What happens if there are multiple steps with logging at each point? Say perhaps a program where we want to:
1) Read records from a file
2) Apply some transformations and log
3) Use the resulting transformations as keys to look up data from a database and log that interaction
4) Use the result from the database to transform the data further if the lookup returned a result, or drop the result otherwise (and log)
5) Write the result of the final transform to a different file
and do all of the above while reporting progress information to the user.
And to be very clear, I'm genuinely curious and looking to learn so if I'm asking too much from your personal time, or your own understanding, or the answer is "that's a task that FP just isn't well suited for" those answers are acceptable to me.
No, that's okay, just be aware that I'm not an expert in Haskell and so I'm not going to be 100% sure about answering questions about Haskell's evaluation system.
IO in Haskell is also lazy, unless you use a library for it. So it delays the action of reading in a file as a string until you're actually using it, and in this case that would be when you do some lazy transformations that are also delayed until you use them, and that would be when you're writing them to a file. When you log the transformations, only then do you start actually doing the transformations on the text you read from the file, and only then do you open the file and read a chunk of text from it, like I said.
As for adding a progress bar for the user, there's a question on StackOverflow that asks exactly how to do this, since IO being lazy in Haskell is kind of unintuitive.
https://stackoverflow.com/questions/6668716/haskell-lazy-byt...
The answers include making your own versions of the standard library IO functions that have a progress bar, using a library that handles the progress bar part for you, and reading the file and writing the file in some predefined number of bytes so you can calculate the progress yourself.
But, like the other commenter said, you can also just do things in IO functions directly.
> if the program were running and the system crashed half way through, we'd still have logs for everything that was processed up to the point it crashed
Design choice. This one is all IO and would export logs after every step:
    forM_ entries $ \entry -> do
      (result, logs) <- process entry
      export logs
      handle result
Remember, if you can do things, you can log things. So you're not going to encounter a situation where you were able to fire off an action, but could not log it 'because purity'. The computation code becomes effectful, but the effects are visible in types and are limited by them, and effects can be implemented both with pure and impure code (e.g. using another effect).
The effect can also be abstract, making the processing code kinda pure.
In a language with unrestricted side effects you can do the same by passing a Writer object to the function. In pure languages the difference is that the object can't be changed observably. So instead its operations return a new one. Conceptually IO is the same with the object being "world", so computation of type "IO Int" is "World -> (World, Int)". Obviously, the actual IO type is opaque to prevent non-linear use of the world (or you can make the world cloneable). In an impure language you can also perform side-effects, it is similar to having a global singleton effect. A pure language doesn't have that, and requires explicit passing.
Now repeat this for every location where you want to log something because you're debugging
But with Haskell, I tend to do less debugging anyway, and spend more time getting the types right to begin with; when there's a function that doesn't work but still type checks, I feed it different inputs in GHCi and reread the code until I figure out why, and this is easy because almost all functions are pure and have no side effects and no reliance on global state. This is probably a sign that I don't write enough tests from the start, so I end up doing it like this.
But, I agree that doing things in a pure functional manner like this can make Haskell feel clunkier to program, even as other things feel easier and more graceful. Logging is one of those things where you wonder if the juice is worth the squeeze when it comes to doing everything in a pure functional way. Like I said, I haven't used it in a long time, and it's partly because of stuff like this, and partly because there's usually a language with a better set of libraries for the task.
Yeah, because it's often not just for debugging purposes. Often you want to trace the call and its transformations through the system and systems. Including externally provided parameters like correlation ids.
Carrying the entire world with you is bulky and heavy :)
> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
You're my type of guy. And literally none of my coworkers in the last 10 years were your type of guy. When they read this, they don't look at it in awe, but in horror. For them, functions should be allowed to have side effects, and for loops is a basic thing they don't see good reason to abandon.
Maps and folds and filters are everywhere now. Why? Because 'functional is good!' ... but why is functional good?
I'm not against functional languages. My point was that if you want to encourage others to try it, those two are not what you want to lead with.
> you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
You mean what C# literally does everywhere because Enumerable is the premier weapon of choice in the language, and has a huge amount of exactly what you want: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...
(well, with the only exception of foreach, which for some odd reason is still a loop).
> But 5 years after that
Since .net 3.5 18 years ago: https://learn.microsoft.com/en-us/dotnet/api/system.linq.enu...
> So we paid the price, but didn't get the reward.
Who is "we", what was the price, and what was the imagined reward?
Slow down and re-read.
>> You get: the knowledge that your function's output will only depend on its input.
>> You pay: you gotta stop using those for-loops and [i]ndexes, and start using maps, folds, filters etc.
What was the price: two things:
- The programmers must stop using for-loops and [i]ndexes.
- The programmers must start using maps/folds/filters/et cetera.
What was the expected reward: the knowledge that their functions' outputs will only depend on their inputs.
In short: programmers who change their behaviour get the benefit of certainty about specific properties of their programs.
What exactly does this mean? Haskell has plenty of non-deterministic functions — everything involving IO, for instance. I know that IO is non-deterministic, but how is that expressed within the language?
Functional programming simply says: separate the IO from the computation.
> Pretty much anything I've written over the last 30 years, the main purpose was to do I/O, it doesn't matter whether it's disk, network, or display.
Every useful program ever written takes inputs and produces outputs. The interesting part is what you actually do in the middle to transforms inputs -> outputs. And that can be entirely functional.
My work needs pseudorandom numbers throughout the big middle, for example, drawing samples from probability distributions and running randomized algorithms. That's pretty messy in a FP setting, particularly when the PRNGs get generated within deeply nested libraries.
The messiness gets worse when libraries use different conventions to manage their PRNG statefulness. This is a non-issue in most languages but a mess in a 100% pure setting.
So a program is a function that transforms the input to the output.
What about managing state? I think that is an important part, and it's easy to mess up.
But that particular context has become impure and is decried as such in the documentation, so carefulness is increased when interacting with it.
Can you please elaborate on this point? I read it as this web page (https://wiki.c2.com/?SeparateIoFromCalculation) describes, but I fail to see why it is a functional programming concept.
"Functional programming" means that you primarily use functions (not C functions, but mathematical pure functions) to solve your problems.
This means you won't do IO in your computation because you can't do that. It also means you won't modify data, because you can't do that either. Also you might have access to first class functions, and can pass them around as values.
If you do procedural programming in C++ but your functions don't do IO or modify (not local) values, then congrats, you're doing functional programming.
Excellent! You will encounter 0 friction in using an FP language, then.
To the extent that programmers find friction using Haskell, it's usually because their computations unintentionally update the state of the world, and the compiler tells them off for it.
Normally what functional programmers will do is pull their state and side effects up as high as they can so that most of their program is functional
Can you actually name something? The only thing I can come up with is working with interesting algorithms or datastructures, but that kind of fundamental work is very rare in my experience. Even if you do, it's quite often a very small part of the entire project.
- The part that receives the connection
- The part that sends back a response
- Interacting with other unspecified systems through IPC, RPC or whatever (databases mainly)
The shit in between, calculating a derivative or setting up a fancy data structure of some kind or something, is interesting but how much of that do we actually do as programmers? I'm not being obtuse - intentionally anyway - I'm actually curious what interesting things functional programmers do because I'm not seeing much of it.
Edit: my point is, you say "Anything else is logic." to which I respond "What's left?"
A LOT, depending on the domain. There are many R&D and HPC labs throughout the US in which programmers work directly with specialists in the hard sciences. A significant percentage of their work is akin to "calculating a derivative".
"When a customer in our East Coast location makes this purchase then we apply this rate, blah blah blah".
"When someone with >X karma visits HN they get downvote buttons on comments, blah blah blah".
So your projects are only moving bits from one place to another? I've literally never seen that in 20 years of programming professionally. Even network systems that are seen as "dumb pipes" need to parse and interpret packet headers, apply validation rules, maintain BGP routing tables, add their own headers etc.
Surely the program calculates something, otherwise why would you need to run the program at all if the output is just a copy of the input?
What interesting things do you do as a programmer, really?
That's a few more than zero. I don't do network programming, that was just an example to show how even the quintessential IO-heavy application requires non-trivial calculations internally.
But of course this heavily depends on the domain you are working in. Some people work in simulation or physics or whatever and that's where the interesting bits begin. (Even then I'm thinking "programming" is not the interesting bit, it's the physics)
I've never seen what you work on, so there is no way I can say this with certainty, but generally people unfamiliar with functional programming have way more code that is (or can be) pure in their code base than they realize. Or, put the other way: if you were to go line by line in your code (skipping lines of comments and whitespace) and give every line a yes/no on whether it performs IO, what percentage actually performs IO? Not lines that are related to IO, or are preparing for or handling the results of IO, but how many lines are the actual line that writes to the file or sends the network packet?
Generally, it's a much smaller percentage than people think, because they usually associate actual IO with things "related to", "preparing for", or "handling results from" IO.
And then after finding that percentage to be lower than expected, it can also be made to be significantly lower by following a few functional programming design approaches.
A big part of it, I'm sure, but it requires some work. Pushing the side effects to the edge requires some abstractions to not directly mess with the original mutable state.
You are, in fact, designing a state diagram from something that was evolving continuously on a single dimension: time. The transitions of the state diagram are the code, and the nodes are the inputs and outputs of that code. Then it became clear that IO only matters when storing and loading those nodes. Because those nodes are finite and well defined, the non-FP code for dealing with them became simpler to write.
- Refreshing daily "points" in some mobile app (handling the clock running backward, network connectivity lapses, ...)
- Deciding whether to send a marketing e-mail (have you been unsubscribed, how recently did you send one, have you sent the same one, should you fail open or closed, is this person receptive to marketing, ...; see the sketch after this list)
- How do you represent a person's name and transform it into the things your system needs (different name fields, capitalization rules, max characters, what if you try to put it on an envelope and it doesn't fit, ...)
- Authorization logic (it's not enough to "just use a framework" no matter your programming style; you'll still have important business logic about who can access what when and how the whole thing works together)
And so on. Everything you're doing is mapping inputs to outputs, and it's important that you at least get it kind of close to correct. Some people think functional programming helps with that.
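For instance, the e-mail decision above can be a single pure function over explicit inputs; a Common Lisp sketch (the names and the one-week threshold are invented):

    ;; Pure business rule: all inputs explicit, output is just a boolean.
    ;; Testable with plain values; no mail server in sight.
    (defun should-send-marketing-email-p (unsubscribed-p last-sent now)
      (and (not unsubscribed-p)
           (or (null last-sent)
               (> (- now last-sent)
                  (* 7 24 60 60)))))  ; at most one per week (times in seconds)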
I can't shake off the feeling we should be defining some clean sort of "business algebra" that can be used to describe these kind of notions in a proper closed form and can then be used to derive or generate the actual code in whatever paradigm you need. What we call code feels like a distraction.
I am wrong and strange. But thanks for the list, it's helpful and I see FP's points.
I'd push back, slightly, in that you need to encode those abstract rules _somehow_, and in any modern parlance that "somehow" would be a programming language, even if it looks very different from what we're used to.
From the FP side of things, they'd tend to agree with you. The point is that these really are generic, abstract rules, and we should _just_ encode the rules and not the other state mutations and whatnot that also gets bundled in.
That implicitly assumes a certain rule representation though -- one which takes in data and outputs data. It's perfectly possible, in theory, to describe constraints instead. Looking at the example of daily scheduling in the presence of the clock running backward: you can define that in terms of inputs and outputs, or you can say that the desired result satisfies (a) never less than the wall clock, (b) never decreases, (c) is the minimal such solution. Whether that's right or not is another story (it probably isn't, by itself -- lots of mobile games have bugs like that allowing you to skip ads or payment forever), but it's an interesting avenue for exploration, given that those rules can be understood completely orthogonally and are the business rules we _actually_ care about, whereas the FP, OOP, and imperative versions must be holistically analyzed to ensure they satisfy business rules which are never actually written down in code.
Especially when reading Rust or C++.
That's code I would prefer to have generated for me as needed in many cases, I'm generally not that interested in manually filling in all the details.
Whatever it is, it hasn't been created yet.
1. A compiler. The actual algorithms and datastructures might not be all that interesting (or they might be if you're really interested in that sort of thing), but the kinds of transformations you're doing from stage to stage are sophisticated.
2. An analytics pipeline. If you're working in the Spark/Scala world, you're writing high-level functional code that represents the transformation of data from input to output, and the framework is compiling it into a distributed program that loads your data across a cluster of nodes, executes the necessary transformations, and assembles the results. In this case there is a ton of stateful I/O involved, all interleaved with your code, but the framework abstracts it away from you.
I think what I engaged with is the notion that most programming "has some side-effects" ("it's not 100% pure"), but much of what I see is like 95% side-effects with some cool, interesting bits stuffed in between the endless layers of communication (without which the "interesting" stuff won't be worth shit).
I feel FP is very, very cool if you got yourself isolated in one of those interesting layers but I feel that's a rare place to be.
But many functions in Common Lisp do mutate things; there is an extensive OO system, and there are hideous macros like LOOP.
I certainly never felt constrained writing Common Lisp.
That said, there are pretty effective patterns for dealing with IO that allow you to stay in a mostly functional / compositional flow (dare I say monads? but that sounds way more clever than it is in practice).
It's less about what the language "allows" you to do and more about what the ecosystem and libraries "encourage" you to do.
Erlang is strictly (?) a functional language, and the reason why it was invented was to do network-y stuff in the telco space. So I'm not sure why I/O and functional programming would be opposed to each other like you imply.
First and foremost Erlang is a pragmatic programming language :)
In direct response, every other language in the mid-2010s said, "Look, we're functional too, we can pass functions to other functions, see?"

    foo.bar()
       .map(x => fireTheMissiles())
       .collect();

C's had that forever:

    void qsort(void *base, size_t nmemb, size_t size,
               int (*compar)(const void *, const void *))
A function pointer is already halfway there. What it lacks is lexical environment capture.
And things that are possible to do with closures never stop amazing me.
Anyways, functional programming is not about purity. It is something that came from academia, with 2 major language families: ML-likes and Lisp-likes, each focusing on certain key features.
And purity is not even the key feature of MLs in general.
If I think hard, I can sort of remember how I used to do things before I worked almost exclusively in languages that natively support closures ("Let's see... I create a state object, and it copies or retains reference to all the relevant variables... and for convenience I put my function pointer in there too usually... But I still need rules for disposing the state when I'm done with it..." It's so much nicer when the language handles all of that bookkeeping for you and auto-generates those state constructs).
A functional programming language is one with first class functions.
Last I checked, when you implement lambda in Lisp, it's also a pointer to the lambda internally.
Local and anonymous functions that capture lexical environments really, really work much better in languages built around GCs.
Without garbage collection a trivial closure (as in javascript or lisps) suddenly needs to make a lot of decisions around referencing data that can be either on the stack or in the heap.
Environments aren’t a thing in Haskell etc. Does that mean it’s not functional?
And “close over” semantics differ greatly depending on the language.
I also wrote a toy resource scheduler at an HTTP endpoint in Haskell[2]. Writing I/O in Haskell was a learning curve but was ultimately fine. Keeping logic separate from I/O was the easy thing to do.
Functional core, imperative shell is a common pattern. Keep the side effects on the outside. Instead of doing side effects directly, just return a data structure that can be used to enact the side effect.
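A Common Lisp sketch of that idea (all names invented):

    ;; Pure core: decides WHAT should happen and returns it as data.
    (defun plan-welcome (user)
      (list (list :send-email :to (getf user :email) :template :welcome)
            (list :log :info "welcome email queued")))

    ;; Imperative shell: the only place where effects actually run.
    (defun run-effects (effects)
      (dolist (effect effects)
        (ecase (first effect)
          (:send-email (apply #'send-email (rest effect)))  ; hypothetical client
          (:log (apply #'write-log (rest effect))))))       ; hypothetical logger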
What I will add is look up how the GHC runtime works, and the STGM. You may find it extremely interesting. I didn't "get" functional programming until I found out about how exotic efficient execution of functional programs ends up being.
So this only really means:
Purely Functional Programming by default.
In most programming languages you can write

    "hello " + readLine()

and this would intermix a pure function (string concatenation) and an impure effect (asking the user to write some text). And this would work perfectly.
By doing so, the order of evaluation becomes essential.
With pure functional programming (by default), you must explicitly separate the part of your program doing I/O from the part of your program doing only pure computation. And this is enforced using a type system focusing on I/O. Thus the difference between Haskell's default `IO` and OCaml, which does not need it, for example.
In Haskell you are forced by the type system to write something like:

    do
      name <- getLine
      let s = "Hello " <> name <> "!"
      putStrLn s

You cannot mix the `getLine` directly in the middle of the concatenation operation. But while this is a very different style of programming, I/O is just more explicit, and it "costs" more, because writing code with I/O is not as elegant and easy to manipulate as pure code. Thus it naturally induces a way of coding that really makes you conscious of the parts of your program that need IO and the parts you could do with only pure functions.
In practice, ... yep, you end up working in a "specific to your application domain" monad that looks a lot like the IO monad, but will most often contain IO.
Another option is to use a free monad for your entire program, which lets you write in your own domain language and control its evaluation (either using IO or another system that simulates IO but is not really IO, typically for testing purposes).
There is world, and there is a model of the world - your program. The point of the program, and all functions, is to interact with the model. This part, data structures and all, is pure.
The world interacts with the model through an IO layer, as in haskell.
Purity is just an enforcement of this separation.
Functional React follows this pattern. The issue is when the programmer thinks the world is some kind of stable state that you can store results in. It's not; the whole point is to be created anew and restart the whole computation flow. The escape hatches are the hooks, and each has a specific usage and pattern to follow to survive world recreation. This is why you should be careful with them, as they are effectively the world for subcomponents. So when you add to the world with hooks, interactions with the addition should stay at the same level.
Where have you ever heard anyone talk about side-effect free programs, outside of academic exercises? The linked post certainly isn't about 100% side-effect/state free code.
Usually, people talk about minimizing side-effects as much as possible, but since we build programs to do something, sometimes connected to the real world, it's basically impossible to build a program that is both useful and 100% side-effect free, as you wouldn't be able to print anything to the screen, or communicate with other programs.
And minimizing side-effects (and minimizing state overall) has a real impact on how easy it is to reason about the program. Being really careful about where you mutate things leads to most of the code being very explicit about what it's doing, and code only affects data that is close to where the code itself is, compared to intertwined state mutation, where things everywhere in the codebase can affect state anywhere.
(* Yes, you can technically write it procedurally like a good C programmer, sure.)
One is the implicit function calls. For example, you'll usually see calls like this: `(+ 1 2)`, which translates to 1 + 2, but I would find it clearer if it was `(+(1,2))`, where you have a certain explicitness to it.
It doesn't stop me from using Lisp languages (Racket is fun, and I've been investigating Clojure), but it took way too long for my brain to grok the implicit function stuff.
My other complaint is how the character `'` can have overloaded meaning, though I'm not entirely sure if this is implementation dependent or not.
In theory ' just means QUOTE; it should not be overloaded (although I've mostly done Common Lisp, so no idea if that changes in other implementations). Can you show an example of overloaded meaning?
Got used to it by typing out (quote …)-forms explicitly for a year. The shorthand is useful at the REPL but really painful in backquote-templates until you type it out.
CL:QUOTE is a special operator and (CL:QUOTE …) is a very important special form. Especially for returning symbols and other code from macros. (Read: Especially for producing code from templates with macros.)
Aside: Lisp macros solve the C-fopen() fclose() dance for good. It even closes the file handle on error, see WITH-OPEN-FILE. That alone is worth it. And the language designers provided the entire toolset for building stuff like this, for free.
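For readers who haven't seen it: the first form below is the standard macro in use; the second is a simplified toy re-implementation (mine, not the standard's) showing the UNWIND-PROTECT pattern it is built from:

    ;; The stream is closed on normal exit AND on a non-local exit (error).
    (with-open-file (in "/etc/hostname" :direction :input)
      (read-line in))

    ;; Roughly how such a macro can be built in user code:
    (defmacro with-open-file-ish ((var path &rest open-args) &body body)
      `(let ((,var (open ,path ,@open-args)))
         (unwind-protect
              (progn ,@body)
           (when ,var (close ,var)))))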
No matter how unusual it seems, it really is worth getting used to.
That said, since I work in C-like languages during the day, I suppose my minor complaint has to do with ease of transition; it always takes me a minute to get reacquainted with Lisp syntax and read Lisp code any time I work with it.
It's really a minor complaint, and one I probably wouldn't have if I worked with a Lisp language all day.
Think of a variable name like `x` as usually referring to a value. Example in C89:

    int x = 5;
    printf("%i\n", x);

The variable is called `x` and happens to have the value of integer 5. In case you know the term, this is an "rvalue" as opposed to an "lvalue". In C-land (and in the compiler) the name of this variable is the string "x". In C-land this is often called the identifier of this variable.
In Python you also have variables and identifiers. Example Python REPL (bash also has this):

    >>> x = 5
    >>> x
    5
In Common Lisp they are called symbols instead of identifiers. Think of Python 3's object.__dict__["x"]. Lisp symbols (a.k.a. identifiers, a.k.a. variable names) are more powerful and more important than in C89 or Python, because there are source code templates. The most important use-case for source code templates is Lisp macros (as opposed to C89 #define-style macros). This is also where backquote and quasiquote enter the picture.
In Lisp you can create a variable name (a.k.a. an identifier a.k.a. a symbol) with the function INTERN (bear with me.)
(intern "x")
is a bit like adding "x" to object.__dict__ in Python.Now for QUOTE:
Lisp exposes many parts of the compiler and interpreter (lisp-name "evaluator") that are hidden in other languages like C89 and Python.
Like with the Python example ">>>" above:
We get the string "x" from the command line. It is parsed (leaving out a step for simplicity) and the interpreter is told to look up variable "x", gets the value 5 and prints that.
(QUOTE …) means that the interpreter is supposed to give you back the piece of code you gave it instead of interpreting it. So (QUOTE x) or 'x — note the dangling single quote — returns or prints the variable name of the variable named "x".
Better example:
    (+ 1 2)

evaluates to the number 3.

    (quote (+ 1 2))

and

    '(+ 1 2)

both evaluate to the internal source code of "add 1 and 2", one step short of actually adding them. In source code templates you sometimes provide code that has to be evaluated multiple times, like the iconic increment "i++" has to be evaluated multiple times in many C89 loops. This is where QUOTE is actually useful. (Ignored a boatload of detail for the sake of understandability.)
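A tiny example of such a template (my own illustration): a macro receives its argument as unevaluated code, and a backquote template can splice it in as many times as needed:

    ;; FORM arrives as code, not as a value; the template evaluates it twice.
    (defmacro twice (form)
      `(progn ,form ,form))

    (twice (print "hi"))  ; expands to (progn (print "hi") (print "hi"))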
For example, in a quoted list you don't need to quote the symbols, because they are already in a quoted expression!

    '(hello hola)

' really just says "do not evaluate what's next, treat it as data".
I suppose my suggestion would break those semantics.
Syntax mattered less than rhythm. Parens weren’t fences, they were measures. The REPL didn’t care if I understood. It played anyway.
If you want to use commas, you can in Lisp dialects I’m familiar with—they’re optional because they’re treated as whitespace, but nothing is stopping you if you find them more readable!
This practice quickly disappeared though. (I don't have an exact time line about this.)
That’s what I get for not double checking… well… basically anything I think during my first cup of coffee.
;)
Lisp listened. Modem sang. PalmPilot sweated. We talked, not debugged.
No thanks
    (defun foo (x)
      (declare (type (integer 0 100) x))
      (* x
         (get-some-value-from-somewhere-else)))
And then do a (describe 'foo) in the REPL to get Lisp to tell me that it wants an integer from 0 to 100.

Take default values for function arguments. In most languages, that's a careful consideration of the nuances of the parser, how the various symbols nest and prioritize, whether a given symbol might have been co-opted for another purpose... In LISP, it's "You know how you can have a list of symbols that are the arguments for the function? Some of those symbols can be lists now, and if they are, the first element is the symbolic argument name and the second element is a default value."
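Concretely, a minimal sketch:

    ;; &optional parameters may be plain symbols or (name default) lists.
    (defun greet (name &optional (greeting "Hello"))
      (format nil "~a, ~a!" greeting name))

    (greet "World")          ; => "Hello, World!"
    (greet "World" "Howdy")  ; => "Howdy, World!"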
I personally have used LISP a lot. It was a little rough at first, but I got it. Despite having used a lot of languages, it felt like learning programming again.
I don't think there's something special about me that allowed me to grok it. And if that were the case, that's a horrible quality in a language. They're not supposed to be difficult to use.
Just because it allows intricate wizardry doesn't mean it is inherently hard to get/use. I think the bigger issue would be ecosystem and shortage of talent pool.
Five exclamation marks, a sure sign of an insane mind
That's what I think about five closing parentheses too... But tbh I am also jealous, because I can't program in Lisp at all.

> Lisp is easier to remember,
I don't feel this way. I'm always consulting the HyperSpec or googling the function names. In this respect it's the same to me as any other dynamically typed language, such as Python.
> has fewer limitations and hoops you have to jump through,
Lisp as a language has incredibly powerful features found nowhere else, but there are plenty of hoops. The CLOS truly feels like a superpower. That said, there is a huge dearth of libraries. So in that sense, there are usually lots of hoops to jump through to write an app. It's just that I like jumping through them, because I like writing code as a hobby. So: fewer limitations, more hoops (supporting libraries I feel the need to write).
> has lower “friction” between my thoughts and my program,
Unfortunately I often think in Python or Bash because those are my day job languages, so there's often friction between how I think and what I need to write. Also AI is allegedly bad at lisp due to reduced training corpus. Copilot works, sorta.
> is easily customizable,
Yup, that's its defining feature. Easy to add to the language with macros. This can be very bad, but also very good, depending on its use. It can be very worth it both to implementer and user to add to the language as part of a library if documented well and done right, or it can make code hard to read or use. It must be used with care.
> and, frankly, more fun.
This is the true reason I actually use Lisp. I don't know why. I think it's because it's really fun to write it. There are no limitations. It's super expressive. The article goes into the substitution principle, and this makes it easy to refactor. It just feels good having a REPL that makes it easy to try new ideas and a syntax that makes refactoring a piece of cake. The Lisp Discord[1] has some of the best programmers on the planet in it, all easy to talk to, with many channels spanning a wide range of programming interests. It just feels good to do lisp.
Which Common LISP or Scheme environment (that runs on, say, Ubuntu Linux on a typical machine from today) gets even close to the past's LISP machines, for example? And which could compete with IntelliJ IDEA or PyCharm or Visual Studio Code?
- truly interactive development (never wait for something to restart, resume bugs from any stack frame after you fixed them),
- self-contained binaries (easy deployment, my web app with all the dependencies, HTML and CSS is ±35MB)
- useful compile-time warnings and errors, a keystroke away, for Haskell levels see Coalton (so better than Python),
- fast programs compiled to machine code,
- no GIL
- connect to, inspect or update running programs (Slime/Swank),
- good debugging tools (interactive debugger, trace, stepper, watcher (on some impls)…)
- stable language and libraries (although the implementations improve),
- CLOS and MOP,
- etc
- good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE), Jetbrains (SLT), Jupyter kernel, Lem, and more: https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
What we might not get:
- advanced refactoring tools -also because we need them less, thanks to the REPL and language features (macros, multiple return values…).
---
For a lisp machine of yesterday running on Ubuntu or the browser: https://interlisp.org/
But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
> good editor support: Emacs, Vim, Atom/Pulsar (SLIMA), VScode (ALIVE)
I can't speak for those other editors, but my experience with Alive has been pretty bad. I can't imagine anyone recommending it has used it. It doesn't do what slime does, and because of that, you're forced to use Emacs.
Calva for Clojure, however, is very good. I don't know why it can't be this way for CL.
> The usage experience was very ergonomic, much more ergonomic than I'm used to with my personal CL set-up. Still, the inability to inspect stack frame variables would bother me, personally.
I don't use them, but I'd recommend Pulsar's SLIMA over the VSCode plugin, because it's older and based on Slime, where ALIVE is based on LSP.
> But Lispworks is the only one that makes actual tree-shaken binaries, whereas SBCL just throws everything in a pot and makes it executable, right?
Right. SBCL has core compression, so as I said, a web app with dozens of dependencies and all static assets is ±35MB, and that includes the compiler and debugger (which allow one to connect to and update a running image, whereas this wouldn't be possible with LispWorks' stripped-down binary). 35MB for a non-trivial app is good IMO (and in the ballpark of a growing Go app, right?)
There's also ECL, if you rely on libecl you can get very small binaries (I didn't explore this yet, see example in https://github.com/fosskers/vend)
> Maybe you tried some time ago? This experience report concludes by "all in all, a great rig"
No, I've read that article before. Not being able to inspect variables in the stack frame kills a lot of the point of a REPL, or even a debugger, so I wouldn't use Alive (and most people don't). But the article represents this as a footnote for a reason.
Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
In Clojure, if people ask what platforms it can reach, people will often say "anywhere." They will tell you to use Flutter through Clojure Dart (unfinished) or Babashka (lots of caveats to using this if all you want is small binaries in your app). Instead of talking about these things as tools with drawbacks, they will lump them all together in a deluge to give the impression the ecosystem is thriving. You did a similar thing in listing every editor under the sun. I doubt you have tried all of these extensively, but I could be wrong.
Same with ECL. Maybe you want the advantages of LISP with smaller binaries. But ECL is slower, supports fewer libraries, and cannot save/dump an image. You're giving up things that you don't normally have to give up in other ecosystems.
But this evangelism works against LISP. People come in from languages with very good tooling and are confused to find that half the things they were told would work do not.
Thanks. Don't hesitate to reach out and give feedback. If you're into web, you might find my newer (opinionated) resource helpful: https://web-apps-in-lisp.github.io/
> Listen, I like Lisp. But Lisp has this weird effect where people, I think in an effort to boost the community, want to present every tool as a viable answer to their problems, no matter how unfinished or difficult to use.
I see. On social media comments, that's kinda true: I'm so used to hearing "there is no CL editor besides Emacs" (which is literally plain false and has been for years, even if you exclude VSCode), and other timeless FUD. Articles or pages on community resources (Cookbook) should be better measured.
> listing every editor under the sun.
there's an Eclipse plugin (simple), a Geany one (simple), a Sublime one (using Slynk, can be decent?), Allegro (proprietary; tried the web version, without color highlighting, surprising), and Portacle and plain-common-lisp are easy-to-install Emacs + CL + Quicklisp bundles…
some would add CLOG as a CL editor.
BTW the IntelliJ plugin is also based on Slime. Not much development activity, though. But a revolution for the Lisp world if you think about it. Enough to make me want to mention it twice or thrice on HN.
> tried them extensively
emphasis on "extensively", so no. SLIMA for Atom/Pulsar was decent.
> ECL… slower…
True, and for this I've been measured; but looking at how vend is doing would be very interesting, as it ships a very small binary, based on libecl.
IDEs provide such environments for the most common languages but major IDEs offer meager functionality for Lisp/Scheme (and other "obscure" languages). With a concerted effort it's possible an IDE could be configured to do more for Lisp. Thing is the amount of effort required is quite large. Since AFAIK no one has taken up the challenge, we can only conclude it's not worth the time and energy to go there.
The workflow I've used for Scheme programming is pretty simple. I can keep as many Emacs windows ("frames") open as necessary with different views of one or several modules/libraries, a browser for documentation, terminals with REPL/compiler, etc. Sort of a deconstructed IDE. Likely it does take a bit more cognitive effort to work this way, but it gets the job done.
https://github.com/matthiasn/talk-transcripts/blob/master/Hi...
https://www.youtube.com/watch?v=028LZLUB24s
Someone helpfully pulled out this chunk, which is a good illustration of why data is better than functions, a key driver of Clojure's design.
It's also extremely fun: you go from building Eliza to a full pattern matcher to a planning agent to a Prolog compiler.
That's why I keep rekindling my learn-lisp effort. It feels like I'm just scratching the surface re: the fun that can be had.
https://github.com/sideshowcoder/core-logic-sudoku-solver/bl...