Posted by ryanhn 1 day ago
This assumes the code you wrote is already correct and giving the correct answer, so why bother writing tests? If, however, you accept that you may have got it wrong, figure out the expected outcome through some reliable means (in this case, dig out your old TI-89), get the result, and write your test to assert against a known correct value.
I wouldn't trust any tests that are written this way.
First, the test fails because there's no expected output, and you get to check the existing behaviour pretty-printed. Then, if it's correct, you approve it by promoting the diff into the source code, and it becomes a regression test.
It catches regressions, which is where such semi-automated testing is most useful in my eyes.
No clue, though, why they gave it that weird "expect" name. Basically, it's semi-automated regression testing (rough sketch of the round trip below).
[0] https://github.com/ianthehenry/judge
[1] https://github.com/minikomi/advent-of-code/blob/d73e0b622b26...
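To make that concrete, the same round trip in OCaml's ppx_expect (the library the article is about) looks roughly like this; the uppercase example is purely illustrative. You start with an empty expectation:

    let%expect_test "uppercase" =
      print_string (String.uppercase_ascii "hello");
      [%expect {||}]

The first run fails and shows the actual output as a diff; accepting (promoting) the diff rewrites the file in place:

    let%expect_test "uppercase" =
      print_string (String.uppercase_ascii "hello");
      [%expect {| HELLO |}]

From then on it behaves as an ordinary regression test: any change in output shows up as a diff you either accept or investigate.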
There is a time and place for golden tests, but this reads more like a case for property-based testing.
Um… duh? Get out a calculator. Consult a reference, etc. Otherwise compute the result yourself, and ensure you've done that correctly, ideally as independently of the code under test as possible. Even a lot of mathematical stuff has published "test vectors"; e.g., the SHA algorithms.
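For instance (a minimal sketch; MD5 via OCaml's stdlib Digest module only because it's built in, but the SHA family's published vectors work the same way), the expected digest is taken from the RFC 1321 test suite rather than from running the implementation:

    (* RFC 1321 test vector: MD5("abc") = 900150983cd24fb0d6963f7d28e17f72 *)
    let () =
      assert (Digest.to_hex (Digest.string "abc")
              = "900150983cd24fb0d6963f7d28e17f72")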
> Here’s how you’d do it with an expect test:
    printf "%d" (fibonacci 15);
    [%expect {||}]
> The %expect block starts out blank precisely because you don’t know what to expect. You let the computer figure it out for you. In our setup, you don’t just get a build failure telling you that you want 610 instead of a blank string. You get a diff showing you the exact change you’d need to make to your file to make this test pass; and with a keybinding you can “accept” that diff. The Emacs buffer you’re in will literally be overwritten in place with the new contents:

…you're kidding me. This is "fix the current state of the function, whether correct or not, as the expected output."
Yeah… no kidding that's easier.
Errors get glossed over ("some things just looked incorrect"), but how do you know that any better than you'd know fib(10)?
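i.e. after you hit the keybinding, the test body just reads

    printf "%d" (fibonacci 15);
    [%expect {| 610 |}]

which bakes in whatever the function returned at the moment you approved it, correct or not. (610 happens to be right, but the tooling can't tell you that.)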
You can mix the approaches: have some static assertions (as sanity checks) but make most of them snapshot tests. Like I said, I wouldn't use snapshot testing for a fibonacci method, but there are problems out there that are a real pain to test via static assertions.
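For example (a rough sketch; render_report is a made-up pretty-printer standing in for the kind of output that's tedious to assert on field by field):

    let%expect_test "monthly report layout" =
      let report = render_report ~month:"2024-01" in
      (* hand-written sanity check: a cheap invariant you actually thought about *)
      assert (String.length report > 0);
      (* snapshot of the full layout: starts blank, gets promoted once a human
         has eyeballed the output *)
      print_string report;
      [%expect {||}]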
That said, I don’t see how it’s much different to TDD (write the test to fail, write the code to pass the test) aside from automating adding the expected test output.
So I guess it’s TDD that centres the code, not the test…
"If you already know, terrific—but what are you meant to do if you don’t?"
You're supposed to look at the first GIF, which visualizes a waveform diagram. How are HDL designs tested? Testbenches (akin to unit tests) and model checking. With model checking you define the property you want to test and the model checker tries to find a counterexample.
Said property is so obvious for fibonacci that it is staring you right in the face, and you're consciously trying to avoid looking it in the eye. Fibonacci is defined as fib(n) = fib(n-1) + fib(n-2), so that's what you need to test. This means you can simply test fib(1) = 1, fib(2) = 1, fib(3) = 2 for a fixed set of n to cover the edge cases, then choose a fixed set of random n and make sure that fib(n) = fib(n-1) + fib(n-2) holds. Obviously the only way to be 100% sure is to use a model checker and write code that is bounded in its runtime.
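A sketch of that recurrence check with QCheck (assuming a fibonacci : int -> int under test; QCheck is just one OCaml property-testing library, not something the article uses):

    let () =
      (* fixed base cases as edge-case sanity checks *)
      assert (fibonacci 1 = 1);
      assert (fibonacci 2 = 1);
      assert (fibonacci 3 = 2);
      (* the defining recurrence, checked over randomly chosen n *)
      let recurrence =
        QCheck.Test.make ~count:100 ~name:"fib recurrence"
          QCheck.(int_range 3 30)
          (fun n -> fibonacci n = fibonacci (n - 1) + fibonacci (n - 2))
      in
      QCheck.Test.check_exn recurrence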
Article:
> This is a perfectly lovely test. But think: everything in those describe blocks had to be written by hand. The programmer first had to decide what properties they cared about... then also had to say explicitly what state they expected each field to be in. Then they had to type it all out.
The article is about not getting out the calculator.
Yes, this is the point of testing. You have to think about what you're about to write! Before you write it! The technique in the article completely discards this. It's a terrible way to write tests.
It feels… strangely empowering.
Each of us does half the work, the other half being done by an LLM. The difference is that I specify the desired behavior, while you leave the specification up to the LLM. A little strange if you ask me!
But with LLMs in hand, I can generate entire suites of tests where they're most useful before management has time to complain. All the little nice-to-have-but-hard-to-google environment tweaks are seconds away for the asking. It's finally cost-effective to do things right.
What if writing tests was a joyful experience? - https://news.ycombinator.com/item?id=34350749 - Jan 2023 (122 comments)
Of course I love solving the initial problem / building the feature, etc., but I always find unit tests a calming, easygoing exercise. They are sometimes interesting to think about writing, but normally fairly simple. Either way, once you're testing, you're normally on the home straight with whatever it is you're developing.