Also, adding 123456789 to itself eight times on an abacus is a nice exercise, and it's easy to visually control the end result.
base 16: 123456789ABCDEF~16 * (16-2) + 16 - 1 = FEDCBA987654321~16
base 10: 123456789~10 * (10-2) + 10 - 1 = 987654321~10
base 9: 12345678~9 * (9-2) + 9 - 1 = 87654321~9
base 8: 1234567~8 * (8-2) + 8 - 1 = 7654321~8
base 7: 123456~7 * (7-2) + 7 - 1 = 654321~7
base 6: 12345~6 * (6-2) + 6 - 1 = 54321~6
and so on...
or more generally:
base n: sequence * (n - 2) + n - 1
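Here's a quick Python sketch (mine, not the article's script; ascending/descending are made-up helper names) checking that general pattern for bases 2 through 16:

    # Check that "123...(b-1)" * (b-2) + (b-1) == "(b-1)...321" in base b.
    def ascending(b):
        # value of the base-b numeral 1 2 3 ... (b-1)
        return sum(d * b ** (b - 1 - d) for d in range(1, b))

    def descending(b):
        # value of the base-b numeral (b-1) ... 3 2 1
        return sum(d * b ** (d - 1) for d in range(1, b))

    for b in range(2, 17):
        assert ascending(b) * (b - 2) + (b - 1) == descending(b), b
    print("pattern holds for bases 2..16")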
num(b)/denom(b) = b - 2 + (b-1)/denom(b)
so you just need to clear the denominator.

They are also +9 away from being in order.
And then 12345678 * 8 is 98765424, which is +8 away from also being in order.
For binary, it looks like (1-(b-1))/1 = b-10 (that's b-2 written in binary), or (1-(2-1))/1 = 2-2 = 0 in decimal.
For trinary, it looks like (21-(b-1))/12=b-2 or (7-(3-1))/5=5/5=1 in decimal.
For quaternary, it looks like (321-(b-1))/123=b-2 or (57-(4-1))/27=54/27=2 in decimal.
Essentially and perhaps unsurprisingly, the size of the slices in the number pie gets smaller as the pie gets bigger. In binary, the slice is the pie, which is why the division comes out to zero there.
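A small Python sketch (mine; denom is a made-up helper) showing how that slice, the fractional part (b-1)/denom(b), shrinks as the base grows and is the whole pie only in binary:

    from fractions import Fraction

    def denom(b):
        # value of the base-b numeral 1 2 3 ... (b-1)
        return sum((b - k) * b ** (k - 1) for k in range(1, b))

    for b in range(2, 8):
        print(b, Fraction(b - 1, denom(b)))   # 1, 2/5, 1/9, 2/97, ...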
12345679 * 8 = 98765432

"Why include a script rather than a proof? One reason is that the proof is straight-forward but tedious and the script is compact.
A more general reason that I give computational demonstrations of theorems is that programs are complementary to proofs. Programs and proofs are both subject to bugs, but they’re not likely to have the same bugs. And because programs make details explicit by necessity, a program might fill in gaps that aren’t sufficiently spelled out in a proof."
What I was actually good, or at least fast at, was TI-Basic, which was allowed in a lot of cases (though not all). Usually the problems were set up so you couldn’t find the solution using just the calculator, but if you had a couple of ideas and needed to choose between them you could sometimes cross off the wrong ones with a program.
The script the author gives isn’t a proof itself, unless the proposition is false, in which case a counterexample always makes a great proof :p
(Also: complementary != complimentary.)
a) it can be actually helpful to check that some property holds up to one zillion, even though it's not a proof that it holds for all numbers; and
b) if a proof has a bug, a program checking the relevant property up to one zillion is not unlikely to produce a counterexample.
I'm gonna blame autocorrect for that one, but appreciate you catching it. Fixed! :)
There’s a technique for unit testing where you write the code in two languages. If you just used a compiler and were more confident about correspondence, that would miss the point. The point is to be of a different mind and using different tools.
In practice land (real theorem provers), I guess the idea is that it theoretically should be a perfect logic engine. Two issues:
1. What if there's a compiler bug?
2. How do I "know" that I actually compiled "what I meant" to this logic engine?
(which are re-statements of what I said in theory land). You are given, that supposedly, within your internal logic engine, you have a proof, and you want to translate it to a "universal" one.
I guess the idea is, in practice, you just hope that slight perturbations to your mental model, the translation, or even the compiler itself just "hard fail". Just hope it's a very discontinuous space and that violating boundaries fails the self-consistency check.
(As opposed to, for example, physical engineering, which generally doesn't allow hard failure and has a bunch of controls and guards in mind, and it's very much a continuum).
A trivial example is how easy it is to just typo a constant or a variable name in a normal programming language, and the program still compiles fine (this is why we have tests!). The idea is that, from trivial errors like that all the way up to fundamental misconceptions and such, you can catch perturbations to the ideal, I guess, be they small or large. I think what makes one of these theorem provers minimally good is that you can't easily, accidentally encode a concept wrong (from high-level model A to low-level theorem-proving model B), for a variety of reasons. Then of course, runtime efficiency, ergonomics etc. come later.
Of course, this raises the question of just how much "power" certain models bring - my friend is doing a research project with these, and something as simple as "proving a DFS works to solve a problem" is apparently horrible.
I’ve never seen a more succinct explanation of the value of coding up scripts to demonstrate proofs.
I think I’ll tighten it up to “proofs have bugs” in the future.
0.987654321/0.123456789 = (1.11111111-x)/x = 1.11111111/x - 1 where x = 0.123456789
You can approximate 1.11111111 by 10/9 and approximate x = 0.123456789 using y = 0.123456789ABCD... = 0.123456789(10)(11)(12)(13)..., that is, a number in base 10 that is not written correctly and has digits that are greater than 9. I.e. y = sum_i>0 i/10^i
Now you can consider the function f(t) = t + 2 t^2 + 3 t^3 + 4 t^4 + ... = sum_i>0 i*t^i and y is just y=f(0.1).
And also consider an auxiliary function g(t) = t + t^2 + t^3 + t^4 + ... = sum_i>0 1*t^i . A nice property is that g(t)= 1/(1-t) when -1<t<1.
The problem with g is that it lacks the coefficients, but that can be solved taking the derivative. g'(t) = 1 + 2 t + 3 t^2 + 4 t^3 + ... Now the coefficients are shifted but it can be solved multiplying by t. So f(t)=t*g'(t).
So f(t) = t * (1/(1-t))' = t * (1/(1-t)^2) = t/(1-t)^2
and y = f(0.1) = .1/.9^2 = 10/81
then 0.987654321/0.123456789 ~= (10/9-y)/y = 10/(9y)-1 = 9 - 1 = 8
Now add some error bounds using the Taylor method to get the difference between x and y, and also a bound for the difference between 1.11111111 and 10/9. It should take like 15 minutes to get all the details right, but I'm too lazy.
(As I said in another comment, all these series have good convergence for |z|<1, so by standard methods of complex analysis all the series tricks are correct.)
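In lieu of the error bounds, here's a quick numeric sanity check of the chain above (plain Python, my own sketch, not from the comment):

    x = 0.123456789
    y = 10 / 81                 # f(0.1) = 0.1 / (1 - 0.1)**2
    print((10 / 9 - y) / y)     # exactly 8 in exact arithmetic; ~8.0 in floats
    print(0.987654321 / x)      # ~8.00000007, so the approximation is tight
    print(abs(x - y))           # ~1.1e-9, the gap a Taylor bound would control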
If you multiply term by term, every term has coefficient 1, of course. There are n terms with exponent n+1, made from the n ways to split n+1 as a sum of the first exponent and the second exponent.
Eg 1+5, 2+4, 3+3, 4+2, 5+1.
So (1/9)^2 = (sum 1/10^i)^2 = 1/10 sum i/10^i
The derivative trick is more useful generally, but this method gets you the solution to 0.12345678.. in a quick way that's also easier to justify.
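A tiny Python check (mine, not from the comment) that the term-by-term squaring really matches (1/10) sum i/10^i:

    lhs = (1 / 9) ** 2
    rhs = sum(i / 10 ** (i + 1) for i in range(1, 60))   # truncated series
    print(lhs, rhs, abs(lhs - rhs) < 1e-15)              # both ~0.0123456790...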
In general, sum(x^k, k=1…n) = x(1-x^n)/(1-x).
Then sum(kx^(k-1), k=1…n) = d/dx sum(x^k, k=1…n) = d/dx (x(1-x^n))/(1-x) = (nx^(n+1) - (n+1)x^n + 1)/(1-x)^2
With x=b, n=b-1, the numerator as defined in TFA is n = sum(kb^(k-1), k=1…b-1) = ((b-2)b^b + 1)/(1-b)^2.
And the denominator is:
d = sum((b-k)b^(k-1), k=1..b-1) = sum(b^k, k=1..b-1) - sum(kb^(k-1), k=1..b-1) = (b-b^b)/(1-b) - n = (b^b - b^2 + b - 1)/(1-b)^2.
Then, n-(b-1) = (b^(b+1) - 2b^b - b^3 + 3b^2 - 3b +2)/(1-b)^2.
And d(b-2) = the same thing.
So n = d(b-2) + b - 1, whence n/d = b-2 + (b-1)/d.
We also see that the dominant term in d will be b^b/(1-b)^2 which grows like b^(b-2), which is why the fractional part of n/d is 1 over that.
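A short Python sketch (my own; n and d are computed directly from the digit sums) confirming the closed forms and the identity n = d(b-2) + (b-1); note (1-b)^2 is written as (b-1)^2 here:

    for b in range(2, 20):
        n = sum(k * b ** (k - 1) for k in range(1, b))        # 987654321 analogue
        d = sum((b - k) * b ** (k - 1) for k in range(1, b))  # 123456789 analogue
        assert n * (b - 1) ** 2 == (b - 2) * b ** b + 1
        assert d * (b - 1) ** 2 == b ** b - b ** 2 + b - 1
        assert n == d * (b - 2) + (b - 1)
    print("closed forms check out for bases 2..19")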
I disagree with the author that a script works as well as a proof. Scripts are neither constructive nor exhaustive.
┌───┬───┬───┐
│ 7 │ 8 │ 9 │
├───┼───┼───┤
│ 4 │ 5 │ 6 │
├───┼───┼───┤
│ 1 │ 2 │ 3 │
├───┼───┼───┤
│ 0 │ . │ │
└───┴───┴───┘
I remember seeing that (14787 + 36989) / 2 would produce 25888, in that the mean of the geometric shape traced by the two sequences would average out in the middle like that.

(147 + 369) / 2 = 258
and
(741 + 963) / 2 = 852
(741 + 963)/2 = (700+900)/2 + (40+60)/2 + (1+3)/2; it's just averaging in each decimal place.
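A small Python check (mine) of the three examples, confirming that every decimal place averages to a whole digit, so no carries ever appear:

    for a, b, mid in [(14787, 36989, 25888), (147, 369, 258), (741, 963, 852)]:
        assert (a + b) // 2 == mid
        assert all((int(da) + int(db)) % 2 == 0 and (int(da) + int(db)) // 2 == int(dm)
                   for da, db, dm in zip(str(a), str(b), str(mid)))
    print("digit-by-digit averaging checks out")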
It's unfortunate that we have 5 fingers.
It was only as an adult that I realised nobody around me counted this way. You are the first person I have found who talked about this method, so I am glad to find this comment of yours.
255 if you use both hands!
More like 1023 if you also use thumbs, but I prefer to use them as carry/overflow bits.
It's so natural, useful and lends well to certain numerical tricks. We should explicitly be teaching binary to children earlier.
When I was a kid I realized that I could count the fives on the right hand (1 finger for each 5 on the left), which brought me to 25.
It was only when I was traveling in Asia and watching people at markets that I realized I could use my thumb to count my 12 other finger phalanges, which brought the total to 144. You just need to know your multiplication table of 12 :)
This gives you the range 0..99. Sweeet.
741 + 369 & 963 + 147 | 123 + 987 & 321 + 789 (left right | up down)
159 + 951 & 753 + 357 | 258 + 852 & 456 + 654 (diagonally | center lines)
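A quick Python check (mine): in every listed pair the keys are reflections through the 5, so corresponding digits sum to 10 and each pair of three-digit numbers sums to 1110:

    pairs = [(741, 369), (963, 147), (123, 987), (321, 789),
             (159, 951), (753, 357), (258, 852), (456, 654)]
    assert all(a + b == 1110 for a, b in pairs)
    print("every pair sums to 1110")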
the design of a keypad... it unintentionally contains these elegant mathematical relationships.
I call this phenomenon: outcomes of human creations can be "funny and odd", and everybody understands that eventually there will always be something unpredictable.
For non-Americans and/or those too young to remember when landline service was still dominant, in the 90s and early 2000s AT&T ran a collect-call service accessible through the number 1-800-CALL-ATT (1-800-225-5288) and promoted it with ads featuring comedian Carrot Top. And if you don't know who Carrot Top is, maybe that's for the best.
┌──────╖
│ OK ║
╘══════╝
https://news.ycombinator.com/formatdoc

┌──────────╖
│ CANCEL ║
╘══════════╝
(for posterity) https://math.stackexchange.com/a/2268896
Apparently 1/9^2 is well known to be 0.12345679(012345679)...
EDIT: Yes it's missing the 8 (I wrote it wrong initially): https://math.stackexchange.com/questions/994203/why-do-we-mi...
Interesting how it works out, but I don't think it is anywhere close to as intuitive as the parent comment implies. The way it's phrased made me feel a bit dumb because I didn't get it right away, but in retrospect I don't think anyone would reasonably get it without context.
Eg 12345679*6*9 = 666666666
1/81 is 0.012345679012345679....
no 8 in sight
.123456789
then add 10 on the end, as the tenth digit after the decimal point, to get .123456789(10)
where the parentheses denote a "digit" that's 10 or larger, which we'll have to deal with by carrying to get a well-formed decimal. Then carry twice to get .12345678(10)0
.1234567900
So for a moment we have two zeroes, but now we need to add 11 to the 11th digit after the decimal point to get .1234567900(11)
or after carrying .12345679011
and now there is only one zero.

Then you have (x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...) + (x^3 + x^4 + x^5 + x^6 + ...) (count the number of occurrences of each power of x on the right-hand side)
and from the sum of a geometric series the RHS is x/(1-x) + x^2/(1-x) + x^3/(1-x) + ..., which itself is a geometric series and works out to x/(1-x)^2. Then put in x = 1/10 to get 10/81.
Now 0.987654... = 1 - 0.012345... = 1 - (1/10) (10/81) = 1 - 1/81 = 80/81.
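The same argument in exact rationals, as a small Python sketch (mine, using fractions.Fraction): the infinite repeating decimals give exactly 8, while the 9-digit truncations give a ratio just a hair above 8.

    from fractions import Fraction

    repeating = (1 - Fraction(1, 81)) / Fraction(10, 81)   # (80/81) / (10/81)
    truncated = Fraction(987654321, 123456789)
    print(repeating)            # 8
    print(float(truncated))     # ~8.0000000729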
1/9 = 0.1111...
1/81 = 1/9 * 1/9 = 0.111... * 0.111... =
Sum of:
0.0111...
0.00111...
0.000111...
...
= 0.012345...

0.1111... is just a notation for (x + x^2 + x^3 + x^4 + ...) with x = 1/10
1/9 = 0.1111... is a direct application of the x/(1-x) formula
The sum of 0.0111... + 0.00111... ... = 0.012345... part is the same as the "(x + 2 x^2 + 3 x^3 + ...) = (x + x^2 + x^3 + x^4 + ...) + (x^2 + x^3 + x^4 + x^5 + ...)" part (but divided by 10)
And 1/81 = 1/9 * 1/9 ... part is the x/(1-x)^2 result
1/(b-1) = 0.1111...
1/((b-1)^2) = 1/(b-1) * 1/(b-1) = 0.111... * 0.111... =

The use of series is a little "sloppy", but x + 2 x^2 + 3 x^3 + ... has absolute uniform convergence when |x|<r<1, and even more importantly it's true even for complex numbers |z|<r<1.
The super nice property of complex analysis is that you can be almost ridiculously "sloppy" inside that open circle and the Conway book will tell you everything is ok.
[I'll post a similar proof, but mine use -1/10 and rounding, so mine is probably worse.]
987,654,321 + 123,456,789 = 1,111,111,110
1,111,111,110 + 123,456,789 = 1,234,567,899 \approx 1,234,567,890
So 987,654,321 + 2 x 123,456,789 \approx 10 x 123,456,789
Thus 987,654,321 / 123,456,789 \approx 8.
If you squint you can see how it would work similarly in other bases. Add the 123... equivalent once to get the base-independent series of 1's, add a second time to get the base-independent 123...0.
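A rough Python sketch (mine; asc/desc are made-up helper names) of the same squint in a few bases: add the ascending number once to the descending one to get 111...10 in base b, add it again and you land exactly (b-1) above b times the ascending number.

    def asc(b):   # 123...(b-1) in base b
        return sum(k * b ** (b - 1 - k) for k in range(1, b))

    def desc(b):  # (b-1)...321 in base b
        return sum(k * b ** (k - 1) for k in range(1, b))

    for b in (8, 10, 16):
        ones = desc(b) + asc(b)
        assert ones == sum(b ** k for k in range(1, b))    # 111...10 in base b
        assert ones + asc(b) == b * asc(b) + (b - 1)
    print("ok for bases 8, 10, 16")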
> 987654320 / 123456790
8.0
I've decremented the numerator and incremented the denominator:

    ( 987654321 - 1 )
    ----------------- = 8
    ( 123456789 + 1 )
Works in other bases. TXR Lisp, base 4:

    1> (/ (poly 4 '(3 2 1)) (poly 4 '(1 2 3)))
    2.11111111111111
    2> (/ (poly 4 '(3 2 0)) (poly 4 '(1 2 4)))
    2.0
It also works for base 2, which is below the lowest base used in the article: the Python code goes from 3. For base 2, the ratio is 1/1. When we apply the correction, we get (1 - 1) / (1 + 1) = 0, which is 2 - 2.
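A quick Python sketch (mine; asc/desc are made-up helpers) showing the decrement/increment trick is exact in every base, because num = den*(b-2) + (b-1) implies num - 1 = (den + 1)*(b-2):

    def asc(b):   # 123...(b-1) in base b
        return sum(k * b ** (b - 1 - k) for k in range(1, b))

    def desc(b):  # (b-1)...321 in base b
        return sum(k * b ** (k - 1) for k in range(1, b))

    for b in range(2, 20):
        assert divmod(desc(b) - 1, asc(b) + 1) == (b - 2, 0), b
    print("(num - 1) / (den + 1) == b - 2 for bases 2..19")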
* David Goldberg, 1991: https://dl.acm.org/doi/10.1145/103162.103163
* 2014, "Floating Point Demystified, Part 1": https://blog.reverberate.org/2014/09/what-every-computer-pro... ; https://news.ycombinator.com/item?id=8321940
* 2015: https://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloa...
1 / 1 = 1 = b - 1
1 % 1 = 0 = b - 2
they are the other way around, see for example the b=3 case:

21 (base 3) = 7
12 (base 3) = 5
7 / 5 = 1 = b - 2
7 % 5 = 2 = b - 1

In base 2 (and only base 2), denom(b) >= b-1, so the "fractional part" (b-1)/denom(b) carries into the 1's (units) place, which then carries into the 2's (b's) place, flipping both bits.
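A small Python sketch (mine; asc/desc are made-up helpers) showing that quotient/remainder swap happening only in base 2:

    def asc(b):   # 123...(b-1) in base b
        return sum(k * b ** (b - 1 - k) for k in range(1, b))

    def desc(b):  # (b-1)...321 in base b
        return sum(k * b ** (k - 1) for k in range(1, b))

    for b in range(2, 10):
        print(b, divmod(desc(b), asc(b)))
    # base 2 -> (1, 0), i.e. (b-1, b-2); bases 3..9 -> (b-2, b-1)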