Hacker News | new | past | comments | ask | show | jobs | submit | cdegroot's comments | login

The book goes back a bit more, so you'd end at Church :-)


There's now a list of errata, which I'll keep working on: https://berksoft.ca/gol/errata_1.html


Yup. They have a lot of resources available on how to do it but what gets printed is 100% the author's (or publisher's) responsibility.


It's... "different". I think there are two big things going on: one is the fact that "refactoring" never ends, as you can mold the language around your problem. Ultimately that level of expressiveness makes the hard parts of your code more concise, more readable (as the code ends up being closer to the domain at hand), and therefore more maintainable.

The other thing, which it shares with Smalltalk and which I've seen pretty much nowhere else (Erlang comes very close), is the interactivity. You code inside a running system. Yes, other languages have things they call "the REPL", but they can't deal with classes changing shape, reloading code, etc. It makes the coding cycle much shorter; quicker feedback is better feedback, so you end up going faster (and iterating through complex stuff, like that macro that will really nail how clean your top-level code is, becomes much more doable).

When I mentor coders, I often talk about malleable code, like the clay on a potter's table. As soon as you stop working it, things start solidifying and your code turns into something unchangeable and brittle. I think Lisp's traits (which I otherwise only found in Smalltalk) help you push back that point, maybe indefinitely.


Absolutely not. I created every mistake painstakingly by hand ;-)


Yeah, you can't imagine how happy I was when RPG first sent me some of that material to help me out with background, and then said yes when I asked him for permission to include it verbatim in my book. He's such a good writer.


Thanks. An initial list of errata will appear this weekend (probably tomorrow). I think I'll add an RSS feed for it :)

W.r.t. actors and Scheme: the whole thing is that Sussman and Steele started Scheme to figure out actors, did some hacking to do async stuff on top of Maclisp, essentially, and then found out that their stuff and (Hewitt-style) actors were the same. So I guess Scheme took a "same but different" path early on, much like Erlang and Golang are similarly powerful systems that express the same functionality in different ways.


Thanks. Even though I'm originally from Europe, the book has ended up somewhat focused on what happened on the continent where I now live. It's something I'm planning to fix for a potential second edition.


Darn, I had his name wrong and fixed it, but somewhere an undo button must have been hit. Thanks for pointing it out.

I'm more than happy to add corrections to an already "longer than zero" list of errata. I'll give the Scheme chapter and Wikipedia a once-over to see where I went off the rails.


There's also an issue where Hewitt's actors were more like Erlang processes, i.e. unlike Scheme closures, they could run independently of each other. Maybe call/cc can simulate something like that. I remember the footnote in SICP claiming that Scheme was developed partly to understand what Hewitt was talking about, but I think that might not have been serious. It could be worth trying to talk to Steele or Sussman about this history.


Yup, and the book mentions that, including the surprising result that (Hewitt-style) actors - which are not completely like Erlang processes - and Scheme's closures were the same.


When writing a book, you can't make everybody happy. I wrote this for a somewhat general techie audience and already had debates about the amount of math material in the lead-up to LISP I :-). Especially here on HN, there's a better-than-average chance that people will want more, something more encyclopedic, and I get that, but "ok for a cross-section of Lisp history" already fills a book; I had to stop somewhere. Too much for some, not enough for others, hopefully "mostly ok" for most readers; that's all I can aim for.

And +1 on a comprehensive Lisp history bibliography, that's a great idea.


> When writing a book, you can't make everybody happy.

The usual reason a reader might be unhappy is that something they wanted to see isn't there. So the solution is to put in as much as you possibly can ;). Maybe future editions can be bigger and more comprehensive. OTOH there seems to be quite a lot of what amounts to implementation tutorials. Maybe that's not needed in a history book. In a history book I'm more interested in sources than narrative. Although some interviews with important Lispers would also be cool.

I can understand not wanting to put in too much math and theory and that's fine. I can't really tell what is there and what isn't beyond getting some hints from the bibliography entries.

This (by McCarthy) showed up immediately when I searched for something unrelated, some articles by Jeff Barnett about Lisp 2: http://jmc.stanford.edu/articles/lisp/lisp.pdf

This is a link dump about Lisp 2: https://softwarepreservation.computerhistory.org/LISP/lisp2_...

I have been wanting to look into Lisp 2 because it supposedly had an interesting trick in its GC. It was a compacting mark/sweep GC, but with an antecedent of generational GC: it usually wouldn't bother trying to reclaim memory that had already survived compaction once. I've been interested in re-implementing that trick in some modern implementations for small MCUs.
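For anyone curious what that trick amounts to, here's a hypothetical Python sketch (all names and structure are my own invention, not Lisp 2's actual code): survivors of a compaction become a "tenured" prefix of the heap, and later minor collections don't even examine that region.

```python
# Sketch of a compacting collector that tenures compaction survivors and,
# on minor collections, only sweeps memory allocated since the last
# compaction. Purely illustrative; not based on the Lisp 2 sources.

class Obj:
    def __init__(self, payload, refs=()):
        self.payload = payload
        self.refs = list(refs)   # outgoing pointers
        self.marked = False

class Heap:
    def __init__(self):
        self.cells = []      # list index stands in for heap address
        self.tenured = 0     # everything below this index survived a compaction

    def alloc(self, payload, refs=()):
        o = Obj(payload, refs)
        self.cells.append(o)
        return o

    def _mark(self, roots):
        stack = list(roots)
        while stack:
            o = stack.pop()
            if not o.marked:
                o.marked = True
                stack.extend(o.refs)

    def collect(self, roots, full=False):
        self._mark(roots)
        # The trick: a minor collection keeps the tenured prefix wholesale
        # and only compacts the "young" region allocated after it.
        start = 0 if full else self.tenured
        self.cells = self.cells[:start] + \
            [o for o in self.cells[start:] if o.marked]
        self.tenured = len(self.cells)   # survivors are now tenured
        for o in self.cells:
            o.marked = False

heap = Heap()
a = heap.alloc("root")
b = heap.alloc("kept")
a.refs.append(b)
heap.collect(roots=[a])           # a and b survive and become tenured
garbage = heap.alloc("garbage")   # unreachable, lives in the young region
heap.collect(roots=[a])           # minor: only the young region is swept
assert heap.tenured == 2 and garbage not in heap.cells
```

The payoff is that long-lived data stops being re-traversed and re-moved on every collection, which is exactly the bet generational collectors later made explicit.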


Didn't know about the planned GC tricks (I mostly treat LISP 2 in passing, so only its most salient superficial points: the syntax, and the fact that it never happened), interesting!

W.r.t. tutorials: the most code-rich chapter is the one about "the Maxwell equations of software". As a Smalltalker, I'm well aware of Kay's label for the code in the LISP 1.5 manual. It's a good exercise, especially for non-Lispers, but dare I say also for most Lispers, to implement this stuff and see both how powerful simple ideas can be and how this magic works in its bare essence (stripped of "noise" like parsers, etc.). The rest is basically illustrations of concepts, and that's on purpose; I wanted to write a history book primarily aimed at techies, so code had to be there.
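For readers who haven't done that exercise, here's a minimal sketch of the eval/apply core that Kay was pointing at, transcribed into Python for illustration. The names (`m_eval`, `m_apply`), the special forms chosen, and the environment handling are my own simplifications, not the LISP 1.5 manual's code or the book's.

```python
# Toy meta-circular-style evaluator: expressions are nested Python lists,
# strings are variable names, everything else is self-evaluating.

def m_eval(expr, env):
    if isinstance(expr, str):                 # variable lookup
        return env[expr]
    if not isinstance(expr, list):            # self-evaluating atom
        return expr
    op, *args = expr
    if op == "quote":
        return args[0]
    if op == "if":
        test, conseq, alt = args
        return m_eval(conseq if m_eval(test, env) else alt, env)
    if op == "lambda":                        # (lambda (params...) body)
        params, body = args
        return ("closure", params, body, env)
    return m_apply(m_eval(op, env), [m_eval(a, env) for a in args])

def m_apply(fn, args):
    if callable(fn):                          # primitive operation
        return fn(*args)
    _, params, body, env = fn                 # user-defined closure
    return m_eval(body, {**env, **dict(zip(params, args))})

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
prog = [["lambda", ["x"], ["+", "x", ["*", "x", "x"]]], 3]
print(m_eval(prog, env))  # x + x*x with x = 3, i.e. 12
```

The whole interpreter is two mutually recursive functions, which is the point: the magic is that there's almost nothing there.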


The link dump includes scans of the source code and a link to this free-to-read paper on the history of the Lisp 2 project: https://ieeexplore.ieee.org/document/8267589 (I’m the author).


Other reasons are:

- what they want is there but not explained or presented in a way that clicks for them

- something is there that they don't want

The latter would be a "WONTFIX". The former is a major difficulty, because presenting the same ideas in umpteen ways adds bulk without significant content. If you cover the same ideas from every possible angle, the book quickly becomes a search problem for the reader and turns into a TL;DR.

