Working Notes: a commonplace notebook for recording & exploring ideas.
A -- tentatively -- weekly catalog of things I've been finding interesting as a programmer. There's always something interesting going on, and I wanted a record, spread across time, of what's been catching my attention.
Writing things out -- well, or poorly -- has generally paid off well: it clarifies what I'm thinking about, exposes the gaps in it, and helps me navigate the world in general.
I hope these letters help me start -- and maintain -- this practice again. And that they can capture some of the joy, curiosity, frustration and sense of excitement I find in programming; and mail themselves back to me on days I find myself jaded.
If you happen to come across these, you should expect a lot of links across several domains: programming languages, systems programming, ML, design, systems and organizational dynamics, and whatever happens to catch my fancy. This very first edition is likely to be significantly longer than the rest, just because I have so much to say that it forced me to start writing.
Newsletters I find myself inspired by: Factorio's Friday Facts for the detail, craft and readability; Craig Mod's pop-up newsletters, taking them into an entirely different art form; John Cutler's The Beautiful Mess for incisive descriptions of patterns and anti-patterns in organizations; and of course, Kent Beck's Tidy First -- musings on design, a collection of new ideas, and the fearlessness to constantly experiment.
When it comes to books, my eyes are much, much, much bigger than my stomach. I have far too many I'm trying to read at the same time; some of the books I've read over the past week include: Kill It With Fire, a fascinating book by Marianne Bellotti which I ran into while catching up with Strange Loop talks I wasn't able to attend. There are several lessons here: the incredible value of familiarity with the existing systems, why cp and ls were named the way they were, and more.
At the same time, I'd like to have a significantly better handle on programming GPUs: Programming Massively Parallel Processors has been a pleasure, both for learning about CUDA and for staying up to date in a very fast-moving world.
On the same note, Understanding Software Dynamics brings significantly more rigor to my understanding of performance; embarrassingly enough, this book disappeared into one of my collections and I forgot all about it till I stumbled back into it recently.
Bash Idioms, the Google Shell Style Guide and ShellCheck have been helping me write some production-worthy shell scripts (with several questions to ChatGPT along the way). Misunderstanding parameter expansion led me to commit broken code repeatedly, to the point of printing out a cheat sheet and making a solemn promise to only ever use [[ -z ${1-} ]] and [[ -n ${1-} ]] when testing for an argument with "strict" mode (-u) enabled.
Given that I'm working on tools used by people building transformers, and that I spend most of my day bothering ChatGPT with questions on documentation I can't be bothered to read, it seemed like a good idea to implement Transformers under my own steam. I spent most of a 6-hour flight watching and re-watching Andrej Karpathy's video on NanoGPT while also trying to implement pieces in HyLang and Jax -- as a way to make sure I actually understand the material. I've been making slow progress on the bigram model.
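To give a flavor of that very first step -- and to be clear, this is a toy illustration rather than my actual code -- a count-based bigram character model fits in a handful of lines of JAX:

    # Toy sketch of a count-based bigram character model in JAX.
    # The corpus and names here are made up purely for illustration.
    import jax
    import jax.numpy as jnp

    text = "hello world"
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}
    itos = {i: c for c, i in stoi.items()}
    ids = jnp.array([stoi[c] for c in text])

    vocab = len(chars)
    # Count every adjacent (current character, next character) pair.
    counts = jnp.zeros((vocab, vocab)).at[ids[:-1], ids[1:]].add(1)
    # Turn counts into next-character probabilities, with add-one smoothing.
    probs = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

    # Sample a short continuation, one character at a time.
    key = jax.random.PRNGKey(0)
    idx, out = stoi["h"], ["h"]
    for _ in range(10):
        key, sub = jax.random.split(key)
        idx = int(jax.random.categorical(sub, jnp.log(probs[idx])))
        out.append(itos[idx])
    print("".join(out))

The interesting work, of course, is in replacing the counting with a trained model and eventually attention -- which is where the slow progress comes in.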
I enjoy using Lisp, and I enjoy writing Python. Re-finding a surprisingly functional implementation of a Lisp that runs on Python has been cathartic and enjoyable; I expect to use this combo for most of my personal programs in the near future.
Hy is very usable, and I have a lot of stuck projects -- Transformers, a site generator for this website, migrating my slipbox, working through PAIP, Let Over Lambda and similar books -- that get unblocked as I play with this language. There are rough edges to work through, but for the most part I find myself delighted.
Last week, I was finally able to talk publicly about some work I did in 2022: building support for logging intermediate values in PyTorch -- ignoring any transforms that may be applied to the model. It's some of the sneakiest code I've ever written, with significant amounts of metaprogramming through code generation. I plan to refactor and release the code soon; I have some ideas on how to write it in a way that makes it both easy to understand and easy to use. The slides for the talk are available online, and the video should be up soon.
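The released version will look nothing like this, but for a sense of the underlying idea -- recording values as the model executes -- here is the much more pedestrian hook-based equivalent (an illustrative sketch, not the approach from the talk):

    # Illustrative only: logging intermediate values with forward hooks,
    # not the code-generation approach described in the talk.
    import torch
    import torch.nn as nn

    logged = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Detach and copy so logging never interferes with autograd.
            logged[name] = output.detach().clone()
        return hook

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    for name, module in model.named_modules():
        if name:  # skip the top-level container itself
            module.register_forward_hook(make_hook(name))

    model(torch.randn(1, 4))
    for name, value in logged.items():
        print(name, tuple(value.shape))

Hooks don't necessarily survive the kinds of transforms mentioned above, which is part of why the real implementation had to get sneaky.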
— Kunal