This is my page’s equivalent of a social media feed that I use to link to interesting content. If you want to follow this feed, you can use the update-only RSS feed
with any RSS reader.
Interesting post from the great blog
Construction Physics
on the evolution of solar energy cost over the last 20 years, and what that means for solar’s role as an alternative way of powering your home.
An interesting point is that the cost of powering your home doesn’t increase linearly with the share of solar in your electricity mix. Covering 90% of your electricity with solar costs much more than twice as much as covering 45%, because you need to either overbuild capacity or add storage to cover periods of low solar yield.
However, if the cost of PV modules (and ideally installation) keeps decreasing at a similar pace, this becomes less and less of an issue.
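The nonlinearity is easy to reproduce with a toy model. The numbers below are invented for illustration (constant daily demand, a smooth seasonal yield curve, no storage, surplus wasted), not real data, but they show why the required capacity grows much faster than the target solar fraction:

```python
import math

DEMAND = 30.0   # kWh per day (assumed constant)
DAYS = 365

def daily_yield(day):
    # kWh produced per kW of panels: 3 kWh/kW on average,
    # with an assumed +/-80% seasonal swing
    return 3.0 * (1 + 0.8 * math.sin(2 * math.pi * day / DAYS))

def solar_fraction(capacity_kw):
    # without storage, solar use on each day is capped at demand
    used = sum(min(capacity_kw * daily_yield(d), DEMAND) for d in range(DAYS))
    return used / (DEMAND * DAYS)

def capacity_for(target):
    # binary search for the smallest capacity covering `target` of annual demand
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if solar_fraction(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi

c45, c90 = capacity_for(0.45), capacity_for(0.90)
print(f"45% target: {c45:.1f} kW, 90% target: {c90:.1f} kW, ratio {c90/c45:.1f}x")
```

In this toy setup, the 90% target needs far more than twice the capacity of the 45% target, because the last percentage points have to be covered on the worst days of the year.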
One of the best ways to become a more effective user of LLMs is to watch other people use them. An eye-opener for me was
this post
by Nicholas Carlini.
Some useful snippets from Simon Willison’s LLM usage guide:
The best way to start any project is with a prototype that proves that the key requirements of that project can be met. I often find that an LLM can get me to that working prototype within a few minutes of me sitting down with my laptop—or sometimes even while working on my phone.
I could write this function myself, but it would take me the better part of fifteen minutes to look up all of the details and get the code working right. Claude knocked it out in 15 seconds. I find LLMs respond extremely well to function signatures like the one I use here. I get to act as the function designer, the LLM does the work of building the body to my specification.
I often wonder if this is one of the key tricks that people are missing—a bad initial result isn’t a failure, it’s a starting point for pushing the model in the direction of the thing you actually want.
I’ve been vibe-coding since before Andrej gave it a name! My simonw/tools GitHub repository has 77 HTML+JavaScript apps and 6 Python apps, and every single one of them was built by prompting LLMs. I have learned so much from building this collection, and I add to it at a rate of several new prototypes per week.
And this is similar to something I wrote before:
I’m certain it would have taken me significantly longer without LLM assistance—to the point that I probably wouldn’t have bothered to build it at all. This is why I care so much about the productivity boost I get from LLMs: it’s not about getting work done faster, it’s about being able to ship projects that I wouldn’t have been able to justify spending time on at all.
Interesting discussion of git config options that the git core developers favor and that are not (yet) defaults. There are some nice suggestions. For example, I didn’t know that
git config --global push.autoSetupRemote true
existed. If you haven’t defined an upstream for a branch yet, git will automatically set it for you. So you don’t have to run
git push --set-upstream origin my-branch-name
anymore.
Some other options I adopted from this post:
# sort branches by last committed date
$ git config --global branch.sort -committerdate
# sort tags by tag-number not alphabetically
$ git config --global tag.sort version:refname
# better diff algorithm than the default myers
$ git config --global diff.algorithm histogram
# color moved code differently than added code in diffs
$ git config --global diff.colorMoved plain
# push branch to same-named remote
$ git config --global push.default simple
# attempt to autocorrect misspelled git commands in the cli
$ git config --global help.autocorrect prompt
# add the diff to the commit message draft
$ git config --global commit.verbose true
# 3-way-diffing
$ git config --global merge.conflictstyle zdiff3
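If you want to try these settings without touching your real global config, you can point HOME at a throwaway directory first and read a value back to confirm it was stored:

```shell
# use a throwaway HOME so the real ~/.gitconfig is untouched
export HOME="$(mktemp -d)"

git config --global push.autoSetupRemote true
git config --global diff.algorithm histogram

# read a value back to confirm it stuck
git config --global --get diff.algorithm   # prints: histogram
```

Once you’re happy, run the same commands in your normal shell so they land in your real global config.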
This is an interesting podcast conversation with
Brandon Sanderson
that I came across accidentally (I enjoyed particularly the parts from [00:37:57] onwards). He is the author of
The Stormlight Archive
and
Mistborn
fantasy novels. I read The Way of Kings some years ago but haven’t continued the series yet.
A couple of interesting things he discussed:
He publishes and distributes his books via his company, Dragonsteel, and uses successful
kickstarter campaigns
to finance new projects. Kickstarter is quite common for board game or video game projects, but not many people have used it successfully for book projects.
He develops and tests his books like a Hollywood studio would test movie ideas. For example, his team developed an elaborate test reader process to gauge whether readers understand and/or like certain parts of a book. I find it interesting how he walks the fine line between writing about what he likes and writing for the market and commercial success.
He explains that ebook and audiobook deals are hard to negotiate when the market is dominated by Amazon/Audible.
I found his distinction between hard magic systems (the magic rules are explicitly laid out for the reader, e.g. Asimov series) and soft magic systems (the rules are vague, and the characters may develop surprising new capabilities, e.g. Gandalf in Lord of the Rings) fascinating.
“Getting from a rough idea to a working proof of concept of something like this with less than 15 minutes of prompting is extraordinarily valuable. This is exactly the kind of project I’ve avoided in the past because of my almost irrational intolerance of the frustration involved in figuring out the individual details of each call to S3, IAM, AWS Lambda and DynamoDB.”
This describes exactly why I think that the current generation of models is already immensely valuable.
There is a range of technologies and frameworks that would deliver value if I spent the time to adapt them to my use case. Normally this would involve a lot of googling and reading forum posts that only cover 80% of my problem. Having an LLM to guide you in the right direction lowers the bar and the time investment enough to allow much easier and quicker experimentation.
An LLM-generated proof of concept, followed by understanding the solution and then refining it, is what works for me. I have used this to implement changes on this blog (I don’t know much about Hugo), build simple apps at work (with Retool), debug package errors (Homebrew, pyenv), etc.
Casey Handmer’s perspective on how to think about your job and career, especially relevant for people transitioning into tech after a PhD/Postdoc:
“It’s not enough to have mastered your job to get moved up. You also have to build trust with your management. It doesn’t matter how good you are at the mechanics of your job, if your management and colleagues don’t trust you, they’ll see you as a loose cannon and try to find ways to offboard you. I have been in this position before – and clueless about it. My job was saved because I had become critical infrastructure for too much of the system, but I was still marginalized and unable to advance, because I had broken (spectacularly!) the trust of management.”
A couple of senior developer best practices that I can relate to, especially:
“Avoid, at all costs, arriving at a scenario where the ground-up rewrite starts to look attractive.”
and
“Nobody cares about the golden path. Edge cases are our entire job. Think about ways in which things can fail. Think about ways to try to make things break.”
I am at least sceptical that the current LLM approach will allow the necessary step change in capability, autonomy, and robustness. However, it is fun to read hypotheses about what future companies will look like:
“Everyone is sleeping on the collective advantages AIs will have, which have nothing to do with raw IQ but rather with the fact that they are digital—they can be copied, distilled, merged, scaled, and evolved in ways humans simply can’t.
What would a fully automated company look like - with all the workers, all the managers as AIs? I claim that such AI firms will grow, coordinate, improve, and be selected-for at unprecedented speed.”
I find it more practical to think about how to position yourself in a world where certain aspects of your job are already automatable, i.e. the skills are available in the training dataset and the output can be verified as right or wrong. This is discussed
here
.
I think most people (who get to choose) know deep down the major things they need to do to be more productive and have a more fulfilling career and family life. Nevertheless, it’s always interesting to see other people’s focuses and learnings. I agree with most of them.
I agree with this blog post on good (impact, leverage, vision) and bad (money, status, growth) reasons for becoming a manager. Nicely put together and I like the style of the blog.
I am a big fan of the decentralized and simple nature of RSS. I am using the Feeder RSS app, which means I can curate a feed of interesting content without an algorithm pushing content at me.
“[…] much of the point of a model like o1 is not to deploy it, but to generate training data for the next model. Every problem that an o1 solves is now a training data point for an o3 (eg. any o1 session which finally stumbles into the right answer can be refined to drop the dead ends and produce a clean transcript to train a more refined intuition).”
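The refinement step in that quote can be sketched in a few lines. This is a toy illustration with invented data structures (the `sessions` shape and the `dead_end` flag are assumptions, not anything from the quoted source): keep only sessions that reached the right answer, and drop the dead-end attempts so the transcript becomes clean training data.

```python
def distill(sessions):
    """Turn raw reasoning sessions into (question, clean_transcript) pairs."""
    examples = []
    for s in sessions:
        if s["final_answer"] != s["correct_answer"]:
            continue  # failed sessions contribute nothing
        # keep only the steps on the successful path
        clean = [step["text"] for step in s["steps"] if not step["dead_end"]]
        examples.append((s["question"], "\n".join(clean)))
    return examples

sessions = [
    {"question": "2+2?", "correct_answer": "4", "final_answer": "4",
     "steps": [{"text": "try 5 ... no", "dead_end": True},
               {"text": "2+2=4", "dead_end": False}]},
    {"question": "3*3?", "correct_answer": "9", "final_answer": "6",
     "steps": [{"text": "3*2=6", "dead_end": True}]},
]
print(distill(sessions))  # only the first session survives, dead end dropped
```

The real pipelines are of course far more involved, but the core selection-and-cleanup loop is this simple in spirit.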
I have wanted to add a link blog section to this blog for some time. This article gave me the motivation to do it. It took me roughly 2 hours with the support of Claude to add this functionality and design to this page. I roughly follow the design/ideas in
My approach to running a link blog
.