I use Simon’s llm to quickly run LLM prompts. This package is easily installed with brew or pip, so if you just want to use it, I recommend those approaches. The following approach is not for the faint of heart and assumes a bit of familiarity with Nix and home-manager. We are going to install llm, including the llm-mistral plugin, using Nix. It’s not particularly straightforward, but if you want to manage this tool with Nix, it appears to be possible.
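As a rough sketch of where this ends up, a home-manager entry along these lines can pull the CLI and the plugin into one Python environment, so llm’s entry-point plugin discovery finds them both. The attribute names here are assumptions, not verified against any particular nixpkgs channel:

```nix
# Sketch of a home-manager configuration fragment; assumes nixpkgs
# carries both python3Packages.llm and python3Packages.llm-mistral
# (attribute names are assumptions).
home.packages = [
  (pkgs.python3.withPackages (ps: [
    ps.llm          # the CLI itself
    ps.llm-mistral  # plugin, discovered via llm's entry points
  ]))
];
```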
I’ve been meaning to try out Simon’s llm package for a while now. From reading the docs and following the development, it’s a modular, meet-you-where-you-are CLI for running LLM inference locally or using almost any API out there. In the past, I might have installed this with brew, but we run nix over here now so everything is harder first, then reproducible. The llm package/CLI is available in nixpkgs in a few different forms.
Devbox is an interesting, nix-based tool for setting up reproducible development environments. I recently needed to quickly set up a postgres database and load the Chinook dataset to play around with some queries. I could have used Docker, but I am not a fan of its UI or how heavyweight it has become (looking into podman is also on my todo list) and I’ve been using nix a lot lately, which is what led me to the devbox project.
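To give a flavor of the setup, a devbox.json along these lines is enough to get a local postgres into the project environment (the version string and hook message are placeholders):

```json
{
  "packages": ["postgresql@latest"],
  "shell": {
    "init_hook": ["echo 'run: devbox services up'"]
  }
}
```

From there, `devbox shell` drops you into the environment and `devbox services up` starts postgres.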
As I’ve fallen further down the rabbit hole, empowered by nix making it so easy to install, configure and manage any software, I discovered Alacritty as a fast, configurable terminal emulator. I’ve used and enjoyed iTerm2 for a while but it never hurts to try something new. I have some muscle memory built up for how I use my machine, so my aim was to configure something I could use comfortably in Alacritty, modeling it off of my iTerm setup.
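As a minimal sketch of the kind of settings involved, assuming a current Alacritty that reads a TOML config (the specific values are placeholders, not my actual setup):

```toml
# ~/.config/alacritty/alacritty.toml -- illustrative values only
[font]
size = 14.0

[font.normal]
family = "Menlo"

[window]
# Make Option send Alt escapes, matching a common iTerm2 habit on macOS
option_as_alt = "Both"
```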
I’ve used Hammerspoon as a window manager for almost 10 years. I decided to explore some of the newer tools in window management to see if I could find an alternative approach for what I do with Hammerspoon. Using yabai and skhd, I wrote the following skhdrc file that nearly reproduces the core functionality of my Hammerspoon window management code. I have four general window management use cases:

- halves
- quarters
- maximize
- move to another display

Here’s how I implemented that with skhd hotkeys mapped to yabai commands:
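For illustration, an skhdrc along these lines covers those four cases with yabai’s `--grid` and `--display` commands (the modifier keys here are placeholders, and `--grid` applies to floating windows; the grid format is rows:cols:start-x:start-y:width:height):

```
# halves: snap the window to the left or right half
alt - left  : yabai -m window --grid 1:2:0:0:1:1
alt - right : yabai -m window --grid 1:2:1:0:1:1

# quarters: top-left and top-right corners
alt - u : yabai -m window --grid 2:2:0:0:1:1
alt - i : yabai -m window --grid 2:2:1:0:1:1

# maximize: fill a 1x1 grid
alt - f : yabai -m window --grid 1:1:0:0:1:1

# move to another display and follow it with focus
alt - n : yabai -m window --display next; yabai -m display --focus next
```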
This post is extremely similar to nix flakes and direnv. Here, I repeated my process, but with a little more thought and a little less language model magic. I set up my new computer to use nix, switching away from Homebrew, which I’ve used to manage and install dependencies on my system for about a decade. My goal was to unify my configuration management with my package management. Thus far, I’ve been quite satisfied.
I was following this guide to set up nix-darwin on a new Mac when I ran into an issue following the section about cross-compiling Linux binaries. I put this issue to the side when I first encountered it because I was trying to set up dependency management for my new system and this problem didn’t prevent that. However, I was reminded of it when I read another article by Jacek, which motivated me to figure out what the problem was.
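For context, nix-darwin ships a linux-builder module that runs a local Linux VM so Linux derivations can be built from macOS. A minimal sketch of enabling it (check the module docs for the surrounding options; the trusted-users line is a common companion setting, not something this post prescribes):

```nix
# In a nix-darwin configuration:
nix.linux-builder.enable = true;

# Often needed so your user is allowed to dispatch builds to it
nix.settings.trusted-users = [ "@admin" ];
```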
Last year I wrote about nix and direnv as I explored the potential convenience of an isolated, project-specific environment. There were some interesting initial learnings about nix, but I didn’t really know what I was doing. Now, I still don’t know what I’m doing, but I’ve been doing it for longer. As an example, I’m going to walk through how I set up a flake-driven development environment for this blog with direnv.
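The shape of that setup is a flake exposing a dev shell, plus a one-line .envrc. This is a simplified sketch of the pattern rather than the exact files from the post (the pinned system and nixpkgs branch are placeholders):

```nix
# flake.nix -- a dev shell with the site's tools
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "aarch64-darwin";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.hugo pkgs.python3 ];
      };
    };
}
```

The .envrc then just contains `use flake` (via nix-direnv), so entering the directory activates the shell automatically.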
I just did a fresh clone of my site for the first time in (probably) years. I’ve been using nix on my new system, so I was writing a flake to set up a development environment for the site with Hugo and Python. When I ran hugo serve, I saw all my content show up:

                   | EN
-------------------+------
  Pages            | 528
  Paginator pages  |  20
  Non-page files   |   0
  Static files     | 173
  Processed images |   0
  Aliases          |  53
  Sitemaps         |   1
  Cleaned          |   0

but when I went to load the local site at localhost:1313, I saw “Page Not Found”.
OpenAI popularized a pattern of streaming results from a backend API in realtime with ChatGPT. This approach is useful because the time a language model takes to run inference is often longer than what you want for an API call to feel snappy and fast. By streaming the results as they’re produced, the user can start reading them and the product experience doesn’t feel slow as a result. OpenAI has a nice example of how to use their client to stream results.
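The pattern itself is independent of any particular client library. A minimal sketch in plain Python, with a generator standing in for a model emitting tokens (the function names and chunk size are illustrative, not OpenAI’s API):

```python
# Sketch of the streaming pattern: the producer yields chunks as they
# become available, and the consumer renders each one immediately
# instead of waiting for the full response.
from typing import Iterator


def generate_chunks(text: str, size: int = 8) -> Iterator[str]:
    """Stand-in for a model producing output incrementally."""
    for i in range(0, len(text), size):
        yield text[i : i + size]


def stream_to_user(chunks: Iterator[str]) -> str:
    """Show each chunk as it arrives; return the assembled reply."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)  # user sees partial output right away
        parts.append(chunk)
    print()
    return "".join(parts)


if __name__ == "__main__":
    stream_to_user(generate_chunks("Streaming keeps the UI feeling responsive."))
```

A real backend applies the same idea over HTTP, typically with chunked transfer or server-sent events, so the frontend can render tokens as they land.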