2024-06-05

I spent some time moving my bookmarks and workflows from Pocket to Raindrop. I’m mostly done with a short post on how and why I did that. I’d like to use the python-raindropio package to generate stubs for log posts from links I bookmark, including comments I write about them. The Nix flake I use for this blog hampered those efforts: python-raindropio isn’t available as a Nix package and I’m not particularly well versed in building my own packages with Nix, so that became a bit of a rabbit hole.
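For my own notes, the usual escape hatch when a Python package isn’t in nixpkgs is to build it with `buildPythonPackage`. A rough sketch of what that might look like, not a working derivation: the version, hash, and dependency list below are placeholders I’d still need to fill in.

```nix
# Sketch: packaging python-raindropio from PyPI for use in a flake.
# version and hash are placeholders; `nix build` reports the expected
# hash on the first failed attempt, which you then paste in.
python-raindropio = pkgs.python3Packages.buildPythonPackage rec {
  pname = "python-raindropio";
  version = "0.0.0"; # placeholder
  src = pkgs.fetchPypi {
    inherit pname version;
    hash = ""; # placeholder
  };
  # plus whatever runtime dependencies the package declares
  propagatedBuildInputs = with pkgs.python3Packages; [ requests ];
};
```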

2024-06-01

I was away from the computer for a couple of weeks. That was really nice. During my downtime and in transit, these were some of my favorite things I read in the past two weeks. I’m planning to come up with a better system for logging and adding thoughts to the things I read, but for now a list will have to do.

- https://eugeneyan.com/writing/simplicity
- https://daniel.haxx.se/blog/2024/05/27/my-bdfl-guiding-principles/
- https://yekta.dev/posts/dont-microservice-do-module
- https://hiandrewquinn.github.io/til-site/posts/doing-is-normally-distributed-learning-is-log-normal/
- https://blog.railway.app/p/how-we-work-volume-iii
- https://jxnl.
Sabrina wrote an interesting write-up on solving a math problem with gpt-4o. It turned out the text-only, chain-of-thought approach was the best performing, which is not what I would have guessed. It was also cool to see Simon dive into LLM-driven data extraction using his project Datasette in this video. Using multi-modal models for data extraction seems to bring a new level of usefulness and makes these models even more general purpose.

2024-05-15

Nostalgia: https://maggieappleton.com/teenage-desktop. I wish I had done something like this. Maybe I can find something on an old hard drive.
I’m looking into creating a Deno server that can manage multiple websocket connections and emit a message to one after receiving it from another. A simple way to implement this is to run a single server and track all the ongoing websocket client connections in memory. I’m learning more about approaches that could support a multi-server backend.
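A minimal sketch of the single-server approach, with the relay logic kept separate from the websocket upgrade so it can be exercised without a network. The `Registry` class and its `send` callback are names I made up for illustration, not from any existing code.

```typescript
// Tracks open connections and forwards each incoming message to every
// connection except the sender. The `send` callback stands in for
// WebSocket#send, so the relay logic has no network dependency.
type Send = (msg: string) => void;

class Registry {
  private conns = new Map<string, Send>();

  add(id: string, send: Send): void {
    this.conns.set(id, send);
  }

  remove(id: string): void {
    this.conns.delete(id);
  }

  // Emit `msg` to every connection other than `fromId`.
  relay(fromId: string, msg: string): void {
    for (const [id, send] of this.conns) {
      if (id !== fromId) send(msg);
    }
  }
}
```

Wiring this to Deno would then be a thin layer (hypothetical, untested): in a `Deno.serve` handler, call `Deno.upgradeWebSocket(req)`, register the socket under a `crypto.randomUUID()` id in `onopen`, call `registry.relay(id, e.data)` in `onmessage`, and `registry.remove(id)` in `onclose`. A multi-server backend would need the registry replaced by something shared, e.g. a pub/sub channel.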

2024-05-09

I take an irrational amount of pleasure in disabling notifications for apps that use them to send me marketing.

2024-05-08

I enjoyed reading Yuxuan’s article on whether GitHub Copilot increased their productivity. I personally don’t love Copilot but enjoy using other AI-assisted software tools like Cursor, which allow for the use of more capable models than Copilot supports. It’s encouraging to see more folks keeping a more unfiltered thought journal.
I read this post by Steph today and loved it. I want to try writing this concisely. I imagine it takes significant effort, but the results are beautiful, satisfying, and valuable. It’s a privilege to read a piece written by someone who values every word.
llama 3-400B with multimodal capabilities and long context would put the nail in the coffin for OAI — anton (@abacaj) May 6, 2024 Having gotten more into using llama 7b and 30b lately, this take seems like it could hold water. Model inference still isn’t free when you scale a consumer app. Maybe I can use llama3 for all my personal use cases, but I’d still need infra to scale it.
I read Jason, Ivan, and Charles’ blog post on Modal about fine-tuning an embedding model. It’s a bit in the weeds of ML for me, but I learn a bit more every time I read something new.