In a previous note, I discussed running coroutines in a non-blocking manner using gather. This approach works well when you have a known number of coroutines that you want to run concurrently. However, if you have tens, hundreds, or more tasks, especially when network calls are involved, it can be important to limit concurrency. We can use a semaphore to cap the number of coroutines running at once: each coroutine acquires the semaphore before doing its work and blocks until a slot frees up.
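As a minimal sketch of the pattern (fetch_page, the URL list, and the limit of 5 are placeholders of mine, not from the original note), each coroutine acquires the semaphore before doing its work:

```python
import asyncio

async def fetch_page(url: str) -> str:
    # Placeholder for a real network call (e.g. via aiohttp or httpx).
    await asyncio.sleep(1)
    return url

async def fetch_with_limit(semaphore: asyncio.Semaphore, url: str) -> str:
    # Blocks here until one of the semaphore's slots frees up.
    async with semaphore:
        return await fetch_page(url)

async def main() -> None:
    semaphore = asyncio.Semaphore(5)  # at most 5 fetches in flight at once
    urls = [f"https://example.com/{i}" for i in range(100)]
    results = await asyncio.gather(*(fetch_with_limit(semaphore, u) for u in urls))
    print(len(results))

asyncio.run(main())
```

gather still schedules all 100 tasks up front, but only five of them can be past the `async with` at any moment.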
Python coroutines allow for asynchronous programming in a language that, earlier in its history, only supported synchronous execution. I’ve previously compared a synchronous approach in Python to a parallel approach in Go using channels. If you’re familiar with async/await in JavaScript, Python’s syntax will look familiar. Python’s event loop allows coroutines to yield control back to the loop and await their turn to resume execution, which can lead to more efficient use of resources.
Render is a platform-as-a-service company that makes it easy to quickly deploy small apps. They have an easy-to-use free tier, and I wanted to run a Python app with dependencies managed by Poetry. Things had been going pretty well until I unexpectedly got the following error after a deploy:

```
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
```

You don’t have to search for too long to find out this isn’t good.

2023-11-25

A thoroughly enjoyable and inspiring read by Omar about his 20-year journey to date. Quantity was important: quantity led to the emergence of quality. Read the documentation: I can’t emphasize enough how useful this is. There are gems upon gems in the documentation. Good documentation gives a glimpse of the mind of the authors, and a glimpse of their experience.
I’m betting OpenAI will soon have a cloud storage product like Google Drive or iCloud for ChatGPT Plus users. Having your personal data available in the context of a language model is a massive value add. With a product like this, OpenAI could fully support use cases like “summarize my notes for the week” or “create action item reminders from this recording”. They’ve already dipped their toe in the water with the Files API.
In JavaScript, using async/await is a cleaner approach than using callbacks. Occasionally, you run into useful but older modules that you’d like to use in the more modern way. Take fluent-ffmpeg, a 10-year-old package that uses callbacks to handle various events like start, progress, end, and error. Using callbacks, we have code that looks like this:

```js
const ffmpeg = require('fluent-ffmpeg');

function convertVideo(inputPath, outputPath, callback) {
  ffmpeg(inputPath)
    .
```
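Converting this to async/await mostly means wrapping the event-based call in a Promise. The sketch below is my own (the handler bodies and file names are placeholders), but the output/on/run calls match fluent-ffmpeg's documented API:

```js
const ffmpeg = require('fluent-ffmpeg');

// Wrap the callback/event-based API in a Promise so it can be awaited.
function convertVideo(inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    ffmpeg(inputPath)
      .output(outputPath)
      .on('start', (cmd) => console.log(`ffmpeg started: ${cmd}`))
      .on('progress', (progress) => console.log(`processing: ${progress.percent}%`))
      .on('end', () => resolve(outputPath))
      .on('error', (err) => reject(err))
      .run();
  });
}

// Callers can now use async/await instead of passing a callback.
async function main() {
  const output = await convertVideo('input.mov', 'output.mp4');
  console.log(`wrote ${output}`);
}

main().catch(console.error);
```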
When deploying software with Kubernetes, you need to expose a liveness HTTP endpoint in the application. The Kubernetes default liveness HTTP endpoint is /healthz, which seems to be a Google convention (z-pages). A lot of Kubernetes deployments won’t rely on the defaults. Here is an example Kubernetes pod configuration for a liveness check at <ip>:8080/health:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: "/health"
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```

When setting up a new app to be deployed on Kubernetes, ideally the liveness endpoint is defined in a service scaffold (this is company- and framework-dependent), but in the case it isn’t, you just need to add a simple HTTP handler for the route configured in the yaml file.
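The post doesn't show the handler itself, so as a rough sketch, here's what a bare-bones version might look like in Python using only the standard library (the choice of language and framework is my assumption):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond 200 on the path configured in livenessProbe.httpGet.path above.
        if self.path == "/health":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The port must match livenessProbe.httpGet.port (8080 in the pod spec above).
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```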
I did more exploration with Copilot, mainly for writing unit tests. Copilot is pretty good at bootstrapping unit tests, particularly in Java, where initializing the right types may take several lines and there is a large standard library. In doing this, the Copilot-written code was close enough that it saved me time relative to writing the tests from scratch. I learned that Copilot can show a ranked list of completions in a separate window in the IDE, which is helpful; it can also be invoked with a hotkey (option+\ in IntelliJ), which is useful as well.
I’ve been integrating Copilot into my workflow over the past few days. From my understanding, it uses OpenAI’s Codex model, which is part of the GPT-3 model series. I believe this also predates the chat models, gpt-3.5-turbo and gpt-4. As someone who has been using Cursor for my personal work for several months now, the core completion functionality of Copilot feels like a step back compared to Cursor’s cmd+k (to say nothing of chat, which both have, and other features).
I finished migrating my site to the latest release of Hugo today. It had been quite a while since I’d pulled the latest changes, but most of the fixes were straightforward. A number of the partials had been updated, so I needed to port my custom components to incorporate the changes. I also migrated what I’d previously kept in a static directory to assets. I looked further into using hidutil to replace Karabiner.