A spot where I slipped up, both when adopting Temporal in an existing Python project and again when starting a new one, was defining a Workflow that invokes an Activity that calls a third-party library. Temporal outputs an error message with a long stacktrace that I vaguely understood but didn't immediately know the solution to:

```
...
  raise RestrictedWorkflowAccessError(f"{self.name}.{name}")
temporalio.worker.workflow_sandbox._restrictions.RestrictedWorkflowAccessError: Cannot access http.server.BaseHTTPRequestHandler.responses from inside a workflow.
```
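The error comes from Temporal's workflow sandbox, which restricts what a workflow file can import so workflows stay deterministic. The Python SDK's documented escape hatch is `workflow.unsafe.imports_passed_through()`, which passes activity and third-party imports through the sandbox instead of re-importing them under restrictions. Here's a minimal sketch; the `my_activities` module and `fetch_data` activity are placeholder names, not part of Temporal:

```python
from datetime import timedelta

from temporalio import workflow

# Wrap activity/third-party imports so the workflow sandbox passes them
# through rather than re-importing (and restricting) them at load time.
with workflow.unsafe.imports_passed_through():
    from my_activities import fetch_data  # activity that uses an HTTP library


@workflow.defn
class FetchWorkflow:
    @workflow.run
    async def run(self, url: str) -> str:
        # The third-party code executes inside the activity worker,
        # not inside the sandboxed workflow.
        return await workflow.execute_activity(
            fetch_data,
            url,
            start_to_close_timeout=timedelta(seconds=30),
        )
```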
{ "editor.cursorBlinking": "solid" } Some searching turned up an option to solve this problem in Vim mode using CSS, but in insert mode, the cursor still blinks. Eventually, I came across a macOS-based approach to solve this issue on StackExchange, included here for convenience
```sh
defaults write -g NSTextInsertionPointBlinkPeriod -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOn -float 10000
defaults write -g NSTextInsertionPointBlinkPeriodOff -float 10000
```

After running these, restart Obsidian and the cursor no longer blinks.

Language models and prompts are magic in a world of otherwise deterministic software. As prompts change and use cases evolve, it can be difficult to maintain confidence in the output of a model. Building a library of example inputs for your model+prompt combination, with annotated outputs, is critical to evolving the prompt in a controlled way, ensuring performance and outcomes don't drift or regress as you try to improve.
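As a concrete illustration, here is a minimal sketch of that kind of harness in Python. Everything here is hypothetical (the `run_prompt` stub, the JSONL example format); it's the shape of the idea, not any specific framework:

```python
# Replay hand-annotated examples against the current prompt and report drift.
import json


def run_prompt(prompt_template: str, input_text: str) -> str:
    """Call your model of choice; stubbed out in this sketch."""
    raise NotImplementedError


def evaluate(prompt_template: str, examples_path: str) -> float:
    # Each line: {"input": "...", "expected": "..."} annotated by hand.
    with open(examples_path) as f:
        examples = [json.loads(line) for line in f]

    passed = 0
    for ex in examples:
        output = run_prompt(prompt_template, ex["input"])
        if output.strip() == ex["expected"]:
            passed += 1
        else:
            print(f"regression: {ex['input']!r} -> {output!r} "
                  f"(expected {ex['expected']!r})")
    return passed / len(examples)
```

Tracking the resulting score alongside each prompt change turns "I think the new prompt is better" into something you can actually check before shipping.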
I've been doing a bit of work with Temporal using its Python SDK. Temporal remains one of my favorite pieces of technology to work with. The team is very thoughtful with their API design, and it provides a clean abstraction for building distributed, resilient workflows. It's a piece of technology that is difficult to understand until you build with it, and once you do, you find applications for it everywhere you look.

Cursor is VS Code with Cmd+K, which opens a text box that can do text generation based on a prompt. When I created this post, I first typed

```
insert hugo yaml markdown frontmatter
```
In a few seconds, the editor output
```yaml
---
title: "Cursor Introduction"
date: 2023-08-12T20:00:00-04:00
draft: false
tags:
  - cursor
  - intro
---
```

This was almost exactly what I was looking for, except the date was not quite right, so I corrected that and accepted the generation.

The problem with long-running code in Next serverless functions

The current design paradigm at the time of this writing is called App Router.
Next.js and Vercel provide a simple mechanism for writing and deploying cloud functions that expose HTTP endpoints for your frontend site to call. However, sometimes you want to do work asynchronously on the backend in a way that doesn't block a frontend caller that needs to move on.

🎧 Velocity over everything: How Ramp became the fastest-growing SaaS startup of all time | Geoff Charles (VP of Product)
This conversation between Lenny and Geoff was particularly noteworthy for me because it hit on so many areas of what I've seen in the most effective organizations and teams I've been a part of, as well as realigning incentives to solve a number of problems I've experienced that hold teams back.

Simon wrote an excellent post on the current state of the world in LLMs.
Twitter continues to talk about LK-99. It seems like an easy thing to root for but hard to tell exactly what is going on.

First attempt

I made an attempt to set up TypeChat to see what's happening on the Node/TypeScript side of language model prompting. I'm less familiar with TypeScript than Python, so I expected to learn some things during the setup. The project provides example projects within the repo, so I tried to pattern off of one of those to get the sentiment classifier example running.
I manage node with asdf. I'd like to do this with nix one day, but I'm not yet comfortable enough with it to keep that from becoming its own rabbit hole.

The high-order bit that changed in AI:
"I'll give you 10X bigger computer"
- 10 years ago: I'm not immediately sure what to do with it
- Now: Not only do I know exactly what to do with it but I can predict the metrics I will achieve
Algorithmic progress was necessity, now bonus.
— Andrej Karpathy (@karpathy) August 3, 2023

Turning scaling into a systematic science is the biggest advance enabled by LLMs.
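To make "predict the metrics I will achieve" concrete: scaling laws fit observed loss as a power law in compute on small runs, then extrapolate to a bigger budget. A toy illustration follows; all the numbers and coefficients are invented for the example, not from any published fit:

```python
# Toy scaling-law extrapolation: fit loss = a * compute**b on small runs,
# then predict the loss of a 10x larger run. Data here is made up.
import numpy as np

compute = np.array([1e18, 1e19, 1e20])  # training FLOPs of small runs
loss = np.array([3.2, 2.8, 2.45])       # observed eval losses (invented)

# A power law is linear in log-log space; polyfit returns (slope, intercept).
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)  # b < 0

predicted = np.exp(log_a) * (1e21) ** b  # forecast the 10x bigger budget
print(f"predicted loss at 1e21 FLOPs: {predicted:.3f}")
```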