I downloaded Pile, a journal app with first-class language model integration and offline Ollama support. For personal data, running the model offline is a must for me. I use Day One sporadically, but I'm intrigued by the potential of a more conversational format as I write. The concept of a journal writing partner appears to be capturing mindshare. I found another similar app called Mindsera today as well. I also learned about Lex, which puts collaborative and AI features at the heart of document authorship, a concept I played around with a bit in Write Partner.

2024-08-19

I set up WezTerm and experimented a bit. It's a nice terminal emulator. I like the built-in themes and Lua as a configuration language. These days, I largely rely on the Cursor integrated terminal. It's not the greatest, but having cmd+k is a bit of a killer feature.

2024-08-18

> I can't believe we're back to discussing LLMs' ability to reason. Where have you been these past two years? In a bunker? If you'd actually worked with LLMs during this time, you'd know by now that they're obviously pattern-matching machines. Try asking one to write incorrect…
>
> — Andriy Burkov (@burkov), August 18, 2024

I haven't viewed the LLMs-can, LLMs-can't discourse through this lens explicitly.

> they're obviously pattern-matching machines

I'm not sure I understand at what point those are different things.
I tried to run Florence-2 and ColPali using the Hugging Face serverless inference API. Searching around, support for image-text-to-text models seems pretty sparse. On GitHub, I only found a few projects that even reference these types of models. I didn't really know what I was doing, so I copied the example code and then tried to use a model to augment it to call Florence-2. Initially, it seemed like it was working:
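Roughly, the shape of the call looks like this. The payload format is an assumption on my part (Florence-2 is prompted with a task token like `<CAPTION>` rather than free text), and whether the serverless API will actually serve this model is exactly what's unclear:

```python
import base64

# Hypothetical endpoint; serverless support for Florence-2 is the open question.
API_URL = "https://api-inference.huggingface.co/models/microsoft/Florence-2-large"

def build_request(image_bytes: bytes, task: str = "<CAPTION>") -> dict:
    # The exact JSON shape the API expects here is an assumption.
    return {
        "inputs": base64.b64encode(image_bytes).decode("utf-8"),
        "parameters": {"prompt": task},
    }

payload = build_request(b"\x89PNG...")  # bytes of a real image in practice
print(sorted(payload))  # → ['inputs', 'parameters']
```

Sending it would be a `requests.post(API_URL, json=payload)` with an `Authorization: Bearer <token>` header.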
I’ve been doing some experimentation with smaller models and embeddings, including distilbert/distilbert-base-uncased-finetuned-sst-2-english and cardiffnlp/twitter-roberta-base-sentiment-latest as binary sentiment classifiers and google/vit-base-patch16-224 as an image classifier, plus GoogleNews-vectors-negative300 and fasttext-wiki-news-subwords-300 as embeddings for finding semantically similar words and concepts.
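The word-vector side of this boils down to cosine similarity over the embedding matrix. A minimal sketch with toy 3-d vectors (the GoogleNews and fastText vectors are 300-dimensional and would be loaded via gensim in practice):

```python
from math import sqrt

def cosine(u, v):
    # cosine similarity: dot product over the product of vector norms
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def most_similar(query, vectors, k=2):
    # rank every other word by cosine similarity to the query's vector
    q = vectors[query]
    ranked = sorted(
        (w for w in vectors if w != query),
        key=lambda w: cosine(q, vectors[w]),
        reverse=True,
    )
    return ranked[:k]

toy = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.12],
    "apple": [0.10, 0.20, 0.90],
}
print(most_similar("king", toy, k=1))  # → ['queen']
```

With real pretrained vectors the only change is swapping the toy dict for the loaded KeyedVectors.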

2024-08-09

I figured out the issue with adding mistral-large. After a bit of debugging, manually calling llm_mistral.refresh_models() revealed that something was wrong with how I had added the secret on Modal. It turns out the environment variable name for the Mistral API key needed to be LLM_MISTRAL_KEY. I’m going to try to make a PR to the repo to document this behavior. I’ve also been trying to run models locally, most specifically ColPali and Florence-2.
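The confusing part was that a wrongly named secret fails silently: no error, just no Mistral models. A tiny sanity check would have saved some time (the variable name is the one the debugging above turned up):

```python
import os

def mistral_key_present(env=None) -> bool:
    # The plugin reads the API key from this exact variable name; a Modal
    # secret created under any other name silently yields no models.
    env = os.environ if env is None else env
    return bool(env.get("LLM_MISTRAL_KEY"))

print(mistral_key_present({"LLM_MISTRAL_KEY": "sk-..."}))  # → True
```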
I did some experimentation deriving a data model iteratively (something I’m currently calling “data model distillation”) by sequentially passing multiple images (this could work with text as well) to a language model and prompting it to improve the schema using any new learnings from the current image. Results so far have been unimpressive. Separately, I’ve been hearing good things about mistral-large-2. I’m working on adding it to bots-doing-things but have had a bit of dependency trouble so far.
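The loop itself is simple; all the interesting behavior lives in the prompt and the model. A sketch of its shape, with `toy_model` standing in for the real LLM call (which would send the current schema plus the image and ask for an updated schema back):

```python
def distill_schema(items, call_model, schema=None):
    # Sequentially show each item to the model alongside the current schema,
    # asking it to fold in anything new it learns from that item.
    schema = schema if schema is not None else {"fields": []}
    for item in items:
        schema = call_model(schema, item)
    return schema

# Toy stand-in for the LLM call: just merges observed keys into the schema.
def toy_model(schema, item):
    return {"fields": sorted(set(schema["fields"]) | set(item))}

docs = [{"name": "widget"}, {"name": "gadget", "price": 3}]
print(distill_schema(docs, toy_model))  # → {'fields': ['name', 'price']}
```

In the real version each `item` is an image and `call_model` is a vision-capable model prompted to return a revised schema, which is where the unimpressive results came from.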
I watched Jeremy Howard’s interview with Carson Gross, the author of htmx. As someone who learned my first bits of web dev with jQuery, I feel like I appreciate the foundations of the library’s approach, but I’m still early in fully developing my mental model. Jeremy built a Python wrapper on top of htmx called FastHTML, and the combination of these technologies is pretty well aligned with the stack I like to work with.
I tried the Vision Pro today. I had heard mixed reviews, mostly about how heavy it is and people getting headaches or vertigo. Those challenges are real. Even still, the experience was pretty incredible. I might need to go back for another demo, so I can choose my own adventure rather than staying on the standard demo path. The eye tracking selection was natural and pretty effective. I did find my eyes getting a bit tired though and eventually got a bit of a headache.

2024-08-02

I added image support for the chat shortcode of this site. Here’s a demo with gpt-4o:

> **user:** Describe this image
>
> **assistant:** The image depicts a LEGO chef figure in a playful, detailed kitchen setting. The chef, dressed in a white chef’s coat, apron, and white chef’s hat, appears to be cooking or serving food. The LEGO figure is smiling and holding utensils, with plates of colorful items like tomatoes in the foreground, resembling food.