I spent another hour playing around with different techniques to try to teach and convince GPT-4 to play Connections properly, after a bit of exploration and feedback. I incorporated two new techniques:

- Asking for one category at a time, then giving the model feedback (correct, incorrect, 3/4)
- Using the chain of thought prompting technique

Despite all sorts of shimming and instructions, I still struggled to get the model to:

- only suggest each word once, even after it had already gotten a category correct
- only suggest words from the 16-word list

Even giving a follow-up message with feedback that the previous guess was invalid didn't seem to help.

After some experimentation with GitHub Copilot Chat, my review is mixed. I really like the ability to copy from the sidebar chat into the editor. It makes the chat more useful, but the chat is pretty chatty and thus somewhat slow to finish responding. I've also found the inline generation doesn't consistently respect instructions or highlighted context, which is probably the most common way I use Cursor, so that was a little disappointing.

I worked through a basic SwiftUI 2 tutorial to build a simple Mac app. Swift and SwiftUI are an alternative for accomplishing the same things JavaScript and React do for the web. I could also use something like Electron to build a cross-platform app using web technology, but after reading Mihhail's article about using macOS-native technology to develop Paper, I was curious to dip my toe in and see what the state of the ecosystem looked like.

I enjoyed this article by Robin about writing software for yourself. I very much appreciated the reminder of how gratifying it can be to build your own tools.

I read Swyx's article Learn in Public today and it's inspired me to open source most of my projects on GitHub.
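The feedback step in the Connections experiment above is mechanical, so it's easy to implement outside the model. Here's a minimal sketch of how a guess could be validated and scored before being fed back; the function name, word list, and feedback strings are illustrative, not taken from my actual prompts:

```python
# Hypothetical sketch of the feedback step for one Connections guess.
# The board, categories, and message strings are made up for illustration.

def score_guess(guess, category, board, used):
    """Return the feedback string to send back to the model.

    guess:    set of 4 words the model proposed
    category: the actual 4-word answer for one category
    board:    the full set of 16 puzzle words
    used:     words already placed in correctly guessed categories
    """
    if not guess <= board:
        return "invalid: words not from the 16-word list"
    if guess & used:
        return "invalid: reuses words from an already-solved category"
    overlap = len(guess & category)
    if overlap == 4:
        return "correct"
    if overlap == 3:
        return "3/4"
    return "incorrect"

board = {"apple", "banana", "cherry", "grape",
         "red", "green", "blue", "yellow",
         "ford", "honda", "toyota", "mazda",
         "jazz", "rock", "folk", "pop"}
fruits = {"apple", "banana", "cherry", "grape"}

print(score_guess({"apple", "banana", "cherry", "red"}, fruits, board, set()))  # 3/4
```

This catches exactly the two failure modes I kept hitting: words outside the 16-word list, and words repeated from already-solved categories.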
A beautifully written and thought-provoking piece by Henrik about world models, exploring vs. exploiting in life, among other things.

I finally had a chance to use GitHub Copilot Chat in VS Code. It has a function to chat inline like Cursor, which has worked quite well in my initial use. I'm looking forward to using this more. Unfortunately, it's not available for all IDEs yet, but hopefully it will be soon!
I watched lesson 3 of the FastAI course. I've really enjoyed Jeremy Howard's lectures so far.

I looked into 11ty today to see if it could be worth migrating away from Hugo, which is how (at the time of this post) I build my blog. After a bit of research and browsing, I set up this template and copied over some posts. Some of my older posts were using Hugo's markup for syntax highlighting; I converted these to standard markdown code fences (which was worthwhile regardless). I also needed to adjust the linking between posts.

I would love it if OpenAI added support for presetting a max_tokens URL parameter in the Playground. Something as simple as this:
https://platform.openai.com/playground?mode=chat&model=gpt-4-1106-preview&max_tokens=1024

My most common workflow (mistake):
1. Press my hotkey to open the Playground
2. Type in a prompt
3. Submit with cmd+enter
4. Cancel the request
5. Increase the "Maximum Length" to something that won't get truncated
6. Submit the request again

A thoroughly enjoyable and inspiring read by Omar about his 20-year journey to date.
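The Playground deep link I'm wishing for above could be assembled like this; mode and model are real query parameters today, while max_tokens is the hypothetical addition:

```python
# Build the (partly hypothetical) Playground deep link.
# "mode" and "model" are existing query params; "max_tokens" is the
# parameter I'm wishing the Playground supported.
from urllib.parse import urlencode

params = {
    "mode": "chat",
    "model": "gpt-4-1106-preview",
    "max_tokens": 1024,  # hypothetical: not supported today
}
url = "https://platform.openai.com/playground?" + urlencode(params)
print(url)
```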
> Quantity was important. Quantity led to emergence of quality.
> Read the documentation: I can't emphasize enough how useful this is. There are gems upon gems in the documentation. Good documentation gives a glimpse of the mind of the authors, and a glimpse of their experience.

I'm betting OpenAI will soon have a cloud storage product like Google Drive or iCloud for ChatGPT Plus users. Having your personal data available in the context of a language model is a massive value add. With a product like this, OpenAI could fully support use cases like "summarize my notes for the week" or "create action item reminders from this recording". They've already dipped their toes in the water with the Files API.