I did a bit more work with Clojure today. My imperative programming habits are still bleeding through. The exercise introduced `cond` as a sort of case statement for flow control. I wrote a simple `cond` statement but was getting a bizarre runtime error:

```clojure
(defn my-fn [x]
  (cond
    (x < 0) "negative"
    (x = 0) "zero"
    (x > 0) "positive"))
```

```
user=> (my-fn 1)
Execution error (ClassCastException) at user/my-fn (REPL:4). class java.
```
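A likely fix, assuming the `ClassCastException` comes from the infix notation: Clojure evaluates `(x < 0)` by calling `x` itself as a function, so the comparisons need to be written in prefix form:

```clojure
;; Prefix notation: the operator comes first in each test expression
(defn my-fn [x]
  (cond
    (< x 0) "negative"
    (= x 0) "zero"
    (> x 0) "positive"))
```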
I’ve been following the work Exercism has been doing for several years now. I used their product to learn a bit of Elixir and Go a few years back. I got an email from them today promoting a focus on functional programming languages in the month of June. I decided to learn a bit of Clojure, since I’ve been working with the JVM lately. I’ve done a few of the exercises and my takeaways so far are
I tried out jsonformer to see how it would perform with some of the structured data use cases I’ve been exploring.

Setup

```shell
python -m venv env
. env/bin/activate
pip install jsonformer transformers torch
```

Code

⚠️ Running this code will download 10+ GB of model weights ⚠️

```python
from jsonformer import Jsonformer
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")

json_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "RestaurantReview",
    "type": "object",
    "properties": {
        "review": {"type": "string"},
        "sentiment": {
            "type": "string",
            "enum": ["UNKNOWN", "POSITIVE", "MILDLY_POSITIVE", "NEGATIVE", "MILDLY_NEGATIVE"]
        },
        "likes": {"type": "array", "items": {"type": "string"}},
        "dislikes": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["review", "sentiment"]
}

prompt = """From the provided restaurant review, respond with JSON adhering to the schema.
```
The API does not just change without us telling you. The models are static there. — Logan.GPT (@OfficialLoganK) May 31, 2023 Logan says any changes to the model would have been communicated. Still, it seems some folks have data that show the model’s degradation. As competition emerges in the space, it could be a problem for OpenAI if they lose user trust on model versioning and evolution. Tried to set up Falcon 40B.
Tons of reports on HN that GPT-4 has gotten significantly worse. Are people experiencing this? pic.twitter.com/VBminyUj6r — Nabeel S. Qureshi (@nabeelqu) May 31, 2023 A number of folks are reporting gpt-4 appears to be performing less impressively as of late (additional conversation). I was using gpt-4 to write code earlier today, and anecdotally, can say it seems to be less effective at code generation. It still writes working code, but the test cases it provides aren’t always correct, and it seems to be less “expert level” than I recall initially.
I’ve been following Eric’s posts about SudoLang since the first installment back in March. I’ve skimmed through the spec and the value proposition is quite compelling. SudoLang seeks to allow programmers of all levels to instruct LLMs, and it can also be transpiled into your programming language of choice. While I’m still early in my understanding of how to use this technique, it’s one I’m following closely and continuing to experiment with.
Imagine we have a query in an application that has become slow under load. We have several options to remedy this issue. If we settle on using a cache, we should consider the following failure domains when we design the architecture, to determine whether a cache actually is a good fit for the use case.

Motivations for using a cache

When the cache is available and populated, it will remove load from the database.
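To make the read path concrete, here is a minimal cache-aside sketch, with plain dicts standing in for the cache and the database (the stores and key names are illustrative, not from the original post):

```python
# Cache-aside read path: check the cache first, fall back to the database
# on a miss, then populate the cache so later reads skip the database.
database = {"user:1": {"name": "Ada"}}  # stand-in for the real database
cache = {}                              # stand-in for e.g. Redis/memcached

def get_user(key):
    # Cache hit: the database sees no load at all
    if key in cache:
        return cache[key]
    # Cache miss: read from the database and populate the cache
    value = database.get(key)
    if value is not None:
        cache[key] = value
    return value

get_user("user:1")  # miss: reads the database, populates the cache
get_user("user:1")  # hit: served entirely from the cache
```

The failure domains the post alludes to show up in exactly this flow: an empty or unavailable cache sends every read straight to the already-slow database.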
I’ve written several posts on using JSON and Pydantic schemas to structure LLM responses. Recently, I’ve done some work using a similar approach with protobuf message schemas as the data contract. Here’s an example to show what that looks like. Example Imagine we have the following questionnaire that we send out to new employees when they join our company so their teammates can get to know them better. What are your hobbies or interests outside of work?
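As a sketch of the data contract, the questionnaire might map to a protobuf message like this (the message and field names are assumptions for illustration, not from the original post):

```protobuf
syntax = "proto3";

// Hypothetical schema for the new-employee questionnaire.
message EmployeeIntro {
  // What are your hobbies or interests outside of work?
  repeated string hobbies = 1;
}
```

The idea is the same as with JSON or Pydantic schemas: the message definition gives the LLM an explicit shape to fill in, and the response can be validated against it.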
What if we set GPT-4 free in Minecraft? ⛏️ I’m excited to announce Voyager, the first lifelong learning agent that plays Minecraft purely in-context. Voyager continuously improves itself by writing, refining, committing, and retrieving *code* from a skill library. GPT-4 unlocks… pic.twitter.com/hjTxk6Qb1x — Jim Fan (@DrJimFan) May 26, 2023 NVIDIA researchers introduce an LLM-based agent with “lifelong learning” capabilities that can navigate, discover, and accomplish goals in Minecraft without human intervention.
The Alexandria Index is building embeddings for large, public data sets to make them more searchable and accessible.

That people produce HTML with string templates is telling us something. I think about this phenomenon often, though I personally find most string template systems that produce HTML difficult to use: Django templates, Handlebars, Rails ERB, and Hugo templates, just to name a few. My experience has been that these systems are difficult to debug and are practically full programming languages in their own right.