Imagine we have a query in our application that has become slow under load. We have several options for remedying this. If we settle on using a cache, we should consider the following failure domain as we design our architecture, to determine whether a cache is actually a good fit for our use case.
Motivations for using a cache

When the cache is available and populated, it will remove load from our database.
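As a rough sketch of that motivation, here is what a cache-aside read might look like in Python. The Redis connection, the key naming, the TTL, and run_slow_report_query are all hypothetical placeholders for illustration, not code from the post:

```python
import json

import redis  # assumes a Redis instance is reachable locally

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300  # hypothetical freshness window


def run_slow_report_query(report_id: str) -> dict:
    # Placeholder for the expensive database query we want to shield.
    return {"report_id": report_id, "rows": []}


def get_report(report_id: str) -> dict:
    """Cache-aside read: serve from the cache when possible,
    fall back to the database and repopulate on a miss."""
    key = f"report:{report_id}"

    cached = cache.get(key)
    if cached is not None:
        # Cache hit: the database is never touched.
        return json.loads(cached)

    # Cache miss: run the slow query, then populate the cache
    # so subsequent requests can skip the database.
    result = run_slow_report_query(report_id)
    cache.set(key, json.dumps(result), ex=CACHE_TTL_SECONDS)
    return result
```

While the cache is up and holds the key, every call after the first is served without touching the database; the interesting design questions are what happens when it isn't.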
I’ve written several posts on using JSON and Pydantic schemas to structure LLM responses. Recently, I’ve done some work using a similar approach with protobuf message schemas as the data contract. Here’s an example to show what that looks like.
Example

Imagine we have the following questionnaire that we send out to new employees when they join our company so their teammates can get to know them better.
What are your hobbies or interests outside of work?