Adam Leventhal's Blog


Oxide and Friends 2024 in Images

Bryan and I just wrapped up season 4 of Oxide and Friends. Borrowing from The Changelog, we capped the season with a look back on the past year of episodes. While the Changelog folks pick their favorite titles, we had more to say about our favorite cover images. Below I've summarized our year in images. Click on the left for YouTube, and the right for the podcast (with links to various podcast platforms).

We record most Mondays on the Oxide Discord server; join us live to chime in or just heckle us in chat.

Episode 1: Predictions 2024!
Episode 2: Open Source LLMs with Simon Willison
Episode 3: What's taking so long?!
Episode 4: Helios
Episode 5: Innovation Stagnation?
Episode 6: Crucible: The Oxide Storage Service
Episode 7: Data Visualization
Episode 8: Adversarial Machine Learning
Episode 9: Cultural Idiosyncrasies
Episode 10: Discovering the XZ Backdoor with Andres Freund
Episode 11: A Baseball Startup with Paul Freedman and Bryan Carmel
Episode 12: All we have to fear is FUD itself
Episode 13: Bookclub: How Life Works by Philip Ball
Episode 14: Rebooting a datacenter: A decade later
Episode 15: Musing with Changelog's Adam Stacoviak
Episode 16: Is NVIDIA like Sun from the Dot Com Bubble?
Episode 17: Innovation Tokens with Charity Majors
Episode 18: Heterogeneous Computing with Raja Koduri
Episode 19: CrowdStrike BSOD Fiasco with Katie Moussouris
Episode 20: Pragmatic LLM usage with Nicholas Carlini
Episode 21: The Saga of Sagas
Episode 22: Whither CockroachDB?
Episode 23: RFDs: The Backbone of Oxide
Episode 24: Reflecting on Founder Mode
Episode 25: RTO or GTFO
Episode 26: Querying Metrics with OxQL
Episode 27: Unshrouding Turin (or Benvenuto a Torino)
Episode 28: Books in the Box IV
Episode 29: Technical Blogging
Episode 30: Intel after Gelsinger
Episode 31: Conferences in Tech
Episode 32: Scaling Bluesky with Paul Frazee
Episode 33: OxF 2024 Wrap-Up

Austin API Summit Wrap-up

I started the year thinking that we had done some neat work in the Rust / API / OpenAPI ecosystem and I wanted to talk about it. I found my people at the Nordic APIs Austin Summit; it’s a conference that had been on my radar for a while, but it was my first time attending. I was pleasantly surprised by how interesting the sessions and conversations were… and was pleased to share some stuff we’ve been up to.

The Oxide API-a-matic universe

I presented (slides) our experience with Rust as an API server and client, generating an SDK, and generally how terrific all that has been. It’s worked out better—if differently—than I expected. Joining Oxide, I had some experience with OpenAPI. I thought we’d be able to avoid a bunch of work, relying instead on the strength of the ecosystem. Faith in the ecosystem turned out to be misplaced. However, the investment we made in our own web API framework (dropshot) and SDK generator (progenitor) turned out to be incredibly and surprisingly valuable.

SDK: The Next Generation

Given my poor experience with open source SDK generators, I thought we might be alone in seeing the value of SDK generation. Instead, it was a discernible theme at the conference.

Microsoft was there (in numbers) talking about Kiota, their open source SDK generator (and TypeSpec—also interesting). It’s very cool, very post-lost decade Microsoft. Part of the thesis (as far as I can tell) is that first-party API producers may not need to ship their own SDK; instead, they can share OpenAPI (or TypeSpec) and let consumers use Kiota to generate clients. Further, they can generate specifically the clients they need: not just by picking the relevant language, but the relevant subset of the API, or even by picking subsets of several APIs to have a consistent client interface for all the APIs they want to talk to. You don’t need various vendors to make consistent SDKs—you can generate the SDK you need for the integrations you want to build. It’s a neat concept I hadn’t seen before.

TypeSpec is pretty cool; it jibes with some of my opinions on OpenAPI. TypeSpec is a higher-level, terser, opinionated, more expressive way to define an API. It’s a new TypeScript-familiar language that you can translate into OpenAPI. At Oxide, we define our APIs with Dropshot as annotations in the code itself, but it’s a similar idea. In both cases humans aren’t writing JSON by hand (Yes! Leave machine formats to the machines!). There can be some lossiness, e.g. TypeSpec allows you to express pagination semantics that can’t be expressed in OpenAPI (and doesn’t seem to be on the radar for the next OpenAPI release).

I was blindsided by SDK generation being a thesis for a whole batch of startups. APIMatic and liblab were at the conference. I came across Speakeasy previously; and since discovered Stainless, Fern, and Konfig. Except for APIMatic all of these seem to have been founded in 2022 with at least $65m invested by my count. This is wild to me… here we are just giving away our Rust SDK generator… like suckers! (Just kidding! Open source is central to our mission at Oxide and we’re happy to work almost entirely in the open. We don’t ask for copyright assignment that might allow for the source-available maliciousness we've seen proliferating.)

Each of these companies charges $100s/month for SDK generation. I guess the pitch is something like: you, a large company, drank the API Kool-Aid and now you have N internal APIs. At a minimum you’re exerting N x (number of languages in use) effort, but in all likelihood, that work is being repeated in a bunch of places where groups aren’t aware or don’t trust the SDKs made by others. So with SDK generation, you just call our API with your latest API specs and out pops SDKs in all the languages you and your customers care about. It will be interesting to see if this turns out to be a sustainable model.

AI / ML

Can you go to a tech conference in 2024 where AI / ML isn’t a theme? I doubt it. Lots of uses of AI in the API space were on display: use of generative AI APIs, writing docs through gen AI, training AI on docs, even writing SDKs through gen AI. Gen AI all the things.

In-person Conferences… still a thing!

I had not been to a conference in … a while. It was surprisingly great. In particular, it was a chance to go deep on topics I don’t usually discuss. I work with a ton of great folks at Oxide, but I think none of them cares—for example—about the draft proposal for the next OpenAPI release or has opinions about it. A bunch of folks who do care were there in person, and we could debate topics I’d been chewing on. I got to hear about how others are approaching similar problems in ways that aren’t always obvious from the talks that come out of conferences or online discussion.

I wasn’t sure if in-person conferences were still going to be valuable. This one was for me; one might be for you.

Rust and JSON Schema: odd couple or perfect strangers

A bit over two years ago, I started work on typify, a library to generate Rust types from JSON Schema. It took me a while to figure out it was a compiler, but I’ll call it that now: it’s a compiler! It started life as a necessary component of an OpenAPI SDK generator—a pretty important building block for the control plane services at Oxide. Evolving the compiler has become somewhere between a hobby and an obsession, trying to generate increasingly idiomatic Rust from increasingly esoteric schemas.

Get in, loser; we’re writing a compiler

Why did I start building this? I came to Oxide with a certain amount of OpenAPI optimism (from my previous company), optimism that was in some cases well-founded (it has earned its place as the de facto standard for describing HTTP-based APIs), and in other cases profoundly misplaced (the ecosystem was less mature than expected). On the back of that optimism, Dave and I (but mostly Dave) built a server framework, dropshot, that emits OpenAPI from the code. We gave a pretty good talk in 2020 about using the code as the source of truth for interface specification.

As we built out services of the control plane we wanted service-specific clients. Ideally these would be derived from the OpenAPI documents emitted from dropshot. We couldn’t find what we wanted in the ecosystem (read: we tried them and they didn’t work) so we built our own. Before we could invoke APIs and understand their responses we needed to generate types. Since OpenAPI uses JSON Schema to define types**, I started there.

**: sort of; and it's actually quite annoying but I'll save my grousing for later.
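To make that concrete, here’s a hypothetical schema (not one of ours) and roughly the Rust type we’d want to generate from it; the name and fields are invented for illustration:

// Given a schema like:
//   {
//     "type": "object",
//     "properties": {
//       "name": { "type": "string" },
//       "count": { "type": "integer", "format": "uint32" }
//     },
//     "required": [ "name" ]
//   }
// we'd want to generate roughly this:
#[derive(serde::Deserialize, serde::Serialize)]
struct Widget {
    name: String,
    #[serde(default, skip_serializing_if = "Option::is_none")]
    count: Option<u32>,
}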

Sum types

Pretty uncontroversial take for Rust programmers: sum types are great. We use enums a bunch in the API types because they let us express precise constraints. (They do make for tricky SDK generation in languages that don’t support sum types, but that’s not important here.) How are enums represented in JSON serialization or JSON Schema? The answer, with some irony, is “variously”. The ubiquitous Rust de/serialization framework, serde, gives four different choices (that I’ll show below).

My woodworking mentor as a kid observed that I start projects in the middle. That’s exactly what happened here. Reconstructing enums from their generated schemas seemed tricky and interesting, so that’s where I started. Generally, an enum turns into a oneOf construction ("data must conform to exactly one of the given subschemas"). I try to apply heuristics that correspond to each of the serde enum formats:

 let ty = self
    .maybe_option(type_name.clone(), metadata, subschemas)
    .or_else(|| self.maybe_externally_tagged_enum(type_name.clone(), metadata, subschemas))
    .or_else(|| self.maybe_adjacently_tagged_enum(type_name.clone(), metadata, subschemas))
    .or_else(|| self.maybe_internally_tagged_enum(type_name.clone(), metadata, subschemas))
    .or_else(|| self.maybe_singleton_subschema(type_name.clone(), subschemas))
    .map_or_else(|| self.untagged_enum(type_name, metadata, subschemas), Ok)?;

Externally tagged enums have this basic shape:

{
  "<variant-name>": { .. }
}

Internally tagged enums look like this:

{
  "<tag-name>": { "const": ["<variant-name>"] },
  … other properties …
}

Adjacently tagged enums look like this:

{
  "<tag-name>": { "const": ["<variant-name>"] },
  "<content-name>": { .. }
}

Unlike other formats, the final format, “untagged”, doesn’t include any indication of the variant name—it just dumps the raw type data (and one needs to be careful that the subschemas are mutually exclusive).
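For reference, here’s a minimal sketch of how each of those representations is selected with serde attributes; the enum and field names are invented for illustration:

use serde::{Deserialize, Serialize};

// Externally tagged (serde's default): { "Message": { "text": "hi" } }
#[derive(Serialize, Deserialize)]
enum External {
    Message { text: String },
}

// Internally tagged; the tag sits alongside the other properties:
//   { "kind": "Message", "text": "hi" }
#[derive(Serialize, Deserialize)]
#[serde(tag = "kind")]
enum Internal {
    Message { text: String },
}

// Adjacently tagged; tag and content are separate properties:
//   { "kind": "Message", "content": { "text": "hi" } }
#[derive(Serialize, Deserialize)]
#[serde(tag = "kind", content = "content")]
enum Adjacent {
    Message { text: String },
}

// Untagged; just the raw data, with the variant inferred from its shape:
//   { "text": "hi" }
#[derive(Serialize, Deserialize)]
#[serde(untagged)]
enum Untagged {
    Message { text: String },
    Count(u64),
}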

Seeing enums traverse JSON Schema and turn back into the same Rust code was very satisfying. While I basically got enum generation right, there are a couple of JSON Schema constructs that I really screwed up.

allOf

In JSON Schema an “allOf” indicates that a data value needs to conform to all subschemas… to no one’s surprise. So you see things like this:

{
  "title": "Doodad",
  "allOf": [
    { "$ref": "#/$defs/Thingamajig" },
    { "$ref": "#/$defs/Whosiewhatsit" },
  ]
}

Serde has a #[serde(flatten)] annotation that takes the contents of a struct and, effectively, dumps it into the container struct. This seemed to match the allOf construct perfectly; the above schema would become:

// ⬇️ This is wrong; don’t do this ⬇️
struct Doodad {
    #[serde(flatten)]
    thingamajig: Thingamajig,
    #[serde(flatten)]
    whosiewhatsit: Whosiewhatsit,
}

This is wrong! Very very wrong. So wrong that it often produces structs into which no data can validly deserialize, or whose serializations don’t match the given schema. In particular, imagine if both Thingamajig and Whosiewhatsit have fields of the same name with incompatible types.

Perhaps more precisely: the code above is only right under the narrow conditions that the subschemas are all fully orthogonal. In the wild (as we JSON Schema wranglers refer to its practical application), allOf is most commonly used to apply constraints to existing types.

Here’s an example from a GitHub-related schema I found:

"allOf": [
  { "$ref": "#/definitions/issue" },
  {
    "type": "object",
    "required": ["state", "closed_at"],
    "properties": {
      "state": { "type": "string", "enum": ["closed"] },
      "closed_at": { "type": "string" }
    }
  }
]

The “issue" type is an object with non-required properties like:

{
  "state": {
    "type": "string",
    "enum": ["open", "closed"],
    "description": "State of the issue; either 'open' or 'closed'"
  },
  "closed_at": { "type": ["string", "null"], "format": "date-time" },
}

The result of this allOf is a type where state is required and must have the value “closed” and “closed_at” must be a date-time string (and not null). (closed_at was already required by the base type, so I’m not sure why the allOf felt the need to reassert that constraint.)

This is very very different than what #[serde(flatten)] gives us. Originally I was generating a broken type like this:

struct ClosedIssue {
    #[serde(flatten)]
    type_1: Issue,
    #[serde(flatten)]
    type_2: ClosedIssueType2,
}

struct ClosedIssueType2 {
    state: ClosedIssueType2State, // enum { Closed }
    closed_at: String,
}

Wrong and not actually useful. More recently I’ve applied merging logic to these kinds of constructions, but it’s tricky and opens the door to infinite recursion (one of the many sins the JSON Schema spec condemns albeit with merely its second sternest form of rebuke).
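For contrast, here’s a rough sketch of the kind of merged type the allOf actually describes; the names are hypothetical and the remaining issue fields are elided:

struct ClosedIssue {
    state: ClosedIssueState, // the only legal value is "closed"
    closed_at: String,       // required, and no longer nullable
    // ... the rest of the issue's properties, unchanged ...
}

enum ClosedIssueState {
    Closed,
}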

anyOf

I got allOf wrong. I got anyOf much wronger. AnyOf says that a valid value must conform to at least one of the given subschemas. So if an allOf is just a struct with a bunch of flattened members, then it would make sense that an anyOf is a struct with a bunch of optional members. It makes sense! Especially if you don’t think about it!

// ⬇️ This is wrong; don’t do this ⬇️
struct Doodad {
    #[serde(flatten)]
    thingamajig: Option<Thingamajig>,
    #[serde(flatten)]
    whosiewhatsit: Option<Whosiewhatsit>,
}

But if you do think about it even briefly, you realize that a type like this carries only the most superficial relationship to the JSON Schema. For example, at least one of the subschemas needs to be valid, yet this type would be fine with an empty object ({}) turning into a bunch of Nones.

So what’s a valid representation of anyOf as a Rust type? In a way I’m glad I went with this quick, clever, and very very wrong approach, because a robust approach is a huge pain in the neck! Consider an anyOf like this:

{
  "title": "Something",
  "anyOf": [
    { "$ref": "#/$defs/A" },
    { "$ref": "#/$defs/B" },
    { "$ref": "#/$defs/C" },
  ]
}

Bear in mind, my goal is to allow only valid states to be represented by the generated types. That is, I want type-checking at build time rather than, say, validation by a runtime builder. Consequently, I think we need a Rust type that’s effectively:

enum Something {
    A,
    B,
    C,
    AB,
    AC,
    BC,
    ABC,
}
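Or, fleshed out with the data each variant would need to carry (hypothetical naming):

enum Something {
    A(A),
    B(B),
    C(C),
    AAndB(A, B),
    AAndC(A, C),
    BAndC(B, C),
    AAndBAndC(A, B, C),
}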

You need the power set of all sub-types. Sort of. Some of those are going to produce unsatisfiable combinations (e.g. if two of the types are mutually exclusive). We’d ideally exclude those. And we need to come up with reasonable names for the enum variants (AAndBAndC?). Ugh. It’s awful. While I've cleaned up allOf, typify's anyOf implementation is still based on that original, wrong insight.

JSON kvetching

I used to abstractly dislike JSON Schema. My dislike has become much more concrete. With a big caveat: I'm considering only the use cases I care about, which assuredly bear little-to-no resemblance to the use cases envisioned by the good folks who designed and evolve the standard. By way of a terrible analogy, here's the crux of the issue: I think about product concept documents (PCDs) and product requirement documents (PRDs), which are vaguely common product management terms (that I’ll now interpret for my convenience). A PCD tells you about the thing. What is it? How’s it work? How might you build it? A PRD provides criteria for completion. Can it do this? Can it do that? JSON Schema is much better at telling you whether the thing you've built is valid than it is at telling you how to build the intended values.

What I want is a schema definition for affirmative construction, describing the shape of types. What are the properties? What are the variants? What are the constraints? JSON Schema seems to have a greater emphasis on validation: does this value conform?

As an example of this, consider JSON Schema’s if/then/else construction.

{
  "if": { "$ref": "#/$defs/A" },
  "then": { "$ref": "#/$defs/B" },
  "else": { "$ref": "#/$defs/C" }
}

If the value conforms to a schema, then it must conform to another schema… otherwise it must conform to a third schema. Why does JSON Schema even support this? I think (but am deeply unsure) that this is equivalent to:

{
  "oneOf": [
    {
      "allOf": [
        { "$ref": "#/$defs/A" },
        { "$ref": "#/$defs/B" }
      ]
    },
    {
      "allOf": [
        { "not": { "$ref": "#/$defs/A" } },
        { "$ref": "#/$defs/C" }
      ]
    }
  ]
}

In other words, (A ∧ B) ∨ (¬A ∧ C). Perhaps it's a purely academic concern: I haven’t encountered if/then/else in an actual schema.

More generally: there are often many ways to express equivalent constructions. This is, again, likely a case of my wanting JSON Schema to be something it isn’t. There’s an emphasis on simplicity for human, hand-crafted authorship (e.g. if/then/else) whereas I might prefer a format authored and consumed by machines. The consequence is a spec that's broad, easy to misimplement or misinterpret, and prone to subtle incompatibilities from version to version.

Typify to the future

As much as it’s been a pain in the neck, this JSON Schema compiler has also been a captivating puzzle, reminiscent of the annual untangling of Christmas tree lights (weirdly enjoyable… just me?). How to translate these complex, intersecting, (at times) convoluted schemas into neat, precise, idiomatic Rust types. I feel like I kick over some new part of the spec every time I stare at it (dependentRequired? Who knew!). There are plenty of puzzles left: schemas with no simple Rust representation, unanticipated constructions, weirdo anchor and reference syntax, and—to support OpenAPI 3.1—a new (subtly incompatible) JSON Schema revision to untangle.