It’s time to Consult the Oracle

Lokesh Poovaragan
Jun 3, 2022

If L1 chains were poised to be truly decentralised computers, they should technically be able to do anything, including but not limited to video encoding or Blender rendering. For this to work, though, you would need to find a way to cross-compile ffmpeg / Blender to run on chain, and even then it would cost insane gas to compute a video as short as a second, rendering the solution vain. If you did, however, want to pay for the cost of encoding/rendering compute with cryptocurrencies,

That is fairly doable. Typically, the way this is accomplished on most networks (today we’ll look at Solana) is to perform the deep compute off chain, but set up an agent on chain (think of it as a DaemonSet in Kubernetes) that can query the status of the off-chain job (how long it took to run, how many cores / gigs of RAM were consumed) and bill your wallet accordingly. This agent is usually referred to as The Oracle, and the process is termed Oracalization.

I’m 100% sure whoever coined the term Oracle was thinking of the first Matrix movie when they did

Say you’ve concocted yourself a series of distributed job sequences that accomplish a given task (I’m calling this pattern the Do This Then That, or DTTT, framework), in our case video encoding or Blender rendering. (Check out the earlier parts, I highly recommend them; you can mix and match these steps at will. They are, after all, not strictly web3 components, they are still distributed tasks thanks to k8s and Pachyderm. For the sake of the narrative, though, let’s stick to encoding for now.)

You might now be looking to accept cryptocurrencies to run the video encoding DAG on demand, for which you will need a dApp. You could start by writing a smart contract that adds a buffer to intentionally overshoot the estimated cost of completing the task, with a clause to refund the excess back to the originating wallet if the actual processing took fewer resources than the initial estimate. But you will still need a way to find out exactly how much the job cost you in terms of compute infrastructure, so let’s first try to determine this number manually.
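The buffer-and-refund clause can be sketched as plain arithmetic before it ever touches a smart contract. This is a minimal sketch, assuming a 20% buffer; the helper names and the buffer size are my own, not from the post:

```python
def escrow_hold(estimated_cost: float, buffer: float = 0.20) -> float:
    """Hold the estimate plus a safety buffer so we never undercharge."""
    return estimated_cost * (1 + buffer)

def settle(held: float, actual_cost: float) -> float:
    """Refund owed to the originating wallet once the real cost is known."""
    return max(held - actual_cost, 0.0)

held = escrow_hold(0.62)     # hold $0.744 for an estimated $0.62 job
refund = settle(held, 0.50)  # job came in cheaper, so part of the hold flows back
```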

I’m fairly sure this is called the unitary method

I have triggered a job to encode a 30-minute video from lambo.mp4 to lambo.mkv, and once the job is complete (periodically polling to check if it’s a success), I can run

pachctl list job

The output shows that the job took 240 minutes to run. There are 43,800 minutes in a month, and an n2-standard-4 on GCP costs me $113/month, so back-of-napkin math says it cost $0.62 to run the encoding job. Now we need a way to query this cost from our smart contract. gm oracle! 😎
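The back-of-napkin math is easy to sanity-check in a few lines (the prices and durations are the ones quoted above):

```python
MINUTES_PER_MONTH = 43800   # 730 hours x 60
MONTHLY_COST_USD = 113      # n2-standard-4 on GCP, as quoted above
job_minutes = 240           # runtime reported by pachctl

cost = MONTHLY_COST_USD * job_minutes / MINUTES_PER_MONTH
print(round(cost, 2))       # 0.62
```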

It’s time for some command line kung fury!

Typically, oracles are associated with a price feed, think ETH, SOL, Crude, Gold, etc.: information that is publicly accessible and verifiable. An oracle openly declares the sources of truth for the data it brings into the chain, and you could manually fact-check the data coming in by visiting the source and verifying that the same price is being brought into the network (this last step is called a Crank, and when you queue up to receive an update, you call it a crank queue).

Coming back to our example, the cost of a GCP n2-standard-4 VM on spot capacity is a price feed that can and should be publicly oracalized, so that it can be queried (or cranked) just in time before billing the user. Since this is publicly available information, and since I found this excellent service by infracost that does weekly updates, we can write an oracle wrapper around infracost’s CLI. This solves one half of our problem: we can now reliably acquire the real-world bare-metal cost of compute on the internet. The other half is specific to our job and our cluster, which probably is (and I highly recommend it, if you haven’t already) highly secure (kubectl configs and pachctl auth tokens) and firewalled to prevent misuse. But if you want to run an oracle, you’re going to have to put your keys online so it can be verified openly on the blockchain! That’s a huge deal breaker! no no no… the DevSecOps gods will never agree to stash auth tokens on the internet!
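A sketch of the shape such a pricing wrapper could take. The feed payload below is hypothetical (I’m not reproducing infracost’s actual API or JSON schema); only the unit conversion is the real logic:

```python
import json

# Hypothetical payload from a weekly-updated pricing feed (shape assumed,
# NOT infracost's real API): hourly spot price for a machine type.
feed_response = json.loads(
    '{"machine_type": "n2-standard-4", "spot_usd_per_hour": 0.0465}'
)

def usd_per_minute(feed: dict) -> float:
    """Convert an hourly spot price into the per-minute rate the oracle publishes."""
    return feed["spot_usd_per_hour"] / 60

rate = usd_per_minute(feed_response)
```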

Enter Switchboard

What Switchboard does differently is to not only make any API call accessible on chain; they go one step even further beyond and let you run Private Oracles. These are agents that run in your infrastructure that do not operate on SwitchboardDAO but have the ability to accept and inject private keys (think of it as Vault, but for chains), allowing you to publish to the crank queue like a regular oracle, but from private environments. This is perfect for the second half of our problem: all we need to do now is provide an API endpoint to our private oracle so that it can query the time it took to complete the encoding job
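At its core, that endpoint just has to pull one number out of Pachyderm’s job JSON and convert it. A minimal sketch, keeping only the `.stats.process_time` key (in seconds) that we actually read; the sample payload is illustrative, not verbatim pachctl output:

```python
import json

def process_minutes(job_json: str) -> float:
    """Extract the job's process time (seconds) and convert to minutes."""
    job = json.loads(job_json)
    return job["stats"]["process_time"] / 60

# Illustrative payload: a 4-hour encoding job.
sample = '{"stats": {"process_time": 14400}}'
print(process_minutes(sample))   # 240.0
```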

I should warn you: if you were buidling this for production, I would most likely steer you towards node-pachyderm or python-pachyderm, as those are much more resilient, fault-tolerant Pachyderm clients. But for the sake of the narrative, command-line-fu it is!

The output from this endpoint looks a lot like this (removed a few sections for brevity):

A quick TL;DR of the above section: it

  1. runs pachctl list job (to get the actual completion metrics of the job from Pachyderm)
  2. pipes the output to jq (so you get a pretty-printed, well-structured JSON)
  3. returns the JSON as is to the entity that requested an update (in our case, it would be pushed into the crank queue)
  4. we are mostly interested in the .stats.process_time key, which is currently in seconds; converting it to minutes gives us how long the encoding ran

Putting it all together,

We now have two sets of oracles:

a publicly verifiable oracle that returns the varying cost (spot pricing) of compute infrastructure on GCP, AWS, and Azure

AND

a private oracle that returns the specific duration of your encoding job

The aggregator (a job definition for how to generate cranks for an oracle) would be as follows,

Putting them together, you can now calculate the cost of an encoding job on chain, bill users for exactly the duration their job took to run, and refund any excess funds that were held back to their originating wallet
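With both feeds in hand, the settlement reduces to one multiplication and one subtraction. Same numbers as earlier in the post; the escrowed amount assumes the 20% buffer idea from the smart contract section:

```python
MINUTES_PER_MONTH = 43800
monthly_price = 113.0   # public oracle: n2-standard-4 on GCP
job_minutes = 240.0     # private oracle: encoding job runtime

cost = monthly_price * job_minutes / MINUTES_PER_MONTH   # ~$0.62
held = 0.744            # escrowed up front (estimate + 20% buffer, my assumption)
refund = held - cost    # excess returned to the originating wallet
```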

Praise yourself! You’ve now built yourself a unidirectional bridge from web2 to web3!

What if I told you, the Oracle must be consulted

Where do we go from here?

Here’s a checklist you might be interested in, to buidl on top of,

  1. you would want to run the public oracles on a dedicated RPC node for better availability
  2. you would want to run the private oracles on an Autopilot cluster from GCP because, let’s face it, no one wants to manage nodes on Kubernetes 🥲 (quick shoutout to DevSecOps folks! 🦾)
  3. I highly recommend acquiring a Pachyderm Enterprise license, as that lets you scrape a plethora of job stats from Prometheus, so you can get even more creative with the criteria by which you bill your users
  4. set up some primitive health checks (or a more comprehensive observability and alerting stack, if you have one) on your data feed with @switchboard-xyz/lease-observer to ensure you have the funds required to keep the feed active

Switchboard was a game changer for me; never before was I sure of how to pull in off-chain data to make on-chain decisions. The simplicity and ease of use of their interfaces further increases my confidence in being able to use it for any RESTful, queryable data source

@ me on twitter (@thorsadoptedbro) if you want me to buidl anything you want to see

Thanks for Reading!

— Loki


Lokesh Poovaragan

theycallmeloki.com, Developer Advocate at Dra.gd, loves Cake and all things pertaining to remarkable Developer Experience