We’re Fly.io. We put your code into lightweight microVMs on our own hardware around the world, close to your users. Redis by Upstash is managed Redis living right next door to your Fly.io apps. Check us out—your app and database can be running close to your users within minutes.
We love databases that scale globally. As an ambivalent database provider, we built a global, automated Postgres, and we tinkered with global Redis on scrappy startup weekends. But the Fly.io forecast called for integration over invention. So we partnered up on launching a simple, global, low-latency Redis service built by the intrepid crew at Upstash.
Redis by Upstash sounds good enough to launch a cologne. We think it’s as big a deal. Oh, and there’s a generous free tier.
Keep reading to learn how our first integration came to life. Or, just sign up for Fly and give it a try:
flyctl redis create
? Select Organization: fly-apps (fly-apps)
? Choose a Redis database name (leave blank to generate one): redis-for-lovers
? Choose a primary region: Madrid, Spain (mad)
? Would you like to enable eviction? Yes
? Optionally, choose one or more replica regions: Amsterdam, Dallas, São Paulo, Johannesburg
? Select an Upstash Redis plan Free: 100 MB
Your Upstash Redis database redis-for-lovers is ready.
A Better Redis for Global Deployments
So what’s special here? I assure you: this isn’t stock Redis with a price tag slapped on.
Complex features like global read replicas demand good DX to get noticed. But in the managed Redis market, read replicas are elusive, hidden behind sales calls, enterprise pricing plans and confusing UI.
With flyctl redis update and a few keystrokes, you can spin up global Redis replicas in seconds, with write forwarding switched on. Reads and writes make their way to the geographically-nearest replica, which happily forwards writes along to its primary, ensuring read-your-write consistency along the way. So, with a single Redis URI, you can safely experiment with global deployment without changing your app configuration.
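In practice, your app code doesn't change at all. Here's a minimal sketch using the Ruby redis gem; it assumes you've stashed the connection string printed by flyctl redis create in a REDIS_URL secret on your app:

require "redis"

# One connection string for every region. We're assuming the URL from
# flyctl redis create has been stored in a REDIS_URL secret on the app.
redis = Redis.new(url: ENV.fetch("REDIS_URL"))

# Writes land on the nearest replica, which forwards them to the primary.
redis.set("recipe:42", '{"title":"Humitas Chilenas"}')

# Reads are served by that same geographically-nearest replica.
redis.get("recipe:42")

The same code runs unmodified in Madrid, Amsterdam, or São Paulo; the proxy and write forwarding handle the geography.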
VM-to-Redis requests are reliably fast, in every region, because your apps run on the same bare-metal hardware as your databases, one network hop away at most. Check out Upstash's live latency measurements to compare Fly.io with serverless platforms like Vercel or AWS. The comparison is not entirely fair, as we run apps on real VMs, not in JavaScript isolates. But we love the colors.
Finally, it's worth mentioning that these databases are secure: they're only reachable over your encrypted, private Fly.io IPv6 network.
Like a Surgeon
When this integration was on the cards, we had two clear goals: don’t expose Redis to the internet, and give Upstash full control of their service without compromising customer app security. Serendipity struck as we pondered this.
We were knee-deep in fresh platform plumbing — the Machines API and Flycast private load balancing. The API grants precise control over where and how VMs launch. And Flycast yields anycast-like powers to apps on the private, global WireGuard mesh.
So Upstash Redis is a standard Fly.io app — a multitenant megalith running on beefy VMs in all Fly.io regions. These VMs gossip amongst themselves over their private IPv6 network. Upstash uses our API to deploy. We support Upstash like any other customer. Awesome.
But Redis runs in its own Fly.io organization, and therefore, in its own isolated network. And customer apps, each in their own. We needed a way to securely connect two Fly applications. Enter Flycast, stage left.
Flycast is a beautiful, complex cocktail of BPF, iptables and tproxy rules: fodder for another post! Flycast offers public proxy features — geo-aware load balancing, concurrency control and TLS termination — between apps that share a private network. With a small tweak, Flycast could now surgically join services with customer networks.
Customer apps can connect to their provisioned Redis, but not to anything else in the Upstash private network. Upstash can’t access the customer’s network at all. Mission accomplished.
A Tale of Provisioning
You might be curious how provisioning Redis works, end-to-end.
Your flyctl redis create hits the Fly.io API. We mint a fresh Flycast IP address on your network and pass that IP along to Upstash's API with the desired database configuration.
In the same request, Upstash informs their Fly.io mega-deployment about your database, and we (Fly.io) point the Flycast address at Upstash’s app. We blast this info to our global proxies. They’ll now proxy connections on this IP to the nearest healthy mega-Redis instance. This all happens in a matter of seconds.
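To make the choreography concrete, here's a rough Ruby sketch of that provisioning flow. The fly and upstash clients, and every method on them, are hypothetical shorthand for internal plumbing, not a real Fly.io or Upstash API:

# Hypothetical sketch of the provisioning flow. None of these client
# methods are real APIs; they just give names to the steps described above.
def provision_redis(fly:, upstash:, org:, name:, primary_region:, replica_regions:, plan:)
  # 1. Mint a Flycast address on the customer's private network.
  flycast_ip = fly.allocate_flycast_ip(network: org)

  # 2. Hand that address and the desired configuration to Upstash's API.
  database = upstash.create_database(
    name: name,
    primary_region: primary_region,
    read_regions: replica_regions,
    plan: plan,
    flycast_ip: flycast_ip
  )

  # 3. Point the Flycast address at Upstash's multitenant Redis app and
  #    broadcast the mapping to every global proxy.
  fly.point_flycast_at(flycast_ip, app: database.fly_app)
  fly.broadcast_to_proxies(flycast_ip)

  # 4. The customer walks away with a single connection URL.
  database.connection_url
end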
Alright, so now you have a Redis connection URL to chuck requests at.
Remember that Upstash’s Redis deployment is multitenant. Upstash hosts scores of customer databases within a single OS process. With a clever shuffling of data from RAM to persistent disks, many, many more databases can fit in this instance than your average Redis running on its own VM.
But multitenancy poses a problem. How can mega-Redis identify the target database for a given request?
Your Redis URL includes a unique database password (remember this is all private, encrypted traffic). Could we use this password to identify your database? Technically, yes, but if you leak your Redis password on a live coding stream, anyone else with a Redis database could hijack yours! Yeah, let’s not.
Recall that we passed your Flycast IP address to Upstash during provisioning, so they have it on record. Could they match that against the source address of the incoming Redis TCP connection? Not quite! Connections to Redis pass through our proxy, so traffic appears to arrive from the proxy itself, not from your Flycast IP.
No worries! We’ve got another trick up our sleeve.
A Protocol for Proxies
Bonus: our proxy supports prepending proxy protocol headers to TCP connections.
This curious 10-year-old internet resident is understood by most web servers and programming languages. At the top of the protocol spec, we spot our problem:
Relaying TCP connections through proxies generally involves a loss of the original TCP connection parameters such as source and destination addresses, ports, and so on.
Redis runs on port 6379, just because. Here’s a typical header for Redis connection initiation:
PROXY TCP6 fdaa:0:47fb:0:1::19 fdaa:0:47fb:0:1::16 6379 6379
Here we have two IPs — source and destination — on the same lovingly-named network, fdaa:0:47fb. The source IP belongs to the application VM; it's assigned randomly and is of little use here. But the destination address is the Flycast IP assigned to our particular database. Ace.
Now we’re in the home stretch. Redis parses this header, plucks out that Flycast IP, finds the associated customer database, and forwards traffic to it. In wafts the sweet aroma of victory.
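Parsing that header is refreshingly boring string work. Here's an illustrative Ruby sketch of the lookup (not Upstash's actual code, just the shape of it): the destination field of the proxy protocol v1 line maps straight to a database.

# Illustrative sketch, not Upstash's implementation: read the proxy protocol
# v1 line from a fresh connection and resolve the target database from it.
def database_for(socket, databases_by_flycast_ip)
  # e.g. "PROXY TCP6 fdaa:0:47fb:0:1::19 fdaa:0:47fb:0:1::16 6379 6379\r\n"
  header = socket.gets("\r\n")
  _proxy, _family, _src_ip, dst_ip, _src_port, _dst_port = header.split(" ")

  # The destination address is the Flycast IP minted for this database at
  # provisioning time, so it's all we need to pick the right tenant.
  databases_by_flycast_ip.fetch(dst_ip)
end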
A Need for Speed
Let’s talk about a clear-cut use case for global Redis: caching HTML at the edge.
Last year we turbo-boosted our Paris-based recipe-finder Rails app by deploying Postgres replicas around the globe. But our database has grown. We don't need to replicate all of its contents, and we're too busy to spend time optimizing our queries. Let's just lean on a lightweight HTML cache, which Rails is good at.
We know we can get similar or better performance by caching HTML in Redis alongside our deployed VMs. And we can do this in a few minutes, really. First, let’s add a few read replicas in distant, exotic lands.
~ $ fly redis update cookherenow-redis
? Choose replica regions, or unselect to remove replica regions: [Use arrows to move, space to select, <right> to all, <left> to none, type to filter]
> [ ] Amsterdam, Netherlands (ams)
[x] Denver, Colorado (US) (den)
[ ] Dallas, Texas (US) (dfw)
[ ] Secaucus, NJ (US) (ewr)
[ ] Frankfurt, Germany (fra)
[x] São Paulo (gru)
[ ] Hong Kong, Hong Kong (hkg)
[ ] Ashburn, Virginia (US) (iad)
[x] Johannesburg, South Africa (jnb)
[ ] Los Angeles, California (US) (lax)
[ ] London, United Kingdom (lhr)
[ ] Chennai (Madras), India (maa)
[ ] Madrid, Spain (mad)
[ ] Miami, Florida (US) (mia)
[x] Santiago, Chile (scl)
Then, with a sprinkle of Rails magic, our naive HTML cache is on the scene. Metrics can be boring, so trust us: our Time To First Byte is still in the low milliseconds, globally, for GET requests on cached recipe pages.
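The sprinkle amounts to pointing the Rails cache store at Redis. A minimal sketch, assuming the connection string lives in a REDIS_URL secret:

# config/environments/production.rb
# Assumes REDIS_URL holds the connection string from flyctl redis create.
config.cache_store = :redis_cache_store, {
  url: ENV.fetch("REDIS_URL"),
  expires_in: 1.hour # hypothetical TTL; tune to how fresh recipes must be
}

In the recipe view, wrapping the expensive markup in the cache helper (cache @recipe do ... end in ERB) gives us fragment caching with key-based expiry, so an edited recipe busts its own cache entry.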
RYOW
Now and then, one must write. And read-your-own-write consistency is something you need to care about when you're up against speed-of-light latency in global deployments. That's life, kids.
Readers hitting database replicas may not be served the very freshest of writes. We’re OK with that. Except in one case: when that replica is serving the author of the write. Good UX demands that a writer feel confident about the changes they’ve made, even if they have to wait a few hundred milliseconds.
To that end, Upstash Redis replicas take one of two paths to ensure a consistent read-your-own-write experience, with some trade-offs. Let’s talk it out.
Isa — one of our recipe editors in Santiago — is worried that the recipe for Humitas Chilenas mentions New Mexico Green Chiles. While they may be the first chiles grown in outer space, they're generally not tossed into humitas. So she makes corrections and proudly smashes that ENVIAR button.
Meanwhile, Santiago Redis has been diligently keeping track of the unique IDs of the writes that pass through Isa’s Redis connection.
So, that write is forwarded on to Paris, securely, over the WireGuard mesh. Santiago Redis blocks on the write command, waiting for replication to catch up to this specific write. On a clear internet day, we might wait 150ms before Isa is redirected to the recipe page and sees her updated recipe sans chiles.
But under poor network conditions, we may need to wait longer, and we don’t want to wait forever. Editing must go on. This kind of thing can happen, and we need to be prepared for it.
So, the less happy path: Santiago Redis waits up to 500ms for the written value to return via replication. After that, the client connection is released, suggesting to the Redis client that the write completed. Now, this is risky business. If we redirect Isa to her recipe before her write makes that round trip, she gets spicy Humitas once again. New Mexican space chiles haunt her confused mind.
No fear: Santiago Redis has our back. Remember that it was tracking writes? When Isa's recipe read is attempted, Santiago grabs the ID of the most recently tracked write on her connection and checks whether that ID exists in the replicated database contents. If so, Isa gets a fast, correct read of her updated recipe.
But if her change hasn't arrived yet, Santiago forwards the read operation to our source of truth — Paris Redis — at the cost of another full round trip to Europe. Such is the price of consistency.
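Here's the shape of that logic as a Ruby sketch. It's an illustration of the behavior described above, not Upstash's implementation; the class and helper names are ours, and only the 500ms budget comes from the story.

# Illustrative sketch of read-your-own-write handling on a replica. The
# collaborators (forward, wait_for, replicated?, execute) are hypothetical.
class ReplicaConnection
  WRITE_WAIT_BUDGET = 0.5 # seconds; the 500ms mentioned above

  def initialize(primary:, local_store:)
    @primary = primary
    @local_store = local_store
    @last_write_id = nil
  end

  # Writes are forwarded to the primary; we remember the ID of the last
  # write issued on this connection.
  def write(command)
    @last_write_id = @primary.forward(command)
    # Happy path: block briefly until replication catches up, so the very
    # next read on this connection is fresh without any extra work.
    @local_store.wait_for(@last_write_id, timeout: WRITE_WAIT_BUDGET)
    :ok # released even if the wait timed out -- editing must go on
  end

  # Reads are served locally only if this connection's last write has
  # already been replicated; otherwise we pay a round trip to the primary.
  def read(command)
    if @last_write_id.nil? || @local_store.replicated?(@last_write_id)
      @local_store.execute(command)
    else
      @primary.execute(command)
    end
  end
end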