How to use Cloudflare Workers proxy with Rust

 
Visit counter implemented in Rust, represented by a board. Photo by Miguel Á. Padriñán

A visits counter was a critical feature of every website just 20 years ago. In this tutorial, we will implement one with Rust Cloudflare Workers, adding persistence and dynamic behavior to an otherwise static page. We will also discuss other practical use cases of the CF Workers edge proxy.

Static blog with CF edge caching

Visits: 1778

This blog is a static JekyllRB website hosted on an EC2 instance behind an NGINX proxy. Additionally, it uses a Cache Everything Cloudflare cache rule with the following header on each HTML page:

cache-control: public, max-age=3600

Cloudflare cache everything rule

You can check it by running this cURL:

curl -I https://pawelurbanek.com/cloudflare-workers-rust | grep cache

# cache-control: public, max-age=3600
# cf-cache-status: HIT

cf-cache-status: HIT indicates that the request did not reach the origin server but was served the HTML page from the CF edge cache. This ensures the best loading performance because edge locations are always close to the readers.

Each HTML page is cached for an hour (max-age=3600). I trigger the following HTTP call when releasing an update to purge the edge cache locations:

curl -X POST "https://api.cloudflare.com/client/v4/zones/${CLOUDFLARE_ZONE_ID}/purge_cache" \
     -H "X-Auth-Email: ${CLOUDFLARE_EMAIL}" \
     -H "X-Auth-Key: ${CLOUDFLARE_API_KEY}" \
     -H "Content-Type: application/json" \
     --data '{"purge_everything":true}'

Still, if you refresh this page, the hit counter below will increase:

Visits: 1778

No JS involved. What kind of sorcery is this!?

Basic Cloudflare worker with KV store

To implement this feature, I’ve used Cloudflare Worker Routes. They work similarly to Lambda@Edge, i.e., workers sitting in front of the cache can reprocess a response before sending it to the client. Workers natively support JavaScript, TypeScript, Python, and Rust, and more languages are available via WebAssembly (Wasm). In this tutorial, we will use Rust.

Unfortunately, Cloudflare currently supports creating only JavaScript workers directly from the UI, so to get started you’ll need to install Rust locally.
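If you don’t have the Rust toolchain yet, the official rustup installer is the quickest way to get it:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh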

Next run these commands to install necessary dependencies:

rustup target add wasm32-unknown-unknown
cargo install cargo-generate

And initialize your project using a template:

cargo generate cloudflare/workers-rs

Select the Hello World template and name your project. This generates the following worker implementation:

src/lib.rs

use worker::*;

#[event(fetch)]
async fn fetch(
    _req: Request,
    _env: Env,
    _ctx: Context,
) -> Result<Response> {
    // Forward Rust panics to the worker console instead of failing silently
    console_error_panic_hook::set_once();
    Response::ok("Hello World!")
}

You can test it by running:

npx wrangler dev

Now you can access it at http://localhost:8787

Cloudflare worker running locally

Cloudflare worker process running locally
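If you prefer the terminal, a quick cURL confirms that the worker responds:

curl http://localhost:8787

# Hello World!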


Let’s make this example more interesting by adding a visits counter using KV storage:

src/lib.rs

use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, ctx: Context) -> Result<Response> {
    // Bind the "visits" KV namespace declared in wrangler.toml
    let kv = env.kv("visits")?;
    // Read the current count, defaulting to 0 when the key doesn't exist yet
    let visits = kv.get("count").text().await?.unwrap_or("0".to_string());
    let visits = visits.parse::<i32>().unwrap_or(0) + 1;
    // Store the incremented count back in KV
    let _ = kv.put("count", visits.to_string())?.execute().await;

    Response::ok(format!("Visits: {}", visits))
}

This example reads the number of visits from a key-value store and increments it on each request. If you test it in the browser, you’ll notice that the counter increases by 2 on each visit. That’s because the browser’s GET /favicon.ico request also hits the worker.
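If the double count bothers you, one option (a sketch, not part of the original worker) is to bail out at the top of the fetch handler, before touching the KV store:

    // Sketch: skip counting the favicon request so a page view
    // increments the counter only once.
    if req.path() == "/favicon.ico" {
        return Response::error("Not Found", 404);
    }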

Let’s now deploy our counter to production. You have to start by creating a production key-value store:

npx wrangler kv namespace create visits

This command will prompt you to log in to your Cloudflare account. Don’t forget to add the output to your wrangler.toml file:

wrangler.toml

[[kv_namespaces]]
binding = "visits"
id = "XXX"

We’re now ready to go live by typing:

npx wrangler deploy

Running this command will output the URL to access your first deployed Cloudflare worker.
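The exact URL depends on your account’s workers.dev subdomain, but testing the deployment looks roughly like this (the hostname below is a placeholder):

curl https://my-counter.my-subdomain.workers.dev

# Visits: 1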

Enabling Cloudflare workers proxy

Let’s make our worker more useful by hooking it up as a proxy to a live website. This tutorial assumes that you have a website that uses Cloudflare DNS with the proxy enabled, i.e., the cloud icon should be orange:

Cloudflare proxy enabled

Now go to Workers Routes in your domain’s settings and click Add route. Select the page for which you want to activate your worker. For Request limit failure mode, choose Fail open so that your page keeps working even if you exhaust the worker’s quota. The free plan allows 100k requests/day, and the paid plan includes up to 10 million a month.

Worker proxy route config
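If you prefer to keep the configuration in code, routes can also be declared in wrangler.toml; the pattern and zone below are placeholders for your own domain:

wrangler.toml

routes = [
  { pattern = "example.com/some-page*", zone_name = "example.com" }
]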

Once you enable the route, your website will display just the bare counter response we configured previously instead of its regular content. Here’s how you can enable the proxy mode:

use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, ctx: Context) -> Result<Response> {
    // Increment the visits counter in KV
    let kv = env.kv("visits")?;
    let visits = kv.get("count").text().await?.unwrap_or("0".to_string());
    let visits = visits.parse::<i32>().unwrap_or(0) + 1;
    let _ = kv.put("count", visits.to_string())?.execute().await;

    // In production, fetch the origin page; locally, use a mocked HTML body
    let mut origin_response = if env.var("WORKER_ENV")?.to_string() == "production" {
        Fetch::Request(req).send().await?
    } else {
        Response::from_html("<p>Visits: [VISITS_COUNTER]</p>")?
    };

    // Inject the counter value into the origin HTML
    let body = origin_response.text().await?;
    let body = body.replace("[VISITS_COUNTER]", &format!("{}", visits));

    // Rebuild the response, keeping the origin headers
    let response = Response::from_html(body)?;
    let mut response = response.with_headers(origin_response.headers().clone());
    response.headers_mut().delete("Last-Modified")?;

    Ok(response)
}

We removed the Last-Modified header to avoid blank page errors, as described here.

We use a WORKER_ENV config variable to allow testing our worker locally. You’ll have to create a .dev.vars file with the following content:

WORKER_ENV="development"

and add:

[vars]
WORKER_ENV = "production"

to wrangler.toml.

With this configuration in place, you can work locally against a mocked version of the origin page. In production, the Fetch::Request(req).send().await? call downloads the target website so that we can reprocess its body.

Our modification simply replaces all occurrences of the [VISITS_COUNTER] string with the count fetched from the KV store.

All you have to do now is embed the placeholder anywhere in your origin page and deploy the updated worker. The hit counter is live, and you can feel like the web design is back to its former glory.

Visits: 1778

Security headers with Rust Cloudflare workers proxy

But if visit counters are not your thing, the workers proxy offers great flexibility in modifying any part of the HTTP response.

A practical application of this is adding security headers to systems that otherwise don’t support them. I’ve known companies whose primary landing page ran on a niche CMS without an NGINX proxy, so adding custom headers was impossible without a general infrastructure overhaul.

You can easily overcome this limitation by implementing the following worker:

use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, ctx: Context) -> Result<Response> {
    // In production, proxy the request to the origin; locally, use a mocked body
    let origin_response = if env.var("WORKER_ENV")?.to_string() == "production" {
        Fetch::Request(req).send().await?
    } else {
        Response::from_html("mock")?
    };

    // Append the security headers to the origin response
    let mut headers = origin_response.headers().clone();
    headers.set(
        "strict-transport-security",
        "max-age=31536000; includeSubDomains; preload",
    )?;
    headers.set("x-frame-options", "SAMEORIGIN")?;
    headers.delete("Last-Modified")?;

    Ok(origin_response.with_headers(headers))
}

With a similar worker, you should always be able to score an A+ on securityheaders.com.
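You can verify the result with a quick cURL against your own domain (example.com below is a placeholder):

curl -I https://example.com | grep -iE "strict-transport-security|x-frame-options"

# strict-transport-security: max-age=31536000; includeSubDomains; preload
# x-frame-options: SAMEORIGIN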

Summary

Rust, Wasm, CDN, and edge cache workers… I wonder how devs implemented hit counters 20 years ago without all this advanced tech. Anyway, Cloudflare Workers is an interesting tool that is worth knowing. In some cases, it can offer a simple solution to otherwise complex DevOps tasks.


