· One min read
Ziinc

This is my handy dandy way to deploy lots of Supabase edge functions and sync my migrations all in one go:

In a makefile at project root:

diff:
	# see the migration protip below!
	supabase db diff -f $(f) -s public,extensions --local

deploy:
	@echo 'Deploying DB migrations now'
	@supabase db push
	@echo 'Deploying functions now'
	@find ./supabase/functions/* -type d ! -name '_*' | xargs -I {} basename {} | xargs -I {} supabase functions deploy {}

.PHONY: diff deploy

Just run make deploy and it will push the database migrations and deploy all edge functions in the supabase/functions folder.

The edge functions deploy step also ignores all folders that start with _, which usually contain shared code modules rather than actual edge functions that you would want to deploy.
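For example, given a (hypothetical) functions folder laid out like the one below, only hello-world and send-email would be deployed:

supabase/functions/
├── _shared/       # skipped: starts with an underscore
├── hello-world/   # deployed
└── send-email/    # deployed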

Migration Generation ProTip

You can also use make diff f=my_migration_name, as added above, to generate a database migration diff faster than you can say "Yes please!" (Actually the diff-ing is not very fast, so you might finish saying it before it completes. Try saying it letter by letter 😄)
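As a quick usage sketch (the migration name add_user_table here is hypothetical):

make diff f=add_user_table

This writes a timestamped SQL migration file under supabase/migrations/, which the deploy target above will then push.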

· 2 min read
Ziinc

Wouter is amazingly lightweight, and if it can do so much work in such a tiny package, so can you!

Here is my tried and tested way to track routes in a React app using Wouter as the main browser router:

Installation

This guide assumes that you already have a working router set up with wouter installed. This is written for wouter v3.

We'll use the react-ga4 package, because popping npm package pills is more fun than hand-rolling it.

npm i react-ga4

In App.tsx, initialize the React GA4 script:

import ReactGA from "react-ga4";

ReactGA.initialize([
  {
    trackingId: "G-mytrackingnum",
    gaOptions: { anonymizeIp: true },
  },
]);

Create a <TrackedRoute /> Component

import { useEffect } from "react";
import ReactGA from "react-ga4";
import { Route, RouteProps, useLocation, useRoute } from "wouter";

const TrackedRoute = (props: RouteProps) => {
  const [location, _setLocation] = useLocation();
  const [match] = useRoute(props.path as string);

  useEffect(() => {
    if (match) {
      ReactGA.send({
        hitType: "pageview",
        page: props.path,
        title: document.title,
      });
    }
  }, [location]);

  return <Route {...props} />;
};

In this example, we trigger the effect every time the browser location changes. We then check if the route matches, and if it does, we will fire off the ReactGA pageview event.

Add it to the <Router> component

import { Router } from "wouter";

<Router base={import.meta.env.BASE_URL}>
  <TrackedRoute path="/test/:testing">some test page</TrackedRoute>
  <TrackedRoute path="/">some app code</TrackedRoute>
</Router>;

And now, if we navigate to /test/123, we will see that the pageview of /test/:testing will get logged.

Note that this example only tracks the route path that is matched, and not the actual location. This is because app routes are not really the same as public content routes, and the actual resource IDs are irrelevant to web analytics.
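If you do want to log the concrete URL instead (say, for public content routes), a minimal tweak to the component above is to send the matched location rather than the path pattern:

// inside the useEffect of <TrackedRoute />, send the actual location instead
ReactGA.send({ hitType: "pageview", page: location, title: document.title });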

· 5 min read
Ziinc

EmailOctopus is a lovely newsletter service. Their UI is quite well done and user friendly, and on the technical side they have EmailOctopus Connect, which allows for lower email costs. They bill by subscriber count though, so for any seriously large subscriber count it would make more sense to self-host, but I like using them for small projects (like this blog, for example) which will never ever see more than a handful of subscribers.

However, EmailOctopus could definitely up their game when it comes to their developer APIs and scripts. Their embeddable script is an absolute pain to work with when it comes to custom websites, especially if you're using a shadow DOM.

<script
  async
  src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
  data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
  defer
  type="text/javascript"
></script>

Let me break this down for you:

  • It loads a script asynchronously. The script is custom generated for the specific form created, as can be seen from the UUID in the script URL.
  • The script will insert a form, as well as load some additional Google reCAPTCHA scripts for spam protection. It will also load some Google Fonts and any assets related to the form.
  • By default, it does not come with the defer and type attributes. These were added in by me, and ensure that the browser executes it as JavaScript, and that execution is deferred until the DOM is fully loaded.
  • It finds a <script> tag with the data-form attribute set to that exact UUID and replaces it with the form. It then creates the required DOM nodes within the HTML page.

However, adding in the script directly to a React component would not work:

// 🚫 This will not work!
const MyComponent = () => (
  <div>
    <script
      async
      src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
      data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
      defer
      type="text/javascript"
    ></script>
  </div>
);

Why wouldn't this work?

  • React works with a virtual DOM, and thus there would not be any script tag available in the HTML at page load. React will mount the component on client load.
  • Even with React server-side rendering, the script tag would not be executed, because React protects against malicious code setting raw HTML inside components. One would need to use dangerouslySetInnerHTML in order for this to work.

Thus, we need to adjust our React code in Docusaurus to:

  1. execute the script; and then
  2. create the HTML tags at the <script> tag; but
  3. only do it client side;
Why do we want it to be only client side?

Docusaurus generates both server and client code during the build step. Although rendering the form at build time would have some benefits, since less JS would run on the initial client load, there is added complexity in trying to wrangle with the Docusaurus SSR build step, so just leaving it client side is fine. There are also no SEO benefits to be gained here.

For any other React library, this would likely be irrelevant.

Step 1: Create the Form

Create the form inside EmailOctopus and obtain the embed script.

[Image: Example of form creation]

Step 2: Add the wrapped component to your layout

Add in the <Newsletter /> tag wherever you want to slot in your newsletter form. You can also swizzle one of the layout components, but how to do that is out of scope for this blog post.

import React from "react";
import Newsletter from "@site/src/components/Newsletter";

export default function MyComponent(props) {
  return (
    <>
      <Newsletter />
      ...
    </>
  );
}

Step 3: Install React Helmet

We'll need some way to load the script in the head of the HTML document. We'll reach for React Helmet in this walkthrough guide, so do the current variation du jour of npm install --save react-helmet-async.

Step 4: Add in the Newsletter component

For our component to work successfully, we need to create the file at /src/components/Newsletter.tsx and define the component as such:

//  /src/components/Newsletter.tsx
import React from "react";
import { Helmet } from "react-helmet-async";
import BrowserOnly from "@docusaurus/BrowserOnly";

const Newsletter = () => (
  <div
    style={{
      marginLeft: "auto",
      marginRight: "auto",
    }}
  >
    <BrowserOnly>
      {() => (
        <>
          <Helmet>
            <script
              async
              defer
              src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
              type="text/javascript"
            ></script>
          </Helmet>
          <script
            type="text/javascript"
            data-form="983899ac-29fb-11ef-9fcd-4756bf35ba80"
          ></script>
        </>
      )}
    </BrowserOnly>
  </div>
);

export default Newsletter;

In this component, there are a few things going on:

  1. We set the script in the <Helmet /> component, meaning that it will be placed in the <head> tag of the HTML document. Two additional attributes are added as well: defer, to load this after the main document loads, and type="text/javascript" for completeness.
  2. We also add in the extra <script> tag in the component, with the data-form attribute, to let the script identify it as the parent node to insert the form elements into.
  3. We wrap all of this inside the <BrowserOnly /> component that comes with Docusaurus, which allows us to run this code only on the client. As these scripts do not affect SEO, it is not necessary to include them in the server-side generation.
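As an aside, Docusaurus ships its own <Head /> component (a wrapper around react-helmet-async), so a roughly equivalent sketch without installing the extra dependency could look like this:

import Head from "@docusaurus/Head";

<Head>
  <script
    async
    defer
    src="https://eomail1.com/form/983899ac-29fb-11ef-9fcd-4756bf35ba80.js"
    type="text/javascript"
  ></script>
</Head>;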

Step 5: Verify it all works

Now check that it all works on your localhost as well as on production, and now pat yourself on the back!

· 5 min read
Ziinc
👋 I'm a dev at Supabase

I work on logging and analytics, and manage Logflare, the underlying service that powers Supabase Logs. The service does over a billion requests each day, with traffic constantly growing, and these devlog posts talk a bit about my day-to-day open source dev work.

It gives some insight into what one can expect when working on high performance and high availability software, with real code snippets and PRs to boot. Enjoy! 😊

This week, I'm implementing OpenTelemetry, which generates traces of our HTTP requests to Logflare, the underlying analytics server of Supabase. For Elixir, we have the following dependencies that we need to add:

# mix.exs
[
  ...
  {:opentelemetry, "~> 1.3"},
  {:opentelemetry_api, "~> 1.2"},
  {:opentelemetry_exporter, "~> 1.6"},
  {:opentelemetry_phoenix, "~> 1.1"},
  {:opentelemetry_cowboy, "~> 0.2"}
]

A quick explanation of each package:

  • :opentelemetry - the underlying core Erlang modules that implement the OpenTelemetry Spec
  • :opentelemetry_api - the Erlang/Elixir API for easy usage of starting custom traces
  • :opentelemetry_exporter - the functionality that hooks into the recorded traces and exports them to somewhere
  • :opentelemetry_phoenix - automatic tracing for the Phoenix framework
  • :opentelemetry_cowboy - automatic tracing for the cowboy webserver

Excluding ingestion and querying routes

Logflare handles a ton of ingestion and querying requests every second, and if we were to trace every single one of them, we would generate a huge amount of traces. This would not be desirable or even useful, because storage costs for these would be quite high and a lot of it would be noise.

What we need is to exclude these specific API routes, but record the rest. Even then, we don't want to record all of the rest; usually a sample of a large amount of traffic suffices to give a good analysis of overall performance.

Of course, when sampling, we would not have a wholly representative dataset of real-world performance. However, for practical purposes, we would be using the OpenTelemetry traces for optimizing the majority of request happy paths.

In order to do so, I had to implement a custom sampler for OpenTelemetry. The main pull request is here, and I'll break down some parts of the code for easy digestion.

Configuration Adjustments

We need to make the configuration flexible enough to allow for self-hosting users to increase/decrease the default sampling probability. This also allows us to configure the sampling probability differently for different clusters, such as having higher sampling for our canary cluster.

# runtime.exs
if System.get_env("LOGFLARE_OTEL_ENDPOINT") do
  config :logflare, opentelemetry_enabled?: true

  config :opentelemetry,
    traces_exporter: :otlp,
    sampler:
      {:parent_based,
       %{
         root:
           {LogflareWeb.OpenTelemetrySampler,
            %{
              probability:
                System.get_env("LOGFLARE_OPEN_TELEMETRY_SAMPLE_RATIO", "0.001")
                |> String.to_float()
            }}
       }}
end

Lines on GitHub

We define a custom sampler, LogflareWeb.OpenTelemetrySampler, that works on the parent span (as specified by :parent_based), and pass in the :probability option as a map key to the sampler.
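As a concrete (hypothetical) example, enabling tracing with a 1% sample ratio could look like this when booting the release:

LOGFLARE_OTEL_ENDPOINT=https://otel.example.com:4317 \
LOGFLARE_OPEN_TELEMETRY_SAMPLE_RATIO=0.01 \
./logflare start

Note that String.to_float/1 expects a float-formatted string, so a value like "1" would raise, while "1.0" would not.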

We also conditionally start the OpenTelemetry setup code for the Cowboy and Phoenix plugins based on whether the OpenTelemetry exporting endpoint is provided:

# lib/logflare/application.ex
if Application.get_env(:logflare, :opentelemetry_enabled?) do
  :opentelemetry_cowboy.setup()
  OpentelemetryPhoenix.setup(adapter: :cowboy2)
end

Lines on GitHub

Remember that we set the :opentelemetry_enabled? flag in the runtime.exs above?

Custom Sampler

The custom OpenTelemetry sampler works by wrapping the base sampler :otel_sampler_trace_id_ratio_based with our own module. The logic is in two main portions of the module: the setup/1 callback, and the should_sample/7 callback.

In the setup/1 callback, we delegate to :otel_sampler_trace_id_ratio_based.setup/1 with the probability float as input. This generates a map with two keys: the probability as-is, and something called :id_upper_bound.

# lib/logflare/open_telemetry_sampler.ex
@impl :otel_sampler
def setup(opts) do
  :otel_sampler_trace_id_ratio_based.setup(opts.probability)
end

How the trace ID sampling works is that each trace has a generated ID, which is a super large integer like 75141356756228984281078696925651880580. A bitwise AND is performed against a hardcoded max trace ID value, and the result is then compared against the upper bound ID. If it is smaller than the upper bound ID, the sample is recorded; otherwise it is dropped. This is implementation specific and out of scope for this blog post, but you can read more in the OpenTelemetry spec's TraceIdRatioBased sampler specification.
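To make that concrete, here is a rough Elixir sketch of the idea. This is illustrative only, not the actual :otel_sampler_trace_id_ratio_based source, and the 63-bit mask is an assumption:

import Bitwise

# hypothetical values for illustration
probability = 0.001
max_value = 2 ** 63 - 1
id_upper_bound = probability * max_value

trace_id = 75_141_356_756_228_984_281_078_696_925_651_880_580

# bitwise AND against the hardcoded max trace ID value
masked = trace_id &&& max_value

# record the sample if below the upper bound, otherwise drop it
decision = if masked < id_upper_bound, do: :record_and_sample, else: :drop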

Here is the code. For brevity's sake, I have omitted the arguments for the should_sample/7 function call and definition:

# lib/logflare/open_telemetry_sampler.ex
@impl :otel_sampler
def should_sample(...) do
  tracestate = Tracer.current_span_ctx(ctx) |> OpenTelemetry.Span.tracestate()

  exclude_route? =
    case Map.get(attributes, "http.target") do
      "/logs" <> _ -> true
      "/api/logs" <> _ -> true
      "/api/events" <> _ -> true
      "/endpoints/query" <> _ -> true
      "/api/endpoints/query" <> _ -> true
      _ -> false
    end

  if exclude_route? do
    {:drop, [], tracestate}
  else
    :otel_sampler_trace_id_ratio_based.should_sample(...)
  end
end

Lines on GitHub

Here, because this is a highly executed code path, we use a case for the path check instead of multiple if-do or cond-do blocks, because binary pattern matching in Elixir is very performant. Furthermore, binary pattern matching is perfect for our situation, because we only need to match against the first part of the HTTP route that is called, instead of the whole string.

The code is relatively simple: it delegates to :otel_sampler_trace_id_ratio_based.should_sample/7 if the route is not one of the hot paths. If it is, we drop the trace. As this sampler works on the parent, dropping it drops all child traces as well.

Arguably, we could optimize this even further by rewriting the conditional delegation into multiple function heads, pattern matching on the attributes argument and doing the binary check within the function head, as sketched below. As always, premature optimization is the enemy of all software engineers, so I'll defer this refactor until the next time I need to improve this module.
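For the curious, a sketch of that refactor (not actual code from the module) might look like:

defp exclude_route?("/logs" <> _), do: true
defp exclude_route?("/api/logs" <> _), do: true
defp exclude_route?("/api/events" <> _), do: true
defp exclude_route?("/endpoints/query" <> _), do: true
defp exclude_route?("/api/endpoints/query" <> _), do: true
defp exclude_route?(_), do: false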

Wrap up

And that is how you implement a custom OpenTelemetry sampler!

· 3 min read
Ziinc

It has been a long time since my initial post on using Distillery to manage Ecto migrations on startup, and I'm super happy that the Elixir core team has worked on making deployments a breeze now.

The absolute simplest way to achieve migrations on startup is now as follows:

  1. Write the migrator function using this example here
  2. Add a startup script in the release overlays folder
  3. Add the startup script to your dockerfile CMD

Writing the Migrator Function

This is going to be largely lifted from the Phoenix documentation, but the core aspects that you need are all here:

defmodule MyApp.Release do
  @app :my_app

  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    Application.load(@app)
  end
end

I have left out the rollback/2 function, but you can include it if you really think that you will need it. It's more likely that in reality you'll just add a new migration to fix a bad migration, so it is up to personal preference.
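For reference, the rollback/2 function from the Phoenix docs looks like this:

def rollback(repo, version) do
  load_app()
  {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
end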

Adding the Startup Script

With Elixir releases, we now have a nice convenient way to copy our files into our releases automatically, without having to do multiple COPY commands in our dockerfiles. Neat! Anything to save a few lines of code!

Create your startup file here:

#!/bin/sh
# rel/overlays/startup.sh
./my_app eval "MyApp.Release.migrate"
# optionally, start the app
./my_app start

This assumes that we will be setting our working directory to our release root, which we will do in our dockerfile. If you wish to couple the migrations together with the app startup, you can add the optional ./my_app start portion. However, you can also decouple it, so that you don't end up in a boot loop in the event of a bad migration. As always, it really depends on your situation.

And then in your release configuration:

# mix.exs
releases: [
  my_app: [
    include_executables_for: [:unix],
    # the important part!
    overlays: ["rel/overlays"],
    applications: [my_app: :permanent]
  ]
]

This will then copy your files under the rel/overlays directory over to the built release.

Add the Startup Script to Dockerfile

Let's run the startup script from our dockerfile like so:

WORKDIR ".../my_app/bin"
CMD ["./startup.sh"]

The three dots are for illustration purposes; adjust the WORKDIR to the actual directory that you copied your release binaries to. If you coupled your app startup together with the startup script, the above CMD will run the migrations and then start up the app.

If you wish to decouple the migrations from the startup, leave the ./my_app start line out of startup.sh and chain the commands in the dockerfile instead. Note that the exec form of CMD does not go through a shell, so the && has to be wrapped in sh -c:

WORKDIR "..."
CMD ["sh", "-c", "./startup.sh && ./my_app start"]

Wrap Up

An important mention is that Phoenix now comes with mix phx.gen.release, which has a dockerfile option for bootstrapping docker-based release workflows. The migration scripts are also automatically generated for you. However, you wouldn't want to use the helper if you aren't doing any Phoenix stuff, and the above example walkthrough will work for any generic Elixir release.

Thanks for reading!