Using A Hunky Poolboy To Manage Your Python ErlPort Processes in Elixir

· 7 min read

So, you've got lots of Python code being executed from your Elixir app, and lots of ad-hoc calls into that Python code. On one hand, you want to scale dynamically to meet demand, but at the same time, you don't want to accidentally crash your application by starting too many Python processes. So... what's a frazzled developer gotta do to save himself some headache? The key problem we're trying to solve here is pooling: keeping a limited set of resources at the ready and handing them out to ad-hoc requests as they come in.

Before we talk about the hunky poolboy in the Erlang/Elixir room, we should consider WHY we run into these problems in the first place.

Problem Analysis: Python is Annoying

Problem 1: Python Processes May Crash On Error

When we start up a Python process using ErlPort/Export, we expose ourselves to the possibility of the Python process shutting down due to errors. In that case, the process created by Export will also shut down, leaving us with one less Python process to work with. Ideally, this Python process would be re-created automatically.

Doesn't this sound like a problem that Elixir already addresses?

Problem 2: We Need to Create More Processes On High Server Load

Under high load, we'll definitely need to increase the number of Python processes started to handle the increased request traffic. This gives us some scalability and makes us more dynamic... but it inadvertently leads us to problem 3...

Problem 3: We Need a Cap on Python Processes Allowed To Start

Each Python process started consumes a certain amount of memory, and each additional process can snowball your app's memory usage into a monstrosity that spirals out of control. So, we'll definitely need some mechanism that allows a certain number of processes to be started, but not so many that they overwhelm the underlying machine.

Vanilla Elixir, the Naive Solution

One way to do this with vanilla Elixir is to use either a Supervisor to start and supervise a fixed number of processes, or to use a DynamicSupervisor to supervise a variable number of processes.
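To make that concrete, here's a minimal sketch of the hand-rolled approach, assuming a PyWorker GenServer like the one we define later in this post (the MyApp.NaivePyPool name is purely illustrative):

# lib/naive_py_pool.ex
defmodule MyApp.NaivePyPool do
  use DynamicSupervisor

  def start_link(_) do
    DynamicSupervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  @impl true
  def init(_) do
    # :max_children gives us a hard cap on the number of workers...
    DynamicSupervisor.init(strategy: :one_for_one, max_children: 4)
  end

  # ...but checking workers out, queueing callers, and balancing load
  # between workers is entirely up to us.
  def start_worker do
    DynamicSupervisor.start_child(__MODULE__, MyApp.PyWorker)
  end
end

Notice that even with :max_children as a cap, we'd still have to track which workers are free and hand them out to callers ourselves.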

However, rolling our own solution has plenty of drawbacks, such as extended development time and reduced reliability from the lack of battle testing and production use. So, let's have a look at the handsome out-of-the-box solution available to us...

Hunky Poolboy Saves The Day

Here comes the star of the show, Poolboy. Highly reliable and battle tested, it can be trusted to create pools of processes that we can check out and call, while ensuring that those processes are supervised and restarted on crash. Furthermore, we can also configure the number of overflow processes allowed to be started, letting us scale dynamically for problems 2 and 3 while also providing a hard cap on the number of processes started.

It is worth noting that many libraries build on :poolboy (such as Ecto, Elixir's database library), and it is extremely reliable and battle tested, having been released more than 10 years ago.
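If you want to follow along, both :poolboy and Export are available on Hex; a minimal deps entry in mix.exs could look like the following (version requirements are indicative — check Hex for the current releases):

# mix.exs
defp deps do
  [
    {:poolboy, "~> 1.5"},
    {:export, "~> 0.1"}
  ]
end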

A Tasty Example

After following the guide on setting up a GenServer wrapper for a Python process, you should have a GenServer module called MyApp.PyWorker that can now be called to perform some text duplication.

Let's create a manager module to "manage" a pool of PyWorkers, using :poolboy under the hood.

Create a manager module called MyApp.PyManager:

# lib/py_manager.ex
defmodule MyApp.PyManager do
  use Supervisor

  def start_link(_) do
    Supervisor.start_link(__MODULE__, [], name: __MODULE__)
  end

  @impl true
  def init(_) do
    children = [
      :poolboy.child_spec(:py_pool,
        name: {:local, :py_pool},
        worker_module: MyApp.PyWorker,
        size: 4,
        max_overflow: 2
      )
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end

  def call(func, args \\ []) do
    :poolboy.transaction(:py_pool, fn pid ->
      GenServer.call(pid, {func, args})
    end)
  end
end

This PyManager module implements an Elixir supervisor, as seen through the use Supervisor macro. Through the init/1 callback, we implement a supervision tree, with the only supervised child being poolboy's own supervision tree.

Within the poolboy child specification, the first argument is an identifier for the pool, and the second argument is a keyword list of options. Poolboy itself doesn't really have great documentation, so we will rely on Elixir School's article on poolboy for understanding the keyword options.

  • :name - the pool name. Scope can be :local, :global, or :via.
  • :worker_module - the module that represents the worker.
  • :size - maximum pool size.
  • :max_overflow - maximum number of temporary workers created when the pool is empty. (optional)
  • :strategy - :lifo or :fifo, determines whether the workers that return to the pool should be placed first or last in the line of available workers. Default is :lifo. (optional)

from Poolboy - Elixir School

In our case, we have provided the following options to the child spec:

  • a locally referenced name, :py_pool
  • the worker module to be started, MyApp.PyWorker
  • a pool size of 4, i.e. the number of permanent workers created
  • a maximum of 2 overflow workers that can be created on demand

After initializing our supervisor on the final line of init/1, we then define a general function interface for interacting with the pool. This hides the poolboy-specific parts from external callers of this module. In this case, we define a call/2 function that simply makes a GenServer call to the underlying PyWorker process. We make use of :poolboy.transaction/2, which lets us check a process out of the pool, execute an anonymous function against it, and automatically check the process back into the pool once the function completes.
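One detail worth knowing: :poolboy.transaction/3 also accepts a checkout timeout (how long a caller will wait for a free worker), which is separate from the GenServer.call timeout. A variant of call/2 with both timeouts made explicit might look like this (the numbers are purely illustrative):

def call(func, args \\ []) do
  :poolboy.transaction(
    :py_pool,
    fn pid ->
      # Give the Python call itself up to 30 seconds...
      GenServer.call(pid, {func, args}, 30_000)
    end,
    # ...and wait up to 10 seconds for a worker to become available.
    10_000
  )
end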

After defining our MyApp.PyManager module, we now have to define the MyApp.PyWorker module that handles the forwarded GenServer calls, just like the one we created in the explanatory article on connecting Python with Elixir:

# lib/py_worker.ex
defmodule MyApp.PyWorker do
  use GenServer
  use Export.Python

  # Poolboy starts each worker through start_link/1.
  def start_link(_args) do
    GenServer.start_link(__MODULE__, %{})
  end

  @impl true
  def init(state) do
    priv_path = Path.join(:code.priv_dir(:my_app), "python")
    {:ok, py} = Python.start_link(python_path: priv_path)
    {:ok, Map.put(state, :py, py)}
  end

  @impl true
  def handle_call({func, args}, _from, %{py: py} = state) do
    result = Python.call(py, "my_module", func, args)
    {:reply, result, state}
  end

  @impl true
  def terminate(_reason, %{py: py} = _state) do
    Python.stop(py)
    :ok
  end
end

This simple module lets us forward function calls to Python whenever PyManager.call/2 is used. For example, we can call a Python function defined in our my_module.py file like so:

iex> MyApp.PyManager.call("duplicate_text", ["testing"])
"testingtesting"

Tips and Tricks

Of course, from my professional experience, I'll share some secret sauce that I've been using to ensure that the whole world doesn't come crashing down and my weekends are free from worries.

Tip 1: Use Separate Pools For Separate Workload Priorities

Are there Python calls that must not be dropped at any cost? If so, create your very own dedicated :write pool, for you to check out and call when working with sensitive or important workloads.

You can then keep the main pool of Python processes for non-critical :read workloads, such as data fetching, calculations, etc. If the user won't notice a failure, or the call can easily be retried on crash/timeout, opt for the :read pool.
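As a sketch of what this could look like in PyManager (pool names and sizes here are illustrative, not prescriptive), init/1 can declare two pools and the public interface can route calls by priority:

@impl true
def init(_) do
  children = [
    # Larger pool for non-critical :read workloads.
    :poolboy.child_spec(:py_read_pool,
      name: {:local, :py_read_pool},
      worker_module: MyApp.PyWorker,
      size: 4,
      max_overflow: 2
    ),
    # Smaller dedicated pool reserved for :write workloads that must not be dropped.
    :poolboy.child_spec(:py_write_pool,
      name: {:local, :py_write_pool},
      worker_module: MyApp.PyWorker,
      size: 2,
      max_overflow: 0
    )
  ]

  Supervisor.init(children, strategy: :one_for_one)
end

# Route each call to the pool matching its priority.
def call(priority, func, args \\ []) when priority in [:read, :write] do
  pool = if priority == :write, do: :py_write_pool, else: :py_read_pool

  :poolboy.transaction(pool, fn pid ->
    GenServer.call(pid, {func, args})
  end)
end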

Tip 2: Set an Overflow, But Not Too Much

For each overflow request, a new Python process gets started up; once the work is complete, it gets killed. If you are receiving high call volumes, consider limiting the number of overflow processes allowed while increasing the pool size instead. It may actually consume more resources to constantly start and kill processes through ErlPort than to simply keep a larger pool pre-emptively. This is highly dependent on the VM that you are working with, so take this advice with a pinch of salt.

Additionally, if you allow too high of an overflow, your machine may max out its memory usage. As such, tailor this overflow to match the max workload that you are willing to withstand.
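For example (numbers purely illustrative), if you routinely see around eight concurrent Python calls, a configuration like the one below keeps eight long-lived workers instead of constantly churning overflow processes, while still capping the worst case at ten:

:poolboy.child_spec(:py_pool,
  name: {:local, :py_pool},
  worker_module: MyApp.PyWorker,
  # Enough permanent workers for the typical load...
  size: 8,
  # ...plus a small allowance for short-lived bursts.
  max_overflow: 2
)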

Wrapping Up

Poolboy is one of the easiest ways to create a process pool in Elixir, and it fits our needs for blending Python usage with Elixir perfectly. Whether you use it or not depends on your requirements, but more often than not, it will save you lots of headaches!