Disclaimer: everything that follows is my personal opinion and is in no way connected to or endorsed by my employer.

I started my personal Rust journey a few years ago, but I didn't really venture into putting it in production at $dayjob until a year or so ago.. and since then I have learned a few things that I think are worth sharing. Most of what follows should be read in the context of writing code supporting a Computer Graphics / Visual Effects pipeline (one of those places where the software is only relevant if it helps produce better pixels).

Personal introduction

Before Rust, most of my programming experience was in high-level languages like Python and JavaScript. I have also done some Java and C++ in the past, and I had very little exposure to C. I never wrote Assembly (unless playing TIS-100 counts). And I have beginner knowledge of various LISPs.

All of that should make it clear that I'm not exactly a wizard of low level code.

On writing Rust CLIs

White theme, hands too spaced apart and an Apple keyboard: clearly a stock photo from Wikipedia!

With that out of the way, let's start from a few simple premises:

  1. Rust is more than just a "memory safe language".
  2. Rust is an immense language, and there's different ways to write working Rust.
  3. Writing CLIs in Rust is incredibly practical.

The way parts of the Linux kernel are written in Rust is..

..not the same way a tokio async application is written.

..not the same way a no-std Rust firmware is written for embedded devices..

..not the same way that your Pipeline CLI or Server will be written.

Rust is so big that I tend to think of it as an enormous box full of tools (from Arcs to Box to threads to Futures) to help me tackle many different problems that I might encounter.

If you're writing things that aren't performance critical, there are a lot of shortcuts you can take in safe Rust to churn out a lot of code without spending a lot of time. Those shortcuts make Rust feel closer to something like a Swift or a Go, but without necessarily constraining you to the subset of problems that a Swift or a Go are tailored to solve.

For example, you can really live a lifetime-free life if you can afford to .clone() here and there, and you're not doing heavily parallelized workflows that have to share data. Yes, the Rust purists will squint at you, but at the end of the day, most of us can live a very happy life making those extra copies for the sake of lowering the complexity of a code-base. And even then, you'll be writing software that is much faster and more memory efficient than any vanilla Python you could write (unless you tap into C extensions). The benefit is that if you really need to do extra optimizations you can effectively start being more aggressive about your memory usage by refactoring your existing code-base, instead of having to throw the code-base away altogether to move to a completely different language (the famous Go->Rust migration that Discord did comes to mind).
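As a small, hypothetical sketch of that trade-off (the function and names are mine, not from any real code-base): cloning the winning value out of an iterator sidesteps lifetime bookkeeping entirely:

```rust
// Returning an owned String (via .cloned()) means no lifetime annotations
// and no reference into `names` escaping the function — at the cost of
// one extra copy, which is usually fine for non-hot-path CLI code.
fn longest_name(names: &[String]) -> String {
    names
        .iter()
        .max_by_key(|n| n.len())
        .cloned()
        .unwrap_or_default()
}
```

The purist version would return a `&str` tied to the input's lifetime; the cloning version is marginally slower and considerably easier to maintain.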

After the first few weeks, during which you'll basically just have fist fights with the borrow checker, you will eventually understand how things work (at least references and move semantics) and you will finally be able to tap into a language that's much more powerful than anything Python could ever offer.

And again - let's please stop talking just about memory safety, Rust has much more to offer than just a robust memory model (which, as Python people writing Pipeline code, we don't really need: I don't know about you but I never managed to cause a segfault by running a Python CLI, no matter how funky and spaghetti it was..).

So, let's have a look at some of these language features...

0. (true) Types

A Python developer annotating their code-base

In recent years we have seen Python grow a lot in terms of support for type annotations. Personally, I can't imagine contributing to non-type-hinted code bases anymore without feeling like I'm touching legacy code written in the last millennium. Type annotations help immensely in catching simple bugs due to mismatched return types and are great at removing useless doc-strings that just hint at which type something should be.

The problem with 'type annotations' in Python is that they're just that: annotations. Annotations bolted on top of a language that has been preaching the "if it quacks it's a 🦆" idiom for a very long time. People who have been in Python-land long enough will tell you that type hints and even Generic and TypeVar are just the natural evolution of duck typing into "Structural sub-typing", aka "static duck typing" (see https://peps.python.org/pep-0544). And while I agree that yes, there's a lot to be gained from opening our minds to static typing in Python, it's not as much as you could gain by going all in on a language that really puts types at the core of its design.

And that language is Rust.

Unlike Python, where most things are really just "an object", Rust has a lot of different types that are better suited for different uses. A Rust Enum is not just a class in disguise.. A Rust Trait is not just an ABC class in disguise (* cough * protocols * cough *).. A Rust Struct is again, not just a (data)class in disguise.. A Rust Result is not just an exception class in disguise..

This deep difference allows the compiler to throw beautifully meaningful errors when you are misusing types, and also allows it to statically "prove" a lot more things than you would ever be able to prove via mypy or pyright.
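As a tiny sketch of what that proving power looks like in practice (FrameNumber and Fps are invented names for illustration):

```rust
// Two newtypes wrapping the same primitive are completely distinct types
// to the compiler: mixing them up is a hard compile error, not a linter
// hint you can ignore.
struct FrameNumber(u32);
struct Fps(u32);

fn advance(frame: FrameNumber, by: u32) -> FrameNumber {
    FrameNumber(frame.0 + by)
}

// advance(Fps(24), 1); // <- would not compile: expected `FrameNumber`, found `Fps`
```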

1. Enums

Modeling states via Enums allows you to explicitly define all cases that you might encounter, without runtime surprises. Rust Enums (aka Algebraic Data Types) can also carry payloads so they're much more flexible and expressive than Python's enums, especially when it comes to pattern matching.

Have a look at this snippet, for example:

fn handle_key_event(&mut self, key_event: KeyEvent) -> color_eyre::Result<()> {
    match key_event.code {
        KeyCode::Char('Q') => self.exit(),
        KeyCode::Backspace => {
            // [...]
        }
        KeyCode::Char(char) => {
            // [...]
        }
        KeyCode::Tab | KeyCode::Down => {
            let next_index = self
                .highlighted_item_index
                .saturating_add(1)
                .min(self.search_items.len() - 1);

            self.highlighted_item_index = next_index;
        }
        KeyCode::BackTab | KeyCode::Up => {
            let next_index = self.highlighted_item_index.saturating_sub(1);
            self.highlighted_item_index = next_index;
        }
        _ => {}
    }
    Ok(())
}


This is a sample of a Ratatui app where we're pattern matching against key_event.code: note how sometimes the enum variant that we match against also carries a payload (like KeyCode::Char(char) or KeyCode::Char('Q')) and sometimes it doesn't (KeyCode::Tab).

I don't have the luxury of testing Python 3.10 (which introduces the match keyword) any time soon, so I tried to write something in Python that implements the same ideas as the Rust counterpart, expecting to prove that it would be a nest of if statements.. but it's actually not trivial.

I can imagine that with a bit more time you would probably come up with two different sets of classes/enums to separate the key events that map to an alphabetical character (e.g. pressing alphabet letters) from the key events that don't map to any alphabetical character (like up/down arrows).

You could also try to just always carry the underlying value received (whether it's an escape code or a simple character), but then you'd end up at a much lower level than just worrying about event types and the potential characters they carry. And you would probably lose the ability to simply match against a "BackTab" event, because you'd probably have to model it either as two events, or accept that a single event can carry more than one value at a time (e.g. for "BackTab", its .value would be a list containing the Shift and the Tab itself). And then again.. who really cares about the actual underlying value that an 'up/down/tab/ctrl' represents? At this level, I'm just interested in catching the UX pattern of "pressing Shift+Tab on a keyboard": I don't really care what the escape codes for 'Shift' and 'Tab' are.

The final remark is that even after all of this work, you will still be missing one thing: the ability to know that you have effectively matched all possible events. Rust Enums give you that at compile time.

Sometimes this is a deal breaker: imagine that you thought you had effectively covered all events, but you discover at runtime that there's another event that is better suited for what you needed, or that changes how other events should be interpreted (like a modifier key), etc.
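To make that concrete, here's a minimal sketch with a hypothetical Event enum of my own (not Ratatui's KeyCode):

```rust
// If a new variant is added to Event later, every `match` over it stops
// compiling until the new case is handled (or explicitly ignored).
enum Event {
    KeyPress(char),
    Tab,
    BackTab,
}

fn describe(event: &Event) -> &'static str {
    // Delete any arm below (without adding a `_` catch-all) and the
    // compiler rejects the program with E0004: non-exhaustive patterns.
    match event {
        Event::KeyPress(_) => "character",
        Event::Tab => "tab",
        Event::BackTab => "shift+tab",
    }
}
```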

And while I could come up with many more examples where knowing that you covered all cases is a lifesaving feature, I think it's time to move on..

2. Iterators

Steve Porcaro trying to write a clean loop in Python

Iterators are another language feature that I absolutely love. Python's closest equivalents are list comprehensions and generators, but they don't really scale well if you need to do multiple operations on the iterated item. At that point, you're better off with just normal loops. The thing with loops, though, is that they tend to become quite nested if you need to check a lot of conditions inside them.

In Rust, instead, iterators chain together beautifully and allow you to express complex queries as a composition of cascading operations.

For example, this Rust snippet:

let current_highlighted_item = self
    .search_items
    .iter()
    .enumerate()
    .find(|(i, _item)| i == &current_highlighted_index)
    .map(|(_i, item)| item);

would translate to something like this in Python:

try:
    current_highlighted_item = [
        item for i, item in enumerate(all_items)
        if i == current_highlighted_index
    ][0]
except IndexError:
    current_highlighted_item = None

But in the process we have lost the short-circuiting nature of find (https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.find) and we had to manually remember to set the value to None.

So really, a more experienced dev might realise it's time to write this:

current_highlighted_item = None
for i, item in enumerate(all_items):
    if i == current_highlighted_index:
        current_highlighted_item = item

or this, to remove the extra nesting:

current_highlighted_item = None
for i, item in enumerate(all_items):
    if i != current_highlighted_index:
        continue
    current_highlighted_item = item

Now imagine that you have to add a few more conditions, like this:

current_highlighted_item = None
for i, item in enumerate(all_items):
    if i != current_highlighted_index or \
            some_sub_str not in item or \
            some_str_to_exclude in item:
        continue
    current_highlighted_item = item

The readability of the Python example starts to scale poorly, and I had to add \ line continuations just to make the conditions easier to see.

On the other hand, in Rust we'd have just a few more lines added to our already concatenated operations:

let current_highlighted_item = self
    .search_items
    .iter()
    .enumerate()
    .find(|(i, _item)| i == &current_highlighted_index)
    .map(|(_i, item)| item)
    .filter(|item| !item.contains(some_str_to_exclude))
    .filter(|item| item.contains(some_sub_str));

On top of that, in Rust current_highlighted_item will be an Option&lt;Item&gt;, so we will be forced to check whether it's None or Some(Item): there's no way we can forget about that, because the compiler will complain. And guess what, that's because Option is, in fact, an Enum. In Python, it will be your responsibility (or your type checker's, if your team uses one) to warn you that current_highlighted_item is in fact an... Optional.
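Here's a minimal sketch of what that forced check looks like (label_at is a made-up helper, not code from the app above):

```rust
// slice::get returns Option<&T> instead of panicking (or returning a
// NoneType that explodes three functions later): the caller must decide
// right here what "missing" means.
fn label_at(items: &[String], index: usize) -> String {
    match items.get(index) {
        Some(item) => item.clone(),
        None => String::from("<nothing highlighted>"),
    }
}
```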

3. Handling Nones and Errors

Your coworkers trying to find that one function that returned None instead of raising

This leads me to the other huge benefit of Rust: error handling. Countless books have been written on this, so I will just focus on the benefits for somebody used to Python.

I play a little game in my team which consists of keeping track of the last time we hit an AttributeError: 'NoneType' object has no attribute 'thing' in production code. And guess what - it happens a lot. It's probably the biggest source of bugs, followed by KeyErrors. We've improved a lot on that, and we have learned that it's better to raise instead of returning None, but it still happens.

This is because in Python we generally use the

"ask for forgiveness, not permission"

approach. We really just assume we're going to be on the happy path most of the time, and when we're not.. well, in that case you get a traceback and a support ticket for pipeline.

Now imagine that instead you could be forced to always correctly handle the case where that thing might actually not be there or that operation might fail in some cases. Imagine you could do this without having to read the source code of the function you're calling every single time.

This is the power of Rust error handling. By spending a bit more time being forced to do error handling while writing the code, we spend less time figuring out which part of our code-base is the culprit that returned None when running the code.

And if you don't really want to do it, even in Rust-land.. okay, go ahead and .unwrap() like there's no tomorrow: but it will come back to bite you, trust me.
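As a sketch of what this looks like in practice (parse_frame_range is an invented example, not from a real code-base): the Result in the signature advertises the failure mode, and the ? operator propagates it without any boilerplate:

```rust
use std::num::ParseIntError;

// Callers cannot reach the (u32, u32) without acknowledging that parsing
// can fail: the compiler forces a match / unwrap / `?` at every call site.
fn parse_frame_range(s: &str) -> Result<(u32, u32), ParseIntError> {
    let mut parts = s.splitn(2, '-');
    let start: u32 = parts.next().unwrap_or("").trim().parse()?;
    let end: u32 = parts.next().unwrap_or("").trim().parse()?;
    Ok((start, end))
}
```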

4. Closures

Lambdas are not really about lambda calculus in Python..

Closures are also lovely: they allow you to write functions that are very flexible, so that people can use them in ways you can't anticipate or model explicitly.

In Python we're already used to passing functions around as arguments to other functions, but Closures in Rust are more powerful because they're more than just function pointers. Rust Closures can access variables in the enclosing scope, which is pretty useful. The closest equivalent in Python would be nested functions that use nonlocal. In Rust, Closures are split into Fn, FnMut and FnOnce, depending on whether they can be called multiple times and whether they mutate (or consume) the variables they capture, which is a nice boost in expressiveness that we don't get in Python.
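A minimal sketch of that split (call_twice and the counter are invented for illustration):

```rust
// `call_twice` demands FnMut because it invokes `f` more than once and
// allows it to mutate whatever it captured from the enclosing scope.
fn call_twice<F: FnMut() -> u32>(mut f: F) -> (u32, u32) {
    (f(), f())
}

fn counter_demo() -> (u32, u32) {
    let mut count = 0;
    // This closure mutably borrows `count`, so it is FnMut: a closure
    // that only read `count` would be Fn, and one that moved a captured
    // value out of itself could only be FnOnce.
    call_twice(|| {
        count += 1;
        count
    })
}
```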

Without Closures, iterators in Rust would be boring, and sorting wouldn't be so powerful.

Say that I need to do something like this:

def main():
    num_of_sort_operations = 0

    def my_sorting_func(item):
        nonlocal num_of_sort_operations
        num_of_sort_operations += 1
        return item.width

    my_items = [Item(1920, 1080), Item(1280, 720)]
    my_items = sorted(my_items, key=my_sorting_func)


In Rust I don't need to declare my_sorting_func separately, since I can use a closure:

fn main() {
    let mut num_sort_operations = 0;
    let mut my_items = vec![Item::new(1920, 1080), Item::new(1280, 720)];
    // sort_by_key sorts in place and returns (), so no reassignment needed
    my_items.sort_by_key(|item| {
        num_sort_operations += 1;
        item.width
    });
}

Note that for this example I also had to create a main() function in Python, because you can only use nonlocal to access variables in a non-global scope.

In Python we have the lambda sugar to help define anonymous functions, but lambda scales incredibly poorly as soon as you need more than one expression, and while it can read enclosing variables, it can't rebind them the way a nonlocal nested function (or a Rust FnMut closure) can.

5. Generics (and Traits)

Plato convincing Aristotle that his functions are not Generic enough..

I feel there are a lot more features I'd like to mention, but I don't want to bore you, my dear reader. I think Generics are at least worth mentioning because of a very common use case: data serialization.

Imagine you're trying to write a function that takes "something that can be serialized to JSON" and writes it to stdout. How do you write the requirement for "something that can be serialized to JSON"? In modern Python, you could try to use a Protocol, and invent a method that the thing you are receiving has to implement in order to be considered "Serializable". Oh but wait, now how do you deal with primitives that are natively serializable, like dicts and lists? You would need an if case to check isinstance(thing, (list, dict, str, int)), or something like that.. ah, if only we could write this:

fn write_thing_to_stdout<T>(thing: T)
where
    T: Serializable,
{
    // stuff..
}

I don't care what thing is really.. as long as it implements the Serializable trait then it's fine for me. Welcome to the world of generics!

Here's a full working example:

use serde::Serialize; // serde 1.0.196
use serde_json; // 1.0.113

fn write_to_stdout<T>(thing: T)
where
    T: Serialize,
{
    let value = serde_json::json!(thing);
    println!("{}", value);
}

#[derive(Serialize)]
struct MyThing {
    some_value: usize,
}

fn main() {
    write_to_stdout(MyThing { some_value: 1 });
}

On deploying Rust CLIs

Writing software is nothing without a strategy for deploying it, so let's have a look at that.

First, you don't have to worry about which version of the Python interpreter you have to target and how to get it on the $PATH if it doesn't come with the base system install/image. Second, you are free to choose the third-party dependencies you want to use without always having to figure out a smart hack to bring them into sys.path / $PYTHONPATH in a sane way. Even if you didn't know how painful and fragmented the Python packaging ecosystem is these days, it doesn't take much to find out: just try to choose your build backend and you'll have to read through countless blog-posts to figure out why the language doesn't provide one by default, and why setuptools is going in the direction it's going. Yes, there is value in learning how to write code that just relies on Python's standard library, but sometimes it's just not possible. Try to write any kind of logic that requires HTTP requests using just urllib: you can, sure - but it's a whole different story than just pulling in requests. Not to mention more sophisticated things like authentication and cryptography (for example, dealing with JSON Web Tokens without using jwt).

With Rust there are other challenges, but they are more rewarding because there are a lot fewer variables at runtime, so you just need to plan correctly and then act: the puzzle is much simpler.

Mostly, you need to focus on two problems:

  • A) How to get your CLI onto the $PATH (this might or might not be a problem at all)
  • B) How to make sure the GLIBC version on the host that will run your CLI is at least as new as the one on the host you compile it on.

A) is generally a very simple problem, because every VFX pipeline that wasn't written yesterday already has a way to tweak environment variables, and so you might already have a path on the filesystem where you can drop your CLI and expect things to "just work". This is less complex than putting a whole interpreter on the $PATH, because Rust CLIs are pretty much self-contained and have few runtime dependencies (more on this later), while Python requires at least OpenSSL (at least, if you compile it from scratch). So whether it's about writing a new Rez package or using some other bespoke system, this is not hard to achieve.

B) is generally as simple as checking ldd --version on your hosts. I have successfully compiled Rust CLIs on CentOS7.9 (glibc 2.17) and run them on Rocky8.8 (glibc 2.28). I'm sure things might get more complex, but in my experience they haven't. If you want to play with musl, jemalloc and friends you can also do that, and try to achieve a binary that requires no shared libraries at all.

So really, the runtime footprint of your CLI will be much smaller and more flexible than its Python counterpart. Well, except for the file size: Rust binaries tend to grow a bit (in the order of MBs), but I think we can afford those extra bytes these days.

On fearless refactoring

I have done 300-line refactors in Python where I could never be sure whether I had messed up or not until I deployed the code in production. Yes, these days we have mypy and pyright and things are much better. But given the same developer, the chances of a safe refactor will be immensely higher in Rust.

In fact, this has been my latest refactor in Rust, after splitting a CLI into 2 CLIs and a library:

$ git diff HEAD..v1.11.1 --shortstat
 31 files changed, 3081 insertions(+), 3305 deletions(-)

Guess how many bugs I introduced after such a deep rewrite? Zero.

Guess how long it took? 2 days of normal work.

On the fact that shared libraries are never really that shared

Rust embraces static compilation as a sane default.

This means that at compile time our CLI will pull in all the source code of all the dependencies it requires in its Cargo.toml, and compile it as a single chunk. Which means that, by default, you will not need to link against much more than GLIBC.

In fact in most cases these are gonna be the only shared libs you'll ever see used by your CLI:

$ ldd ./target/release/your-cli
        linux-vdso.so.1 (0x00007ffde02fb000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f7978b76000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00007f7978a8b000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007f7978622000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f7978bde000)

But wait: I'm NOT saying that "shared libraries are a horrible idea". It's still possible to do custom dynamic linking in Rust. And there are plenty of good use cases for them: they allow you to distribute software and update it later without having to recompile it, which is a critical requirement in many areas, and can be very important for security.

And yet, there are also plenty of cases where shared libraries are not really shared across that many binaries; in fact, what I've seen after years of VFX experience is that each app vendors its own copy anyway (cough cough Bifrost I'm talking about you and Maya not even sharing the same USD lib cough). Especially for younger libs, where APIs are moving fast and stable ABIs are just a hallucination, shared libraries don't make that much sense.

To address some of the shortcomings of apps that vendor shared libraries, you have to enter the realm of manually crafting RPATHs via patchelf or LD_LIBRARY_PATH hacks, depending on how lucky you get with the libs you use, or recompiling things manually to try to simplify the shared libraries in your VFX environment.

And even then, you still get the burden of..

  • Potential segfaults
  • A more complex deployment (because you have a new thing to carefully deploy: runtime dependencies)
  • Increased runtime fragility (because you give up on doing a lot of checks that could be done at compile time)
  • Having to maintain a stable ABI (if you're the creator of a library, that's a lot of work that might slow down refactors)

..without really gaining that much at runtime. It's a trade-off, and I think that the benefits you get with the Rust approach, at least in terms of simplified deployment, cannot be overstated.

So, going back to our CLI: fewer crashes, fewer headaches and fewer things to figure out post-deployment. If you have a good reason for using shared libraries, you wouldn't be here reading this article: you would just go ahead and use them. But let's stop thinking, for a moment, that there's no sane alternative to this model.

Now for the bad part

So Rust is effectively pretty appealing. And if there was one request that I could make to the VFX community, it's this:

Let's write more Rust and less C++, please!

That being said, there are things you might NOT like in Rust.. especially coming from Python:

  • There's no REPL: this is a bummer, because a REPL is great to quickly test out ideas

  • async: ever heard of "What color is your function"? You will, multiple times. (and Python has the same problem really)

  • Complexity: Writing robust software is complex, and Rust doesn't try to pretend that complexity is not there. The standard library is quite high level compared to something like C, but it's not so high level to become a leaky abstraction, which I appreciate deeply.


I don't really gain anything from trying to sell Rust to anyone, but if I want to be the change that I want to see in the (VFX) world then I have to at least ask people in VFX to consider using Rust too in their tooling.

In my experience, rewriting some core bits in Rust has really simplified how I write and deploy software at my $dayjob, and even how I approach Python code-bases. And I have already reached the "maintenance stage" of most tools I rewrote in Rust, so I'm not just writing this during the initial falling-in-love stage where you just write one feature after another. In fact, those tools run hundreds of times every single day. All the bugs I've had with the Rust CLIs I wrote have been to do with either filesystem access or faulty business logic. In other words: bad ideas made into bad code, nothing to do with the language itself.

My generation of Pipeline people didn't really choose the language to work with: we "were forced to" because that's what the DCCs decided to embed, and we needed to write native pipeline integrations in DCCs. So here you go: Python everywhere, even where it doesn't belong.

C++ is, for good or worse, the de-facto standard for writing integrations that require more performance (like Maya or Nuke plugins), because, again, that's what we inherited, and because shared libraries are a cheap way to extend an existing program. That won't change any time soon, I guess, or at least not until something like WASM becomes much more affordable, popular and easy to integrate.

But the third language we can, and should, choose.

We can explore alternatives to C++ and Python and, you know, come back from these exploratory trips a bit changed.. and maybe, maybe, realize that the giant Makefile we have is not the best way to build the-library-everybody-relies-on. And that there might be better alternatives to CMake, at least conceptually.. and that deploying a CLI doesn't have to be a life-changing experience: maybe it can be boring and just work.


The image for this blogpost was rendered in Blender, using the following 3D assets: