Fuzzing Sequoia-PGP

28/05/20 — capitol


Sequoia is a promising new OpenPGP library that’s written in Rust. As Rust has excellent interoperability with C, it also exposes itself as a C library in the sequoia_openpgp_ffi crate. This is how you would call the library from other programming languages, since C often acts as the lowest common denominator.
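As a minimal sketch of that pattern (a toy function, not Sequoia's actual API), a Rust crate built as a cdylib exports C-callable symbols like this:

```rust
// Compiled as a cdylib, this symbol is callable from C as:
//   uint32_t add_u32(uint32_t a, uint32_t b);
// Any language with a C FFI (Python, Java, Go, ...) can then load it.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    // wrapping_add keeps the C-style two's-complement overflow behaviour
    a.wrapping_add(b)
}

fn main() {
    assert_eq!(add_u32(2, 3), 5);
}
```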

As Sequoia is making progress towards a 1.0 release, I thought that it would be time to help out by trying to discover bugs in it by fuzzing, a technique where you generate random input to functions and observe the execution flow in order to detect problems.

When attacking a system it’s often useful to determine where the trust boundaries of the system lie, and the ffi crate is one of those boundaries. Parser code is a large source of bugs; it’s very hard to foresee all the different types of invalid input when writing a parser, so pgp_packet_parser_from_bytes looked like a prime candidate for fuzzing.

The Setup

I decided to use the cargo fuzz framework, which turned out to be a good choice: it’s very simple to get started with.

Initial setup:

cargo install cargo-fuzz
git clone https://gitlab.com/sequoia-pgp/sequoia.git
cd sequoia
cargo fuzz init

This creates the setup you need for fuzzing and adds a skeleton file in fuzz/fuzz_targets/fuzz_target_1.rs

I implemented the target like this:

use libfuzzer_sys::fuzz_target;

extern crate sequoia_openpgp_ffi;

fuzz_target!(|data: &[u8]| {
    if !data.is_empty() {
        let _ = sequoia_openpgp_ffi::parse::pgp_packet_parser_from_bytes(
            core::option::Option::None,
            &data[0],
            data.len(),
        );
    }
});

The framework generates random data and uses it to call the fuzz_target function. I then simply pass that data on to the sequoia_openpgp_ffi::parse::pgp_packet_parser_from_bytes function.

I started fuzzing like this:

cargo fuzz run fuzz_target_1 -- -detect_leaks=0

The framework also detects memory leaks, and there was some output related to that. But that’s not what I’m looking for today, so I disabled it with -- -detect_leaks=0.


It seems like I was the first person to run a fuzzer against that function, and I found the following:

  1. An integer overflow on a shift left
  2. An attempt to parse invalid UTF-8
  3. An attempt to read nonexistent data
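All three correspond to panicking operations for which the standard library offers non-panicking counterparts; a small self-contained sketch of the defensive idioms (not Sequoia's actual fixes):

```rust
fn main() {
    // 1. Shift left: checked_shl returns None on an over-wide shift,
    //    where `1u32 << 40` would panic in debug builds.
    assert_eq!(1u32.checked_shl(40), None);

    // 2. Invalid UTF-8: from_utf8 reports an error instead of panicking.
    assert!(std::str::from_utf8(&[0xff, 0xfe]).is_err());

    // 3. Reading nonexistent data: .get() returns None where `buf[10]`
    //    would panic with an out-of-bounds index.
    let buf = [1u8, 2, 3];
    assert_eq!(buf.get(10), None);
}
```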

The framework is very explicit about which input caused the broken behaviour, so it’s trivial to take that input and build a unit test from it. This helps the maintainers triage and fix the problems.
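As a sketch of that pattern, with a stand-in parser and made-up artifact bytes (a real regression test would feed the saved bytes to pgp_packet_parser_from_bytes instead):

```rust
// Stand-in for the real parser entry point.
fn parse(data: &[u8]) -> Result<usize, &'static str> {
    if data.is_empty() {
        return Err("empty input");
    }
    Ok(data.len())
}

// cargo fuzz writes crashing inputs to fuzz/artifacts/fuzz_target_1/;
// copying the bytes into a test pins the fix in place. (Bytes made up.)
#[test]
fn fuzzer_artifact_does_not_crash() {
    let data: &[u8] = &[0xc4, 0x01, 0xff];
    assert!(parse(data).is_ok());
}

fn main() {
    assert!(parse(&[0xc4, 0x01, 0xff]).is_ok());
    assert!(parse(&[]).is_err());
}
```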

It can also try to minimize the test input automatically, to keep the tests small, but that functionality seemed to be broken.

Security Implications

Since this is Rust, these panics don’t cause undefined behaviour as they might in, for example, C. These findings will therefore at most cause a denial of service due to the process crashing.
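One caveat: a panic that unwinds across the C boundary would itself be undefined behaviour, so FFI wrappers typically catch panics and turn them into error codes. A sketch of the idea (not Sequoia's actual wrapper code):

```rust
use std::os::raw::c_int;
use std::panic;

// Stand-in for an internal parser that panics on bad input.
fn parse(data: &[u8]) -> c_int {
    if data.is_empty() {
        panic!("unexpected end of input");
    }
    data[0] as c_int
}

// Unwinding out of an extern "C" function is undefined behaviour, so the
// wrapper converts any panic into an error code before it reaches C.
#[no_mangle]
pub extern "C" fn ffi_parse(ptr: *const u8, len: usize) -> c_int {
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    panic::catch_unwind(|| parse(data)).unwrap_or(-1)
}

fn main() {
    let buf = [42u8];
    assert_eq!(ffi_parse(buf.as_ptr(), 1), 42);
    assert_eq!(ffi_parse(buf.as_ptr(), 0), -1); // the panic is contained
}
```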


Big thanks to the Sequoia team, who were very responsive when I reported the issues, and especially to Neal.

Packaging Rust for Debian - part II

26/05/20 — capitol


Let’s do another dive into packaging Rust for Debian with a slightly more complicated example.

One great tool in the packager’s toolbox is cargo-debstatus. Running it in the root of your crate gives you a list of your dependencies, together with information about their packaging status in Debian.

For ripasso, part of the dependency tree looks like this at the time of writing. ripasso depends on gpgme, which depends on a number of other Rust libraries (and a number of native ones which aren’t shown here).

├── gpgme v0.9.2
│   ├── bitflags v1.2.1 (in debian)
│   ├── conv v0.3.3
│   │   └── custom_derive v0.1.7
│   ├── cstr-argument v0.1.1 (in debian)
│   ├── gpg-error v0.5.1 (in debian)
│   ├── gpgme-sys v0.9.1
│   │   ├── libc v0.2.66 (in debian)
│   │   └── libgpg-error-sys v0.5.1 (in debian)
│   ├── libc v0.2.66 (in debian)
│   ├── once_cell v1.3.1 (in debian)
│   ├── smallvec v1.1.0 (in debian)
│   └── static_assertions v1.1.0

One of the dependencies is static_assertions, which is actually already packaged in Debian, but as version 0.3.3, and we need 1.1.0. Let’s investigate how to fix this one.

In order to verify that we won’t break any other package by upgrading the existing Debian package to 1.1.0 we run list-rdeps.sh.

$ ./dev/list-rdeps.sh static-assertions
APT cache is a bit old, update? [Y/n] n
Versions of rust-static-assertions in unstable:
  librust-static-assertions-dev                    0.3.3-2

Versions of rdeps of rust-static-assertions in unstable, that also exist in testing:
  librust-lexical-core-dev                         0.4.3-1+b1       depends on     librust-static-assertions-0.3+default-dev (>= 0.3.3-~~),

Here we see that the lexical-core package depends on static-assertions and a quick git clone and compilation confirms that it doesn’t compile with version 1.1.0.

We can take this one step further:

$ ./dev/list-rdeps.sh lexical-core
APT cache is a bit old, update? [Y/n] n
Versions of rust-lexical-core in unstable:
  librust-lexical-core+correct-dev                 0.4.3-1+b1
  librust-lexical-core+default-dev                 0.4.3-1+b1
  librust-lexical-core-dev                         0.4.3-1+b1
  librust-lexical-core+dtoa-dev                    0.4.3-1+b1
  librust-lexical-core+grisu3-dev                  0.4.3-1+b1
  librust-lexical-core+ryu-dev                     0.4.3-1+b1
  librust-lexical-core+stackvector-dev             0.4.3-1+b1

Versions of rdeps of rust-lexical-core in unstable, that also exist in testing:
  librust-lexical-core+correct-dev                 0.4.3-1+b1       depends on     librust-lexical-core+table-dev (= 0.4.3-1+b1),
  librust-lexical-core+default-dev                 0.4.3-1+b1       depends on     librust-lexical-core+correct-dev (= 0.4.3-1+b1), librust-lexical-core+std-dev (= 0.4.3-1+b1),
  librust-nom+lexical-core-dev                     5.0.1-4          depends on     librust-lexical-core-0.4+default-dev,
  librust-nom+lexical-dev                          5.0.1-4          depends on     librust-lexical-core-0.4+default-dev,

And we see that nom depends on lexical-core.

$ ./dev/list-rdeps.sh nom
APT cache is a bit old, update? [Y/n] n
Versions of rust-nom in unstable:
  librust-nom+default-dev                          5.0.1-4
  librust-nom-dev                                  5.0.1-4
  librust-nom+lazy-static-dev                      5.0.1-4
  librust-nom+lexical-core-dev                     5.0.1-4
  librust-nom+lexical-dev                          5.0.1-4
  librust-nom+regex-dev                            5.0.1-4
  librust-nom+regexp-dev                           5.0.1-4
  librust-nom+regexp-macros-dev                    5.0.1-4
  librust-nom+std-dev                              5.0.1-4

Versions of rdeps of rust-nom in unstable, that also exist in testing:
  librust-cexpr-dev                                0.3.3-1+b1       depends on     librust-nom-4+default-dev, librust-nom-4+verbose-errors-dev,
  librust-dhcp4r-dev                               0.2.0-1          depends on     librust-nom-5+default-dev (>= 5.0.1-~~),
  librust-iso8601-dev                              0.3.0-1          depends on     librust-nom-4+default-dev,
  librust-nom-4+lazy-static-dev                    4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3),
  librust-nom-4+regex-dev                          4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3),
  librust-nom-4+regexp-macros-dev                  4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3), librust-nom-4+regexp-dev (= 4.2.3-3),
  librust-nom-4+std-dev                            4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3), librust-nom-4+alloc-dev (= 4.2.3-3),
  librust-nom+default-dev                          5.0.1-4          depends on     librust-nom+std-dev (= 5.0.1-4), librust-nom+lexical-dev (= 5.0.1-4),
  librust-nom+regexp-macros-dev                    5.0.1-4          depends on     librust-nom+regexp-dev (= 5.0.1-4),
  librust-nom+std-dev                              5.0.1-4          depends on     librust-nom+alloc-dev (= 5.0.1-4),
  librust-pktparse-dev                             0.4.0-1          depends on     librust-nom-4+default-dev (>= 4.2-~~),
  librust-rusticata-macros-dev                     2.0.4-1          depends on     librust-nom-5+default-dev,
  librust-tls-parser-dev                           0.9.2-3          depends on     librust-nom-5+default-dev,
  librust-weedle-dev                               0.10.0-3         depends on     librust-nom-4+default-dev,

And a lot of things depend on nom.

So in order to package static_assertions so that we can package gpgme we can choose one of three different strategies:

  1. Package both versions of static_assertions
  2. Upgrade lexical-core, nom and everything nom depends on to newer versions
  3. Patch version 0.4.3 of lexical-core to use a newer version of static_assertions

Packaging both versions of static_assertions

This is a working strategy, but packaging both means that we need to create a new package for version 0.3 of static_assertions. New packages in Debian go through the NEW queue, where a member of the ftpmaster team needs to manually verify that it doesn’t contain any non-free software.

Therefore we will not choose this strategy.

Upgrading lexical-core, nom and everything nom depends on to newer versions

There exists a new version of lexical-core that depends on static_assertions 1, but the only version of nom that works with it is a beta of nom 6, and upgrading to that version would mean patching all the incompatibilities in the applications that use nom.

That is a lot of non-trivial work, especially as we would want to upstream the patches so that the maintenance burden doesn’t grow too much.

Patching version 0.4.3 of lexical-core to use a newer version of static_assertions

It turns out that there is an upgrade commit in lexical-core that applies cleanly to version 0.4.3. This is what we will use.

So we take that commit as a patch and place it in the patches directory, together with a series file that lists the order in which the patches should be applied.
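For a quilt-format source package, that amounts to two files under debian/patches/ (the patch filename here is illustrative):

```
debian/patches/
├── series                          # patch filenames, one per line, in apply order
└── static-assertions-1.0.patch     # the upstream upgrade commit, e.g. exported with git format-patch
```

In this case the series file contains the single line static-assertions-1.0.patch, and the patch is applied automatically when the source package is built.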

That enables us to upgrade the static_assertions package to version 1.1.0 without breaking any other package.

How we did translations in Rust for Ripasso

30/04/20 — capitol


One core principle of writing user friendly software is that the software should adapt to the user, not the user to the software. A user shouldn’t need to learn a new language in order to use the software for example.

Therefore internationalization is a usability feature that we shouldn’t ignore.

A number of frameworks for translations exist, but GNU Gettext is one of the most widely used, so that is what we will use.

Solving that problem requires solving a few sub-problems:

  • Extracting strings to translate from the source code
  • Updating old translation files with new strings
  • Generating the binary translation files
  • Using the correct generated translation

Extracting translatable strings from Rust source code

The Gettext package contains a number of helper programs, among them xgettext which can be used to extract the strings from the Rust sources.

One drawback is that Rust isn’t on the list of languages that xgettext can parse. But Rust is similar enough to C that we can use the C parser, as long as we don’t have any multiline strings.

In Ripasso we extract it like this:

xgettext cursive/src/*rs -kgettext --sort-output -o cursive/res/ripasso-cursive.pot
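The -kgettext flag tells xgettext to extract the first argument of any function named gettext, so call sites in the Rust sources look roughly like this (a simplified sketch, not Ripasso's exact code):

```rust
// With -kgettext, xgettext pulls the string literal "Failed to decrypt"
// into the .pot template. At runtime the real function would look the
// string up in the catalog; this stand-in just returns it untranslated.
fn gettext(msgid: &str) -> String {
    msgid.to_string()
}

fn main() {
    println!("{}", gettext("Failed to decrypt"));
}
```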

Updating old translation files with new strings

This isn’t Rust specific in any way, but it needs to be done, so we include it as a step. We use the Gettext program msgmerge:

msgmerge --update cursive/res/sv.po cursive/res/ripasso-cursive.pot

The .pot file is the template file that contains all the strings, and there will be one .po per language that contains the translations for that language.

A translator can open the .po file in a translation program, for example poedit, and translate it.

Generating the binary translation files

We do this with a third utility program from Gettext, msgfmt, which we invoke from build.rs. It reads the .po files and generates binary .mo files.

This code from Ripasso is a bit verbose/ugly, but it gets the job done.

use glob::glob;
use std::process::Command;

fn generate_translation_files() {
    // build.rs runs from inside the target directory;
    // create target/translations/ and target/translations/cursive/
    let mut dest_path = std::env::current_exe().unwrap();
    dest_path.pop();
    dest_path.push("translations");
    print!("creating directory: {:?} ", &dest_path);
    if let Err(err) = std::fs::create_dir(&dest_path) {
        println!("error: {:?}", err);
    }
    dest_path.push("cursive");
    print!("creating directory: {:?} ", &dest_path);
    if let Err(err) = std::fs::create_dir(&dest_path) {
        println!("error: {:?}", err);
    }

    // find all .po files below the source directory
    // (the glob crate is a build-dependency)
    let mut dir = std::env::current_exe().unwrap();
    dir.pop();
    let translation_path_glob = dir.join("**/*.po");
    let existing_iter = glob(&translation_path_glob.to_string_lossy()).unwrap();

    for existing_file in existing_iter {
        let file = existing_file.unwrap();
        // turn e.g. sv.po into sv.mo (assumes two-letter language codes)
        let mut filename =
            format!("{}", file.file_name().unwrap().to_str().unwrap());
        filename.replace_range(3..4, "m");

        println!(
            "generating .mo file for {:?} to {}/{} ",
            &file,
            dest_path.display(),
            filename
        );
        let res = Command::new("msgfmt")
            .arg(format!("--output-file={}/{}", dest_path.display(), filename))
            .arg(format!("{}", &file.display()))
            .output();

        if let Err(err) = res {
            println!("error: {:?}", err);
        }
    }
}
The .mo files will end up in target/translations/cursive/, one file per language.

Using the correct generated translation

The best crate I found was gettext. There were others, but they required unstable Rust features and were therefore unusable, since the various Linux distributions use stable Rust to compile their packages.

During runtime, the translations live inside a lazy_static variable:

lazy_static! {
    static ref CATALOG: gettext::Catalog = get_translation_catalog();
}

But getting the correct translation into that variable can be a bit tricky. Here is how we do it in Ripasso:

fn get_translation_catalog() -> gettext::Catalog {
    let locale = locale_config::Locale::current();

    let mut translation_locations = vec!["/usr/share/ripasso"];
    if let Some(path) = option_env!("TRANSLATION_INPUT_PATH") {
        translation_locations.insert(0, path);
    }
    if cfg!(debug_assertions) {
        translation_locations.insert(0, "./cursive/res");
    }

    for preferred in locale.tags_for("messages") {
        for loc in &translation_locations {
            // LanguageIdentifier comes from the unic_langid crate;
            // the exact accessor for the language subtag depends on the
            // crate version
            let langid_res: Result<LanguageIdentifier, _> =
                format!("{}", preferred).parse();

            if let Ok(langid) = langid_res {
                let file = std::fs::File::open(format!(
                    "{}/{}.mo",
                    loc,
                    langid.language()
                ));
                if let Ok(file) = file {
                    if let Ok(catalog) = gettext::Catalog::parse(file) {
                        return catalog;
                    }
                }
            }
        }
    }

    // no match: fall back to an empty catalog, i.e. untranslated English
    gettext::Catalog::empty()
}

A few things need explaining. First, the paths: /usr/share/ripasso is the default, and always a place where we search for translation files. The option_env! macro is there so that different distributions can specify their own paths at compile time. The cfg!(debug_assertions) check is true when running in debug mode, and is there so that it’s easy to test a translation while you are working on it.

The for loop selects the best-fitting language based on how the user has configured their locale.

If none of those match the languages that we have available, we return an empty Catalog, which means that it defaults back to English.


Translating Rust programs isn’t as straightforward as it could be, but it’s in no way impossible and well worth doing.

Oslo NixOS MiniCon 2020 report

07/03/20 — fnords && sgo

On February 22nd and 23rd, Oslo NixOS User Group hosted a mini conference at Hackeriet. We had a variety of talks about different parts of the Nix ecosystem.


The Nix ecosystem

Elis Hirwing

Elis Hirwing (etu) talked about the Nix ecosystem! This was a great overview of the different Nix components and tools.

Some take-aways from this talk:

  • The nixpkgs repository on GitHub is huge, with over 49 000 packages! So this is a very active community. According to Repology it’s the most up-to-date repo right now!
  • The community works to keep packages as up to date as possible, and it is relatively easy to become a contributor.
  • They try to remove unmaintained or EOL packages (unless too many other packages depend on them….looking at you, Python 2!).
  • You don’t have to use NixOS to take advantage of Nix packages; they can be used on basically any Linux distribution, or on Darwin (macOS).

Elis Hirwing presentation

With tools like direnv and nix-shell, Nix is also great for setting up development environments. There is also a lot of tooling for different languages. This slide is an example of how Etu uses nix-shell to get the dependencies needed for generating the slides of this presentation. Nix has grown a lot in the last five years, and it is pretty exciting to follow that development further down the road.

Watch the talk

The slides are available on Github


Kim Lindberger

Then Kim Lindberger (talyz) gave a great presentation on NixOps. We even got treated to some demos!

Some things to note about NixOps:

  • NixOps can be used to deploy NixOS systems to machines and non-machine resources (DNS, S3 buckets, etc.). All configuration is built locally before being shipped.
  • There are plugins for a few cloud providers, for instance Amazon EC2 and Google Compute Engine.
  • If a deploy fails for some reason, you are never stuck with a system that is in a half-updated state. If the config doesn’t build, it won’t get pushed upstream and applied at all.
  • NixOps is unfortunately still Python 2, but there are efforts underway to port it to a modern Python.
  • Backends will be split into separate plugins in an upcoming release!

Kim Lindberger presentation

Watch the talk

You can find the slides and the examples used in the demos on Github

Nix expressions

Adam Höse

Last talk of the day was Adam Höse (Adisbladis) giving us an intro to reading Nix expressions. This is perhaps the most daunting aspect of NixOS for beginners.

A few things to consider:

  • It’s not an imperative language, it is functional! A description that was mentioned is “a little bit like a weird Lisp without all the parens”
  • Using the nix repl can be useful if you want to debug expressions or just play around with the language.
  • Nix configurations can have different weights, meaning that if you duplicate expressions, you can assign a weight to one of them that determines what will actually be built. Nix merges all the config together and that way decides what takes precedence.
  • The key take-away: Learning the language will take your Nix journey further!
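As a small taste of the language (an illustrative example, not from the talk), here is a Nix expression a beginner might read: a function taking an attribute set, with a let-binding and string interpolation:

```nix
# A function from an attribute set (with a default) to a derivation;
# everything in Nix is an expression.
{ pkgs ? import <nixpkgs> {} }:

let
  greeting = "Hello";
in
pkgs.writeShellScriptBin "greet" ''
  echo "${greeting} from Nix"
''
```

Pasting the body into nix repl, or building it with nix-build, is a good way to experiment.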

Adam Höse presentation

Lots of questions were asked by the audience during this talk, and hopefully some light was shed on the mysteries of the Nix language by the end.

Watch the talk

Then we ate some pizza and hung out Hackeriet style!

Waiting for pizza to arrive


Building Docker containers with Nix

Adam Höse

On the last day we got an overview of how to build Docker containers using Nix by Adam Höse.

  • Normally a Docker build is an imperative process that might have different outcomes at different points in time (mostly because of the use of base images). Building a Docker image “Nix style” makes it reproducible, it will build the same way every time.
  • Docker layers are content-addressed paths that are ordered; they are not sequential.
  • buildLayeredImage is great for minimizing the amount of dependencies that are pulled into the final Docker image.
  • Nixery is a project where you can get ad-hoc Docker images with a specific set of packages. The way to do this is to put the package names in the image name, like this: docker pull nixery.dev/shell/git/htop. Then you will get a custom docker image with bash, git and htop. Really cool!
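A sketch of what such a build looks like (image name and contents are made up for illustration):

```nix
# Each store path becomes its own layer, so shared dependencies are
# cached and reused between images.
{ pkgs ? import <nixpkgs> {} }:

pkgs.dockerTools.buildLayeredImage {
  name = "htop-image";
  contents = [ pkgs.htop ];
  config.Cmd = [ "${pkgs.htop}/bin/htop" ];
}
```

Running nix-build on this produces a tarball that docker load can import, with the same result every time.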

Day 2 of the con

Watch the talk

After this talk we had an informal session of hacking on Nix things and socializing.

The organizers (fnords & sgo) want to thank the speakers and everyone else that came to the mini-con! Special thanks to NUUG Foundation for sponsoring this event! If you want to get notified of any future events in Oslo NixOS User Group, you can join our Meetup group.

Hackeriet 10 years

20/02/20 — hackeriet


Hackeriet turned 10 years old last December! Time flies when you’re busy hacking! To mark this momentous occasion we decided to invite some old friends and new acquaintances to speak at Hackeriet on March 14th (also known as Pi day, a mathematically fortuitous date)!


Due to the coronavirus situation in Oslo, we’ve decided to postpone the event. The new date is still undecided, but our next mathematically fortuitous date is τ-day (June 28th). Put it in your calendar, and check this page or the #oslohackerspace channel on irc.freenode.org for further information.


14:00: A short introduction to Internet Governance by Maja Enes

The term «Internet Governance» covers all the actors and agencies who govern the Internet. But who are they, and what is this governing power they claim to hold? This session gives a short introduction to the term «Internet Governance», looks closer at some of the more prominent bodies to see who is involved and how they work, and asks whether it is possible for you or me to make an actual difference.

Maja Enes has a background in psychology and social work. She published the book «Internett Internett Internett» in 2019 to give people without a technical background the opportunity to learn how Internet infrastructure works. The book was born out of the frustration Maja struggled with when she, as the newly appointed chair of the Norwegian chapter of the Internet Society, tried to learn how the Internet works. For the last 5 years she has been working freelance, and she writes about things she does at www.frkenes.no; for example, she has written about a small soldering course she held at Hackeriet in 2015.

Separate signup for this talk.

15:00: Introduction to election security by Patricia Aas

Free and correct elections are the linchpin of democracy. For a government to be formed based on the will of the people, the will of the people must be heard. Across the world election systems are being classified as critical infrastructure, and they face the same concerns as all other fundamental systems in society.

We are building our critical infrastructure from hardware and software built by nations and companies we can’t expect to trust. How can this be dealt with in Election Security, and can those lessons be applied to other critical systems society depends on today?

Patricia Aas is a programmer who has worked mostly in C++ and Java. She has spent her career continuously delivering from the same code-base to a large user base. She has worked on two browsers (Opera and Vivaldi), worked as a Java consultant and on embedded telepresence endpoints for Cisco. She is focused on the maintainability and flexibility of software architecture, and how to extend it to provide cutting edge user experiences. Her focus on the end user has led her work more and more toward privacy and security, and she has recently started her own company, TurtleSec, hoping to contribute positively to the infosec and C++ communities. She is also involved in the Include C++ organization hoping to improve diversity and inclusion in the C++ community.

Separate signup for this talk.

16:00: Dark Patterns and the GDPR by Quirin Weinzierl

Have you ever felt a website tricked you into providing your data? If so, you probably encountered a “Dark Pattern”. Service providers use tricks like defaults, countdowns and confusing language to get us to consent against our own interest. Understanding these patterns leads us into exploring how our decisions are shaped by psychological deficiencies, biases and rules of thumb.

The bad news: The General Data Protection Regulation (GDPR) is currently unable to stop Dark Patterns.

The good news: It could, if we use it the right way. And we all can help to confront Dark Patterns. In this talk we’ll get an idea of how both these things can work.

Quirin Weinzierl is a Ph.D. candidate at the University of Speyer and currently a visiting Ph.D. researcher at the NRCCL/SERI, University of Oslo. At the German Research Institute for Public Administration Speyer, Quirin serves as the coordinator of the research cluster “Transformation of the State in the Age of Digitalization”. Quirin studied law at the University of Munich and Yale Law School. He clerked at the European Court of Human Rights and worked at the German Parliament’s Academic Research Service. He searches for good behaviorally informed regulation and even better randonée-skiing in Norway.

Separate signup for this talk.

17:00: Learn lock picking with Hans-Petter Fjeld

Are you curious about locks and how lockpicking works? We are organizing a beginners’ workshop just for you!

We have a bunch of locks and simple lockpicks, and will tell you about a few simple techniques to get you started with locksports.

Different types of locks will be introduced to you, so you are able to recognize them, know the basic workings of each, and how you can pick them.

After the workshop we will sit around and attempt to pick locks, and if there is time we can talk about alternative non-destructive means of bypassing locks in general.

Hans-Petter is co-founder of Hackeriet and is currently working as an Information Security Engineer for a large Norwegian managed hosting company. He started lockpicking as a hobby around 10 years ago.

Separate signup for this workshop. NOTE: This workshop goes in parallel with the two talks below!

17:00: Unlocking closed software with Frida by Ole André Ravnås

In a world where so much runs on software, and so little of it is open and transparent, what can we do to uncover the truth about what the software actually does? Which files does it open, and who does it talk to?

In this talk, Ole André will show us how Frida can be used to understand software without access to its source code. This is very helpful even when source code is available, due to the sheer complexity of modern software, with layers upon layers of code interacting in unknown ways.

Ole André Ravnås is the creator of Frida, which he’s currently building products on top of at NowSecure. Once upon a time a die-hard Linux user, he found himself reverse-engineering the proprietary video codec used by Windows Live Messenger for webcam conversations. The result was released as libmimic back in 2005, and this was his gateway drug to the world of reversing.

Separate signup for this talk.

18:00: Interactive serial communication by Øyvind Kolås

Øyvind Kolås will tell us about his experiences writing his own ANSI/ECMA-48/vt100 engine, and how he (via some dev-fuzzing) stumbled upon this bug.

This talk on interactive serial communications might touch upon some of the following terms: baud, baudot, ascii, cp850, latin1, unicode, DEC, rs232, vt100, ANSI/ECMA-48, ansi.sys, telix, RIP, ReGIS, sixels, BBS, terminal emulator, tty, ssh/telnet NAWS, ascii-art, ansi-art, aalib, caca, chafa, tv. If any of this piques your interest, come on down!

Øyvind Kolås is a digital media toolsmith, creating tools and infrastructure to aid the creation of his own and others’ artistic and visual digital media experiments. He is the maintainer and lead developer of GIMP’s next-generation engine, babl/GEGL: infrastructure libraries where he has been working actively for more than a decade on providing high bit depths, HDR, CMYK and non-destructive editing capabilities to GIMP and other software. You can read more about Øyvind on his Patreon page.

Separate signup for this talk.

19:00: The birthday gathering will continue until morale improves.

Here’s to another 0b00001010 years!

Questions? Chat with us on IRC at #oslohackerspace @ Freenode.