CVE-2020-14423: Convos 4.19 Generates a Predictable Secret

19/06/20 — sgo

Convos is an open-source, web-based IRC client.

While packaging convos for nixpkgs, we found that the application generated a predictable local_secret. This was caused by the following code:

my $secret = Mojo::Util::md5_sum(join ':', $self->core->home->to_string, $<, $(, $0);

The secret is derived from:

  • Home directory of the application
  • Real UID of the process
  • Real GID of the process
  • Name of the executed program.

The local_secret will therefore be predictable when running under Docker, and easily guessable on other platforms.
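
To illustrate why this is a problem, here is a minimal sketch (not Convos code) of how an attacker could compute candidate secrets, using the md5 crate in Rust. The home directory, uid/gid and program name below are illustrative guesses for a containerized install, not values taken from a real deployment.

// Minimal sketch: compute a candidate secret from guessed inputs,
// analogous to the Perl code above. Uses the `md5` crate; all values
// below are illustrative guesses, not taken from a real install.
fn candidate_secret(home: &str, uid: u32, gid: u32, program: &str) -> String {
    format!("{:x}", md5::compute(format!("{}:{}:{}:{}", home, uid, gid, program)))
}

fn main() {
    // Under Docker these inputs are essentially fixed, so a handful of
    // guesses is enough.
    println!("{}", candidate_secret("/app", 0, 0, "script/convos"));
}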

Impact

A remote attacker who derives the secret may be able to create invite links or reset passwords for existing users.

Vulnerable

Convos 4.19 and earlier are vulnerable.

Mitigation

Users should upgrade to Convos 4.20 or newer, and regenerate secrets for their installation.

Timeline

  • 2020-06-12: Vulnerability discovered, vendor notified
  • 2020-06-14: Patch created by vendor
  • 2020-06-18: Version 4.20 released, CVE assigned.

Credits

Thanks to Jan Henning Thorsen for quickly fixing the vulnerability.

Vulnerability discovered by Stig Palmquist.

Fuzzing Sequoia-PGP

28/05/20 — capitol

Sequoia is a promising new OpenPGP library written in Rust. Since Rust has excellent interoperability with C, Sequoia also exposes itself as a C library through the sequoia_openpgp_ffi crate. This is how you would call the library from other programming languages, as C often acts as the lowest common denominator.

As Sequoia is making progress towards a 1.0 release, I thought it was time to help out by trying to discover bugs in it through fuzzing, a technique where you generate random input to functions and observe the execution flow in order to detect problems.

When attacking a system it’s often useful to determine where the trust boundaries of the system lie, and the ffi crate is one of those boundaries. Parser code is a large source of bugs: it’s very hard to foresee all the different kinds of invalid input when writing a parser, so pgp_packet_parser_from_bytes looked like a prime candidate for fuzzing.

The Setup

I decided to use the cargo-fuzz framework, which turned out to be a good choice; it’s very simple to get started with.

Initial setup:

cargo install cargo-fuzz
git clone https://gitlab.com/sequoia-pgp/sequoia.git
cd sequoia
cargo fuzz init

This creates the setup you need for fuzzing and adds a skeleton file in fuzz/fuzz_targets/fuzz_target_1.rs.

I implemented the target like this:

#![no_main]
use libfuzzer_sys::fuzz_target;

extern crate sequoia_openpgp_ffi;

fuzz_target!(|data: &[u8]| {
    if data.len() > 0 {
        let _ = sequoia_openpgp_ffi::parse::pgp_packet_parser_from_bytes(
            core::option::Option::None, &data[0], data.len());
    }
});

The framework generates random data and uses that to call the fuzz_target function. I then simply pass that data on to the sequoia_openpgp_ffi::parse::pgp_packet_parser_from_bytes function.

I started fuzzing like this:

cargo fuzz run fuzz_target_1 -- -detect_leaks=0

The framework also detects memory leaks, and there was some output related to that. But that wasn’t what I was looking for here, so I disabled it with -- -detect_leaks=0.

Findings

It seems like I was the first person to run a fuzzer against that function, so I found the following:

  1. An integer overflow on a shift left
  2. An attempt to parse invalid UTF-8
  3. An attempt to read nonexistent data

The framework is very explicit about which input caused the broken behaviour, so it’s trivial to take that input and build a unit test from it. This helps the maintainers triage and fix the problems.
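
For example, a crashing input saved by cargo fuzz (under fuzz/artifacts/fuzz_target_1/) can be turned into a regression test along these lines; this is a minimal sketch and the artifact file name is hypothetical:

// Minimal sketch of a regression test built from a crash artifact;
// the file name below is hypothetical.
#[test]
fn crashing_input_no_longer_panics() {
    let data: &[u8] =
        include_bytes!("../fuzz/artifacts/fuzz_target_1/crash-example");
    let _ = sequoia_openpgp_ffi::parse::pgp_packet_parser_from_bytes(
        None, &data[0], data.len());
}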

It can also try to minimize the test input automatically, to keep the tests small, but that functionality seemed to be broken.

Security Implications

Since this is Rust, these panics don’t cause undefined behaviour as they might have in, for example, C. These findings will therefore at most cause a denial of service due to the process crashing.

Thanks

Big thanks to the Sequoia team, who were very responsive when I reported the issues, and especially to Neal.

Packaging Rust for Debian - part II

26/05/20 — capitol

Let’s do another dive into packaging Rust for Debian with a slightly more complicated example.

One great tool in the packager’s toolbox is cargo-debstatus. By running it in the root of your crate you get a list of your dependencies, together with information about their packaging status in Debian.

For Ripasso, part of the dependency tree looked like this at the time of writing. Ripasso depends on gpgme, which depends on a number of other Rust libraries (and a number of native ones that aren’t shown here).

├── gpgme v0.9.2
│   ├── bitflags v1.2.1 (in debian)
│   ├── conv v0.3.3
│   │   └── custom_derive v0.1.7
│   ├── cstr-argument v0.1.1 (in debian)
│   ├── gpg-error v0.5.1 (in debian)
│   ├── gpgme-sys v0.9.1
│   │   ├── libc v0.2.66 (in debian)
│   │   └── libgpg-error-sys v0.5.1 (in debian)
│   ├── libc v0.2.66 (in debian)
│   ├── once_cell v1.3.1 (in debian)
│   ├── smallvec v1.1.0 (in debian)
│   └── static_assertions v1.1.0

One of the dependencies is static_assertions, which is actually already packaged in Debian, but as version 0.3.3, and we need 1.1.0. Let’s investigate how to fix this.

In order to verify that we won’t break any other package by upgrading the existing Debian package to 1.1.0, we run list-rdeps.sh.

$ ./dev/list-rdeps.sh static-assertions
APT cache is a bit old, update? [Y/n] n
Versions of rust-static-assertions in unstable:
  librust-static-assertions-dev                    0.3.3-2

Versions of rdeps of rust-static-assertions in unstable, that also exist in testing:
  librust-lexical-core-dev                         0.4.3-1+b1       depends on     librust-static-assertions-0.3+default-dev (>= 0.3.3-~~),

Here we see that the lexical-core package depends on static-assertions, and a quick git clone and compilation confirms that it doesn’t compile with version 1.1.0.

We can take this one step further:

$ ./dev/list-rdeps.sh lexical-core
APT cache is a bit old, update? [Y/n] n
Versions of rust-lexical-core in unstable:
  librust-lexical-core+correct-dev                 0.4.3-1+b1
  librust-lexical-core+default-dev                 0.4.3-1+b1
  librust-lexical-core-dev                         0.4.3-1+b1
  librust-lexical-core+dtoa-dev                    0.4.3-1+b1
  librust-lexical-core+grisu3-dev                  0.4.3-1+b1
  librust-lexical-core+ryu-dev                     0.4.3-1+b1
  librust-lexical-core+stackvector-dev             0.4.3-1+b1

Versions of rdeps of rust-lexical-core in unstable, that also exist in testing:
  librust-lexical-core+correct-dev                 0.4.3-1+b1       depends on     librust-lexical-core+table-dev (= 0.4.3-1+b1),
  librust-lexical-core+default-dev                 0.4.3-1+b1       depends on     librust-lexical-core+correct-dev (= 0.4.3-1+b1), librust-lexical-core+std-dev (= 0.4.3-1+b1),
  librust-nom+lexical-core-dev                     5.0.1-4          depends on     librust-lexical-core-0.4+default-dev,
  librust-nom+lexical-dev                          5.0.1-4          depends on     librust-lexical-core-0.4+default-dev,

And we see that nom depends on lexical-core.

$ ./dev/list-rdeps.sh nom
APT cache is a bit old, update? [Y/n] n
Versions of rust-nom in unstable:
  librust-nom+default-dev                          5.0.1-4
  librust-nom-dev                                  5.0.1-4
  librust-nom+lazy-static-dev                      5.0.1-4
  librust-nom+lexical-core-dev                     5.0.1-4
  librust-nom+lexical-dev                          5.0.1-4
  librust-nom+regex-dev                            5.0.1-4
  librust-nom+regexp-dev                           5.0.1-4
  librust-nom+regexp-macros-dev                    5.0.1-4
  librust-nom+std-dev                              5.0.1-4

Versions of rdeps of rust-nom in unstable, that also exist in testing:
  librust-cexpr-dev                                0.3.3-1+b1       depends on     librust-nom-4+default-dev, librust-nom-4+verbose-errors-dev,
  librust-dhcp4r-dev                               0.2.0-1          depends on     librust-nom-5+default-dev (>= 5.0.1-~~),
  librust-iso8601-dev                              0.3.0-1          depends on     librust-nom-4+default-dev,
  librust-nom-4+lazy-static-dev                    4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3),
  librust-nom-4+regex-dev                          4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3),
  librust-nom-4+regexp-macros-dev                  4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3), librust-nom-4+regexp-dev (= 4.2.3-3),
  librust-nom-4+std-dev                            4.2.3-3          depends on     librust-nom-4-dev (= 4.2.3-3), librust-nom-4+alloc-dev (= 4.2.3-3),
  librust-nom+default-dev                          5.0.1-4          depends on     librust-nom+std-dev (= 5.0.1-4), librust-nom+lexical-dev (= 5.0.1-4),
  librust-nom+regexp-macros-dev                    5.0.1-4          depends on     librust-nom+regexp-dev (= 5.0.1-4),
  librust-nom+std-dev                              5.0.1-4          depends on     librust-nom+alloc-dev (= 5.0.1-4),
  librust-pktparse-dev                             0.4.0-1          depends on     librust-nom-4+default-dev (>= 4.2-~~),
  librust-rusticata-macros-dev                     2.0.4-1          depends on     librust-nom-5+default-dev,
  librust-tls-parser-dev                           0.9.2-3          depends on     librust-nom-5+default-dev,
  librust-weedle-dev                               0.10.0-3         depends on     librust-nom-4+default-dev,

And a lot of things depend on nom.

So, in order to package static_assertions so that we can package gpgme, we can choose one of three different strategies:

  1. Package both versions of static_assertions
  2. Upgrade lexical-core, nom and everything nom depends on to newer versions
  3. Patch version 0.4.3 of lexical-core to use a newer version of static_assertions

Packaging both versions of static_assertions

This is a workable strategy, but packaging both means that we need to create a new package for version 0.3 of static_assertions. New packages in Debian go through the NEW queue, where a member of the ftp masters team has to manually verify that they don’t contain any non-free software.

Therefore we will not choose this strategy.

Upgrading lexical-core, nom and everything nom depends on to newer versions

There is a newer version of lexical-core that depends on static_assertions 1, but the version of nom that works with it is a beta of nom 6, and upgrading to that would mean that we would need to patch all the incompatibilities in the applications that use nom.

That is a lot of non-trivial work, especially since we would want to upstream those patches so that the maintenance burden doesn’t grow too large.

Patching version 0.4.3 of lexical-core to use a newer version of static_assertions

It turns out that there is an upstream commit in lexical-core that upgrades static_assertions and applies cleanly on top of version 0.4.3. This is what we will use.

So we take that commit as a patch and place it in the debian/patches directory, together with a series file that simply lists the order in which the patches should be applied.
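
In the rust-lexical-core source package, the result looks roughly like this (the patch file name here is hypothetical):

debian/patches
├── series                            (lists the patch file names, one per line, applied top to bottom)
└── upgrade-static-assertions.patch   (the upstream lexical-core commit, exported as a patch)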

That enables us to upgrade the static_assertions package to version 1.1.0 without breaking any other package.

How we did translations in Rust for Ripasso

30/04/20 — capitol

One core principle of writing user-friendly software is that the software should adapt to the user, not the user to the software. A user shouldn’t need to learn a new language in order to use the software, for example.

Therefore internationalization is a usability feature that we shouldn’t ignore.

There are a number of translation frameworks, but GNU Gettext is one of the most widely used, so that is what we will use as well.

Solving that problem requires solving a few sub-problems:

  • Extracting strings to translate from the source code
  • Updating old translation files with new strings
  • Generating the binary translation files
  • Using the correct generated translation

Extracting translatable strings from Rust source code

The Gettext package contains a number of helper programs, among them xgettext, which can be used to extract the strings from the Rust sources.

One drawback is that Rust isn’t on the list of languages that xgettext can parse. But Rust is similar enough to C that we can use the C parser, as long as we don’t have any multi-line strings.

In Ripasso we extract the strings like this:

xgettext cursive/src/*rs -kgettext --sort-output -o cursive/res/ripasso-cursive.pot

Updating old translation files with new strings

This isn’t Rust-specific in any way, but it needs to be done, so we include it as a step. We use the Gettext program msgmerge:

msgmerge --update cursive/res/sv.po cursive/res/ripasso-cursive.pot

The .pot file is the template that contains all the strings, and there is one .po file per language containing the translations for that language.

A translator can open the .po file in a translation program, for example Poedit, and translate it.
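
For illustration, an entry in a .po file pairs the extracted string with its translation; the strings here are made up, not taken from Ripasso:

msgid "Copy password"
msgstr "Kopiera lösenord"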

Generating the binary translation files

We do this with a third Gettext utility, msgfmt, which we call from build.rs. It reads the .po files and generates binary .mo files.

This code from Ripasso is a bit verbose/ugly, but it gets the job done.

fn generate_translation_files() {
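    // The build script binary runs from target/<profile>/build/<pkg>-<hash>/,
    // so popping four path components lands in target/, where we create
    // target/translations/ and target/translations/cursive/ for the .mo files.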
    let mut dest_path = std::env::current_exe().unwrap();
    dest_path.pop();
    dest_path.pop();
    dest_path.pop();
    dest_path.pop();
    dest_path.push("translations");
    print!("creating directory: {:?} ", &dest_path);
    let res = std::fs::create_dir(&dest_path);
    if res.is_ok() {
        println!("success");
    } else {
        println!("error: {:?}", res.err().unwrap());
    }
    dest_path.push("cursive");
    print!("creating directory: {:?} ", &dest_path);
    let res = std::fs::create_dir(&dest_path);
    if res.is_ok() {
        println!("success");
    } else {
        println!("error: {:?}", res.err().unwrap());
    }

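    // Walk up from the build script binary to the project root and into
    // cursive/res/, where the .po source files live.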
    let mut dir = std::env::current_exe().unwrap();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.push("cursive");
    dir.push("res");

    let translation_path_glob = dir.join("**/*.po");
    let existing_iter =
        glob::glob(&translation_path_glob.to_string_lossy()).unwrap();

    for existing_file in existing_iter {
        let file = existing_file.unwrap();
        let mut filename =
            format!("{}", file.file_name().unwrap().to_str().unwrap());
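        // Turn e.g. "sv.po" into "sv.mo" (assumes two-letter language codes).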
        filename.replace_range(3..4, "m");

        print!(
            "generating .mo file for {:?} to {}/{} ",
            &file,
            dest_path.display(),
            &filename
        );
        let res = Command::new("msgfmt")
            .arg(format!(
                "--output-file={}/{}",
                dest_path.display(),
                &filename
            ))
            .arg(format!("{}", &file.display()))
            .output();

        if res.is_ok() {
            println!("success");
        } else {
            println!("error: {:?}", res.err().unwrap());
        }
    }
}

The .mo files will end up in target/translations/cursive/, one file per language.

Using the correct generated translation

The best crate I found was gettext. There were others as well, but they required unstable Rust features and were therefore unusable, since the various Linux distributions use stable Rust to compile their packages.

During runtime, the translations live inside a lazy_static variable:

lazy_static! {
    static ref CATALOG: gettext::Catalog = get_translation_catalog();
}

But getting the correct translation into that variable can be a bit tricky. Here is how we do it in Ripasso:

fn get_translation_catalog() -> gettext::Catalog {
    let locale = locale_config::Locale::current();

    let mut translation_locations = vec!["/usr/share/ripasso"];
    if let Some(path) = option_env!("TRANSLATION_INPUT_PATH") {
        translation_locations.insert(0, path);
    }
    if cfg!(debug_assertions) {
        translation_locations.insert(0, "./cursive/res");
    }

    for preferred in locale.tags_for("messages") {
        for loc in &translation_locations {
            let langid_res: Result<LanguageIdentifier, _> =
                format!("{}", preferred).parse();

            if let Ok(langid) = langid_res {
                let file = std::fs::File::open(format!(
                    "{}/{}.mo",
                    loc,
                    langid.get_language()
                ));
                if let Ok(file) = file {
                    if let Ok(catalog) = gettext::Catalog::parse(file) {
                        return catalog;
                    }
                }
            }
        }
    }

    gettext::Catalog::empty()
}

A few things need explaining. First, the paths: /usr/share/ripasso is the default and is always searched for translation files. The option_env! macro is there so that different distributions can specify their own paths at compile time. The cfg!(debug_assertions) check is true when running in debug mode, and is there so that it’s easy to test a translation while you are working on it.

The for loop selects the best-fitting language based on how the user has configured their locale.

If none of those match the languages we have available, we return an empty Catalog, which means the application falls back to English.
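
At runtime, strings are then looked up through the catalog. Here is a minimal sketch of such a helper, assuming the CATALOG static above; the helper name mirrors the -kgettext keyword we passed to xgettext earlier:

fn gettext(msg: &str) -> String {
    // Catalog::gettext returns the translation if one exists,
    // otherwise the original (English) string is returned as-is.
    CATALOG.gettext(msg).to_string()
}

String literals passed to such a helper are exactly what the xgettext invocation above picks up and writes to the .pot file.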

Conclusion

Translating Rust programs isn’t as straightforward as it could be, but it’s in no way impossible and well worth doing.

Oslo NixOS MiniCon 2020 report

07/03/20 — fnords && sgo

On February 22 and 23, the Oslo NixOS User Group hosted a mini conference at Hackeriet. We had a variety of talks about different parts of the Nix ecosystem.

DAY 1

The Nix ecosystem

Elis Hirwig

Elis Hirwing (etu) talked about the Nix ecosystem! This was a great overview of the different Nix components and tools.

Some take-aways from this talk:

  • The Nixpkgs repository on Github is huge, over 49 000 packages! So this is a very active community. According to Repology it’s the most up-to-date repo right now!
  • The community works to keep packages as up to date as possible, and it is relatively easy to become a contributor.
  • They try to remove unmaintained or EOL packages (unless too many other packages depend on them… looking at you, Python 2!).
  • You don’t have to use NixOS to take advantage of Nix packages; they can be used on basically any Linux distribution, as well as on Darwin (macOS).

With tools like direnv and nix-shell, Nix is also great for setting up development environments, and there is a lot of tooling for different languages. One example from the talk: Etu uses nix-shell to pull in the dependencies needed to generate the slides for this very presentation. Nix has grown a lot in the last five years, and it is pretty exciting to follow that development further down the road.

Watch the talk

The slides are available on Github

NixOps

Kim Lindberger

Then Kim Lindberger (talyz) gave a great presentation on NixOps. We even got treated to some demos!

Some things to note about NixOps:

  • NixOps can be used to deploy NixOS systems to machines, and it can also manage non-machine resources (DNS, S3 buckets, etc.). All configuration is built locally before being shipped.
  • There are plugins for a few cloud providers, for instance Amazon EC2 and Google Compute Engine.
  • If a deploy fails for some reason, you are never stuck with a system in a half-updated state. If the configuration doesn’t build, it won’t get pushed to the machines and applied at all.
  • NixOps is unfortunately still Python 2, but there are efforts under way to port it to a modern Python.
  • Backends will be split into separate plugins in an upcoming release!

Watch the talk

You can find the slides and the examples used in the demos on Github

Nix expressions

Adam Höse

The last talk of the day was Adam Höse (Adisbladis) giving us an intro to reading Nix expressions. This is perhaps the most daunting aspect of NixOS for beginners.

A few things to consider:

  • It’s not an imperative language, it is functional! A description that was mentioned is "a little bit like a weird Lisp without all the parens".
  • Using the nix repl can be useful if you want to debug expressions or just play around with the language.
  • Nix configurations can have different weights, meaning that if you duplicate expressions, you can assign a weight to one of them that determines what will actually be built. Nix merges all the configuration together and uses that to decide what takes precedence.
  • The key take-away: Learning the language will take your Nix journey further!

Lots of questions were asked by the audience during this talk, and hopefully some light was shed on the mysteries of the Nix language by the end.

Watch the talk

Then we ate some pizza and hung out Hackeriet style!

DAY 2

Building Docker containers with Nix

Adam Höse

On the last day, Adam Höse gave us an overview of how to build Docker containers using Nix.

  • Normally a Docker build is an imperative process that might have different outcomes at different points in time (mostly because of the use of base images). Building a Docker image "Nix style" makes it reproducible; it will build the same way every time.
  • Docker layers are content-addressed paths that are ordered; they are not sequential.
  • buildLayeredImage is great for minimizing the amount of dependencies that are pulled into the final Docker image.
  • Nixery is a project where you can get ad-hoc Docker images with a specific set of packages. The way to do this is to put the package names in the image name, like this: docker pull nixery.dev/shell/git/htop. Then you will get a custom docker image with bash, git and htop. Really cool!

Watch the talk

After this talk we had an informal session of hacking on Nix things and socializing.

The organizers (fnords & sgo) want to thank the speakers and everyone else who came to the mini-con! Special thanks to the NUUG Foundation for sponsoring this event! If you want to get notified about future events in the Oslo NixOS User Group, you can join our Meetup group.