How we did translations in Rust for Ripasso

30/04/20 — capitol


One core principle of writing user-friendly software is that the software should adapt to the user, not the user to the software. For example, a user shouldn’t need to learn a new language in order to use the software.

Therefore internationalization is a usability feature that we shouldn’t ignore.

A number of frameworks exist for translations, but GNU Gettext is one of the most widely used, so that is what we will use as well.

Solving that problem requires solving a few sub-problems:

  • Extracting strings to translate from the source code
  • Updating old translation files with new strings
  • Generating the binary translation files
  • Using the correct generated translation

Extracting translatable strings from Rust source code

The Gettext package contains a number of helper programs, among them xgettext, which can be used to extract the strings from the Rust sources.

One drawback is that Rust isn’t on the list of languages that xgettext can parse. But Rust is similar enough to C that we can use the C parser, as long as we don’t have any multiline strings.

In Ripasso we extract the strings like this:

xgettext cursive/src/*rs -kgettext --sort-output -o cursive/res/ripasso-cursive.pot
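
The -kgettext flag tells xgettext to treat the first argument of every call to a function named gettext as a translatable string, so the sources wrap their user-visible strings in such a call. A minimal, hypothetical stand-in for that wrapper (the real one looks the string up in a translation catalog, as shown further down) could look like this:

// Hypothetical stand-in for the real translation helper; the point is only
// that xgettext -kgettext will extract the string literal passed to it.
fn gettext(msg_id: &str) -> String {
    msg_id.to_string()
}

fn main() {
    println!("{}", gettext("Copy password to clipboard"));
}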

Updating old translation files with new strings

This isn’t Rust-specific in any way, but it needs to be done, so we include it as a step. We use the Gettext program msgmerge:

msgmerge --update cursive/res/sv.po cursive/res/ripasso-cursive.pot

The .pot file is the template file that contains all the strings, and there will be one .po file per language that contains the translations for that language.

A translator can open the .po file in a translation program, for example Poedit, and translate it.

Generating the binary translation files

We do this with a third Gettext utility, msgfmt, which we invoke from build.rs. It reads the .po files and generates binary .mo files.

This code from Ripasso is a bit verbose and ugly, but it gets the job done.

use std::process::Command; // for invoking msgfmt
// (the glob crate is also used below to locate the .po files)

fn generate_translation_files() {
    // current_exe() points at the build-script binary inside target/, so
    // walk up to the target directory and create target/translations/ there.
    let mut dest_path = std::env::current_exe().unwrap();
    dest_path.pop();
    dest_path.pop();
    dest_path.pop();
    dest_path.pop();
    dest_path.push("translations");
    print!("creating directory: {:?} ", &dest_path);
    let res = std::fs::create_dir(&dest_path);
    if res.is_ok() {
        println!("success");
    } else {
        println!("error: {:?}", res.err().unwrap());
    }
    dest_path.push("cursive");
    print!("creating directory: {:?} ", &dest_path);
    let res = std::fs::create_dir(&dest_path);
    if res.is_ok() {
        println!("success");
    } else {
        println!("error: {:?}", res.err().unwrap());
    }

    // Walk up from the build-script binary to the project root, then into
    // cursive/res where the source .po files live.
    let mut dir = std::env::current_exe().unwrap();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.pop();
    dir.push("cursive");
    dir.push("res");

    let translation_path_glob = dir.join("**/*.po");
    let existing_iter =
        glob::glob(&translation_path_glob.to_string_lossy()).unwrap();

    for existing_file in existing_iter {
        let file = existing_file.unwrap();
        let mut filename =
            format!("{}", file.file_name().unwrap().to_str().unwrap());
        // Turn e.g. "sv.po" into "sv.mo" (assumes a two-letter language code).
        filename.replace_range(3..4, "m");

        print!(
            "generating .mo file for {:?} to {}/{} ",
            &file,
            dest_path.display(),
            &filename
        );
        let res = Command::new("msgfmt")
            .arg(format!(
                "--output-file={}/{}",
                dest_path.display(),
                &filename
            ))
            .arg(format!("{}", &file.display()))
            .output();

        if res.is_ok() {
            println!("success");
        } else {
            println!("error: {:?}", res.err().unwrap());
        }
    }
}

The .mo files will end up in target/translations/cursive/, one file per language.
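
To hook this into the build, build.rs only needs to call the function from its main; a minimal sketch (Ripasso’s real build script may do more) looks like this:

fn main() {
    generate_translation_files();
}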

Using the correct generated translation

The best crate I found was gettext. There were others, but they required unstable Rust features and were therefore unusable, since the various Linux distributions use stable Rust to compile their packages.

At runtime, the translations live inside a lazy_static variable:

lazy_static! {
    static ref CATALOG: gettext::Catalog = get_translation_catalog();
}
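
At runtime a string is translated by looking it up in that catalog with gettext::Catalog::gettext. Wiring the hypothetical wrapper from the extraction step to the catalog could then look roughly like this (Ripasso’s real helper may differ):

// Hypothetical wrapper, now backed by the catalog: Catalog::gettext returns
// the translation if one exists, or the original string otherwise.
fn gettext(msg_id: &str) -> String {
    CATALOG.gettext(msg_id).to_string()
}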

But getting the correct translation into that variable can be a bit tricky. Here is how we do it in Ripasso:

fn get_translation_catalog() -> gettext::Catalog {
    let locale = locale_config::Locale::current();

    let mut translation_locations = vec!["/usr/share/ripasso"];
    if let Some(path) = option_env!("TRANSLATION_INPUT_PATH") {
        translation_locations.insert(0, path);
    }
    if cfg!(debug_assertions) {
        translation_locations.insert(0, "./cursive/res");
    }

    for preferred in locale.tags_for("messages") {
        for loc in &translation_locations {
            let langid_res: Result<LanguageIdentifier, _> =
                format!("{}", preferred).parse();

            if let Ok(langid) = langid_res {
                let file = std::fs::File::open(format!(
                    "{}/{}.mo",
                    loc,
                    langid.get_language()
                ));
                if let Ok(file) = file {
                    if let Ok(catalog) = gettext::Catalog::parse(file) {
                        return catalog;
                    }
                }
            }
        }
    }

    gettext::Catalog::empty()
}

A few things need explaining. First, the paths: /usr/share/ripasso is the default location, and we always search there for translation files. The option_env! macro is there so that different distributions can specify their own path at compile time. The cfg!(debug_assertions) check is true when running in debug mode, and is there so that it’s easy to test a translation while you are working on it.

The for loop selects the best-fitting language based on how the user has configured their locale.

If none of those match the languages that we have available, we return an empty Catalog, which means that the application falls back to English.
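
To try out a specific translation during development, it should be enough to override the locale for a single run, assuming a Unix-like system where locale_config picks the locale up from the environment:

LANG=sv_SE.UTF-8 cargo run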

Conclusion

Translating Rust programs isn’t as straightforward as it could be, but it’s by no means impossible and well worth doing.

Oslo NixOS MiniCon 2020 report

07/03/20 — fnords && sgo

On February 22nd and 23rd, Oslo NixOS User Group hosted a mini conference at Hackeriet. We had a variety of talks about different parts of the Nix ecosystem.

DAY 1

The Nix ecosystem

Elis Hirwing

Elis Hirwing (etu) talked about the Nix ecosystem! This was a great overview of the different Nix components and tools.

Some take-aways from this talk:

  • The Nixpkgs repository on Github is huge, over 49 000 packages! So this is a very active community. According to Repology it’s the most up-to-date repo right now!
  • The community works to keep packages as up to date as possible, and it is relatively easy to become a contributor.
  • They try to remove unmaintained or EOL packages (unless too many other packages depend on them… looking at you, Python 2!).
  • You don’t have to use NixOS to take advantage of Nix packages; they can be used on basically any Linux distribution, and on macOS (Darwin).

[Slide from Elis Hirwing’s presentation]

With tools like direnv and nix-shell, Nix is also great for setting up development environments, and there is a lot of tooling for different languages. The slide above is an example of how Etu uses nix-shell to get the dependencies needed for generating the slides of this presentation. Nix has grown a lot in the last five years, and it is pretty exciting to follow that development further down the road.
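
As a generic example (not necessarily Etu’s exact setup), running something like nix-shell -p pandoc drops you into a shell where pandoc is available, without installing it system-wide.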

Watch the talk

The slides are available on Github

NixOps

Kim Lindberger

Then Kim Lindberger (talyz) gave a great presentation on NixOps. We even got treated to some demos!

Some things to note about NixOps:

  • NixOps can be used to deploy NixOS systems to machines and non-machine resources (DNS, S3 buckets, etc.). All configuration is built locally before being shipped.
  • There are plugins for a few cloud providers, for instance Amazon EC2 and Google Compute Engine.
  • If a deploy fails for some reason, you are never stuck with a system that is in a half-updated state. If the config doesn’t build, it won’t get pushed upstream and applied at all.
  • NixOps is unfortunately still Python 2, but there are efforts under way to port it to a modern Python.
  • Backends will be split into separate plugins in an upcoming release!

[Slide from Kim Lindberger’s presentation]

Watch the talk

You can find the slides and the examples used in the demos on Github

Nix expressions

Adam Höse

The last talk of the day was by Adam Höse (Adisbladis), who gave us an intro to reading Nix expressions. This is perhaps the most daunting aspect of NixOS for beginners.

A few things to consider:

  • It’s not an imperative language, it is functional! A description that was mentioned is “a little bit like a weird Lisp without all the parens”.
  • Using the nix repl can be useful if you want to debug expressions or just play around with the language.
  • Nix configurations can have different weights, meaning that if you duplicate expressions, you can assign a weight to one of them that determines what will actually be built. Nix merges all the config together and uses that to decide what takes precedence.
  • The key take-away: Learning the language will take your Nix journey further!

[Photo: Adam Höse presenting]

Lots of questions were asked by the audience during this talk, and hopefully some light was shed on the mysteries of the Nix language by the end.

Watch the talk

Then we ate some pizza and hung out Hackeriet style!

[Photo: Waiting for pizza to arrive]

DAY 2

Building Docker containers with Nix

Adam Höse

On the last day we got an overview of how to build Docker containers using Nix by Adam Höse.

  • Normally a Docker build is an imperative process that might have different outcomes at different points in time (mostly because of the use of base images). Building a Docker image “Nix style” makes it reproducible: it will build the same way every time.
  • Docker layers are content-addressed paths that are ordered; they are not sequential.
  • buildLayeredImage is great for minimizing the amount of dependencies that are pulled into the final Docker image.
  • Nixery is a project where you can get ad-hoc Docker images with a specific set of packages. The way to do this is to put the package names in the image name, like this: docker pull nixery.dev/shell/git/htop. Then you will get a custom docker image with bash, git and htop. Really cool!

[Photo: Day 2 of the con]

Watch the talk

After this talk we had an informal session of hacking on Nix things and socializing.

The organizers (fnords & sgo) want to thank the speakers and everyone else who came to the mini-con! Special thanks to the NUUG Foundation for sponsoring this event! If you want to get notified of any future events in the Oslo NixOS User Group, you can join our Meetup group.

Hackeriet 10 years

20/02/20 — hackeriet


Hackeriet turned 10 years old last December! Time flies when you’re busy hacking! To mark this momentous occasion we decided to invite some old friends and new acquaintances to speak at Hackeriet on March 14th (also known as Pi day, a mathematically fortuitous date)!

IMPORTANT: EVENT IS CANCELLED (previously postponed)

Due to the Corona virus situation in Oslo, we’ve decided to cancel the event (it was first announced as merely postponed). The new date is still undecided, but our next mathematically fortuitous date is τ-day (June 28th). Put it in your calendar, and check this place or the #oslohackerspace channel on irc.freenode.org for further information. We’ll get back to you with new events at a later date. Stay tuned!

Program

14:00: A short introduction to Internet Governance by Maja Enes

The term «Internet Governance» is an umbrella term for all the actors and agencies who govern the Internet. But who are they, and what is this governing power they claim to hold? This session gives a short introduction to the term «Internet Governance» and looks closer at some of the more prominent bodies to see who is involved and how they work. And is it possible for you or me to make an actual difference?

Maja Enes has a background in psychology and social work. She published the book «Internett Internett Internett» in 2019 to give people without a technical background the opportunity to learn how Internet infrastructure works. The book was born out of the frustration Maja struggled with when she, as the newly appointed chair of the Norwegian Chapter of the Internet Society, tried to learn how the Internet works. For the last 5 years she has been working freelance, and she writes about the things she does at www.frkenes.no; for example, she has written about a small soldering course she held at Hackeriet in 2015.

Separate signup for this talk.

15:00: Introduction to election security by Patricia Aas

Free and correct elections are the linchpin of democracy. For a government to be formed based on the will of the people, the will of the people must be heard. Across the world, election systems are being classified as critical infrastructure, and they face the same concerns as all other fundamental systems in society.

We are building our critical infrastructure from hardware and software built by nations and companies we can’t expect to trust. How can this be dealt with in Election Security, and can those lessons be applied to other critical systems society depends on today?

Patricia Aas is a programmer who has worked mostly in C++ and Java. She has spent her career continuously delivering from the same code-base to a large user base. She has worked on two browsers (Opera and Vivaldi), worked as a Java consultant and on embedded telepresence endpoints for Cisco. She is focused on the maintainability and flexibility of software architecture, and how to extend it to provide cutting edge user experiences. Her focus on the end user has led her work more and more toward privacy and security, and she has recently started her own company, TurtleSec, hoping to contribute positively to the infosec and C++ communities. She is also involved in the Include C++ organization hoping to improve diversity and inclusion in the C++ community.

Separate signup for this talk.

16:00: Dark Patterns and the GDPR by Quirin Weinzierl

Have you ever felt a website tricked you into providing your data? If so, you probably encountered a “Dark Pattern”. Service providers use tricks like defaults, countdowns and confusing language to get us to consent against our own interest. Understanding these patterns leads us into exploring how our decisions are shaped by psychological deficiencies, biases and rules of thumb.

The bad news: The General Data Protection Regulation (GDPR) is currently unable to stop Dark Patterns.

The good news: It could, if we use it the right way. And we all can help to confront Dark Patterns. In this talk we’ll get an idea of how both these things can work.

Quirin Weinzierl is a Ph.D. Candidate at University Speyer. He is currently a visiting Ph.D. researcher at the NRCCL/SERI, University of Oslo. At the German Research Institute for Public Administration Speyer Quirin serves as the coordinator of the research cluster “Transformation of the State in the Age of Digitalization”. Quirin studied law at University Munich and Yale Law School. He clerked at the European Court of Human Rights and worked at the German Parliament’s Academic Research Service. He searches for good behaviorally informed regulation and even better randonée-skiing in Norway.

Separate signup for this talk.

17:00: Learn lock picking with Hans-Petter Fjeld

Are you curious about locks and how lockpicking works? We are organizing a beginners’ workshop just for you!

We have a bunch of locks and simple lockpicks, and will tell you about a few simple techniques to get you started with locksports.

Different types of locks will be introduced to you, so you are able to recognize them, know the basic workings of each, and how you can pick them.

After the workshop we will sit around and attempt to pick locks, and if there is time we can talk about alternative non-destructive means of bypassing locks in general.

Hans-Petter is co-founder of Hackeriet and is currently working as an Information Security Engineer for a large Norwegian managed hosting company. He started lockpicking as a hobby around 10 years ago.

Separate signup for this workshop. NOTE: This workshop goes in parallel with the two talks below!

17:00: Unlocking closed software with Frida by Ole André Ravnås

In a world where so much runs on software, and so little of it is open and transparent, what can we do to uncover the truth about what the software actually does? Which files does it open, and who does it talk to?

In this talk, Ole André will show us how Frida can be used to understand software without access to its source code. This is also very helpful even when source code is available due to the sheer complexity of modern software, with layers and layers of code interacting in unknown ways.

Ole André Ravnås is the creator of Frida, which he’s currently building products on top of at NowSecure. Once upon a time a die-hard Linux user, he found himself reverse-engineering the proprietary video codec used by Windows Live Messenger for webcam conversations. The result was released as libmimic back in 2005, and this was his gateway drug to the world of reversing.

Separate signup for this talk.

18:00: Interactive serial communication by Øyvind Kolås

Øyvind Kolås will tell us about his experiences writing his own ANSI/ECMA-48/vt100 engine, and how he (via some dev-fuzzing) stumbled upon this bug.

This talk on interactive serial communications might touch upon some of the following terms: baud, baudot, ascii, cp850, latin1, unicode, DEC, rs232, vt100, ANSI/ECMA-48, ansi.sys, telix, RIP, ReGIS, sixels, BBS, terminal emulator, tty, ssh/telnet NAWS, ascii-art, ansi-art, aalib, caca, chafa, tv. If any of this piques your interest, come on down!

Øyvind Kolås is a digital media toolsmith, creating tools and infrastructure to aid the creation of his own and others’ artistic and visual digital media experiments. He is the maintainer and lead developer of GIMP’s next-generation engine, babl/GEGL - infrastructure libraries where he has, for more than the last decade, been actively working on bringing high bit depths, HDR, CMYK and non-destructive editing capabilities to GIMP - and other software. You can read more about Øyvind on his Patreon page.

Separate signup for this talk.

19:00: The birthday gathering will continue until morale improves.

Here’s to another 0x00001010 years!


Questions? Chat with us on IRC at #oslohackerspace @ Freenode.

Welcome to Oslo NixOS MiniCon 2020!

29/01/20 — fnords && sgo


Saturday 22 Feb and Sunday 23 Feb at Hackeriet in Oslo. This event is community-organized and free.

Oslo NixOS User Group is very excited to invite you to our first two-day mini conference. We’ve invited some great speakers to cover different aspects of the Nix ecosystem.

Saturday is dedicated to talks on NixOS, NixOps and the Nix language (and pizza for the ones who stick around to the end!). On Sunday we have a talk on how to build Docker containers with Nix, and we’ll end the day with a hands-on Nix package workshop.

Please tell us that you’re coming by RSVPing on meetup.com:

SCHEDULE, DAY 1

  • 15:00: “The Nix Ecosystem” - by Elis Hirwing (etu)

    This presentation aims to give a brief overview of what Nix is, how it functions and some of its potential uses. Then we will look at NixOS and some other tools that are part of what makes the Nix ecosystem great!

  • 15:30: “NixOps” - by Kim Lindberger

    NixOps is a declarative and fully automated deployment system for NixOS. If you’ve been using NixOS as your desktop OS and would like it to be your server OS, or have been playing around with the Amazon AWS images and wonder what the next step is, this talk is for you.

  • 16:00: “Reading Nix expressions” - by Adam Höse (Adisbladis)

    You just installed Nix or NixOS. You have a configuration.nix, default.nix or shell.nix and it just looks like a soup of characters? This talk is for you. Nix is not a big language, but the syntax is quite foreign to the usual language suspects. The goal of this talk is to demystify the language. By the end, all viewers should be able to read Nix code and start wielding Nix’s super powers.

  • 17:00 until late: Let’s eat pizza and hang out! 🍕

SCHEDULE, DAY 2

  • 13:00: “Building Docker containers with Nix” - by Adam Höse (Adisbladis)

    So you’ve learnt how to build a Nix package and everything is great. Now we need to learn how to ship our packages to production, which nowadays often means deploying to a container registry and then running inside of Docker and/or Kubernetes.

    This talk will teach you not only how to build seamlessly integrated containers using Nix that bring the reproducibility and traceability benefits of Nix, but also the inner workings of the Nixpkgs dockerTools functions. Hopefully we’ll also break a few people’s mental model of how Docker actually works under the hood.

  • 14:00 until late: Nix package workshop

    Do you have some software you want to package for Nix? Bring your laptop and your questions, we will try our best to help you!


Questions? Chat with us on IRC at #oslohackerspace @ Freenode.

Cover Photo Credit: Michael Ankes

Release of Ripasso version 0.4.0

26/01/20 — capitol


After two months of development effort, we are proud to present ripasso version 0.4.0.

New Features

Support for localization

Ripasso’s ncurses-based application now has support for localization and has been translated into:

  • French
  • Italian
  • Norwegian bokmål
  • Norwegian nynorsk
  • Swedish

Display a padlock icon if the git commit a password comes from has been signed by a valid key

If the git commit that a password came from has been signed by a gpg key that’s in your keyring, then ripasso will display a padlock icon ( 🔒 ) to symbolize this.

If there is a minor problem with the key, then an open padlock icon ( 🔓 ) will be displayed.

And if there is a major problem, then a stop icon ( ⛔ ) will be displayed.

Package for Fedora created

Ripasso has now been packaged for Fedora by Artem Polishchuk.

Major startup time improvement

In previous versions, ripasso did the equivalent of a git blame on every password file in the repository. This was fine for small repositories, but for large ones it didn’t work at all. The startup cost was on the order of O(n²), where n is the number of passwords.

This has been replaced by walking through the history once and populating the metadata for each file as we see it in the history.
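
To illustrate the idea, here is a simplified sketch using the git2 crate (not Ripasso’s actual code): walk the commits newest first and remember, for every file, the first commit we see that touched it.

use std::collections::HashMap;
use std::path::PathBuf;

fn last_commit_per_file(
    repo: &git2::Repository,
) -> Result<HashMap<PathBuf, git2::Oid>, git2::Error> {
    // push_head() walks from HEAD; the default order is newest first,
    // like git log.
    let mut revwalk = repo.revwalk()?;
    revwalk.push_head()?;

    let mut seen: HashMap<PathBuf, git2::Oid> = HashMap::new();

    for oid in revwalk {
        let oid = oid?;
        let commit = repo.find_commit(oid)?;
        let tree = commit.tree()?;
        // Diff against the first parent (or nothing, for the root commit)
        // to find which files this commit touched.
        let parent_tree = match commit.parent(0) {
            Ok(parent) => Some(parent.tree()?),
            Err(_) => None,
        };
        let diff =
            repo.diff_tree_to_tree(parent_tree.as_ref(), Some(&tree), None)?;
        for delta in diff.deltas() {
            if let Some(path) = delta.new_file().path() {
                // Since we walk newest first, only the first sighting counts.
                seen.entry(path.to_path_buf()).or_insert(oid);
            }
        }
    }
    Ok(seen)
}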

Support for the environment variable PASSWORD_STORE_SIGNING_KEY

If you specify one or more 40-character gpg key ids in this variable, then ripasso will verify that the .gpg-id file in the password store directory has been signed by one of those keys.

The signature is a detached gpg signature located in the .gpg-id.sig file.
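
Such a detached signature can be created with, for example, gpg --detach-sign .gpg-id in the password store directory, which produces the .gpg-id.sig file.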

Bugs Fixed

Don’t print passwords onscreen as they are generated

The generate button now prints stars instead of the actual password when pressed.

Prevent directory traversal

Previously it was possible to create files outside the password store directory by writing .. in the password path.
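
As an illustration of the kind of check that prevents this (a minimal sketch, not Ripasso’s actual fix), a password name can be rejected if its path contains anything other than normal components:

use std::path::{Component, Path};

// True only if the name consists of plain path components:
// no "..", no leading "/", and no "." references.
fn is_safe_password_name(name: &str) -> bool {
    Path::new(name).components().all(|c| match c {
        Component::Normal(_) => true,
        _ => false,
    })
}

fn main() {
    assert!(is_safe_password_name("work/email"));
    assert!(!is_safe_password_name("../../../home/user/.ssh/config"));
}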

Credits

  • Joakim Lundborg - Developer
  • Alexander Kjäll - Developer
  • Artem Polishchuk - Fedora packager
  • Silje Enge Kristensen - Norwegian bokmål translation
  • Eivind Syvertsen - Norwegian nynorsk translation
  • Enrico Razzetti - Italian translation
  • Camille Victor Prunier - French translation

Also a big thanks to everyone who contributed with bug reports and patches.