Planet Debian

Planet.Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org


Yves-Alexis Perez: OpenPGP smartcard transition (part 1)

October 10, 2017 20:44, by Planet Debian - no comments yet

A long time ago, I switched my GnuPG setup to a smartcard based one. I kept using the same master key, but:

  • copied the rsa4096 master key to a “master” smartcard, for when I need to sign (certify) other keys;
  • created rsa2048 subkeys (for signature, encryption and authentication) and moved them to an OpenPGP smartcard for daily usage.

I've been working with that setup for a few years now and it is working perfectly fine. The signature counter on the OpenPGP basic card is a bit north of 5000, which is large but not that huge, all things considered (and not counting authentication and decryption key usage).
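
For readers wanting to replicate a similar layout, the GnuPG side of it looks roughly like the sketch below (not my exact session; the key ID is a placeholder and the interactive prompts are abbreviated):

# create rsa2048 subkeys for signature, encryption and authentication
gpg --edit-key 0xDEADBEEF
gpg> addkey        # repeat for each capability needed
gpg> save

# move each subkey to the OpenPGP smartcard
gpg --edit-key 0xDEADBEEF
gpg> key 1         # select the subkey to move
gpg> keytocard     # pick the matching slot on the card
gpg> save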

One very nice feature of using a smartcard is that my laptop (or other machines I work on) never manipulates the private key directly but only sends requests to the card, which is a really huge improvement, in my opinion. But it's also not the perfect solution for me: the OpenPGP card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information on the smartcard, besides the fact that Werner Koch trusts ZeitControl not to mess up. One caveat for me is that the card does not use a certified secure microcontroller like you would find in the smartcard chips used in debit cards or electronic IDs. That means it hasn't really been audited by a competent hardware lab, and thus can't be considered secure against physical attacks. The card OS software and the application implementing the OpenPGP specification are not public either and, to the best of my knowledge, have not been audited.

At one point I was interested in the Yubikey Neo, especially since the architecture Yubico used was common: a (supposedly) certified platform (secure microcontroller, card OS) and a GlobalPlatform / JavaCard virtual machine. The applet used in the Yubikey Neo is open source, too, so you could take a look at it and identify any issues.

Unfortunately, Yubico transitioned to a less common and more proprietary infrastructure for the Yubikey 4: it's no longer JavaCard based, and they don't provide the applet source anymore. This was not really seen as a good move by a lot of people, including Konstantin Ryabitsev (kernel.org administrator). Also, even for the Yubikey Neo it wasn't possible to actually build the applet yourself and inject it on the card: when the Yubikey leaves the facility, the applet is already installed and the smartcard is locked (for obvious security reasons). I've tried asking about getting naked/empty Yubikeys with developer keys so I could load the applet myself, but it was apparently not possible, or would have required signing an NDA with NXP (the chip maker), which is not really possible as an individual (not that I really want to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard applet, with the intention of supporting the latest version of the OpenPGP specification, and especially elliptic curve cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub repository. I investigated a bit, and found a smartcard with the right specifications: certified (in France or Germany), and supporting JavaCard 3.0.4 (required for ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC with NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256, P384, P512) curves.

I've ordered some cards, and when they arrived I started playing. I built the SmartPGP applet and pushed it to a smartcard, then generated some keys and tried them with GnuPG. I'm right now in the process of migrating to a new smartcard based on that setup, which seems to work just fine after a few days.

Part two of this series will describe how to build the applet and inject it into the smartcard. The process is already documented here and there, but there are a few things not to forget, like how to lock the card after provisioning, so I guess having the complete process somewhere might be useful in case some people want to reproduce it.



Michal Čihař: Better access control in Weblate

October 10, 2017 18:45, by Planet Debian - no comments yet

Upcoming Weblate 2.17 will bring improved access control settings. Previously this could be controlled only by server admins, but now the project visibility and access presets can be configured.

This allows you to better tweak access control for your needs. There is an additional choice of making the project public while restricting translations, which has been requested by several projects.

You can see the possible choices on the UI screenshot:

Weblate overall experience

On Hosted Weblate this feature is currently available only to commercial hosting customers. Projects hosted for free are limited to public visibility only.

Filed under: Debian English SUSE Weblate



Iain R. Learmonth: Automatic Updates

October 10, 2017 17:15, by Planet Debian - no comments yet

The @TorAtlas web application will now also prompt operators to update their relays if they are outdated. pic.twitter.com/HMixwqbBKM

— Tor Atlas (@TorAtlas) October 9, 2017

We have instructions for setting up new Tor relays on Debian. The only time the word “upgrade” is mentioned here is:

Be sure to set your ContactInfo line so we can contact you if you need to upgrade or something goes wrong.

This isn’t great. We should have some decent instructions for keeping your relay up to date too. I’ve been compiling a set of documentation for enabling automatic updates on various Linux distributions; here’s a taste of what I have so far:


Debian

Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
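
For reference, the dpkg-reconfigure step essentially writes a file like /etc/apt/apt.conf.d/20auto-upgrades; if you prefer to create it by hand, the stock content looks roughly like this (a sketch, not Tor-specific):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";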

Fedora 22 or later

Beginning with Fedora 22, you can enable automatic updates via:

dnf install dnf-automatic

In /etc/dnf/automatic.conf set:

apply_updates = yes
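
For context, that key lives in the [commands] section, so the relevant fragment of /etc/dnf/automatic.conf should look something like this (based on the stock configuration, trimmed down):

[commands]
apply_updates = yes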

Now enable and start automatic updates via:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer

(Thanks to Enrico Zini I know all about these timer units in systemd now.)
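
To check that the timer really is scheduled, something like the following should show its next activation (output omitted here):

systemctl list-timers dnf-automatic.timer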

RHEL or CentOS

For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:

yum install yum-cron

In /etc/yum/yum-cron.conf set:

apply_updates = yes

Enable and start automatic updates via:

systemctl enable yum-cron.service
systemctl start yum-cron.service

I’d also like to collect instructions for other distributions (and *BSD and Mac OS). Atlas knows which platform a relay is running on, so in the future there could be a link to some platform-specific instructions on how to keep your relay up to date.



Jamie McClelland: Docker in Debian

October 10, 2017 16:07, by Planet Debian - no comments yet

It's not easy getting Docker to work in Debian.

It's not in stable at all:

0 jamie@turkey:~$ rmadison docker.io
docker.io  | 1.6.2~dfsg1-1~bpo8+1 | jessie-backports | source, amd64, armel, armhf, i386
docker.io  | 1.11.2~ds1-5         | unstable         | source, arm64
docker.io  | 1.11.2~ds1-5         | unstable-debug   | source
docker.io  | 1.11.2~ds1-6         | unstable         | source, armel, armhf, i386, ppc64el
docker.io  | 1.11.2~ds1-6         | unstable-debug   | source
docker.io  | 1.13.1~ds1-2         | unstable         | source, amd64
docker.io  | 1.13.1~ds1-2         | unstable-debug   | source
0 jamie@turkey:~$ 

And a problem with runc makes it really hard to get it working on Debian unstable.

These are the steps I took to get it running today (2017-10-10).

Remove runc (allow it to remove containerd and docker.io):

sudo apt-get remove runc

Install docker-runc (now in testing)

sudo apt-get install docker-runc

Fix containerd package to depend on docker-runc instead of runc:

mkdir containerd
cd containerd
apt-get download containerd 
ar x containerd_0.2.3+git20170126.85.aa8187d~ds1-2_amd64.deb
tar -xzf control.tar.gz
sed -i s/runc/docker-runc/g control
tar -c md5sums control | gzip -c > control.tar.gz
ar rcs new-containerd.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-containerd.deb

Fix docker.io package to depend on docker-runc instead of runc.

mkdir docker
cd docker
apt-get download docker.io
ar x docker.io_1.13.1~ds1-2_amd64.deb
tar -xzf control.tar.gz
sed -i s/runc/docker-runc/g control
tar -c {post,pre}{inst,rm} md5sums control | gzip -c > control.tar.gz
ar rcs new-docker.io.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-docker.io.deb

Symlink docker-runc => runc

sudo ln -s /usr/sbin/docker-runc /usr/sbin/runc
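
At this point a quick smoke test should confirm that the daemon and the runtime are talking to each other; this pulls the small hello-world image from Docker Hub and runs it:

sudo docker run --rm hello-world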

Keep apt-get from upgrading until this bug is fixed:

printf "# Remove when docker.io and containerd depend on docker-runc
# instead of normal runc
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877329
Package: runc 
Pin: release * 
Pin-Priority: -1 

Package: containerd
Pin: release * 
Pin-Priority: -1 

Package: docker.io
Pin: release * 
Pin-Priority: -1" | sudo tee /etc/apt/preferences.d/docker.pref

Thanks to coderwall for tips on manipulating deb files.



Lars Wirzenius: Debian and the GDPR

October 10, 2017 15:11, by Planet Debian - no comments yet

GDPR is a new EU regulation for privacy. The name is short for "General Data Protection Regulation" and it covers all organisations that handle personal data of EU citizens and EU residents. It will become enforceable May 25, 2018 (Towel Day). This will affect Debian. I think it's time for Debian to start working on compliance, mainly because the GDPR requires sensible things.

I'm not an expert on GDPR legislation, but here's my understanding of what we in Debian should do:

  • do a privacy impact assessment, to review and document what data we have and collect, and what risks a leak of that data would pose for the people whose personal data it is

  • only collect personal information for specific purposes, and only use the data for those purposes

  • get explicit consent from each person for all collection and use of their personal information; archive this consent (e.g., list subscription confirmations)

  • allow each person to get a copy of all the personal information we have about them, in a portable manner, and let them correct it if it's wrong

  • allow people to have their personal information erased

  • maybe appoint one or more data protection officers (not sure this is required for Debian)

There's more, but let's start with those.

I think Debian has at least the following systems that will need to be reviewed with regards to the GDPR:

  • db.debian.org - Debian project members, "Debian developers"
  • nm.debian.org
  • contributors.debian.org
  • lists.debian.org - at least membership lists, maybe archives
  • possibly irc servers and log files
  • mail server log files
  • web server log files
  • version control services and repositories

There may be more; these are just off the top of my head.

I expect that Debian will mostly be OK, but we can't just assume that.



Reproducible builds folks: Reproducible Builds: Weekly report #128

October 10, 2017 8:08, by Planet Debian - no comments yet

Here's what happened in the Reproducible Builds effort between Sunday October 1 and Saturday October 7 2017:

Media coverage

Documentation updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

32 package reviews have been added, 46 have been updated and 62 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (27)

diffoscope development

strip-nondeterminism development

Rob Browning noticed that strip-nondeterminism was causing serious performance regressions in the Clojure programming language within Debian. After some discussion, Chris Lamb also posted a query to debian-devel in case there were any other programming languages that might be suffering from the same problem.

reprotest development

Versions 0.7.1 and 0.7.2 were uploaded to unstable by Ximin Luo:

  • New features:
    • Add a --auto-build option to try to determine which specific variations cause unreproducibility.
    • Add a --source-pattern option to restrict copying of source_root, and set this automatically in our presets.
  • Usability improvements:
    • Improve error messages in some common scenarios.
      • Giving a source_root or build_command that doesn't exist
      • Using reprotest with default settings after not installing Recommends
    • Output hashes after a successful --auto-build.
    • Print a warning message if we reproduced successfully but didn't vary everything.
  • Fix varying both umask and user_group at the same time.
  • Have dpkg-source extract to different build dir if varying the build-path.
  • Pass --exclude-directory-metadata to diffoscope(1) by default as this is the majority use-case.
  • Various bug fixes to get the basic dsc+schroot example working.

It included contributions already covered by posts of the previous weeks, as well as new ones from:

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.



Vincent Fourmond: Define a function with inline Ruby code in QSoas

October 9, 2017 22:31, by Planet Debian - no comments yet

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas, or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, x, vm and km, corresponding to the formula vm/(1 + km/x).

You can then test that the function has been correctly defined running for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For x = km, the result is vm/2 (here 0.5). For x much larger than km, the result is almost vm. You can use the newly defined my_func in any place you would use any Ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)

To redefine my_func, just run the ruby code again with a new definition, such as:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end

The previous version is just erased, and all new uses of my_func will refer to your new definition.


See for yourself

The code for this example can be found there. Browse the qsoas-goodies GitHub repository for more goodies!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1. You can download its source code or buy precompiled versions for MacOS and Windows there.



Markus Koschany: My Free Software Activities in September 2017

October 9, 2017 22:18, by Planet Debian - no comments yet

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I sponsored a new release of hexalate for Unit193 and icebreaker for Andreas Gnau. The latter is a reintroduction.
  • New upstream releases this month: freeorion and hyperrogue.
  • I backported freeciv and freeorion to Stretch.

Debian Java

Debian LTS

This was my nineteenth month as a paid contributor and I have been paid to work 15.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 18 September to 24 September I was in charge of our LTS frontdesk. I triaged bugs in poppler, binutils, kannel, wordpress, libsndfile, libexif, nautilus, libstruts1.2-java, nvidia-graphics-drivers, p3scan, otrs2 and glassfish.
  • DLA-1108-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1116-1. Issued a security update for poppler fixing 3 CVE.
  • DLA-1119-1. Issued a security update for otrs2 fixing 4 CVE.
  • DLA-1122-1. Issued a security update for asterisk fixing 1 CVE. I also investigated CVE-2017-14099 and CVE-2017-14603. I decided against a backport because the fix was too intrusive and the vulnerable option is disabled by default in Wheezy’s version which makes it a minor issue for most users.
  • I submitted a patch for Debian’s reportbug tool. (#878088) During our LTS BoF at DebConf 17 we came to the conclusion that we should implement a feature in reportbug that checks whether the bug reporter wants to report a regression for a recent security update. Usually the LTS and security teams  receive word from the maintainer or users who report issues directly to our mailing lists or IRC channels. However in some cases we were not informed about possible regressions and the new feature in reportbug shall ensure that we can respond faster to such reports.
  • I started to investigate the open security issues in wordpress and will complete the work in October.

Misc

  • I packaged a new version of xarchiver. Thanks to the work of Ingo Brückl xarchiver can handle almost all archive formats in Debian now.

QA upload

  • I did a QA upload of xball, an ancient game from the '90s that simulates bouncing balls. It should be ready for another decade at least.

Thanks for reading and see you next time.



Ben Hutchings: Debian LTS work, September 2017

October 9, 2017 16:25, by Planet Debian - no comments yet

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from August. I only worked 12 hours, so I will carry over 9 hours to the next month.

I prepared and released another update on the Linux 3.2 longterm stable branch (3.2.93). I then rebased the Debian linux package onto this version, added further security fixes, and uploaded it (DLA-1099-1).



Antonio Terceiro: pristine-tar updates

October 9, 2017 15:06, by Planet Debian - no comments yet

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator, Joey Hess. A little after that, Tomasz Buchert joined me and we are now a functional two-person team.

pristine-tar's goals are to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don’t have to store a full copy of that tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that was downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.
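
In day-to-day use that boils down to two operations, roughly like the following sketch (the package name and tag are hypothetical):

# store a delta for the upstream tarball against the contents of the upstream/1.0 tag
pristine-tar commit ../foo_1.0.orig.tar.gz upstream/1.0

# later, regenerate the exact same tarball from what is stored in the VCS
pristine-tar checkout ../foo_1.0.orig.tar.gz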

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz) and by what exact options were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while.

So basically keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar.

Going forward, we may need to rely on other ways of ensuring integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn’t matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much Debian work currently relies on it, so we can’t just abandon it. Instead, we keep figuring out ways to improve it. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit.

In order to understand how much better we are getting at it, I created a visualization of the regression test suite results. With the help of data from there, let’s look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default.

This was the first release made after the stretch release, and it made xdelta3 the default delta generator for newly-imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports.

The support for having multiple delta generators was written by Tomasz, and had already been there since 1.35, but we decided to only flip the switch after xdelta3 support was available in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest delta possible, and this includes trying different compression options. In this release Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved the support for rsyncable gzip-compressed files. We can see an example of the type of improvement we got in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz:

In 1.40, the delta produced from the test tarball faad2_2.6.1.orig.tar.gz went down from 800KB, almost the same size as the tarball itself, to 6.8KB.

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, or from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the “pristine” bit of the whole business, but on the other hand, 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit-by-bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine.

In some cases, as is the case for the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all:

util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed

In other cases, if the current heuristics can’t produce a reasonably small delta, recompressing makes a huge difference. It’s the case for mumble_1.1.8.orig.tar.gz:

with recompression, the delta produced from mumble_1.1.8.orig.tar.gz goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, about 1% of the size of the original tarball

Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations.

Also, even if you enable recompression, pristine-tar will only try it if delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what “too large” means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
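
For example, to let gbp buildpackage pass recompression options to every pristine-tar call it makes, something along these lines should work (the threshold value here is just an illustration):

export PRISTINE_TAR="--recompress --recompress-threshold-percent 20"
gbp buildpackage --git-pristine-tar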



Michal Čihař: stardicter 1.1

October 9, 2017 13:15, by Planet Debian - no comments yet

Stardicter 1.1, the set of scripts to convert some freely available dictionaries to StarDict format, has been released today. The biggest change is that it will also keep source data together with the generated dictionaries. This is good for licensing reasons and will also make it possible to actually build these as packages within Debian.

Full list of changes:

  • Various cleanups for first stable release.
  • Fixed generating of README for dictionaries.
  • Added support for generating source tarballs.
  • Fixed installation on systems with non utf-8 locale.

As usual, you can install from pip, download the source or download generated dictionaries from my website. The package should soon be available in Debian as well.
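
For the pip route, that should be as simple as the following (assuming the PyPI package name matches the project name):

pip install stardicter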

Filed under: Debian English StarDict



Petter Reinholdtsen: Generating 3D prints in Debian using Cura and Slic3r(-prusa)

October 9, 2017 8:50, by Planet Debian - no comments yet

At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request for adding it to Debian from 2013, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required by a working Cura in Debian are uploaded into Debian and waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors to Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.



Gunnar Wolf: Achievement unlocked - Made with Creative Commons translated to Spanish! (Thanks, @xattack!)

October 9, 2017 4:05, by Planet Debian - no comments yet

I am very, very, very happy to report this — And I cannot believe we have achieved this so fast:

Back in June, I announced I'd start working on the translation of the Made with Creative Commons book into Spanish.

Over the following few weeks, I worked out the most viable infrastructure, gathered input and commitments for help from a couple of friends, submitted my project for inclusion in the Hosted Weblate translations site (and got it approved!)

Then, we quietly and slowly started working.

Then, as it usually happens in late August, early September... The rush of the semester caught me in full, and I left this translation project for later — For the next semester, perhaps...

Today, I received a mail that surprised me. That stunned me.

99% of translated strings! Of course, it does not look as neat as "100%" would, but there are several strings not to be translated.

So, yay for collaborative work! Oh, and FWIW — Thanks to everybody who helped. And really, really, really, hats off to Luis Enrique Amaya, a friend whom I see way less than I should. A LIDSOL graduate, and a nice guy all around. Why him specially? Well... This has several wrinkles to iron out, but, by number of translated lines:

  • Andrés Delgado 195
  • scannopolis 626
  • Leo Arias 812
  • Gunnar Wolf 947
  • Luis Enrique Amaya González 3258

...Need I say more? Luis, I hope you enjoyed reading the book :-]

There is still a lot of work to do, and I'm asking the rest of the team some days so I can get my act together. From the mail I just sent, I need to:

  1. Review the Pandoc conversion process, to get the strings formatted again into a book; I had got this working somewhere in the process, but last I checked it broke. I expect this not to be too much of a hurdle, and it will help all other translations.
  2. Start the editorial process at my Institute. Once the book builds, I'll have to start again the stylistic correction process so the Institute agrees to print it out under its seal. This time, we have the hurdle that our correctors will probably hate us due to part of the work being done before we had actually agreed on some important Spanish language issues... which are different between Mexico, Argentina and Costa Rica (where translators are from).

Anyway — This sets the mood for a great start of the week. Yay!



Joachim Breitner: e.g. in TeX

October 8, 2017 19:08, by Planet Debian - no comments yet

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\␣.

Years later, I learned from a convincing, but since forgotten, source that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\␣ supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until I recently was called out for doing it wrong, and that in fact e.g.\␣ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper.

What now? I guess I just have to find it out myself.
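
For reference, the images below came out of a small test document along these lines (a sketch, not the exact source):

\documentclass{article}
\begin{document}
e.g. first variant   % bare period: TeX may treat this as an end of sentence

e.g.\ second variant % backslash-space: an explicit inter-word space

e.g.\@ third variant % \@ resets the space factor after the period
\end{document}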

The problem and two solutions

The above image shows three variants: the obviously broken version with e.g., and the two competing variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how \@ and \␣ react when the line length changes, and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:

Expanding whitespace, take 1

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

Expanding whitespace, take 3

Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but e.g.\␣ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\␣!

(Unless you put it in a macro, then \@ might be preferable, and it is still needed between a capital letter and a sentence period.)