Planet Debian

Planet Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org.


Steve Kemp: A busy week or two

October 11, 2017 21:00, by Planet Debian

It feels like the past week or two has been very busy, and so I'm looking forward to my "holiday" next month.

I'm not really having a holiday of course, my wife is slowly returning to work, so I'll be taking a month of paternity leave, taking sole care of Oiva for the month of November. He's still a little angel, and now that he's reached 10 months old he's starting to get much more mobile - he's on the verge of walking, but not quite there yet. Mostly that means he wants you to hold his hands so that he can stand up, swaying back and forth before the inevitable collapse.

Beyond spending most of my evenings taking care of him, from the moment I return from work until his bedtime (around 7:30PM), I've made the Debian Administration website both read-only and much simpler. In the past that site was powered by a lot of servers, I think around 11. Now it has only a small number of machines, and that number should slowly decrease.

I've ripped out the database host, the redis host, the events-server, the planet-machine, the email-box, etc. Now we have a much simpler setup:

  • Front-end machine
    • Directly serves the code site
    • Directly serves the SSL site which exists solely for Let's Encrypt
    • Runs HAProxy to route the rest of the requests to the cluster.
  • 4 x Apache servers
    • Each one has a (read-only) MySQL database on it for the content.
      • In case of a future compromise I removed all user passwords and scrambled the email addresses.
      • I don't think there's a huge risk, but better safe than sorry.
    • Each one runs the web-application.
      • Which now caches each generated page to /tmp/x/x/x/x/$hash if it doesn't already exist.
      • If the request is cached it is served from that cache rather than generated dynamically (see the sketch below).
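
As a rough illustration of that last point, the cache lookup could be as simple as this shell sketch (the request variable and the generate_page command are placeholders I made up; only the /tmp/x/x/x/x/$hash layout comes from the description above):

# Hash the request to get the cache path; serve the file if it exists,
# otherwise generate the page, store it, and serve it.
hash=$(printf '%s' "$REQUEST_URI" | sha1sum | cut -d' ' -f1)
dir=/tmp/$(echo "$hash" | cut -c1)/$(echo "$hash" | cut -c2)/$(echo "$hash" | cut -c3)/$(echo "$hash" | cut -c4)
mkdir -p "$dir"
if [ -f "$dir/$hash" ]; then
    cat "$dir/$hash"
else
    generate_page "$REQUEST_URI" | tee "$dir/$hash"
fi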

Finally, although I'm slowly making progress with "radio stuff", I've knocked up a simple hack which uses an ultrasonic sensor to determine whether I'm sat in front of my (home) PC. If I am, everything is good. If I'm absent, the music is stopped and the screen is locked. Kinda neat.

(Simple ESP8266 device wired to the sensor. When the state changes a message is posted to Mosquitto, where a listener reacts to the change(s).)
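(For the curious, the listening side needs nothing more exotic than the stock Mosquitto client tools. A minimal sketch, with a made-up topic name and example commands for the music player and screen lock:)

# Subscribe to the presence topic and react to state changes.
mosquitto_sub -h localhost -t sensors/desk/presence | while read -r state; do
    if [ "$state" = "absent" ]; then
        mpc pause            # stop the music (MPD client, as an example)
        xdg-screensaver lock # lock the screen
    fi
done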

Oh, and one more thing. I've also transferred my mobile phone from DNA.fi to MoiMobile. The transfer should complete soon; right now my phone is in limbo, active on neither service. Oops.



Carl Chenet: The Slack Threat

October 10, 2017 22:00, by Planet Debian

For a long time, electronic mail was the main communication tool for enterprises. Slack, which offers public or private group discussion channels and one-to-one instant messaging, now challenges its position, especially in the IT industry.

Not only does Slack have features known and used since the launch of IRC in the late ’80s, it also offers file sending and sharing, code quoting, and it indexes everything that goes through the application for later searches. Slack is also modular, with numerous plug-ins to easily add new features.

Using the Software-as-a-Service (SaaS) model, Slack's basic version is free, and users pay for options. The GitHub generation now considers Slack the new standard enterprise communication tool.

As in my previous article on the GitHub threat, this one won't promote Slack's advantages, as many other articles have already covered those points ad nauseam; instead it will show the other side, and warn the companies using this service about its inherent risks. So far these risks have been ignored, sometimes deliberately, in the name of the “It works™” ideology, neglecting all economic and safety considerations and all threats to privacy and individual freedom. We'll look at them below.

GitHub, a software forge as a SaaS, with all the advantages but also all the risks of its economic model

All your company communication since its creation

When a start-up chooses Slack, all of its internal communication will be stored by Slack. For anyone using this service, the simple fact of chatting through it means that the whole conversation is archived.

One may point out that within the basic Slack offer, only the last 10,000 messages can be read and searched. That's a bad argument: Slack stores every message and every shared file as it pleases. We'll see below why this behavior is of capital importance in the threat Slack poses to enterprises.

And the problem is the same for every other company which chooses Slack at one point or another. If they replace their traditional communication methods with it, Slack will have access to critical data, not only in volume, but also in value, for the company itself… or for anyone interested in that company's life.

Search Your Entire Archive

One of the main arguments for using Slack is its “Search your entire archive” feature. One can search for almost anything one can think of. Why? Because everything is indexed: your team chat archive, the more or less confidential documents exchanged with the accounting department; everything is in there, in order to provide the most effective search tool.

The search bar, well-known by Slack users

We can't deny it's a very attractive feature for everyone inside the company. But it is also a very attractive feature for everyone outside the company who would like to know more about its internal life, even more so if they're looking for a specific subject.

If Slack is your company's main communication tool, and if, as I've experienced in my professional life, some teams prefer using it to walking to the office next door, or even pester you to put information on the dedicated channel, one can easily deduce that in this type of company nothing escapes Slack. The automatic indexing and the efficiency of the search feature are excellent tools for getting all the information needed, in quantity and in quality.

As such, it's a great social engineering tool for everyone who has access to it, with a history as old as the company's use of Slack as its communication tool.

Across borders… And Beyond!

Slack is a web service which mainly uses Amazon Web Services, most notably CloudFront, according to the available information on Slack's infrastructure.

Even without a complete study of said infrastructure, it's easy to see that all the data regarding many innovative companies around the world (for some of them, including all their internal communication since their creation) is located in the United States, or at least in the hands of a US company, which must follow US law. This is a country with a well-known history of large-scale industrial espionage, as the whistleblower Edward Snowden demonstrated in 2013, and where access to company data has no restriction under the Patriot Act, as in the 2014 Microsoft case, where data stored in Ireland by the Redmond software vendor was handed over to US authorities.

Edward Snowden, an individual—and corporate—freedom fighter

As such, Slack's automatic indexing and search tools are a boon for anyone, spy agency or hacker, who gains access to them.

Trusting a third party with all, or at least most, of your internal corporate communication is a real risk for your company if said third party doesn't follow the same regulations as yours, or if it has different interests, from a data security point of view or more broadly regarding its competitiveness. A badly timed data leak can be catastrophic.

What’s the point of secretly preparing a new product launch or an aggressive takeover if all your recent Slack conversations have leaked, including your secret plans?

What if… Slack is hacked?

First, let's remember that even if a cyberattack may seem a rare or hypothetical scenario to a badly informed and hurried manager, it is far from being as rare as he or she believes (or wants to believe).

Infrastructure hacking is quite common, as regular visits to Hacker News will amply demonstrate. And Slack itself has already been hacked.

February 2015: Slack was the victim of a four-day cyberattack, which the company made public in March. Officially, the unauthorized access was limited to information in users' profiles. It is impossible to measure exactly what, and who, was impacted by this attack. Remember that in a recent announcement Yahoo admitted that 3 billion accounts (you read that right: 3 billion) were compromised… back in late 2014!

Yahoo, the company which suffered the largest recorded cyberattack in terms of the number of compromised accounts

Officially, Slack stated that “No financial or payment information was accessed or compromised in this attack.” Which is by far the least interesting of all the data stored within Slack! With companies' internal communication indexed (sometimes from the very beginning of said company) and searchable, Slack is a potential target for cybercriminals looking not for its users' financial credentials but for their internal data, already in a usable format. One can imagine Slack would have to disclose a massive data leak, which couldn't be ignored. But what would happen if only one Slack user were the victim of such a leak?

The Free Alternative Solutions

As we demonstrated above, companies need to find an alternative to Slack, one they can host themselves to reduce the risk of data leaks and industrial espionage, and their dependency on an Internet connection. Luckily, Slack's success created its own copycats, some of them free software.

Rocket.Chat is one of them. Its comprehensive service offers chat rooms, direct messages and file sharing, but also videoconferencing, screen sharing, and many more features; check their dedicated page. You can also try an online demo. On top of that, Rocket.Chat has a very simple extension system and an API.

Mattermost is another alternative, which has the advantages of being close to and compatible with Slack. It offers numerous features, including the main ones expected from this type of software, along with numerous apps and plug-ins to interact with online services, software forges, and continuous integration tools.

It works

In the introduction, we discussed the “It works™” effect, usually invoked to dismiss any argument about data protection and the confidentiality of exchanges like those discussed in this article. True, a single developer may ask: why worry about it? All I want is to chat with my colleagues and send files!

Because subscribing to the Slack service puts the company continuously at risk in the long term. Maybe it's not the employees' place to worry about it; they just have to do their job as efficiently as possible. On the other hand, the company's management, usually non-technical, may not be aware of the risks this technical choice poses to their company. Technical management may pretend to be omniscient; nobody is fooled.

Either someone from upper management will ask the right question (where is our data and who can access it?), or someone from the technical side must officially alert them to these problems. It is this technical audience, even if not always heard by their management, that is the target of this article. May they find in it the right arguments to be convincing.

We hope that the points developed in this article will help you make the right choice.

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.

Follow me on social networks

Translated from French by Stéphanie Chaptal. Original article written in October 2016.

 



Yves-Alexis Perez: OpenPGP smartcard transition (part 1)

October 10, 2017 20:44, by Planet Debian

A long time ago, I switched my GnuPG setup to a smartcard based one. I kept using the same master key, but:

  • copied the rsa4096 master key to a “master” smartcard, for when I need to sign (certify) other keys;
  • created rsa2048 subkeys (for signature, encryption and authentication) and moved them to an OpenPGP smartcard for daily usage.

I've been working with that setup for a few years now and it is working perfectly fine. The signature counter on the OpenPGP basic card is a bit north of 5000, which is large but not that huge, all things considered (and not counting authentication and decryption key usage).

One very nice feature of using a smartcard is that my laptop (or other machines I work on) never manipulates the private key directly but only sends requests to the card, which is a really huge improvement, in my opinion. But it's also not the perfect solution for me: the OpenPGP card uses a proprietary platform from ZeitControl, named BasicCard. We have very little information on the smartcard, besides the fact that Werner Koch trusts ZeitControl not to mess up. One caveat for me is that the card does not use a certified secure microcontroller like you would find in the smartcard chips used in debit cards or electronic IDs. That means it hasn't really been audited by a competent hardware lab, and thus can't be considered secure against physical attacks. The card OS software and the application implementing the OpenPGP specification are not public either and, to the best of my knowledge, have not been audited.

At one point I was interested in the Yubikey Neo, especially since the architecture Yubico used was common: a (supposedly) certified platform (secure microcontroller, card OS) and a GlobalPlatform / JavaCard virtual machine. The applet used in the Yubikey Neo is open source, too, so you could take a look at it and identify any issues.

Unfortunately, Yubico transitioned to a less common and more proprietary infrastructure for the Yubikey 4: it's no longer JavaCard based, and they don't provide the applet source anymore. This was not really seen as a good move by a lot of people, including Konstantin Ryabitsev (kernel.org administrator). Also, even for the Yubikey Neo it wasn't possible to actually build the applet yourself and inject it into the card: when the Yubikey leaves the factory, the applet is already installed and the smartcard is locked (for obvious security reasons). I tried asking about getting naked/empty Yubikeys with developer keys so I could load the applet myself, but that was apparently not possible, or would have required signing an NDA with NXP (the chip maker), which is not really possible as an individual (not that I really wanted to anyway).

In the meantime, a coworker actually wrote an OpenPGP JavaCard applet, with the intention of supporting the latest version of the OpenPGP specification, and especially elliptic curve cryptography. The applet is called SmartPGP and has been released on the ANSSI GitHub repository. I investigated a bit, and found a smartcard with the right specifications: certified (in France or Germany), and supporting JavaCard 3.0.4 (required for ECC). The card can do RSA2048 (unfortunately not RSA4096) and EC with the NIST (secp256r1, secp384r1, secp521r1) and Brainpool (P256, P384, P512) curves.

I ordered some cards, and started playing when they arrived. I built the SmartPGP applet and pushed it to a smartcard, then generated some keys and tried them with GnuPG. I'm right now in the process of migrating to a new smartcard based on that setup, which seems to work just fine after a few days.
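
(If you want to check such a setup yourself, the standard way to confirm that GnuPG sees the card, and which keys live on it, is the following command; output elided here:)

gpg --card-status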

Part two of this series will describe how to build the applet and inject it into the smartcard. The process is already documented here and there, but there are a few things not to forget, like how to lock the card after provisioning, so I guess having the complete process written down somewhere might be useful in case some people want to reproduce it.



Michal Čihař: Better access control in Weblate

October 10, 2017 18:45, by Planet Debian

Upcoming Weblate 2.17 will bring improved access control settings. Previously these could be adjusted only by server admins, but now the project visibility and access presets can be configured.

This allows you to better tailor access control to your needs. There is an additional choice of making a project public but restricting translations, which has been requested by several projects.

You can see the possible choices on the UI screenshot:

Weblate overall experience

On Hosted Weblate this feature is currently available only to commercial hosting customers. Projects hosted for free are limited to public visibility only.

Filed under: Debian English SUSE Weblate



Iain R. Learmonth: Automatic Updates

October 10, 2017 17:15, by Planet Debian

The @TorAtlas web application will now also prompt operators to update their relays if they are outdated. pic.twitter.com/HMixwqbBKM

— Tor Atlas (@TorAtlas) October 9, 2017

We have instructions for setting up new Tor relays on Debian. The only time the word “upgrade” is mentioned in those instructions is:

Be sure to set your ContactInfo line so we can contact you if you need to upgrade or something goes wrong.

This isn't great. We should have some decent instructions for keeping your relay up to date too. I've been compiling a set of documentation for enabling automatic updates on various Linux distributions; here's a taste of what I have so far:


Debian

Make sure that unattended-upgrades is installed and then enable the installation of updates (as root):

apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
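
For reference, the dpkg-reconfigure step enables periodic updates via /etc/apt/apt.conf.d/20auto-upgrades, which should end up containing something like:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";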

Fedora 22 or later

Beginning with Fedora 22, you can enable automatic updates via:

dnf install dnf-automatic

In /etc/dnf/automatic.conf set:

apply_updates = yes

Now enable and start automatic updates via:

systemctl enable dnf-automatic.timer
systemctl start dnf-automatic.timer

(Thanks to Enrico Zini I know all about these timer units in systemd now.)
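
You can confirm the timer is scheduled with (my addition, not part of the Fedora instructions):

systemctl list-timers dnf-automatic.timer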

RHEL or CentOS

For CentOS, RHEL, and older versions of Fedora, the yum-cron package is the preferred approach:

yum install yum-cron

In /etc/yum/yum-cron.conf set:

apply_updates = yes

Enable and start automatic updates via:

systemctl enable yum-cron.service
systemctl start yum-cron.service

I'd also like to collect instructions for other distributions (and *BSD and Mac OS). Atlas knows which platform a relay is running on, so in the future there could be a link to platform-specific instructions on how to keep your relay up to date.



Jamie McClelland: Docker in Debian

October 10, 2017 16:07, by Planet Debian

It's not easy getting Docker to work in Debian.

It's not in stable at all:

0 jamie@turkey:~$ rmadison docker.io
docker.io  | 1.6.2~dfsg1-1~bpo8+1 | jessie-backports | source, amd64, armel, armhf, i386
docker.io  | 1.11.2~ds1-5         | unstable         | source, arm64
docker.io  | 1.11.2~ds1-5         | unstable-debug   | source
docker.io  | 1.11.2~ds1-6         | unstable         | source, armel, armhf, i386, ppc64el
docker.io  | 1.11.2~ds1-6         | unstable-debug   | source
docker.io  | 1.13.1~ds1-2         | unstable         | source, amd64
docker.io  | 1.13.1~ds1-2         | unstable-debug   | source
0 jamie@turkey:~$ 

And a problem with runc makes it really hard to get it working on Debian unstable.

These are the steps I took to get it running today (2017-10-10).

Remove runc (allow it to remove containerd and docker.io):

sudo apt-get remove runc

Install docker-runc (now in testing):

sudo apt-get install docker-runc

Fix the containerd package to depend on docker-runc instead of runc:

mkdir containerd
cd containerd
apt-get download containerd
# Unpack the .deb and its control archive
ar x containerd_0.2.3+git20170126.85.aa8187d~ds1-2_amd64.deb
tar -xzf control.tar.gz
# Rewrite the dependency from runc to docker-runc
sed -i s/runc/docker-runc/g control
# Rebuild the control archive and the .deb, then install it
tar -c md5sums control | gzip -c > control.tar.gz
ar rcs new-containerd.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-containerd.deb

Fix the docker.io package to depend on docker-runc instead of runc:

mkdir docker
cd docker
apt-get download docker.io
ar x docker.io_1.13.1~ds1-2_amd64.deb
tar -xzf control.tar.gz
sed -i s/runc/docker-runc/g control
# docker.io also ships maintainer scripts, so re-pack those as well
tar -c {post,pre}{inst,rm} md5sums control | gzip -c > control.tar.gz
ar rcs new-docker.io.deb debian-binary control.tar.gz data.tar.xz
sudo dpkg -i new-docker.io.deb

Symlink docker-runc to runc:

sudo ln -s /usr/sbin/docker-runc /usr/sbin/runc
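
At this point a quick smoke test (my suggestion, not part of the original steps) confirms that the daemon and the renamed runtime work together:

sudo docker run --rm hello-world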

Keep apt-get from upgrading until this bug is fixed:

printf "# Remove when docker.io and containerd depend on docker-runc
# instead of normal runc
# https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=877329
Package: runc
Pin: release *
Pin-Priority: -1

Package: containerd
Pin: release *
Pin-Priority: -1

Package: docker.io
Pin: release *
Pin-Priority: -1" | sudo tee /etc/apt/preferences.d/docker.pref
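
To verify the pins took effect, check that all three packages report a pin priority of -1:

apt-cache policy runc containerd docker.io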

Thanks to coderwall for tips on manipulating deb files.



Lars Wirzenius: Debian and the GDPR

October 10, 2017 15:11, by Planet Debian

GDPR is a new EU regulation for privacy. The name is short for "General Data Protection Regulation" and it covers all organisations that handle personal data of EU citizens and EU residents. It will become enforceable May 25, 2018 (Towel Day). This will affect Debian. I think it's time for Debian to start working on compliance, mainly because the GDPR requires sensible things.

I'm not an expert on GDPR legislation, but here's my understanding of what we in Debian should do:

  • do a privacy impact assessment, to review and document what data we have and collect, and what risks a leak would pose to the people whose personal data it is

  • only collect personal information for specific purposes, and only use the data for those purposes

  • get explicit consent from each person for all collection and use of their personal information; archive this consent (e.g., list subscription confirmations)

  • allow each person to get a copy of all the personal information we have about them, in a portable manner, and let them correct it if it's wrong

  • allow people to have their personal information erased

  • maybe appoint one or more data protection officers (not sure this is required for Debian)

There's more, but let's start with those.

I think Debian has at least the following systems that will need to be reviewed with regards to the GDPR:

  • db.debian.org - Debian project members, "Debian developers"
  • nm.debian.org
  • contributors.debian.org
  • lists.debian.org - at least membership lists, maybe archives
  • possibly irc servers and log files
  • mail server log files
  • web server log files
  • version control services and repositories

There may be more; these are just off the top of my head.

I expect that Debian will mostly be OK, but we can't just assume that.



Reproducible builds folks: Reproducible Builds: Weekly report #128

October 10, 2017 8:08, by Planet Debian

Here's what happened in the Reproducible Builds effort between Sunday October 1 and Saturday October 7, 2017:

Media coverage

Documentation updates

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

32 package reviews were added, 46 were updated and 62 were removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (27)

diffoscope development

strip-nondeterminism development

Rob Browning noticed that strip-nondeterminism was causing serious performance regressions in the Clojure programming language within Debian. After some discussion, Chris Lamb also posted a query to debian-devel in case there were any other programming languages that might be suffering from the same problem.

reprotest development

Versions 0.7.1 and 0.7.2 were uploaded to unstable by Ximin Luo:

  • New features:
    • Add a --auto-build option to try to determine which specific variations cause unreproducibility.
    • Add a --source-pattern option to restrict copying of source_root, and set this automatically in our presets.
  • Usability improvements:
    • Improve error messages in some common scenarios.
      • Giving a source_root or build_command that doesn't exist
      • Using reprotest with default settings after not installing Recommends
    • Output hashes after a successful --auto-build.
    • Print a warning message if we reproduced successfully but didn't vary everything.
  • Fix varying both umask and user_group at the same time.
  • Have dpkg-source extract to a different build dir if varying the build path.
  • Pass --exclude-directory-metadata to diffoscope(1) by default as this is the majority use-case.
  • Various bug fixes to get the basic dsc+schroot example working.

It included contributions already covered by posts of the previous weeks, as well as new ones from:

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.



Vincent Fourmond: Define a function with inline Ruby code in QSoas

October 9, 2017 22:31, by Planet Debian

QSoas can read and execute Ruby code directly, while reading command files, or even at the command prompt. For that, just write plain Ruby code inside a ruby...ruby end block. Probably the most useful possibility is to define elaborate functions directly from within QSoas, or, preferably, from within a script; this is an alternative to defining a function in a completely separate Ruby-only file using ruby-run. For instance, you can define a function for plain Michaelis-Menten kinetics with a file containing:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x)
end
ruby end

This defines the function my_func with three parameters, x, vm and km, with the formula:

my_func(x, vm, km) = vm/(1 + km/x)

You can then test that the function has been correctly defined running for instance:

QSoas> eval my_func(1.0,1.0,1.0)
 => 0.5
QSoas> eval my_func(1e4,1.0,1.0)
 => 0.999900009999

This yields the correct answer: the first command evaluates the function with x = 1.0, vm = 1.0 and km = 1.0. For x equal to km, the result is vm/2 (here 0.5). For x much greater than km, the result is almost vm. You can use the newly defined my_func in any place you would use any Ruby code, such as in the optional argument to generate-buffer, or for arbitrary fits:

QSoas> generate-buffer 0 10 my_func(x,3.0,0.6)
QSoas> fit-arb my_func(x,vm,km)

To redefine my_func, just run the ruby code again with a new definition, such as:

ruby
def my_func(x, vm, km)
  return vm/(1 + km/x**2)
end
ruby end

The previous version is just erased, and all new uses of my_func will refer to your new definition.


See for yourself

The code for this example can be found there. Browse the qsoas-goodies github repository for more goodies!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 2.1. You can download its source code or buy precompiled versions for MacOS and Windows there.



Markus Koschany: My Free Software Activities in September 2017

October 9, 2017 22:18, by Planet Debian

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I sponsored a new release of hexalate for Unit193 and icebreaker for Andreas Gnau. The latter is a reintroduction.
  • New upstream releases this month: freeorion and hyperrogue.
  • I backported freeciv and freeorion to Stretch.

Debian Java

Debian LTS

This was my nineteenth month as a paid contributor and I have been paid to work 15.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 18 September to 24 September I was in charge of our LTS frontdesk. I triaged bugs in poppler, binutils, kannel, wordpress, libsndfile, libexif, nautilus, libstruts1.2-java, nvidia-graphics-drivers, p3scan, otrs2 and glassfish.
  • DLA-1108-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1116-1. Issued a security update for poppler fixing 3 CVEs.
  • DLA-1119-1. Issued a security update for otrs2 fixing 4 CVEs.
  • DLA-1122-1. Issued a security update for asterisk fixing 1 CVE. I also investigated CVE-2017-14099 and CVE-2017-14603. I decided against a backport because the fix was too intrusive and the vulnerable option is disabled by default in Wheezy’s version which makes it a minor issue for most users.
  • I submitted a patch for Debian's reportbug tool. (#878088) During our LTS BoF at DebConf 17 we came to the conclusion that we should implement a feature in reportbug that checks whether the bug reporter wants to report a regression for a recent security update. Usually the LTS and security teams receive word from the maintainer or users who report issues directly to our mailing lists or IRC channels. However, in some cases we were not informed about possible regressions, and the new feature in reportbug shall ensure that we can respond faster to such reports.
  • I started to investigate the open security issues in wordpress and will complete the work in October.

Misc

  • I packaged a new version of xarchiver. Thanks to the work of Ingo Brückl, xarchiver can now handle almost all archive formats in Debian.

QA upload

  • I did a QA upload of xball, an ancient game from the '90s that simulates bouncing balls. It should be ready for another decade at least.

Thanks for reading and see you next time.



Ben Hutchings: Debian LTS work, September 2017

October 9, 2017 16:25, by Planet Debian

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from August. I only worked 12 hours, so I will carry over 9 hours to the next month.

I prepared and released another update on the Linux 3.2 longterm stable branch (3.2.93). I then rebased the Debian linux package onto this version, added further security fixes, and uploaded it (DLA-1099-1).



Antonio Terceiro: pristine-tar updates

October 9, 2017 15:06, by Planet Debian

Introduction

pristine-tar is a tool that is present in the workflow of a lot of Debian people. I adopted it last year after it was orphaned by its creator, Joey Hess. A little after that, Tomasz Buchert joined me, and we are now a functional two-person team.

pristine-tar's goal is to import the content of a pristine upstream tarball into a VCS repository, and to be able to later reconstruct that exact same tarball, bit by bit, based on the contents in the VCS, so we don't have to store a full copy of that tarball. This is done by storing a binary delta file which can be used to reconstruct the original tarball from a tarball produced with the contents of the VCS. Ultimately, we want to make sure that the tarball that is uploaded to Debian is exactly the same as the one that has been downloaded from upstream, without having to keep a full copy of it around if all of its contents are already extracted in the VCS anyway.

The current state of the art, and perspectives for the future

pristine-tar solves a wicked problem, because our ability to reconstruct the original tarball is affected by changes in the behavior of tar and of all of the compression tools (gzip, bzip2, xz), and by which exact options were used when creating the original tarballs. Because of this, pristine-tar currently has a few embedded copies of old versions of compressors, to be able to reconstruct tarballs produced by them, and also relies on an ever-evolving patch to tar that has been carried in Debian for a while.

So basically keeping pristine-tar working is a game of Whac-A-Mole. Joey provided a good summary of the situation when he orphaned pristine-tar.

Going forward, we may need to rely on other ways of ensuring integrity of upstream source code. That could take the form of signed git tags, signed uncompressed tarballs (so that the compression doesn’t matter), or maybe even a different system for storing actual tarballs. Debian bug #871806 contains an interesting discussion on this topic.

Recent improvements

Even if keeping pristine-tar useful in the long term will be hard, too much of Debian's work currently relies on it, so we can't just abandon it. Instead, we keep figuring out ways to improve it. And I have good news: pristine-tar has recently received updates that improve the situation quite a bit.

In order to understand how much better we are getting at it, I created a visualization of the regression test suite results. With the help of data from there, let's look at the improvements made since pristine-tar 1.38, which was the version included in stretch.

pristine-tar 1.39: xdelta3 by default

This was the first release made after the stretch release, and it made xdelta3 the default delta generator for newly imported tarballs. Existing tarballs with deltas produced by xdelta are still supported; this only affects new imports.

Support for multiple delta generators was written by Tomasz, and had already been there since 1.35, but we decided to only flip the switch after xdelta3 support was available in a stable release.

pristine-tar 1.40: improved compression heuristics

pristine-tar uses a few heuristics to produce the smallest possible delta, and this includes trying different compression options. In this release Tomasz included a contribution by Lennart Sorensen to also try the --gnu option, which greatly improved support for rsyncable gzip-compressed files. We can see an example of the type of improvement we got in the regression test suite data for delta sizes for faad2_2.6.1.orig.tar.gz:

In 1.40, the delta produced from the test tarball faad2_2.6.1.orig.tar.gz went down from 800KB, almost the same size as the tarball itself, to 6.8KB.

pristine-tar 1.41: support for signatures

This release saw the addition of support for storage and retrieval of upstream signatures, contributed by Chris Lamb.

pristine-tar 1.42: optionally recompressing tarballs

I had this idea and wanted to try it out: most of our problems reproducing tarballs come from tarballs produced with old compressors, from changes in compressor behavior, or from uncommon compression options being used. What if we could just recompress the tarballs before importing them? Yes, this kind of breaks the “pristine” bit of the whole business, but on the other hand: 1) the contents of the tarball are not affected, and 2) even if the initial tarball is not bit by bit the same as the upstream release, at least future uploads of that same upstream version with Debian revisions can be regenerated just fine.

In some cases, as with the test tarball util-linux_2.30.1.orig.tar.xz, recompressing is what makes reproducing the tarball (and thus importing it with pristine-tar) possible at all:

util-linux_2.30.1.orig.tar.xz can only be imported after being recompressed

In other cases, if the current heuristics can't produce a reasonably small delta, recompressing makes a huge difference. That's the case for mumble_1.1.8.orig.tar.gz:

with recompression, the delta produced from mumble_1.1.8.orig.tar.gz goes from 1.2MB, or 99% of the size of the original tarball, to 14.6KB, or 1% of its size

Recompressing is not enabled by default, and can be enabled by passing the --recompress option. If you are using pristine-tar via a wrapper tool like gbp-buildpackage, you can use the $PRISTINE_TAR environment variable to set options that will affect any pristine-tar invocations.

Also, even if you enable recompression, pristine-tar will only try it if the delta generation fails completely, or if the delta produced from the original tarball is too large. You can control what “too large” means by using the --recompress-threshold-bytes and --recompress-threshold-percent options. See the pristine-tar(1) manual page for details.
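
For example, to enable recompression for every invocation, including those made through gbp, you could set the environment variable as described above (the threshold value here is arbitrary, just for illustration):

# Retry with recompression when the delta exceeds 10% of the tarball size.
export PRISTINE_TAR="--recompress --recompress-threshold-percent 10"
pristine-tar commit ../mumble_1.1.8.orig.tar.gz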



Michal Čihař: stardicter 1.1

October 9, 2017 13:15, by Planet Debian

Stardicter 1.1, the set of scripts to convert some freely available dictionaries to the StarDict format, has been released today. The biggest change is that it now also keeps the source data together with the generated dictionaries. This is good for licensing reasons, and will also make it possible to actually build these as packages within Debian.

Full list of changes:

  • Various cleanups for first stable release.
  • Fixed generating of README for dictionaries.
  • Added support for generating source tarballs.
  • Fixed installation on systems with a non-UTF-8 locale.

As usual, you can install from pip, download the source, or download the generated dictionaries from my website. The package should soon be available in Debian as well.
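
For instance, installing via pip should be a one-liner (assuming the PyPI package name matches the project name):

pip install stardicter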

Filed under: Debian English StarDict



Petter Reinholdtsen: Generating 3D prints in Debian using Cura and Slic3r(-prusa)

October 9, 2017 8:50, by Planet Debian

At my nearby maker space, Sonen, I heard that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux, while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request from 2013 for adding it to Debian, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required for a working Cura in Debian have been uploaded and are waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream; they were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors of Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.