Planet Debian

July 1, 2016 14:34, by valessiobrito

Planet.Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org

Andrew Cater: How to share collaboratively

June 22, 2016 15:19, by Planet Debian

Following on:

When contributing to mailing lists and fora:

  • Contribute constructively - no one likes to be told "You've got a REALLY ugly baby there" or equivalent.
  • Think through what you post: check references and check that it reads clearly and is spelled correctly
  • Add value
When contributing bug reports:
  • Provide as full a description of your hardware and software as you can
  • Answer questions carefully: Ask questions the smart way: http://www.catb.org/esr/faqs/smart-questions.html
  • Be prepared to follow up queries / provide sufficient evidence to reproduce behaviour or provide pathological test cases 
  • Provide a patch if possible: even if it's only pseudocode
When adding to / modifying FLOSS software:
  • Keep pristine sources that you have downloaded
  • Maintain patch series against pristine source
  • Talk to the originators of the software / current maintainers elsewhere
  • Follow upstream style if feasible / a consistent house style if not
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights
When writing new FLOSS software / "freeing" prior commercial/closed code under a FLOSS licence:
  • Make permissions explicit and publish under a well established FLOSS licence 
  • Be generous to potential contributors and collaborators: render them every assistance so that they can help you better
  • Be generous in what you accept: be precise in what you put out
  • Don't produce licence conflicts - check and check again that your software can be distributed.
  • Don't apply inconsistent copyrights: software you write is your copyright at the outset until you assign it elsewhere
  • Contribute documentation / examples
  • Maintain a bugtracker and mailing lists for your software
If you are required to sign a contributor license agreement [CLA]:
  • Ensure that you have the rights you purport to assign
  • Assign the minimum of rights necessary - if you can continue to allow full and free use of your code, do so
  • Meet any required code of conduct [CoC] stipulations in addition to the CLA
Always remember in all of this: just because you understand your code and your working practices doesn't mean that anyone else will.
There is no automatic right to contribution nor any necessary assumption or precondition that collaborators will come forward.
Just because you love your own code doesn't mean that it merits anyone else's interest or that anyone else should value it thereby
"Just because it scratches your itch doesn't mean that it scratches anyone else's - or that it's actually any good / any use to anyone else"

Satyam Zode: GSoC 2016 Week 4 and 5: Reproducible Builds in Debian

June 22, 2016 10:47, by Planet Debian

This is a brief report on my work over the last two weeks with Debian Reproducible Builds.

In week 4, I mostly worked on designing interfaces and tackling different issues related to the argument-completion feature of diffoscope, and in week 5 I worked on hiding .buildinfo files from .changes files.

Update for last week’s activities

  • I researched different diffoscope outputs. In the reproducible-builds testing framework only differences of .buildinfo files are given, but I needed diffoscope outputs for .changes files. Hence, I had to build packages locally using our experimental toolchain. My goal was to generate different outputs and to see how I could hide .buildinfo files from .changes files.
  • I updated the argument-completion patch as per suggestions given by Paul Wise (pabs). The patch has been reviewed by Mattia Rizzolo and Holger Levsen, and merged by Reiner Herrmann (deki) into diffoscope master. This patch closes #826711. Thanks to all for the support.

  • Regarding ignoring .buildinfo files when comparing .changes files, we finally decided to enable this behaviour by default, without adding a command-line option to control it (a rough sketch follows this list).

  • Last week I researched more about .changes and .buildinfo files. After getting guidelines from Lunar I was able to understand the need for this feature. I am in the middle of implementing it.
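
As a rough illustration of the idea only (not the actual diffoscope implementation; the helper name and data layout below are made up), filtering .buildinfo entries out of the file list referenced by a .changes file before comparison could look like this in Python:

    # Hypothetical sketch: drop .buildinfo entries from the list of files
    # referenced by a .changes file before handing them to the comparator.
    def drop_buildinfo(referenced_files):
        """Return the referenced files with any .buildinfo entries removed."""
        return [f for f in referenced_files if not f.endswith(".buildinfo")]

    files_in_changes = [
        "hello_2.10-1_amd64.deb",
        "hello_2.10-1_amd64.buildinfo",
        "hello_2.10-1.dsc",
    ]
    print(drop_buildinfo(files_in_changes))
    # ['hello_2.10-1_amd64.deb', 'hello_2.10-1.dsc']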

Goal for upcoming week:

  • Finish the implementation of hiding .buildinfo from .changes
  • Start thinking about interfaces and discuss different use cases.

I am thankful to Shirish Agarwal for helping me through the visa process. Unfortunately, I won't get a visa until 5th July, so I don't think I will make it to DebConf this year; I will certainly attend DebConf 2017. The good news for me is that I have passed the mid-term evaluations of Google Summer of Code 2016. I will continue my work to improve Debian. I even have post-GSoC plans ready for the Debian project ;)

Have a nice day :)

Andrew Cater: Why I must use Free Software - and why I tell others to do so

June 22, 2016 9:43, by Planet Debian

My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS software trumps commercial software and how this is the only way forward - and this for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.
In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - A Declaration of the Independence of Cyberspace, 1996 - https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or should do if I had my way.
Digging through my emails on the various mailing lists since then: some are deeply technical, though fewer these days; some are Debian-political; most are trying to help people with problems, report successes or, occasionally, offer thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share, and in which some of us have multiple identities from working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.

Martin-Éric Racine: Batch photo manipulation via free software tools?

June 22, 2016 8:12, by Planet Debian

I have a need for batch-processing pictures. My requirements are fairly simple:

  • Resize the image to fit Facebook's preferred 960 pixel box.
  • Insert Copyright, Byline and Bylinetitle into the EXIF data.
  • Optionally, paste my watermark onto a predefined corner of the image.
  • Optionally, adjust the white balance.
  • Rename the file according to a specific syntax.
  • Save the result to a predefined folder.
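
Putting the list above together, here is a rough sketch of the resize, watermark and rename/save steps (assuming the Pillow library; folder names, the naming scheme and the watermark position are placeholders, and the EXIF/IPTC byline fields would still need a separate tool):

    # Sketch only: resize into a 960-pixel box, paste a watermark in the
    # bottom-right corner, and save under a new name in another folder.
    import os
    from PIL import Image

    def process(path, watermark_path, out_dir, prefix="web"):
        img = Image.open(path)
        img.thumbnail((960, 960))                    # fit the 960-pixel box

        mark = Image.open(watermark_path).convert("RGBA")
        pos = (img.width - mark.width - 10, img.height - mark.height - 10)
        img.paste(mark, pos, mark)                   # alpha channel as mask

        new_name = "%s-%s" % (prefix, os.path.basename(path))
        img.convert("RGB").save(os.path.join(out_dir, new_name), "JPEG", quality=90)

    for name in sorted(os.listdir("incoming")):
        if name.lower().endswith((".jpg", ".jpeg")):
            process(os.path.join("incoming", name), "watermark.png", "outgoing")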

Until recently, I was using Phatch to perform all of this. Unfortunately, it cannot edit the EXIF data of my current Lumix camera, whose JPEG files it claims are MPO. I am thus forced to look for other options. Ideally, I would do this via a script inside gThumb (which is my main photo editing software), but I cannot seem to find adequate documentation on how to achieve this.

I am thus very interested in hearing about other options to achieve the same result. Ideas, anyone?

Clint Adams: Only in San Francisco would one brag about this

June 22, 2016 6:46, by Planet Debian

“I dated Appelbaum!” she said.

“I gotta go,” I said.

Gunnar Wolf: Answering to a CACM «Viewpoint»: on the patent review process

June 22, 2016 4:40, by Planet Debian

I am submitting a comment to Wen Wen and Chris Forman's Viewpoint on the Communications of the ACM, titled Economic and business dimensions: Do patent commons and standards-setting organizations help navigate patent thickets?. I believe my comment is worth sharing a bit more openly, so here it goes. Nevertheless, please refer to the original article; it makes very interesting and valid points, and my comment should be taken as an extra note on a great text only!

I was very happy to see an article with this viewpoint published. The article, however, mentions some points I believe should be further stressed as problematic and important. Namely, still in the introduction, after mentioning that patents «are intended to provide incentives for innovation by granting to inventors temporary monopoly rights», the next paragraph continues, «The presence of patent thickets may create challenges for ICT producers. When introducing a new product, a firm must identify patents its product may infringe upon.»

The authors continue by explaining the needed process, but this simple statement should be enough to explain how the patent system is broken and needs repair.

A requisite for patenting an invention was originally that it be «inventive» and «non-obvious». Anything worth being granted a patent should be inventive enough; it should be non-obvious to an expert in the field.

When we see huge bodies of awarded (and upheld) patents falling into the case the authors mention, it becomes clear that the patent applications were not thoroughly researched prior to the grant. Sadly, long gone are the days when the United States Patent and Trademark Office employed minds such as Albert Einstein's; nowadays, the office is more of a rubber-stamping bureaucracy where most patents are awarded, and this very important requisite is left open to litigation: if somebody is found in breach of a patent, they might choose to argue that the patent was obvious to an expert. But, of course, that will probably cost more in legal fees than settling for an agreement with the patent holder.

The fact that in our line of work we must take care to search for patents before releasing any work says a lot about the process. Patents are too easily granted. They should be far stricter; the occurrence of an independent developer mistakenly (and innocently!) breaching a patent should be most unlikely, as patents should only be awarded to truly non-obvious solutions.

Matthew Garrett: I've bought some more awful IoT stuff

June 21, 2016 23:11, by Planet Debian

I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there are lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.
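
For anyone wanting to reproduce the port-22 observation without telnet, a tiny Python sketch does the same banner grab (the address below is just a placeholder for the plug's IP on your own network):

    # Sketch: connect to port 22 and print whatever banner the device offers.
    # 192.168.1.50 is a placeholder; substitute the plug's actual address.
    import socket

    with socket.create_connection(("192.168.1.50", 22), timeout=5) as sock:
        print(sock.recv(128).decode(errors="replace"))  # e.g. an SSH/dropbear banner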

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That plus the fact that every payload length was a multiple of 16 bytes strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the packets were substantially similar and that the encryption key was static.
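
To illustrate why that matters, here is a minimal sketch (assuming the pycryptodome Python package; the key and message are invented, not the plug's): identical 16-byte plaintext blocks always encrypt to identical ciphertext blocks under ECB.

    # ECB leaks structure: the same 16-byte plaintext block always produces
    # the same ciphertext block. Key and message are illustrative only.
    from Crypto.Cipher import AES

    key = b"0123456789abcdef"            # 16-byte AES key (made up)
    block = b"turn_the_plug_on"          # exactly 16 bytes
    ciphertext = AES.new(key, AES.MODE_ECB).encrypt(block * 3)

    chunks = [ciphertext[i:i + 16] for i in range(0, 48, 16)]
    print(chunks[0] == chunks[1] == chunks[2])   # True: patterns survive encryption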

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
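
A small simulation of that failure mode (plain Python standing in for the plug's C code; the fixed seed plays the role of a rand() stream that is never seeded with srand()):

    # With no srand(), every reboot replays the same "random" challenge
    # sequence, so a captured response becomes valid again after a reboot.
    import random

    def challenges_after_reboot(count=5):
        prng = random.Random(1)          # fixed seed simulates an unseeded rand()
        return [prng.randint(0, 2**31 - 1) for _ in range(count)]

    print(challenges_after_reboot() == challenges_after_reboot())   # True: replayable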

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.


Ian Wienand: Zuul and Ansible in OpenStack CI

June 21, 2016 22:16, by Planet Debian

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview here is the same but more focused on the launchers that run the tests.

Overview of OpenStack CI with Zuul and Ansible
  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code, this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily. (A rough sketch of this playbook construction appears after this list.)

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
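
As promised in step 6, here is a very rough sketch of the playbook-construction idea (plain Python plus PyYAML; the task names, paths and test script are invented for illustration and are not the launcher's real playbooks):

    # Illustrative only: glue common setup and teardown tasks around the
    # job's own test script and emit an Ansible playbook for one node.
    import yaml

    def build_playbook(node, test_script):
        tasks = [
            {"name": "Prepare workspace",
             "file": {"path": "/home/jenkins/workspace", "state": "directory"}},
            {"name": "Run the job's test script",
             "shell": test_script},
            {"name": "Pull logs back to the launcher",
             "synchronize": {"src": "/home/jenkins/workspace/logs",
                             "dest": "logs/", "mode": "pull"}},
        ]
        return [{"hosts": node, "tasks": tasks}]

    print(yaml.safe_dump(build_playbook("ubuntu-trusty-node", "./run-tests.sh")))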

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Gunnar Wolf: Relax and breathe...

June 21, 2016 19:30, by Planet Debian

Time passes. I had left several (too many?) pending things to be done in the quiet weeks between the end of the teaching semester and the beginning of my Summer trip to Winter. But Saturday gets closer every moment... and our long trip to the South begins.

Among many other things, I wanted to advance some Debian stuff - both packaging and work on the keyring analysis. I want to contact some people I left pending interactions with, but honestly, that will only come face to face in Cape Town.

As to "real life", I hace too many pending issues at work to even begin with; I hope to get some time at South África todo do some decent UNAM sysadmining. Also, I want to play the idea of using Git for my students' workflow (handing in projects and assignments, at least)... This can be interesting to talk with the Debían colleagues about, actually.

As a Masters student, I'm making good progress, and will probably finish my class work next semester, six months ahead of schedule, but my thesis work so far has progressed far more slowly than I'd like. I at least have a better-defined topic and approach, so I'll start the writing phase soon.

And the personal life? Family? I am more complete and happy than ever before. My life is completely different from two years ago. Yes, that was obvious. But it's also the only thing I can come up with. Having twin babies (when will they make the transition from "babies" to "kids"? No idea... we will find out as it comes) is more than beautiful, more than great. Our life has changed in every possible aspect. And yes, I admire my beloved Regina for all of the energy and love she puts into the babies... Life is asymmetric: I am out for most of the day... Mommy is always there.

As I said, happier than ever before.

Joey Hess: twenty years of free software -- part 2 etckeeper

June 21, 2016 15:24, by Planet Debian

etckeeper was a sleeper success for me. I created it, wrote one blog post about it, installed it on all my computers, and mostly forgot about it, except when I needed to look something up in the git history of /etc it helpfully maintains. It's a minor project.

Then I started getting patches porting it to many other version control systems, and other linux distributions, and fixing problems, and adding features. Mountains and mountains of patches over time.

And then I started hearing about distributions that install it by default. (Though Debian for some reason never did, so I keep having to install it everywhere by hand.)

Writing this blog post, I noticed etckeeper had accumulated enough patches from other people to warrant a new release. That happens pretty regularly.

So it's still a minor project as far as I'm concerned, but quite a few people seem to be finding it useful. So it goes with free software.

Next: twenty years of free software -- part 3 myrepos

Reproducible builds folks: Reproducible builds: week 60 in Stretch cycle

June 21, 2016 8:24, by Planet Debian

What happened in the Reproducible Builds effort between June 12th and June 18th 2016:

Media coverage

GSoC and Outreachy updates

Weekly reports by our participants:

Toolchain fixes

  • texlive-bin/2016.20160513.41080-3 has been uploaded to unstable, featuring support for FORCE_SOURCE_DATE. See the last post for details on it.
  • doxygen/1.4.4-1 has been uploaded to unstable, fixing #822197 (upstream bug), which caused some generated html files to contain unreproducible memory addresses of Python objects used at build time.
  • debhelper/9.20160618 has been uploaded to unstable, fixing #824490, which instructs ant to not save the username of the build user in the generated files. Original patch by Emmanuel Bourg.
  • HW42 reported a long-known (although only internally) bug in our dpkg-buildinfo. This particular bug doesn't affect our current infrastructure, but it's a blocker for having .buildinfo support merged upstream.
  • epydoc/3.0.1+dfsg-13 and 3.0.1+dfsg-14 have been uploaded by Kenneth J. Pronovici, which fix nondeterministic ordering issues in generated documentation and remove memory addresses. Original patches (#825968 and #827416) by Sascha Steinbiss.

With this upload of texlive-bin we decided to stop keeping our patched fork of it, as most of the patches for SOURCE_DATE_EPOCH support had been integrated upstream already, and the last one (making FORCE_SOURCE_DATE default to 1) had been refused. So, we are now going to let the archive be rebuilt against unstable's texlive-bin and see how many packages become unreproducible with this change; once enough data has been collected we will ponder whether FORCE_SOURCE_DATE should be exported by helper tools (such as debhelper) or manually exported by every package that needs it.

(For those wondering: we still recommend always following SOURCE_DATE_EPOCH, and we don't recommend that other projects implement FORCE_SOURCE_DATE…)
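
For context, the usual pattern for honouring SOURCE_DATE_EPOCH is simply to prefer it over the current time for any timestamp embedded in build output; a minimal Python sketch:

    # Use SOURCE_DATE_EPOCH instead of "now" when stamping generated files,
    # so that rebuilding the package later produces identical output.
    import os
    import time

    build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    print("Generated on", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time)))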

With the drop of texlive-bin we now have only three modified packages in our experimental repository.

Reproducible work in other projects

Packages fixed

The following 12 packages have become reproducible due to changes in their build dependencies: django-floppyforms flask-restful hy jets3t kombu llvm-toolchain-3.8 moap python-bottle python-debtcollector python-django-debug-toolbar python-osprofiler stevedore

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Uploads with reproducibility fixes that currently fail to build:

  • ruby2.3/2.3.1-3 by Christian Hofstaedtler, avoids unreproducible rbconfig.rb files by always using bash for building.

Patches submitted that have not made their way to the archive yet:

  • #827109 against asciijump by Reiner Herrmann: sort source files for deterministic linking order.
  • #827112 against boswars by Reiner Herrmann: sort source files for deterministic linking order.
  • #827114 against overgod by Reiner Herrmann: use C locale for sorting source files.
  • #827115 against netpbm by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH while generating output.
  • #827124 against funguloids by Reiner Herrmann: use C locale for sorting files.
  • #827145 against scummvm by Reiner Herrmann: don't embed extra fields in zip archive and build with proper host architecture.
  • #827150 against netpanzer by Reiner Herrmann: sort source files for deterministic linking order.
  • #827172 against reaver by Alexis Bienvenüe: sort object files for deterministic linking order.
  • #827187 against latex2html by Alexis Bienvenüe: iterate deterministic over Perl hashes; honour SOURCE_DATE_EPOCH for output; strip username from output; sort index keys.
  • #827313 against cherrypy3 by Sascha Steinbiss: prevent memory addresses in output.
  • #827361 against matplotlib by Alexis Bienvenüe: honour SOURCE_DATE_EPOCH in output and sort keys while iterating over dict.
  • #827382 against dwarfutils by Reiner Herrmann: fix array size, which caused memory from outside a table to be embedded into output.
  • #827384 against skytools3 by Sascha Steinbiss: use stable sorting order and remove timestamps from documentation.
  • #827419 against ldaptor by Sascha Steinbiss: sort list of input files and prevent home directory from leaking into documentation.
  • #827546 against git-buildpackage by Sascha Steinbiss: replace timestamps in documentation with changelog date; prevent temporary paths in documentation.
  • #827572 against xprobe by Reiner Herrmann: sort list of object files in static library archives.

Package reviews

36 reviews have been added, 12 have been updated and 31 have been removed this week.

17 FTBFS bugs have been reported by Chris Lamb, Santiago Vila and Dominic Hargreaves.

diffoscope development

Satyam worked on argument completion (#826711) for diffoscope.

strip-nondeterminism development

Mattia Rizzolo uploaded strip-nondeterminism 0.019-1~bpo8+1 to jessie-backports.

reprotest development

Ceridwen filed an Intent To Package (ITP) bug for reprotest as #827293.


  • Mattia Rizzolo uploaded pbuilder 0.225 to unstable, providing built-in support for eatmydata. We're planning to use it in armhf and i386 builders where we don't build in tmpfs, to increase the build speed some more.
  • Valery Young reworked the appearance of the package page, hopefully making them more intuitive and usable. In the process she changed the script generating them to use a real templating system, thus improving maintenance for the future.
  • Holger adjusted the scheduler to reschedule packages in state 'depwait' after two days instead of three.
  • Mattia added the bug title next to the bug numbers in the notes.


This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ed Maste and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Hideki Yamane: Debian developers reference update (3.4.18)

June 21, 2016 4:16, by Planet Debian

Thanks to our lovely translators, the Debian developers reference has now added Russian and Italian versions. I hope people who use those languages can look into Debian development more deeply.

I also added fancy CSS for the pages; hope you like it...

We have a looong queue in the BTS, so please help us make it better with a patch (or just an article), rather than just posting your opinion. I cannot squash those myself since I'm not good at English... ;-), so please lend a hand.

And maybe it'd be better to use a date for versioning, IMHO (not 3.4.18 but 20160621).

Russ Allbery: Review: Furiously Happy

June 21, 2016 3:26, by Planet Debian

Review: Furiously Happy, by Jenny Lawson

Publisher: Flatiron
Copyright: September 2015
ISBN: 1-250-07700-1
Format: Hardcover
Pages: 329

Jenny Lawson, who blogs as The Bloggess, is best known for a combination of being extremely funny and being extremely open about her struggles with mental illness (a combination of anxiety and depression, alongside a few other problems). Her first book primarily told the story of her family, childhood, and husband. Furiously Happy is a more random and disconnected book, but insofar as it has a theme, it's about surviving depression, anxiety, and other manifestations of your brain being an asshole.

I described Lawson's previous book, Let's Pretend This Never Happened, as the closest thing I've found to a stand-up comedy routine in book form. Furiously Happy is very similar, but it lacks the cohesiveness of a routine. Instead, it feels like a blog collection: a pile of essays with only some vague themes and not a lot of continuity from essay to essay.

This doesn't surprise me. Second books are very different than first books, particularly second books by someone whose writing focus is not writing books, and particularly for non-fiction. I feel like Let's Pretend This Never Happened benefited from drawing on Lawson's full life experience to form the best book she could write. When that became wildly popular, everyone of course wanted a second book, including me. But when the writing is this personal, the second book is, out of necessity, partly leftovers. Lawson's recent experiences don't generate as much material as her whole life up to the point of the first book.

That said, there is a bit of a theme, and the title fits it. Early in the book, Lawson describes how, after the death of a friend and a bout of depression, she decided to be furiously, vehemently happy to get back at the universe, to spite its attempts to destroy her mood. It's one of the best bits in this book. The surrounding philosophy is about embracing the moment, enjoying the hell out of everything that one can enjoy, and taking a lot of chances.

A lot of the stories in this book come after the beginning of Lawson's fame and popularity. She has book tours, a vacation tour of Australia, and a community of people from her blog. That, of course, doesn't make the depression and anxiety any better; indeed, it provides a lot of material for her anxiety to work with. Lawson talks a lot about surviving, about how important that community is to her, about not believing your brain when it lies to you. This isn't as uniformly funny as her first book, and sometimes it feels a bit too much like an earnest pep talk. But there are also moments of insightful life philosophy mixed into the madcap adventures and stream-of-consciousness wild associations.

Lawson also does for anxiety what Allie Brosh does for depression: make the severe form of it relatable to people who have not suffered from it. I was particularly struck by her description of flying: the people around her are getting nervous and anxious as the plane starts to take off, and she's finally able to relax because her anxiety focused on all the things she had to do in order to get onto the right plane at the right time. Once she didn't have to make any decisions or do anything other than sit in one place, her anxiety let go. I don't have any type of clinical anxiety, but I was able to identify with that moment of relief and its contrast with anxiety in a deeper way than with other descriptions.

Furiously Happy is a bit more serious and earnest, and I'm not sure it worked as well. I liked Lawson's first book better; this felt more like a blog archive. But she's still funny, entertaining, and delightful, and I'm happy to support her with a book purchase. Start with either her blog or Let's Pretend This Never Happened if you're new to Lawson, but if you're already a fan, here's more of her writing.

Rating: 7 out of 10

Joey Hess: twenty years of free software -- part 1 ikiwiki

June 20, 2016 18:50, by Planet Debian

I'm forty years old. I've been developing free software for twenty years.

A decade ago, I wrote a series of posts about my first ten years of free software, looking back over projects I'd developed. These retrospectives seem even more valuable in retrospect; there are things in the old posts that jog my memory, and other details I've forgotten by now.

So, I'm doing it again. Over the next two weeks (with a week off in the middle for summer vacation), I'll be writing one post each day about a free software project I've developed in the past decade.

We begin with Ikiwiki. I started it 10 years ago, and still use it to this day; it's the engine that builds this website, and nearly all my other websites, as well as wikis and websites belonging to tons and tons of other projects and users, like NetBSD, X.org, Freedesktop.org and FreedomBox.

Indeed, I'm often reading a website and find myself wondering "hey.. is this using Ikiwiki?", and glance at the html and find that yes, it is. So, Ikiwiki is a reasonably successful and widely used piece of software, at least in its niche.

More important to me, it was a static site generator before we knew what those were. It wasn't the first, but it broke plenty of new ground. I'm particularly proud of the way it combines a wiki with blogging support seamlessly, and the incremental updating of the static pages including updating wikilinks and backlinks. Some of these and other features are still pretty unique to Ikiwiki despite the glut of other static site generators available now.

Ikiwiki is written in Perl, which was great for getting lots of other contributions (including many of its 113 plugins), but has also held it back some lately. There are fewer Perl programmers these days. And over the past decade, concurrency has become very important, but Ikiwiki's implementation is stubbornly single-threaded, and multithreading such a Perl program is a losing proposition. I occasionally threaten to rewrite it in Haskell, but I doubt I will.

Ikiwiki has several developers now, and I'm the least active of them. I stepped back because I can't write Perl very well anymore, and am mostly very happy with how Ikiwiki works, so only pop up now and then when something annoys me.

Next: twenty years of free software -- part 2 etckeeper