Planet Debian

Planet.Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org


Martin-Éric Racine: Debian within a Windows partition?

August 7, 2016 17:18, by Planet Debian

A few years ago, I remember that Ubuntu had a trick that allowed the distribution to be installed as one large file within a Windows partition. Does the same thing exist to install Debian?



Dirk Eddelbuettel: littler 0.3.1

August 7, 2016 13:45, by Planet Debian


The second release of littler as a CRAN package is now available, continuing the project's more than ten-year history since Jeff started it in the summer of 2006 and I joined a few weeks later.

littler is the first command-line interface for R and predates Rscript. It is still faster, and in my very biased eyes better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).
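
Both invocation styles are quick to try. A minimal sketch, assuming the Debian littler package (which installs the binary as /usr/bin/r):

echo 'cat(pi^2, "\n")' | r     # pipe an R expression into r

cat > pisq.r <<'EOF'           # the same thing as a shebang script
#!/usr/bin/env r
cat(pi^2, "\n")
EOF
chmod +x pisq.r
./pisq.r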

This release brings us fixes and enhancements from three other contributors, a couple new example scripts, more robust builds, extended documentation and more -- see below for details.

The NEWS file entry is below.

Changes in littler version 0.3.1 (2016-08-06)

  • Changes in examples

    • install2.r now passes on extra options past -- to R CMD INSTALL (PR #37 by Steven Pav)

    • Added rcc.r to run rcmdcheck::rcmdcheck()

    • Added (still simple) render.r to render (R)markdown

    • Several examples now support the -x or --usage flag to show extended help.

  • Changes in build system

    • The AM_LDFLAGS variable is now set and used too (PR #38 by Mattias Ellert)

    • Three more directories, used when an explicit installation directory is set, are excluded (also #38 by Mattias)

    • Travis CI is now driven via run.sh from our fork, and deploys all packages as .deb binaries using our PPA where needed

  • Changes in package

    • SystemRequirements now mentions the need for libR, i.e. an R built with a shared library so that we can embed R.

    • The docopt and rcmdcheck packages are now suggested, and added to the Travis installation.

    • A new helper function r() is now provided and exported so that the package can be imported (closes #40).

    • URL and BugReports links were added to DESCRIPTION.

  • Changes in documentation

    • The help output for installGithub.r was corrected (PR #39 by Brandon Bertelsen)

Full details for the littler release are provided as usual at the ChangeLog page.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, and will soon also be available as Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.



Sean Whitton: git-push-all

August 6, 2016 22:45, by Planet Debian

I maintain Debian packages for several projects which are hosted on GitHub. I have a master packaging branch containing both upstream’s code, and my debian/ subdirectory containing the packaging control files. When upstream makes a new release, I simply merge their release tag into master: git merge 1.2.3 (after reviewing the diff!).

Packaging things for Debian turns out to be a great way to find small bugs that need to be fixed, and I end up forwarding a lot of patches upstream. Since the projects are on GitHub, that means forking the repo and submitting pull requests. So I end up with three remotes:

origin: the Debian git server
upstream: upstream’s GitHub repo from which I’m getting the release tags
fork: my GitHub fork of upstream’s repo, where I’m pushing bugfix branches

I can easily push individual branches to particular remotes. For example, I might say git push -u fork fix-gcc-6. However, it is also useful to have a command that pushes everything to the places it should be: pushes bugfix branches to fork, my master packaging branch to origin, and definitely doesn’t try to push anything to upstream (recently an upstream project gave me push access because I was sending so many patches, and then got a bit annoyed when I pushed a series of Debian release tags to their GitHub repo by mistake).

I spent quite a lot of time reading git-config(1) and git-push(1), and came to the conclusion that there is no combination of git settings and a push command that does the right thing in all cases. Candidates, and why they’re insufficient:

git push --all
I thought about using this with the remote.pushDefault and branch.*.pushRemote configuration options. The problem is that git push --all pushes to only one remote, and it selects it by looking at the current branch. If I ran this command for all remotes, it would push everything everywhere.
git push <remote> : for each remote
This is the “matching push strategy”. It will push all branches that already exist on the remote with the same name. So I thought about running this for each remote. The problem is that I typically have different master branches on different remotes. The fork and upstream remotes have upstream’s master branch, and the origin remote has my packaging branch.

I wrote a Perl script implementing git push-all, which does the right thing. As you will see from the description at the top of the script, it uses remote.pushDefault and branch.*.pushRemote to determine where it should push, falling back to pushing to the remote the branch is tracking. It won’t push anything when all three of these are unspecified, and more generally, it won’t create new remote branches except in the case where the branch-specific setting branch.*.pushRemote has been specified. Magit makes it easy to set remote.pushDefault and branch.*.pushRemote.
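
For reference, those two settings can also be made from the command line; the branch and remote names below are just the ones from the example above:

git config remote.pushDefault origin          # repository-wide default push remote
git config branch.fix-gcc-6.pushRemote fork   # per-branch override for a bugfix branch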

I have this in my ~/.mrconfig:

git_push = git push-all

so that I can just run mr push to ensure that all of my work has been sent where it needs to be (see myrepos).

#!/usr/bin/perl

# git-push-all -- intelligently push most branches

# Copyright (C) 2016 Sean Whitton
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at
# your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# Prerequisites:

# The Git::Wrapper, Config::GitLike, and List::MoreUtils perl
# libraries.  On a Debian system,
#     apt-get install libgit-wrapper-perl libconfig-gitlike-perl \
#         liblist-moreutils-perl

# Description:

# This script will try to push all your branches to the places they
# should be pushed, with --follow-tags.  Specifically, for each branch,
#
# 1. If branch.pushRemote is set, push it there
#
# 2. Otherwise, if remote.pushDefault is set, push it there
#
# 3. Otherwise, if it is tracking a remote branch, push it there
#
# 4. Otherwise, exit non-zero.
#
# If a branch is tracking a remote that you cannot push to, be sure to
# set at least one of branch.pushRemote and remote.pushDefault.

use strict;
use warnings;
no warnings "experimental::smartmatch";

use Git::Wrapper;
use Config::GitLike;
use List::MoreUtils qw{ uniq apply };

my $git = Git::Wrapper->new(".");
my $config = Config::GitLike->new( confname => 'config' );
$config->load_file('.git/config');

my @branches = apply { s/[ \*]//g } $git->branch;
my @allBranches = apply { s/[ \*]//g } $git->branch({ all => 1 });
my $pushDefault = $config->get( key => "remote.pushDefault" );

my %pushes;

foreach my $branch ( @branches ) {
    my $pushRemote = $config->get( key => "branch.$branch.pushRemote" );
    my $tracking = $config->get( key => "branch.$branch.remote" );

    if ( defined $pushRemote ) {
        print "I: pushing $branch to $pushRemote (its pushRemote)\n";
        push @{ $pushes{$pushRemote} }, $branch;
    # don't push unless it already exists on the remote: this script
    # avoids creating branches
    } elsif ( defined $pushDefault
              && "remotes/$pushDefault/$branch" ~~ @allBranches ) {
        print "I: pushing $branch to $pushDefault (the remote.pushDefault)\n";
        push @{ $pushes{$pushDefault} }, $branch;
    } elsif ( !defined $pushDefault && defined $tracking ) {
        print "I: pushing $branch to $tracking (probably to its tracking branch)\n";
        push @{ $pushes{$tracking} }, $branch;
    } else {
        die "E: couldn't find anywhere to push $branch";
    }
}

foreach my $remote ( keys %pushes ) {
    my @branches = @{ $pushes{$remote} };
    system "git push --follow-tags $remote @branches";
    exit 1 if ( $? != 0 );
}


Mirco Bauer: Ethereum GPU Mining on Linux How-To

August 6, 2016 22:35, by Planet Debian

TL;DR

Install/use Debian 8 or Ubuntu 16.04, then execute:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update
sudo apt-get install ethereum ethminer
geth account new
# copy long character sequence within {}, that is your <YOUR_WALLET_ADDRESS>
# if you lose the passphrase, you lose your coins!
sudo apt-get install linux-headers-amd64 build-essential
chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run
ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200
echo done

My Attention Span is > 60 seconds

Ethereum is a crypto currency similar to Bitcoin, as it is based on blockchain technology. Ethereum is not just another Bitcoin clone, though, since it has an additional feature called Smart Contracts that makes it unique and very promising. I am not going into the details of how Ethereum works; you can find those in great detail on the Internet. This post is about Ethereum mining. Mining is how crypto coins are created. You need to spend computing time to get coins out. At the beginning CPU mining was sufficient, but as the Ethereum network difficulty has increased you need to use GPUs, as they can calculate at a much higher hashrate than a general-purpose CPU.

About 2 months ago I bought a new gaming rig, with a Nvidia GTX 1070 so I can experience virtual-reality gaming with a HTC Vive at a great framerate. As it turns out modern graphics cards are very good at hashing so I gave it a spin.

Initially I did this mining setup with Windows 10, as that is the operating system on my gaming rig. If you want to do Ethereum mining using your GPU, then you really want to use Linux. On Windows the GTX 1070 produced a hashrate of 6 MH/s (megahashes per second) while the same hardware does 25 MH/s on Linux. Switching from Windows to Linux thus roughly quadrupled the hashrate. Sounds good? Keep reading and follow this guide.

You have to pick a Linux distro to use for mining. As I am a Debian developer, all my systems run Debian, which is what I am also using for this guide. The same procedure can be done for Ubuntu as it is similar enough. For other distros you have to substitute the steps yourself. So I assume you already have Debian 8 or Ubuntu 16.04 installed on your system.

Install Ethereum Software

First we need the geth tool, which is the main Ethereum "client". Ethereum is really a peer-to-peer network, which means each node is a server and a client at the same time. A node that contains the complete blockchain history in a database is called a full node. For this guide you don't need to run a full node, as mining pools do that for you. We still need geth to create the private key of your Ethereum wallet. After all, we have to receive the coins we are mining somewhere ;)

Add the Ethereum APT repository using these commands:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update

On Debian 8 (on Ubuntu you can skip this) you need to replace the repository name with this command:

sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update

Install ethereum, ethminer and geth:

sudo apt-get install ethereum ethminer geth

Create Ethereum Wallet

A wallet is where coins are "stored". They are not really stored in the wallet, because the wallet is just a private key that nobody else has. The balance of that wallet is visible to everyone using the blockchain database. And this is what full nodes do: they contain and distribute the database to all other peers. So use this command to create your first private key for your wallet:

geth account new

Be aware that this passphrase protects the private key of your wallet. Anyone who has access to that file and knows your passphrase will have full control over your coins. Also do not forget the passphrase: if you do, you lose all your coins!

The output of "geth account new" shows a long character/number sequence quoted in {}. This is your wallet address, and you should write it down, because if someone wants to send you money, it goes to that address. We will use it for the mining pool later.
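
If you ever need to look the address up again, geth can print the accounts it manages (this is the same geth tool we just installed):

geth account list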

Install (proprietary) nvidia driver

For OpenCL to work with Nvidia graphics cards, like my GTX 1070, you need to install the proprietary driver from Nvidia. If you have an older card, maybe the open-source drivers will work for you. For the Nvidia Pascal cards (the 10xx series) you will need this driver package.

After you have agreed to the terms, download the NVIDIA-Linux-x86_64-367.35.run file. But before we can use that installer we need to install some dependencies it needs, as it will have to compile a Linux kernel module for you. Install the dependencies using this command:

sudo apt-get install linux-headers-amd64 build-essential

Now we can make the installer executable and run it like this:

chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run

If that step completed without error, then we should be able to run the mining benchmark!

ethminer -M -G

The -M means "run benchmark" and the -G is for GPU mining. The first time you run it, it will create a DAG file, and that will take a while; for me it took about 12 minutes on my GTX 1070. After that it should show an inner mean hashrate. H/s is hashes per second, KH/s is kilohashes per second (1,000 H/s) and MH/s is megahashes per second (1,000 KH/s). I saw numbers around 25-30 MH/s, but for real mining you will see an average that is a balanced number rather than a min/max range.

Pick Ethereum Network

Now it gets serious: you need to decide two things. First, which Ethereum network you want to mine for, and second, which pool to use.

Ethereum has two networks: one is called Ethereum One or Core, while the other is called Ethereum Classic. Ethereum made a hard fork to undo the consequences of a software bug in the DAO. The DAO is a smart contract for a decentralized organization. Because of that bug, a blackhat was able to obtain money from the DAO. The Ethereum developers held a poll and decided that the consequences would be undone. Not everyone agreed, so the old network stayed alive and is now called Ethereum Classic, or ETC for short. The hard-forked network kept the original short name, ETH.

This is important to understand for mining, because the hashing difficulty differs hugely between ETH and ETC. As of writing, the ETC network hashrate is about 20% of ETH's. Thus you need less computing time to get ETC coins and more time to get ETH coins. Put differently, ETC mining is currently more profitable.

Pick a Pool

Hmmmm, I want a swimming pool, thanks! Just kidding... You can mine without a pool; that is called solo mining, but you will get less reward. A mining pool is multiple computers working on the same block to find a solution quicker than others. The pool has an aggregated hashrate that is higher than any solo miner's. Each block found by anyone in the pool is rewarded to everyone in the pool. The current reward of 5 ether per block gets split in proportion to the hashrate each member provides (minus the pool fee). So while you get less for a found block, you have a steady, lower income rate instead of a higher reward with a smaller chance of finding a block in time. Simply said: you have to find a new block faster than the others to receive the reward.
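
As a rough worked example with made-up numbers: if your card contributes 25 MH/s to a pool with a combined 10,000 MH/s, your share of each 5 ether block reward is 25 / 10000 * 5 = 0.0125 ether, before the pool fee is deducted. You can check the arithmetic on the shell:

echo "25 / 10000 * 5" | bc -l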

If you want to mine Ethereum Classic (ETC) use one of the pools listed here (at the bottom of the page).

If you want to mine Ethereum One / Core (ETH) use one of the pools listed here.

Run ethminer

The instruction page of the pool website usually says how to start the miner program, but here is an example of the pool that I use (because pony!):

ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200

Profit

If this guide was helpful for you, you can tip me at ethereum:0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 or simply run the ethminer using my wallet address for a day or two:

ethminer -G -F http://yolo.ethclassic.faith:9999/0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 --farm-recheck 200

Happy mining!



Robert Edmonds: Cable modems: Arris SB6190 vs. Netgear CM600

August 6, 2016 21:52, by Planet Debian

Recently I activated new cable ISP service at my home and needed to purchase a cable modem. There were only a few candidate devices that supported at least 24 downstream channels (preferably 32), and did not contain an integrated router or access point.

The first modem I tried was the Arris SB6190, which supports 32 downstream channels. It is based on the Intel Puma 6 SoC, and looking at an older release of the SB6190 firmware source reveals that the device runs Linux. This device, running the latest 9.1.93N firmware, goes into a failure mode after several days of uptime which causes it to drop 1-2% of packets. Here is a SmokePing graph that measures latency to my ISP's recursive DNS server, showing the transition into the “degraded” mode:

SmokePing Arris SB6190 Firmware 9.1.93N

It didn't drop packets at random, though. Some traffic would be deterministically dropped, such as the parallel A/AAAA DNS lookups generated by the glibc DNS stub resolver. For instance, in the following tcpdump output:

[1] 17:31:46.989073 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[2] 17:31:46.989102 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[3] 17:31:47.020423 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[4] 17:31:51.993680 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[5] 17:31:52.025138 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[6] 17:31:52.025282 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[7] 17:31:52.056550 IP 75.75.75.75.53 > [My IP].50775: 14987 2/0/0 CNAME comcast6.g.comcast.net., […]

Packets [1] and [2] are the A and AAAA queries being initiated in parallel. Note that they both use the same 4-tuple of (Source IP, Destination IP, Source Port, Destination Port), but with different DNS IDs. Packet [3] is the response to packet [1]. The response to packet [2] never arrives, and five seconds later, the glibc stub resolver times out and retries in single-request mode, which performs the A and AAAA queries sequentially. Packets [4] and [5] are the type A query and response, and packets [6] and [7] are the AAAA query and response.

The Arris SB6190 running firmware 9.1.93N would consistently interfere with these parallel DNS requests, but only when operating in its “degraded” mode. It also didn't matter whether glibc was configured to use an IPv4 or IPv6 nameserver, or which nameserver was being used. Power cycling the modem would fix the issue for a few days.
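
For what it's worth, glibc's resolver has a single-request option for resolv.conf that forces the sequential behaviour from the start; something like the following would have been a crude workaround rather than a fix:

echo 'options single-request' | sudo tee -a /etc/resolv.conf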

My ISP offered to downgrade the firmware on the Arris SB6190 to version 9.1.93K. This firmware version doesn't go into a degraded mode after a few days, but it does exhibit higher latency, and more jitter:

SmokePing Arris SB6190 Firmware 9.1.93K

It seemed unlikely that Arris would fix the firmware issues in the SB6190 before the end of my 30-day return window, so I returned the SB6190 and purchased a Netgear CM600. This modem appears to be based on the Broadcom BCM3384 and looking at an older release of the CM600 firmware source reveals that the device runs the open source eCos embedded operating system.

The Netgear CM600 so far hasn't exhibited any of the issues I found with the Arris SB6190 modem. Here is a SmokePing graph for the CM600, which shows median latency about 1 ms lower than the Arris modem:

SmokePing Netgear CM600

It's not clear which company is to blame for the problems in the Arris modem. Looking at the DOCSIS drivers in the SB6190 firmware source reveals copyright statements from ARRIS Group, Texas Instruments, and Intel. However, I would recommend avoiding cable modems produced by Arris in particular, and cable modems based on the Intel Puma SoC in general.



Norbert Preining: Debian/TeX Live 2016.20160805-1

August 6, 2016 21:31, by Planet Debian

TUG 2016 is over, and I have returned from a wonderful trip to Toronto and Maine. High time to release a new checkout of the TeX Live packages. After that I will probably need some time before another checkout, as there are a lot of plans on the table: upstream created a new collection, which means a new package in Debian, which needs to go through NEW, and I am also planning to integrate tex4ht to give it an update. Help greatly appreciated here.


This package also sees the (third) revision of how the config files for pdftex and luatex are structured; things should have settled down now. Hopefully this will close some of the issues that have appeared.

New packages

biblatex-ijsra, biblatex-nottsclassic, binarytree, diffcoeff, ecgdraw, fvextra, gitfile-info, graphics-def, ijsra, mgltex, milog, navydocs, nodetree, oldstandardt1, pdflatexpicscale, randomlist, texosquery

Updated packages

2up, acmart, acro, amsmath, animate, apa6, arabluatex, archaeologie, autobreak, beebe, biblatex-abnt, biblatex-gost, biblatex-ieee, biblatex-mla, biblatex-source-division, biblatex-trad, binarytree, bxjscls, changes, cloze, covington, cs, csplain, csquotes, csvsimple, datatool, datetime2, disser, dvipdfmx, dvips, emisa, epstopdf, esami, etex-pkg, factura, fancytabs, forest, genealogytree, ghsystem, glyphlist, gost, graphics, hyperref, hyperxmp, imakeidx, jadetex, japanese-otf, kpathsea, latex, lstbayes, luatexja, mandi, mcf2graph, mfirstuc, minted, oldstandard, optidef, parnotes, philosophersimprint, platex, protex, pst-pdf, ptex, pythontex, readarray, reledmac, sepfootnotes, sf298, skmath, skrapport, stackengine, sttools, tcolorbox, tetex, texinfo, texlive-docindex, texlive-es, texlive-scripts, thesis-ekf, tools, toptesi, tudscr, turabian-formatting, updmap-map, uplatex, uptex, velthuis, xassoccnt, ycbook.

Enjoy.



Dirk Eddelbuettel: RcppStreams 0.1.1

August 6, 2016 2:54, by Planet Debian


A maintenance release of RcppStreams is now on CRAN. RcppStreams brings the excellent Streamulus C++ template library for event stream processing to R.

Streamulus, written by Irit Katriel, uses very clever template meta-programming (via Boost Fusion) to implement an embedded domain-specific event language created specifically for event stream processing.

This release updates the compilation standard to C++11 per CRAN's request as this helps with both current and upcoming compiler variants. A few edits were made to DESCRIPTION and README.md, the Travis driver file was updated, but no new code was added.

The NEWS file entries follows below:

Changes in version 0.1.1 (2016-08-05)

  • Compilation is now done using C++11 standards per request of CRAN to help with an array of newer (pre-release) and older compilers

  • The Travis CI script was updated to use run.sh from our fork; it now also installs all dependencies as binary .deb files.

  • The README.md was updated with additional badges.

  • The DESCRIPTION file now has URL and BugReports entries.

Courtesy of CRANberries, there is also a copy of the DESCRIPTION file for this release. More detailed information is on the RcppStreams page and of course on the Streamulus page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.



Petter Reinholdtsen: Sales number for the Free Culture translation, first half of 2016

August 5, 2016 20:45, by Planet Debian

As my regular readers probably remember, last year I published a French and a Norwegian translation of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations, using DocBook and po4a, I also recreated the English original. And because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project; I just earn the warm fuzzy feeling that the text is available for a wider audience and more people can learn why the Creative Commons is needed.

Today, just for fun, I had a look at the sales numbers over at Lulu.com, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and the Norwegian editions, despite the fact that it has been available in English since it was first published. In total, 24 paper books were sold at USD 19.99 each between 2016-01-01 and 2016-07-31:

Title / language Quantity
Culture Libre / French 3
Fri kultur / Norwegian 7
Free Culture / English 14

The books are available both from Lulu.com and from large book stores like Amazon and Barnes & Noble. Most of the revenue, around USD 10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu.com; the other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu. Lulu.com also tells me that the revenue sent so far this year is USD 101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10-year-old book or not. But it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.

The ebook edition is available for free from Github.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.



Arturo Borrero González: Spawning a new blog with jekyllrb

August 5, 2016 5:00, by Planet Debian


I have been delighted with git for several years now. It's a very powerful tool and I use it every day.
I try to use git in all possible tasks: bind servers, configurations, firewalls, and other personal stuff.

However, there has been always a thing in my git-TODO: a blog managed with git.

After a bit of searching, I found an interesting technology: jekyllrb hosted at github pages. Jekyll looked easy to manage and easy to learn for a newbie like me.
There are some very good looking blogs out there using this combination, for example: https://rsms.me/

But I was too lazy to migrate this 'ral-arturo' blog from Blogger to Jekyll, so I decided to create a new blog from scratch.

This time, the new blog is written in Spanish and is about adventures, nature, travels and outdoor sports.
Perhaps you noticed this article about the Mulhacen mountain (BTW, we did it! :-)).
The new blog is called alfabravo.org.

I like the workflow with git & jekyll & github:

  • clone the repository
  • write a new post in markdown
  • read it and correct it locally with 'jekyll serve'
  • commit and push to github
  • done!
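
In shell terms the loop looks roughly like this (the repository name is a placeholder, and jekyll serve previews the site at http://127.0.0.1:4000 by default):

git clone git@github.com:USER/USER.github.io.git
cd USER.github.io
$EDITOR _posts/2016-08-05-new-post.md    # write the post in markdown
jekyll serve                             # proof-read it locally
git add _posts/2016-08-05-new-post.md
git commit -m 'new post'
git push origin master                   # github pages rebuilds the site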

Who knows, perhaps this 'ral-arturo' blog ends up being migrated to the new system as well.



Steve Kemp: Using the compiler to help you debug segfaults

August 5, 2016 2:15, by Planet Debian

Recently somebody reported that my console-based mail-client was segfaulting when opening an IMAP folder, and then when they tried with a local Maildir-hierarchy the same fault was observed.

I couldn't reproduce the problem at all: neither my development host (read "my personal desktop") nor my mail-host had been crashing, and both have been in use to read my email for several months.

Debugging crashes with no backtrace, or real hint of where to start, is a challenge. Even when downloading the same Maildir samples I couldn't see a problem. It was only when I decided to see if I could add some more diagnostics to my code that I came across a solution.

My intention was to make it easier to receive a backtrace, by adding more compiler options:

  -fsanitize=address -fno-omit-frame-pointer

I added those options and my mail-client immediately started to segfault on my own machine(s), almost as soon as it started. Ultimately I found three buggy pieces of code where I was allocating C++ objects and passing them to the Lua stack, a pretty fundamental part of the code. Once I'd tracked down the broken areas and fixed them, the user was happy, and I was happy too.
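
If your project's Makefile honours the usual flag variables, switching the sanitizer on is a one-liner; a sketch, not the actual build recipe of my mail-client:

make clean
make CXXFLAGS="-g -fsanitize=address -fno-omit-frame-pointer" LDFLAGS="-fsanitize=address"
./your-program    # AddressSanitizer aborts with a backtrace at the first invalid access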

It's interesting that I've been running for over a year with these bogus things in place, which "just happened" not to crash for me or anybody else. In the future I'll be adding these options to more of my C-based projects, as there seems to be virtually no downside.

In related news my console editor has now achieved almost everything I want it to, having gained:

  • Syntax highlighting via Lua + LPEG
  • Support for TAB completion of Lua-code and filenames.
  • Bookmark support.
  • Support for setting the mark and copying/cutting regions.

The only outstanding feature, which is a biggy, is support for Undo which I need to add.

Happily no segfaults here, so far..



Joey Hess: keysafe

August 5, 2016 0:24, by Planet Debian

Have you ever thought about using a gpg key to encrypt something, but didn't due to worries that you'd eventually lose the secret key? Or maybe you did use a gpg key to encrypt something and lost the key. There are nice tools like paperkey to back up gpg keys, but they require things like printers, and a secure place to store the backups.

I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc. I know this is a missing piece in the git-annex assistant, which makes it easy to generate a gpg key to encrypt your data, but can't help you back up the secret key.

So, I'm thinking about storing secret keys in the cloud. Which seems scary to me, since when I was a Debian Developer, my gpg key could have been used to compromise millions of systems. But this is not about developers, it's about users, and so trading off some security for some ease of use may be appropriate. Especially since the alternative is no security. I know that some folks back up their gpg keys in the cloud using DropBox.. We can do better.

I've thought up a design for this, called keysafe. The synopsis of how it works is:

The secret key is split into three shards, and each is uploaded to a server run by a different entity. Any two of the shards are sufficient to recover the original key. So any one server can go down and you can still recover the key.

A password is used to encrypt the key. For the servers to access your key, two of them need to collude together, and they then have to brute force the password. The design of keysafe makes brute forcing extra difficult by making it hard to know which shards belong to you.

Indeed the more people that use keysafe, the harder it becomes to brute-force anyone's key!
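
If you want a feel for the splitting idea before keysafe exists, the ssss tool (Debian package ssss) does plain 2-of-3 Shamir secret sharing; this is only an illustration of the concept, not keysafe itself:

sudo apt-get install ssss
ssss-split -t 2 -n 3     # prompts for a secret, prints three shares
ssss-combine -t 2        # paste any two of the shares to recover the secret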

I could really use some additional reviews and feedback on the design by experts.


This project is being sponsored by Purism and by my Patreon supporters. By the way, I'm 15% of the way to my Patreon goal after one day!



Phil Hands: EOMA68: > $60k pledged on crowdsupply.com

August 4, 2016 22:04, by Planet Debian

crowdsupply.com has a campaign to fund production of EOMA68 computer cards (and associated peripherals) which recently passed the $60,000 mark.

If you were at DebConf13 in Switzerland, you may have seen me with some early prototypes that I had been lent to show people.

The inside of the A20 EOMA68 computer board

The concept: build computers on a PCMCIA physical form-factor, thus confining most of the hardware and software complexity to a single replaceable item, and decoupling the design of the outer device from the chips that drive it.

EOMA68 pack-shot

There is a lot more information about this at crowdsupply, and at http://rhombus-tech.net/ -- I hope people find it interesting enough to sign up.

BTW, while I host Rhombus Tech's website as a favour to Luke Leighton, I have no financial links with them.



Daniel Kahn Gillmor: Changes for GnuPG in Debian

August 3, 2016 21:55, by Planet Debian

The GNU Privacy Guard (GnuPG) upstream team maintains three branches of development: 1.4 ("classic"), 2.0 ("stable"), and 2.1 ("modern").

They differ in various ways: software architecture, supported algorithms, network transport mechanisms, protocol versions, development activity, co-installability, etc.

Debian currently ships two versions of GnuPG in every maintained suite -- in particular, /usr/bin/gpg has historically always been provided by the "classic" branch.

That's going to change!

Debian unstable will soon be moving to the "modern" branch for providing /usr/bin/gpg. This will give several advantages for Debian and its users in the future, but it will require a transition. Hopefully we can make it a smooth one.

What are the benefits?

Compared to "classic", the "modern" branch has:

  • updated crypto (including elliptic curves)
  • componentized architecture (e.g. libraries, some daemonized processes)
  • improved key storage
  • better network access (including talking to keyservers over tor)
  • stronger defaults
  • more active upstream development
  • safer info representation (e.g. no more key IDs, fingerprints easier to copy-and-paste)

If you want to try this out, the changes are already made in experimental. Please experiment!
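
Assuming you already have an experimental entry in your apt sources, pulling the modern branch in looks like this (a sketch, not a formally supported upgrade path):

sudo apt-get update
sudo apt-get install -t experimental gnupg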

What does this mean for end users?

If you're an end user and you don't use GnuPG directly, you shouldn't notice much of a change once the packages start to move through the rest of the archive.

Even if you do use GnuPG regularly, you shouldn't notice too much of a difference. One of the main differences is that all access to your secret key will be handled through gpg-agent, which should be automatically launched as needed. This means that operations like signing and decryption will cause gpg-agent to prompt the user to unlock any locked keys directly, rather than gpg itself prompting the user.
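
If you are curious whether the agent is in play, you can poke it directly; gpgconf and gpg-connect-agent ship with the modern suite:

gpgconf --launch gpg-agent
gpg-connect-agent 'getinfo version' /bye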

If you have an existing keyring, you may also notice a difference based on a change of how your public keys are managed, though again this transition should ideally be smooth enough that you won't notice unless you care to investigate more deeply.

If you use GnuPG regularly, you might want to read the NEWS file that ships with GnuPG and related packages for updates that should help you through the transition.

If you use GnuPG in a language other than English, please install the gnupg-l10n package, which contains the localization/translation files. For versions where those files are split out of the main package, gnupg explicitly Recommends: gnupg-l10n already, so it should be brought in for new installations by default.

If you have an archive of old data that depends on known-broken algorithms, PGP3 keys, or other deprecated material, you'll need to have "classic" GnuPG around to access it. That will be provided in the gnupg1 package.

What does this mean for package maintainers?

If you maintain a package that depends on gnupg: be aware that the gnupg package in debian is going through this transition.

A few general thoughts:

  • If your package Depends: gnupg for signature verification only, you might prefer to have it Depends: gpgv instead. gpgv is a much simpler tool than the full-blown GnuPG suite, and should be easier to manage. I'm happy to help with such a transition (we've made it recently with apt already)

  • If your package Depends: gnupg and expects ~/.gnupg/ to be laid out in a certain way, that's almost certainly going to break at some point. ~/.gnupg/ is GnuPG's internal storage, and it's not recommended to rely on any specific data structures there, as they may change. gpg offers commands like --export, --import, and --delete for manipulating its persistent storage. please use them instead!

  • If your package depends on parsing or displaying gpg's output for the user, please make sure you use its special machine-readable form (--with-colons). Parsing the human-readable text is not advised and may change from version to version.
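
For example, the machine-readable listing looks like this (both invocations are standard gpg):

gpg --with-colons --list-keys
gpg --with-colons --list-secret-keys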

If you maintain a package that depends on gnupg2 and tries to use gpg2 instead of gpg, that should stay ok. However, at some point it'd be nice to get rid of /usr/bin/gpg2 and just have one expected binary (gpg). So you can help with that:

  • Look for places where your package expects gpg2 and make it try gpg instead, if you can make your code fall back cleanly.

  • Change your dependencies to indicate gnupg (>= 2)

  • Patch lintian to encourage other people to make this switch ;)

What specifically needs to happen?

The last major step for this transition was renaming the source package for "classic" GnuPG to be gnupg1. This transition is currently in the ftp-master's NEW queue. Once it makes it through that queue, and both gnupg1 and gnupg2 have been in experimental for a few days without reports of dangerous breakage, we'll upload both gnupg1 and gnupg2 to unstable.

We'll also need to do some triage on the BTS, reassigning some reports which are really only relevant for the "classic" branch.

Please report bugs via the BTS as usual! You're also welcome to ask questions and make suggestions on #debian-gnupg on irc.oftc.net, or to mail the Debian GnuPG packaging team at pkg-gnupg-maint@lists.alioth.debian.org.

Happy hacking!



Bdale Garbee: ChaosKey

August 3, 2016 21:17, by Planet Debian

I'm pleased to announce that, at long last, the ChaosKey hardware random number generator described in talks at Debconf 14 in Portland and Debconf 16 in Cape Town is now available for purchase from Garbee and Garbee.



Keith Packard: chaoskey

August 3, 2016 21:16, by Planet Debian

ChaosKey v1.0 Released — USB Attached True Random Number Generator

ChaosKey, our random number generator that attaches via USB, is now available for sale from the altusmetrum store.

We talked about this device at Debconf 16 last month.

Support for this device is included in Linux starting with version 4.1. Plug ChaosKey into your system and the driver will automatically add entropy into the kernel pool, providing a constant supply of true random numbers to help keep the system secure.
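
A quick way to check that the kernel has picked the device up and is feeding the pool (the exact dmesg wording varies between kernel versions):

dmesg | grep -i chaoskey
cat /proc/sys/kernel/random/entropy_avail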

ChaosKey is free hardware running free software, built with free software on a free operating system.