Planet Debian

Planet Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org.

Petter Reinholdtsen: Legal to share more than 3000 movies listed on IMDB?

November 18, 2017 20:20, by Planet Debian - no comments yet

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from various data sources is available in a git repository, currently available from github.

So far I have identified 3186 unique IMDB title IDs. To gain better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
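To give an idea of how little code such a histogram needs, here is a minimal sketch in Python, assuming each JSON entry carries an optional 'year' field (the actual field names in the repository may differ):

# Sketch: build a release-year histogram from the collected JSON files.
# Assumes each file holds a list of entries with an optional "year"
# field; that field name is an assumption, not necessarily the real one.
import json
from collections import Counter
from glob import glob

years = Counter()
for path in glob('free-movies-*.json'):
    with open(path) as f:
        for entry in json.load(f):
            if 'year' in entry:
                years[int(entry['year'])] += 1

for year in sorted(years):
    print(year, years[year])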

I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total

The entries without an IMDB title ID are candidates for growing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID altogether. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for the number of movies listed in IMDB that are legal to distribute on the Internet is somewhere between 3186 and 4713.
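The bookkeeping behind these numbers is straightforward; here is a sketch of the counting, again assuming a hypothetical 'imdb' field holding the title ID:

# Sketch: count unique IMDB title IDs across all sources, plus the
# entries with no ID, which together give the two bounds quoted above.
# The 'imdb' field name is an assumption, not necessarily the real one.
import json
from glob import glob

ids, without_id = set(), 0
for path in glob('free-movies-*.json'):
    with open(path) as f:
        for entry in json.load(f):
            if entry.get('imdb'):
                ids.add(entry['imdb'])
            else:
                without_id += 1

print(len(ids), 'unique IMDB title IDs')  # 3186
print(len(ids) + without_id, 'if every entry without an ID were a new movie')  # 4713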

It would greatly improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review, to try to convince them to add more metadata to their movie entries?

Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes links to both IMDB and the Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.



Joey Hess: stupid long route

November 18, 2017 15:22, by Planet Debian - no comments yet

There's an old net story from the 80's, which I can't find right now, but is about two computers, 10 feet apart, having a ridiculously long network route between them, packets traveling into other states or countries and back, when they could have flowed over a short cable.

Ever since I read that, I've been collecting my own ridiculously long routes. ssh bouncing from country to country, making letters I type travel all the way around the world until they echo back on my screen. Tasting the latency that's one of the only ways we can viscerally understand just how big a tangle of wires humanity has built.

Yesterday, I surpassed all that, and I did it in a way that hearkens right back to the original story. I had two computers, 20 feet apart, I wanted one to talk to the other, and the route between the two ended up traveling not around the Earth, but almost the distance to the Moon.

I was rebuilding my home's access point, and ran into an annoying bug that prevented it from listening on wifi. I knew it was still connected over ethernet to the satellite receiver.

I connected my laptop to the satellite receiver over wifi. But, I didn't know the IP address to reach the access point. Then I remembered I had set it up so incoming ssh to the satellite receiver was directed to the access point.

So, I sshed to a computer in New Jersey. And from there I sshed to my access point. And the latency was amazing. Because, every time I pressed a key:

  • It was sent to a satellite in geosynchronous orbit, 22250 miles high.
  • Which beamed it back to a ground station in Texas, another 22250 miles.
  • Which routed it over cable to New Jersey to my server there.
  • Which bounced it back to a Texas-size dish, which zapped it back to orbit, another 22250 miles.
  • And the satellite transmitted it back in the general direction of my house, another 22250 miles.
  • So my keystroke finally reached the access point. But then it had to show me it had received it. So that whole process happened again in reverse, adding another 89000 miles travel total.
  • And finally, after 178000 and change miles of data transfer, the letter I'd typed a full second ago appeared on my screen.

Not bad for a lazy solution to a problem that could have been solved by walking across the room, eh?
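If you want to check the numbers, the arithmetic fits in a few lines of Python (186,282 miles per second is the speed of light in vacuum; the real path is slightly longer and slower):

# Four satellite legs of ~22250 miles per keystroke, doubled for the echo.
LEG_MILES = 22250            # ground station <-> geosynchronous orbit
one_way = LEG_MILES * 4      # up, down to Texas, back up, down to the house
round_trip = one_way * 2
print(round_trip)            # 178000 miles, ignoring the terrestrial hops

SPEED_OF_LIGHT = 186282      # miles per second, in vacuum
print(round_trip / SPEED_OF_LIGHT)  # ~0.96 s: the "full second" of latency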

Previously: roundtrip latency from a cabin with dialup in 2011



Jonathan Carter: I am now a Debian Developer

November 17, 2017 17:48, by Planet Debian - no comments yet

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you’re paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (current draft is around 20 times longer than this entire post). So I decided I’d rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

How it started

In 1999… no wait, I can’t start there, as much as I want to, this is a short post, so… In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its “boot-floppies” installer program kept crashing on my very vanilla computers. 

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work later turned into a full-time job there. This was a big deal for me, because I didn't want to support Windows ever again, and I didn't ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just didn't get it and needed some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called "freshmeat" that listed new releases of upstream software, and then I would download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to find how easily I could find the package state and maintainer scripts and fix them to get my system going again.

Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up “highvoltage@debian.org”. I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop – Image from distrowatch

One day, Thomas told me that Mark was planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just "warty" written on it and said that I should install it on a server so that we could try it out. It was great: it used the new debian-installer, installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system was going to be called "Ubuntu" and that the desktop edition had naked people on it. I wasn't sure what he meant and was kind of dumbfounded, so I just laughed and said something like "Uh, ok". At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline "Linux for Human Beings". Fun fact: one of my first jobs at the foundation was to register the ubuntu.com domain name. Unfortunately I found it was already owned by a domain squatter, and it was eventually handled by legal.

Closer to Ubuntu’s first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying “Go talk to them! Go talk to them!”, but I felt so intimidated by them that I couldn’t even bring myself to walk up and say hello.

In the interest of keeping this short, I’m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach’s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I’d like to do a similar video series that might help a new generation of packagers.

I’ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would attend one, and that another 5 years after that I’d end up on the DebConf Committee, having also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It’s been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to https://mentors.debian.net and if I have some spare time I’ll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!



Raphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2017

November 17, 2017 14:31, by Planet Debian - no comments yet

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 197 work hours were dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 21h (out of 16h allocated + 8.75h remaining, thus keeping 3.75h for November).
  • Ben Hutchings did 20 hours (out of 15h allocated + 9 extra hours, thus keeping 4 extra hours for November).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort has not published his report yet.
  • Guido Günther did 6.5 hours (out of 11h allocated + 1 extra hour, thus keeping 5.5h for November).
  • Hugo Lefeuvre did 20h.
  • Lucas Kanashiro did 2 hours (out of 5h allocated, thus keeping 3 hours for November).
  • Markus Koschany did 19 hours (out of 20.75h allocated, thus keeping 1.75 extra hours for November).
  • Ola Lundqvist did 7.5h (out of 7h allocated + 0.5 extra hours).
  • Raphaël Hertzog did 13.5 hours (out of 12h allocated + 1.5 extra hours).
  • Roberto C. Sanchez did 11 hours (out of 20.75 hours allocated + 14.75 hours remaining, thus keeping 24.50 extra hours for November; he will give back the remaining hours at the end of the month).
  • Thorsten Alteholz did 20.75 hours.

Evolution of the situation

The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time positions.

The security tracker currently lists 50 packages with a known CVE, and the dla-needed.txt file lists 36 (we’re apparently a bit behind on CVE triage).

Thanks to our sponsors

New sponsors are in bold.




Craig Small: Short Delay with WordPress 4.9

November 17, 2017 11:03, by Planet Debian - no comments yet

You may have heard WordPress 4.9 is out. While it seems a good improvement over 4.8, it has a new editor that uses codemirror. So what’s the problem? Well, inside codemirror is jshint, and that has the idiotic “no evil” license. I think this was added in by WordPress, not codemirror itself.

So basically WordPress 4.9 has a file, or actually a tiny part of a file, that is non-free. I’ll now have to delay the update of WordPress to hack that piece out, which probably means removing the JavaScript linter. Not ideal, but that’s the way things go.




Michal Čihař: Running Bitcoin node and ElectrumX server

November 17, 2017 11:00, by Planet Debian - no comments yet

I've been tempted to run my own ElectrumX server for quite some time. The first attempt was to run it on a Turris Omnia router, however that turned out to be impossible due to the memory requirements of both bitcoind and ElectrumX.

This time I've dedicated a host to it, and it runs fine:

Electrum connecting to btc.cihar.com

The server runs Debian sid (it would probably be doable on stretch as well, but I didn't try much) and the setup was pretty simple.

First we need to install some things - Bitcoin daemon and ElectrumX dependencies:

# Bitcoin daemon, not available in stretch
apt install bitcoind

# We will checkout ElectrumX from git
apt install git

# ElectrumX deps
apt install python3-aiohttp

# Build environment for ElectrumX deps
apt install build-essential python3-pip libleveldb-dev

# ElectrumX deps not packaged in Debian
pip3 install plyvel pylru

Create the users which will run the services:

adduser bitcoind
adduser electrumx

Then fetch the ElectrumX sources as the user that will run them:

# Download ElectrumX sources
su - electrumx -c 'git clone https://github.com/kyuupichan/electrumx.git'

Now it's time to prepare configuration for the services. For Bitcoin it's quite simple: we need to configure the RPC interface and enable the transaction index in /home/bitcoind/.bitcoin/bitcoin.conf:

txindex=1
listen=1
rpcuser=bitcoin
rpcpassword=somerandompassword

The ElectrumX configuration is quite simple as well and it's pretty well documented. I've decided to place it in /etc/electrumx.conf:

COIN=BitcoinSegwit
DB_DIRECTORY=/home/electrumx/.electrumx
DAEMON_URL=http://bitcoin:somerandompassword@localhost:8332/
TCP_PORT=50001
SSL_PORT=50002
HOST=::

DONATION_ADDRESS=3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF

SSL_CERTFILE=/etc/letsencrypt/live/btc.cihar.com/fullchain.pem
SSL_KEYFILE=/etc/letsencrypt/live/btc.cihar.com/privkey.pem

REPORT_HOST=btc.cihar.com
BANNER_FILE=banner

I've decided to control both services using systemd, so it's a matter of creating pretty simple units for that. The Bitcoin one closely matches the one I've used on the Turris Omnia, and the ElectrumX one is based on the unit they ship, with some minor changes.

Systemd unit for ElectrumX in /etc/systemd/system/electrumx.service:

[Unit]
Description=Electrumx
After=bitcoind.service

[Service]
EnvironmentFile=/etc/electrumx.conf
ExecStart=/home/electrumx/electrumx/electrumx_server.py
User=electrumx
LimitNOFILE=8192
TimeoutStopSec=30min

[Install]
WantedBy=multi-user.target

And finally systemd unit for Bitcoin daemon in /etc/systemd/system/bitcoind.service:

[Unit]
Description=Bitcoind
After=network.target

[Service]
ExecStart=/usr/bin/bitcoind
User=bitcoind
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target

Now everything should be configured and it's time to start up the services:

# Enable services so that they start on boot 
systemctl enable electrumx.service bitcoind.service

# Start services
systemctl start electrumx.service bitcoind.service

Now you have a few days to wait while Bitcoin fetches the whole blockchain and ElectrumX indexes it. If you happen to have another Bitcoin node running (or had one running in the past), you can speed up the process by copying the blocks from that system (located in ~/.bitcoin/blocks/). Only get blocks from sources you trust absolutely, as they might change your view of history; see the Bitcoin wiki for more information on the topic. There is also a magnet link in the ElectrumX docs to download an ElectrumX database to speed up this process. This should be safe to download even from an untrusted source.
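If you want to watch the progress of the initial sync without tailing logs, you can query bitcoind over its JSON-RPC interface. Here is a small Python sketch using the credentials from the bitcoin.conf above (it needs the python3-requests package):

# Sketch: poll bitcoind over JSON-RPC to watch the initial sync.
# Credentials match the rpcuser/rpcpassword configured above.
import requests

payload = {'jsonrpc': '1.0', 'id': 'sync',
           'method': 'getblockchaininfo', 'params': []}
resp = requests.post('http://localhost:8332/', json=payload,
                     auth=('bitcoin', 'somerandompassword'))
info = resp.json()['result']
print('blocks:', info['blocks'])
print('progress: {:.2%}'.format(info['verificationprogress']))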

The last thing I'd like to mention is resource usage. You should have at least 4 GB of memory to run this; 8 GB is really preferred (both services together consume around 4 GB). On disk space, Bitcoin currently consumes 170 GB and ElectrumX 25 GB. Ideally all this should be running on an SSD.

You can however offload some of the files to slower storage, as old blocks are rarely accessed, and this can save some space on your faster storage. The following script will move around 50 GB of blockchain data to /mnt/btc/blocks (use it only when the Bitcoin daemon is not running):

#!/bin/sh
set -e

DEST=/mnt/btc/blocks

cd ~/.bitcoin/blocks/

find . -type f \( -name 'blk00[0123]*.dat' -o -name 'rev00[0123]*.dat' \) | sed 's@^\./@@' | while read name ; do
        mv "$name" "$DEST/$name"
        ln -s "$DEST/$name" "$name"
done

Anyway if you would like to use this server, configure btc.cihar.com in your Electrum client.

If you find this howto useful, you can send some Satoshis to 3KPccmPtejpMczeog7dcFdqX4oTebYZ3tF.

Filed under: Crypto Debian English



Norbert Preining: ScalaFX: Problems with Tables abound

November 17, 2017 6:27, by Planet Debian - no comments yet

Doing a lot with all kinds of tables in ScalaFX, I stumbled upon a bug in ScalaFX that, with the help of the bug report, I was able to circumvent. It is a subtle bug where types are mixed between scalafx.SOMETHING and the corresponding javafx.SOMETHING.

In one of the answers it is stated that:

The issue is with implicit conversion from TableColumn not being located by Scala. I am not clear why this is happening (maybe a Scala bug).

But the provided work-around at least made it work. Then today I stumbled onto what is probably just another instance of this bug, but one where the same work-around does not help. I am using TreeTableViews and try to replace the children of the root by filtering out one element. The code I use is of course very different, but here is a reduced and fully self-contained example, based on the original bug report and adapted to use a TreeTableView:

import scalafx.Includes._
import scalafx.scene.control.TreeTableColumn._
import scalafx.scene.control.TreeItem._
import scalafx.application.JFXApp.PrimaryStage
import scalafx.application.JFXApp
import scalafx.scene.Scene
import scalafx.scene.layout._
import scalafx.scene.control._
import scalafx.scene.control.TreeTableView
import scalafx.scene.control.Button
import scalafx.scene.paint.Color
import scalafx.beans.property.{ObjectProperty, StringProperty}
import scalafx.collections.ObservableBuffer


// TableTester.scala
object TableTester extends JFXApp {

  val characters = ObservableBuffer[Person](
    new Person("Peggy", "Sue", "123", Color.Violet),
    new Person("Rocky", "Raccoon", "456", Color.GreenYellow),
    new Person("Bungalow ", "Bill", "789", Color.DarkSalmon)
  )

  val table = new TreeTableView[Person](
    new TreeItem[Person](new Person("","","",Color.Red)) {
      expanded = true
      children = characters.map(new TreeItem[Person](_))
    }) {
    columns ++= List(
      new TreeTableColumn[Person, String] {
        text = "First Name"
        cellValueFactory = {
          _.value.value.value.firstName
        }
        prefWidth = 180
      },
      new TreeTableColumn[Person, String]() {
        text = "Last Name"
        cellValueFactory = {
          _.value.value.value.lastName
        }
        prefWidth = 180
      }
    )
  }

  stage = new PrimaryStage {
    title = "Simple Table View"
    scene = new Scene {
      content = new VBox() {
        children = List(
          new Button("Test it") {
            onAction = p => {
              val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
                val bar: TreeItem[Person] = p
                p
              })
              table.root.value.children = foo
            }
          },
          table)
      }
    }
  }
}

// Person.scala
class Person(firstName_ : String, lastName_ : String, phone_ : String, favoriteColor_ : Color = Color.Blue) {

  val firstName = new StringProperty(this, "firstName", firstName_)
  val lastName = new StringProperty(this, "lastName", lastName_)
  val phone = new StringProperty(this, "phone", phone_)
  val favoriteColor = new ObjectProperty(this, "favoriteColor", favoriteColor_)

  firstName.onChange((x, _, _) => System.out.println(x.value))
}

With this code, what one gets on compilation with the latest Scala and ScalaFX is:

[error]  found   : scalafx.collections.ObservableBuffer[javafx.scene.control.TreeItem[Person]]
[error]  required: scalafx.collections.ObservableBuffer[scalafx.scene.control.TreeItem[Person]]
[error]               val foo: ObservableBuffer[TreeItem[Person]] = table.root.value.children.map(p => {
[error]                                                                                          ^
[error] one error found

And in this case, adding import statements didn’t help, what a pity. Unfortunately this bug has been open since 2014 with a helpwanted tag and nothing is going on. I guess I have to try to dive into the source code of ScalaFX 🙁



Renata Scheibler: Hello, world!

November 17, 2017 2:49, by Planet Debian - no comments yet

A bit about me



Colin Watson: Kitten Block equivalent for Firefox 57

November 16, 2017 0:00, by Planet Debian - no comments yet

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.



Steinar H. Gunderson: Introducing Narabu, part 6: Performance

November 15, 2017 22:43, by Planet Debian - no comments yet

Narabu is a new intraframe video codec. You probably want to read part 1, part 2, part 3, part 4 and part 5 first.

Like I wrote in part 5, there basically isn't a big splashy ending where everything is resolved here; you're basically getting some graphs with some open questions and some interesting observations.

First of all, though, I'll need to make a correction: In the last part, I wrote that encoding takes 1.2 ms for 720p luma-only on my GTX 950, which isn't correct—I remembered the wrong number. The right number is 2.3 ms, which I guess explains even more why I don't think it's acceptable at the current stage. (I'm also pretty sure it's possible to rearchitect the encoder so that it's much better, but I am moving on to other video-related things for the time being.)

I encoded a picture straight off my DSLR (luma-only) at various resolutions, keeping the aspect ratio. Then I decoded it a bunch of times on my GTX 950 (low-end last-generation NVIDIA) and on my HD 4400 (ultraportable Haswell laptop) and measured the times. They're normalized to megapixels per second decoded; remember that doubling the width (x axis) means quadrupling the pixels. Here it is:

Narabu decoding performance graph
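The normalization itself is trivial; as an illustration, the 2.3 ms 720p luma-only encode mentioned above works out to roughly 400 megapixels per second:

# Megapixels per second, the unit used on the y axis of the graphs.
def mpix_per_sec(width, height, seconds):
    return width * height / 1e6 / seconds

print(mpix_per_sec(1280, 720, 0.0023))  # ~400 MPix/s for the 2.3 ms 720p encode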

I'm not going to comment much beyond two observations:

  • Caches matter, even on GPU. This is the same data over and over again (so small images get an unrealistic boost), so up to a certain point, it's basically all in L1.
  • The GTX 950 doesn't really run away from the Intel card before it's getting enough data to chew on. Bigger GPUs don't have faster cores—they're just more parallel.

Encoding only contains the GTX 950 because I didn't finish the work to get that single int64 divide off:

Narabu encoding performance graph

This is… interesting. I have few explanations. Probably more benchmarking and profiling would be needed to make sense of any of it. In fact, it's so strange that I would suspect a bug, but it does indeed seem to create a valid bitstream that is decoded by the decoder.

Do note, however, that seemingly even on the smallest resolutions, there's a 1.7 ms base cost (you can't see it on the picture, but you'd see it in an unnormalized graph). I don't have a very good explanation for this either (even though there are some costs that are dependent on the alphabet size instead of the number of pixels), but figuring it out would probably be a great start for getting the performance up.

So that concludes the series, on a cliffhanger. :-) Even though it's not in a situation where you can just take it and put it into something useful, I hope it was an interesting introduction to the GPU! And in the meantime, I've released version 1.6.3 of Nageru, my live video mixer (also heavily GPU-based) with various small adjustments and bug fixes found before and during Trøndisk. And Movit is getting compute shaders for that extra speed boost, although parts of it are bending my head. Exciting times in GPU land :-)



Daniel Pocock: Linking hackerspaces with OpenDHT and Ring

November 15, 2017 19:57, by Planet Debian - no comments yet

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using a Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.



Kees Cook: security things in Linux v4.14

November 15, 2017 5:23, by Planet Debian - no comments yet

Previously: v4.13.

Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:

vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field which is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)

One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.

set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).

Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.

SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.

Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)

Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn’t store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).

setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.

randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.

structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)

improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).

eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.

seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.

Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.
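The new process-killing action is easy to play with from userspace. Here is a sketch using the libseccomp Python bindings (python3-seccomp), assuming a libseccomp new enough to expose KILL_PROCESS:

# Sketch: a filter whose violation kills the whole process, not just the
# offending thread. Assumes python3-seccomp with KILL_PROCESS support.
import seccomp

f = seccomp.SyscallFilter(seccomp.ALLOW)    # allow everything by default
f.add_rule(seccomp.KILL_PROCESS, "ptrace")  # whole process dies on ptrace(2)
f.load()                                    # install the BPF filter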

That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.



Russ Allbery: Review: The Piper's Son

November 15, 2017 5:18, by Planet Debian - no comments yet

Review: The Piper's Son, by Melina Marchetta

Series: Francesca #2
Publisher: Candlewick Press
Copyright: 2010
Printing: 2011
ISBN: 0-7636-5458-2
Format: Kindle
Pages: 330

Tom Mackee's family has fallen apart. The impetus was the death of his uncle Joe in the London tube terrorist bombings, but that was only the start. He destroyed his chances with the only woman he really loved. His father's drinking got out of control, his mother left with his younger sister to live in a different city, and he refused to go with them and abandon his father. But then, six months later, his father abandoned him anyway. As this novel opens, Tom collapses while performing a music set, high on drugs and no sleep, and wakes up to discover his roommates have been fired from their jobs for stealing, and in turn have thrown him out of their apartment. He's at rock bottom.

The one place he can turn for a place to stay is his aunt Georgie, the second (although less frequent) viewpoint character of this book. She was the one who took the trip to the UK to try to find out what happened and retrieve her brother's body, and the one who had to return to Australia with nothing. Her life isn't in much better shape than Tom's. She's kept her job, but she's pregnant by her ex-boyfriend but barely talking to him, since he now has a son by another woman he met during their separation. And she's not even remotely over her grief.

The whole Finch/Mackee family is, in short, a disaster. But they have a few family relationships left that haven't broken, some underlying basic decency, and some patient and determined friends.

I should warn up-front, despite having read this book without knowing this, that this is a sequel to Saving Francesca, set five years later and focusing on secondary characters from the original novel. I've subsequently read that book as well, though, and I don't think reading it first is necessary. This is one of the rare books where being a sequel made it a better stand-alone novel. I never felt a gap of missing story, just a rich and deep background of friendships and previous relationships that felt realistic. People are embedded in networks of relationships even when they feel the most alone, and I really enjoyed seeing that surface in this book. All those patterns from Tom's past didn't feel like information I was missing. They felt like glimpses of what you'd see if you looked into any other person's life.

The plot summary above might make The Piper's Son sound like a depressing drama fest, but Marchetta made an excellent writing decision: the worst of this has already happened before the start of the book, and the rest is in the first chapter. This is not at all a book about horrible things happening to people. It's a book about healing. An authentic, prickly, angry healing that doesn't forget and doesn't turn into simple happily-ever-after stories, but does involve a lot of recognition that one has been an ass, and that it's possible to be less of an ass in the future, and maybe some things can be fixed.

A plot summary might fool you into thinking that this is a book about a boy and his father, or about dealing with a drunk you still love. It's not. The bright current under this whole story is not father-son bonding. It's female friendships. Marchetta pulls off a beautiful double-story, writing a book that's about Tom, and Georgie, and the layered guilt and tragedy of the Finch/Mackee family, but whose emotional heart is their friends. Francesca, Justine, absent Siobhan. Georgie's friend Lucia. Ned, the cook, and his interactions with Tom's friends. And Tara Finke, also mostly absent, but perfectly written into the story in letters and phone calls.

Marchetta never calls unnecessary attention to this, keeping the camera on Tom and Georgie, but the process of reading this book is a dawning realization of just how much work friendship is doing under the surface, how much back-channel conversation is happening off the page, and how much careful and thoughtful and determined work went into providing Tom a floor, a place to get his feet under him, and enough of a shove for him to pull himself together. Pulling that off requires a deft and subtle authorial touch, and I'm in awe at how well it worked.

This is a beautifully written novel. Marchetta never belabors an emotional point, sticking with a clear and spare description of actions and thoughts, with just the right sentences scattered here and there to expose the character's emotions. Tom's family is awful at communication, which is much of the reason why they start the book in the situation they're in, but Marchetta somehow manages to write that in a way that didn't just frustrate me or make me want to start banging their heads together. She somehow conveys the extent to which they're trying, even when they're failing, and adds just the right descriptions so that the reader can follow the wordless messages they send each other even when they can't manage to talk directly. I usually find it very hard to connect with people who can only communicate by doing things rather than saying them. It's a high compliment to the author that I felt I understood Tom and his family as well as I did.

One bit of warning: while this is not a story of a grand reunion with an alcoholic father where all is forgiven because family, thank heavens, there is an occasional wiggle in that direction. There is also a steady background assumption that one should always try to repair family relationships, and a few tangential notes about the Finches and Mackees that made me think there was a bit more abuse here than anyone involved wants to admit. I don't think the book is trying to make apologies for this, and instead is trying to walk the fine line of talking about realistically messed up families, but I also don't have a strong personal reaction to that type of story. If you have an aversion to "we should all get along because faaaaamily" stories, you may want to skip this book, or at least go in pre-warned.

That aside, the biggest challenge I had in reading this book was not breaking into tears. The emotional arc is just about perfect. Tom and Georgie never stay stuck in the same emotional cycle for too long, Marchetta does a wonderful job showing irritating characters from a slightly different angle and having them become much less irritating, and the interactions between Tom, Tara, and Francesca are just perfect. I don't remember reading another book that so beautifully captures that sensation of knowing that you've been a total ass, knowing that you need to stop, but realizing just how much work you're going to have to do, and how hard that work will be, once you own up to how much you fucked up. That point where you keep being an ass for a few moments longer, because stopping is going to hurt so much, but end up stopping anyway because you can't stand yourself any more. And stopping and making amends is hard and hurts badly, and yet somehow isn't quite as bad as you thought it was going to be.

This is really great stuff.

One final complaint, though: what is it with mainstream fiction and the total lack of denouement? I don't read very much mainstream fiction, but this is the second really good mainstream book I've read (after The Death of Bees) that hits its climax and then unceremoniously dumps the reader on the ground and disappears. Come back here! I wasn't done with these people! I don't need a long happily-ever-after story, but give me at least a handful of pages to be happy with the characters after crying with them for hours! ARGH.

But, that aside, the reader does get that climax, and it's note-perfect to the rest of the book. Everyone is still themselves, no one gets suddenly transformed, and yet everything is... better. It's the kind of book you can trust.

Highly, highly recommended.

Rating: 9 out of 10



Jonathan Dowland: WadC 2.2

November 14, 2017 19:38, by Planet Debian - no comments yet

Bird Cage, a WadC-generated map for Heretic

I have recently released version 2.2 of Wad Compiler, a lazy functional programming language and IDE for the construction of Doom maps.

The biggest change in this version is a reworking of the preferences system (to use the Java Preferences API), the wadcli command-line interface respecting preferences and a new preferences UI dialog (adapted from Quake Injector).

There are two new example maps: A Labyrinth demonstration contributed by "Yoruk", and a Heretic map Bird Cage by yours truly. These are both now amongst the largest examples in the collection, although laby.wl was generated by a higher-level program.

For more information see the release notes and the reference, or check out the new gallery of examples or skip straight to downloads.

I have no plans to work on WadC further (but never say never, I suppose).



Steve Kemp: Paternity-leave is half-over

November 13, 2017 22:00, by Planet Debian - no comments yet

I'm taking the month of November off work, so that I can exclusively take care of our child. Despite it being a difficult time, with him teething, it has been a great half-month so far.

During the course of the month I've found my interest in a lot of technological things waning, so I've killed my account(s) on a few platforms, and scaled back others - if I could exclusively do child-care for the next 20 years I'd be very happy, but sadly I don't think that is terribly realistic.

My interest in things hasn't entirely vanished though, to the extent that I found the time to replace my use of etcd with consul yesterday, and I'm trying to work out how to simplify my hosting setup. Right now I have a bunch of servers doing two kinds of web-hosting:

Hosting static sites is trivial, whether with a virtual machine, via Amazon's S3 service, or with some other static host such as netlify.

Hosting for "dynamic stuff" is harder. These days a trend for "serverless" deployments allows you to react to events and be dynamic, but not everything can be a short-lived piece of ruby/javascript/lambda. It feels like I could set up a generic platform for launching containers, or otherwise modernise FastCGI, etc, but I'm not sure what the point would be. (I'd still be the person maintaining it, and it'd still be a hassle. I've zero interest in selling things to people, as that only means more support.)

In short, I have a bunch of servers; they mostly tick over unattended, but I'm not really sure I want to keep them running for the next 10+ years. Over time our child will deserve, demand, and require more attention, which means time for personal stuff is only going to diminish.

Simplifying things now wouldn't be a bad thing to do, before it is too late.