
Planet Debian


Planet.Debian is a website that aggregates the blogs of many Debian contributors. Planet maintainers can be reached at planet at debian.org


Gunnar Wolf: Hey, everybody, come share the joy of work!

July 20, 2017 5:17, by Planet Debian

I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I had expected: Not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far, Spanish (where Leo Arias and I are working) is the most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there is a small number of issues to fix. But they are few and easy to spot, so your translations will not be hampered much while I fix the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate sizeable amounts of work into Spanish: As my university wants to publish (on paper) the resulting works, we would be most grateful if you could fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.



Norbert Preining: The poison of academia.edu

July 20, 2017 1:28, by Planet Debian

All those working in academia or research have surely heard of academia.edu. It started out as a service for academics; in their own words:

Academia.edu is a platform for academics to share research papers. The company’s mission is to accelerate the world’s research.

But as with most of these platforms, they need to make money, and for some months now academia.edu has been pressing users to pay for a premium account at the incredible rate of 8.25 USD per month.

This is about the same as you pay for Netflix or some other streaming service. If you remain on the free side, what remains for you to do is SNS-like stuff, and uploading your papers so that academia.edu can make money from them.

What really surprises me is that they can pull this off on a .edu domain. The registry requirements state:

For Institutions Within the United States. To obtain an Internet name in the .edu domain, your institution must be a postsecondary institution that is institutionally accredited by an agency on the U.S. Department of Education’s list of Nationally Recognized Accrediting Agencies (see recognized accrediting bodies).
Educause web site

Seeing what they are doing, I think it is high time to request removal of the domain name.

So let us see what they are offering for their paid service:

  • Reader “The Readers feature tells you who is reading, downloading, and bookmarking your papers.”
  • Mentions “Get notified when you’re cited or mentioned, including in papers, books, drafts, theses, and syllabuses that Google Scholar can’t find.”
  • Advanced Search “Search the full text and citations of over 18 million papers”
  • Analytics “Learn more about who visits your profile”
  • Homepage – automatically generated home page from the data you enter into the system

On the other hand, the free service consists of SNS elements where you can follow other researchers, see when they upload papers or add events, and that is more or less it. They have lured a considerable number of academics into this service, gathered lots of papers, and now they are showing their real face: money.

In contrast to LinkedIn, which also offers a paid tier but keeps the free tier reasonably usable, academia.edu has broken its promise to “accelerate the world’s research” and, even worse, it is NOT a “platform for academics to share research papers”. They are collecting papers and selling access to them, like the publishers' paywalls.

I consider this kind of service highly poisonous for the academic environment and researchers.



Benjamin Mako Hill: Testing Our Theories About “Eternal September”

July 20, 2017 0:12, by Planet Debian

(Figure: graph of subscribers and moderators over time in /r/NoSleep, taken from our 2016 CHI paper.)

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense of protecting the community’s immersive environment among participants.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length, and our paper ends with a lament that our findings merely reflected “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, quantitative and qualitative alike. We think Lin’s paper complements ours beautifully; we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.



Dirk Eddelbuettel: RcppAPT 0.0.4

July 19, 2017 12:12, by Planet Debian

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.
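To get a feel for what these package-graph functions compute without firing up R, the apt-cache command-line analogues look roughly like this (an illustration only, not the package's own API; the package names are just examples):

apt-cache depends r-base-core     # forward dependencies, cf. getDepends
apt-cache rdepends libc6          # reverse dependencies, cf. reverseDepends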

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)

  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)

  • Added dumpPackages, showSrc

Changes in version 0.0.2 (2016-04-04)

  • Added reverseDepends, dumpPackages, showSrc

Changes in version 0.0.1 (2015-02-20)

  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.



Lars Wirzenius: Dropping Yakking from Planet Debian

July 19, 2017 5:54, by Planet Debian

A couple of people objected to having Yakking on Planet Debian, so I've removed it.



Daniel Silverstone: Yay, finished my degree at last

July 18, 2017 21:56, by Planet Debian

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV".

Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010. I started by transferring the level 1 credits from my stint at UCL back in 1998/1999 which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'.

Then I took level 2 courses, M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so...

At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), and finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)', which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities, and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!



Foteini Tsiami: Internationalization, part three

July 18, 2017 10:18, by Planet Debian

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others are KVM or VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy-to-use GUI and supports snapshots, which are points in time where we mark the guest operating system state and can revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
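The same snapshot workflow can also be driven from the command line with VBoxManage, which ships with VirtualBox (the VM and snapshot names below are made up for illustration):

VBoxManage snapshot "debian-stretch-mate" take "clean-install"      # mark the clean state
VBoxManage snapshot "debian-stretch-mate" restore "clean-install"   # revert before each test run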




Reproducible builds folks: Reproducible Builds: week 116 in Stretch cycle

July 18, 2017 7:29, by Planet Debian

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (47)

diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones:

After the release, development continued in git with contributions from:

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

reprotest development

Development continued in git with contributions from:

buildinfo.debian.net development

tests.reproducible-builds.org

  • Mattia Rizzolo:
    • Make database backups quicker to restore by avoiding pg_dump's --column-inserts option.
    • Fix up the deployment scripts after the stretch migration.
    • Fix up Apache redirects that were broken after introducing the buster suite.
    • Fix up diffoscope jobs that were not always installing the highest possible version of diffoscope.
  • Holger Levsen:
    • Add a node health check for a too big jenkins.log.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.



Matthew Garrett: Avoiding TPM PCR fragility using Secure Boot

July 18, 2017 6:48, by Planet Debian

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point, if you reboot, your disk fails to unlock and you become unhappy. To get around this your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
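As a small sketch of that overwrite behaviour (the file and directory names here are hypothetical), two compressed cpio archives can simply be concatenated, and the kernel will extract the second over the first:

( cd base-initramfs   && find . | cpio -o -H newc | gzip ) >  combined-initrd.img
( cd secret-initramfs && find . | cpio -o -H newc | gzip ) >> combined-initrd.img
# The kernel unpacks both in order, so files from secret-initramfs
# overwrite same-named files from base-initramfs.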

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.

There are obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.




Norbert Preining: Calibre and rar support

July 18, 2017 1:33, by Planet Debian

Thanks to the cooperation with upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0, and has adopted a more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.

Rar support is necessary when the eBook uses rar compression, which happens quite often with comic books (cbr extension). Calibre 3 has split out rar support into a dynamically loaded module, so what needs to be done is package it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch to fix this to the maintainer, see bug 720051, but have had no reaction from the maintainer.

Thus, I am publishing updated unrar packages, which also ship libunrar5, and the unrardll Python package in my calibre repository. After installing python-unrardll, Calibre will happily import meta-data from rar-compressed eBooks, as well as display them.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13
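With the repository and key in place, pulling in the two packages mentioned above should be all that is needed:

apt-get update
apt-get install libunrar5 python-unrardll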

Enjoy



Jonathan McDowell: Just because you can, doesn't mean you should

July 17, 2017 18:41, by Planet Debian

There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the three speakers concentrated more on why you might want to. As seems to be the way these days, I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high level overview of some of the issues around the Investigatory Powers Act - if you’re familiar with it then I’m not adding anything new here, just trying to provide some sort of detail about why it’s a bad thing from both a technological and a legal perspective.

Download



Steinar H. Gunderson: Solskogen 2017: Nageru all the things

July 17, 2017 15:45, by Planet Debian

Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit, it really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)



Jose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9

July 16, 2017 16:49, by Planet Debian

This article is an experiment in progress; please recheck it, as I am still updating it with new information.

I have a very old installation of Debian, possibly since v2 (I do not remember), that I have upgraded since then both in software and hardware. Now the hardware is 64-bit and runs a 64-bit kernel, but the run-time is still 32-bit. For 99% of tasks this is very good. Now that I have made many simulations, I may have found a solution to do a crossgrade of my desktop. I write here the tentative procedure, and I will update it with more ideas on the problems that I may find.

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do it.
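(A minimal sketch of that first step, assuming a multiarch-capable apt; the details are in the previous post:)

dpkg --add-architecture amd64     # enable amd64 as a foreign architecture
apt-get update
apt-get install linux-image-amd64:amd64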

Second, you need to bootstrap the crossgrade and install all the libs as amd64:

 apt-get clean
 apt-get upgrade
 # Download, but do not yet install, the amd64 essentials
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --list > original.dpkg
 # Download the amd64 counterpart of every installed "Multi-Arch: same" i386 package
 for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do
   echo $pack32 ;
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then
     apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ;
   fi ;
 done
 cd /var/cache/apt/archives/
 # Install the core tool chain first, then everything that was downloaded
 dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
 dpkg --configure --pending
 dpkg -i --skip-same-version dpkg_1.18.24_amd64.deb apt_1.4.6_amd64.deb bash_4.4-5_amd64.deb dash_0.5.8-2.4_amd64.deb mawk_1.3.3-17+b3_amd64.deb *.deb

 # Run the bulk install twice: the first pass may fail on dependency ordering
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

But this step does not prevent apt-get install from running into broken dependencies. So, as a third step, instead of only installing the libs with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get clean
apt-get upgrade
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
dpkg --list > original.dpkg
for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
  fi ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg -i --skip-same-version dpkg_1.18.24_amd64.deb apt_1.4.6_amd64.deb bash_4.4-5_amd64.deb dash_0.5.8-2.4_amd64.deb mawk_1.3.3-17+b3_amd64.deb *.deb

dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi


Vasudev Kamath: Overriding version information from setup.py with pbr

July 16, 2017 15:23, by Planet Debian

I recently raised a pull request on zfec for converting its Python packaging from a pure setup.py to a pbr-based one. Today I got a review from Brian Warner, and one of the issues mentioned was that python setup.py --version does not give the same output as the previous version of setup.py.

The previous version used versioneer, which extracts the version information needed from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, the version style, the tag prefix (for the VCS), etc. pbr also extracts version information from git tags, but it expects tags of the form refs/tags/x.y.z, whereas zfec used a zfec- prefix in its tags (for example zfec-1.4.24), which pbr does not process. The end result: I get a version of the form 0.0.devNN, where NN is the number of commits in the repository since its inception.

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it auto-deduces, but there was none other than putting the version string in the PBR_VERSION environment variable. That documentation was contributed to the pbr project by me three years back.

So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION.

import os

from setuptools import setup  # setup() itself comes from setuptools
import versioneer

# Hand the versioneer-computed version to pbr via the only override
# hook it offers: the PBR_VERSION environment variable.
os.environ['PBR_VERSION'] = versioneer.get_version()

...
setup(
    setup_requires=['pbr'],
    pbr=True,
    ext_modules=extensions
)

And I added the snippet below to setup.cfg, which is how versioneer is configured with various information, including tag prefixes.

[versioneer]
VCS = git
style = pep440
versionfile_source = zfec/_version.py
versionfile_build = zfec/_version.py
tag_prefix = zfec-
parentdir_prefix = zfec-
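With both pieces in place, the override can be verified from the project root; assuming the newest tag is zfec-1.4.24, the output should be that version rather than the 0.0.devNN fallback:

python setup.py --version     # now prints 1.4.24 instead of 0.0.devNN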

Though this workaround gets the job done, it does not feel correct to set an environment variable to change the logic of another part of the same program. If you know a better way, do let me know! Also, I should probably consider filing a feature request against pbr to provide a way to pass a tag prefix to the version calculation logic.



Lior Kaplan: PDO_IBM: tracking changes publicly

July 16, 2017 13:13, by Planet Debian

As part of my work at Zend (now a RogueWave company), I maintain various patch sets. One of those is the set of changes for the PDO_IBM extension for PHP.

After some patch exchanges, I decided it would be easier to manage the whole process over a public git repository, and maybe gain some more review/feedback along the way. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches

Another aspect of this is having the IBMi-specific patches from YIPS (young i professionals) at http://www.youngiprofessionals.com/wiki/index.php/XMLService/PHP, which themselves are patches on top of vanilla releases. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches-for-yips

Keeping track of these changes is easier with git’s ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily and make sure the changes can be back- and forward-ported between the vanilla and IBMi versions of the extension.
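For illustration, adapting the patch branch to a new upstream release then looks roughly like this (the remote and tag names are hypothetical):

git fetch upstream                # assuming a remote carrying the vanilla releases
git checkout zend-patches
git rebase new-release-tag        # replay the Zend patches on top of the new release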


Filed under: PHP