November 29, 2015

osmo-pcu and a case for Free Software

By Holger "zecke" Freyther

Last year, Jacob and I worked on the osmo-sgsn of OpenBSC. We improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial grade Smalltalk MAP stack and use it in a real-world production GSM network. While working on and manually testing this stack, we did not use our osmo-pcu software but another proprietary IP-based BTS; after all, we didn't want to debug PCU issues at the same time.

This year Jacob has taken over as maintainer of the osmo-pcu. He started with a fix for a frequent crash (which was introduced because we had come to understand the specification on TBF re-use better, but not the code), he has spent hours and hours reading the specification, studied the log output, fixed defect after defect and then moved on to features. We tried the software at this year's Camp and fixed another round of reliability issues.

Some weeks ago I noticed that the proprietary IP-based BTS had been moved from the desk into the shelf. In contrast to the proprietary BTS, issues in Free Software have a real chance of being resolved. It might take a long time, it might take paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!

November 15, 2015

GSM test network at 32C3, after all

By Harald "LaF0rge" Welte

Contrary to my blog post yesterday, it looks like we will have a private GSM network at the CCC congress again, after all.

It appears that Vodafone Germany (who was awarded the former DECT guard band in the 2015 spectrum auctions) is not yet using it in December, and they agreed that we can use it at the 32C3.

With this approval from Vodafone Germany we can now go to the regulator (BNetzA) and obtain the usual test license. Given that we used to get the license in the past, and that Vodafone has agreed, this should be a mere formality.

For the German language readers who appreciate the language of the administration, it will be a Frequenzzuteilung für Versuchszwecke im nichtöffentlichen mobilen Landfunk (a frequency assignment for experimental purposes in non-public mobile land radio).

So thanks to Vodafone Germany, who enabled us at least this time to run a network again. By end of 2016 you can be sure they will have put their new spectrum to use, so I'm not that optimistic that this would be possible again.

No GSM test network at 32C3

By Harald "LaF0rge" Welte

I currently don't assume that there will be a GSM network at the 32C3.

Ever since OpenBSC was created in 2008, the annual CCC congress was a great opportunity to test OpenBSC and related software with thousands of willing participants. In order to do so, we obtained a test licence from the German regulatory authority. This was never any problem, as there was a chunk of spectrum in the 1800 MHz GSM band that was not allocated to any commercial operator, the so-called DECT guard band. It's called that way as it was kept free in order to ensure there is no interference between 1800 MHz GSM and the neighboring DECT cordless telephones.

Over the decades, it was determined on an EU level that this guard band might not be necessary, or at least not if certain considerations are taken for BTSs deployed in that band.

When the German regulatory authority re-auctioned the GSM spectrum earlier this year, they decided to also auction the frequencies of the former DECT guard band. The DECT guard band was awarded to Vodafone.

This is a pity, as this means that people involved with cellular research or development of cellular technology now have it significantly harder to actually test their systems.

In some other EU member states it is easier, like in the Netherlands or the UK, where the DECT guard band was not treated like any other chunk of the GSM bands, but put under special rules. Not so in Germany.

To make a long story short: Without the explicit permission of one of the commercial mobile operators, it is not possible to run a test/experimental network like we used to run at the annual CCC congress.

Given that

  • the event is held in the city center (where frequencies are typically used and re-used quite densely), and
  • an operator has nothing to gain from permitting us to test our open source GSM/GPRS implementations,

I think there is little chance that this will become a reality.

If anyone has really good contacts to the radio network planning team of a German mobile operator and wants to prove me wrong: Feel free to contact me by e-mail.

Thanks to everyone involved with the GSM team at the CCC events, particularly Holger Freyther, Daniel Willmann, Stefan Schmidt, Jan Luebbe, Peter Stuge, Sylvain Munaut, Kevin Redon, Andreas Eversberg, Ulli (and everyone else whom I may have forgotten, my apologies). It's been a pleasure!

Thanks also to our friends at the POC (Phone Operation Center) who have provided interfacing to the DECT, ISDN, analog and VoIP network at the events. Thanks to roh for helping with our special patch requests. Thanks also to those entities and people who borrowed equipment (like BTSs) in the pre-sysmocom years.

So long, and thanks for all the fish!

Progress on the Linux kernel GTP code

By Harald "LaF0rge" Welte

It is always sad if you start to develop some project and then never get around to finishing it, as there are too many things to take care of in parallel. But then, days only have 24 hours...

Back in 2012 I started to write some generic Linux kernel GTP tunneling code. GTP is the GPRS Tunneling Protocol, a protocol between core network elements in GPRS networks, later extended to be used in UMTS and even LTE networks.

GTP is split into a control plane for management and a user plane carrying the actual user IP traffic of a mobile subscriber. So if you're reading this blog via a cellular internet connection, your data is carried in GTP-U within the cellular core network.

To me as a former Linux kernel networking developer, the user plane of GTP (GTP-U) had always belonged into kernel space. It is a tunneling protocol not too different from many other tunneling protocols that already exist (GRE, IPIP, L2TP, PPP, ...) and for the user plane, all it does is basically add a header in one direction and remove the header in the other direction. Keeping that out of the kernel means shuffling all user data through userspace, which costs real performance, particularly in networks with many subscribers and/or high bandwidth use.
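
Purely to illustrate how thin that header is, here is a minimal sketch of GTPv1-U encapsulation/decapsulation in Python (the TEID and payload are made up; the actual kernel code of course operates on sk_buffs rather than byte strings):

import struct

# GTPv1-U header (3GPP TS 29.281), without optional fields:
#   flags=0x30 (version 1, protocol type GTP), type=0xff (G-PDU),
#   16-bit payload length, 32-bit tunnel endpoint identifier (TEID)
GTPU_HDR = "!BBHI"

def gtpu_encap(teid, payload):
    """One direction of the tunnel: prepend the 8-byte GTP-U header."""
    return struct.pack(GTPU_HDR, 0x30, 0xFF, len(payload), teid) + payload

def gtpu_decap(frame):
    """The other direction: strip the header, return the inner IP packet."""
    flags, msg_type, length, teid = struct.unpack(GTPU_HDR, frame[:8])
    assert msg_type == 0xFF, "not a G-PDU"
    return frame[8:8 + length]

inner = b"...some subscriber IP packet..."   # made-up payload
assert gtpu_decap(gtpu_encap(0x1234, inner)) == inner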

Also, unlike many other telecom / cellular protocols, GTP is an IP-only protocol with no E1, Frame Relay or ATM legacy. It also has nothing to do with SS7, nor does it use ASN.1 syntax and/or some exotic encoding rules. In summary, it is nothing like any other GSM/3GPP protocol, and looks much more like what you're used to from the IETF/Internet world.

Unfortunately I didn't get very far with my code back in 2012, but luckily Pablo Neira (one of my colleagues from netfilter/iptables days) picked it up and carried it forward. However, it then stalled for some time, until it was thankfully picked up by Andreas Schultz; it now receives some attention and discussion, with the clear intention to finish + submit it for mainline inclusion.

The code is now kept in a git repository at

Thanks to Pablo and Andreas for picking this up, let's hope this is the last coding sprint before it goes mainline and gets actually used in production.

Osmocom Berlin meetings

By Harald "LaF0rge" Welte

Back in 2012, I started the idea of having a regular, bi-weekly meeting of people interested in mobile communications technology, not only strictly related to the Osmocom projects and software. This was initially called the Osmocom User Group Berlin. The meetings were held twice per month in the rooms of the Chaos Computer Club Berlin.

There are plenty of people that were or still are involved with Osmocom one way or another in Berlin. Think of zecke, alphaone, 2b-as, kevin, nion, max, prom, dexter, myself - just to name a few.

Over the years, I got "too busy" and was no longer able to attend regularly. Some people kept it alive (thanks to dexter!), but eventually the meetings were discontinued in 2013.

In October 2015, I started a revival of the meetings; two have been held already, and the third is coming up next week, on November 11.

I'm happy that I had the idea of re-starting the meeting. It's good to meet old friends and new people alike. Both times there actually were some new faces around, most of which even had a classic professional telecom background.

In order to emphasize that the focus is strictly not on Osmocom alone (and particularly not only on its users), I decided to rename the event to the Osmocom Meeting Berlin.

If you're in Berlin and are interested in mobile communications technology on the protocol and radio side of things, feel free to join us next Wednesday.

Germany's excessive additional requirements for VAT-free intra-EU shipments

By Harald "LaF0rge" Welte


At my company sysmocom we are operating a small web-shop providing small tools and accessories for people interested in mobile research. This includes programmable SIM cards, SIM card protocol tracers, adapter cables, duplexers for cellular systems, GPS disciplined clock units, and other things we consider useful to people in and around the various Osmocom projects.

We of course ship domestic, inside the EU and world-wide. And that's where the trouble starts, at least since 2014.

What are VAT-free intra-EU shipments?

As many readers of this blog (at least the European ones) know, inside the EU there is a system by which intra-EU sales between businesses in EU member countries are performed without charging VAT.

This is the result of different countries having different VAT rates, and the fact that a business can always deduct the VAT it spends on its purchases from the VAT it has to charge on its sales. In order to avoid having to file VAT return statements in each of the countries of your suppliers, the suppliers simply ship their goods without charging VAT in the first place.

In order to have checks and balances, both the supplier and the recipient have to file declarations to their tax authorities, indicating the sales volume and the EU VAT ID of the respective business partners.

So far so good. This concept was reasonably simple to implement and it makes the life easier for all involved businesses, so everyone participates in this scheme.

Of course there have always been some obstacles, particularly here in Germany. For example, you are legally required to confirm the EU-VAT-ID of the buyer before issuing a VAT-free invoice. This confirmation request can be done online.

However, the German tax authorities invented something unbelievable: a Web-API for confirmation of EU-VAT-IDs that has opening hours. Despite this having rightfully been at the center of ridicule by the German internet community for many years, it still remains in place. So there are certain times of the day when you cannot verify EU-VAT-IDs, and thus cannot sell products VAT-free ;)
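
For reference, the EU-level VIES service run by the European Commission offers the same kind of check. A hedged sketch of what such a confirmation request can look like (Python with the requests library; the endpoint and envelope follow the publicly documented VIES SOAP service, which is not the same as the German qualified-confirmation interface at the BZSt, and the VAT number shown is made up):

import xml.etree.ElementTree as ET
import requests

VIES = "http://ec.europa.eu/taxation_customs/vies/services/checkVatService"
NS = "urn:ec.europa.eu:taxud:vies:services:checkVat:types"

def check_vat(country_code, vat_number):
    """Return True if VIES confirms the given EU VAT ID as valid."""
    envelope = (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        f'<checkVat xmlns="{NS}">'
        f'<countryCode>{country_code}</countryCode>'
        f'<vatNumber>{vat_number}</vatNumber>'
        '</checkVat></soap:Body></soap:Envelope>'
    )
    resp = requests.post(VIES, data=envelope,
                         headers={"Content-Type": "text/xml"}, timeout=30)
    resp.raise_for_status()
    # look for the <valid> element in the SOAP response
    valid = ET.fromstring(resp.text).find(f".//{{{NS}}}valid")
    return valid is not None and valid.text == "true"

print(check_vat("DE", "123456789"))   # made-up number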

But even that is something one has gotten used to living with.


Now in recent years (since January 1st, 2014), the German authorities came up with the concept of the Gelangensbescheinigung. To the German reader, this newly invented word already sounds ugly enough. Literal translation is difficult, as it sounds really clumsy. Think of something like a reaching-its-destination certificate.

So now it is no longer sufficient to simply verify the EU-VAT-ID of the buyer, issue the invoice and ship the goods, but you also have to produce such a Gelangensbescheinigung for each and every VAT-free intra-EU shipment. This document needs to include

  • the name and address of the recipient
  • the quantity and designation of the goods sold
  • the place and month when the goods were received
  • the date of when the document was signed
  • the signature of the recipient (not required in case of an e-mail where the e-mail headers show that the message was transmitted from a server under control of the recipient)

How can you produce such a statement? Well, in the ideal / legal / formal case, you provide a form to your buyer, which he then signs and certifies that he has received the goods in the destination country.

First of all, I find it offensive that I have to ask my customers to make such declarations in the first place. And even if I accept this and go ahead with it, it is my legal responsibility to ensure that the buyer actually fills it in.

What if the customer doesn't want to fill it in or forgets about it?

Then I as the seller am liable to pay 19% VAT on the purchase he made, despite me never having charged those 19%.

So not only do I have to generate such forms and send them with my goods, but I also need a business process for checking for their return and reminding customers whose form has not yet been returned; and in the end they can simply not return it and I lose money. Great.

Track+Trace / Courier Services

Now there are some alternate ways in which a Gelangensbescheinigung can be generated. For example, by a track+trace protocol of the delivery company. However, the requirements on this track+trace protocol are so high that, at least when I checked in late 2013, the track+trace protocol of UPS did not fulfill them. For example, a track+trace protocol usually doesn't show the quantity and designation of goods. Why would it? UPS just moves a package from A to B, and there are no customs involved that would require knowing what's in the package.

Postal Packages

Now let's say you'd like to send your goods by postal service. For low-priced non-urgent goods, that's actually what you generally want to do, as everything else is simply way too expensive compared to the value of the goods.

However, this is only permitted if the postal service you use provides you with a receipt of having accepted your package, containing the following mandatory information:

  • name and address of the entity issuing the receipt
  • name and address of the sender
  • name and address of the recipient
  • quantity and type of goods
  • date of having received the goods

Now I don't know how this works in other countries, but in Germany you will not be able to get such a receipt from the post office.

In fact I inquired several times with the legal department of Deutsche Post, up to the point of sending a registered letter (by Deutsche Post) to Deutsche Post. They have never responded to any of those letters!

So we have the German tax authorities claiming yes, of course you can still do intra-EU shipments to other countries by postal services, you just need to provide a receipt, but then at the same time they ask for a receipt indicating details that no postal receipt would ever show.

Particularly, a postal receipt would never confirm what kind of goods you are sending. How would the postal service know? You hand them a package, and they transfer it. It is - rightfully - none of their business what its content may be. So how can you ask them to confirm that certain goods were received for transport?!?


So in summary:

Since January 1st, 2014, we have German tax regulations in force that make VAT-free intra-EU shipments extremely difficult, if not impossible:

  • The type of receipt they require from postal services is not provided by Deutsche Post, thereby making it impossible to use Deutsche Post for VAT free intra-EU shipments
  • The type of track+trace protocol issued by UPS does not fulfill the requirements, making it impossible to use them for VAT-free intra-EU shipments
  • The only other option is to get an actual receipt from the customer. If that customer doesn't want to provide this, the German seller is liable to pay the 19% German VAT, despite never having charged that to his customer


To me, the conclusion of all of this can only be one:

German tax authorities do not want German sellers to sell VAT-free goods to businesses in other EU countries. They are actively trying to undermine the VAT principles of the EU. And nobody seems to complain about it or even realize there is a problem.

What a brave new world we live in.

small tools: rtl8168-eeprom

By Harald "LaF0rge" Welte

Some time ago I wrote a small Linux command line utility that can be used to (re)program the Ethernet (MAC) address stored in the EEPROM attached to an RTL8168 Ethernet chip.

This is for example useful if you are a system integrator that has its own IEEE OUI range and would like to put its own MAC addresses into devices that contain said Realtek ethernet chips (already pre-programmed with some other MAC address).
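
The address arithmetic behind that is trivial; a hypothetical Python helper for illustration (the OUI shown is made up, and this only constructs the address, it says nothing about the tool's actual EEPROM access):

def mac_from_oui(oui, serial):
    """Combine a vendor OUI (first three octets) with a per-device
    serial number that fills the remaining three octets."""
    assert 0 <= serial <= 0xFFFFFF, "only 2^24 addresses per OUI"
    octets = [int(x, 16) for x in oui.split(":")]
    octets += [(serial >> 16) & 0xFF, (serial >> 8) & 0xFF, serial & 0xFF]
    return ":".join("%02x" % o for o in octets)

print(mac_from_oui("02:12:34", 7))   # -> 02:12:34:00:00:07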

The source code can be obtained from:

small tools: gpsdate

By Harald "LaF0rge" Welte

In 2013 I wrote a small Linux program that can be used to set the system clock based on the time received from a GPS receiver (via gpsd), particularly when a system is first booted. It is similar in purpose to ntpdate, but of course obtains the time not via NTP but from the GPS receiver.

This is particularly useful for RTC-less systems without network connectivity, which come up with a completely wrong system clock that needs to be properly set as soon as the GPS receiver has finally acquired a signal.
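
The underlying idea is small enough to sketch. gpsdate itself is C on top of libgps; here is a purely illustrative Python rendering that speaks gpsd's documented JSON protocol directly (TCP port 2947, ?WATCH command), with the actual clock-setting step stubbed out:

import json
import socket

def gps_time(host="localhost", port=2947):
    """Return the timestamp of the first TPV report gpsd hands us."""
    sock = socket.create_connection((host, port))
    sock.sendall(b'?WATCH={"enable":true,"json":true};\n')
    buf = b""
    while True:
        buf += sock.recv(4096)
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if not line.strip():
                continue
            report = json.loads(line)
            # TPV reports carry a "time" member once the receiver has a fix
            if report.get("class") == "TPV" and "time" in report:
                sock.close()
                return report["time"]   # ISO 8601 string

if __name__ == "__main__":
    # gpsdate would now call settimeofday(); here we merely print the result
    print("GPS time:", gps_time())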

I asked the ntp hackers if they were interested in merging it into the official code base, and their response was (summarized) that with a then-future release of ntpd this would no longer be needed. So the gpsdate program remains an external utility.

So in case anyone else might find the tool interesting: The source code can be obtained from

Deutsche Bank / unstable interfaces

By Harald "LaF0rge" Welte

Deutsche Bank is a large, international bank. They offer services world-wide and are undoubtedly proud of their massive corporate IT department.

Yet, at the same time, they get the most fundamental principle of user/customer-visible interfaces wrong: Don't change them. If you need to change them, manage the change carefully.

In many software projects, keeping the API or other interface stable is paramount. Think of the Linux kernel, where breaking a userspace-visible interface is not permitted. The reasons are simple: If you break that interface, _everyone_ using that interface will need to change their implementation, and will have to synchronize that with the change on the other side of the interface.

The internet online banking system of Deutsche Bank in Germany permits the upload of transactions by their customers in a CSV file format.

And guess what? They change the file format from one day to the other.

  • without informing their users in advance, giving them time to adapt their implementations of that interface
  • without documenting the exact nature of the change
  • adding new fields to the CSV in the middle of the line, rather than at the end of the line, to make sure things break even more

Now if you're running a business and depend on automating your payments using the interface provided by Deutsche Bank, this means that you fail to pay your suppliers on time, and that you hastily drop/delay other (paid!) work in order to figure out what exactly Deutsche Bank decided to change, completely unannounced, from one day to the next.
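
The only halfway robust defence on the consuming side of such an interface is to address columns by header name rather than by position. A small illustrative sketch (Python; the column names are made up, since Deutsche Bank's actual format is exactly what is in question here):

import csv
import io

# A made-up transaction export; a new column appearing in the middle
# of the line does not break header-based access.
data = io.StringIO(
    "Datum;Buchungstext;Betrag;Empfaenger\n"
    "02.11.2015;SEPA-Ueberweisung;-119,00;ACME GmbH\n"
)

for row in csv.DictReader(data, delimiter=";"):
    # Access by name survives column reordering and insertion;
    # row[2] would silently pick up whatever now lives at index 2.
    print(row["Datum"], row["Betrag"], row["Empfaenger"])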

If at all, I would have expected this from a hobbyist kind of project. But seriously, from one of the world's leading banks? An interface that is probably used by thousands and thousands of users? WTF?!?

The VMware GPL case

By Harald "LaF0rge" Welte

My absence from blogging meant that I didn't really publicly comment on the continued GPL violations by VMware, and the 2015 legal case that well-known kernel developer Christoph Hellwig has brought forward against VMware.

The most recent update by the Software Freedom Conservancy on the VMware GPL case can be found at

In case anyone ever doubted: I of course join the ranks of the long list of Linux developers and other stakeholders that consider VMware's behavior completely unacceptable, if not outrageous.

For many years they have been linking modified Linux kernel device drivers and entire kernel subsystems into their proprietary vmkernel software (part of ESXi). As an excuse, they have added a thin shim layer under GPLv2 which they call vmklinux. And to make all of this work, they had to add lots of vmklinux specific API to the proprietary vmkernel. All the code runs as one program, in one address space, in the same thread of execution. So basically, it is at the level of the closest possible form of integration between two pieces of code: Function calls within the same thread/process.

In order to make all this work, they had to modify their vmkernel, implement vmklinux and also heavily modify the code they took from Linux in the first place. So the drivers are not usable with mainline linux anymore, and vmklinux is not usable without vmkernel either.

If all the above is not a clear indication that the multiple pieces of code form one work/program (and subsequently must be licensed under GNU GPLv2), then what ever would be?

To me, it is probably one of the strongest cases one can find about the question of derivative works and the GPL(v2). Of course, all my ramblings have no significance in a court, and the judge may rule based on reports of questionable technical experts. But I'm convinced if the court was well-informed and understood the actual situation here, it would have to rule in favor of Christoph Hellwig and the GPL.

What I really don't get is why VMware puts up the strongest possible defense one can imagine. Not only did they not back down in lengthy out-of-court negotiations with the Software Freedom Conservancy, but they also defend themselves vigorously against the claims in court.

In my many years of doing GPL enforcement, I've rarely seen such dedication and strong opposition. This shows the true nature of VMware as a malicious, unfair entity that doesn't give a damn about other peoples' copyright, the Free Software community and its code of conduct as a whole, or the Linux kernel developers in particular.

So let's hope they waste a lot of money in their legal defense, get a sufficient amount of negative PR out of this to the point of tainting their image, and finally obtain a ruling upholding the GPL.

All the best to Christoph and the Conservancy in fighting this fight. For those readers that want to help their cause, I believe they are looking for more supporter donations.

What I've been busy with

By Harald "LaF0rge" Welte

Those who don't know me personally and/or don't stay in touch more closely might be wondering what on earth happened to Harald in the last >= 1 year?

The answer would be long, but I can summarize it to I disappeared into sysmocom. You know, the company that Holger and I founded four years ago, in order to commercially support OpenBSC and related projects, and to build products around it.

In recent years, the team has been growing to the point where, in 2015, we suddenly had 9 employees and a handful of freelancers working for us.

But then, that's still a small company, and based on the projects we're involved in, that team has to cover a variety of topics (next to the actual GSM/GPRS related work), including

  • mechanical engineering (enclosure design)
  • all types of electrical engineering
    • AC/electrical wiring/fusing on DIN rails
    • AC/DC and isolated DC/DC power supplies (based on modules)
    • digital design
    • analog design
    • RF design
  • prototype manufacturing and testing
  • software development
    • bare-metal bootloader/OS/application on Cortex-M0
    • NuttX on Cortex-M3
    • OpenAT applications on Sierra Wireless
    • custom flavors of Linux on several different ARM architectures (TI DaVinci, TI Sitara)
    • drivers for various peripherals including Ethernet Switches, PoE PSE controller
    • lots of system-level software for management, maintenance, control

I've been involved in literally all of those topics, with more of my time spent on the electronics side than on the software side. And where it was software, more on the bootloader/RTOS side than on applications.

So what did we actually build? It's unfortunately still not possible to disclose fully at this point, but it was all related to marine communications technology. GSM being one part of it, but only one of many in the overall picture.

Given the quite challenging breadth of the tasks at hand and problems to solve, I'm actually surprised how much we could achieve with such a small team in a limited amount of time. But then, there was virtually no time left, which meant no side projects, no blogging, no progress on the various Osmocom Erlang projects for core network protocols, and last but not least no Taiwan holidays this year.

Lately I see light at the end of the tunnel, and there is again a bit more time to get back to old habits, and thus I

  • resurrected this blog from the dead
  • resurrected various project homepages that have disappeared
  • started some more work on actual telecom stuff (osmo-iuh, for example)
  • restarted the Osmocom Berlin Meeting

Weblog + homepage online again

By Harald "LaF0rge" Welte

On October 31st, 2014, I rebooted my main server for a kernel upgrade, and could not mount the LUKS crypto volume ever again. While the technical cause for this remains a mystery until today (it has spawned some conspiracy theories), I finally took some time to recover some bits and pieces from elsewhere. I didn't want this situation to drag on for more than a year...

Rather than bringing online the old content using sub-optimal and clumsy tools to generate static content (web sites generated by docbook-xml, blog by blosxom), I decided to give it a fresh start and try nikola, a more modern and actively maintained tool to generate static web pages and blogs.

The blog is now available at (a redirect from the old /weblog is in place, for those who keep broken links for more than 12 months). The RSS feed URLs are different from before, but there are again per-category feeds so people (and planets) can subscribe to the respective category they're interested in.

And yes, I do plan to blog again more regularly, to make this place not just an archive of a decade of blogging, but a place that is alive and thrives with new content.

My personal web site is available at while my (similarly re-vamped) freelancing business web site is also available again at

I still need to decide what to do about the old site. It still has its old manual web 1.0 structure from the late 1990s.

I've also resurrected and as well as (old content). Next in line is, which I also intend to convert to nikola for maintenance reasons.

November 13, 2015

Things you are permitted to do…

By Sean Moss-Pultz


  • consume
  • alter
  • share
  • redefine
  • rent
  • mortgage
  • pawn
  • sell
  • exchange
  • transfer
  • give away
  • destroy
  • or exclude others from doing these things.

Copyright License

(Seek legal advice)

November 10, 2015

My „new“ issue tracker installation

By Michael "mickeyl" Lauer

I absolutely rely on working with issue trackers for managing features, bugs, and releases. Although it’s always a bit cumbersome to teach (new) clients how to use one properly (it takes quite some hand-holding and improving their tickets), sooner or later they all realize that it’s way better than the chaos we get by tossing Excel sheets back and forth.

Although we shut our company LaTe App-Developers down last year, my colleague and I are still using our old redmine 2.x installation to support existing clients. For new projects, I recently started to look into the current state of issue tracker offerings. My needs are:

  1. Open source – I don’t believe in closed source solutions for such a critical thing in my daily routine.
  2. Simple & efficient – I don’t need a feature monster, I need something that I and my clients can use and that doesn’t get in our way.
  3. Self-hosted on debian 7.x – I know it’s quite some administrative work to get things running smoothly, but once in a while I enjoy this kind of task.
  4. Configurable task states and workflow – My status flow is usually open => reproducible => in progress => testable => closed.
  5. Remote GIT repository integration:
    1. A combined activity screen where I see not only tickets, but also change sets.
    2. Being able to advance the task state with commit messages.
    3. Fetching new data by using git hooks! No cron jobs, no pulling.

As I’ve mentioned, we previously used redmine, and this is what I’m familiar with and what I love to work with. In the past (when my requirements were not yet developed), I also used trac, mantis, and bugzilla – but none of those got me hooked.

Despite being somewhat satisfied with redmine, the last time I researched the market was five years ago, so I set out to do another survey – to see whether there is anything better than redmine meeting my requirements. I’ll spare you the gory details, but after looking and trying for some weeks, there were only three contenders left:

  1. Good ole‘ redmine, this time in version 3.1.
  2. OpenProject, a fork of the (now defunct) ChiliProject – hence a 2nd-order fork of redmine.
  3. Phabricator, the new hotness introduced by Facebook, Inc.

OpenProject looked interesting to me since it basically is still redmine, but with a focus on a more streamlined UI and better usability (and more frequent releases). Installation was pretty good, since finnlabs (the company that is steering its development and offering professional services around the product) provides packages. Everything worked pretty well, but at the end of the day it wasn’t that much of a step up from redmine.

Phabricator is pretty impressive. Installation is painless (on most systems, PHP still has a better out-of-the-box experience than Ruby and Rails) and the web GUI looks amazing. Although the individual parts are very strong and it has a lot of features that redmine lacks, the configuration UI is pretty barebones (for custom ticket states, you have to edit JSON files) and it doesn’t feel as integrated as redmine. It has great potential though; perhaps I just need to play with it for a longer time. I left my installation intact and will use it for an internal project for a while.

For all other projects though, I have decided to come back to redmine. To spice up the look and feel, I’m using the Circle theme from RedmineCRM (who are making some great plugins for redmine) and some of their plugins.

One thing that’s always a bit of a nuisance is git repository integration that works without pulling, and without stalling while the tracker reads new changesets while you are, say, querying tickets. Phabricator comes with a dedicated daemon that eases this part; I think that’s a direction redmine et al. should look into as well. For redmine 3.1, I got it working with their (new to me) repository web service. If you don’t have the repository on the same machine as your tracker installation, the ideal system has the following parts:

  1. I push a new changeset to my repository. A simple post-receive hook on my gitolite installation then calls the redmine web service to trigger the next step, e.g.
    curl "http://<redmine url>/sys/fetch_changesets?key=$apikey&id=$projectid" &
  2. The redmine webservice updates its local mirror of the repository by calling git fetch. This can easily be done with the redmine plugin gitremote.
  3. The redmine installation processes the new changesets, checks for special commit texts, and updates its internal databases accordingly.
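
Since a git hook can be any executable, the trigger from step 1 can just as well be a tiny Python script instead of curl (the redmine URL, $apikey and $projectid are the same placeholders as above):

#!/usr/bin/env python3
# post-receive hook: ask redmine to fetch the changesets we just pushed
import urllib.parse
import urllib.request

REDMINE = "http://redmine.example.com"          # placeholder
PARAMS = urllib.parse.urlencode({"key": "$apikey", "id": "$projectid"})

try:
    urllib.request.urlopen("%s/sys/fetch_changesets?%s" % (REDMINE, PARAMS),
                           timeout=10)
except OSError as err:
    # never block the push just because the tracker is unreachable
    print("redmine notification failed:", err)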

The hardest part of that is debugging when something does not work (as always…). In my case it was a custom SSH port on my machine, which made fetching new changesets fail silently until I realized that :)

I still recommend redmine if you have similar needs to mine. Yes, it may not integrate the latest web technologies and may look a bit rusty (which you can improve by using a theme), but it’s solid and does not get in your way.




November 09, 2015

The bridge Analogy

By Sean Moss-Pultz

Originally, computer programs were not protected by copyrights because they were not considered fixed, tangible works. Object code was distinguished from source code. (Object code is instructions for machines; source code is for humans.) Object code was viewed as a utilitarian good, produced from source code, rather than as a creative work in and of itself.

The U.S. Copyright Office attempted to classify computer programs by drawing an analogy: the blueprints of a bridge and the resulting bridge compared to the source code of a program and the resulting executable object code:


This analogy caused the Copyright Office to issue copyright certificates under its “Rule of Doubt”.

In 1974, the newly established U.S. Commission on New Technological Uses of Copyrighted Works (CONTU) decided that “computer programs, to the extent that they embody an author’s original creation, are proper subject matter of copyright.”

Then in 1980, the U.S. Congress added the definition of “computer program” to existing copyright laws in order to allow the owner of a program to make another copy or adaptation for use on a computer. This legislation, plus court decisions such as Apple v. Franklin, clarified that the Copyright Act gave computer programs the copyright status of literary works. To simplify: Congress said compiling code is like writing War and Peace.

As a result, software companies began to claim that they did not sell their products but rather “licensed” them to customers. Why? Because this enabled them to avoid the transfer of rights to the end-user via the first-sale doctrine. These software license agreements are now called end-user license agreements (EULAs).

For roughly 1,000 years Western Civilization has expanded property rights to democratize who can own what. In the last 30 years, since software licensing began eating the world, we’ve been screwing it all up.

True ownership is as different from licensing as owning land is to renting a house. Licensing specifically precludes ownership. Licensing is a great model for the kings of the tech industry (Google, Amazon, Apple, Facebook) but it’s less than optimal for us serfs (end-users and developers).

To understand why, one has to look back at the history of property…


November 02, 2015


By Sean Moss-Pultz

I’ve begun to amass quite a collection of good books on the theme of Ownership. The quest started a few years ago; thinking about the birth of my son, I realized that what I care most about these days is digital. And as any astute EULA reader would know, we don’t own what we buy from Apple, Google, or Amazon – we merely rent.

I started a company to enable ownership of digital things. I’m interested in how, albeit with a western bias, we have been building up property law for 1,000+ years. But since the 1970s we have been screwing it up.

This is the first post of (hopefully) many as I try to work out what is OWNERSHIP in a more public way.

August 31, 2015

Is anyone still sampling?

By Michael "mickeyl" Lauer

Hi fellow readers, this time we’re talking music. For quite some time, I’ve been wondering about the sound of a certain era which seems to have been abandoned by pretty much all bands. It was the era when sampling technology became affordable, and when a bunch of musicians adopted this (back then very limited) technology in creative ways to use any kind of noise in a musical context.


In this period (say, 1984-1990), bands like „The Art Of Noise“, „Jean-Michel Jarre“, „Depeche Mode“, „Moskwa TV“, and many more emerged, thrilling their audience with sounds that had previously been unheard. Obviously we lost this form of art somewhere along the way – I can’t think of any currently active band embracing these kinds of instruments. Even the aforementioned pioneers have „developed“ (why must every successful band „develop“ and lose the distinct quality that made them big? but that’s a topic for another blog post) and turned their backs on this.


Is it because listeners became tired of it? Or did the massive improvements in sampling quality and length (due to the vast price decline in computer memory, we now have hours of sampling time where back in the 80s we only had seconds), and thus the limitless possibilities, strangle the creativity (again!)?


I still love to hear (and produce) those kinds of sound. So let me ask: Is anyone still sampling? (except sample library vendors, that is)

Cheerio – Gone fishing, err… field recording!


April 24, 2015

Web Navigation Transitions

By Chris Lord

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing peoples’ thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.

Navigation Transitions specification proposal


An API will be suggested that will allow transitions to be performed between page navigations, requiring only CSS. It is intended for the API to be flexible enough to allow animations on different pages to be performed in synchronisation, and for a particular transition state to be selected without it being necessary to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation is operating in a backwards direction, by the user pressing the back button in browser chrome, or when initiated from JavaScript via manipulation of the location or history objects, animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced, and animations will run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced, and those animations also run in reverse.


Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  ...
}

I think this would indeed be nicer, though I think the exact naming might need some work.


When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, for the specified durations of each stylesheet.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.
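
The delay arithmetic from the two examples above, written out (an illustrative snippet in Python, not part of the proposal itself):

import math

def reversed_delay(total, duration, delay=0.0):
    """animation-delay an animation gets when the timeline runs backwards,
    so it ends exactly where it would have started going forwards."""
    return total - (delay + duration)

assert math.isclose(reversed_delay(total=0.5, duration=0.25), 0.25)
assert math.isclose(reversed_delay(total=0.5, duration=0.25, delay=0.1), 0.15)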

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.


When a transition starts, a ‘navigation-transition-start’ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end’ NavigationTransitionEvent will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing for customising the transition based on request data.

JavaScript execution could potentially cause a navigation transition to run indefinitely; it is left to the user agent’s general purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.


Simple slide between two pages:


  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
    body {
      border: 0;
      height: 100%;

    #bg {
      width: 100%;
      height: 100%;
      background-color: red;
  <div id="bg" onclick="window.location='page-2.html'"></div>


page-1-exit.css:

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}


  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
    body {
      border: 0;
      height: 100%;

    #bg {
      width: 100%;
      height: 100%;
      background-color: green;
  <div id="bg" onclick="history.back()"></div>


page-2-enter.css:

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%); }
  to {}
}

I believe that this proposal is easier to understand and use for simpler transitions than Google’s, however it becomes harder to express animations where one element is transitioning to a new position/size in a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode Ctrl+Shift+m).

I would love to hear peoples’ thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

March 29, 2015

API docs live again

By Michael "mickeyl" Lauer

After the outage of the VM everything has been hosted on, we are now almost fully back. I have integrated the DBus API documentation (which had been hosted on the doc subdomain) into the top level documentation which now lives at. The source code has already been moved to and the new mailing list has been alive for a few months at goldelico.

Now that the documentation is live again, I have plans for short, mid, and long term:

1. Short-term I’m working on completing the merge to libgee-0.8 and then cutting the next point release.

2. Mid-term I want to discuss integrating the unmerged branches to the individual subprojects and continue cleaning up.

3. Long-term I’m looking for a new reference hardware platform, funding, and contributors, and have to decide whether to move the existing reference platform to kdbus (or another IPC).

If you have any plans or questions with regards to the initiative and its subprojects, please contact me via the FSO mailing list (preferred) or personally.


March 22, 2015

RFC: Future of SidPlayer, ModPlayer, PokeyPlayer for iOS

By Michael "mickeyl" Lauer

This is a post about the state of SidPlayer, ModPlayer, and PokeyPlayer on iOS.

Coming from the background of the C64 and AMIGA demo scenes, I always thought that every platform needs a way to play back the musical artwork created by those great musicians in the 80s and 90s on machines like the Commodore C64, the AMIGA, and the ATARI XL.

Fast forward to the iPhone: Being excited about the new platform, another guy from the good ole‘ AMIGA days and I started working on SidPlayer in 2008, shortly after Apple opened the developer program for European developers. After some months of work, we had the first version ready for the Apple review, standing on the shoulders of the great libsidplay and the HVSC. Due to libsidplay being GPL, we had to open source the whole iOS app. To our surprise, _this_ has never been a problem with the Apple review.

SidPlayer for iOS was available for some months; then we developed adaptations for AMIGA .mod files (ModPlayer) and Atari XL pokey sound files (PokeyPlayer). In the meantime, iOS development went from being a hobby to our profession (we formed the LaTe App-Developers GbR), which unfortunately had a great impact on our pet projects. Being busy with paid projects, we could not find enough time to do serious updates to the players.

The original plan in 2008 was to create an app that has additional value around the core asset of a high quality retro computing player, such as a retro-museum-in-a-box (giving background information about those classic computing machines) and a community that shares playlists (important given the number of songs), comments, statistics, and ratings. Alas, due to our time constraints during the lifetime of the apps, we could only do small updates in order to fix bugs with newer operating system versions. There was not enough time to add features, do an iPad adaptation, or unify the three distinct player apps. In the meantime, other apps came along that could also play some of those tunes, although we weren’t (and still aren’t) very excited about their user interfaces and sound quality.

The final nail in the coffin came in 2013, when – much to our surprise – out of the blue (not even due to reviewing an update), we received a letter from Apple in which they claimed that our player apps violated the review guidelines, in particular the dreaded sections 2.7 / 2.8, which read „2.7: Apps that download code in any way or form will be rejected.“ and „2.8: Apps that install or launch other executable code will be rejected“. Although we had gotten past this guideline for several years, it now turned into a showstopper – some weeks later, Apple removed our apps from the store.

Unfortunately, those sections really do apply – at least for Sid- and PokeyPlayer. Both players rely on emulating parts of the CPU and custom chip infrastructure of the C64 / Atari XL (hence run „executable“ code, albeit for a foreign processor architecture), and said code gets downloaded from the internet (we didn’t want to ship the actual music files with the app for licensing reasons). ModPlayer actually was an exception, since the .mod format does not contain code but is a descriptive format; however, back then I did not have the energy to argue with Apple on that, so ModPlayer was removed without a valid reason.

In the meantime, my priorities have shifted a bit, and we had to shut down our iOS company LaTe AppDevelopers for a number of reasons. Still, I have great motivation to work on the original goal for those players. Due to the improved hard- and software of the iOS platform, these days we could add some major improvements to the playing routines, such as using recent filter distortion improvements in libsidplay2, audio post-processing with reverb and chorus, etc.

The chance of the existing apps coming back into the store is – thanks to Apple – zero. It wouldn’t be a pleasant experience anyway, since the code base is very old and rather unmaintainable (remember, it was our first app for a new platform, and neither of us had any Mac OS X experience to rely on).

Basically, three questions come to my mind now:

1. Would there be enough interest in a player that fulfills the original goal or is the competition on the store „good enough“?
2. Will it be possible to get past Apple’s review, if we ship the App with all the sound (code) files, thus not downloading any code?
3. How can I fund working on this app? To honor all the countless hours the original authors put into creating the music and the big community working on preserving the files, I want this app to be free for everyone.

As you may have guessed, I do not have any concrete answers (let alone a timeframe), just some ideas and the track record of having created one of the most popular sets of C64/AMIGA/Atari XL music player apps. So I wanted to use this opportunity to gather some feedback. If you have any comments, feel free to send them to me. If you even want to collaborate on such a project, I’m all ears. If there’s sufficient interest, we can create some project infrastructure, e.g. a mailing list.


February 23, 2015

Wayback Machine

By Michael "mickeyl" Lauer

Thanks to the fabulous wayback machine, I have imported my blog posts from between 1999 and 2006. They’re not properly formatted and most of the images are missing, but it’s somewhat interesting to read the things my younger self wrote about 15 years ago.


January 26, 2015

To web or not?

By Michael "mickeyl" Lauer

I have pondered for a long time whether to learn web programming for my customers‘ services app, so that they can access their user & device statistics and crash logs, manage service messages, push messages, etc.

I have now decided not to pursue this path. Web technology is a mess, even more so than mobile technology. It lacks a clear separation of layers, and although many frameworks nowadays use MVC or similar patterns, I feel I have to do too many things at once (web service, HTML templating, CSS design, JavaScript for interactive stuff, etc.) to really make a professional web app.

I’m going to make a mobile client instead, using the technologies I already have mastered and in which I’m productive. Yes, I still want to learn something new, that’s why I’m working with a NoSQL database now for the first time.

Of course the downside is that my customers can no longer use their web browsers to manage all that, but since they have their iPhones and iPads always around anyway, I’m sure they can cope with that.


January 04, 2015

Printing in Polypropylene

By Talpadk

PP bowl printed on a piece of cutting board

I recently purchased a small sample of white polypropylene (PP) plastic from a shop in China.

While it was relatively expensive at ~$11 for 200g worth of plastic, it allowed me to try out printing in PP without buying an entire spool of filament. That seemed wise, since PP isn't supposed to be the easiest thing to print: its thermal contraction should make it more warp prone than ABS, and it additionally is slightly slippery and doesn't stick that well to other materials.

You may then ask why I would want to attempt to print in PP at all; after all, PLA prints just fine… well, sort of, anyway.

Polypropylene is/has the following features:

  • Relatively heat resistant, plastic handles on dishwasher safe cutlery are for instance often made of it.
  • Good chemical resistance.
  • Handles bending and flexing relatively well, living hinges can be made of it.
  • Is relatively soft, not always a good thing.

Well back to the printing business…

For the experiments I used my trusty RepRapPro Huxley with a smaller 0.3mm nozzle; more on that later.
Yes, I really should get that Mendel90 built; that would have allowed me to borrow some 3mm PP welding rod from work.

Anyway, I’m by far not the first to print in polypropylene, but as with NinjaFlex I thought it could use another post on the internet about the material. (Some links to “prior art”: the RapMan wiki and a forum post.)

While it seemed that PP and especially HDPE are good candidates for the print bed I had to try out polyimide and some generic blue masking tape as well.
As I expected they didn’t seem to work too well for me. But didn’t experiment too much with them.

I there for proceeded and bought some cheap plastic cutting boards from Biltema.
They don’t specify the type of plastic but they don’t feel very much like PP so I assume they are made of HDPE.

Once cut to size, I actually managed to print unheated onto the 5mm thick sheet of plastic!
Some of the prints actually stuck too well to the print bed and got damaged while being removed.

I also encountered some problems with jams / the plastic coiling up inside the extruder.
This led me to increase the extrusion temperature to 235C and reduce the speed down to 15/20 mm/s for the perimeter/infill.
In an attempt to reduce the print bed adhesion I used a lower 225C for the first layer.

In hindsight, increasing the temperature might not have been necessary; at least when manually pushing PP@235C and PLA@215C, the pressure seems to be in the same range.
The extruder problems may simply be caused by the PP filament being softer than PLA.
Reducing the print speed might have been enough.

As some of the prints had left thin layers of PP on the print bed surface, and new prints stuck annoyingly well to those spots, I decided to try to sand the surface.
This removed both the PP residue and the grid of ridges in the plastic due to its cutting board origin. After this, the surface seemed to be less problematic with regard to local over-sticking.

I have yet to attempt to heat up this print surface, as I have previously had a bad experience with an experiment using a SAN sheet that warped badly when heated.
Besides, it actually looks quite promising to use the print bed unheated.

While the chopping board isn't that bad or expensive, it is 5mm thick, which is too much for my bulldog clips to handle.
I therefore looked for alternative sources of PP and PE.

The next experiment involved plastic wrap.
Here in Denmark, PVC based wraps have fallen out of favour and have been replaced by PE based products (assumed to be LDPE, as it is soft).

The wrap was applied to a mirror surface and clamped onto the regular print bed.
The first unheated print had way too much warping.
I then cleaned the wrap using rubbing alcohol (which visually roughened the surface a little) and may have heated the bed to 90C. This resulted in a slightly better print, but not quite as good as on the chopping board.
Plastic wrap may be promising, but I quickly stopped playing with it as it would probably have to be glued to the glass, which would complicate the process.

PP printed on tape

Next up was packaging tape.
At least some of it are made of PP, biltema has some brown tape that is I did however just use some clear stuff I had laying in the drawer.

The first attempt, unheated on glass, had too little adhesion.
I then roughened the surface using a scouring pad and heated the bed to 70C, which made the part stick relatively well to the tape.

Thoughts and notes:

  • Running the extruder at 235C might be too warm (stringing in the bowl print)
  • Maybe the glass surface conducts heat away too fast; that might be why the cutting board sticks so well unheated?
    Perhaps experiment with a more insulating / lower heat capacity base material.
  • For flexible/soft materials I expect a thicker filament works better, as it is harder for it to curl up inside the extruder
    (Note to self: Find time to build that Mendel90)
  • Also for softer materials I probably ought to switch to my 0.5mm nozzle.
  • Tape based print bed materials have an advantage over solid ones: if the print is really stuck, peeling the tape off might help to remove the part without damaging it.