May 01, 2016
Right now I'm feeling sad. I really shouldn't, but I still do.
Many years ago I started OpenBSC and Osmocom in order to bring Free
Software into an area where it barely existed before: Cellular
Infrastructure. For the first few years, it was "just for fun", without
any professional users. A FOSS project by enthusiasts. Then we got
some commercial / professional users, and with them funding, paying
for e.g. Holger's and my freelance work. Still, implementing all protocol
stacks, interfaces and functional elements of GSM and GPRS from the
radio network to the core network is something that large corporations
typically spend hundreds of man-years on. So funding for Osmocom GSM
implementations was always short, and we always tried to make the best
out of it.
After Holger and I started sysmocom in 2011, we had a chance to use
funds from BTS sales to hire more developers, and we were growing our
team of developers. We finally could pay some developers other than
ourselves for working on Free Software cellular network infrastructure.
In 2014 and 2015, sysmocom got side-tracked with some projects where
Osmocom and the cellular network was only one small part of a much
larger scope. In Q4/2015 and in 2016, we are back on track, focusing
100% on Osmocom projects, which you can probably see from the much
larger number of commits to the respective project repositories.
By now, we are in the lucky situation that the work we've done in the
Osmocom project on providing Free Software implementations of cellular
technologies like GSM, GPRS, EDGE and now also UMTS is receiving a lot
of attention. This attention translates into companies approaching us
(particularly at sysmocom) regarding funding for implementing new
features, fixing existing bugs and shortcomings, etc. As part of that,
we can even work on much needed infrastructural changes in the software.
So now we are in the opposite situation: There's a lot of interest in
funding Osmocom work, but there are few people in the Osmocom community
interested in and/or capable of following up on that. Some of the early
contributors have moved into other areas, and are now working on
proprietary cellular stacks at large multi-national corporations. Some
others think of GSM as a fun hobby and want to keep it that way.
At sysmocom, we are trying hard to do what we can to keep up with the
demand. We've been looking to add people to our staff, but right now we
are struggling only to compensate for the regular fluctuation of
employees (i.e. keep the team size as is), let alone actually adding new
members to our team to help move free software cellular networks forward.
I am struggling to understand why that is. I think Free Software in
cellular communications is one of the most interesting and challenging
frontiers for Free Software to work on. And there are many FOSS
developers who love nothing more than to conquer new areas of technology.
At sysmocom, we can now offer what would have been my personal dream job
for many years:
- paid work on Free Software that is available to the general public,
rather than something only of value to the employer
- interesting technical challenges in an area of technology where you
will not find the answer to all your problems on Stack Overflow or the wider internet
- work in a small company consisting almost entirely of die-hard
engineers, without corporate managers, marketing departments, etc.
- work in an environment free of Microsoft and Apple software or cloud
services; use exclusively Free Software to get your work done
I would hope that more developers would appreciate such an environment.
If you're interested in helping to move FOSS cellular networks ahead, feel free
to have a look at http://sysmocom.de/jobs or contact us at
firstname.lastname@example.org. Together, we can try to move Free Software for mobile
communications to the next level!
March 27, 2016
This is great news: You can now install a GSM network using apt-get!
Thanks to the efforts of Debian developer Ruben Undheim, there's now
an OpenBSC (with all its flavors like OsmoBSC, OsmoNITB, OsmoSGSN,
...) package in the official Debian repository.
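For those who want to try it, installation should boil down to something like the following. Note that the binary package name shown here is an assumption on my part; check what your Debian release actually ships before copying it verbatim:

```shell
# Update the package index first.
sudo apt-get update

# List the Osmocom-related packages available in your release.
apt-cache search osmocom

# Install the OsmoNITB ("Network In The Box") flavor of OpenBSC.
# NOTE: "osmocom-nitb" is a guessed binary package name -- substitute
# whatever the apt-cache search above reports on your system.
sudo apt-get install osmocom-nitb
```

After that, the daemons can be configured and started like any other Debian-packaged service.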
Here is the link to the e-mail indicating acceptance into Debian:
For the many years that I've been working on the OpenBSC (and wider
Osmocom) projects, I always assumed that distribution packaging is not
really something all that important, as all the people using OpenBSC
surely would be technical enough to build it from source. And in fact, I
believe that building from source brings you one step closer to
actually modifying the code, and thus contribution.
Nevertheless, the project has matured to a point where it is not used
only by developers anymore, and particularly also (God forbid) by
people with limited experience with Linux in general. That such
people still exist is surprisingly hard to realize for somebody like
myself who has spent more than 20 years in Linux land by now.
So all in all, today I think that having packages in a Distribution
like Debian actually is important for the further adoption of the
project - pretty much like I believe that more and better public
I'm looking forward to seeing the first bug reports filed through
bugs.debian.org rather than https://projects.osmocom.org/ . Once that
happens, we'll know that people are actually using the official Debian packages.
As an unrelated side note, the Osmocom project now also has nightly
builds available for Debian 7.0, Debian 8.0 and Ubuntu 14.04 on both
i586 and x86_64 architecture from
nightly builds are for people who want to stay on the bleeding edge of
the code, but who don't want to go through building everything from
scratch. See Holger's post on the openbsc mailing list
for more information.
March 14, 2016
While preparing my presentation for the Troopers 2016 TelcoSecDay
I was thinking once again about the importance of having FOSS
implementations of cellular protocol stacks, interfaces and network
elements in order to enable security researchers (aka hackers) to work on
improving security in mobile communications.
From the very beginning, this was the motivation of creating OpenBSC and
OsmocomBB: To enable more research in this area, to make it at least in
some ways easier to work in this field; to close a little bit of the
massive gap between how easy it is to do applied security research (aka
hacking) in the TCP/IP/Internet world vs. the cellular world.
We have definitely succeeded in that. Many people have successfully used the
various Osmocom projects in order to do cellular security research, and
I'm very happy about that.
However, there is a flip side to that, which I'm less happy about. In
those past eight years, we have not managed to attract a significant
amount of contributions to the Osmocom projects from those people that
benefit most from it: Neither from those very security researchers that
use it in the first place, nor from the Telecom industry as a whole.
I can understand that the large telecom equipment suppliers may think
that FOSS implementations are somewhat of a competition and thus might not
be particularly enthusiastic about contributing. However, the story for
the cellular operators and the IT security crowd is definitely quite
different. They should have no good reason not to contribute.
So as a result of that, we still have a relatively small amount of
people contributing to Osmocom projects, which is a pity. They can
currently be divided into two groups:
- the enthusiasts: People contributing because they are enthusiastic
about cellular protocols and technologies.
- the commercial users, who operate 2G/2.5G networks based on the
Osmocom protocol stack and who either contribute directly or fund
development work at sysmocom. They typically operate small/private
networks, so if they want data, they simply use Wi-Fi. There's thus
not a big interest in or need for 3G or 4G technologies.
On the other hand, the security folks would love to have 3G and 4G
implementations that they could use to talk to either mobile devices
over a radio interface, or towards the wired infrastructure components
in the radio access and core networks. But we don't see significant
contributions from that sphere, and I wonder why that is.
At least that part of the IT security industry that I know typically
works with very comfortable budgets and profit rates, and investing in
better infrastructure/tools is not charity anyway, but an actual
investment into working more efficiently and/or extending the possible
scope of related pen-testing or audits.
So it seems we might want to think about what we could do in order to motivate
such interested potential users of FOSS 3G/4G to contribute to it by
either writing code or funding associated developments...
If you have any thoughts on that, feel free to share them with me by
e-mail to email@example.com.
March 08, 2016
Following up from my last post, I’ve had some time to research and assess the current state of embedding Gecko. This post will serve as a (likely incomplete) assessment of where we are today, and what I think the sensible path forward would be. Please note that these are my personal opinions and not those of Mozilla. Mozilla are gracious enough to employ me, but I don’t yet get to decide on our direction.
The TL;DR: there are no first-class Gecko embedding solutions as of writing.
EmbedLite (aka IPCLite)
EmbedLite is an interesting solution for embedding Gecko that relies on e10s (Electrolysis, Gecko’s out-of-process feature code-name) and OMTC (Off-Main-Thread Compositing). From what I can tell, the embedding app creates a new platform-specific compositor object that attaches to a window, and with e10s, a separate process is spawned to handle the brunt of the work (rendering the site, running JS, handling events, etc.). The existing widget API is exposed via IPC, which allows you to synthesise events, handle navigation, etc. This builds using the xulrunner application target, which unfortunately no longer exists. This project was last synced with Gecko on April 2nd 2015 (the day before my birthday!).
The most interesting thing about this project is how much code it reuses in the tree, and how little modification is required to support it (almost none – most of the changes are entirely reasonable, even outside of an embedding context). That we haven’t supported this effort seems insane to me, especially as it’s been shipping for a while as the basis for the browser in the (now defunct?) Jolla smartphone.
Building this was a pain; on Fedora 22 I was not able to get the desktop Qt build to compile, even after some effort, but I was able to compile the desktop Gtk build (trivial patches required). Unfortunately, there’s no support code provided for the Gtk version and I don’t think it’s worth my time implementing that, given that this is essentially a dead project. A huge shame that we missed this opportunity; this would have been a good base for a lightweight, relatively easily maintained embedding solution. The quality of the work done on this seems quite high to me, after a brief examination.
Spidernode
Node.js using SpiderMonkey ought to provide some interesting advantages over a V8-based Node. Namely, modern language features, asm.js (though I suppose this will soon be supplanted by WebAssembly) and speed. Spidernode is unfortunately unmaintained since early 2012, but I thought it would be interesting to do a simple performance test. Using the (very flawed) technique detailed here, I ran a few quick tests to compare an old copy of Node I had installed (~0.12), current stable Node (4.3.2) and this very old (~0.5) SpiderMonkey-based Node. The SpiderMonkey-based Node was consistently over 3x faster than both the old and current Node (which varied very little in performance between themselves). I don’t think you can really draw any conclusions from this, other than that it’s an avenue worth exploring.
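The linked technique is, as admitted, a crude CPU-bound micro-benchmark. A minimal sketch of that style of test, runnable unchanged under any Node build (the function and timings here are my own illustration, not the original script), might look like this:

```javascript
// Crude micro-benchmark: time a CPU-bound recursive function and
// compare wall-clock time across different Node builds.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const start = Date.now();
const result = fib(30);
const elapsed = Date.now() - start;

console.log(`fib(30) = ${result}, took ${elapsed} ms`);
```

Running the same script under each runtime and comparing the reported times is the whole methodology, which is exactly why it only exercises function calls and integer arithmetic, and why no conclusion beyond "worth exploring" should be drawn from it.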
Many new projects are prototyped (and indeed, fully developed) in Node.js these days, particularly Internet-of-Things projects. If there’s the potential for these projects to run faster, unchanged, this seems like a worthy project to me, even forgetting about the advantages of better language support. It’s sad to me that we’re experimenting with IoT projects here at Mozilla and so many of these experiments don’t promote our technology at all. This may be an irrational response, however.
GeckoView
GeckoView is the only currently maintained embedding solution for Gecko, and is Android-only. GeckoView is an Android project, split out of Firefox for Android and using the same interfaces with Gecko. It provides an embeddable widget that can be used instead of the system-provided WebView. This is not a first-class project from what I can tell; there are many bugs and many missing features, as its use outside of Firefox for Android is not considered a priority. Due to this dependency, however, one would assume that at least GeckoView will see updates for the foreseeable future.
I’d experimented with this in the past, specifically with this project that uses GeckoView with Cordova. I found then that the experience wasn’t great, due to the huge size of the GeckoView library and the numerous bugs, but this was a while ago and YMMV. Some of those bugs were down to GeckoView not using the shared APZC, a bug which has since been fixed, at least for Nightly builds. The situation may be better now than it was then.
This post is built on the premise that embedding Gecko is a worthwhile pursuit. Others may disagree about this. I’ll point to my previous post to list some of the numerous opportunities we missed, partly because we don’t have an embedding story, but I’m going to conjecture as to what some of our next missed opportunities might be.
A less tenuous example: let’s talk about VR. VR is also looking like it might finally break out into the mid/high-end consumer realm this year, with heavy investment from Facebook (via Oculus), Valve/HTC (SteamVR/Vive), Sony (Playstation VR), Microsoft (HoloLens), Samsung (GearVR) and others. Mozilla are rightly investing in WebVR, but I think the real end-goal for VR is an integrated device with no tether (certainly Microsoft and Samsung seem to agree with me here). So there may well be a new class of device on the horizon, with new kinds of browsers and ways of experiencing and integrating the web. Can we afford to not let people experiment with our technology here? I love Mozilla, but I have serious doubts that the next big thing in VR is going to come from us. That there’s no supported way of embedding Gecko worries me for future classes of device like this.
In-vehicle information/entertainment systems are possibly something that will become more of the norm, now that similar devices have become such a commodity. Interestingly, the current big desktop and mobile players have very little presence here, and (mostly awful) bespoke solutions are rife. Again, can we afford to make our technology inaccessible to the people that are experimenting in this area? Is having just a good desktop browser enough? Can we really say that’s going to remain how people access the internet for the next 10 years? Probably, but I wouldn’t want to bet everything on that.
If we want an embedding solution, I think the best way to go about it is to start from Firefox for Android. Due to the way Android used to require its applications to interface with native code, Firefox for Android is already organised in such a way that it is basically an embedding API (thus GeckoView). From this point, I think we should make some of the interfaces slightly more generic and remove the JNI dependency from the Gecko-side of the code. Firefox for Android would be the main consumer of this API and would guarantee that it’s maintained. We should allow for it to be built on Linux, Mac and Windows and provide the absolute minimum harness necessary to allow for it to be tested. We would make no guarantees about API or ABI. Externally to the Gecko tree, I would suggest that we start, and that the community maintain, a CEF-compatible library, at least at the API level, that would be a Tier-3 project, much like Firefox OS now is. This, to me, seems like the minimal-effort and most useful way of allowing embeddable Gecko.
In addition, I think we should spend some effort in maintaining a fork of Node.js LTS that uses SpiderMonkey. If we can promise modern language features and better performance, I expect there’s a user-base that would be interested in this. If there isn’t, fair enough, but I don’t think current experiments have had enough backing to ascertain this.
I think that both of these projects are important, so that we can enable people outside of Mozilla to innovate using our technology, and by osmosis, become educated about our mission and hopefully spread our ideals. Other organisations will do their utmost to establish a monopoly in any new emerging market, and I think it’s a shame that we have such a powerful and comprehensive technology platform and we aren’t enabling other people to use it in more diverse situations.
This post is some insightful further reading on roughly the same topic.
February 24, 2016
Today, I took some time off to attend the court hearing in the GPL
violation/infringement case that Christoph Hellwig has brought against VMware.
I am not in any way legally involved in the lawsuit. However, as a
fellow (former) Linux kernel developer myself, and a long-term Free
Software community member who strongly believes in the copyleft model, I
of course am very interested in this case - and of course in an outcome
in favor of the plaintiff. Nevertheless, the below report tries to
provide an unbiased account of what happened at the hearing today, and
does not contain my own opinions on the matter. I can always write
another blog post about that :)
I blogged about this case before briefly, and
there is a lot of information publicly discussed about the case,
including the information published by the Software Freedom
Conservancy (see the link above, the announcement and the FAQ).
Still, let's quickly summarize the facts:
- VMware is using parts of the Linux kernel in their proprietary ESXi
product, including the entire SCSI mid-layer, USB support, radix tree
and many, many device drivers.
- as is generally known, Linux is licensed under GNU GPLv2, a copyleft license.
- VMware has modified all the code they took from the Linux kernel and
  integrated it into something they call vmklinux.
- VMware has modified their proprietary virtualization OS kernel
  vmkernel with specific APIs/symbols to interact with vmklinux
- at least in earlier versions of ESXi, virtually any block device
  access has to go through vmklinux and thus through the portions of Linux code they took
- vmklinux and vmkernel are dynamically linked object files that are
linked together at run-time
- the Linux code they took runs in the same execution context (address
  space, stack, control flow) as the vmkernel.
Ok, now enter the court hearing of today.
Christoph Hellwig was represented by his two German Lawyers,
Dr. Till Jaeger and
Dr. Miriam Ballhausen.
VMware was represented by three German lawyers led by
as well as a US attorney,
(by means of two simultaneous interpreters). There were also several
members of the in-house US legal team of VMware present, but not
formally representing the defendant in court.
Unusually for a copyright dispute, there was quite a large audience
following the hearing. Besides the VMware entourage, there were also a
couple of fellow Linux kernel developers as well as some German IT press
representatives following the hearing.
General Introduction of the presiding judge
After some formalities (like the question whether or not a ',' is
missing after the "Inc." in the way it is phrased in the lawsuit), the
presiding judge started with some general remarks
- the court is well aware of the public (and even international public)
interest in this case
- the court understands there are novel fundamental legal questions
  raised that no court - at least no German court - has so far had to decide
- the court also is well aware that the judges on the panel are not
technical experts and thus not well-versed in software development or
  computer science. Rather, they are a court specialized in all sorts
  of copyright matters, not particularly related to software.
- the court further understands that Linux is a collaborative,
community-developed operating system, and that the development process
is incremental and involves many authors.
- the court understands there is a lot of discussion about interfaces
between different programs or parts of a program, and that there are a
  variety of different definitions and many interpretations of what
  constitutes an interface
Presentation about the court's understanding of the subject matter
The presiding judge continued to explain the court's understanding of
the subject matter. They understood that VMware ESXi serves to virtualize
computer hardware in order to run multiple copies of the same or of
different versions of operating systems on it. They also understand
that vmkernel is at the core of that virtualization system, and that it
contains something called vmkapi, which is an interface towards Linux.
However, they initially misunderstood the case as being about the
interface between vmkernel and a Linux guest OS virtualized on top of
vmkernel. It took
both defendant and plaintiff some time to illustrate that in fact this
is not the subject of the lawsuit, and that you can still have portions
of Linux running linked into vmkernel while exclusively
virtualizing Windows guests on top of vmkernel.
The court went on to share their understanding of the GPLv2 and its
underlying copyleft principle: that it is not about abandoning the
authors' rights but, to the contrary, about exercising copyright. They
understood the license has implications on derivative works and
demonstrated that they had been working with both the German
translation as well as the English-language original text of GPLv2. At
least I was sort-of impressed by the way they grasped it - much better
than some of the other courts that I had to deal with in the various
cases I was bringing forward during my gpl-violations.org work before.
They also illustrated that they understood that Christoph Hellwig has
been developing parts of the Linux kernel, and that modified parts of
Linux were now being used in some form in VMware ESXi.
After this general introduction, there was the question of whether or
not both parties would still want to settle before going further. The
court already expected that this would be very unlikely, as it
understood that the dispute serves to resolve fundamental legal
questions, and there is hardly any compromise in the middle between
using or not using the Linux code, or between licensing vmkernel under a
GPL compatible license or not. And as expected, there was no indication
from either side that they could see an out-of-court settlement of the
dispute at this point.
Discussion of specific Legal Issues (standing)
In terms of the legal arguments brought forward in hundreds of pages of
legal briefs being filed between the parties, the court summarized:
- they do not see a problem in the fact that the lawsuit by Christoph
Hellwig may be funded or supported by the Software Freedom
  Conservancy. Christoph is acting on his own behalf, using his own rights.
- they do not see any issues regarding the place of jurisdiction being
placed in Hamburg, Germany, as the defendant is providing the disputed
software via the Internet, which according to German law permits the
plaintiff to choose any court within Germany. The court added, of
course, that whatever verdict it may rule, this verdict will be
limited to the German jurisdiction.
- In terms of the type of authors' right being claimed by the plaintiff,
there was some discussion about paragraph 3 vs. 8 vs. 9 of German
UrhG (the German copyright law). In general it is understood that
the development method of the Linux kernel is a sequential,
incremental development process, and thus it is what we call
  Bearbeiterurheberrecht (loosely translated as the modifying/editing
  author's right) that is used by Christoph to make his claim.
Right to sue / sufficient copyrighted works of the plaintiff
There was quite some debate about the question whether or not the
plaintiff has shown that he actually holds a sufficient amount of
copyrighted works.
The question here is not whether Christoph has sufficient copyrightable
contributions on Linux as a whole, but for the matter of this legal case
it is relevant which of his copyrighted works end up in the disputed
product VMware ESXi.
Due to the nature of the development process where lots of developers
make intermittent and incremental changes, it is not as straightforward
to demonstrate this as one would hope. You cannot simply print an
entire C file from the source code and mark large portions as being
written by Christoph himself. Rather, lines have been edited again and
again, were shifted, re-structured, re-factored. For non-developers
like the judges, it is therefore not obvious how to decide this question.
This situation is used by the VMware defense in claiming that overall,
they could only find very few functions that could be attributed to
Christoph, and that this may altogether be only 1% of the Linux code
they use in VMware ESXi.
The court recognized this as difficult, as in German copyright law there
is the concept of fading. If the original work by one author has been
edited to an extent that it is barely recognizable, his original work
has faded and so have his rights. The court did not state whether it
believed that this has happened. To the contrary, they indicated that it
may very well be that only very few lines of code can actually make a
significant impact on the work as a whole. However, it is problematic
for them to decide, as they don't understand source code and software
development. So if (after further briefs from both sides and deliberation of the
court) this is still an open question, it might very well be the case
that the court would request a technical expert report to clarify this
to the court.
Are vmklinux + vmkernel one program/work or multiple programs/works?
Finally, there was some deliberation about the very key question of
whether or not vmkernel and vmklinux were separate programs / works
or one program / work in the sense of copyright law. Unfortunately only
the very surface of this topic could be touched in the hearing, and the
actual technical and legal arguments of both sides could not be heard.
The court clarified that if vmkernel and vmklinux were considered
one program, then indeed their use outside of the terms of the GPL
would be an intrusion into the rights of the plaintiff.
The difficulty is how to actually venture into the legal implications of
certain technical software architecture, when the people involved have
no technical knowledge on operating system theory, system-level software
development and compilers/linkers/toolchains.
A lot is thus left to how well and how 'believably' the parties can present
their case. It was very clear from the VMware side that they wanted to
down-play the role and proportion of vmkernel and its Linux heritage.
At times their lawyers made statements like "Linux is this small yellow
box in the left bottom corner (of our diagram)". So of course the
diagrams themselves are already drawn in a way that twists the facts
according to their view on reality.
- The court seems very much interested in the case and wants to
understand the details
- The court recognizes the general importance of the case and the public
interest in it
- There were some fundamental misunderstandings on the technical
architecture of the software under dispute that could be clarified
- There are actually not that many facts that are disputed between both
sides, except the (key, and difficult) questions on
- does Christoph hold sufficient rights on the code to bring forward the legal case?
- are vmkernel and vmklinux one work or two separate works?
The remainder of this dispute will thus be centered on the latter two
questions - whether in this court or in any higher courts that may have
to re-visit this subject after either of the parties takes this further,
if the outcome is not in their favor.
In terms of next steps,
- both parties have until April 15, 2016 to file further briefs to
follow-up the discussions in the hearing today
- the court scheduled May 19, 2016 as date of promulgation. However,
this would of course only hold true if the court would reach a clear
decision based on the briefs by then. If there is a need for an
expert, or any witnesses need to be called, then it is likely there
will be further hearings and no verdict will be reached by then.
Strap yourself in, this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity, to show how Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years, and to argue that it would still be advantageous to make Gecko embeddable.
Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd-party application on any supported platform, and maintaining that support. An embeddable Gecko should impose very few constraints on the embedding application and should not include unnecessary resources. Some example use-cases:
- A 3rd party browser with a native UI
- A game’s embedded user manual
- OAuth authentication UI
- A web application
It’s hard to predict what the next technology trend will be, but there is a strong likelihood it’ll involve the web, and there’s a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.
Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies that could have leveraged Gecko:
(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.
(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and light-weightedness, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.
(2008) Android: Android used WebKit for its built-in browser and later for its built-in web-view. In recent times, it has switched to Chromium, showing they aren’t averse to switching the platform to a different/better technology, and that a better embedding story can benefit a platform (Android’s built-in web view can now be updated outside of the main OS, and this may well partly be thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until they switched to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.
(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko was readily available at this point, we would have had a large head start on FirefoxOS?
(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.
(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on it, and many of the sites that use PhantomJS for testing might then have better rendering and performance characteristics in Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko, was developed and released in 2013. Due to Gecko’s embedding deficiencies, though, SlimerJS is not truly headless.
(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development-kit stage, WIMM was bought by Google in 2012. It is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko at the time, it’s not outside the realm of possibility that this platform could have been Gecko-based.
(2013) Blink: Google decided to fork WebKit to better build for their own uses. Blink/Chromium quickly became the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important feature to maintain.
(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS might have led us to an easy presence in this area, but has now been largely discontinued.
(2014) Atom/Electron: GitHub open-sourced its web-based text editor, which it had built on a home-grown platform of Node.js and Chromium, later named Electron. Since then, several large and very successful projects have been built on top of it, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine and, importantly, a more widely used one.
(2016) Brave: A former Mozilla co-founder and CTO heads a company that makes a new browser with the selling point of blocking ads and tracking by default, and doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based on Chromium, and on iOS is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started based on Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).
Current state of affairs
WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but is generally viewed as a less up-to-date and less performant engine than Chromium/Blink.
Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.
It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think the examples of our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible is a good indicator of future success.
If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be that we make this higher priority than it has been in the past.
February 23, 2016
It seems my recent concerns about the OpenAirInterface re-licensing were
justified: I contacted various legal experts in the Free Software legal
community about this, and the response was unanimous. In all feedback I
received, the general opinion was that software under the OSA Public
License V1.0 is neither Free Software nor Open Source Software.
The rationale is that it does not fulfill the criteria of
- the FSF Free Software definition, as the license does
not grant freedom 0: the freedom to run the program as you wish,
for any purpose (which obviously includes commercial use)
- the Open Source Initiative's Open Source Definition, which requires
that the license must not discriminate against fields of endeavor,
such as commercial use
- the Debian Free Software Guidelines, as the DFSG
also require no discrimination against fields of endeavor, such as
commercial use.
I think we as the community need to be very clear about this. We should
not easily tolerate people putting software under restrictive licenses
while still calling that software open source. This creates a bad
impression on those not familiar with the culture and spirit of both
Free Software and Open Source. It creates the impression that people can
call something Open Source but then still ask royalties for it if used
commercially.
It is a shame that entities like Eurecom and the OpenAirInterface
Software Association are open-washing their software by calling it
Open Source when in fact it isn't. This attitude frankly makes me sick.
That's just like green-washing, when companies like BP claim
they're now an environmentally friendly company just because they put
some solar panels on the roof of some building.
February 20, 2016
In 2008, we started bs11-abis, which was shortly after renamed to
OpenBSC. At the time it seemed like a good idea to use
trac as the project management system,
to have a wiki and an issue tracker.
When further Osmocom projects like OsmocomBB, OsmocomTETRA etc. came
around, we simply replicated that infrastructure: Another trac instance
with the same theme, and a shared password file.
The problem with this (and possibly the way we used it) is:
- it doesn't scale, as creating projects is manual, requires a sysadmin
and is time-consuming. This meant e.g. SIMtrace was just a wiki page
in the OsmocomBB trac installation + associated http redirect, causing
confusion.
- issues can not easily be moved from one project to another, or have
cross-project relationships (like depending on an issue in another
project)
- we had to use an external planet in order to aggregate the blog of
each of the trac instances
- user account management the way we did it required shell access to the
machine, meaning user account applications got dropped due to the
effort involved. My apologies for that.
Especially the inability to move pages and tickets between trac
instances has resulted in a suboptimal use of the tools. If we first write
code as part of OpenBSC and then move it to libosmocore, the associated
issues + wiki pages should be moved to a new project.
At the same time, for the last 5 years we've been successfully using
redmine inside sysmocom to keep track of
many dozens of internal projects.
So now, finally, we (zecke, tnt, myself) have taken up the task to
migrate the osmocom.org projects into redmine. You can see the current
status at http://projects.osmocom.org/. We could create a more
comprehensive project hierarchy, and give libosmocore, SIMtrace,
OsmoSGSN and many others their own project.
Thanks to zecke for taking care of the installation/sysadmin part and
the initial conversion!
Unfortunately the conversion from trac to redmine wiki syntax (and
structure) was not as automatic and straight-forward as one would have
hoped. But after spending one entire day going through the most
important wiki pages, things are looking much better now. As a side
effect, I have had a more comprehensive look into the history of all of
our projects than ever before :)
Still, a lot of clean-up and improvement is needed until I'm happy;
particularly splitting the OpenBSC wiki into separate OsmoBSC, OsmoNITB,
OsmoBTS, OsmoPCU and OsmoSGSN wikis is probably still going to take
a while.
If you would like to help out, feel free to register an account on
projects.osmocom.org (if you don't already have one from the old trac
projects) and mail me for write access to the project(s) of your choice.
Possible tasks include
- putting pages into a more hierarchic structure (there's a parent/child
relationship in redmine wikis)
- fixing broken links due to page renames / wiki page moves
- creating a new redmine 'Project' for your favorite tool that has a git
repo on http://git.osmocom.org/ and writing some (at least initial)
documentation about it.
You don't need to be a software developer for that!
February 19, 2016
After quite some time of gradual bug fixing and improvement, there have
been some significant changes in OsmoBTS over the last months.
Just a quick reminder: In Fall 2015 we finally merged the long-pending
L1SAP changes originally developed by Jolly, introducing a new
intermediate common interface between the generic part of OsmoBTS, and
the hardware/PHY specific part. This enabled a clean structure between
osmo-bts-sysmo (what we use on the sysmoBTS) and osmo-bts-trx (what
people with general-purpose SDR hardware use).
The L1SAP changes had some fall-out that needed to be fixed; not a big
surprise with any change that big.
More recently however, three larger changes were introduced:
phy_link / phy_instance abstraction
There now is the concept of a phy_link, each of which can have multiple
phy_instances. Each instance represents one baseband transceiver, i.e.
a software or hardware unit driving a TRX inside a BTS.
Every BTS model has been converted to use this new abstraction layer.
proper Multi-TRX support
Based on the above phy_link/phy_instance infrastructure, one can map
each phy_instance to one TRX by means of the VTY / configuration file.
The core of OsmoBTS now supports any number of TRXs, leading to
flexible Multi-TRX support.
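To illustrate, mapping PHY instances to TRXs in the osmo-bts configuration file looks roughly like the sketch below. This is reproduced from memory as an illustration only; the authoritative syntax is in the OsmoBTS VTY reference:

```
phy 0
 instance 0
 instance 1
bts 0
 trx 0
  phy 0 instance 0
 trx 1
  phy 0 instance 1
```

Each instance under a phy represents one baseband transceiver; each trx then binds itself to exactly one such instance.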
A Canadian company called Octasic has been developing a custom GSM PHY
for their custom multi-core DSP architecture (OCTDSP). Rather than
re-inventing the wheel for everything on top of the PHY, they chose to
integrate OsmoBTS on top of it. I've been working at sysmocom on
integrating their initial code into OsmoBTS, resulting in the new
osmo-bts-octphy back-end.
This back-end has also recently been ported to the phy_link/phy_instance
API and is Multi-TRX ready. You can both run multiple TRXs in one DSP,
as well as have multiple DSPs in one BTS, paving the road for
high-capacity multi-TRX systems.
osmo-bts-octphy is now part of OsmoBTS master.
Corresponding changes to OsmoPCU (for full GPRS support on OCTPHY) are
currently being worked on by Max at sysmocom.
Litecell 1.5 PHY support
Another Canadian company (Nutaq/Nuran) has been building a new BTS
called Litecell 1.5. They also implemented OsmoBTS support, based on
the osmo-bts-sysmo code. We've been able to integrate that code with
the above-mentioned phy_link/phy_interface in order to support the
MultiTRX capability of this hardware.
Litecell 1.5 MultiTRX capability has also been integrated with OsmoPCU.
osmo-bts-litecell15 is now part of OsmoBTS master.
- 2016 starts as the OsmoBTS year of MultiTRX.
- 2016 also starts as a year of many more hardware choices for OsmoBTS
- we see more commercial adoption of OsmoBTS outside of the traditional
options of sysmocom and Fairwaves
February 14, 2016
I've had the pleasure of being invited to netdevconf 1.1 in Seville, Spain.
After about a decade of absence from the Linux kernel networking
community, it was great to meet lots of former colleagues again, as well
as to see what kind of topics are currently being worked on and under
discussion.
The conference had a really nice spirit to it. I like the fact that it
is run by the community itself. Organized by respected members of the
community. It feels like Linux-Kongress or OLS or UKUUG or many others
felt in the past. There's just something that got lost when the Linux
Foundation took over (or pushed aside) virtually any other Linux kernel
related event on the planet in the past :/ So thanks to Jamal for
starting netdevconf, and thanks to Pablo and his team for running this
particular instance of it.
I never really wanted to leave netfilter and the Linux kernel network
stack behind - but then my problem appears to be that there are simply
way too many things of interest to me, and I had to venture first into
RFID (OpenPCD, OpenPICC), then into smartphone hardware and software
(Openmoko) and finally embark on a journey of applied telecoms
archeology by starting OpenBSC, OsmocomBB and various other Osmocom
projects.
Staying in Linux kernel networking land was simply not an option with a
scope that can only be defined as wide as wanting to implement any
possible protocol on any possible interface of any possible generation
of cellular network.
At times like attending netdevconf I wonder if I made the right choice
back then. Linux kernel networking is a lot of fun and hard challenges,
too - and it is definitely an area that's much more used by many more
organizations and individuals: The code I wrote on netfilter/iptables
is probably running on billions of devices by now. Compare that to the
Osmocom code, which is probably running on a few thousands of devices,
if at all. Working on Open Source telecom protocols is sometimes a
lonely fight. Not that I wouldn't value the entire team of developers
involved in it, to the contrary. But lonely in the sense that 99.999%
of that world is a proprietary world, and FOSS cellular infrastructure
is just the 0.001% at the margin of all of that.
On the Linux kernel side, you have virtually every IT company putting in
their weight these days, and properly funded development is not that
hard to come by. In cellular, reasonable funding for anything (compared
to the scope and complexity of the tasks) is rather the exception than
the rule.
But no, I don't have any regrets. It has been an interesting journey and
I probably had the chance to learn many more things than if I had stayed
in Linux kernel networking.
If only each day had 48 hours and I could work both on Osmocom and on
the Linux kernel...
January 31, 2016
In the recent FOSDEM 2016 SDR Devroom, the
Q&A session following a presentation on OpenAirInterface touched the topic of its
controversial licensing. As I happen to be involved deeply with Free
Software licensing and Free Software telecom topics, I thought I might
have some things to say about this topic. Unfortunately the Q&A session
was short, hence this blog post.
As a side note, the presentation was certainly the least technical
presentation in the entire FOSDEM SDR track, and that in front of a
deeply technical audience. It was probably also the only presentation at
all of FOSDEM talking a lot about "Strategic Industry Partners".
Let me also state that I actually have respect for what OAI/OSA has been
and still is doing. I just don't think it is attractive to the Free
Software community - and it might actually not be Free Software at all.
OpenAirInterface / History
Within EURECOM, a group around Prof.
Raymond Knopp has been working on a Free Software implementation of all
layers of the LTE (4G) system known as OpenAirInterface. It includes the physical layer
and goes through to the core network.
The OpenAirInterface code was for many years under GPL license (GPLv2,
other parts GPLv3). Initially the SVN repositories were not public
(despite the license), but after some friendly mails one (at least I)
could get access.
I've read through the code at several points in the past; it often
seemed much more like a (quick and dirty?) proof-of-concept
implementation to me than anything more general-purpose. But then,
that might have been a wrong impression on my behalf, or it might be
that this was simply sufficient for the kind of research they wanted to
do. After all, scientific research and FOSS often have a complicated
relationship. Researchers naturally have their papers as primary output
of their work, and software implementations often are more like a
necessary evil than the actual goal. But then, I digress.
Now at some point in 2014, a new organization, the OpenAirInterface
Software Association (OSA), was established. The idea apparently was to
get involved with the tier-1 telecom suppliers (like Alcatel, Huawei,
Ericsson, ...) and work together on an implementation of Free Software
for future mobile data, so-called 5G technologies.
The existing GPLv2/GPLv3 license of the OpenAirInterface code of course
would have meant that contributions from the patent-holding telecom
industry would have to come with appropriate royalty-free patent
licenses. After all, of what use is it if the software is free in terms
of copyright licensing, but then you still have the patents that make it
effectively proprietary?
Now the big industry of course wouldn't want to do that, so the OSA
decided to re-license the code-base under a new license.
As we apparently don't yet have sufficient existing Free Software
licenses, they decided to create a new license. That new license (the
OSA Public License V1.0
not only does away with copyleft, but also does away with a normal
patent grant.
This is very sad in several ways:
- license proliferation is always bad. Major experts and basically all
major entities in the Free Software world (FSF, FSFE, OSI, ...) are
opposed to it and see it as a problem. Even companies like Intel and
Google have publicly raised concerns about license proliferation.
- abandoning copyleft. Many people particularly from a GNU/Linux
background would agree that copyleft is a fair deal. It ensures that
everyone modifying the software will have to share such modifications
with other users in a fair way. Nobody can create proprietary
derivatives.
- taking away the patent grant. Even the non-copyleft Apache 2.0
License the OSA used as template has a broad patent grant, even for
commercial applications. The OSA Public License has only a patent
grant for use in a research context.
In addition to this license change, the OSA also requires a copyright
assignment from all contributors.
What kind of effect does this have in case I want to contribute?
- I have to sign away my copyright. The OSA can at any given point
in time grant anyone whatever license they want to this code.
- I have to agree to a permissive license without copyleft, i.e.
everyone else can create proprietary derivatives of my work
- I do not even get a patent grant from the other contributors (like
the large Telecom companies).
So basically, I have to sign away my copyright, and I get nothing in
return. No copyleft that ensures other people's modifications will be
available under the same license, no patent grant, and I don't even keep
my own copyright to be able to veto any future license changes.
My personal opinion (and apparently those of other FOSDEM attendees) is
thus that the OAI / OSA invitation to contributions from the community
is not a very attractive one. It might all be well and fine for large
industry and research institutes. But I don't think the Free Software
community has much to gain in all of this.
Now OSA will claim that the above is not true, and that all contributors
(including the Telecom vendors) have agreed to license their patents
under FRAND conditions to all other contributors. It even seemed to me
that the speaker at FOSDEM believed this was something positive in any
way. I can only laugh at that ;)
FRAND (Fair, Reasonable and Non-Discriminatory) is a frequently invoked
buzzword for patent licensing schemes. It isn't actually defined
anywhere, and is most likely just meant to sound nice to people who
don't understand what it really means. Like, let's say, political
decision-makers.
In practice, it is a disaster for individuals and small/medium-sized
companies. I can tell you first hand from having tried to obtain patent
licenses from FRAND schemes before. While they might have reasonable
per-unit royalties and they might offer those royalties to everyone,
they typically come with ridiculous minimum annual fees.
For example let's say they state in their FRAND license conditions you
have to pay 1 USD per device, but a minimum of USD 100,000 per year. Or
a similarly large one-time fee at the time of signing the contract.
That's of course very fair to the large corporations, but it makes it
impossible for a small company who sells maybe 10 to 100 devices per
year, as the 100,000 / 10 then equals USD 10k per device in terms of
royalties. Does that sound fair and non-discriminatory to you?
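To make the effect of such minimum fees concrete, here is a small sketch (using the hypothetical numbers from the text) of how the effective per-device royalty explodes for low-volume vendors:

```c
/* Effective per-device royalty under a FRAND-style scheme that
 * combines a nominal per-unit rate with a minimum annual fee.
 * The numbers used below are the hypothetical ones from the text. */
static double effective_royalty(double per_unit, double annual_min,
                                double units_per_year)
{
    double total = per_unit * units_per_year;
    if (total < annual_min)
        total = annual_min;     /* the minimum fee kicks in */
    return total / units_per_year;
}
```

With USD 1 per device and a USD 100,000 minimum, a vendor shipping a million units pays the nominal USD 1 per device, while one shipping 10 units effectively pays USD 10,000 per device.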
OAI/OSA are trying to get a non-commercial / research-oriented foot into
the design and specification process of future mobile telecom network
standardization. That's a big and difficult challenge.
However, the decisions they have taken in terms of licensing show that
they are primarily interested in aligning with the large corporate
telecom industry, and have thus created something that isn't really Free
Software (missing non-research patent grant) and might in the end only
help the large telecom vendors to uni-directionally consume
contributions from academic research, small/medium-sized companies and
individuals.
In the good tradition of writing a blog post on my way back from FOSDEM (see earlier installments, e.g. for 2012, 2010, 2008, and 2007), here is this year’s take.
No issues with transportation this time (I’m still in the train, but it looks good so far), other than road construction works at the venue, which itself seems to establish a tradition now
This year I stayed in the Be Manos hotel – near Gare du Midi – which was quite nice. Since I find myself too old for the pre-FOSDEM beer event, I did not attend it. I had my share of Leffe Bruin (my favorite Belgian beer) in the hotel lobby though.
The temperature was around +8°C, much better than FROZDEM 2012, where it was -20°C.
I saw 5 presentations, 4 of which were quite good. That’s a better ratio than in previous years. My favorite talk was given by Carsten ‚Rasterman‘ Haitzler, an ex-Openmoko colleague who is now working for Samsung on Tizen’s graphical subsystems (EFL-based).
Most important though were the people I met, a lot of old and new friends, in particular Phil Blundell, Harald Welte, Daniel Willmann, Jan Lübbe, Marcin ‚HRW‘ J., Paul Espin, Florian Boor, and many more. Seeing all of you alive and kicking gave me a lot of positive energy!
I’m returning excited and with many mental notes of things to check out and the motivation to make a major contribution to at least one open project this year. See you all soon!
The post Coming back from FOSDEM 2016 first appeared on Vanille.de.
January 18, 2016
Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue or look at it systematically to improve your network, your roaming traffic, etc.
The first approach might be to run tcpdump on each of these systems, run it in a round-robin manner, compress the old traffic and then have a script that downloads/uploads it once a day to a central place. The issue is that each node needs to have enough disk space, you might not feel happy to keep old files on the edge or you just don't know when is a good time to copy it.
Another approach is to create an aggregation framework. A client will use libpcap to capture the traffic and then redirect it to a central server. The central server will then store the traffic and might rotate based on size or age of the file. Old files can then be compressed and removed.
I created the osmo-pcap tool many years ago and have recently fixed a
64bit PCAP header issue (the timeval in the header is 32bit) and the
collection of jumbo frames. I have now also updated the README.md file
of the project, created packages for Debian, Ubuntu, CentOS, OpenSUSE
and SLES, and made sure that it can be compiled and used on FreeBSD 10
as well.
If you are using this software, or decided not to use it, I would be
very happy to hear about it.
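For context, the on-disk PCAP per-record header that causes this kind of 32bit/64bit trouble looks like the following (field names as in the libpcap file-format documentation):

```c
#include <stdint.h>

/* Classic libpcap on-disk per-record header.  The timestamp fields
 * are fixed 32-bit quantities regardless of the platform's struct
 * timeval; writing a native (64-bit) struct timeval verbatim would
 * corrupt the file -- presumably the kind of 64bit header issue
 * mentioned above. */
struct pcaprec_hdr {
    uint32_t ts_sec;    /* timestamp, seconds */
    uint32_t ts_usec;   /* timestamp, microseconds */
    uint32_t incl_len;  /* number of octets saved in the file */
    uint32_t orig_len;  /* original length of the packet on the wire */
};
```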
January 03, 2016
While I was still active in the Linux kernel development / network
security field, I was regularly attending 10 to 15 conferences per year.
Doing so is relatively easy if you earn a decent freelancer salary and
are working all by yourself. Running a company funded out of your own
pockets, with many issues requiring (or at least benefiting from)
personal physical presence in the office, changes that.
Nevertheless, after some years of being less of a conference speaker,
I'm happy to see that the tide is somewhat changing in 2016.
After my talk at 32C3, I'm looking forward to attending (and sometimes
speaking) at events in the first quarter of 2016. Not sure if I can
keep up that pace in the following quarters...
FOSDEM (http://fosdem.org/2016) a classic, and I don't even remember for
how many years I've been attending it. I would say it is fair to state
it is the single largest event specifically by and for
community-oriented free software developers. Feels like home every
time.
netdevconf (http://www.netdevconf.org/1.1/) is actually something I'm
really looking forward to. A relatively new grass-roots conference.
Deeply technical, and only oriented towards Linux networking hackers.
The part of the kernel community that I've known and loved during my old
netfilter days.
I'm very happy to attend the event, both for its technical content, and
of course to meet old friends like Jozsef, Pablo, etc. I also read that
Kunihiro Ishiguro will be there. I always adored his initial work on
Zebra (whose vty code we coincidentally use in almost all osmocom
projects as part of libosmovty).
It's great to again see an event that is not driven by commercial /
professional conference organizers, high registration fees, and
corporate interests. Reminds me of the good old days where Linux was
still the underdog and not mainstream... Think of Linuxtag in its early
days.
I'll be attending Linaro Connect for the first time in many years. It's
a pity that one cannot run various open source telecom protocol stack /
network element projects and a company and at the same time still be
involved deeply in Embedded Linux kernel/system development. So I'll
use the opportunity to get some view into that field again - and of
course meet old friends.
OsmoDevCon is our annual invitation-only developer meeting of the
Osmocom developers. It's very low-profile, basically a no-frills family
meeting of the Osmocom community. But really great to meet with all of
the team and hearing about their respective experiences / special
interests.
Next on the list is another invitation-only event, organized by the makers of the
TROOPERS conference. The idea is to make folks from the classic Telco
industry meet with people in IT Security who are looking at Telco related
topics. I've been there some years ago, and will finally be able to
make it again this year to talk about how the current introduction of
3G/3.5G into the Osmocom network-side elements can be used for security
research.
Some Free Software projects have already moved to GitHub, some probably plan it, and the Python project will move soon. I have not followed the reasons why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime; they offer checkouts through ssh, git and http (good for corporate firewalls) as well as a subversion interface; they have integrated wiki and ticket management; the fork feature allows an upstream to discover what is being done to the software; and the pull requests and the integration with third-party providers are great. The last item allows many nice things, especially integrating with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).
From a freedom point of view I think GitLab is a lot worse than GitHub. They try to create the illusion that this is a Free Software alternative to github.com; they offer to host your project, but if you want to have the same features when self-hosting, you will notice that you fell for their marketing. Their website prominently states "Runs GitLab Enterprise Edition". If you have a look at the feature comparison between the "Community Edition" (the Free Software project) and their open-core additions (the Enterprise Edition), you will notice that many of the extra features are essential.
So when deciding between putting your project on github.com or gitlab.com, the question is not between proprietary and Free Software but essentially between proprietary and proprietary, and as such there is no difference.
December 30, 2015
The 32C3 GSM Network
32C3 was great from the Osmocom perspective: We could again run our own
cellular network at the event in order to perform load testing with real
users. We had 7 BTSs running, each with a single TRX. What was new
compared to previous years:
- OsmoPCU is significantly more robust and stable due to the efforts of
Jacob Erlbeck at sysmocom. This means that GPRS is now actually still
usable in severe overload situations, like 1000 subscribers sharing
only very few kilobits. Of course it will be slow, but at least data
still passes through as much as that's possible.
- We were using half-rate traffic channels from day 2 onwards, in order
to enhance capacity. Phones supporting AMR-HR would use that, but
then there are lots of old phones that only do classic HR (v1).
OsmoNITB with the internal MNCC handler has supported TCH/H with HR and
AMR for at least five years, but the particular combination of OsmoBTS +
OsmoNITB + lcr (all master branches) had not yet been deployed at
previous CCC event networks.
Being forced to provide classic HR codec actually revealed several bugs
in the existing code:
- OsmoBTS (at least with the sysmoBTS hardware) was using a bit ordering
that is not compliant with what the spec says about how GSM-HR frames
should be put into RTP frames. We hadn't noticed this so far, as
handing frames from one sysmoBTS to another sysmoBTS of course works,
as both use the same (wrong) bit ordering.
- The ETSI reference implementation of the HR codec has lots of
global/static variables, and thus doesn't really support running
multiple transcoders in parallel. This is however what lcr was trying
(and needing) to do, and it of course failed as state from one
transcoder instance was leaking into another. The problem is simple,
but the solution not so simple. If you want to avoid re-structuring
the entire code in very intrusive ways or running one thread per
transcoder instance, then the only solution is to basically memcpy()
the entire data section of the transcoding library every time you
switch the state from one transcoder instance to the other. It's
surprisingly difficult to learn the start + size of that data section
at runtime in a portable way, though.
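The save/restore idea can be illustrated with a toy model: a "library" whose state lives in a single static variable, with per-instance snapshots swapped in and out around each call. This is only a sketch of the principle, not the actual lcr code; in the real case the hard part is covering the library's entire .data/.bss range (e.g. via linker-provided symbols) rather than one known variable.

```c
#include <string.h>

/* Toy stand-in for a codec library that keeps its state in static
 * storage -- the ETSI HR reference code has many such globals. */
static int filter_mem;              /* hidden "filter memory" */
static int codec_step(int sample)   /* one encode/decode step */
{
    filter_mem += sample;
    return filter_mem;
}

/* Per-instance snapshot of everything the library keeps statically.
 * In the real case one would memcpy() the library's whole data
 * section here instead of a single known variable. */
struct codec_state { int filter_mem; };

static void state_save(struct codec_state *s)
{
    memcpy(&s->filter_mem, &filter_mem, sizeof(filter_mem));
}

static void state_restore(const struct codec_state *s)
{
    memcpy(&filter_mem, &s->filter_mem, sizeof(filter_mem));
}
```

Switching between two logical transcoder instances then becomes restore, run, save, so state from one instance never leaks into the other.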
Thanks to our resident voice codec expert Sylvain for debugging and
fixing the above two problems.
Thanks also to Daniel and Ulli for taking care of the actual logistics
of bringing + installing (+ later unmounting) all associated equipment.
Thanks furthermore to Kevin, who has been patiently handling the 'Level 2
Support' cases of people with various problems ending up in the GSM
network.
networks. We learn a lot more about our software under heavy load
situations this way.
osmo-iuh progress + talk
I've been focussing basically full day (and night) during the week ahead
of Christmas and over Christmas to bring the osmo-iuh code into a
state where we could do an end-to-end demo with a regular phone + hNodeB
+ osmo-hnbgw + osmo-sgsn + openggsn. Unfortunately I only got it up to
the point where we do the PDP CONTEXT ACTIVATION on the signalling
plane, with no actual user data going back and forth. And then, for
strange reasons, I couldn't even demo that at the end of the talk.
Well, in either case, the code has made much progress.
The video of the talk can be found at
The annual CCC congress is always an event where you meet old friends
and colleagues. It was great talking to Stefan, Dimitri, Kevin, Nico,
Sylvain, Jochen, Sec, Schneider, bunnie and many other hackers. After
the event is over, I wish I could continue working together with all
those folks the rest of the year, too :/
Some people have been missed dearly. Absence from the CCC congress is
not acceptable. You know who you are, if you're reading this ;)
December 28, 2015
The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:
- The vendor might require you to run it on hardware sold by that same vendor. This might just be a Dell box with a custom front, really custom hardware in a custom chassis, or even require you to install an entire rack. Either way, you are trapped with a single supplier.
- It might come with a yearly license (or support fee), and on top of that might be dongled, so after a reboot the service might not start because the new license key has not been copied.
- The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi-IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
- There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge you a premium, simply because it can.
The alternative is to build a component from scratch, and the initial progress will be great, as the technology is more manageable than it was many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work. But only then does the fun start. How big is your test suite? Do you have tests for ITU Q.787? How will you do load balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their heads (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1, and TCAP).
But there is a third way, and it is available today: look for a Free Software HLR
and give it a try. Check which features are missing that you want, and develop them yourself or ask a company like sysmocom
to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:
- You can run the component on any Linux/FreeBSD system. On physical hardware, on virtualized hardware, together with other services or on its own. You decide.
- The software will always be yours. Once you have a running system, there is nothing that has been designed to fail (besides time_t overflowing); no license key expires.
- Independence from a single supplier. You can build a local team to maintain the software, or you can find another supplier to maintain it.
- Built for change. Having access to the source code enables you to modify it, and with a Free Software license you are allowed to run your modified versions as well.
The only danger is falling into the OpenCore trap that surrounds many OpenSource projects. Make sure that everything you need is available in source form and that you are allowed to run modified copies.
December 27, 2015
’Tis the season to let the year pass by and make plans for the next one. 2015 was what I’d call a “transitioning” year. Although LaTe App-Developers had been shut down in 2014, we still had to spend most of 2015 working on some things our clients had already paid for. This is now finally done, and I can move forward looking for new endeavors. Here’s a bunch of my plans for 2016:
First off, I’ll attempt to bring three iOS apps into the AppStore. These apps will be completely new versions of those that I did while working with my colleague at LaTe; in particular, there’s going to be a radio station app, a guitar songbook, and a matching game for kids:
- The radio station app will be the successor to the popular “Volksradio” app, with a clear focus on streamlining (i.e. only a bare minimum of features, but those super solid) and social recommendations (“listeners who enjoyed ‘Space Station Soma’ also liked ‘Chillout Radio’”).
- The guitar songbook app will be the successor to the “Chord Pro Songbook” app, with the focus on collecting ‘sets’ (a number of titles to perform at a given time) and ‘alternates’ (variants of songs). My dream would be to incorporate a great edit function (to create new transcriptions right on your device), but I’m not sure whether this will make it.
- The matching game will be the successor to the “Match and Learn” app. This one hasn’t been updated for ages; I have new animals and plan to add better gameplay as well as support for new devices.
I have also started working on ‘retro player’, an app that will be the successor to SidPlayer, Module Player, and PokeyPlayer; I mentioned this in an earlier installment of this blog. This one is going to be huge, and I plan a crowdfunding campaign later in 2016 to make it happen. Note that the campaign will not be launched until the product is at a stage where it is certain that it can be completed. I’m not going to make that mistake, something that has annoyed me this year as a contributor to some projects.
Besides this iOS-related work, over the last year I have seriously reinvested time (and money) into my music again. In 2016 I’m going to publish some new material, consisting of rearranged and enhanced versions of some of my ancient AMIGA MODs (see http://www.vanille.de/mickey/my-amiga-history/), but also completely new compositions. It looks like my writer’s block has vanished, and after roughly 15 years of sucking on my thumb, I have started recording new stuff. Isn’t that exciting?
Professionally, there will be some iOS work, but hopefully also architecture work with a broader scope. I really want to do more Python and distributed middleware again. I’m not so sure about Vala (one of my other favorite programming languages), though. That language flourished between 2008 and 2012, and sadly I’m afraid it’s not going anywhere anymore. There’s little activity on the mailing list, the original author has moved on, and there is not enough fresh blood. Very sad, but given that state, it looks like my half-done book on Vala will not see the light of day.
That’s about everything I can think of at the moment. I wish all of you a great, healthy, and successful 2016; may we manage to make the world a bit more peaceful (although I have my doubts). All the best to you! Yours truly,
The post Towards the end of 2015 first appeared on Vanille.de.
December 26, 2015