In private conversation, Holger mentioned EC-GSM-IoT to me, and I had to
dig a bit into it. It was introduced in Release 13, but if you do a web
search for it, you find surprisingly little information beyond press
releases with absolutely zero information content and no "further
reading" links.
The primary reason for this seems to be that the feature was called
EC-EGPRS until the very late stages, when it was renamed for - believe
it or not - marketing reasons.
So only when searching for that original term do you actually find
specification references and change requests in the 3GPP document
archives.
I tried to get a very brief overview, and from what I could find, it
centers on extending GERAN in the following ways:
- EC-EGPRS goal: Improve coverage by 20dB
- New single-burst coding schemes
- Blind Physical Layer Repetitions, where bursts are repeated up to 28
times without feedback from the remote end (see the back-of-the-envelope
sketch after this list)
  - the transmitter maintains phase coherency
  - the receiver uses processing gain (like incremental redundancy?)
- New logical channel types (EC-BCCH, EC-PCH, EC-AGCH, EC-RACH, ...)
- New RLC/MAC layer messages for the EC-PDCH communication
- Power Efficient Operation (PEO)
  - Introduction of eDRX (extended DRX) to allow PCH listening
intervals from minutes up to an hour
  - Relaxed Idle Mode: what matters is to camp on a cell, not the best
cell, which reduces neighbor cell monitoring requirements
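To put a rough number on the repetition feature mentioned above:
coherently combining N identical bursts ideally buys you 10*log10(N) dB
of processing gain. The following back-of-the-envelope sketch (my own
illustrative arithmetic, not taken from the specifications) shows how
much of the 20dB coverage target the blind repetitions alone could
account for:

```cpp
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Ideal processing gain from coherently combining n repeated bursts:
// gain_dB = 10 * log10(n). With the maximum of 28 blind repetitions,
// this alone comes to roughly 14.5 dB; the remainder of the 20 dB
// coverage improvement has to come from the other features.
int main()
{
    for (int n : {2, 4, 8, 16, 28})
        std::printf("%2d repetitions -> %4.1f dB gain\n",
                    n, 10.0 * std::log10(n));
    return 0;
}
```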
In terms of required modifications to an existing GSM/EDGE
implementation, there will be (at least):
- changes to the PHY layer regarding new coding schemes, logical
channels and burst scheduling / re-transmissions
- changes to the RLC/MAC layer in the PCU to implement the new EC
specific message types and procedures
- changes to the BTS and BSC in terms of paging in eDRX
In case you're interested in more pointers on technical details, check
out the links provided at https://osmocom.org/issues/1780
It remains to be seen how widely this will be adopted. Rolling this
change out on modern base station hardware seems technically simple - but
it is unclear how many equipment makers will implement it, and at
what cost to the operators. But I think the key issue is whether or not
the baseband chipset makers (Intel, Qualcomm, Mediatek, ...) will
implement it anytime soon on the device side.
There are no plans to implement any of this in the Osmocom stack as
of now, but in case anyone is interested in working on this, feel free
to contact us on the email@example.com mailing list.
Some topics keep coming back, even a number of years after first having
worked on them. And then you start to search online using your favorite
search engine - and find that your own old posts are the most
comprehensive publicly available information on the subject ;)
Back in 2011, I was working on some very basic support for Ericsson
RBS2xxx GSM BTSs in OpenBSC. The major part of this was figuring out the
weird dynamic detection of the signalling timeslot, as well as the fully
non-standard OM2000 protocol for OML. Once it reached
proof-of-concept state, work on it ceased, leaving things at a point
where BTS bring-up still involved lots of manual steps.
I've recently picked this topic up again, resulting in some
work-in-progress code.
Beyond classic E1 based A-bis support, I've also been looking (again) at
Ericsson Packet Abis. Packet Abis is their understanding of Abis over
IP. However, it is - again - much further from the 3GPP specifications
than what we're used to in the Osmocom universe. Abis/IP as we know it
consists of:
- RSL and OML over TCP (inside an IPA multiplex)
- RTP streams for the user plane (voice)
- Gb over IP (NS over UDP/IP), as the PCU is in the BTS.
In the Ericsson world, they took a much lower-layer approach and
decided to:
- start with L2TP over IP (not the L2TP over UDP that many people know from VPNs; see the framing sketch after this list)
- use the IETF-standardized Pseudowire type for HDLC but use a frame
format in violation of the IETF RFCs
- talk LAPD over L2TP for RSL and OML
- invent a new frame format for voice codec frames called TFP and feed
that over L2TP
- invent a new frame format for the PCU-CCU communication called P-GSL
and feed that over L2TP
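Since the L2TP-over-IP transport is the part most people stumble over,
here is a minimal sketch of how a receiver tells control from data
traffic on IP protocol 115 (the function name is mine, and the framing
follows my reading of RFC 3931; treat it as an assumption, not as
Ericsson documentation):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <arpa/inet.h>

// With L2TPv3 directly over IP (IP protocol 115), the IP payload starts
// with a 32-bit session ID: 0 marks a control message, any other value
// selects a data session (e.g. one of the LAPD, TFP or P-GSL flows
// described above).
bool l2tp_ip_is_control(const uint8_t *payload, size_t len)
{
    if (len < sizeof(uint32_t))
        return false;
    uint32_t session_id;
    std::memcpy(&session_id, payload, sizeof(session_id));
    return ntohl(session_id) == 0;
}
```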
I'm not yet sure if we want to fully support that protocol stack from
OpenBSC and related projects, but in any case I've extended Wireshark to
decode such protocol traces properly by
- Extending the L2TP dissector with Ericsson specific AVPs
- Improving my earlier packet-ehdlc.c with a better understanding of the protocol
- Implementing a new TFP dissector from scratch
- Implementing a new P-GSL dissector from scratch (a minimal skeleton follows below)
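For anyone curious what such dissector work involves, this is roughly
the shape of a minimal Wireshark dissector (a sketch using the standard
C plugin API; the single P-GSL field shown is illustrative, not the
actual frame format):

```cpp
#include <epan/packet.h>

static int proto_pgsl = -1;
static int hf_pgsl_msg_disc = -1;
static gint ett_pgsl = -1;

static int
dissect_pgsl(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree, void *data)
{
    (void)data;
    col_set_str(pinfo->cinfo, COL_PROTOCOL, "P-GSL");

    proto_item *ti = proto_tree_add_item(tree, proto_pgsl, tvb, 0, -1, ENC_NA);
    proto_tree *pgsl_tree = proto_item_add_subtree(ti, ett_pgsl);

    /* hypothetical: treat the first octet as a message discriminator */
    proto_tree_add_item(pgsl_tree, hf_pgsl_msg_disc, tvb, 0, 1, ENC_BIG_ENDIAN);
    return tvb_captured_length(tvb);
}

void
proto_register_pgsl(void)
{
    static hf_register_info hf[] = {
        { &hf_pgsl_msg_disc,
          { "Message Discriminator", "pgsl.msg_disc",
            FT_UINT8, BASE_HEX, NULL, 0x0, NULL, HFILL } },
    };
    static gint *ett[] = { &ett_pgsl };

    proto_pgsl = proto_register_protocol("Ericsson P-GSL", "P-GSL", "pgsl");
    proto_register_field_array(proto_pgsl, hf, array_length(hf));
    proto_register_subtree_array(ett, array_length(ett));
}
```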
The resulting work can be found at http://git.osmocom.org/wireshark/log/?h=laforge/ericsson-packet-abis
in case anyone is interested. I've mostly been working with protocol
traces from RBS2409 so far, and they are decoded quite nicely for RSL,
OML, Voice and Packet data. As far as I know, the format of the STN /
SIU of other BTS models is identical.
Is anyone out there in possession of Ericsson RBS2xxx base stations
interested in collaborating on either a Packet Abis implementation, or on
interfacing the E1- or packet-based CCU-PCU interface to OsmoPCU?
In recent days, various public allegations have been brought forward
against Jacob Appelbaum. The allegations range from plagiarism to sexual
assault and rape.
I find it deeply disturbing that the alleged victims have put up a
quite slick online campaign to defame Jake's name, using a domain name
consisting of only his name and virtually every picture of him you can
find online from the last decade, while - to a large extent - hiding in
anonymity.
I'm upset about this not because I have known Jake personally
for many years, but because I think it is fundamentally wrong to bring
up those accusations in such a form.
I have no clue what is true and what is not. Nor does
anyone else who has not experienced or witnessed the alleged events
first hand. I'd hope more people would think about that before
commenting on this topic one way or another on Twitter, in their blogs,
on mailing lists, etc. It doesn't matter what we believe, hypothesize
or project based on a personal like or dislike of either the person
accused or of the accusers.
We don't live in the middle ages, and we gave up the pillory a long
time ago (and even then, the pillory was used after a judgement, not
before). If there was illegal/criminal behavior, then our societies
have a well-established and respected procedure to deal with such: It
is based on laws, legal procedure and courts.
So if somebody has a claim, they can and should seek legal support
and bring those claims forward to the competent authorities, rather than
starting what very easily looks like a smear campaign (whether it is one
or not).
Please don't get me wrong: I have the deepest respect and sympathies for
victims of sexual assault or abuse - but I also have a deep respect for
the legal foundation our societies have built over hundreds of years,
and its principles, including the human right of the "presumption of
innocence".
No matter who has committed which type of crime, everyone deserves to
receive a fair trial, and they are innocent until proven guilty.
I believe nobody deserves such a public defamation campaign, nor does
anyone have the authority to impose such a sentence, not even a court
of law. The pillory was abandoned for good reasons.
I’m currently working on the Vaani project at Mozilla, and part of my work on that allows me to do some exploration around the topic of speech recognition and speech assistants. After looking at some of the commercial offerings available, I thought that if we were going to do some kind of add-on API, we’d be best off aping the Amazon Alexa skills JS API. Amazon Echo appears to be doing quite well and people have written a number of skills with their API. There isn’t really any alternative right now, but I actually happen to think their API is quite well thought out and concise, and maps well to the sort of data structures you need to do reliable speech recognition.
So skipping forward a bit, I decided to prototype with Node.js and some existing open source projects to implement an offline version of the Alexa skills JS API. Today it’s gotten to the point where it’s actually usable (for certain values of usable) and I’ve just spent the last 5 minutes asking it to tell me Knock-Knock jokes, so rather than waste any more time on that, I thought I’d write this about it instead. If you want to try it out, check out this repository and run npm install in the usual way. You’ll need pocketsphinx installed for that to succeed (install sphinxbase and pocketsphinx from github), and you’ll need espeak installed and some skills for it to do anything interesting, so check out the Alexa sample skills and sym-link the ‘samples’ directory as a directory called ‘skills’ in your ferris checkout directory. After that, just run the included example file with node and talk to it via your default recording device (hint: say ‘launch wise guy’).
Hopefully someone else finds this useful – I’ll be using this as a base to prototype further voice experiments, and I’ll likely be extending the Alexa API further in non-standard ways. What was quite neat about all this was just how easy it all was. The Alexa API is extremely well documented, Node.js is also extremely well documented and just as easy to use, and there are tons of libraries (of varying quality…) to do what you need to do. The only real stumbling block was pocketsphinx’s lack of documentation (there’s no documentation at all for the Node bindings and the C API documentation is pretty sparse, to say the least), but thankfully other members of my team are much more familiar with this codebase than I am and I could lean on them for support.
I’m reasonably impressed with the state of lightweight open source voice recognition. This is easily good enough to be useful if you can limit the scope of what you need to recognise, and I find the Alexa API is a great way of doing that. I’d be interested to know how close the internal implementation is to how I’ve gone about it if anyone has that insider knowledge.
Back in late April, the well-known high-quality SDR hardware company
Nuand published a blog post about an Open Source Release of a VHDL ADS-B
decoder.
I was quite happy at that time about this, and bookmarked it for further
investigation at some later point.
Today I actually looked at the source code, and more by coincidence
noticed that the LICENSE file contains a
license that is anything but Open Source: The license is a "free for
evaluation only" license, and it is only valid if you run the code on an
actual Nuand board.
Both of the above are clearly not compatible with any
of the well-known and respected definitions of Open Source, particularly
not the official Open Source Definition of the Open Source Initiative.
I cannot even begin to describe how much this upsets me. This is once again
openwashing, where something that clearly is not Free or Open Source
Software is labelled and marketed as such.
I don't mind if an author chooses to license his work under a
proprietary license. It is his choice to do so under the law, and it
generally makes such software utterly unattractive to me. If others
still want to use it, it is their decision. However, if somebody
produces or releases non-free or proprietary software, then they should
make that very clear and not mis-represent it as something that it is not.
Open-washing only confuses everyone, and it tries to market the
respective company or product in a light that it doesn't deserve. I
believe the proper English proverb is to adorn oneself with borrowed
plumes.
I strongly believe the community must stand up against such practice and
clearly voice that this is not something generally acceptable or
tolerated within the Free and Open Source software world. It's sad that
this is happening more frequently, like recently with OpenAirInterface
(see related blog post).
I will definitely write an e-mail to Nuand management requesting to
correct this mis-representation. If you agree with my posting, I'd
appreciate it if you would contact them, too.
I gave a keynote at the Black Duck Korea Open Source conference
yesterday, and I'd like to share some thoughts about it.
In terms of the content, I spoke about the fact that the ultimate
goal/wish/intent of free software projects is to receive contributions
and for all of the individual and organizational users to join the
collaborative development process. However, that's just the intent, and
it's not legally required.
GPL enforcement work over the past ten years has created a lot of
attention in corporate legal departments on how to comply with FOSS
license terms, particularly copyleft-style licenses like GPLv2 and GPLv3.
License compliance ensures the absolute bare legal minimum of engaging
with the Free Software community. While that is legally sufficient, the
community actually wants to have all developers join the collaborative
development process, where the resources for development are
contributed and shared among all developers.
So I think if we had more contribution and a more fair distribution of
the work in developing and maintaining the related software, we would
not have to worry so much about legal enforcement of licenses.
However, in the absence of companies being good open source citizens,
pulling out the legal baton is all we can do to at least require them to
share their modifications at the time they ship their products. That
code might not be mergeable, or it might be outdated, so its value
might be less than we would hope for, but it is a beginning.
Now some people might be critical of me speaking at a Black Duck Korea
event, where Black Duck is a company selling (expensive!) licenses to
proprietary tools for license compliance. Speaking at such an
event might thus be seen as an endorsement of Black Duck and/or proprietary
software in general.
Honestly, I don't think so. If you've ever seen a Black Duck Korea
event, then you will notice there is no marketing or sales booth, and
that there is no sales pitch on the conference agenda. Rather, you have
speakers with hands-on experience in license compliance either from a
community point of view, or from a corporate point of view, i.e. how
companies are managing license compliance processes internally.
Thus, the event is not a sales show for proprietary software, but an
event that brings together various people genuinely interested in
license compliance matters. The organizers very clearly understand that
they have to keep that kind of separation. So it's actually more like a
community event, sponsored by a commercial entity - and that in turn is
true for most technology conferences.
So I have no ethical problems with speaking at their event. People who
know me, know that I don't like proprietary software at all for ethical
reasons, and avoid it personally as far as possible. I certainly don't
promote Black Duck's products. I promote license compliance.
Let's look at it like this: If companies building products based on
Free Software think they need software tools to help them with license
compliance, and they don't want to develop such tools together in a
collaborative Free Software project themselves, then that's their
decision to take. To state it in the words of Rosa Luxemburg:
Freedom is always the freedom of those who think differently.
I may not like that others want to use proprietary software, but if they
think it's good for them, it's their decision to take.
Have you ever used mobile data on your phone, or used tethering?
In packet-switched cellular networks (aka mobile data) from GPRS to
EDGE, from UMTS to HSPA and all the way into modern LTE networks, there
is a tunneling protocol called GTP (GPRS Tunneling Protocol).
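On the wire, GTP-U (the user-plane variant) is almost trivially simple:
the subscriber's IP packets are carried inside UDP (port 2152) behind a
small header. A sketch of the mandatory part of that header:

```cpp
#include <cstdint>

// Mandatory GTPv1-U header, carried over UDP port 2152 (sketch; the
// optional sequence number / N-PDU number / extension header fields
// that the flags byte can announce are omitted here).
struct Gtp1uHeader {
    uint8_t  flags;   // 0x30: version = 1, protocol type = GTP
    uint8_t  type;    // 0xff = G-PDU, i.e. an encapsulated user IP packet
    uint16_t length;  // payload length in octets, network byte order
    uint32_t teid;    // Tunnel Endpoint ID selecting the tunnel
} __attribute__((packed));
// The subscriber's IP packet follows immediately after this header.
```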
This was the first cellular protocol that involved transport over
TCP/IP, as opposed to all the ISDN/E1/T1/FrameRelay world with their
weird protocol stacks. So it should have been something super easy to
implement on and in Linux, and nobody should have had a reason to run a
proprietary GGSN, ever.
However, the cellular telecom world lives in a different universe, and to
this day it is safe to assume that all production GGSNs are
proprietary hardware and/or software :(
In 2002, Jens Jakobsen at Mondru AB released the initial version of
OpenGGSN, a userspace
implementation of this tunneling protocol and the GGSN network element.
Development however ceased in 2005, and we at the Osmocom project
thus adopted OpenGGSN maintenance in 2016.
Having a userspace implementation of any tunneling protocol of course
only works for relatively low bandwidth, due to the scheduling and
memory-copying overhead between kernel, userspace, and kernel again.
So OpenGGSN might have been useful for early GPRS networks where the
maximum data rate per subscriber is in the hundreds of kilobits, but it
certainly is not a viable option for any real operator, particularly not
at today's data rates.
That's why for decades, all commonly used IP tunneling protocols have
been implemented inside the Linux kernel, which has some tunneling
infrastructure used with tunnels like IP-IP, SIT, GRE, PPTP, L2TP and
others.
But then again, the cellular world lives in a universe where Free and
Open Source Software didn't exist until OpenBTS and OpenBSC changed all of
that from 2008 onwards. So nobody ever bothered to add GTP support to
the in-kernel tunneling framework.
In 2012, I started an in-kernel implementation of GTP-U (the user
plane with actual user IP data) as part of my work at sysmocom. My former netfilter colleague and current
netfilter core team leader Pablo Neira was contracted to bring it
further along, but unfortunately the customer project funding the effort
was discontinued, and we didn't have time to complete it.
Luckily, in 2015 Andreas Schultz of Travelping came around and has forward-ported the old
code to a more modern kernel, fixed the numerous bugs and started to
test and use it. He also kept pushing Pablo and me for review and
submission, thanks for that!
Finally, in May 2016, the code was merged into the mainline kernel,
and now every upcoming version of the Linux kernel will have a fast and
efficient in-kernel implementation of GTP-U. It is configured via
netlink from userspace, where you are expected to run a corresponding
daemon for the control plane, such as either OpenGGSN, or the new GGSN +
PDN-GW implementation in Erlang called erGW.
You can find the kernel code at drivers/net/gtp.c,
and the userspace netlink library code (libgtpnl) at git.osmocom.org.
I haven't done actual benchmarking of the performance that you can get
on modern x86 hardware with this, but I would expect it to be on par
with what you can get from other similar in-kernel tunneling protocols.
The cellular industry has failed for decades to realize how easy and
inexpensive it would have been to have a fast GGSN around; let's see
whether, now that other people have done it for them, there will be
some adoption.
If you're interested in testing or running a GGSN or PDN-GW and become
an early adopter, feel free to reach out to Andreas, Pablo and/or me.
The osmocom-net-gprs mailing list might be a good way to discuss further development and/or testing.
According to some news reports, including this report at Softpedia,
a 26 year old student at the Faculty of Criminal Justice and Security in
Maribor, Slovenia has received a suspended prison sentence for finding
flaws in the Slovenian police and army TETRA network using the
OsmocomTETRA software.
As the Osmocom project leader and main author of OsmocomTETRA, this
is highly disturbing news to me. OsmocomTETRA was developed precisely
to enable people to perform research and analysis in TETRA networks, and
to audit their safe and secure configuration.
If a TETRA network (like any other network) is configured with broken
security, then the people responsible for configuring and operating that
network are to be blamed, and not the researcher who invests his
personal time and effort into demonstrating that police radio
communications safety is broken. From the outside, the court sentence
really sounds like "shoot the messenger". They should instead have
jailed the people responsible for deploying such an insecure network in
the first place, as well as those responsible for not doing the most
basic air-interface interception tests before putting such a network
into operation.
According to all reports, the student had shared the results of his
research with the authorities and there are public detailed reports from
2015, like the report (in Slovenian) at
The statement that he should have asked the authorities for permission
before starting his research is moot. I've seen many such cases and you
would normally never get permission to do this, or you would most
likely get no response from the (in)competent authorities in the first
place.
From my point of view, they should give the student a medal of honor,
instead of sentencing him. He has provided a significant service to the
security of the public sector communications in his country.
To be fair, the news report also indicates that there were other charges
involved, like impersonating a police officer. I can of course not
comment on those.
Please note that I do not know the student or his research first-hand,
nor did I know any of his actions or was involved in them. OsmocomTETRA
is a Free / Open Source Software project available to anyone in source
code form. It is a vital tool in demonstrating the lack of security in
many TETRA networks, whether networks for public safety or private use.
In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework
to build a full web application.
You might wonder why one would want to build such a thing in a statically typed and compiled language instead of something more dynamic. There are a few reasons for it:
- Performance: The application is intended to run on our sysmoBTS GSM Basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, low amount of RAM, etc.) and at the same time still perfectly fine to run a GSM network.
- Interface: For GSM we have various libraries with a C programming interface and they are easy to consume from C++.
- Compilation/Distribution: By (cross-)building the application there is a "single" executable and we don't have the dependency mess of Ruby.
The second decision was to not use Tufao and to search for a framework that has user management and a template/rendering/canvas engine built in. At the Chaos Communication Camp in 2007 I remember having heard a conversation about "Qt" for the Web (Wt, the C++ Web Toolkit), and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework.
I am really surprised how long this project has existed without my having heard of it. It is built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers and for the template engine, and it resembles Ruby on Rails a lot. It has two template engines and routing of URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I was looking for, even for my "advanced" topics. Because of my own stupidity I ended up single-stepping through the code, and a Qt coder should feel right at home there.
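To give a flavor of how Rails-like it feels, here is a hypothetical
controller sketch (class, model and slot names are mine for
illustration; the code actually generated per the TreeFrog tutorial
differs in detail):

```cpp
#include "applicationcontroller.h"
#include "blog.h"   // model class as generated by `tspawn model blog`

class BlogController : public ApplicationController
{
    Q_OBJECT
public slots:
    void index();
};

void BlogController::index()
{
    QList<Blog> blogList = Blog::getAll();  // ORM query through QtSQL
    texport(blogList);                      // expose the list to the view
    render();                               // renders views/blog/index.erb
}
```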
My favorite features:
- tspawn model TableName will autogenerate (and update) a C++ model based on the table in the database. The updating works as well.
- The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When using the -r option, the application will respawn itself. At first I thought I would not like it, but it improves round-trip times.
- C++ in the template. The ERB template is parsed and a C++ class is generated, whose ::toString() method generates the HTML code. So in case something goes wrong, it is very easy to inspect.
If you are currently using Ruby on Rails or Django but would like to do it in C++, have a look at TreeFrog. I really like it so far.
Right now I'm feeling sad. I really shouldn't, but I still do.
Many years ago I started OpenBSC and Osmocom in order to bring Free
Software into an area where it barely existed before: Cellular
Infrastructure. For the first few years, it was "just for fun", without
any professional users. A FOSS project by enthusiasts. Then we got
some commercial / professional users, and with them funding, paying for
e.g. Holger's and my freelance work. Still, implementing all protocol
stacks, interfaces and functional elements of GSM and GPRS from the
radio network to the core network is something that large corporations
typically spend hundreds of man-years on. So funding for Osmocom GSM
implementations was always short, and we always tried to make the best
out of it.
After Holger and I started sysmocom in 2011, we had a chance to use
funds from BTS sales to hire more developers, and we were growing our
team of developers. We finally could pay some developers other than
ourselves from working on Free Software cellular network infrastructure.
In 2014 and 2015, sysmocom got side-tracked with some projects where
Osmocom and the cellular network was only one small part of a much
larger scope. In Q4/2015 and in 2016, we are back on track with
focusing 100% on Osmocom projects, which you can probably see from a lot
more associated commits to the respective project repositories.
By now, we are in the lucky situation that the work we've done in the
Osmocom project on providing Free Software implementations of cellular
technologies like GSM, GPRS, EDGE and now also UMTS is receiving a lot
of attention. This attention translates into companies approaching us
(particularly at sysmocom) regarding funding for implementing new
features, fixing existing bugs and short-comings, etc. As part of that,
we can even work on much needed infrastructural changes in the software.
So now we are in the opposite situation: There's a lot of interest in
funding Osmocom work, but there are few people in the Osmocom community
interested in and/or capable of following up on that. Some of the early
contributors have moved into other areas, and are now working on
proprietary cellular stacks at large multi-national corporations. Some
others think of GSM as a fun hobby and want to keep it that way.
At sysmocom, we are trying hard to do what we can to keep up with the
demand. We've been looking to add people to our staff, but right now we
are struggling only to compensate for the regular fluctuation of
employees (i.e. keep the team size as is), let alone actually adding new
members to our team to help move free software cellular networks ahead.
I am struggling to understand why that is. I think Free Software in
cellular communications is one of the most interesting and challenging
frontiers for Free Software to work on. And there are many FOSS
developers who love nothing more than to conquer new areas of technology.
At sysmocom, we can now offer what would have been my personal dream job
for many years:
- paid work on Free Software that is available to the general public,
rather than something only of value to the employer
- interesting technical challenges in an area of technology where you
will not find the answer to all your problems on stackoverflow or the like
- work in a small company consisting almost entirely of die-hard
engineers, without corporate managers, marketing departments, etc.
- work in an environment free of Microsoft and Apple software or cloud
services; use exclusively Free Software to get your work done
I would hope that more developers would appreciate such an environment.
If you're interested in helping FOSS cellular networks ahead, feel free
to have a look at http://sysmocom.de/jobs or contact us at
firstname.lastname@example.org. Together, we can try to move Free Software for mobile
communications to the next level!
This is great news: You can now install a GSM network using apt-get!
Thanks to the efforts of Debian developer Ruben Undheim, there's now
an OpenBSC (with all its flavors like OsmoBSC, OsmoNITB, OsmoSGSN,
...) package in the official Debian repository.
Here is the link to the e-mail indicating acceptance into Debian:
For many of the past years of working on the OpenBSC (and wider Osmocom)
projects, I assumed that distribution packaging is not really
all that important, as all the people using OpenBSC surely
would be technical enough to build it from source. And in fact, I
believe that building from source brings you one step closer to
actually modifying the code, and thus contribution.
Nevertheless, the project has matured to a point where it is not used
only by developers anymore, and particularly also (God forbid) by
people with limited experience with Linux in general. That such
people still exist is surprisingly hard to realize for somebody like
myself who has spent more than 20 years in Linux land by now.
So all in all, today I think that having packages in a Distribution
like Debian actually is important for the further adoption of the
project - pretty much like I believe that more and better public
documentation is important.
Looking forward to seeing the first bug reports reported through
bugs.debian.org rather than https://projects.osmocom.org/ . Once that
happens, we know that people are actually using the official Debian
packages.
As an unrelated side note, the Osmocom project now also has nightly
builds available for Debian 7.0, Debian 8.0 and Ubuntu 14.04 on both
i586 and x86_64 architectures. The
nightly builds are for people who want to stay on the bleeding edge of
the code, but who don't want to go through building everything from
scratch. See Holger's post on the openbsc mailing list
for more information.
While preparing my presentation for the Troopers 2016 TelcoSecDay
I was thinking once again about the importance of having FOSS
implementations of cellular protocol stacks, interfaces and network
elements in order to enable security researchers (aka hackers) to work on
improving security in mobile communications.
From the very beginning, this was the motivation of creating OpenBSC and
OsmocomBB: To enable more research in this area, to make it at least in
some ways easier to work in this field. To close a little bit of the
massive gap on how easy it is to do applied security research (aka
hacking) in the TCP/IP/Internet world vs. the cellular world.
We have definitely succeeded in that. Many people have successfully used
the various Osmocom projects to do cellular security research, and
I'm very happy about that.
However, there is a back-side to that, which I'm less happy about. In
those past eight years, we have not managed to attract a significant
amount of contributions to the Osmocom projects from those people that
benefit most from it: Neither from those very security researchers that
use it in the first place, nor from the Telecom industry as a whole.
I can understand that the large telecom equipment suppliers may think
that FOSS implementations are somewhat of a competition and thus might not
be particularly enthusiastic about contributing. However, the story for
the cellular operators and the IT security crowd is definitely quite
different. They should have no good reason not to contribute.
So as a result of that, we still have a relatively small amount of
people contributing to Osmocom projects, which is a pity. They can
currently be divided into two groups:
- the enthusiasts: People contributing because they are enthusiastic
about cellular protocols and technologies.
- the commercial users, who operate 2G/2.5G networks based on the
Osmocom protocol stack and who either contribute directly or fund
development work at sysmocom. They typically operate small/private
networks, so if they want data, they simply use Wifi. There's thus
not a big interest or need in 3G or 4G technologies.
On the other hand, the security folks would love to have 3G and 4G
implementations that they could use to talk to either mobile devices
over a radio interface, or towards the wired infrastructure components
in the radio access and core networks. But we don't see significant
contributions from that sphere, and I wonder why that is.
At least that part of the IT security industry that I know typically
works with very comfortable budgets and profit rates, and investing in
better infrastructure/tools is not charity anyway, but an actual
investment into working more efficiently and/or extending the possible
scope of related pen-testing or audits.
So it seems we might want to think about what we could do to motivate
such interested potential users of FOSS 3G/4G to contribute to it by
either writing code or funding associated developments...
If you have any thoughts on that, feel free to share them with me by
e-mail to email@example.com.
Following up from my last post, I’ve had some time to research and assess the current state of embedding Gecko. This post will serve as a (likely incomplete) assessment of where we are today, and what I think the sensible path forward would be. Please note that these are my personal opinions and not those of Mozilla. Mozilla are gracious enough to employ me, but I don’t yet get to decide on our direction.
The TL;DR: there are no first-class Gecko embedding solutions as of writing.
EmbedLite (aka IPCLite)
EmbedLite is an interesting solution for embedding Gecko that relies on e10s (Electrolysis, Gecko’s out-of-process feature code-name) and OMTC (Off-Main-Thread Compositing). From what I can tell, the embedding app creates a new platform-specific compositor object that attaches to a window, and with e10s, a separate process is spawned to handle the brunt of the work (rendering the site, running JS, handling events, etc.). The existing widget API is exposed via IPC, which allows you to synthesise events, handle navigation, etc. This builds using the xulrunner application target, which unfortunately no longer exists. This project was last synced with Gecko on April 2nd 2015 (the day before my birthday!).
The most interesting thing about this project is how much code it reuses in the tree, and how little modification is required to support it (almost none – most of the changes are entirely reasonable, even outside of an embedding context). That we haven’t supported this effort seems insane to me, especially as it’s been shipping for a while as the basis for the browser in the (now defunct?) Jolla smartphone.
Building this was a pain, on Fedora 22 I was not able to get the desktop Qt build to compile, even after some effort, but I was able to compile the desktop Gtk build (trivial patches required). Unfortunately, there’s no support code provided for the Gtk version and I don’t think it’s worth the time me implementing that, given that this is essentially a dead project. A huge shame that we missed this opportunity, this would have been a good base for a lightweight, relatively easily maintained embedding solution. The quality of the work done on this seems quite high to me, after a brief examination.
Spidernode
Node.js using spidermonkey ought to provide some interesting advantages over a V8-based Node. Namely, modern language features, asm.js (though I suppose this will soon be supplanted by WebAssembly) and speed. Spidernode is unfortunately unmaintained since early 2012, but I thought it would be interesting to do a simple performance test. Using the (very flawed) technique detailed here, I ran a few quick tests to compare an old copy of Node I had installed (~0.12), current stable Node (4.3.2) and this very old (~0.5) Spidermonkey-based Node. Spidermonkey-based Node was consistently over 3x faster than both old and current Node (which varied very little in performance between them). I don’t think you can really draw any conclusions from this, other than that it’s an avenue worth exploring.
Many new projects are prototyped (and indeed, fully developed) in Node.js these days; particularly Internet-Of-Things projects. If there’s the potential for these projects to run faster, unchanged, this seems like a worthy project to me. Even forgetting about the advantages of better language support. It’s sad to me that we’re experimenting with IoT projects here at Mozilla and so many of these experiments don’t promote our technology at all. This may be an irrational response, however.
GeckoView
GeckoView is the only currently maintained embedding solution for Gecko, and is Android-only. It is an Android project, split out of Firefox for Android and using the same interfaces with Gecko. It provides an embeddable widget that can be used instead of the system-provided WebView. This is not a first-class project from what I can tell; there are many bugs and many missing features, as its use outside of Firefox for Android is not considered a priority. Due to this dependency, however, one would assume that at least GeckoView will see updates for the foreseeable future.
I’d experimented with this in the past, specifically with this project that uses GeckoView with Cordova. I found then that the experience wasn’t great, due to the huge size of the GeckoView library and the numerous bugs, but this was a while ago and YMMV. Some of those bugs were down to GeckoView not using the shared APZC, a bug which has since been fixed, at least for Nightly builds. The situation may be better now than it was then.
This post is built on the premise that embedding Gecko is a worthwhile pursuit. Others may disagree about this. I’ll point to my previous post to list some of the numerous opportunities we missed, partly because we don’t have an embedding story, but I’m going to conjecture as to what some of our next missed opportunities might be.
For a less tenuous example, let’s talk about VR. VR is looking like it might finally break out into the mid/high-end consumer realm this year, with heavy investment from Facebook (via Oculus), Valve/HTC (SteamVR/Vive), Sony (Playstation VR), Microsoft (HoloLens), Samsung (GearVR) and others. Mozilla are rightly investing in WebVR, but I think the real end-goal for VR is an integrated device with no tether (certainly Microsoft and Samsung seem to agree with me here). So there may well be a new class of device on the horizon, with new kinds of browsers and ways of experiencing and integrating the web. Can we afford to not let people experiment with our technology here? I love Mozilla, but I have serious doubts that the next big thing in VR is going to come from us. That there’s no supported way of embedding Gecko worries me for future classes of device like this.
In-vehicle information/entertainment systems are possibly something that will become more of the norm, now that similar devices have become such a commodity. Interestingly, the current big desktop and mobile players have very little presence here, and (mostly awful) bespoke solutions are rife. Again, can we afford to make our technology inaccessible to the people that are experimenting in this area? Is having just a good desktop browser enough? Can we really say that’s going to remain how people access the internet for the next 10 years? Probably, but I wouldn’t want to bet everything on that.
If we want an embedding solution, I think the best way to go about it is to start from Firefox for Android. Due to the way Android used to require its applications to interface with native code, Firefox for Android is already organised in such a way that it is basically an embedding API (thus GeckoView). From this point, I think we should make some of the interfaces slightly more generic and remove the JNI dependency from the Gecko-side of the code. Firefox for Android would be the main consumer of this API and would guarantee that it’s maintained. We should allow for it to be built on Linux, Mac and Windows and provide the absolute minimum harness necessary to allow for it to be tested. We would make no guarantees about API or ABI. Externally to the Gecko tree, I would suggest that we start, and that the community maintain, a CEF-compatible library, at least at the API level, that would be a Tier-3 project, much like Firefox OS now is. This, to me, seems like the minimal-effort and most useful way of allowing embeddable Gecko.
In addition, I think we should spend some effort in maintaining a fork of Node.js LTS that uses spidermonkey. If we can promise modern language features and better performance, I expect there’s a user-base that would be interested in this. If there isn’t, fair enough, but I don’t think current experiments have had enough backing to ascertain this.
I think that both of these projects are important, so that we can enable people outside of Mozilla to innovate using our technology, and by osmosis, become educated about our mission and hopefully spread our ideals. Other organisations will do their utmost to establish a monopoly in any new emerging market, and I think it’s a shame that we have such a powerful and comprehensive technology platform and we aren’t enabling other people to use it in more diverse situations.
This post is some insightful further reading on roughly the same topic.
Today, I took some time off to attend the court hearing in the GPL
violation/infringement case that Christoph Hellwig has brought against
VMware.
I am not in any way legally involved in the lawsuit. However, as a
fellow (former) Linux kernel developer myself, and a long-term Free
Software community member who strongly believes in the copyleft model, I
of course am very interested in this case - and of course in an outcome
in favor of the plaintiff. Nevertheless, the below report tries to
provide an unbiased account of what happened at the hearing today, and
does not contain my own opinions on the matter. I can always write
another blog post about that :)
I blogged about this case before briefly, and
there is a lot of information publicly discussed about the case,
including the information published by the Software Freedom
Conservancy (see the link above, the announcement and the FAQ).
Still, let's quickly summarize the facts:
- VMware is using parts of the Linux kernel in their proprietary ESXi
product, including the entire SCSI mid-layer, USB support, radix tree
and many, many device drivers.
- as is generally known, Linux is licensed under GNU GPLv2, a
  copyleft-style license.
- VMware has modified all the code they took from the Linux kernel and
integrated it into something they call vmklinux.
- VMware has modified their proprietary virtualization OS kernel
vmkernel with a specific API/symbols to interact with vmklinux
- at least in earlier versions of ESXi, virtually any block device
access has to go through vmklinux and thus the Linux-derived portions
of code
- vmklinux and vmkernel are dynamically linked object files that are
linked together at run-time
- the Linux code they took runs in the same execution context (address
space, stack, control flow) as the vmkernel.
Ok, now enter the court hearing of today.
Christoph Hellwig was represented by his two German Lawyers,
Dr. Till Jaeger and
Dr. Miriam Ballhausen.
VMware was represented by three German lawyers as well as a US attorney
(who followed the proceedings by means of two simultaneous
interpreters). There were also several
members of the in-house US legal team of VMware present, but not
formally representing the defendant in court.
Unusually for copyright disputes, there was quite a large audience
following the proceedings. Next to the VMware entourage, there were also a
couple of fellow Linux kernel developers as well as some German IT press
representatives following the hearing.
General Introduction of the presiding judge
After some formalities (like the question whether or not a ',' is
missing after the "Inc." in the way it is phrased in the lawsuit), the
presiding judge started with some general remarks
- the court is well aware of the public (and even international public)
interest in this case
- the court understands there are novel fundamental legal questions
raised that no court - at least no German court - had so far to decide
- the court also is well aware that the judges on the panel are not
technical experts and thus not well-versed in software development or
computer science. Rather, they are a court specialized on all sorts
of copyright matters, not particularly related to software.
- the court further understands that Linux is a collaborative,
community-developed operating system, and that the development process
is incremental and involves many authors.
- the court understands there is a lot of discussion about interfaces
between different programs or parts of a program, and that there are a
variety of different definitions and many interpretations of what
constitutes an interface.
Presentation about the court's understanding of the subject matter
The presiding judge continued to explain what was their understanding of
the subject matter. They understood that VMware ESXi serves to virtualize
computer hardware in order to run multiple copies of the same or of
different versions of operating systems on it. They also understand
that vmkernel is at the core of that virtualization system, and that it
contains something called vmkapi which is an interface towards Linux
However, they initially misunderstood the case as being somehow about an
interface between a Linux guest OS being virtualized on top of vmkernel.
It took
both defendant and plaintiff some time to illustrate that in fact this
is not the subject of the lawsuit, and that you can still have portions
of Linux running linked into vmkernel while exclusively only
virtualizing Windows guests on top of vmkernel.
The court went on to share their understanding of the GPLv2 and its
underlying copyleft principle, that it is not about abandoning the
authors' rights but to the contrary exercising copyright. They
understood the license has implications on derivative works and
demonstrated that they had been working with both the German
translation as well as the English language original text of GPLv2. At
least I was sort-of impressed by the way they grasped it - much better
than some of the other courts that I had to deal with in the various
cases I was bringing forward during my gpl-violations.org work before.
They also illustrated that they understood that Christoph Hellwig has
been developing parts of the Linux kernel, and that modified parts of
Linux were now being used in some form in VMware ESXi.
After this general introduction, there was the question of whether or
not both parties would still want to settle before going further. The
court already expected that this would be very unlikely, as it
understood that the dispute serves to resolve fundamental legal
question, and there is hardly any compromise in the middle between
using or not using the Linux code, or between licensing vmkernel under a
GPL compatible license or not. And as expected, there was no indication
from either side that they could see an out-of-court settlement of the
dispute at this point.
Discussion of specific Legal Issues (standing)
In terms of the legal arguments brought forward in hundreds of pages of
legal briefs being filed between the parties, the court summarized:
- they do not see a problem in the fact that the lawsuit by Christoph
Hellwig may be funded or supported by the Software Freedom
Conservancy. Christoph is acting on his own behalf, asserting his own
rights.
- they do not see any issues regarding the place of jurisdiction being
placed in Hamburg, Germany, as the defendant is providing the disputed
software via the Internet, which according to German law permits the
plaintiff to choose any court within Germany. The court added, of
course, that whatever verdict it may rule, this verdict will be
limited to the German jurisdiction.
- In terms of the type of authors' right being claimed by the plaintiff,
there was some discussion about paragraph 3 vs. 8 vs. 9 of German
UrhG (the German copyright law). In general it is understood that
the development method of the Linux kernel is a sequential,
incremental development process, and thus it is what we call
Bearbeiterurheberrecht (loosely translated as a modifying/editing
author's right) that is used by Christoph to make his claim.
Right to sue / sufficient copyrighted works of the plaintiff
There was quite some debate about the question whether or not the
plaintiff has shown that he actually holds a sufficient amount of
copyrighted works.
The question here is not whether Christoph has sufficient copyrightable
contributions to Linux as a whole; for the matter of this legal case
it is relevant which of his copyrighted works end up in the disputed
product VMware ESXi.
Due to the nature of the development process where lots of developers
make intermittent and incremental changes, it is not as straightforward
to demonstrate this as one would hope. You cannot simply print an
entire C file from the source code and mark large portions as being
written by Christoph himself. Rather, lines have been edited again and
again, were shifted, re-structured, re-factored. For non-developers
like the judges, it is therefore not obvious to decide on this question.
This situation is used by the VMware defense in claiming that overall,
they could only find very few functions that could be attributed to
Christoph, and that this may altogether be only 1% of the Linux code
they use in VMware ESXi.
The court recognized this as difficult, as in German copyright law there
is the concept of fading. If the original work by one author has been
edited to an extent that it is barely recognizable, his original work
has faded and so have his rights. The court did not state whether it
believed that this has happened. To the contrary, the indicated that it
may very well be that only very few lines of code can actually make a
significant impact on the work as a whole. However, it is problematic
for them to decide, as they don't understand source code and software
development.
So if (after further briefs from both sides and deliberation of the
court) this is still an open question, it might very well be the case
that the court would request a technical expert report to clarify this
to the court.
Are vmklinux + vmkernel one program/work or multiple programs/works?
Finally, there was some deliberation about the very key question of
whether or not vmkernel and vmklinux were separate programs / works
or one program / work in the sense of copyright law. Unfortunately only
the very surface of this topic could be touched in the hearing, and the
actual technical and legal arguments of both sides could not be heard.
The court clarified that if vmkernel and vmklinux would be considered
as one program, then indeed their use outside of the terms of the GPL
would be an intrusion into the rights of the plaintiff.
The difficulty is how to actually venture into the legal implications of
certain technical software architecture, when the people involved have
no technical knowledge on operating system theory, system-level software
development and compilers/linkers/toolchains.
A lot is thus left to how well and how 'believably' the parties can
present their case. It was very clear from the VMware side that they
wanted to down-play the role and proportion of vmkernel and its Linux
heritage. At times their lawyers made statements like "Linux is this
small yellow box in the bottom left corner (of our diagram)". So of
course even the diagrams are drawn in a way that twists the facts to
fit their view of reality.
- The court seems very much interested in the case and wants to
understand the details
- The court recognizes the general importance of the case and the public
interest in it
- There were some fundamental misunderstandings on the technical
architecture of the software under dispute that could be clarified
- There are actually not that many facts that are disputed between both
sides, except the (key, and difficult) questions on
- does Christoph hold sufficient rights on the code to bring forward the legal case?
- are vmkernel and vmklinux one work or two separate works?
The remainder of this dispute will thus be centered on the latter two
questions - whether in this court or in any higher courts that may have
to re-visit this subject after either of the parties takes this further,
if the outcome is not in their favor.
In terms of next steps,
- both parties have until April 15, 2016 to file further briefs to
follow-up the discussions in the hearing today
- the court scheduled May 19, 2016 as date of promulgation. However,
this would of course only hold true if the court would reach a clear
decision based on the briefs by then. If there is a need for an
expert, or any witnesses need to be called, then it is likely there
will be further hearings and no verdict will be reached by then.
Strap yourself in, this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity, to show how Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years, and to argue that it would still be advantageous to make Gecko embeddable.
Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd party application on any supported platform, and maintaining that support. An embeddable Gecko should impose very few constraints on the embedding application and should not include unnecessary resources. Examples of the sorts of applications an embeddable Gecko should enable include:
- A 3rd party browser with a native UI
- A game’s embedded user manual
- OAuth authentication UI
- A web application
It’s hard to predict what the next technology trend will be, but there is a strong likelihood it’ll involve the web, and there’s a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.
Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies and that could have leveraged Gecko:
(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.
(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and light-weightedness, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.
(2008) Android: Android used WebKit for its built-in browser and later for its built-in web-view. In recent times, it has switched to Chromium, showing they aren’t adverse to switching the platform to a different/better technology, and that a better embedding story can benefit a platform (Android’s built in web view can now be updated outside of the main OS, and this may well partly be thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until they switched to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.
(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko was readily available at this point, we would have had a large head start on FirefoxOS?
(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.
(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on Gecko and the benefits of that would be many sites that use PhantomJS for testing perhaps having better rendering and performance characteristics on Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko was developed and released in 2013. Due to Gecko’s embedding deficiencies though, SlimerJS is not truly headless.
(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development kit stage, WIMM was bought by Google in 2012, and it is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko, it’s not outside the realm of possibility that this platform could have been Gecko-based too.
(2013) Blink: Google decided to fork WebKit to better build for their own uses. Blink/Chromium quickly became the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important capability to maintain.
(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS might have given us an easy route into this area, but it has now been largely discontinued.
(2014) Atom/Electron: GitHub open-sourced its web-based text editor, Atom, built on a home-grown platform of Node.js and Chromium that it later called Electron. Since then, several large and very successful projects have been built on top of Electron, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine and, importantly, a more widely used one.
(2016) Brave: A former Mozilla co-founder and CTO heads a company that makes a new browser with the selling point of blocking ads and tracking by default, while doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based on Chromium, and on iOS it is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started out based on Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).
Current state of affairs
WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but it is generally viewed as less up-to-date and less performant than Chromium/Blink.
Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.
It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible improves your chances of future success.
If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be that we make this a higher priority than it has been in the past.
In 2008, we started bs11-abis, which was shortly after renamed to
OpenBSC. At the time it seemed like a good idea to use
trac as the project management system,
to have a wiki and an issue tracker.
When further Osmocom projects like OsmocomBB, OsmocomTETRA etc. came
around, we simply replicated that infrastructure: Another trac instance
with the same theme, and a shared password file.
The problems with this (and possibly with the way we used it) are:
- it doesn't scale, as creating projects is manual, requires a sysadmin
and is time-consuming. This meant e.g. SIMtrace was just a wiki page
in the OsmocomBB trac installation + associated http redirect, causing
confusion about where the project actually lives
- issues can not easily be moved from one project to another, or have
cross-project relationships (like depending on an issue in another
project)
- we had to use an external planet in order to aggregate the blog of
each of the trac instances
- user account management the way we did it required shell access to the
machine, meaning user account applications got dropped due to the
effort involved. My apologies for that.
Especially the lack of being able to move pages and tickets between
trac instances has resulted in a suboptimal use of the tools. If we
first write code as part of OpenBSC and then move it to libosmocore,
the associated issues + wiki pages should move along to the new project.
At the same time, for the last 5 years we've been successfully using
redmine inside sysmocom to keep track of
many dozens of internal projects.
So now, finally, we (zecke, tnt, myself) have taken up the task of
migrating the osmocom.org projects into redmine. You can see the current
status at http://projects.osmocom.org/. We could create a more
comprehensive project hierarchy, and give libosmocore, SIMtrace,
OsmoSGSN and many others their own project.
Thanks to zecke for taking care of the installation/sysadmin part and
the initial conversion!
Unfortunately the conversion from trac to redmine wiki syntax (and
structure) was not as automatic and straight-forward as one would have
hoped. But after spending one entire day going through the most
important wiki pages, things are looking much better now. As a side
effect, I have had a more comprehensive look into the history of all of
our projects than ever before :)
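To give an idea of why it wasn't straight-forward: trac uses its own
wiki markup, while redmine (at least in its default configuration) uses
textile, so every heading, link and inline formatting construct has to
be rewritten. The following minimal Python sketch is purely
illustrative of the kind of regex-based translation involved - it is
not the actual tool used for our migration, and it deliberately ignores
the hard parts such as macros, attachments and page hierarchy:

    #!/usr/bin/env python3
    # Illustrative sketch only: translate a few common trac wiki
    # constructs to redmine's textile markup. Reads trac markup on
    # stdin, writes textile on stdout.
    import re
    import sys

    RULES = [
        # '== Heading ==' -> 'h2. Heading' (level = number of '=')
        (re.compile(r'^(=+)\s*(.*?)\s*=+\s*$'),
         lambda m: 'h%d. %s' % (len(m.group(1)), m.group(2))),
        # '''bold''' -> *bold*  (must run before the italic rule)
        (re.compile(r"'''(.+?)'''"), r'*\1*'),
        # ''italic'' -> _italic_
        (re.compile(r"''(.+?)''"), r'_\1_'),
        # [wiki:Page display text] -> [[Page|display text]]
        (re.compile(r'\[wiki:(\S+)\s+([^\]]+)\]'), r'[[\1|\2]]'),
        # [wiki:Page] -> [[Page]]
        (re.compile(r'\[wiki:(\S+)\]'), r'[[\1]]'),
        # {{{ / }}} preformatted block markers -> <pre> / </pre>
        (re.compile(r'^\{\{\{\s*$'), '<pre>'),
        (re.compile(r'^\}\}\}\s*$'), '</pre>'),
    ]

    def convert_line(line):
        for pattern, repl in RULES:
            line = pattern.sub(repl, line)
        return line

    for line in sys.stdin:
        sys.stdout.write(convert_line(line.rstrip('\n')) + '\n')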
Still, a lot of clean-up and improvement is needed until I'm happy; in
particular, splitting the OpenBSC wiki into separate OsmoBSC, OsmoNITB,
OsmoBTS, OsmoPCU and OsmoSGSN wikis is probably still going to take
some time.
If you would like to help out, feel free to register an account on
projects.osmocom.org (if you don't already have one from the old trac
projects) and mail me for write access to the project(s) of your choice.
Possible tasks include
- putting pages into a more hierarchic structure (there's a parent/child
relationship in redmine wikis)
- fixing broken links due to page renames / wiki page moves
- creating a new redmine 'Project' for your favorite tool that has a git
repo on http://git.osmocom.org/ and writing some (at least initial)
documentation about it.
You don't need to be a software developer for that!
I've had the pleasure of being invited to netdevconf 1.1 in Seville, Spain.
After about a decade of absence in the Linux kernel networking
community, it was great to meet lots of former colleagues again, as well
as to see what kind of topics are currently being worked on and under
discussion.
The conference had a really nice spirit to it. I like the fact that it
is run by the community itself, organized by respected members of that
community. It feels like Linux-Kongress or OLS or UKUUG or many others
felt in the past. There's just something that got lost when the Linux
Foundation took over (or pushed aside) virtually every other Linux
kernel related event on the planet :/ So thanks to Jamal for starting
netdevconf, and thanks to Pablo and his team for running this particular
instance of it.
I never really wanted to leave netfilter and the Linux kernel network
stack behind - but then my problem appears to be that there are simply
way too many things of interest to me, and I had to venture first into
RFID (OpenPCD, OpenPICC), then into smartphone hardware and software
(Openmoko) and finally embark on a journey of applied telecoms
archeology by starting OpenBSC, OsmocomBB and various other Osmocom
projects.
Staying in Linux kernel networking land was simply not an option with
a scope that can only be described as wanting to implement any possible
protocol on any possible interface of any possible generation of
cellular network.
At times like attending netdevconf I wonder if I made the right choice
back then. Linux kernel networking is a lot of fun and hard challenges,
too - and it is definitely an area that's much more used by many more
organizations and individuals: The code I wrote on netfilter/iptables
is probably running on billions of devices by now. Compare that to the
Osmocom code, which is probably running on a few thousands of devices,
if at all. Working on Open Source telecom protocols is sometimes a
lonely fight. Not that I wouldn't value the entire team of developers
involved in it - to the contrary. But it is lonely in the sense that
99.999% of that world is a proprietary world, and FOSS cellular
infrastructure is just the 0.001% at the margin of all of that.
On the Linux kernel side, you have virtually every IT company putting
in their weight these days, and properly funded development is not that
hard to come by. In cellular, reasonable funding for anything (compared
to the scope and complexity of the tasks) is rather the exception than
the rule.
But no, I don't have any regrets. It has been an interesting journey,
and I probably had the chance to learn many more things than if I had
stayed in Linux kernel land.
If only each day had 48 hours and I could work both on Osmocom and on
the Linux kernel...