February 23, 2017
In May 2016 we got the GTP-U tunnel encapsulation/decapsulation
module developed by Pablo Neira, Andreas Schultz and myself merged into
the 4.8.0 mainline kernel.
During the second half of 2016, the code basically stayed untouched. In
early 2017, several patch series by (at least) three authors have been
published on the netdev mailing list for review and merge.
This poses the very valid question of how we test those (sometimes
quite intrusive) changes. Setting up a complete cellular network with
either GPRS/EGPRS or even UMTS/HSPA is possible using OsmoSGSN and
related Osmocom components. But it's of course a luxury that not many
Linux kernel networking hackers have, as it involves the availability of
a supported GSM BTS or UMTS hNodeB. And even if that is available,
there's still the issue of having a spectrum license, or a wired setup
with coaxial cable.
So as part of the recent discussions on netdev, I tested and described a
minimal test setup using libgtpnl, OpenGGSN and sgsnemu.
This setup will start a mobile station + SGSN emulator inside a Linux
network namespace, which talks GTP-C to OpenGGSN on the host, as well as
GTP-U to the Linux kernel GTP-U implementation.
In case you're interested, feel free to check the following wiki page:
This is of course just for manual testing, and for functional (not
performance) testing only. It would be great if somebody would pick up
on my recent mail containing some suggestions about an automatic
regression testing setup for the kernel GTP-U code. I have way
too many spare-time projects in desperate need of some attention to work
on this myself. And unfortunately, none of the telecom operators (who
are the ones benefiting most from a Free Software accelerated GTP-U
implementation) seems to be interested in at least co-funding or
otherwise contributing to this effort :/
Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.
So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.
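For readers unfamiliar with the metric: WER is simply the word-level edit distance between the recognised text and the reference transcript, divided by the length of the reference. A minimal sketch (purely illustrative, not the project's own evaluation code):

# Minimal word error rate (WER) sketch -- illustrative only, not the
# project's actual evaluation code.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance on words via dynamic programming.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # ~0.167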
You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.
The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.
We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need a powerful GPU(s) with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.
One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.
Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.
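To illustrate the general idea of quantisation (a toy sketch only, not the project's actual quantisation code or TensorFlow's own tooling): map float32 weights onto 8-bit integers with a single linear scale, and check how much accuracy is lost on the way back:

import numpy as np

# Toy 8-bit linear quantisation sketch -- illustrative only.

def quantise(weights: np.ndarray):
    """Map float32 weights onto int8 using a single linear scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantise(w)
w2 = dequantise(q, scale)
print("max absolute error:", np.abs(w - w2).max())
print("size reduction: %d -> %d bytes" % (w.nbytes, q.nbytes))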
Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment: one written in Python that takes advantage of TensorFlow Serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.
Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.
February 15, 2017
I've recently attended a seminar that (among other topics) also covered
RF interference hunting. The speaker was talking about various
real-world cases of RF interference and illustrating them in detail.
Of course everyone who has any interest in RF or cellular will know
about fundamental issues of radio frequency interference. For the most
part, you have
- cells of the same operator interfering with each other due to too
frequent frequency re-use, adjacent channel interference, etc.
- cells of different operators interfering with each other due to
intermodulation products and the like
- cells interfering with cable TV, terrestrial TV
- DECT interfering with cells
- cells or microwave links interfering with SAT-TV reception
- all types of general EMC problems
But what the speaker of this seminar covered was actually a cellular
base-station being re-broadcast all over Europe via a commercial satellite.
It is a well-known fact that most satellites in the sky are basically
just "bent pipes", i.e. they consist of a RF receiver on one frequency,
a mixer to shift the frequency, and a power amplifier. So basically
whatever is sent up on one frequency to the satellite gets
re-transmitted back down to earth on another frequency. This is exploited
in "satellite hijacking" or "transponder hijacking" and has been covered
for decades in various publications.
Ok, but how does cellular relate to this? Well, apparently some people
are running VSAT terminals (bi-directional satellite terminals) with
improperly shielded or broken cables/connectors. In that case, the RF
emitted from a nearby cellular base station leaks into that cable, and
will get amplified + up-converted by the block up-converter of that VSAT terminal.
The bent-pipe satellite subsequently picks this signal up and
re-transmits it all over its coverage area!
I've tried to find some public documents about this, and there's
surprisingly little public information about this phenomenon.
However, I could find a slide set from SES, presented at a
Satellite Interference Reduction Group: Identifying Rebroadcast (GSM)
It describes a surprisingly manual and low-tech approach to hunting down
the source of the interference by using an old Nokia net-monitor phone
to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were
already open source projects such as airprobe that could have done the
job based on sampled IF data. And I'm not even starting to consider what
would be possible with today's tools.
It should be relatively simple to have an SDR that you can tune to a
given satellite transponder, and which then would look for any
GSM/UMTS/LTE carrier within its spectrum and dump their identities in a
fully automatic way.
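Just to sketch the signal-processing side of such a scanner (illustrative only; a real implementation would need an SDR front end plus proper GSM/UMTS/LTE detectors on top), a first step could be a simple power-spectrum scan over captured IQ samples:

import numpy as np

# Sketch: find candidate carriers in a chunk of captured complex IQ samples
# by looking for peaks in the averaged power spectrum.  A real GSM/UMTS/LTE
# scanner would then try to decode sync/broadcast channels on each peak.

def find_carriers(iq: np.ndarray, samp_rate: float, centre_freq: float,
                  fft_size: int = 4096, threshold_db: float = 10.0):
    spectra = []
    for k in range(len(iq) // fft_size):
        chunk = iq[k * fft_size:(k + 1) * fft_size]
        spectra.append(np.abs(np.fft.fftshift(np.fft.fft(chunk))) ** 2)
    avg_db = 10 * np.log10(np.mean(spectra, axis=0) + 1e-12)
    noise_floor = np.median(avg_db)
    freqs = centre_freq + np.fft.fftshift(np.fft.fftfreq(fft_size, 1.0 / samp_rate))
    # return the frequencies of all bins that stick out above the noise floor
    return [f for f, p in zip(freqs, avg_db) if p > noise_floor + threshold_db]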
But then, maybe it really doesn't happen all that often after all to
justify such a development...
February 12, 2017
Ever since the good old days of the late 1980s - and to a surprising
extent even still today - telecom signaling traffic has been carried
over circuit-switched SS7 with its TDM lines as the physical layer, and not
over an IP/Ethernet based transport.
When Holger first created OsmoBSC, the BSC-only version of OpenBSC some
7-8 years ago, he needed to implement a minimal subset of SCCP wrapped
in TCP called SCCP Lite. This was due to the simple fact that the MSC
against which it had to operate implemented this non-standard protocol
stacking, which was developed + deployed before M3UA or SUA, as
specified by the IETF SIGTRAN WG, came around. But even after those were specified
in 2004, the 3GPP didn't specify how to carry A over IP in a standard
way until the end of 2008, when a first A interface over IP study appeared.
As time passes, more modern MSCs of course still implement classic
circuit-switched SS7, but appear to have dropped SCCPlite in favor of
real AoIP as meanwhile specified by 3GPP. So it's time to add this to
the osmocom universe and OsmoBSC.
A couple of years ago (2010-2013) I implemented both classic SS7
(MTP2/MTP3/SCCP) as well as SIGTRAN stackings (M2PA/M2UA/M3UA/SUA) in
Erlang. The result has been used in some production deployments, but
only with a relatively limited feature set. Unfortunately, this code
has not received any contributions in the time since, and I have to say
that as an open source community project, it has failed. Also, while
Erlang might be fine for core network equipment, running it on a BSC
really is overkill. Keep in mind that we often run OpenBSC on
really small ARM926EJ-S based embedded systems, much more resource
constrained than any smartphone of the last decade.
In the meantime (2015/2016) we also implemented some minimal SUA support
for interfacing with UMTS femto/small cells via Iuh (see OsmoHNBGW).
So in order to proceed to implement the required
SCCP-over-M3UA-over-SCTP stacking, I originally thought well, take
Holger's old SCCP code, remove it from the IPA multiplex below, stack it
on top of a new M3UA codebase that is copied partially from SUA.
However, this falls short of the goals in several ways:
- The application shouldn't care whether it runs on top of SUA or SCCP,
it should use a unified interface towards the SCCP Provider.
OsmoHNBGW and the SUA code already introduce such an interface based on
the SCCP-User-SAP implemented using Osmocom primitives (osmo_prim).
However, the old OsmoBSC/SCCPlite code doesn't have such an abstraction
(a conceptual sketch of this kind of interface follows after this list).
- The code should be modular and reusable for other SIGTRAN stackings
as required in the future
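Conceptually (and ignoring that the real code is C built around osmo_prim primitives rather than Python), the unified SCCP-User interface asked for in the first point boils down to something like the following sketch: the application codes against one set of connection-oriented primitives, and the provider backend (SCCP over M3UA, SUA, SCCPlite) is swappable underneath. Names here are illustrative, not the actual Osmocom API.

# Conceptual sketch of a provider-independent SCCP-User SAP.
# The real Osmocom implementation is C code using osmo_prim primitives;
# class and method names here are purely illustrative.

class SccpUserSap:
    """The interface the application (e.g. a BSC or HNB-GW) codes against."""
    def connect_request(self, called_addr, calling_addr, user_data): ...
    def data_request(self, conn_id, user_data): ...
    def disconnect_request(self, conn_id, cause): ...

class SccpOverM3uaProvider(SccpUserSap):
    """Backend stacking SCCP on top of M3UA/SCTP (as needed for 3GPP AoIP)."""
    def data_request(self, conn_id, user_data):
        # encode an SCCP DT1 message and hand it to the M3UA/SCTP transport
        ...

class SuaProvider(SccpUserSap):
    """Backend mapping the same primitives directly onto SUA/SCTP."""
    def data_request(self, conn_id, user_data):
        # encode a SUA CODT message instead; the application never notices
        ...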
So I found myself sketching out what needs to be done and I ended up
pretty much with a re-implementation of large parts. Not quite fun, but
definitely worth it.
The strategy is to implement the individual layers as separate, re-usable
modules, and then finally stack all those bits on top of each other, rendering a
fairly clean and modern implementation that can be used with the IuCS of
the virtually unmodified OsmoHNBGW, OsmoCSCN and OsmoSGSN for testing.
Next steps in the direction of the AoIP are:
- Implementation of the MTP-SAP based on the IPA transport
- Binding the new SCCP code on top of that
- Converting the OsmoBSC code base to use the SCCP-User-SAP for its A interface
From that point onwards, OsmoBSC doesn't care anymore whether it
transports the BSSAP/BSSMAP messages of the A interface over
SCCP/IPA/TCP/IP (SCCPlite), SCCP/M3UA/SCTP/IP (3GPP AoIP), or even
something like SUA/SCTP/IP.
However, the 3GPP AoIP specs (unlike SCCPlite) actually modify the
BSSAP/BSSMAP payload. Rather than using Circuit Identifier Codes and
then mapping the CICs to UDP ports based on some secret conventions,
they actually encapsulate the IP address and UDP port information for
the RTP streams. This is of course the cleaner and more flexible
approach, but it means we'll have to do some further changes inside the
actual BSC code to accommodate this.
February 11, 2017
When implementing any kind of communication protocol, one always dreams
of some existing test suite that one can simply run against the
implementation to check if it performs correctly in at least those use
cases that matter to the given application.
Of course in the real world, there rarely are protocols where this is
true. If test specifications exist at all, they are often just very
abstract texts for human consumption that you as the reader are supposed
to interpret and turn into actual test cases yourself.
For some (by far not all) of the protocols found in cellular networks,
every so often I have seen some formal/abstract machine-parseable test
specifications. Sometimes it was TTCN-2, and sometimes TTCN-3.
If you haven't heard about TTCN-3, it is basically a way to create
functional tests in an abstract description (textual + graphical), and
then compile that into an actual executable test suite that you can run
against the implementation under test.
However, when I last did some research into this several years ago, I
couldn't find any Free / Open Source tools to actually use those
formally specified test suites. This is not a big surprise, as even
much more fundamental tools for many telecom protocols are missing, such
as good/complete ASN.1 compilers, or even CSN.1 compilers.
To my big surprise I now discovered that Ericsson had released their
(formerly internal) TITAN TTCN3 Toolset
as Free / Open Source Software under EPL 1.0. The project is even part
of the Eclipse Foundation. Now I'm certainly not a friend of Java or
Eclipse by any means, but well, for running tests I'd certainly not
complain.
The project also doesn't seem like it was a one-time code-drop but seems
very active with many repositories on github. For example for the core
module, titan.core shows
plenty of activity on an almost daily basis. Also, binary releases for
a variety of distributions are made available. They
even have a video showing the installation ;)
If you're curious about TTCN-3 and TITAN, Ericsson also have made
available a great 200+ pages slide set about TTCN-3 and TITAN.
I haven't yet had time to play with it, but it definitely is rather high
on my TODO list to try.
ETSI provides a couple of test suites in TTCN-3 for protocols like
DIAMETER, GTP2-C, DMR, IPv6, S1AP, LTE-NAS, 6LoWPAN, SIP, and others at
http://forge.etsi.org/websvn/ (It's also the first time I've seen that
ETSI has an SVN server. Everyone else is using git these days, but yes,
a revision control system rather than periodic ZIP files is definitely
big progress. They should do that for their reference codecs and ASN.1
files, too.)
I'm not sure when I'll get around to it. Sadly, there is no TTCN-3 for
SCCP, SUA, M3UA or any SIGTRAN related stuff, otherwise I would want to
try it right away. But it definitely seems like a very interesting
technology (and tool).
February 10, 2017
Last weekend I had the pleasure of attending FOSDEM 2017. For many years now, it has probably been the most
exciting event dedicated exclusively to Free Software to attend every year.
My personal highlights (next to meeting plenty of old and new friends)
in terms of the talks were:
I attended, but was not so excited by, Georg Greve's OpenPOWER talk. It was a
great talk, and it is an important topic, but the engineer in me would
have hoped for some actual beefy technical stuff. But well, I was just
not the right audience. I had heard about OpenPOWER quite some time ago
and have been following it from a distance.
The LoRaWAN talk
couldn't have been any less technical, despite its topic promising
technical, political and cultural aspects. But then, well, just recently
33C3 had the most exciting LoRa PHY reverse engineering talk by Matt Knight.
Other talks whose recordings I still want to watch one of these days:
January 31, 2017
I'm very happy that in 2017, we will have the first ever technical
conference on the Osmocom cellular infrastructure projects.
For many years, we have had a small, invitation only event by Osmocom
developers for Osmocom developers called OsmoDevCon. This was fine for
the early years of Osmocom, but during the last few years it became
apparent that we also need a public event for our many users. Those
range from commercial cellular operators to community based efforts like
Rhizomatica, and of course include the many
research/lab type users with whom we started.
So now we'll have the public OsmoCon on April 21st, back-to-back with
the invitation-only OsmoDevCon from April 22nd through 23rd.
I'm hoping we can bring together a representative sample of our user
base at OsmoCon 2017 in April. Looking forward to meeting you all. I hope
you're also curious to hear more from other users, and of course from the
developers.
January 22, 2017
A few days ago, Autodesk announced
that the popular EAGLE electronics design automation (EDA) software is
moving to a subscription based model.
When previously you paid once for a license and could use that
version/license as long as you wanted, there now is a monthly
subscription fee. Once you stop paying, you lose the right to use the
software. Welcome to the brave new world.
I have remotely observed this subscription model as a general trend in
the proprietary software universe. So far it hasn't affected me at all,
as the only two proprietary applications I use on a regular basis
during the last decade are IDA and EAGLE.
I already have ethical issues with using non-free software, but those
two cases have been the exceptions, in order to get to the productivity
required by the job. While I can somehow convince my conscience in
those two cases that it's OK - using software under a subscription model is
completely out of the question, period. Not only would I end up paying
for the rest of my professional career in order to be able to open and
maintain old design files, but I would also have to accept software that
"calls home" and has "remote kill" features. This is clearly not
something I would ever want to use on any of my computers. Also, I
don't want software to be associated with any account, and it's not the
bloody business of the software maker to know when and where I use my software.
For me - and I hope for many, many other EAGLE users - this move is
utterly unacceptable and certainly marks the end of any business between
the EAGLE makers and myself and/or my companies. I will happily use
my current "old-style" EAGLE 7.x licenses for the near future, and then
see what kind of improvements I would need to contribute to KiCAD or
other FOSS EDA software in order to eventually migrate to those.
As expected, this doesn't only upset me, but many other customers, some
of whom have been loyal to using EAGLE for many years if not decades,
back to the DOS version. This is reflected by some media reports (like
this one at hackaday)
or user posts at element14.com or eaglecentral.ca,
which are similarly critical of this move.
Rest in Peace, EAGLE. I hope Autodesk gets what they deserve: A new
influx of migrations away from EAGLE into the direction of Open Source
EDA software like KiCAD.
In fact, the more I think about it, I'm actually very much inclined to
work on good FOSS migration tools / converters - not only for my own
use, but to help more people move away from EAGLE. It's not that I
don't have enough projects at hand already, but at least I'm
motivated to do something about this betrayal by Autodesk. Let's see
what (if anything) will come out of this.
So let's see it that way: What Autodesk is doing is raising the level
of pain of using EAGLE so high that more people will use and contribute to
FOSS EDA software. And that is actually a good thing!
December 30, 2016
I've just had the pleasure of attending all four days of 33C3 and have returned
home with somewhat mixed feelings.
I've been a regular visitor and speaker at CCC events since 15C3 in
1998, which among other things
means I'm an old man now. But I digress ;)
The event has come extremely far in those years. And to be honest, I
struggle with the size. Back then, it was a meeting of like-minded
hackers. You had the feeling that you know a significant portion of the
attendees, and it was easy to connect to fellow hackers.
These days, both the number of attendees and the size of the event make
you feel much rather that you're in general public, rather than at some
meeting of fellow hackers. Yes, it is good to see that more people are
interested in what the CCC (and the selected speakers) have to say, but
somehow it comes at the price that I (and I suspect other old-timers)
feel less at home. It feels too much like various other technology conferences.
One aspect creating a certain feeling of estrangement is also the venue
itself. There are an incredible number of rooms, with a labyrinth of
hallways, stairs, lobbies, etc. The size of the venue simply makes it
impossible to just _accidentally_ run into all of your fellow
hackers and friends. If I want to meet somebody, I have to make an
explicit appointment. That is an option that exists most of the rest of
the year, too.
While fefe is happy about the many small children attending
the event, to me this seems
somewhat alien and possibly inappropriate. I guess from teenage years
onward it certainly makes sense, as they can follow the talks and
participate in the workshop. But below that age?
The range of topics covered at the event also becomes wider, at least I
feel that way. Topics like IT security, data protection, privacy,
intelligence/espionage and learning about technology have always been
present during all those years. But these days we have bloggers sitting
on stage and talking about bottles of wine (seriously?).
Contrary to many, I also really don't get the excitement about shows
like 'Methodisch Inkorrekt'. Seems to me like mainstream
compatible entertainment in the spirit of the 1990s Knoff Hoff Show without much
potential to make the audience want to dig deeper into (information) technology.
Yesterday, together with Holger 'zecke' Freyther, I co-presented at 33C3 about
Dissecting modern (3G/4G) cellular modems.
This presentation covers some of our recent explorations into a specific
type of 3G/4G cellular modems, which next to the regular modem/baseband
processor also contain a Cortex-A5 core that (unexpectedly) runs Linux.
We want to use such modems for building self-contained M2M devices that
run the entire application inside the modem itself, without any external
needs except electrical power, SIM card and antenna.
Next to that, they also pose an ideal platform for testing the Osmocom
network-side projects for running GSM, GPRS, EDGE, UMTS and HSPA networks.
You can find the Slides
and the Video recordings
in case you're interested in more details about our work.
The results of our reverse engineering can be found in the wiki at
http://osmocom.org/projects/quectel-modems/wiki together with links to
the various git repositories containing related tools.
As with all the many projects that I happen to end up doing, it would be
great to get more people contributing to them. If you're interested in
cellular technology and want to help out, feel free to register at the
osmocom.org site and start adding/updating/correcting information in the wiki.
You can e.g. help by
- playing with the modem and documenting your findings
- reviewing the source code released by Qualcomm + Quectel and
documenting your findings
- help us to create a working OE build with our own kernel and rootfs
images as well as opkg package feeds for the modems
- help reverse engineering DIAG and QMI protocols as well as the open
source programs to interact with them
December 29, 2016
In 2016, Osmocom gained initial 3.5G support with osmo-iuh and the Iu
interface extensions of our libmsc and OsmoSGSN code. This means you can run
your own small open source 3.5G cellular network for SMS, Voice and Data.
However, the project needs more contributors: Become an active
member in the Osmocom development community and get your nano3G
femtocell for free.
I'm happy to announce that my company sysmocom hereby issues a call for
proposals to the general public. Please describe in a short proposal
how you would help us improve the Osmocom project if you were to
receive one of those free femtocells.
Details of this proposal can be found at
Please contact mailto:firstname.lastname@example.org in case of any questions.
December 16, 2016
When you work with GSM/cellular systems, the definitive resource is the
specifications. They were originally released by ETSI, later by 3GPP.
The problem starts with the fact that there are two separate numbering
schemes. Everyone in the cellular industry I know always uses the
GSM/3GPP TS numbering scheme, i.e. something like 3GPP TS 44.008.
However, ETSI assigns its own numbers to the specs, like ETSI TS
144008. Now in most cases, it is as simple as removing the '.' and
prefixing a '1' at the beginning. However, that's not always true and
there are exceptions such as 3GPP TS 01.01 mapping to ETSI TS
101855. To make things harder, there doesn't seem to be a
machine-readable translation table between the spec numbers, but there's
a website for spec number conversion at http://webapp.etsi.org/key/queryform.asp
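For the common case the mapping is trivial; a small sketch (keeping in mind that exceptions like the one above need a manual lookup table):

# Sketch: map a 3GPP TS number to the corresponding ETSI TS number.
# Works for the common case (drop the '.', prefix a '1'); known exceptions
# such as 3GPP TS 01.01 -> ETSI TS 101 855 need a manual override table.

EXCEPTIONS = {"01.01": "101855"}

def gpp_to_etsi(spec: str) -> str:
    if spec in EXCEPTIONS:
        return EXCEPTIONS[spec]
    return "1" + spec.replace(".", "")

print(gpp_to_etsi("44.008"))  # -> 144008, i.e. ETSI TS 144 008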
When I started to work on GSM related topics somewhere between my work
at Openmoko and the start of the OpenBSC project, I manually downloaded
the PDF files of GSM specifications from the ETSI website. This was a
cumbersome process, as you had to enter the spec number (e.g. TS 04.08)
in a search window, look for the latest version in the search results,
click on that and then click again for accessing the PDF file (rather
than a proprietary Microsoft Word file).
At some point a poor girlfriend of mine was kind enough to do this
manual process for each and every 3GPP spec, and then create a
corresponding symbolic link so that you could type something like evince
/spae/openmoko/gsm-specs/by_chapter/44.008.pdf into your command line
and get instant access to the respective spec.
However, of course, this gets out of date over time, and by now almost a
decade has passed without a systematic update of that archive.
To the rescue, 3GPP started quite some time ago to not only provide
the obnoxious M$ Word DOC files, but have deep links to ETSI. So you
could go to http://www.3gpp.org/DynaReport/44-series.htm and then click
on 44.008, and with one further click you had the desired PDF, served by
ETSI (3GPP apparently never provided PDF files).
However, in their infinite wisdom, at some point in 2016 the 3GPP
webmaster decided to remove those deep links. Rather than a nice long
list of released versions of a given spec,
http://www.3gpp.org/DynaReport/44008.htm now points to some crappy
dynamic page from which you
then get a ZIP file with a single Word DOC file inside. You can hardly
make it any more inconvenient and cumbersome. The PDF links would open
directly in your favorite PDF viewer. Single click to the information you want. But no,
the PDF links had to go and were replaced with ZIP file downloads that you
first need to extract, and then open in something like LibreOffice,
taking ages to load the document, rendering it improperly in a word
processor. I don't want to edit the spec, I want to read it, sigh.
So since the usability of this 3GPP specification resource had been
artificially crippled, I was annoyed sufficiently to come up with a workaround:
- first create a complete mirror of all ETSI TS (technical
specifications) by using a recursive wget on
- then use a shell script that utilizes pdfgrep and awk to determine the
3GPP specification number (it is written in the title on the first
page of the document) and create a symlink (a rough sketch of the idea
follows after this list). Now I have something
like 44.008-4.0.0.pdf -> ts_144008v040000p.pdf
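The actual published scripts use pdfgrep and awk; purely to illustrate the idea, a rough Python equivalent (the directory layout and the exact title pattern are assumptions on my part) could look like this:

import os
import re
import subprocess

# Sketch: for every mirrored ETSI PDF, read the 3GPP TS number from the
# document text and create a "by 3GPP number" symlink.  The real scripts
# at git.osmocom.org/3gpp-etsi-pdf-links use pdfgrep and awk instead.

PATTERN = re.compile(r"3GPP TS (\d{2}\.\d{2,3})")
os.makedirs("by_chapter", exist_ok=True)

for fname in os.listdir("."):
    if not fname.endswith(".pdf"):
        continue
    out = subprocess.run(["pdfgrep", "3GPP TS", fname],
                         capture_output=True, text=True)
    m = PATTERN.search(out.stdout)
    if m:
        link = "by_chapter/%s.pdf" % m.group(1)
        if not os.path.lexists(link):
            os.symlink(os.path.join("..", fname), link)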
It's such a waste of resources to have to download all those files and
then write a script using pdfgrep+awk to re-gain the same usability that
the 3GPP chose to remove from their website. Now we can wait for ETSI
to disable indexing/recursion on their server, and easy and quick spec
access would be gone forever :/
Why does nobody care about efficiency these days?
If you're also an avid 3GPP spec reader, I'm publishing the rather
trivial scripts used at http://git.osmocom.org/3gpp-etsi-pdf-links
If you have contacts to the 3GPP webmaster, please try to motivate them
to reinstate the direct PDF links.
December 07, 2016
Many years ago, in the aftermath of Openmoko shutting down, fellow
former Linux kernel hacker Werner Almesberger
was working on an IEEE 802.15.4 (WPAN) adapter for the Ben NanoNote.
As a spin-off to that, the ATUSB device was
designed: A general-purpose open hardware (and FOSS firmware + driver)
IEEE 802.15.4 adapter that can be plugged into any USB port.
This adapter has received a mainline Linux kernel driver written by
Werner Almesberger and Stefan Schmidt, which was eventually merged into
mainline Linux in May 2015 (kernel v4.2 and later).
Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver
maintainer) approached me about the situation that ATUSB hardware was
frequently asked for, but currently unavailable in its
physical/manufactured form. As we run a shop with smaller electronics
items for the wider Osmocom community at sysmocom, and we also
frequently deal with contract manufacturers for low-volume electronics
like the SIMtrace device anyway, it was easy to say "yes, we'll do it".
As a result, ready-built, programmed and tested ATUSB devices are now
finally available from the sysmocom webshop
Note: I was never involved with the development of the ATUSB hardware,
firmware or driver software at any point in time. All credits go to
Werner, Stefan and other contributors around ATUSB.
December 06, 2016
In a previous life I used to do a lot of IT security work, probably even
at a time when most people had no idea what IT security actually is. I
grew up with the Chaos Computer Club, as it was a great place to meet
people with common interests, skills and ethics. People were hacking
(aka 'doing security research') for fun, to grow their skills, to
advance society, to point out corporate stupidities and to raise
awareness about issues.
I've always shared any results worth noting with the general public.
Whether it was in RFID security, on GSM security, TETRA security, etc.
Even more so, I always shared the tools, creating free software
implementations of systems that - at that time - were very difficult to
impossible to access unless you worked for the vendors of related
devices, who obviously had a different agenda than disclosing security
concerns to the general public.
Publishing security related findings at related conferences can be
interpreted in two ways:
On the one hand, presenting at a major event will add to your
credibility and reputation. That's a nice byproduct, but that shouldn't
be the primary reason, unless you're some kind of egocentric stage addict.
On the other hand, presenting findings or giving any kind of
presentation or lecture at an event is a statement of support for that
event. When I submit a presentation at a given event, I think carefully
if that topic actually matches the event.
The reason that I didn't submit any talks in recent years at CCC events
is not that I didn't do technically exciting stuff that I could talk
about - or that I wouldn't have the reputation that would make people
consider my submission in the programme committee. I just thought there
was nothing in my work relevant enough to bother the CCC attendees with.
So when Holger 'zecke' Freyther and I chose to present about our recent
journeys into exploring modern cellular modems at the annual Chaos
Communications Congress, we did so because the CCC Congress is the right
audience for this talk. We did so, because we think the people there
are the kind of community of like-minded spirits that we would like to
contribute to. To whom we would like to give something back, for the many
years of excellent presentations and conversations had.
So far so good.
However, in 2016, something happened that I haven't seen yet in my 17
years of speaking at Free Software, Linux, IT Security and other
conferences: A select industry group (in this case the GSMA) asking me
out of the blue to give them the talk one month in advance at a private industry event.
I could hardly believe it. How could they? Who am I? Am I spending
sleepless nights and non-existent spare time on security research of
cellular modems to give a free presentation to corporate guys at a
closed industry meeting? The same kind of industries that create the
problems in the first place, and who don't get their act together in
building secure devices that respect people's privacy? Certainly not.
I spend sleepless nights of hacking because I want to share the results
with my friends. To share it with people who have the same passion,
whom I respect and trust. To help my fellow hackers to understand
technology one step more.
If that kind of request to undermine the researcher's/author's initial
publication among friends is happening to me, I'm quite sure it must be
happening to other speakers at the 33C3 or other events, too. And that
makes me very sad. I think the initial publication is something that
connects the speaker/author with his audience.
Let's hope the researchers/hackers/speakers have sufficiently strong
ethics to refuse such requests. If certain findings are initially
published at a certain conference, then that is the initial publication.
Period. Sure, you can ask afterwards if an author wants to repeat the
presentation (or a similar one) at other events. But pre-empting the
initial publication? Certainly not with me.
I offered the GSMA that I could talk on the importance of having FOSS
implementations of cellular protocol stacks as enabler for security
research, but apparently this was not of interest to them. Seems like all
they wanted was an exclusive heads-up on work they neither commissioned
nor supported in any other way.
And btw, I don't think what Holger and I will present about is all that
exciting in the first place. More or less the standard kind of security
nightmares. By now we are all so numbed down by nobody considering
security and/or privacy in the design of IT systems, that it is hardly any
news. IoT, the way it is done so far, might very well be the doom of
mankind. An unstoppable tsunami of insecure and privacy-invading
devices, built on ever more complex technology with way too many
security issues. We shall henceforth call IoT the Industry of
I typically prefer to blog about technical topics, but the occasional
stupidity in every-day (business) life is simply too hard to resist.
Today I updated the shipping pricing / zones in the ERP system of my
company to predict shipping rates based on weight and destination of a shipment.
Deutsche Post, the German postal system, is using their DHL brand for
postal packages. They divide the world into four zones:
- Zone 1 (EU)
- Zone 2 (Europe outside EU)
- Zone 3 (World)
You would assume that "World" encompasses everything that's not part of
the other zones. So far so good. However, I then stumbled upon Zone 4 (rest of
world). See for yourself:
So the World according to DHL is a very small group of countries
including Libya and Syria, while countries like Mexico are "rest of world".
Quite charming, I wonder which PR, communications or marketing guru came
up with such a disqualifying name. Maybe they should have called it 3rd
world and 4th world instead? Or even discworld?
November 27, 2016
In 2006 I first visited Taiwan. The reason back then was Sean Moss-Pultz
contacting me about a new Linux and Free Software based Phone that he
wanted to do at FIC in Taiwan. This later became the Neo1973 and
the Openmoko project and finally became part
of both Free Software as well as smartphone history.
Ten years later, it might be worth sharing a bit of a retrospective.
It was about building a smartphone before Android or the iPhone existed
or even were announced. It was about doing things "right" from a Free
Software point of view, with FOSS requirements going all the way down to
component selection of each part of the electrical design.
Of course it was quite crazy in many ways. First of all, it was a
bunch of white, long-nosed western guys in Taiwan, starting a company
around Linux and Free Software, at a time where that was not really
well-perceived in the embedded and consumer electronics world yet.
It was also crazy in terms of the many cultural 'impedance mismatches',
and I think at some point it might even be worth to write a book about
the many stories we experienced. The biggest problem here is of course
that I wouldn't want to expose any of the companies or people in the
many instances something went wrong. So probably it will remain a
secret to those present at the time :/
In any case, it was a great project and definitely one of the most
exciting (albeit busy) times in my professional career so far. It was
also great that I could involve many friends and FOSS-compatriots from
other projects in Openmoko, such as Holger Freyther, Mickey Lauer,
Stefan Schmidt, Daniel Willmann, Joachim Steiger, Werner Almesberger,
Milosch Meriac and others. I am happy to still work on a daily basis
with some of that group, while others have moved on to other areas.
I think we all had a lot of fun, learned a lot (not only about Taiwan),
and were working really hard to get the hardware and software into
shape. However, the constantly growing scope, the [for western terms]
quite unclear and constantly changing funding/budget situation and the
many changes in direction have ultimately led to missing the market
opportunity. At the time the iPhone and later Android entered the
market, it was too late for a small crazy Taiwanese group of
FOSS-enthusiastic hackers to still have a major impact on the landscape
of Smartphones. We tried our best, but in the end, after a lot of hype
and publicity, it never was a commercial success.
What's more sad to me than the lack of commercial success is also the
lack of successful free software that resulted. Sure, there were some
u-boot and linux kernel drivers that got merged mainline, but none of
the three generations of UI stacks (GTK, Qt or EFL based), nor the GSM
Modem abstraction gsmd/libgsmd nor middleware (freesmartphone.org) has
managed to survive the end of the Openmoko company, despite having
deserved to survive.
Probably the most important part that survived Openmoko was the
pioneering spirit of building free software based phones. This spirit
has inspired pure volunteer based projects like
GTA04/Openphoenux/Tinkerphone, who have achieved extraordinary results -
but who are in a very small niche.
What does this mean in practise? We're stuck with a smartphone world in
which we can hardly escape any vendor lock-in. It's virtually
impossible in the non-free-software iPhone world, and it's difficult in
the Android world. In 2016, we have more Linux based smartphones than
ever - yet we have less freedom on them than ever before. Why?
- the amount of hardware documentation on the processors and chipsets
today is typically less than 10 years ago. Back then, you could still
get the full manual for the S3C2410/S3C2440/S3C6410 SoCs. Today,
this is not possible for the application processors of any vendor
- the tighter integration of application processor and baseband
processor means that it is no longer possible on most phone designs to
have the 'non-free baseband + free application processor' approach
that we had at Openmoko. It might still be possible if you designed
your own hardware, but it's impossible with any actually existing
hardware in the market.
- Google blurring the line between FOSS and proprietary code in the
Android OS. Yes, there's AOSP - but how many features are lacking?
And on how many real-world phones can you install it? Particularly
with the Google Nexus line being EOL'd? One of the popular exceptions is the
Fairphone 2 with its alternative AOSP operating system,
even though that's not the default of what they ship.
- The many binary-only drivers / blobs, from the graphics stack to wifi
to the cellular modem drivers. It's a nightmare and really scary if
you look at all of that, e.g. at the binary blob downloads for such a device,
to get an idea about all the binary-only blobs on a relatively current
Qualcomm SoC based design. That's compressed 70 Megabytes, probably
as large as all of the software we had on the Openmoko devices back then.
So yes, the smartphone world is much more restricted, locked-down and
proprietary than it was back in the Openmoko days. If we had been more
successful then, that world might be quite different today. It was a
lost opportunity to make the world embrace more freedom in terms of
software and hardware, without single-vendor lock-in and proprietary blobs.
November 25, 2016
Early in 2016, a friend sent me a paper by Phillip Rogaway entitled “The Moral Character of Cryptographic Work”. I have read it many times this year. Here’s the abstract:
Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but, also, to the societal implications of our work.
The ability to take control of our lives, again, has been on my mind this month. Loss of control is often rooted in reframed language. Rogaway shows how privacy, anonymity, and even security are now associated with terrorism. His suggestion? Reframe the work of cryptography as building tools for anti-surveillance. Making “surveillance more expensive” is aligned with democracy and freedom. I think this is a great observation. Hopefully others will enjoy reading this paper as much as I did.
November 24, 2016
During the past 16 years I have been playing a lot with a variety of embedded devices.
One of the most important tasks for debugging or analyzing embedded
devices is usually to get access to the serial console on the UART of
the device. That UART is often exposed at whatever logic level the main
CPU/SOC/uC is running on. For 5V and 3.3V that is easy, but for the
more and more unusual voltages I always had to build a custom cable or a
custom level shifter.
In 2016, I finally couldn't resist any longer and built a multi-voltage
USB UART adapter.
This board exposes two UARTs at a user-selectable voltage of 1.8, 2.3,
2.5, 2.8, 3.0 or 3.3V. It can also use whatever other logic voltage
between 1.8 and 3.3V, if it can source a reference of that voltage from
the target embedded board.
Rather than just building one for myself, I released the design as open
hardware under CC-BY-SA license terms. Full schematics + PCB layout
design files are available. For more information see
In case you don't want to build it from scratch, ready-made machine
assembled boards are also made available from
There are plenty of cellular modems on the market in the mPCIe form factor.
Playing with such modems is reasonably easy, you can simply insert them
in a mPCIe slot of a laptop or an embedded device (soekris, pc-engines
or the like).
However, many of those modems actually export interesting signals like
digital PCM audio or UART ports on some of the mPCIe pins, both in
standard and in non-standard ways. Those signals are inaccessible in
those embedded devices or in your laptop.
So I built a small break-out board which performs the basic function of
exposing the mPCIe USB signals on a USB mini-B socket, providing power
supply to the mPCIe modem, offering a SIM card slot at the bottom, and
exposing all additional pins of the mPCIe header on a standard 2.54mm
pitch header for further experimentation.
The design of the board (including schematics and PCB layout design
files) is available as open hardware under CC-BY-SA license terms. For
more information see http://osmocom.org/projects/mpcie-breakout/wiki
If you don't want to build your own board, fully assembled and tested
boards are available from
September 29, 2016
A long time ago I wrote the OpenEmbedded User Manual and back then the obvious choice was to make it a docbook document. In my community there were plenty of other examples that used docbook and it helped to get started. The great thing about docbook was that with one XML input one could generate output in many different formats like HTML, XHTML, ePub or PDF. It separated the content from the presentation and was tailored for technical documents and articles with advanced features like generating a change history, appendix and many more. With XML entities it was possible to share chapters and parts between different manuals.
When creating sysmocom and starting to write our user manuals we continued to use docbook. After all, besides the many tags in XML, it is a format that can be committed to git, allowing review, and publishing works just like a software build and can be triggered through git.
On the other hand writing XML by hand, indenting paragraphs to match the tree structure of the document is painful. In hindsight writing a docbook feels more like writing xml tags than writing content. I started to look for alternatives and heard about asciidoc, discarded it and then re-evaluated and started to use it as default. The ratio of content to formatting is really great. With a2x we continued to use docbook/dblatex to render the document. With some trial and error we could even continue to use the docbook docinfo (-a docinfo and a file manual-docinfo.xml). And finally asciidoc can be used on github as well. It works by adding .adoc to the filename and will be rendered nicely.
So with asciidoc, restructured text (rst), markdown (md) and many more (textile, pillar, …) we have great tools that make it easier to focus on the content and have an okay look. The downside is that there are so many of them now (and incompatible dialects). This leads to rendering tools having big differences, e.g. not being able to use a docinfo for PDF generation, being able to add raw PDF commands, etc.
In the attempt to pick up users where they are, I am exploring the use of readthedocs.org as an additional channel for documents. The website can integrate with github to automatically rebuild the documentation. One issue is that they exclusively use Python sphinx to render the documentation, and that means it needs to use rst or markdown (or both) as input.
I could go down the xkcd way and create a meta-format to rule them all, try to use pandoc to convert these documents on the fly (but pandoc already had some issues with basic tables in rst), or switch the format. I looked at rst2pdf, but while powerful it seems to be lacking docinfo support and markdown input. I am currently exploring staying with asciidoc and then using asciidoc -> docbook -> markdown_github for readthedocs. Let’s see how far this gets.
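For what it's worth, the conversion chain itself is short; a sketch of the idea using subprocess (file names are placeholders, and pandoc's docbook reader may still trip over some constructs):

import subprocess

# Sketch of the asciidoc -> docbook -> markdown_github chain; the
# file names are placeholders and error handling is omitted.

SRC = "manual.adoc"

# asciidoc renders the document to DocBook XML ...
subprocess.run(["asciidoc", "-b", "docbook", "-o", "manual.xml", SRC], check=True)

# ... and pandoc converts the DocBook XML to GitHub-flavoured markdown.
subprocess.run(["pandoc", "-f", "docbook", "-t", "markdown_github",
                "-o", "manual.md", "manual.xml"], check=True)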
September 20, 2016
Moving on from 2G/3G requires learning a new set of abbreviations. The network is referred to as the IP Multimedia Subsystem (IMS) and the HLR becomes the Home Subscriber Server (HSS). The ITU ASN.1 used in 2G/3G to define the RPCs (request, response, potential errors), message structure and encoding is replaced with a set of IETF RFCs. From my point of view the names of messages and attributes change, but the basic broken trust model remains.
Having worked on probably the best ASN1/TCAP/MAP stack in Free Software, it is time to move to the future and apply the good parts and lessons learned to Diameter. The first RFC to look at is RFC 6733 – Diameter Base Protocol. This defines the basic encoding of messages, the requests, responses and errors, a BNF grammar to define these messages, when and how to connect to remote systems, etc.
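To give a feel for the "basic encoding" part: every Diameter message starts with a fixed 20-byte header (version, length, flags, command code, application id, hop-by-hop and end-to-end identifiers). A small decoding sketch (illustrative only, unrelated to the Smalltalk stack discussed below):

import struct

# Sketch: decode the fixed 20-byte Diameter header (RFC 6733 section 3).

def decode_diameter_header(buf: bytes):
    if len(buf) < 20:
        raise ValueError("short Diameter header")
    ver_len, flags_code, app_id, hop_by_hop, end_to_end = struct.unpack(
        "!IIIII", buf[:20])
    return {
        "version":        ver_len >> 24,        # must be 1
        "length":         ver_len & 0xFFFFFF,   # total message length
        "request":        bool(flags_code & 0x80000000),
        "proxiable":      bool(flags_code & 0x40000000),
        "error":          bool(flags_code & 0x20000000),
        "command_code":   flags_code & 0xFFFFFF,
        "application_id": app_id,
        "hop_by_hop":     hop_by_hop,
        "end_to_end":     end_to_end,
    }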
The core part of our ASN1/TCAP/MAP stack is that the 3GPP ASN1 files are parsed and instead of just generating structs for the types (like done with asn1c and many other compilers) we have a model that contains the complete relationship between application-context, contract, package, argument, result and errors. From what I know this is quite unique (at least in the FOSS world) and it has allowed rapid development of a HLR, SMSC, SCF, security research and more.
So getting a complete model is the first step. This will allow us to generate encoders/decoders for languages like C/C++, be the base of a stack in Smalltalk, allow to browse the model graphically, generate fancy pictures, …. The RFC defines a grammar of how messages and grouped Attribute-Value-Pairs (AVP) are formatted and then a list of base messages. The Erlang/OTP framework has then extended this grammar to define a module and relationships between modules.
I started by converting the BNF into a PetitParser grammar. Which means each rule of the grammar becomes a method in the parser class, then one can create a unit test for this method and test the rule. To build a complete parser the rules are being combined (and, or, min, max, star, plus, etc.) with each other. One nice tool to help with debugging and testing the parser is the PetitParser Browser. It is pictured above and it can visualize the rule, show how rules are combined with each other, generate an example based on the grammar and can partially parse a message and provide debug hints (e.g. ‘::=’ was expected as next token).
After having written the grammar I tried to parse the RFC example and it didn’t work. The sad truth is that while the issue was known in RFC 3588, it has not been fixed. I created another errata item and let’s see when and if it is being picked up in future revisions of the base protocol.
The next step is to convert the grammar into a module. I will progress as time permits and contributions are more than welcome.
September 18, 2016
Previously I have written about connectivity options for IoT devices and today I assume that a cellular technology (e.g. names like GSM, 3G, UMTS, LTE, 4G) has been chosen. Unless you are a big vendor you will end up using a module (instead of a chipset) and either you are curious what the module is doing behind its AT command interface or you are trying to understand a real problem. The following is going to help you or at least be entertaining.
The xgoldmon project was the first to provide air interface traces and logging to the general public, but it was limited to Infineon basebands (and some Gemalto devices), needed special commands to enable, and didn’t include all messages all the time.
In the last months I have intensively worked with modules of a vendor called Quectel. They are using Qualcomm chipsets and have built the GSM/UMTS Quectel UC20 and the GSM/UMTS/LTE Quectel EC20 modules. They are available as a variant to solder but for speeding up development they provide them as miniPCI express as well. I ended up putting them into a PCengines APU2, soldered an additional SIM card holder for the second SIM card, placed U.FL to SMA connectors and put it into one of their standard cases. While the UC20 and EC20 are pretty similar the software is not the same and some basic features are missing from the EC20, e.g. the SIM ToolKit support. The easiest way to acquire these modules in Europe seems to be through the above links.
The extremely nice feature is that both modules export Qualcomm’s bi-directional DIAG debug interface by USB (without having to activate it through an undocumented AT command). It is a framed protocol with a simple checksum at the end of a frame and many general (e.g. logging and how regions are described) types of frames are known and used in projects like ModemManager to extract additional information. Some parts that include things like Tx-power are not well understood yet.
I have made a very simple utility available on github that will enable logging and then convert radio messages to the Osmocom GSMTAP protocol and send it to a remote host using UDP or write it to a pcap file. The result can be analyzed using wireshark.
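For the curious, the GSMTAP side is simple: each radio message is prefixed with a small header and sent as a UDP datagram (by convention to port 4729), which wireshark dissects natively. A rough sketch of that framing (field layout follows Osmocom's GSMTAP v2 definition from memory; check gsmtap.h before relying on the details):

import socket
import struct

# Sketch: wrap a raw air-interface message in a GSMTAP v2 header and send
# it to wireshark via UDP.  Field layout as recalled from Osmocom's
# gsmtap.h -- treat it as an approximation.

GSMTAP_PORT = 4729
GSMTAP_VERSION = 0x02
GSMTAP_TYPE_UM = 0x01   # GSM Um air interface

def send_gsmtap(payload: bytes, host: str, arfcn: int = 0,
                frame_number: int = 0, timeslot: int = 0):
    header = struct.pack("!BBBBHbbIBBBB",
                         GSMTAP_VERSION,
                         4,                 # header length in 32-bit words
                         GSMTAP_TYPE_UM,
                         timeslot,
                         arfcn,
                         0,                 # signal level (dBm)
                         0,                 # signal/noise ratio (dB)
                         frame_number,
                         0,                 # sub type (channel type)
                         0,                 # antenna number
                         0,                 # sub slot
                         0)                 # reserved
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(header + payload, (host, GSMTAP_PORT))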
You will need a new enough Linux kernel (e.g. >= Linux 4.4) to have the modems be recognized and initialized properly. This will create four ttyUSB serial devices, a /dev/cdc-wdmX and a wwanX interface. The latter two can be used to have data as a normal network interface instead of launching pppd. In short these modules are super convenient to add connectivity to a product.
PCengines APU2 with Quectel EC20 and Quectel UC20
The repository includes a shell script to build some dependencies and the main utility. You will need to install autoconf, automake, libtool, pkg-config, libtalloc, make, gcc on your Linux distribution.
git clone git://github.com/moiji-mobile/diag-parser
Assuming that your modem has exposed the DIAG debug interface on /dev/ttyUSB0 and you have your wireshark running on a system with the internal IPv4 address of 10.23.42.7 you can run the following command.
./diag-parser -g 10.23.42.7 -i /dev/ttyUSB0
Analyzing UMTS with wireshark. The below shows a UMTS capture taken with the Quectel module. It allows you to see the radio messages used to register to the network, when sending a SMS and when placing calls.
Wireshark dissecting UMTS
August 22, 2016
Many of us deal or will deal with (connected) M2M/IoT devices. This might be writing firmware for microcontrollers, using an RTOS like NuttX or a full-blown Unix(-like) operating system like FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or somewhere in between. Many of these devices will have sensors to collect data like GNSS position/time, temperature, light, acceleration, seeing airplanes, detecting lightning, etc. The backend problem is work but mostly “solved”: one can rely on something like Amazon IoT or create a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data can go from the device to the backend.
To make this thought experiment a bit more real let’s imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycle to work and bikes being stolen remains a big tragedy. So the primary focus of an IoT device would be to prevent theft (make other bikes a more easy target) or making selling a stolen bicycle more difficult (e.g. by easily checking if something has been stolen) and in case it has been stolen to make it more easy to find the current location.
Let’s assume two different architectures. One possibility is to have the bicycle actively acquire the position and then try to push this information to a server (“active push”). Another approach is to have fixed installed scanning stations or users to scan/report bicycles (“passive pull”). Both lead to very different designs.
For the active-push approach, the system would need some sort of GNSS module, a microcontroller or some full-blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have good antennas to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, have a way to bridge the air-gap from the bicycle to the server.
For the passive-pull approach, the device would not know its position or whether it is moved. It might be a simple barcode/QR code/NFC/iBeacon/etc. In case of a barcode it could be the serial number of the frame and some owner/registration information. In case of NFC it should be a randomized serial number (if possible, to increase privacy). Users would need to scan the barcode/QR-code and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinate) and upload it to the server. For NFC the smartphone might be able to scan the tag and one can try to put readers at busy locations.
The incentive for the app user is to feel good collecting points for scanning bicycles, maybe some rewards if a stolen bicycle is found. Buyers could easily check bicycles if they were reported as stolen (not considering the difficulty of how to establish ownership).
The technologies that come to my mind are barcodes, playing some humanly not hearable noise and decoding it in an app, and Bluetooth Smart. Next I will look at the main differentiators/constraints of these technologies, provide a small explanation, and finish with how these constraints interact with each other.
World wide usable
Radio Technology operates on a specific set of radio frequencies (Bands). Each country may manage these frequencies separately and this can lead to having to use the same technology on different bands depending on the current country. This will increase the complexity of the antenna design (or require multiple of them), make mechanical design more complex, makes software testing more difficult, production testing, etc. Or there might be multiple users/technologies on the same band (e.g. wifi + bluetooth or just too many wifis).
Power consumption
Each radio technology requires broadcasting and might require listening to or permanently monitoring the air for incoming messages (“paging”). With NFC the scanner might be able to power the device, but for other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.
Range
Different technologies were designed to work with sender/receiver being away at different min/max distances (and speeds, but that is not relevant for the lock, nor is the bandwidth for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is breathing).
Pick two of three
Ideally we want something that works over long distances, requires no battery to send/receive and the system is still pushing out the position/acceleration/event report to servers. Sadly this is not how reality works and we will have to set priorities.
The more bands to support, the more complicated the antenna design, production, calibration, testing. It might be that one technology does not work in all countries or that it is not equally popular or the market situation is different, e.g. some cities have city wide public hotspots, some don’t.
Higher power transmission increases the range but increases the power consumption even more. More current will be used during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries. Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.
- Barcode / QR code: scan device needed: an app scanning the barcode; cost of device: a sticker, which needs to be hard to remove and visible, maybe embedded into the frame
- Non human hearable audio: scan device needed: an app recording audio; cost of device: a button to play the audio?
- NFC/RFID: world wide usable, but not on a single band; range of centimeters to meters; many bands, specific readers needed
- Bluetooth Smart: world wide usable, but on a common band; competes with Wifi for spectrum
- Wifi: world wide usable, but not on a single band
- 6LoWPAN: not commonly deployed, software more involved; uses the ZigBee physical layer and then IPv6, requires a 6LoWPAN to Internet translation
- GSM: usable almost world wide, besides South Korea, Japan and some islands; almost global coverage, direct communication with the backend possible
- UMTS: usable in fewer places than GSM, but covers South Korea and Japan; range of meters to kilometers depending on usage; higher power usage than GSM, higher device cost
- LTE: coverage less than GSM; designed for kilometers; expensive, higher power consumption
- NB-IOT: not deployed and coming in the future; can embed GSM equally well into an LTE carrier
Both a push and pull architecture seem to be feasible and create different challenges and possibilities. A pull architecture will require at least Smartphone App support and maybe a custom receiver device. It will only work in regions with lots of users and making privacy/tracking more difficult is something to solve.
For push technology using GSM is a good approach. If coverage in South Korea or Japan is required a mix of GSM/UMTS might be an option. NB-IOT seems nice but right now it is not deployed and it is not clear if a module will require less power than a GSM module. NB-IOT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side but that is for other posts.