August 22, 2016
Many of us deal, or will deal, with (connected) M2M/IoT devices. This might be writing firmware for microcontrollers, using an RTOS like NuttX or a full blown Unix(-like) operating system like FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or something in between. Many of these devices will have sensors to collect data like GNSS position/time, temperature, light, acceleration, seeing airplanes, detecting lightning strikes, etc. The backend part is work but mostly “solved”. One can rely on something like Amazon IoT or create a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data can get from the device to the backend.
To make this thought experiment a bit more real let’s imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycle to work and bikes being stolen remains a big tragedy. So the primary focus of an IoT device would be to prevent theft (make other bikes a more easy target) or making selling a stolen bicycle more difficult (e.g. by easily checking if something has been stolen) and in case it has been stolen to make it more easy to find the current location.
Let’s assume two different architectures. One possibility is to have the bicycle actively acquire its position and then try to push this information to a server (“active push”). Another approach is to have fixed scanning stations or users scan/report bicycles (“passive pull”). Both lead to very different designs.
For the active push the system would need some sort of GNSS module, a microcontroller or a full blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have good enough antennas to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, have a way to bridge the air-gap from the bicycle to the server.
For the passive pull the device would not know its position or whether it is being moved. It might be a simple barcode/QR code/NFC tag/iBeacon/etc. In case of a barcode it could encode the serial number of the frame and some owner/registration information. In case of NFC it should be a randomized serial number (if possible, to increase privacy). Users would scan the barcode/QR code and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinate) and upload it to the server. For NFC a smartphone might be able to scan the tag and one can try to put readers at busy locations.
The incentive for the app user is to feel good about collecting points for scanning bicycles, maybe with some reward if a stolen bicycle is found. Buyers could easily check whether a bicycle was reported as stolen (not considering the difficulty of how to establish ownership).
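To make the passive pull a bit more concrete, here is a rough sketch (in Python) of the kind of report the scanning app described above could upload. The endpoint, field names and values are invented purely for illustration; a real service would also need authentication and abuse protection.

import json
import urllib.request

# Hypothetical report of a scanned bicycle; all names below are made up.
report = {
    "frame_serial": "WTU123456789",              # read from the barcode/QR code
    "scanned_at": "2016-08-22T12:00:00Z",
    "position": {"lat": 52.3702, "lon": 4.8952},  # WGS 84 fix from the phone
    "cells": ["204-08-1234-5678"],                # visible cell towers
    "wifi": ["aa:bb:cc:dd:ee:ff"],                # nearby wifi BSSIDs
}

# Hypothetical endpoint of the tracking backend.
req = urllib.request.Request(
    "https://bikes.example.org/api/v1/report",
    data=json.dumps(report).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)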
The technologies that come to my mind are barcodes/QR codes, playing a sound not audible to humans and decoding it in an app, NFC, Bluetooth Smart, Wifi, 6LoWPAN, GSM, UMTS, LTE and NB-IOT. Next I will look at the main differentiators/constraints of these technologies, provide a small explanation and finish with how these constraints interact with each other.
World wide usable
Radio technology operates on a specific set of radio frequencies (bands). Each country may manage these frequencies separately and this can lead to having to use the same technology on different bands depending on the current country. This increases the complexity of the antenna design (or requires multiple antennas), makes the mechanical design more complex, and makes software testing, production testing, etc. more difficult. Or there might be multiple users/technologies on the same band (e.g. wifi + bluetooth, or just too many wifis).
Power consumption
Each radio technology requires the device to transmit and might require it to listen or permanently monitor the air for incoming messages (“paging”). With NFC the scanner might be able to power the device, but for other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.
Range
Different technologies were designed for sender and receiver being apart at different minimum/maximum distances (and speeds, but that is not relevant for the lock, nor is bandwidth for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters to meters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is breathing).
Pick two of three
Ideally we want something that works over long distances, requires no battery to send/receive, and still pushes position/acceleration/event reports out to the servers. Sadly this is not how reality works and we will have to set priorities.
The more bands we need to support, the more complicated the antenna design, production, calibration and testing become. It might be that one technology does not work in all countries, or that it is not equally popular, or that the market situation is different, e.g. some cities have city-wide public hotspots, some don’t.
Higher transmission power increases the range but increases the power consumption even more. More current will be used during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries.
Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.
- Barcode / QR code: a scan device is needed and has its own cost; an app scanning the barcode is required; the sticker needs to be hard to remove and visible, maybe embedded into the frame.
- Audio: non human hearable audio; an app recording the audio is required; maybe a button to play the audio?
- NFC: world wide usable, but not on a single band; range of centimeters to meters; many bands, specific readers needed.
- Bluetooth Smart: world wide usable and common; competes with Wifi for spectrum.
- 6LoWPAN: world wide usable, but not on a single band; not commonly deployed, software more involved; uses the ZigBee physical layer and then IPv6, requires a 6LoWPAN to Internet translation.
- GSM: almost world wide usable, besides South Korea, Japan and some islands; almost global coverage, direct communication with the backend possible.
- UMTS: less coverage than GSM, but covers South Korea and Japan; range of meters to kilometers depending on usage; higher power usage than GSM, higher device cost.
- LTE: less coverage than GSM; designed for kilometers; expensive, higher power consumption.
- NB-IOT: not deployed yet and coming in the future; can embed GSM equally well into an LTE carrier.
Both a push and a pull architecture seem feasible and create different challenges and possibilities. A pull architecture will require at least smartphone app support and maybe a custom receiver device. It will only work in regions with lots of users, and making tracking/privacy abuse more difficult is something that needs to be solved.
For the push architecture, using GSM is a good approach. If coverage in South Korea or Japan is required, a mix of GSM/UMTS might be an option. NB-IOT seems nice, but right now it is not deployed and it is not clear if a module will require less power than a GSM module. NB-IOT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side, but that is for other posts.
August 21, 2016
As part of running infrastructure it might make sense, or be required, to store logs of transactions. A good way might be to capture the raw unmodified network traffic. For our GSM backend this is what we (have) to do, and I wrote a client that uses libpcap to capture data and sends it to a central server for storing the trace. The system is rather simple and in production at various customers. The benefit of having a central server is having access to a lot of storage without granting too many systems and users access, central log rotation and compression, an easy way to grab all relevant traces, and more.
Recently the topic of doing real-time processing of the captured data came up. I wanted to add some kind of side-channel that distributes data to interested clients before writing it to disk. E.g. one might analyze an RTP audio flow for packet loss and jitter without actually storing the personal conversation.
I didn’t create a custom protocol but decided to try ØMQ (ZeroMQ). It has many built-in strategies (publish/subscribe, round robin routing, pipeline, request/reply, proxying, …) for connecting distributed systems. The framework abstracts DNS resolving, connect and re-connect, and makes it very easy to build the standard message exchange patterns. I opted for the publish/subscribe pattern because the collector server (acting as publisher) does not care if anyone is consuming the events or data. The messages I send are quite simple as well. There are two kinds of multi-part messages, one for events and one for data. A subscriber can easily filter for events or data and filter for a specific capture source.
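As an illustration (and not the actual osmo-pcap client code), a subscriber for such a side-channel could look roughly like the Python sketch below. The endpoint and the exact topic prefixes ("event"/"data") are assumptions; the real values come from the osmo-pcap configuration and source.

import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://pcap-collector.example.org:6666")  # hypothetical endpoint
sub.setsockopt(zmq.SUBSCRIBE, b"")                    # empty filter: receive all topics

while True:
    # Multi-part message: first frame is the topic, the rest is the payload.
    parts = sub.recv_multipart()
    topic, payload = parts[0], parts[1:]
    if topic.startswith(b"event"):
        print("event:", payload)
    else:
        print("data: %d bytes" % sum(len(p) for p in payload))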
The support for ZeroMQ was added in two commits. The first one adds basic ZeroMQ context/socket support and configuration, and the second adds sending out the events and data in a fire-and-forget manner. In a simple test set-up it seems to work just fine.
Since moving to Amsterdam I try to attend more meetups. Recently I went to a talk at the local Elasticsearch group and found out about packetbeat. It is a program written in Go that uses a PCAP library to capture network traffic, has protocol decoders written in Go to do IP re-assembly and decoding, and will upload the extracted information to an instance of Elasticsearch. In principle it sits somewhere between my PCAP system and a distributed wireshark (without the same amount of protocol decoders). In our network we wouldn’t want the edge systems to talk directly to the Elasticsearch system, and I wouldn’t want to run decoders as root (or at least not with extended capabilities).
As an exercise to learn a bit more about the Go language I tried to modify packetbeat to consume trace data from my new data interface. The result can be found here, and I do understand (though I am still hooked on Smalltalk/Pharo) why a lot of people like Go. The built-in fetching of dependencies from github is very neat, and the module and interface/implementation approach is easy to comprehend and powerful.
The result of my work allows a set-up like the one pictured below. First we centralize traffic capturing at the pcap collector and then have packetbeat pick up the data, decode it and forward it for analysis into Elasticsearch. Let’s see if upstream is merging my changes.
This is part of a series of blog posts about testing inside the OpenBSC/Osmocom project. In this post I am focusing on our usage of GNU autotest.
GNU autoconf ships with a not very well known piece of software called GNU autotest, which we will focus on in this blog post.
GNU autotest is a very simple framework/test runner. One needs to define a testsuite and this testsuite will launch test applications and record the exit code, stdout and stderr of the test application. It can diff the output with the expected one and fail if they do not match. Like any of the GNU autotools, a log file is kept about the execution of each test. This tool can be nicely integrated with automake’s make check and make distcheck, which will execute the testsuite and, in case of a test failure, fail the build.
The way we use it is quite simple: we create a small application inside the tests/testname directory and most of the time just capture the output on stdout. Currently no unit-testing framework is used; instead a simple application is built that mostly uses OSMO_ASSERT to assert the expectations. In case of a failure the application will abort and print a backtrace. This means that in case of a failure the stdout will not be as expected, the exit code will be wrong as well, and the testcase will be marked as FAILED.
The following will go through the details of enabling autotest in a project.
Enabling GNU autotest
The configure.ac file needs to get a line like this: AC_CONFIG_TESTDIR(tests). It needs to be put after the AC_INIT and AM_INIT_AUTOMAKE directives, and make sure AC_OUTPUT lists tests/atlocal. Roughly it could look like the snippet below.
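The package name, version and output file list in this sketch are just placeholders for the project's own values; only the AC_CONFIG_TESTDIR line and the tests/atlocal entry matter here.

AC_INIT([mypackage], [0.0.1], [bugs@example.org])
AM_INIT_AUTOMAKE([foreign])

AC_CONFIG_TESTDIR(tests)

AC_OUTPUT(
	Makefile
	tests/Makefile
	tests/atlocal)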
Integrating with the automake
The next thing is to define a testsuite inside the tests/Makefile.am. This is some boilerplate code that creates the testsuite and makes sure it is invoked as part of the build process.
# The `:;' works around a Bash 3.2 bug when the output is not writeable.
$(srcdir)/package.m4: $(top_srcdir)/configure.ac
	:;{ \
	echo '# Signature of the current package.' && \
	echo 'm4_define([AT_PACKAGE_NAME],' && \
	echo '  [$(PACKAGE_NAME)])' && \
	echo 'm4_define([AT_PACKAGE_TARNAME],' && \
	echo '  [$(PACKAGE_TARNAME)])' && \
	echo 'm4_define([AT_PACKAGE_VERSION],' && \
	echo '  [$(PACKAGE_VERSION)])' && \
	echo 'm4_define([AT_PACKAGE_STRING],' && \
	echo '  [$(PACKAGE_STRING)])' && \
	echo 'm4_define([AT_PACKAGE_BUGREPORT],' && \
	echo '  [$(PACKAGE_BUGREPORT)])'; \
	echo 'm4_define([AT_PACKAGE_URL],' && \
	echo '  [$(PACKAGE_URL)])'; \
	} >'$(srcdir)/package.m4'

EXTRA_DIST = testsuite.at $(srcdir)/package.m4 $(TESTSUITE)
TESTSUITE = $(srcdir)/testsuite
DISTCLEANFILES = atconfig

check-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' $(TESTSUITEFLAGS)

installcheck-local: atconfig $(TESTSUITE)
	$(SHELL) '$(TESTSUITE)' AUTOTEST_PATH='$(bindir)' $(TESTSUITEFLAGS)

clean-local:
	test ! -f '$(TESTSUITE)' || \
		$(SHELL) '$(TESTSUITE)' --clean

AUTOM4TE = $(SHELL) $(top_srcdir)/missing --run autom4te
AUTOTEST = $(AUTOM4TE) --language=autotest
$(TESTSUITE): $(srcdir)/testsuite.at $(srcdir)/package.m4
	$(AUTOTEST) -I '$(srcdir)' -o $@.tmp $@.at
	mv $@.tmp $@
Defining a testsuite
The next part is to define which tests will be executed. One needs to create a testsuite.at file with content like the one below:

AT_INIT
AT_BANNER([Regression tests.])

AT_SETUP([gsm0408])
cat $abs_srcdir/gsm0408/gsm0408_test.ok > expout
AT_CHECK([$abs_top_builddir/tests/gsm0408/gsm0408_test], , [expout], [ignore])
AT_CLEANUP

This will initialize the testsuite and create a banner. The lines between AT_SETUP and AT_CLEANUP represent one testcase. In there we copy the expected output from the source directory into a file called expout and then, inside the AT_CHECK directive, we specify what to execute and what to do with the output.
Executing a testsuite and dealing with failure
The testsuite will be automatically executed as part of make check and make distcheck. It can also be manually executed by entering the test directory and executing the following.
$ make testsuite
make: `testsuite' is up to date.
$ ./testsuite
## ---------------------------------- ##
## openbsc 0.13.0.60-1249 test suite. ##
## ---------------------------------- ##
1: gsm0408 ok
2: db ok
3: channel ok
4: mgcp ok
5: gprs ok
6: bsc-nat ok
7: bsc-nat-trie ok
8: si ok
9: abis ok
## ------------- ##
## Test results. ##
## ------------- ##
All 9 tests were successful.
In case of a failure the following information will be printed and can be inspected to understand why things went wrong.
2: db FAILED (testsuite.at:13)
## ------------- ##
## Test results. ##
## ------------- ##
ERROR: All 9 tests were run,
1 failed unexpectedly.
## -------------------------- ##
## testsuite.log was created. ##
## -------------------------- ##
Please send `tests/testsuite.log' and all information you think might help:
Subject: [openbsc 0.13.0.60-1249] testsuite: 2 failed
You may investigate any problem if you feel able to do so, in which
case the test suite provides a good starting point. Its output may
be found below `tests/testsuite.dir'.
You can go to tests/testsuite.dir and have a look at the failing tests. For each failing test there will be one directory that contains a log file about the run and the output of the application. We are using GNU autotest in libosmocore, libosmo-abis, libosmo-sccp, OpenBSC, osmo-bts and cellmgr_ng.
Last year Jacob and I worked on the osmo-sgsn of OpenBSC. We have improved the stability and reliability of the system and moved it to the next level. By adding the GSUP interface we are able to connect it to our commercial grade Smalltalk MAP stack and use it in a real world production GSM network. While working on and manually testing this stack we did not use our osmo-pcu software but another proprietary IP based BTS; after all we didn’t want to have to debug PCU issues at the same time.
This year Jacob has taken over as maintainer of the osmo-pcu. He started with a fix for a frequent crash (which was introduced due to us understanding the specification on TBF re-use better, but not the code), he has spent hours and hours reading the specification, studied the log output, fixed defect after defect and then moved on to features. We tried the software at this year’s Camp and fixed another round of reliability issues.
Some weeks ago I noticed that the proprietary IP based BTS has been moved from the desk into the shelf. In contrast to the proprietary BTS, issues now have a real chance of being resolved. It might take a long time, it might take paying another entity to do it, but in the end your system will run better. Free Software allows you to genuinely own and use the hardware you have bought!
Some Free Software projects have already moved to Github, some probably plan to, and the Python project will move soon. I have not followed the reasons for why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime, offer checkouts through ssh, git, http (good for corporate firewalls) and a subversion interface, they have integrated wiki and ticket management, the fork feature allows an upstream to discover what is being done to the software, and the pull requests and the integration with third party providers are great. The last item allows many nice things, especially integrating with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).
From a freedom point of view I think Gitlab is a lot worse than Github. They try to create the illusion that this is a Free Software alternative to Github.com, they offer to host your project, but if you want to have the same features for self hosting you will notice that you fell for their marketing. Their website prominently states “Runs GitLab Enterprise Edition”. If you have a look at the feature comparison between the “Community Edition” (the Free Software project) and their open core additions (the Enterprise Edition) you will notice that many of the extra features are essential.
So when deciding whether to put your project on github.com or gitlab.com, the question is not between proprietary and Free Software but essentially between proprietary and proprietary, and as such there is no difference.
The classic question in IT is whether to buy something existing or to build it from scratch. When wanting to buy an off-the-shelf HLR (that actually works), in most cases the customer will end up in a vendor lock-in:
- The vendor might require you to run it on hardware sold by that vendor. This might just be a Dell box with a custom front, or really custom hardware in a custom chassis, or it might even require you to install an entire rack. Either way you are tied to a single supplier.
- It might come with a yearly license (or support fee), and on top of that it might be dongled, so after a reboot the service might not start because the new license key has not been copied.
- The system might not export a configuration interface for what you want. Especially small MVNOs might have specific needs for roaming steering or multi IMSI, and you can be sure to pay a premium for these features (even if they are off-the-shelf extensions).
- There might be a design flaw in the protocol that you would like to mitigate, but the vendor will try to charge a premium from you, because the vendor can.
The alternative is to build a component from scratch, and the initial progress will be great as the technology is more manageable than many years ago. You will test against the live SS7 network, maybe even encode messages by hand, and things will appear to work, but only then will the fun start. How big is your test suite? Do you have tests for ITU Q.787? How will you do load-balancing and database failover? How do you track failures and performance? For many engineering companies this is a bit over their heads (one needs to know GSM MAP, ITU SCCP, SIGTRAN, ASN.1, TCAP).
But there is a third way and it is available today. Look for a Free Software HLR and give it a try. Check which features are missing and which you want, and develop them yourself or ask a company like sysmocom to implement them for you. Once you move the system into production, maybe find a support agreement that allows the company to continuously improve the software and respond to you quickly. The benefits for anyone looking for an HLR are obvious:
- You can run the component on any Linux/FreeBSD system. On physical hardware, on virtualized hardware, together with other services or isolated from other services. You decide.
- The software will always be yours. Once you have a running system, there will be nothing (besides time_t overflowing) that has been designed to fail (no license key expires).
- Independence of a single supplier. You can build a local team to maintain the software, you can find another supplier to maintain it.
- Built for change. Having access to the source code enables you to modify it, with a Free Software license you are allowed to run your modified versions as well.
The only danger is to make sure not to fall into the open core trap that surrounds many open source projects. Make sure that everything you need is available in source form and that you are allowed to run modified copies.
Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue or look at it systematically to improve your network, your roaming traffic, etc.
The first approach might be to run tcpdump on each of these systems, run it in a round-robin manner, compress the old traffic and then have a script that downloads/uploads it once a day to a central place. The issue is that each node needs to have enough disk space, you might not feel happy keeping old files on the edge, or you just don’t know when a good time to copy them is.
Another approach is to create an aggregation framework. A client will use libpcap to capture the traffic and then redirect it to a central server. The central server will then store the traffic and might rotate based on size or age of the file. Old files can then be compressed and removed.
I created the osmo-pcap tool many years ago and have recently fixed a 64bit PCAP header issue (the timeval in the header is 32bit), fixed the collection of jumbo frames, updated the README.md file of the project, created packages for Debian, Ubuntu, CentOS, OpenSUSE and SLES, and made sure that it can be compiled and used on FreeBSD 10 as well.
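The 64bit issue mentioned above stems from the pcap file format itself: each record header stores the timestamp as two 32bit values (seconds and microseconds), no matter how large the host's struct timeval is. This is only a small illustrative sketch of that record header, not the actual osmo-pcap code, and it assumes a little-endian capture file.

import struct

# Classic pcap per-record header: ts_sec, ts_usec, incl_len, orig_len (all uint32).
RECORD_HDR = struct.Struct("<IIII")   # "<" assumes a little-endian capture file

def pack_record_header(ts_sec, ts_usec, incl_len, orig_len):
    # Truncate the seconds field to 32 bits, as the file format requires.
    return RECORD_HDR.pack(ts_sec & 0xffffffff, ts_usec, incl_len, orig_len)

def unpack_record_header(buf):
    return RECORD_HDR.unpack(buf[:RECORD_HDR.size])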
If you are using or decided not to use this software I would be very happy to hear about it.
Berlin continues to gain a lot of popularity, culturally and culinarily it is an awesome place and besides increasing rents it still remains more affordable than other cities. In terms of economy Berlin attracts new companies and branches/offices as well. At the same time I felt the itch and it was time to leave my home town once again. In the end I settled for the bicycle friendly (and sometimes sunny) city of Amsterdam.
My main interest remains building reliable systems with Smalltalk, C/C++, Qt and learn new technology (Tensorflow? Rust? ElasticSearch, Mongo, UUCP) and talk about GSM (SCCP, SIGTRAN, TCAP, ROS, MAP, Diameter, GTP) or get re-exposed to WebKit/Blink.
If you are in Amsterdam or if you know people or companies I am happy to meet and make new contacts.
In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework to build a full web application.
You might wonder why one would want to build such a thing in a statically and compiled language instead of something more dynamic. There are a few reasons for it:
- Performance: The application is intended to run on our sysmoBTS GSM Basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, low amount of RAM, etc.) and at the same time still perfectly fine to run a GSM network.
- Interface: For GSM we have various libraries with a C programming interface and they are easy to consume from C++.
- Compilation/Distribution: By (cross-)building the application there is a “single” executable and we don’t have the dependency mess of Ruby.
The second decision was to not use Tufao and to search for a framework that has user management and a template/rendering/canvas engine built in. At the Chaos Communication Camp in 2007 I remember having heard a conversation about a “Qt for the Web” (Wt, the C++ Web Toolkit) and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework.
I am really surprised that this project has existed for so long without me having heard about it. It is using/built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers and the template engine, and it resembles Ruby on Rails a lot. It has two template engines, routing of URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I was looking for regarding my “advanced” topics. Because of my own stupidity I ended up single stepping through the code, and a Qt coder should feel right at home.
My favorite features:
- tspawn model TableName will autogenerate (and update) a C++ model based on the table in the database. The updating works well.
- The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When using the -r option the application will respawn itself. At first I thought I would not like it, but it improves round trip times.
- C++ in the template. The ERB template is parsed, a C++ class is generated, and its ::toString() method generates the HTML code. So in case something goes wrong, it is very easy to inspect.
If you are currently using Ruby on Rails or Django but would like to do it in C++, have a look at TreeFrog. I really like it so far.
As part of the Osmocom.org software development we have a Jenkins set-up that executes unit and system tests. For OpenBSC we compile the software, then execute the unit tests and finally run a bunch of system tests. The system tests verify making configuration changes through the telnet interface and the machine control interface, might try to connect to other parts, etc.
In the past this was executed after a committer had pushed his changes to the repository and the build time didn’t matter. As part of the move to the Gerrit code review we execute them before, and this means that people might need to wait for the result… (and waiting for a computer shouldn’t be necessary these days).
sysmocom is renting a dedicated build machine to speed up compilation, and I have looked at how to execute the system tests in parallel. The issue is that during a system test we bind to ports on localhost, and that means we can not have two test runs at the same time.
I decided to use the Linux network namespace support and opted for using docker to achieve it. There are some hiccups, but in general it is a great step forward. Using a statement like the following we execute our CI script in a clean environment.
$ docker run --rm=true -e HOME=/build -w /build -i -u build -v $PWD:/build osmocom:amd64 /build/contrib/jenkins.sh
As part of the OpenBSC build we are re-building dependencies, and thanks to building in the virtual /build directory we can look at archiving libosmocore/libosmo-sccp/libosmo-abis and not rebuilding them all the time.
July 23, 2016
At sysmocom we maintain a webshop with various
smaller items and accessories interesting to the Osmocom community as well as the
wider community of people experimenting (aka 'playing') with cellular
communications infrastructure. As this is primarily a service to the community
and not our main business, I'm always interested in ways to reduce the amount of
time our team has to spend in order to operate the webshop.
In order to make the shipping process more efficient, I discovered that
Deutsche Post is offering a Web API based on SOAP+WSDL which can be used to generate franking
for the (registered) letters that we ship around the world with our products.
The most interesting part of this is that you can generate combined address +
franking labels. As address labels need to be printed anyway, there is little
impact on the shipping process beyond having to use this API to generate the
right franking for the particular shipment.
Given the general usefulness of such an online franking process, I would have
assumed that virtually anyone operating some kind of shop that regularly mails
letters/products would use it and hence at least one of those users would have
already written some free / open source software code for it. To my big
surprise, I could not find any FOSS implementation of this API.
If you know me, I'm the last person to know anything about web technology
beyond HTML 4 which was the latest upcoming new thing when I last did anything
web related ;)
Nevertheless, using the python-zeep module, it was fairly easy to
interface the web service. The weirdest part is the custom signature algorithm
that they use to generate some custom soap headers. I'm sure they have their reasons.
Today I hence present the python-inema project, a python module for accessing
this Internetmarke API.
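For the curious, the snippet below shows roughly how python-zeep talks to any SOAP+WSDL
service in general; the WSDL location and operation name are placeholders and not the
actual Internetmarke API calls (those, together with the signed SOAP headers, are what
python-inema wraps for you).

from zeep import Client

# Placeholder WSDL; the real Internetmarke WSDL is handled inside python-inema.
client = Client("https://example.org/service?wsdl")

# zeep generates python bindings for every operation described in the WSDL,
# so a (hypothetical) operation can be called like a normal python function:
result = client.service.SomeOperation(someParameter="value")
print(result)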
Please note that while I'm fluent in Pascal, Perl, C and Erlang, programming in Python
doesn't yet come naturally to me. So if you have any
comments/feedback/improvements, they're most welcome by e-mail, including any patches.
Based on some encouragement from friends as well as my desire to find
more time again to hang out at community events, I decided to attend
Electromagnetic Field 2016 held in
Guildford, UK from August 5th through 7th.
As I typically don't like just attending an event without contributing
to it in some form, I submitted a couple of talks / workshops, all of which were accepted:
- An overview talk about the Osmocom project
- A Workshop on running your own cellular network using OpenBSC and related Osmocom software
- A Workshop on tracing (U)SIM card communication using Osmocom SIMtrace
I believe the detailed schedule is still in the works, as I haven't yet
been able to find one on the event website.
Looking forward to having a great time at EMF 2016. After attending
Dutch and German hacker camps for almost 20 years, let's see how the
Brits go about it!
In private conversation, Holger mentioned EC-GSM-IoT to me, and I had to
dig a bit into it. It was introduced in Release 13, but if you do a web
search for it, you find surprisingly little information beyond press
releases with absolutely zero information content and no "further reading" pointers.
The primary reason for this seems to be that the feature was called
EC-EGPRS until the very late stages, when it was renamed for - believe
it or not - marketing reasons.
So when searching for the right term, you actually find specification
references and change requests in the 3GPP document archives.
I tried to get a very brief overview, and from what I could find, it is
centered around GERAN extension in the following ways:
- EC-EGPRS goal: Improve coverage by 20dB
- New single-burst coding schemes
- Blind Physical Layer Repetitions where bursts are repeated up to 28 times without feedback from remote end
- transmitter maintains phase coherency
- receiver uses processing gain (like incremental redundancy?)
- New logical channel types (EC-BCCH, EC-PCH, EC-AGCH, EC-RACH, ...)
- New RLC/MAC layer messages for the EC-PDCH communication
- Power Efficient Operation (PEO)
- Introduction of eDRX (extended DRX) to allow for PCH listening
intervals from minutes up to an hour
- Relaxed Idle Mode: Important to camp on a cell, not best cell.
Reduces neighbor cell monitoring requirements
In terms of required modifications to an existing GSM/EDGE
implementation, there will be (at least):
- changes to the PHY layer regarding new coding schemes, logical
channels and burst scheduling / re-transmissions
- changes to the RLC/MAC layer in the PCU to implement the new EC
specific message types and procedures
- changes to the BTS and BSC in terms of paging in eDRX
In case you're interested in more pointers on technical details, check
out the links provided at https://osmocom.org/issues/1780
It remains to be seen how widely this will be adopted. Rolling this
change out on modern base station hardware seems technically simple - but
it remains to be seen how many equipment makers implement it, and at
what cost to the operators. But I think the key issue is whether or not
the baseband chipset makers (Intel, Qualcomm, Mediatek, ...) will
implement it anytime soon on the device side.
There are no plans on implementing any of this in the Osmocom stack as
of now, but in case anyone is interested in working on this, feel free
to contact us on the email@example.com mailing list.
July 16, 2016
Some topics keep coming back, even a number of years after first having
worked on them. And then you start to search online using your favorite
search engine - and find your old posts
on that subject are the most comprehensive publicly available
information on the subject ;)
Back in 2011, I was working on some very basic support for Ericsson
RBS2xxx GSM BTSs in OpenBSC. The major part of this was to find out the
weird dynamic detection of the signalling timeslot, as well as the fully
non-standard OM2000 protocol for OML. Once it reached the state of a
'proof-of-concept', work on this ceased and remained in a state where
still lots of manual steps were involved in BTS bring-up.
I've recently picked this topic up again, resulting in some
work-in-progress code in
Beyond classic E1 based A-bis support, I've also been looking (again) at
Ericsson Packet Abis. Packet Abis is their understanding of Abis over
IP. However, it is - again - much further from the 3GPP specifications
than what we're used to in the Osmocom universe. Abis/IP as we know it consists of:
- RSL and OML over TCP (inside an IPA multiplex)
- RTP streams for the user plane (voice)
- Gb over IP (NS over UDP/IP), as the PCU is in the BTS.
In the Ericsson world, they took a much lower-layer approach and decided to
- start with L2TP over IP (not the L2TP over UDP that many people know from VPNs)
- use the IETF-standardized Pseudowire type for HDLC but use a frame
format in violation of the IETF RFCs
- Talk LAPD over L2TP for RSL and OML
- Invent a new frame format for voice codec frames called TFP and feed
that over L2TP
- Invent a new frame format for the PCU-CCU communication called P-GSL
and feed that over L2TP
I'm not yet sure if we want to fully support that protocol stack from
OpenBSC and related projects, but in any case I've extended wireshark to
decode such protocol traces properly by
- Extending the L2TP dissector with Ericsson specific AVPs
- Improving my earlier packet-ehdlc.c with a better understanding of the frame format
- Implementing a new TFP dissector from scratch
- Implementing a new P-GSL dissector from scratch
The resulting work can be found at http://git.osmocom.org/wireshark/log/?h=laforge/ericsson-packet-abis
in case anyone is interested. I've mostly been working with protocol
traces from RBS2409 so far, and they are decoded quite nicely for RSL,
OML, Voice and Packet data. As far as I know, the format of the STN /
SIU of other BTS models is identical.
Is anyone out there in possession of Ericsson RBS2xxx RBSs interested in
collaboration on either a Packet Abis implementation, or an interface of
the E1 or packet based CCU-PCU interface to OsmoPCU?
June 06, 2016
In recent days, various public allegations have been brought forward
against Jacob Appelbaum. The allegations rank from plagiarism to sexual
assault and rape.
I find it deeply disturbing that the alleged victims are putting up the
effort of a quite slick online campaign to defame Jake's name, using
a domain name consisting of only his name and virtually any picture you
can find online of him from the last decade, and - to a large extent -
hide in anonymity.
I'm upset about this not because I happen to know Jake personally
for many years, but because I think it is fundamentally wrong to bring
up those accusations in such a form.
I have no clue what is the truth or what is not the truth. Nor does
anyone else who has not experienced or witnessed the alleged events
first hand. I'd hope more people would think about that before
commenting on this topic one way or another on Twitter, in their blogs,
on mailing lists, etc. It doesn't matter what we believe, hypothesize
or project based on a personal like or dislike of either the person
accused or of the accusers.
We don't live in the middle ages, and we have given up on the pillory
for a long time (and the pillory was used after a judgement, not
before). If there was illegal/criminal behavior, then our societies
have a well-established and respected procedure to deal with such: It
is based on laws, legal procedure and courts.
So if somebody has a claim, they can and should seek legal support
and bring those claims forward to the competent authorities, rather than
starting what very easily looks like a smear campaign (whether it is one or not).
Please don't get me wrong: I have the deepest respect and sympathies for
victims of sexual assault or abuse - but I also have a deep respect for
the legal foundation our societies have built over hundreds of years,
and its principles, including the human right of "presumption of innocence".
No matter who has committed which type of crime, everyone deserves to
receive a fair trial, and they are innocent until proven guilty.
I believe nobody deserves such a public defamation campaign, nor does
anyone have the authority to sentence such a verdict, not even a court
of law. The Pillory was abandoned for good reasons.
June 01, 2016
I’m currently working on the Vaani project at Mozilla, and part of my work on that allows me to do some exploration around the topic of speech recognition and speech assistants. After looking at some of the commercial offerings available, I thought that if we were going to do some kind of add-on API, we’d be best off aping the Amazon Alexa skills JS API. Amazon Echo appears to be doing quite well and people have written a number of skills with their API. There isn’t really any alternative right now, but I actually happen to think their API is quite well thought out and concise, and maps well to the sort of data structures you need to do reliable speech recognition.
So skipping forward a bit, I decided to prototype with Node.js and some existing open source projects to implement an offline version of the Alexa skills JS API. Today it’s gotten to the point where it’s actually usable (for certain values of usable) and I’ve just spent the last 5 minutes asking it to tell me Knock-Knock jokes, so rather than waste any more time on that, I thought I’d write this about it instead. If you want to try it out, check out this repository and run npm install in the usual way. You’ll need pocketsphinx installed for that to succeed (install sphinxbase and pocketsphinx from github), and you’ll need espeak installed and some skills for it to do anything interesting, so check out the Alexa sample skills and sym-link the ‘samples‘ directory as a directory called ‘skills‘ in your ferris checkout directory. After that, just run the included example file with node and talk to it via your default recording device (hint: say ‘launch wise guy‘).
Hopefully someone else finds this useful – I’ll be using this as a base to prototype further voice experiments, and I’ll likely be extending the Alexa API further in non-standard ways. What was quite neat about all this was just how easy it all was. The Alexa API is extremely well documented, Node.js is also extremely well documented and just as easy to use, and there are tons of libraries (of varying quality…) to do what you need to do. The only real stumbling block was pocketsphinx’s lack of documentation (there’s no documentation at all for the Node bindings and the C API documentation is pretty sparse, to say the least), but thankfully other members of my team are much more familiar with this codebase than I am and I could lean on them for support.
I’m reasonably impressed with the state of lightweight open source voice recognition. This is easily good enough to be useful if you can limit the scope of what you need to recognise, and I find the Alexa API is a great way of doing that. I’d be interested to know how close the internal implementation is to how I’ve gone about it if anyone has that insider knowledge.
Back in late April, the well-known high-quality SDR hardware company
Nuand published a blog post about an Open Source Release of a VHDL ADS-B
I was quite happy at that time about this, and bookmarked it for further
investigation at some later point.
Today I actually looked at the source code, and more by coincidence
noticed that the LICENSE file contains a
license that is anything but Open Source: The license is a "free for
evaluation only" license, and it is only valid if you run the code on an
actual Nuand board.
Both of the above are clearly not compatible with any
of the well-known and respected definitions of Open Source, particularly
not the official Open Source Definition of the Open Source Initiative.
I cannot even begin to describe how much this makes me upset. This is once again
openwashing, where something that clearly is not Free or Open Source
Software is labelled and marketed as such.
I don't mind if an author chooses to license his work under a
proprietary license. It is his choice to do so under the law, and it
generally makes such software utterly unattractive to me. If others
still want to use it, it is their decision. However, if somebody
produces or releases non-free or proprietary software, then they should
make that very clear and not mis-represent it as something that it is not.
Open-washing only confuses everyone, and it tries to market the
respective company or product in a light that it doesn't deserve. I
believe the proper English proverb is to adorn oneself with borrowed plumes.
I strongly believe the community must stand up against such practice and
clearly voice that this is not something generally acceptable or
tolerated within the Free and Open Source software world. It's sad that
this is happening more frequently, like recently with OpenAirInterface
(see related blog post).
I will definitely write an e-mail to Nuand management requesting to
correct this mis-representation. If you agree with my posting, I'd
appreciate it if you would contact them, too.
May 27, 2016
I gave a keynote at the Black Duck Korea Open Source event
yesterday, and I'd like to share some thoughts about it.
In terms of the content, I spoke about the fact that the ultimate
goal/wish/intent of free software projects is to receive contributions
and for all of the individual and organizational users to join the
collaborative development process. However, that's just the intent, and
it's not legally required.
Due to GPL enforcement work, a lot of attention has been created over the
past ten years in the corporate legal departments on how to comply with
FOSS license terms, particularly copyleft-style licenses like GPLv2 and GPLv3.
License compliance ensures the absolute bare legal minimum on engaging
with the Free Software community. While that is legally sufficient, the
community actually wants to have all developers join the collaborative
development process, where the resources for development are
contributed and shared among all developers.
So I think if we had more contribution and a more fair distribution of
the work in developing and maintaining the related software, we would
not have to worry so much about legal enforcement of licenses.
However, in the absence of companies being good open source citizens,
pulling out the legal baton is all we can do to at least require them to
share their modifications at the time they ship their products. That
code might not be mergeable, or it might be outdated, so its value
might be less than we would hope for, but it is a beginning.
Now some people might be critical of me speaking at a Black Duck Korea
event, where Black Duck is a company selling (expensive!) licenses to
proprietary tools for license compliance. Thereby, speaking at such an
event might be seen as an endorsement of Black Duck and/or proprietary
software in general.
Honestly, I don't think so. If you've ever seen a Black Duck Korea
event, then you will notice there is no marketing or sales booth, and
that there is no sales pitch on the conference agenda. Rather, you have
speakers with hands-on experience in license compliance either from a
community point of view, or from a corporate point of view, i.e. how
companies are managing license compliance processes internally.
Thus, the event is not a sales show for proprietary software, but an
event that brings together various people genuinely interested in
license compliance matters. The organizers very clearly understand that
they have to keep that kind of separation. So it's actually more like a
community event, sponsored by a commercial entity - and that in turn is
true for most technology conferences.
So I have no ethical problems with speaking at their event. People who
know me, know that I don't like proprietary software at all for ethical
reasons, and avoid it personally as far as possible. I certainly don't
promote Black Duck's products. I promote license compliance.
Let's look at it like this: If companies building products based on
Free Software think they need software tools to help them with license
compliance, and they don't want to develop such tools together in a
collaborative Free Software project themselves, then that's their
decision to take. To state using words of Rosa Luxemburg:
Freedom is always the freedom of those who think differently
I may not like that others want to use proprietary software, but if they
think it's good for them, it's their decision to take.
May 26, 2016
Have you ever used mobile data on your phone, or used tethering?
In packet-switched cellular networks (aka mobile data) from GPRS to
EDGE, from UMTS to HSPA and all the way into modern LTE networks, there
is a tunneling protocol called GTP (GPRS Tunneling Protocol).
This was the first cellular protocol that involved transport over
TCP/IP, as opposed to all the ISDN/E1/T1/FrameRelay world with their
weird protocol stacks. So it should have been something super easy to
implement on and in Linux, and nobody should have had a reason to run a
proprietary GGSN, ever.
However, the cellular telecom world lives in a different universe, and to
this day it is safe to assume that all production GGSNs are
proprietary hardware and/or software :(
In 2002, Jens Jakobsen at Mondru AB released the initial version of
OpenGGSN, a userspace
implementation of this tunneling protocol and the GGSN network element.
Development however ceased in 2005, and we at the Osmocom project
thus adopted OpenGGSN maintenance in 2016.
Having a userspace implementation of any tunneling protocol of course
only works for relatively low bandwidth, due to the scheduling and
memory-copying overhead between kernel, userspace, and kernel again.
So OpenGGSN might have been useful for early GPRS networks where the
maximum data rate per subscriber is in the hundreds of kilobits, but it
certainly is not possible for any real operator, particularly not at
today's data rates.
That's why for decades, all commonly used IP tunneling protocols have
been implemented inside the Linux kernel, which has some tunneling
infrastructure used with tunnels like IP-IP, SIT, GRE, PPTP, L2TP and others.
But then again, the cellular world lives in a universe where Free and
Open Source Software didn't exist until OpenBTS and OpenBSC changed all of
that from 2008 onwards. So nobody ever bothered to add GTP support to
the in-kernel tunneling framework.
In 2012, I started an in-kernel implementation of GTP-U (the user
plane with actual user IP data) as part of my work at sysmocom. My former netfilter colleague and current
netfilter core team leader Pablo Neira was contracted to bring it
further along, but unfortunately the customer project funding the effort
was discontinued, and we didn't have time to complete it.
Luckily, in 2015 Andreas Schultz of Travelping came around and has forward-ported the old
code to a more modern kernel, fixed the numerous bugs and started to
test and use it. He also kept pushing Pablo and me for review and
submission, thanks for that!
Finally, in May 2016, the code was merged into the mainline kernel,
and now every upcoming version of the Linux kernel will have a fast and
efficient in-kernel implementation of GTP-U. It is configured via
netlink from userspace, where you are expected to run a corresponding
daemon for the control plane, such as either OpenGGSN, or the new GGSN +
PDN-GW implementation in Erlang called erGW.
You can find the kernel code at drivers/net/gtp.c,
and the userspace netlink library code (libgtpnl) at git.osmocom.org.
I haven't done actual benchmarking of the performance that you can get
on modern x86 hardware with this, but I would expect it to be similar to
what you can also get from other, similar in-kernel tunneling implementations.
Now that the cellular industry has failed for decades to realize how
easy it is and how little effort would have been needed to have a fast and
inexpensive GGSN around, let's see if now that other people did it for
them, there will be some adoption.
If you're interested in testing or running a GGSN or PDN-GW and become
an early adopter, feel free to reach out to Andreas, Pablo and/or me.
The osmocom-net-gprs mailing list might be a good way to discuss further development and/or testing.
May 21, 2016
According to some news reports, including this report at softpedia,
a 26 year old student at the Faculty of Criminal Justice and Security in
Maribor, Slovenia has received a suspended prison sentence for finding
flaws in the Slovenian police and army TETRA network using OsmocomTETRA.
As the Osmocom project leader and main author of OsmocomTETRA, this
is highly disturbing news to me. OsmocomTETRA was precisely developed
to enable people to perform research and analysis in TETRA networks, and
to audit their safe and secure configuration.
If a TETRA network (like any other network) is configured with broken
security, then the people responsible for configuring and operating that
network are to be blamed, and not the researcher who invests his
personal time and effort into demonstrating that police radio
communications safety is broken. On the outside, the court sentence
really sounds like "shoot the messenger". They should instead have
jailed the people responsible for deploying such an insecure network in
the first place, as well as those responsible for not doing the most
basic air-interface interception tests before putting such a network into production.
According to all reports, the student had shared the results of his
research with the authorities and there are public detailed reports from
2015, like the report (in Slovenian) at
The statement that he should have asked the authorities for permission
before starting his research is moot. I've seen many such cases and you
would normally never get permission to do this, or you would most
likely get no response from the (in)competent authorities in the first place.
From my point of view, they should give the student a medal of honor,
instead of sentencing him. He has provided a significant service to the
security of the public sector communications in his country.
To be fair, the news report also indicates that there were other charges
involved, like impersonating a police officer. I can of course not
comment on those.
Please note that I do not know the student or his research first-hand,
nor did I know any of his actions or was involved in them. OsmocomTETRA
is a Free / Open Source Software project available to anyone in source
code form. It is a vital tool in demonstrating the lack of security in
many TETRA networks, whether networks for public safety or private use.
May 01, 2016
Right now I'm feeling sad. I really shouldn't, but I still do.
Many years ago I started OpenBSC and Osmocom in order to bring Free
Software into an area where it barely existed before: Cellular
Infrastructure. For the first few years, it was "just for fun", without
any professional users. A FOSS project by enthusiasts. Then we got
some commercial / professional users, and with them funding, paying for
e.g. Holger's and my freelance work. Still, implementing all protocol
stacks, interfaces and functional elements of GSM and GPRS from the
radio network to the core network is something that large corporations
typically spend hundreds of man-years on. So funding for Osmocom GSM
implementations was always short, and we always tried to make the best
out of it.
After Holger and I started sysmocom in 2011, we had a chance to use
funds from BTS sales to hire more developers, and we were growing our
team of developers. We finally could pay some developers other than
ourselves from working on Free Software cellular network infrastructure.
In 2014 and 2015, sysmocom got side-tracked with some projects where
Osmocom and the cellular network was only one small part of a much
larger scope. In Q4/2015 and in 2016, we are back on track with
focussing 100% on Osmocom projects, which you can probably see by a lot
more associated commits to the respective project repositories.
By now, we are in the lucky situation that the work we've done in the
Osmocom project on providing Free Software implementations of cellular
technologies like GSM, GPRS, EDGE and now also UMTS is receiving a lot
of attention. This attention translates into companies approaching us
(particularly at sysmocom) regarding funding for implementing new
features, fixing existing bugs and short-comings, etc. As part of that,
we can even work on much needed infrastructural changes in the software.
So now we are in the opposite situation: There's a lot of interest in
funding Osmocom work, but there are few people in the Osmocom community
interested and/or capable to follow-up to that. Some of the early
contributors have moved into other areas, and are now working on
proprietary cellular stacks at large multi-national corporations. Some
others think of GSM as a fun hobby and want to keep it that way.
At sysmocom, we are trying hard to do what we can to keep up with the
demand. We've been looking to add people to our staff, but right now we
are struggling only to compensate for the regular fluctuation of
employees (i.e. keep the team size as is), let alone actually adding new
members to our team to help move free software cellular networks ahead.
I am struggling to understand why that is. I think Free Software in
cellular communications is one of the most interesting and challenging
frontiers for Free Software to work on. And there are many FOSS
developers who love nothing more than to conquer new areas of technology.
At sysmocom, we can now offer what would have been my personal dream job
for many years:
- paid work on Free Software that is available to the general public,
rather than something only of value to the employer
- interesting technical challenges in an area of technology where you
will not find the answer to all your problems on stackoverflow or the like
- work in a small company consisting almost entirely of die-hard
engineers, without corporate managers, marketing departments, etc.
- work in an environment free of Microsoft and Apple software or cloud
services; use exclusively Free Software to get your work done
I would hope that more developers would appreciate such an environment.
If you're interested in helping FOSS cellular networks ahead, feel free
to have a look at http://sysmocom.de/jobs or contact us at
firstname.lastname@example.org. Together, we can try to move Free Software for mobile
communications to the next level!
March 27, 2016
This is great news: You can now install a GSM network using apt-get!
Thanks to the efforts of Debian developer Ruben Undheim, there's now
an OpenBSC (with all its flavors like OsmoBSC, OsmoNITB, OsmoSGSN,
...) package in the official Debian repository.
Here is the link to the e-mail indicating acceptance into Debian:
I think for the many years of working on the OpenBSC (and wider Osmocom)
projects I always assumed that distribution packaging is not really
something all that important, as all the people using OpenBSC surely
would be technical enough to build it from the source. And in fact, I
believe that building from source brings you one step closer to
actually modifying the code, and thus contribution.
Nevertheless, the project has matured to a point where it is not used
only by developers anymore, and particularly also (god beware) by
people with limited experience with Linux in general. That such
people still exist is surprisingly hard to realize for somebody like
myself who has spent more than 20 years in Linux land by now.
So all in all, today I think that having packages in a Distribution
like Debian actually is important for the further adoption of the
project - pretty much like I believe that more and better public documentation is important, too.
Looking forward to seeing the first bug reports reported through
bugs.debian.org rather than https://projects.osmocom.org/ . Once that
happens, we know that people are actually using the official Debian packages.
As an unrelated side note, the Osmocom project now also has nightly
builds available for Debian 7.0, Debian 8.0 and Ubuntu 14.04 on both
i586 and x86_64 architectures. The
nightly builds are for people who want to stay on the bleeding edge of
the code, but who don't want to go through building everything from
scratch. See Holger's post on the openbsc mailing list
for more information.
March 14, 2016
While preparing my presentation for the Troopers 2016 TelcoSecDay
I was thinking once again about the importance of having FOSS
implementations of cellular protocol stacks, interfaces and network
elements in order to enable security researchers (aka hackers) to work on
improving security in mobile communications.
From the very beginning, this was the motivation of creating OpenBSC and
OsmocomBB: To enable more research in this area, to make it at least in
some ways easier to work in this field. To close a little bit of the
massive gap on how easy it is to do applied security research (aka
hacking) in the TCP/IP/Internet world vs. the cellular world.
We have definitely succeeded in that. Many people have successfully used the
various Osmocom projects in order to do cellular security research, and
I'm very happy about that.
However, there is a back-side to that, which I'm less happy about. In
those past eight years, we have not managed to attract a significant
amount of contributions to the Osmocom projects from those people that
benefit most from it: Neither from those very security researchers that
use it in the first place, nor from the Telecom industry as a whole.
I can understand that the large telecom equipment suppliers may think
that FOSS implementations are somewhat a competition and thus might not
be particularly enthusiastic about contributing. However, the story for
the cellular operators and the IT security crowd is definitely quite
different. They should have no good reason not to contribute.
So as a result of that, we still have a relatively small number of
people contributing to Osmocom projects, which is a pity. They can
currently be divided into two groups:
- the enthusiasts: People contributing because they are enthusiastic
about cellular protocols and technologies.
- the commercial users, who operate 2G/2.5G networks based on the
Osmocom protocol stack and who either contribute directly or fund
development work at sysmocom. They typically operate small/private
networks, so if they want data, they simply use Wifi. There's thus
not a big interest or need in 3G or 4G technologies.
On the other hand, the security folks would love to have 3G and 4G
implementations that they could use to talk to either mobile devices
over a radio interface, or towards the wired infrastructure components
in the radio access and core networks. But we don't see significant
contributions from that sphere, and I wonder why that is.
At least that part of the IT security industry that I know typically
works with very comfortable budgets and profit rates, and investing in
better infrastructure/tools is not charity anyway, but an actual
investment into working more efficiently and/or extending the possible
scope of related pen-testing or audits.
So it seems we might want to think about what we could do in order to motivate
such interested potential users of FOSS 3G/4G to contribute to it by
either writing code or funding associated developments...
If you have any thoughts on that, feel free to share them with me by
e-mail to email@example.com.