January 22, 2017
A few days ago, Autodesk announced
that the popular EAGLE electronics design automation (EDA) software is
moving to a subscription-based model.
When previously you paid once for a license and could use that
version/license as long as you wanted, there now is a monthly
subscription fee. Once you stop paying, you lose the right to use the
software. Welcome to the brave new world.
I have remotely observed this subscription model as a general trend in
the proprietary software universe. So far it hasn't affected me at all,
as the only two proprietary applications I have used on a regular basis
during the last decade are IDA and EAGLE.
I already have ethical issues with using non-free software, but those
two cases have been the exceptions, in order to get to the productivity
required by the job. While I can somehow convince my conscience in
those two cases that it's OK - using software under a subscription model is
completely out of the question, period. Not only would I end up paying
for the rest of my professional career in order to be able to open and
maintain old design files, but I would also have to accept software that
"calls home" and has "remote kill" features. This is clearly not
something I would ever want to use on any of my computers. Also, I
don't want software to be associated with any account, and it's not the
bloody business of the software maker to know when and where I use my software.
For me - and I hope for many, many other EAGLE users - this move is
utterly unacceptable and certainly marks the end of any business between
the EAGLE makers and myself and/or my companies. I will happily use
my current "old-style" EAGLE 7.x licenses for the near future, and then
see what kind of improvements I would need to contribute to KiCAD or
other FOSS EDA software in order to eventually migrate to those.
As expected, this upsets not only me, but many other customers, some
of whom have been loyal to using EAGLE for many years if not decades,
back to the DOS version. This is reflected by some media reports (like
this one at hackaday)
and user posts at element14.com or eaglecentral.ca,
which are similarly critical of this move.
Rest in Peace, EAGLE. I hope Autodesk gets what they deserve: A new
influx of migrations away from EAGLE into the direction of Open Source
EDA software like KiCAD.
In fact, the more I think about it, the more I'm inclined to
work on good FOSS migration tools / converters - not only for my own
use, but to help more people move away from EAGLE. It's not that I
don't have enough projects at hand already, but at least I'm
motivated to do something about this betrayal by Autodesk. Let's see
what (if any) will come out of this.
So let's see it this way: What Autodesk is doing is raising the level
of pain of using EAGLE so high that more people will use and contribute
to FOSS EDA software. And that is actually a good thing!
December 30, 2016
I've just had the pleasure of attending all four days of 33C3 and have returned
home with somewhat mixed feelings.
I've been a regular visitor and speaker at CCC events since 15C3 in
1998, which among other things
means I'm an old man now. But I digress ;)
The event has come extremely far in those years. And to be honest, I
struggle with the size. Back then, it was a meeting of like-minded
hackers. You had the feeling that you knew a significant portion of the
attendees, and it was easy to connect to fellow hackers.
These days, both the number of attendees and the size of the event make
you feel that you're among the general public, rather than at some
meeting of fellow hackers. Yes, it is good to see that more people are
interested in what the CCC (and the selected speakers) have to say, but
somehow it comes at the price that I (and I suspect other old-timers)
feel less at home. It feels too much like various other technology events.
One aspect creating a certain feeling of estrangement is also the venue
itself. There are an incredible number of rooms, with a labyrinth of
hallways, stairs, lobbies, etc. The size of the venue simply makes it
impossible to just _accidentally_ run into all of your fellow
hackers and friends. If I want to meet somebody, I have to make an
explicit appointment. That is an option that exists for most of the rest of
the year, too.
While fefe is happy about the many small children attending
the event, to me this seems
somewhat alien and possibly inappropriate. I guess from teenage years
onward it certainly makes sense, as they can follow the talks and
participate in the workshops. But below that age?
The range of topics covered at the event is also becoming wider, or at
least that's how it feels to me. Topics like IT security, data protection, privacy,
intelligence/espionage and learning about technology have always been
present during all those years. But these days we have bloggers sitting
on stage and talking about bottles of wine (seriously?).
Contrary to many, I also really don't get the excitement about shows
like 'Methodisch Inkorrekt'. Seems to me like mainstream
compatible entertainment in the spirit of the 1990s Knoff Hoff Show, without much
potential to make the audience want to dig deeper into (information) technology.
Yesterday, together with Holger 'zecke' Freyther, I co-presented at 33C3 about
Dissecting modern (3G/4G) cellular modems.
This presentation covers some of our recent explorations into a specific
type of 3G/4G cellular modems, which next to the regular modem/baseband
processor also contain a Cortex-A5 core that (unexpectedly) runs Linux.
We want to use such modems for building self-contained M2M devices that
run the entire application inside the modem itself, without any external
needs except electrical power, SIM card and antenna.
Next to that, they also pose an ideal platform for testing the Osmocom
network-side projects for running GSM, GPRS, EDGE, UMTS and HSPA networks.
You can find the Slides
and the Video recordings
in case you're interested in more details about our work.
The results of our reverse engineering can be found in the wiki at
http://osmocom.org/projects/quectel-modems/wiki together with links to
the various git repositories containing related tools.
As with all the many projects that I happen to end up doing, it would be
great to get more people contributing to them. If you're interested in
cellular technology and want to help out, feel free to register at the
osmocom.org site and start adding/updating/correcting information in the wiki.
You can e.g. help by
- playing with the modem and documenting your findings
- reviewing the source code released by Qualcomm + Quectel and
documenting your findings
- help us to create a working OE build with our own kernel and rootfs
images as well as opkg package feeds for the modems
- help reverse engineering DIAG and QMI protocols as well as the open
source programs to interact with them
December 29, 2016
In 2016, Osmocom gained initial 3.5G support with osmo-iuh and the Iu
interface extensions of our libmsc and OsmoSGSN code. This means you can run
your own small open source 3.5G cellular network for SMS, Voice and Data services.
However, the project needs more contributors: Become an active
member in the Osmocom development community and get your nano3G
femtocell for free.
I'm happy to announce that my company sysmocom hereby issues a call for
proposals to the general public. Please describe in a short proposal
how you would help us improve the Osmocom project if you were to
receive one of those free femtocells.
Details of this proposal can be found at
Please contact mailto:email@example.com in case of any questions.
December 16, 2016
When you work with GSM/cellular systems, the definitive resource is the
specifications. They were originally released by ETSI, later by 3GPP.
The problem starts with the fact that there are two separate numbering
schemes. Everyone in the cellular industry I know always uses the
GSM/3GPP TS numbering scheme, i.e. something like 3GPP TS 44.008.
However, ETSI assigns its own numbers to the specs, like ETSI TS
144008. Now in most cases, it is as simple as removing the '.' and
prefixing a '1' at the beginning. However, that's not always true and
there are exceptions such as 3GPP TS 01.01 mapping to ETSI TS
101855. To make things harder, there doesn't seem to be a
machine-readable translation table between the spec numbers, but there's
a website for spec number conversion at http://webapp.etsi.org/key/queryform.asp
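The simple rule and its caveat fit in a few lines. Here is a minimal sketch in Python; the exception table is purely illustrative and would have to be filled in by hand via the ETSI query form, since no machine-readable mapping exists:

```python
def gsm_to_etsi(spec: str) -> str:
    """Map a 3GPP TS number like '44.008' to its ETSI TS number.

    The common rule is: drop the '.' and prefix a '1'.  Exceptions
    (like TS 01.01 -> ETSI TS 101 855) follow no algorithmic rule
    and must be looked up manually, e.g. via the ETSI query form.
    """
    exceptions = {"01.01": "101855"}  # illustrative, far from complete
    if spec in exceptions:
        return exceptions[spec]
    return "1" + spec.replace(".", "")

print(gsm_to_etsi("44.008"))  # -> 144008
```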
When I started to work on GSM related topics somewhere between my work
at Openmoko and the start of the OpenBSC project, I manually downloaded
the PDF files of GSM specifications from the ETSI website. This was a
cumbersome process, as you had to enter the spec number (e.g. TS 04.08)
in a search window, look for the latest version in the search results,
click on that and then click again for accessing the PDF file (rather
than a proprietary Microsoft Word file).
At some point a poor girlfriend of mine was kind enough to do this
manual process for each and every 3GPP spec, and then create a
corresponding symbolic link so that you could type something like evince
/spae/openmoko/gsm-specs/by_chapter/44.008.pdf into your command line
and get instant access to the respective spec.
However, of course, this gets out of date over time, and by now almost a
decade has passed without a systematic update of that archive.
To the rescue, 3GPP started some time ago to not only provide the
obnoxious M$ Word DOC files, but also deep links to ETSI. So you
could go to http://www.3gpp.org/DynaReport/44-series.htm and then click
on 44.008, and one further click you had the desired PDF, served by
ETSI (3GPP apparently never provided PDF files).
However, in their infinite wisdom, at some point in 2016 the 3GPP
webmaster decided to remove those deep links. Rather than a nice long
list of released versions of a given spec,
http://www.3gpp.org/DynaReport/44008.htm now points to some crappy
download page, from which you then get a ZIP file with a single Word DOC
file inside. You can hardly make it any more inconvenient and cumbersome.
The PDF links would open directly in your favorite PDF viewer. Single
click to the information you want. But no, the PDF links had to go and
were replaced with ZIP file downloads that you
first need to extract, and then open in something like LibreOffice,
taking ages to load the document, rendering it improperly in a word
processor. I don't want to edit the spec, I want to read it, sigh.
So since the usability of this 3GPP specification resource had been
artificially crippled, I was sufficiently annoyed to come up with a solution:
- first create a complete mirror of all ETSI TS (technical
specifications) by using a recursive wget on
- then use a shell script that utilizes pdfgrep and awk to determine the
3GPP specification number (it is written in the title on the first
page of the document) and create a symlink. Now I have something
like 44.008-4.0.0.pdf -> ts_144008v040000p.pdf
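The actual scripts use pdfgrep and awk from the shell; as a rough illustration of the idea, here is an equivalent sketch in Python. The title-page pattern and the directory names are assumptions, and pdfgrep must be installed:

```python
import os
import re
import subprocess

# Pattern expected on the title page of a spec, e.g. "3GPP TS 44.008 V4.0.0"
# (an assumption about the exact layout; adjust as needed).
SPEC_RE = re.compile(r"3GPP TS (\d{2}\.\d{2,3}) V(\d+\.\d+\.\d+)")

def spec_from_text(text: str):
    """Return (spec number, version) if the title line is found, else None."""
    m = SPEC_RE.search(text)
    return (m.group(1), m.group(2)) if m else None

def link_specs(mirror_dir: str, link_dir: str) -> None:
    """Symlink every mirrored ETSI PDF under its 3GPP name,
    e.g. 44.008-4.0.0.pdf -> ts_144008v040000p.pdf."""
    os.makedirs(link_dir, exist_ok=True)
    for name in sorted(os.listdir(mirror_dir)):
        if not name.endswith(".pdf"):
            continue
        path = os.path.join(mirror_dir, name)
        # grep the PDF text for the title line (external pdfgrep tool)
        out = subprocess.run(["pdfgrep", "3GPP TS", path],
                             capture_output=True, text=True).stdout
        found = spec_from_text(out)
        if found:
            spec, version = found
            link = os.path.join(link_dir, f"{spec}-{version}.pdf")
            if not os.path.lexists(link):
                os.symlink(os.path.abspath(path), link)
```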
It's such a waste of resources to have to download all those files and
then write a script using pdfgrep+awk to re-gain the same usability that
the 3GPP chose to remove from their website. Now we can wait for ETSI
to disable indexing/recursion on their server, and easy and quick spec
access will be gone forever :/
Why does nobody care about efficiency these days?
If you're also an avid 3GPP spec reader, I'm publishing the rather
trivial scripts used at http://git.osmocom.org/3gpp-etsi-pdf-links
If you have contacts to the 3GPP webmaster, please try to motivate them
to reinstate the direct PDF links.
December 07, 2016
Many years ago, in the aftermath of Openmoko shutting down, fellow
former Linux kernel hacker Werner Almesberger
was working on an IEEE 802.15.4 (WPAN) adapter for the Ben NanoNote.
As a spin-off to that, the ATUSB device was
designed: A general-purpose open hardware (and FOSS firmware + driver)
IEEE 802.15.4 adapter that can be plugged into any USB port.
This adapter has received a mainline Linux kernel driver written by
Werner Almesberger and Stefan Schmidt, which was eventually merged into
mainline Linux in May 2015 (kernel v4.2 and later).
Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver
maintainer) approached me about the situation that ATUSB hardware was
frequently asked for, but currently unavailable in its
physical/manufactured form. As we run a shop with smaller electronics
items for the wider Osmocom community at sysmocom, and we also
frequently deal with contract manufacturers for low-volume electronics
like the SIMtrace device anyway, it was easy to say "yes, we'll do it".
As a result, ready-built, programmed and tested ATUSB devices are now
finally available from the sysmocom webshop
Note: I was never involved with the development of the ATUSB hardware,
firmware or driver software at any point in time. All credits go to
Werner, Stefan and other contributors around ATUSB.
December 06, 2016
In a previous life I used to do a lot of IT security work, probably even
at a time when most people had no idea what IT security actually is. I
grew up with the Chaos Computer Club, as it was a great place to meet
people with common interests, skills and ethics. People were hacking
(aka 'doing security research') for fun, to grow their skills, to
advance society, to point out corporate stupidities and to raise
awareness about issues.
I've always shared any results worth noting with the general public.
Whether it was in RFID security, on GSM security, TETRA security, etc.
Even more so, I always shared the tools, creating free software
implementations of systems that - at that time - were very difficult or
impossible to access unless you worked for the vendors of the related
devices, who obviously had a different agenda than disclosing security
concerns to the general public.
Publishing security related findings at related conferences can be
interpreted in two ways:
On the one hand, presenting at a major event will add to your
credibility and reputation. That's a nice byproduct, but it shouldn't
be the primary reason, unless you're some kind of egocentric stage hog.
On the other hand, presenting findings or giving any kind of
presentation or lecture at an event is a statement of support for that
event. When I submit a presentation at a given event, I think carefully
if that topic actually matches the event.
The reason that I didn't submit any talks in recent years at CCC events
is not that I didn't do technically exciting stuff that I could talk
about - or that I wouldn't have the reputation that would make the
programme committee consider my submission. I just thought there
was nothing in my work relevant enough to bother the CCC attendees with.
So when Holger 'zecke' Freyther and I chose to present about our recent
journeys into exploring modern cellular modems at the annual Chaos
Communications Congress, we did so because the CCC Congress is the right
audience for this talk. We did so, because we think the people there
are the kind of community of like-minded spirits that we would like to
contribute to, and to whom we would like to give something back for the
many years of excellent presentations and conversations.
So far so good.
However, in 2016, something happened that I haven't seen yet in my 17
years of speaking at Free Software, Linux, IT Security and other
conferences: A select industry group (in this case the GSMA) asking me
out of the blue to give them the talk one month in advance at a private event.
I could hardly believe it. How could they? Who am I? Am I spending
sleepless nights and non-existent spare time on security research of
cellular modems, only to give a free presentation to corporate guys at a
closed industry meeting? The same kind of industries that create the
problems in the first place, and who don't get their act together in
building secure devices that respect people's privacy? Certainly not.
I spend sleepless nights hacking because I want to share the results
with my friends. To share it with people who have the same passion,
whom I respect and trust. To help my fellow hackers to understand
technology one step more.
If that kind of request to undermine the researcher's/author's initial
publication among friends is happening to me, I'm quite sure it must be
happening to other speakers at the 33C3 or other events, too. And that
makes me very sad. I think the initial publication is something that
connects the speaker/author with his audience.
Let's hope the researchers/hackers/speakers have sufficiently strong
ethics to refuse such requests. If certain findings are initially
published at a certain conference, then that is the initial publication.
Period. Sure, you can ask afterwards if an author wants to repeat the
presentation (or a similar one) at other events. But pre-empting the
initial publication? Certainly not with me.
I offered the GSMA a talk on the importance of having FOSS
implementations of cellular protocol stacks as an enabler for security
research, but apparently this was not of interest to them. Seems like all
they wanted was an exclusive heads-up on work they neither commissioned
nor supported in any other way.
And by the way, I don't think what Holger and I will present is all that
exciting in the first place. More or less the standard kind of security
nightmares. By now we are all so numbed by nobody considering
security and/or privacy in the design of IT systems that it is hardly any
news. IoT as it is done so far might very well be the doom of
mankind. An unstoppable tsunami of insecure and privacy-invading
devices, built on ever more complex technology with way too many
security issues. We shall henceforth call IoT the Industry of
I typically prefer to blog about technical topics, but the occasional
stupidity in everyday (business) life is simply too hard to resist.
Today I updated the shipping pricing / zones in the ERP system of my
company to predict shipping rates based on weight and destination of the shipment.
Deutsche Post, the German postal operator, uses their DHL brand for
postal packages. They divide the world into four zones:
- Zone 1 (EU)
- Zone 2 (Europe outside EU)
- Zone 3 (World)
You would assume that "World" encompasses everything that's not part of
the other zones. So far so good. However, I then stumbled upon Zone 4 (rest of
world). See for yourself:
So the World according to DHL is a very small group of countries
including Libya and Syria, while countries like Mexico are rest of world.
Quite charming. I wonder which PR, communications or marketing guru came
up with such a disqualifying name. Maybe they should have called it 3rd
world and 4th world instead? Or even discworld?
November 27, 2016
In 2006 I first visited Taiwan. The reason back then was Sean Moss-Pultz
contacting me about a new Linux and Free Software based Phone that he
wanted to do at FIC in Taiwan. This later became the Neo1973 and
the Openmoko project and finally became part
of both Free Software as well as smartphone history.
Ten years later, it might be worth sharing a bit of a retrospective.
It was about building a smartphone before Android or the iPhone existed
or even were announced. It was about doing things "right" from a Free
Software point of view, with FOSS requirements going all the way down to
component selection of each part of the electrical design.
Of course it was quite crazy in many ways. First of all, it was a
bunch of white, long-nosed western guys in Taiwan, starting a company
around Linux and Free Software, at a time when that was not yet
well-perceived in the embedded and consumer electronics world.
It was also crazy in terms of the many cultural 'impedance mismatches',
and I think at some point it might even be worth writing a book about
the many stories we experienced. The biggest problem here is of course
that I wouldn't want to expose any of the companies or people in the
many instances something went wrong. So probably it will remain a
secret to those present at the time :/
In any case, it was a great project and definitely one of the most
exciting (albeit busy) times in my professional career so far. It was
also great that I could involve many friends and FOSS-compatriots from
other projects in Openmoko, such as Holger Freyther, Mickey Lauer,
Stefan Schmidt, Daniel Willmann, Joachim Steiger, Werner Almesberger,
Milosch Meriac and others. I am happy to still work on a daily basis
with some of that group, while others have moved on to other areas.
I think we all had a lot of fun, learned a lot (not only about Taiwan),
and were working really hard to get the hardware and software into
shape. However, the constantly growing scope, the [for western terms]
quite unclear and constantly changing funding/budget situation and the
many changes in direction ultimately led to missing the market
opportunity. At the time the iPhone and later Android entered the
market, it was too late for a small crazy Taiwanese group of
FOSS-enthusiastic hackers to still have a major impact on the landscape
of Smartphones. We tried our best, but in the end, after a lot of hype
and publicity, it never was a commercial success.
What saddens me more than the lack of commercial success is the
lack of successful free software that resulted. Sure, some
u-boot and Linux kernel drivers got merged mainline, but neither the
three generations of UI stacks (GTK, Qt or EFL based), nor the GSM
modem abstraction gsmd/libgsmd, nor the middleware (freesmartphone.org)
managed to survive the end of the Openmoko company, despite having
deserved to survive.
Probably the most important part that survived Openmoko was the
pioneering spirit of building free software based phones. This spirit
has inspired pure volunteer based projects like
GTA04/Openphoenux/Tinkerphone, which have achieved extraordinary results -
but who are in a very small niche.
What does this mean in practice? We're stuck with a smartphone world in
which we can hardly escape any vendor lock-in. It's virtually
impossible in the non-free-software iPhone world, and it's difficult in
the Android world. In 2016, we have more Linux based smartphones than
ever - yet we have less freedom on them than ever before. Why?
- the amount of hardware documentation on the processors and chipsets
available today is typically less than 10 years ago. Back then, you could
still get the full manual for the S3C2410/S3C2440/S3C6410 SoCs. Today,
this is not possible for the application processors of any vendor.
- the tighter integration of application processor and baseband
processor means that it is no longer possible on most phone designs to
have the 'non-free baseband + free application processor' approach
that we had at Openmoko. It might still be possible if you designed
your own hardware, but it's impossible with any actually existing
hardware in the market.
- Google blurring the line between FOSS and proprietary code in the
Android OS. Yes, there's AOSP - but how many features are lacking?
And on how many real-world phones can you install it? Particularly
with the Google Nexus line being EOL'd? One of the popular exceptions is the
Fairphone2 with its alternative AOSP operating system,
even though that's not the default of what they ship.
- The many binary-only drivers / blobs, from the graphics stack to wifi
to the cellular modem drivers. It's a nightmare and really scary if
you look at all of that, e.g. at the binary blob downloads for
to get an idea about all the binary-only blobs on a relatively current
Qualcomm SoC based design. That's compressed 70 Megabytes, probably
as large as all of the software we had on the Openmoko devices back then.
So yes, the smartphone world is much more restricted, locked-down and
proprietary than it was back in the Openmoko days. If we had been more
successful then, that world might be quite different today. It was a
lost opportunity to make the world embrace more freedom in terms of
software and hardware, without single-vendor lock-in and proprietary restrictions.
November 25, 2016
Early in 2016, a friend sent me a paper by Phillip Rogaway entitled “The Moral Character of Cryptographic Work”. I have read it many times this year. Here’s the abstract:
Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but, also, to the societal implications of our work.
The ability to take control of our lives, again, has been on my mind this month. Loss of control is often rooted in reframed language. Rogaway shows how privacy, anonymity, and even security are now associated with terrorism. His suggestion? Reframe the work of cryptography as building tools for anti-surveillance. Making “surveillance more expensive” is aligned with democracy and freedom. I think this is a great observation. Hopefully others will enjoy reading this paper as much as I did.
November 24, 2016
During the past 16 years I have been playing a lot with a variety of embedded devices.
One of the most important tasks for debugging or analyzing embedded
devices is usually to get access to the serial console on the UART of
the device. That UART is often exposed at whatever logic level the main
CPU/SoC/uC is running on. For 5V and 3.3V that is easy, but for the ever
more unusual voltages I always had to build a custom cable or a
custom level shifter.
In 2016, I finally couldn't resist any longer and built a multi-voltage
USB UART adapter.
This board exposes two UARTs at a user-selectable voltage of 1.8, 2.3,
2.5, 2.8, 3.0 or 3.3V. It can also use whatever other logic voltage
between 1.8 and 3.3V, if it can source a reference of that voltage from
the target embedded board.
Rather than just building one for myself, I released the design as open
hardware under CC-BY-SA license terms. Full schematics + PCB layout
design files are available. For more information see
In case you don't want to build it from scratch, ready-made machine
assembled boards are also made available from
There are plenty of cellular modems on the market in the mPCIe form factor.
Playing with such modems is reasonably easy: you can simply insert them
in a mPCIe slot of a laptop or an embedded device (soekris, pc-engines
or the like).
However, many of those modems actually export interesting signals like
digital PCM audio or UART ports on some of the mPCIe pins, both in
standard and in non-standard ways. Those signals are inaccessible in
those embedded devices or in your laptop.
So I built a small break-out board which performs the basic function of
exposing the mPCIe USB signals on a USB mini-B socket, providing power
supply to the mPCIe modem, offering a SIM card slot at the bottom, and
exposing all additional pins of the mPCIe header on a standard 2.54mm
pitch header for further experimentation.
The design of the board (including schematics and PCB layout design
files) is available as open hardware under CC-BY-SA license terms. For
more information see http://osmocom.org/projects/mpcie-breakout/wiki
If you don't want to build your own board, fully assembled and tested
boards are available from
September 29, 2016
A long time ago I wrote the OpenEmbedded User Manual, and back then the obvious choice was to make it a docbook. In my community there were plenty of other examples that used docbook, and that helped to get started. The great thing about docbook was that from one XML input one could generate output in many different formats like HTML, XHTML, ePub or PDF. It separated the content from the presentation and was tailored for technical documents and articles, with advanced features like generating a change history, appendices and more. With XML entities it was possible to share chapters and parts between different manuals.
When creating sysmocom and starting to write our user manuals, we continued to use docbook. After all, besides the many tags in XML, it is a format that can be committed to git, allowing review, and publishing is just like a software build that can be triggered through git.
On the other hand, writing XML by hand and indenting paragraphs to match the tree structure of the document is painful. In hindsight, writing docbook feels more like writing XML tags than writing content. I started to look for alternatives, heard about asciidoc, discarded it, then re-evaluated it and started to use it as the default. The ratio of content to formatting is really great. With a2x we continued to use docbook/dblatex to render the document. With some trial and error we could even continue to use the docbook docinfo (-a docinfo and a file manual-docinfo.xml). And finally, asciidoc can be used on github as well: just add .adoc to the filename and it will be rendered nicely.
So with asciidoc, restructured text (rst), markdown (md) and many more (textile, pillar, …) we have great tools that make it easier to focus on the content and still get an okay look. The downside is that there are now so many of them (and incompatible dialects). This leads to big differences between rendering tools, e.g. not being able to use a docinfo for PDF generation, being able to add raw PDF commands, etc.
In an attempt to pick up users where they are, I am exploring readthedocs.org as an additional channel for documents. The website can integrate with github to automatically rebuild the documentation. One issue is that they exclusively use Python sphinx to render the documentation, which means the input needs to be rst or markdown (or both).
I can go down the xkcd way and create a meta-format to rule them all, try to use pandoc to convert these documents on the fly (but pandoc already had some issues with basic tables in rst), or switch the format. I looked at rst2pdf, but while powerful it seems to lack docinfo support and markdown input. I am currently exploring staying with asciidoc and then using asciidoc -> docbook -> markdown_github for readthedocs. Let’s see how far this gets.
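That last pipeline can be sketched with the stock asciidoc and pandoc command lines. The file names are hypothetical, and both tools must be installed to actually run the steps:

```python
import subprocess

def conversion_steps(adoc_file: str):
    """Build the two commands for asciidoc -> docbook -> markdown_github."""
    stem = adoc_file.rsplit(".", 1)[0]
    return [
        # asciidoc renders the .adoc source to DocBook XML
        ["asciidoc", "-b", "docbook", "-o", stem + ".xml", adoc_file],
        # pandoc converts the DocBook XML to GitHub-flavoured markdown
        ["pandoc", "-f", "docbook", "-t", "markdown_github",
         "-o", stem + ".md", stem + ".xml"],
    ]

def convert(adoc_file: str) -> None:
    for cmd in conversion_steps(adoc_file):
        subprocess.check_call(cmd)
```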
September 20, 2016
Going from 2G/3G to 4G requires learning a new set of abbreviations. The network is referred to as the IP Multimedia Subsystem (IMS) and the HLR becomes the Home Subscriber Server (HSS). The ITU ASN.1 used in 2G/3G to define the RPCs (request, response, potential errors), message structure and encoding is replaced with a set of IETF RFCs. From my point of view the names of messages and attributes change, but the basic broken trust model remains.
Having worked on probably the best ASN.1/TCAP/MAP stack in Free Software, it is time to move to the future and apply the good parts and lessons learned to Diameter. The first RFC to look at is RFC 6733 – Diameter Base Protocol. This defines the basic encoding of messages, the requests, responses and errors, a BNF grammar to define these messages, when and how to connect to remote systems, etc.
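To give a feel for the basic encoding the RFC defines, here is a small sketch of encoding a single AVP per RFC 6733 (Vendor-ID handling omitted for brevity; the Origin-Host value is of course made up):

```python
import struct

def encode_avp(code: int, flags: int, data: bytes) -> bytes:
    """Encode one Diameter AVP (RFC 6733, section 4.1): 4-byte code,
    1-byte flags, 3-byte length covering header plus unpadded data,
    then the data padded to a 32-bit boundary."""
    length = 8 + len(data)                      # header (8) + data
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * ((4 - len(data) % 4) % 4)
    return header + data + padding

# Origin-Host (AVP code 264) with the mandatory ('M') flag set
avp = encode_avp(264, 0x40, b"hss.example.com")
```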
The core part of our ASN.1/TCAP/MAP stack is that the 3GPP ASN.1 files are parsed and, instead of just generating structs for the types (like asn1c and many other compilers do), we have a model that contains the complete relationship between application-context, contract, package, argument, result and errors. From what I know this is quite unique (at least in the FOSS world), and it has allowed rapid development of a HLR, SMSC, SCF, security research and more.
So getting a complete model is the first step. It will allow us to generate encoders/decoders for languages like C/C++, be the base of a stack in Smalltalk, allow browsing the model graphically, generate fancy pictures, …. The RFC defines a grammar for how messages and grouped Attribute-Value-Pairs (AVPs) are formatted, and then a list of base messages. The Erlang/OTP framework has extended this grammar to define a module and relationships between modules.
I started by converting the BNF into a PetitParser grammar. This means each rule of the grammar becomes a method in the parser class; one can then create a unit test for that method and test the rule in isolation. To build a complete parser the rules are combined (and, or, min, max, star, plus, etc.) with each other. One nice tool to help with debugging and testing the parser is the PetitParser Browser. It is pictured above; it can visualize a rule, show how rules are combined with each other, generate an example based on the grammar, and partially parse a message while providing debug hints (e.g. '::=' was expected as next token).
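The rule-per-method approach is easy to mimic in other languages. Below is a toy Python sketch (the helper names are mine; this is not PetitParser's API) that combines two small rules for the RFC 6733 production `diameter-name = ALPHA *(ALPHA / DIGIT / "-")`, each of which could be unit-tested on its own:

```python
import re

def rule(pattern):
    """Turn a regex fragment into a rule: returns (match, new_pos) or None."""
    rx = re.compile(pattern)
    def parse(text, pos=0):
        m = rx.match(text, pos)
        return (m.group(0), m.end()) if m else None
    return parse

def seq(*rules):
    """'and' combinator: every rule must match, one after the other."""
    def parse(text, pos=0):
        parts = []
        for r in rules:
            res = r(text, pos)
            if res is None:
                return None
            parts.append(res[0])
            pos = res[1]
        return ("".join(parts), pos)
    return parse

# diameter-name = ALPHA *(ALPHA / DIGIT / "-")
alpha = rule(r"[A-Za-z]")
alnum_dash_star = rule(r"[A-Za-z0-9-]*")  # the *(...) repetition collapsed
diameter_name = seq(alpha, alnum_dash_star)

print(diameter_name("Capabilities-Exchange-Request ::= ..."))
# → ('Capabilities-Exchange-Request', 29)
```

PetitParser does the same thing with real combinator objects instead of closures, which is what makes the graphical browsing and debug hints possible.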
After having written the grammar I tried to parse the RFC example and it didn't work. The sad truth is that while the issue was already known for RFC 3588, it has not been fixed. I created another errata item; let's see when and if it is picked up in future revisions of the base protocol.
The next step is to convert the grammar into a module. I will make progress as time permits, and contributions are more than welcome.
September 18, 2016
Previously I have written about connectivity options for IoT devices; today I assume that a cellular technology (with names like GSM, 3G, UMTS, LTE, 4G) has been chosen. Unless you are a big vendor you will end up using a module (instead of a chipset), and either you are curious what the module is doing behind its AT command interface or you are trying to understand a real problem. The following is going to help you, or at least be entertaining.
The xgoldmon project was the first to provide air-interface traces and logging to the general public, but it was limited to Infineon basebands (and some Gemalto devices), needed special commands to enable, and didn't include all messages all the time.
In the last months I have worked intensively with modules of a vendor called Quectel. They use Qualcomm chipsets and have built the GSM/UMTS Quectel UC20 and the GSM/UMTS/LTE Quectel EC20 modules. They are available as a variant to solder, but to speed up development they are offered as mini PCI Express cards as well. I ended up putting them into a PCengines APU2, soldered an additional holder for the second SIM card, placed U.FL to SMA connectors and put it into one of their standard cases. While the UC20 and EC20 are pretty similar, the software is not the same and some basic features are missing from the EC20, e.g. SIM Toolkit support. The easiest way to acquire these modules in Europe seems to be through the above links.
The extremely nice feature is that both modules export Qualcomm's bi-directional DIAG debug interface over USB (without having to activate it through an undocumented AT command). It is a framed protocol with a simple checksum at the end of each frame; many general frame types (e.g. for logging and for describing log regions) are known and used in projects like ModemManager to extract additional information. Some parts, like those that include Tx power, are not well understood yet.
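The framing is HDLC-like: a 16-bit checksum is appended, occurrences of 0x7E/0x7D in the body are escaped, and each frame ends with 0x7E. A Python sketch of my understanding of that encapsulation (the exact CRC variant, CRC-16/X-25 as used for the HDLC frame check sequence, is an assumption here):

```python
def crc16_x25(data):
    """CRC-16/X-25 (the HDLC frame check sequence): reflected poly 0x8408,
    init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def hdlc_frame(payload):
    """Append the checksum (little endian), escape 0x7D/0x7E bytes and
    terminate the frame with 0x7E."""
    crc = crc16_x25(payload)
    raw = bytes(payload) + bytes([crc & 0xFF, crc >> 8])
    out = bytearray()
    for byte in raw:
        if byte in (0x7D, 0x7E):
            out += bytes([0x7D, byte ^ 0x20])
        else:
            out.append(byte)
    out.append(0x7E)
    return bytes(out)

# Arbitrary two-byte payload, chosen so the 0x7D escaping is visible;
# real DIAG command ids are deliberately not shown here.
print(hdlc_frame(b"\x7d\x01").hex())
```

Decoding works the same way in reverse: split on 0x7E, un-escape, verify the trailing checksum, then dispatch on the first payload byte.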
I have made a very simple utility available on github that enables logging, converts radio messages to the Osmocom GSMTAP protocol, and sends them to a remote host using UDP or writes them to a pcap file. The result can be analyzed using wireshark.
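The GSMTAP part of such a utility is small: a fixed 16-byte header is prepended to the raw radio message and the result is sent via UDP to port 4729, where wireshark picks it up. A Python sketch (header layout per my reading of the GSMTAP definition in libosmocore; all field values below are illustrative):

```python
import socket
import struct

GSMTAP_PORT = 4729      # wireshark listens for GSMTAP here by default
GSMTAP_VERSION = 2
GSMTAP_TYPE_UM = 0x01   # GSM Um air interface

def gsmtap_packet(payload, arfcn=0, timeslot=0, frame_number=0, sub_type=0):
    """Prepend the 16-byte GSMTAP header to a raw radio message."""
    header = struct.pack(
        "!BBBBHbBIBBBB",
        GSMTAP_VERSION,
        4,              # header length in 32-bit words (4 * 4 = 16 bytes)
        GSMTAP_TYPE_UM,
        timeslot,
        arfcn,
        0,              # signal level in dBm (unknown here)
        0,              # signal/noise ratio
        frame_number,
        sub_type,       # channel type, e.g. BCCH/CCCH/...
        0,              # antenna number
        0,              # sub-slot
        0)              # reserved
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# fire-and-forget: send a dummy 23-byte radio message to a local wireshark
sock.sendto(gsmtap_packet(b"\x00" * 23), ("127.0.0.1", GSMTAP_PORT))
```

Since it is plain UDP, nothing needs to be listening; wireshark simply dissects whatever arrives on port 4729.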
You will need a new enough Linux kernel (e.g. >= Linux 4.4) for the modems to be recognized and initialized properly. This will create four ttyUSB serial devices, a /dev/cdc-wdmX and a wwanX interface. The latter two can be used to have data as a normal network interface instead of launching pppd. In short, these modules are super convenient for adding connectivity to a product.
PCengines APU2 with Quectel EC20 and Quectel UC20
The repository includes a shell script to build some dependencies and the main utility. You will need to install autoconf, automake, libtool, pkg-config, libtalloc, make, gcc on your Linux distribution.
git clone git://github.com/moiji-mobile/diag-parser
Assuming that your modem has exposed the DIAG debug interface on /dev/ttyUSB0 and you have your wireshark running on a system with the internal IPv4 address of 10.23.42.7 you can run the following command.
./diag-parser -g 10.23.42.7 -i /dev/ttyUSB0
Analyzing UMTS with wireshark: the capture below was taken with the Quectel module. It shows the radio messages used when registering to the network, sending an SMS and placing calls.
Wireshark dissecting UMTS
August 22, 2016
Many of us deal or will deal with (connected) M2M/IoT devices. This might be writing firmware for microcontrollers, using an RTOS like NuttX or a full-blown Unix(-like) operating system such as FreeBSD or Yocto/Poky Linux, creating and building code to run on the device, processing data in the backend, or something in between. Many of these devices will have sensors to collect data like GNSS position/time, temperature, light, acceleration, seeing airplanes, detecting lightning, etc. The backend part is work but mostly "solved": one can rely on something like Amazon IoT or create a powerful infrastructure using many of the FOSS options for message routing, data storage, indexing and retrieval in C++. In this post I want to focus on the little detail of how data can get from the device to the backend.
To make this thought experiment a bit more real, let's imagine we want to build a bicycle lock/tracker. Many of my colleagues ride their bicycles to work and bikes being stolen remains a big tragedy. So the primary focus of such an IoT device would be to prevent theft (make other bikes an easier target), to make selling a stolen bicycle more difficult (e.g. by making it easy to check whether something has been stolen), and in case it has been stolen, to make it easier to find the current location.
Let's consider two different architectures. One possibility is to have the bicycle actively acquire its position and then try to push this information to a server ("active push"). Another approach is to have fixed scanning stations or users scan/report bicycles ("passive pull"). Both lead to very different designs.
The active-push system would need some sort of GNSS module, a microcontroller or a full-blown SoC to run Linux, an accelerometer and maybe more sensors. It should somehow fit into an average bicycle frame, have antennas good enough to work from inside the frame, last/work for the lifetime of a bicycle and, most importantly, have a way to bridge the air gap from the bicycle to the server.
In the passive-pull design the device would not know its position or whether it has been moved. It might be a simple barcode/QR code/NFC/iBeacon/etc. tag. In the barcode case it could carry the serial number of the frame and some owner/registration information. In the NFC case it should be a randomized serial number (if possible, to increase privacy). Users would scan the barcode/QR code and an application would annotate the found bicycle with the current location (cell towers, wifi networks, WGS 84 coordinates) and upload it to the server. For NFC a smartphone might be able to scan the tag, and one could try to put readers at busy locations.
The incentive for the app user is to feel good collecting points for scanning bicycles, maybe some rewards if a stolen bicycle is found. Buyers could easily check bicycles if they were reported as stolen (not considering the difficulty of how to establish ownership).
The technologies that come to my mind are barcodes, playing some humanly inaudible noise and decoding it in an app, and Bluetooth Smart. Next I will look at the main differentiating constraints of these technologies, provide a short explanation of each, and finish with how these constraints interact with each other.
World wide usable
Radio technology operates on a specific set of radio frequencies (bands). Each country may manage these frequencies separately, which can lead to having to use the same technology on different bands depending on the country. This increases the complexity of the antenna design (or requires multiple antennas), makes the mechanical design more complex, and makes software and production testing more difficult. There might also be multiple users/technologies on the same band (e.g. wifi + bluetooth, or just too many wifis).
Battery
Each radio technology requires broadcasting and might require listening or permanently monitoring the air for incoming messages ("paging"). With NFC the scanner might be able to power the device, but for other technologies this is unlikely to be true. One will need to define the lifetime of the device and the size of the battery, or look into ways of replacing/recycling batteries or charging them.
Range
Different technologies were designed to work with sender and receiver at different minimum/maximum distances (and speeds, but that is relevant neither for the lock nor is bandwidth for our application). E.g. with Near Field Communication (NFC) the workable range is centimeters, while with GSM it will be many kilometers, and with UMTS the cell size depends on how many phones are currently using it (the cell is breathing).
Pick two of three
Ideally we want something that works over long distances, requires no battery to send/receive, and still pushes out position/acceleration/event reports to the servers. Sadly this is not how reality works, and we will have to set priorities.
The more bands there are to support, the more complicated antenna design, production, calibration and testing become. It might be that one technology does not work in all countries, that it is not equally popular everywhere, or that the market situation differs, e.g. some cities have city-wide public hotspots, some don't.
Higher transmission power increases the range but increases the power consumption even more. More current is drawn during transmission, which requires a better hardware design to buffer the spikes, a bigger battery and ultimately a way to charge or efficiently replace batteries. Given these constraints it is time to explore some technologies. I will use the ones already mentioned at the beginning of this section.
| Technology | Constraints |
|------------|-------------|
| Barcode / QR code | Scan device needed; cost of the device; an app scanning the barcode is required; the sticker needs to be hard to remove and visible, maybe embedded into the frame |
| Audio | Non-human-hearable audio; an app recording the audio; a button to play the audio? |
| NFC | World wide usable: yes, but not on a single band; range: centimeters to meters; many bands, specific readers needed |
| Bluetooth Smart | World wide usable: yes, on a common band; competes with Wifi for spectrum |
| 802.15.4 / 6LoWPAN | World wide usable: yes, but not on a single band; not commonly deployed, software more involved; uses the ZigBee physical layer and then IPv6, requires a 6LoWPAN-to-Internet translation |
| GSM | World wide usable: almost, besides South Korea, Japan, some islands; almost global coverage, direct communication with the backend possible |
| UMTS | World wide usable: less than GSM, but covers South Korea, Japan; range: meters to kilometers, depending on usage; higher power usage than GSM, higher device cost |
| LTE | World wide usable: less than GSM; designed for kilometers; expensive, higher power consumption |
| NB-IOT | Not deployed yet, coming in the future; can embed GSM equally well into an LTE carrier |
Both push and pull architectures seem feasible and create different challenges and possibilities. A pull architecture will require at least smartphone app support and maybe a custom receiver device. It will only work in regions with lots of users, and making tracking harder/preserving privacy is something to solve.
For the push architecture GSM is a good approach. If coverage in South Korea or Japan is required, a GSM/UMTS mix might be an option. NB-IOT seems nice, but right now it is not deployed and it is not clear whether a module will require less power than a GSM module. NB-IOT might only be in the interest of basestation vendors (the future will tell). Using GSM/UMTS brings its own set of problems on the device side, but that is for other posts.
August 21, 2016
As part of running infrastructure it might make sense, or even be required, to store logs of transactions. A good way might be to capture the raw unmodified network traffic. For our GSM backend this is what we (have) to do, and I wrote a client that uses libpcap to capture data and sends it to a central server for storing the trace. The system is rather simple and in production at various customers. The benefit of having a central server is access to a lot of storage without granting too many systems and users access, central log rotation and compression, an easy way to grab all relevant traces, and more.
Recently the topic of real-time processing of captured data came up. I wanted to add some kind of side channel that distributes data to interested clients before writing it to disk. E.g. one might analyze an RTP audio flow for packet loss and jitter without actually storing the personal conversation.
I didn't create a custom protocol but decided to try ØMQ (ZeroMQ). It has many built-in strategies (publish/subscribe, round-robin routing, pipeline, request/reply, proxying, …) for connecting distributed systems. The framework abstracts DNS resolution, connecting and re-connecting, and makes it very easy to build the standard message exchange patterns. I opted for the publish/subscribe pattern because the collector server (acting as publisher) does not care whether anyone is consuming the events or data. The messages I send are quite simple as well. There are two kinds of multi-part messages, one for events and one for data. A subscriber can easily filter for events or data, and for a specific capture source.
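A minimal sketch of this publish/subscribe set-up with pyzmq (`pip install pyzmq`); the topic names "event"/"data" and the frame layout are my own illustration, not the exact osmo-pcap wire format:

```python
import time
import zmq

ctx = zmq.Context.instance()

# The collector acts as publisher; it does not care who is listening.
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://capture")

# A subscriber filters on a prefix of the first frame; here only "data".
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://capture")
sub.setsockopt(zmq.SUBSCRIBE, b"data")

time.sleep(0.1)  # let the subscription propagate before publishing

# Two kinds of multi-part messages: events and data, tagged with the source.
pub.send_multipart([b"event", b"cap0", b"link-up"])
pub.send_multipart([b"data", b"cap0", b"\x45\x00..."])

topic, source, payload = sub.recv_multipart()
print(topic, source)  # only the "data" message passes the filter
```

In the real deployment the publisher would bind a tcp:// endpoint instead of inproc://, and slow or absent subscribers simply miss messages, which is exactly the fire-and-forget behaviour wanted here.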
The support for ZeroMQ was added in two commits. The first one adds basic zeromq context/socket support and configuration, and the second adds sending out the events and data in a fire-and-forget manner. In a simple test set-up it seems to work just fine.
Since moving to Amsterdam I try to attend more meetups. Recently I went to a talk at the local Elasticsearch group and found out about packetbeat. It is a program written in Go that uses a PCAP library to capture network traffic, has protocol decoders written in Go to do IP re-assembly and decoding, and uploads the extracted information to an instance of Elasticsearch. In principle it sits somewhere between my PCAP system and a distributed wireshark (without the same number of protocol decoders). In our network we wouldn't want the edge systems to talk directly to the Elasticsearch system, and I wouldn't want to run decoders as root (or at least not with extended capabilities).
As an exercise to learn a bit more about the Go language I tried to modify packetbeat to consume trace data from my new data interface. The result can be found here, and I do understand (though I am still hooked on Smalltalk/Pharo) why a lot of people like Go. The built-in fetching of dependencies from github is very neat, and the module and interface/implementation approach is easy to comprehend and powerful.
The result of my work allows something like the picture below: first we centralize traffic capturing at the pcap collector, then packetbeat picks up the data, decodes it and forwards it for analysis into Elasticsearch. Let's see if upstream merges my changes.
Some Free Software projects have already moved to Github, some probably plan to, and the Python project will move soon. I have not followed the reasons for why the Python project is moving, but there is a long list of reasons to move to a platform like github.com. They seem to have good uptime; offer checkouts through ssh, git, http (good for corporate firewalls) and a subversion interface; have integrated wiki and ticket management; the fork feature allows an upstream to discover what is being done to the software; and the pull requests and the integration with third-party providers are great. The last item allows many nice things, especially integration with a ton of Continuous Integration tools (Travis, Semaphore, Circle, who knows).
From a freedom point of view I think Gitlab is a lot worse than Github. They try to create the illusion that this is a Free Software alternative to Github.com and offer to host your project, but if you want the same features for self-hosting you will notice that you fell for their marketing. Their website prominently states "Runs GitLab Enterprise Edition". If you look at the feature comparison between the "Community Edition" (the Free Software project) and their open-core additions (the Enterprise Edition) you will notice that many of the extra features are essential.
So when deciding between putting your project on github.com or gitlab.com, the question is not proprietary versus Free Software but essentially proprietary versus proprietary, and as such there is no difference.
Imagine you run a GSM network and you have multiple systems at the edge of your network that communicate with other systems. For debugging reasons you might want to collect traffic and then look at it to explore an issue or look at it systematically to improve your network, your roaming traffic, etc.
The first approach might be to run tcpdump on each of these systems in a round-robin manner, compress the old traffic and have a script that downloads/uploads it once a day to a central place. The issue is that each node needs enough disk space, you might not feel happy keeping old files on the edge, or you just don't know when a good time to copy them is.
Another approach is to create an aggregation framework. A client uses libpcap to capture the traffic and redirects it to a central server. The central server stores the traffic and can rotate files based on size or age. Old files can then be compressed and removed.
I created the osmo-pcap tool many years ago and have recently fixed a 64bit PCAP header issue (the timeval in the on-disk header is 32bit), fixed the collection of jumbo frames, updated the README.md file of the project, created packages for Debian, Ubuntu, CentOS, OpenSUSE and SLES, and made sure that it can be compiled and used on FreeBSD 10 as well.
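The 64bit issue comes from the pcap on-disk format: the per-record header stores the timestamp as two unsigned 32-bit values, while a native struct timeval is 16 bytes on 64-bit systems, so the header has to be packed explicitly. A Python sketch of writing a classic pcap file with correctly sized headers:

```python
import struct
import time

def pcap_global_header(snaplen=65535, linktype=1):
    """Classic pcap global header: magic 0xa1b2c3d4, version 2.4,
    linktype 1 = DLT_EN10MB (Ethernet)."""
    return struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, snaplen, linktype)

def pcap_record(packet, ts=None):
    """Per-record header: ts_sec and ts_usec are u32 on disk, regardless
    of how big the platform's struct timeval is (the source of the bug)."""
    ts = time.time() if ts is None else ts
    sec = int(ts)
    usec = int((ts - sec) * 1_000_000)
    header = struct.pack("<IIII", sec & 0xFFFFFFFF, usec,
                         len(packet), len(packet))
    return header + bytes(packet)

with open("trace.pcap", "wb") as f:
    f.write(pcap_global_header())
    f.write(pcap_record(b"\x00" * 60))  # one dummy 60-byte frame
```

Writing a native struct timeval into the record header instead of two u32 fields produces files that wireshark on another architecture cannot read, which is exactly the class of bug fixed here.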
If you are using or decided not to use this software I would be very happy to hear about it.
Berlin continues to gain a lot of popularity, culturally and culinarily it is an awesome place and besides increasing rents it still remains more affordable than other cities. In terms of economy Berlin attracts new companies and branches/offices as well. At the same time I felt the itch and it was time to leave my home town once again. In the end I settled for the bicycle friendly (and sometimes sunny) city of Amsterdam.
My main interest remains building reliable systems with Smalltalk, C/C++, Qt and learn new technology (Tensorflow? Rust? ElasticSearch, Mongo, UUCP) and talk about GSM (SCCP, SIGTRAN, TCAP, ROS, MAP, Diameter, GTP) or get re-exposed to WebKit/Blink.
If you are in Amsterdam or if you know people or companies I am happy to meet and make new contacts.
In the past I have written about my usage of Tufao and Qt to build REST services. This time I am writing about my experience of using the TreeFrog framework to build a full web application.
You might wonder why one would want to build such a thing in a statically typed, compiled language instead of something more dynamic. There are a few reasons:
- Performance: The application is intended to run on our sysmoBTS GSM basestation (TI Davinci DM644x). By modern standards it is a very low-end SoC (ARMv5te instruction set, single core, a low amount of RAM) yet still perfectly fine to run a GSM network.
- Interface: For GSM we have various libraries with a C programming interface and they are easy to consume from C++.
- Compilation/Distribution: By (cross-)building the application there is a “single” executable and we don’t have the dependency mess of Ruby.
The second decision was to not use Tufao and to search for a framework with user management and a template/rendering/canvas engine built in. From the Chaos Communication Camp in 2007 I remembered having overheard a conversation about a "Qt for the Web" (Wt, the C++ Web Toolkit), and this was the first framework I looked at. It seems like a fine project/product, but interfacing with Qt seemed like an afterthought. I continued to look and ended up finding and trying the TreeFrog framework.
I am really surprised that this project has existed for so long without my having heard about it. It is built on top of Qt, uses QtSQL for the ORM mapping and QMetaObject for dispatching to controllers, and together with its template engine it resembles Ruby on Rails a lot. It has two template engines, routing of URLs to controllers/slots, and one can embed any C++ in the templates. The documentation is complete, and by using the search on the website I found everything I was looking for on my "advanced" topics. Because of my own stupidity I ended up single-stepping through the code, and a Qt coder will feel right at home.
My favorite features:
- tspawn model TableName will autogenerate a C++ model based on the table in the database, and updating an existing model works as well.
- The application builds a libmodel.so, libhelper.so (I removed that) and libcontroller.so. When using the -r option the application will respawn itself. At first I thought I would not like it, but it improves round-trip times.
- C++ in the template: the ERB template is parsed, a C++ class is generated, and its ::toString() method produces the HTML code. So in case something goes wrong, it is very easy to inspect.
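The template-to-class idea from the last point can be sketched outside of C++ as well. A toy Python version (nothing to do with TreeFrog's actual generator) that turns an ERB-style snippet into an object whose to_string() produces the HTML:

```python
import re

class CompiledTemplate:
    """Toy stand-in for the generated class: to_string() interpolates values."""
    def __init__(self, parts):
        self.parts = parts  # list of ("text", literal) or ("expr", name)

    def to_string(self, **context):
        out = []
        for kind, value in self.parts:
            out.append(str(context[value]) if kind == "expr" else value)
        return "".join(out)

def compile_template(source):
    """Split '<%= name %>' placeholders from the literal HTML around them."""
    parts = []
    pos = 0
    for m in re.finditer(r"<%=\s*(\w+)\s*%>", source):
        if m.start() > pos:
            parts.append(("text", source[pos:m.start()]))
        parts.append(("expr", m.group(1)))
        pos = m.end()
    if pos < len(source):
        parts.append(("text", source[pos:]))
    return CompiledTemplate(parts)

page = compile_template("<h1>Hello <%= user %></h1>")
print(page.to_string(user="treefrog"))  # → <h1>Hello treefrog</h1>
```

TreeFrog does the equivalent at build time, emitting a real C++ class, which is why a broken template can be debugged like any other compiled code.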
If you are currently using Ruby on Rails or Django but would like to do it in C++, have a look at TreeFrog. I really like it so far.
As part of the Osmocom.org software development we have a Jenkins set-up that executes unit and system tests. For OpenBSC we compile the software, then execute the unit tests and finally run a bunch of system tests. The system tests verify making configuration changes through the telnet interface and the machine control interface, might try to connect to other parts, etc.
In the past this was executed after a committer had pushed his changes to the repository, and the build time didn't matter. As part of the move to the Gerrit code review we execute them before, which means people might need to wait for the result (and waiting for a computer shouldn't be necessary these days). We are renting a dedicated build machine to speed up compilation, and I have looked at how to execute the system tests in parallel. The issue is that during a system test we bind to ports on localhost, which means we cannot have two test runs at the same time.
I decided to use the Linux network namespace support and opted for docker to achieve it. There are some hiccups, but in general it is a great step forward. Using a statement like the following we execute our CI script in a clean environment.
$ docker run --rm=true -e HOME=/build -w /build -i -u build -v $PWD:/build osmocom:amd64 /build/contrib/jenkins.sh
As part of the OpenBSC build we re-build dependencies, and thanks to building in the virtual /build directory we can look at archiving libosmocore/libosmo-sccp/libosmo-abis and not rebuilding them all the time.
August 16, 2016
For many years I've always been wanting to do some motorbike riding
across the Alps, but somehow never managed to do so. It seems when in
Germany I've always been too busy - contrary to the many motorbike tours
around and across Taiwan which I did during my frequent holidays there.
This year I finally took the opportunity to combine visiting some
friends in Hungary and Bavaria with a nice tour starting from Berlin
over Prague and Brno (CZ), Bratislava (SK) to Tata and Budapest (HU),
further along lake Balaton (HU) towards Maribor (SI) and finally across
the Grossglockner High Alpine Road (AT) to Salzburg and Bavaria before heading back to Berlin.
It was eight fun (but sometimes long) days riding. For some strange
turn of luck, not a single drop of rain was encountered during all that
time, traveling across six countries.
The most interesting parts of the tour were:
- Along the Elbe river from Pirna (DE) to Lovosice (CZ). Beautiful
scenery along the river valley, most parts of the road immediately on
either side of the river. Quite touristy on the German side, much
more pleasant and quiet on the Czech side.
- From Mosonmagyarovar via Gyor to Tata (all HU). Very little traffic
alongside road '1'. Beautiful scenery with lots of agriculture and
forests left and right.
- The Northern coast of Lake Balaton, particularly from Tihany to
Keszthely (HU). Way too many tourists and traffic for my taste, but
still very impressive to realize how large/long that lake really is.
- From Maribor to Dravograd (SI) alongside the Drau/Drav river valley.
- Finally, of course, the Grossglockner High Alpine Road,
which reminded me in many ways of the high mountain tours I did in
Taiwan. Not a big surprise, given that both lead you up to about
2500 meters above sea level.
Finally, I have to say I've been very happy with the performance of my
1996 model BMW F 650ST bike, which has coincidentally just celebrated its
20th anniversary. I know it's an odd bike design (650cc
single-cylinder with two spark plugs, ignition coils and two
carburetors) but consider it an acquired taste ;)
I've also published a map with a track log of the trip.
In one month from now, I should be reporting from motorbike tours in
Taiwan on the equally trusted small Yamaha TW-225 - which of course
plays in a totally different league ;)