As I just wrote in my post about TelcoSecDay, I sometimes
worry about the choices I made with Osmocom, particularly when I see
all the great stuff people are doing in fields that I was previously
working
in, such as applied IT security as well as Linux Kernel development.
When people like Dieter, Holger and I started to play with what later
became OpenBSC, it was just for fun. A challenge to master. A closed
world to break open and to attack with the tools, the mindset and
the values that we brought with us.
Later, Holger and I started to do freelance development for commercial
users of Osmocom (initially basically only OpenBSC, but then OsmoSGSN,
OsmoBSC, OsmoBTS, OsmoPCU and all the other bits on the infrastructure
side). This led to the creation of sysmocom in 2011, and ever since we
have been trying to use revenue from hardware sales as well as development
contracts to subsidize and grow the Osmocom projects. We're investing
most of our earnings directly into more staff that in turn works on
Osmocom related projects.
It's important to draw the distinction between the Osmocom cellular
infrastructure projects, which are mostly driven by commercial users
and sysmocom these days, and the many other pure just-for-fun community
projects under the Osmocom umbrella, like OsmocomTETRA, OsmocomGMR,
rtl-sdr, etc.
I'm focusing only on the cellular infrastructure projects here, as they
have been at the center of my life for the past 6+ years.
In order to do this, I basically gave up my previous career[s] in IT
security and Linux kernel development (as well as put things like
gpl-violations.org on hold). This is a big price to pay for creating
more FOSS in the mobile communications world, and sometimes I'm a bit
melancholic about the "old days" before.
Financial wealth is clearly not my primary motivation, but let me be
honest: I could have easily earned a shitload of money continuing to do
freelance Linux kernel development, IT security or related consulting.
There's a lot of demand for related skills, particularly with some
experience and reputation attached. But I decided against it, and
worked several years without a salary (or almost none) on Osmocom
related stuff [as did Holger].
But then, even with all the sacrifices made, and the amount of revenue
we can direct from sysmocom into Osmocom development: given the
complexity of cellular infrastructure, the funding and resources are
always only a fraction of what one would normally want for a proper
implementation. So it's a constant resource shortage, combined with
lots of unpaid work on those areas that are not on the immediate
short-term feature list of customers, and that nobody else in the
community feels like working on. And that can be a bit frustrating at
times.
Is it worth it?
So after 7 years of OpenBSC, OsmocomBB and all the related projects, I'm
sometimes asking myself whether it has been worth the effort, and
whether it was the right choice.
It was right in the sense that cellular technology is still an area
that's obscure and unknown to many, and that has very little FOSS
(though that is improving!). At the same time, cellular networks are becoming
more and more essential to many users and applications. So on an
abstract level, I think that every step in the direction of FOSS for
cellular is as urgently needed as before, and we have had quite some
success in implementing many different protocols and network elements.
Unfortunately, in most cases incompletely, as the amount of funding
and/or resources was always extremely limited.
On the other hand, when it comes to metrics such as personal
satisfaction or professional pride, I'm not very happy or satisfied.
The community remains small, the commercial interest remains limited,
and as opposed to the Linux world, most players have a complete lack of
understanding that FOSS is not a one-way road, but that it is important
for all stakeholders to contribute to the development in terms of code,
funding or other resources.
I think a collaborative development project (which to me is what FOSS is
about) is only truly successful if its success is not tied to
a single individual, a single small group of individuals or a single
entity (company). And no matter how much I would like the above to be
the case, it is not true for the Osmocom cellular infrastructure
projects. Take away Holger and me, or take away sysmocom, and I think
it would be pretty much dead. And I don't think I'm exaggerating here.
This makes me sad, and after all these years, and after knowing quite a
number of commercial players using our software, I would have hoped that
the project rests on many more shoulders by now.
This is not to belittle the efforts of all the people contributing to
it, whether the team of developers at sysmocom, whether those in the
community that still work on it 'just for fun', or whether those
commercial users that contract sysmocom for some of the work we do.
Also, there are known and unknown donors/funders, like the NLnet
foundation for some parts of the work. Thanks to all of you, and
clearly we wouldn't be where we are now without all of that!
But I feel it's not sufficient for the overall scope, and it's not [yet]
sustainable at this point. We need more support from all sides,
particularly those not currently contributing. From vendors of BTSs and
related equipment that use Osmocom components. From operators that use
it. From individuals. From academia.
Yes, we're making progress. I'm happy about new developments like the
Iu and Iuh support, the OsmoHLR/VLR split and 2G/3G authentication that Neels just blogged about. And
there's progress on the SIMtrace2 firmware with card emulation and MITM,
just as well as there's progress on libosmo-sigtran (with a more
complete SUA, M3UA and connection-oriented SCCP stack), etc.
But there are too few people working on this, and those people are
mostly coming from one particular corner, while most of the [commercial]
users do not contribute the way you would expect them to contribute in
collaborative FOSS projects. You can argue that most people in the
Linux world also don't contribute, but there the large commercial
beneficiaries (like the chipset and hardware makers) mostly do, as do
the large commercial users.
All in all, I have the feeling that Osmocom is as important as it
ever was, but it's not grown up yet to really walk on its own feet. It
may be able to crawl, though ;)
So for now, don't panic. I'm not suffering from burn-out or a mid-life
crisis, and I don't plan any big changes of where I put my energy: It
will continue to be Osmocom. But I also think we have to have a more
open discussion with everyone on how to move beyond the current
situation. There's no point in staying quiet about it, or to claim that
everything is fine the way it is. We need more commitment. Not from
the people already actively involved, but from those who are not [yet].
If that doesn't happen in the next, let's say, 1-2 years, I think it's
fair that I might seriously re-consider in which field and in which way
I'd like to dedicate my [I would think considerable] productive energy.
I'm just on my way back from the Telecom Security Day 2017
<https://www.troopers.de/troopers17/telco-sec-day/>, which is an
invitation-only event about telecom security issues hosted by ERNW
back-to-back with their Troopers 2017 <https://www.troopers.de/troopers17/> conference.
I've been presenting at TelcoSecDay in previous years and hence was
again invited to join (as attendee). The event has really gained quite
some traction. Where early on the audience was mostly the IT security /
hacker crowd, the number of participants from the operator (and, to a
smaller extent, also equipment maker) industry has been growing.
The quality of talks was great, and I enjoyed meeting various familiar
faces. It's just a pity that it's only a single day - plus I had to
head back to Berlin the same day, so I had to skip the dinner + social
event.
When attending events like this, and seeing the interesting hacks that
people are working on, it pains me a bit that I haven't really been
doing much security work in recent years. netfilter/iptables was at
least somewhat security related. My work on OpenPCD / librfid was
clearly RFID security oriented, as was the work on airprobe,
OsmocomTETRA, or even the EasyCard payment system hack.
I have the same feeling when attending Linux kernel development related
events. I have very fond memories of working in both fields, and it was
a lot of fun. Also, to be honest, I believe that the work in Linux
kernel land and the general IT security research was/is appreciated much
more than the endless months and years I'm now spending on improving
and extending the Osmocom cellular infrastructure stack.
Beyond the appreciation, it's also the fact that both the IT security
and the Linux kernel communities are much larger. There are more
people to learn from and learn with, to engage in discussions and
ping-pong ideas. In Osmocom, the community is too small (and I have the
feeling, it's actually shrinking), and in many areas it rather seems
like I am the "ultimate resource" to ask, whether about 3GPP specs or
about Osmocom code structure. What I'm missing is the feeling of being
part of a bigger community. So in essence, my current role in the "Open
Source Cellular" corner can be a very lonely one.
But hey, I don't want to sound more depressed than I am; this was
supposed to be a post about TelcoSecDay. It just happens that attending
IT Security and/or Linux Kernel events makes me somewhat gloomy for the
reasons outlined above.
Meanwhile, if you have some interesting projects/ideas at the border
between cellular protocols/systems and security, I'd of course love to
hear if there's some way to get my hands dirty in that area again :)
As we can read in recent news, VMware has become a gold member of the
Linux Foundation. That gives me - to say the least - very mixed feelings.
One thing to keep in mind: The Linux Foundation is an industry
association, it exists to act in the joint interest of its paying
members. It is not a charity, and it does not act for the public good.
I know and respect that, while some people sometimes appear to be
confused about its function.
However, allowing an entity like VMware to join, despite their many
years of disrespect for the most basic principles of the FOSS
community (such as: following the GPL and its copyleft principle),
really is hard to understand and accept.
I wouldn't have any issue if VMware had (prior to joining the LF)
said: Ok, we had some bad policies in the past, but now we fully comply
with the license of the Linux kernel, and we release all
derivative/collective works in source code. That would have been a
positive spin: acknowledge past issues, resolve them, become clean, and
then publicly underline your support of Linux by (among other things)
joining the Linux Foundation. I'm not one to hold grudges against
people who accept their past mistakes, fix the present and then move
on. But no, they haven't fixed any issues.
They have had one of the worst track records in terms of intentional
GPL compliance issues for many years, showing outright disrespect for
Linux, the GPL and ultimately the rights of the Linux developers. Not
resolving those issues while at the same time joining the Linux
Foundation? What kind of message does that send?
It sends the following messages:
- you can abuse Linux, the GPL and copyleft while still being accepted
amidst the Linux Foundation Members
- it means the Linux Foundation has no ethical concerns whatsoever
about accepting such entities without previously asking them to become
compliant
- it also means that VMware has still not understood that Linux and FOSS
is about your actions, particularly the kind of choices you make on how
to technically work with the community, and not against it.
So all in all, I think this move has seriously damaged the image of both
entities involved. I wouldn't have expected different of VMware, but I
would have hoped the Linux Foundation had some form of standards as to
which entities they permit amongst their ranks. I guess I was being
overly naive :(
It's a slap in the face of every developer who writes code not because
he gets paid, but because it is rewarding to know that copyleft will
continue to ensure the freedom of related code.
UPDATE (March 8, 2017):
I was mistaken in my original post: VMware didn't just join, but was a
Linux Foundation member already before; it is "just" their upgrade from
silver to gold that made the news recently. I stand corrected. Still,
that doesn't make it any better that they are involved inside the LF
while stepping over the lines of license compliance.
UPDATE2 (March 8, 2017):
As some people pointed out, there is no verdict against VMware. Yes,
that's true. But the mere fact that they would rather distribute
derivative works of GPL licensed software and fight it out in court
with an armada of lawyers (instead of simply complying with the license
like everyone else) is sad enough. By the time there is a final
verdict, the product will be EOL. That's probably their strategy to
begin with :/
I always thought I understood UMTS AKA (authentication and key
agreement), including the re-synchronization procedure. It's been years
since I wrote tools like osmo-sim-auth which you can use to
perform UMTS AKA with a SIM card inserted into a PC reader, i.e.
simulate what happens between the AUC (authentication center) in a
network and the USIM card.
However, it is only now, as the sysmocom team works on 3G support in
the dedicated OsmoHLR (outside of OsmoNITB!), that I seem to understand
all the nasty little details.
I always thought for re-synchronization it is sufficient to simply
increment the SQN (sequence number). It turns out it isn't, as there
is an MSB portion called SEQ and a lower-bit portion called IND, used
for a fancy array-indexing scheme that tracks the highest-used SEQ
within each IND bucket.
If you're interested in all the dirty details and associated spec
references (they always hide the important parts in some Annex), see the
discussion between Neels and me in Osmocom redmine issue 1965.
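The SEQ/IND split can be illustrated with a short sketch. This is my own illustrative Python (not Osmocom code), assuming the common choice of a 5-bit IND within the 48-bit SQN as described in 3GPP TS 33.102 Annex C:

```python
# Illustrative sketch of the SEQ/IND split of the 48-bit UMTS SQN.
# Assumption: IND is the 5 low-order bits (a typical, not mandated,
# choice). The AUC keeps an array of the highest SEQ used per IND
# bucket instead of one flat counter.

IND_BITS = 5
IND_MASK = (1 << IND_BITS) - 1

def split_sqn(sqn):
    """Split a 48-bit SQN into its (SEQ, IND) components."""
    return sqn >> IND_BITS, sqn & IND_MASK

def next_sqn(seq_per_ind, ind):
    """Generate the next SQN for a given IND bucket: take the highest
    SEQ used in any bucket so far, increment it, and record it in
    this bucket's slot."""
    seq = max(seq_per_ind) + 1
    seq_per_ind[ind] = seq
    return (seq << IND_BITS) | ind

buckets = [0] * (1 << IND_BITS)
sqn = next_sqn(buckets, ind=3)
assert split_sqn(sqn) == (1, 3)
```

Naively incrementing the whole SQN by one would silently change the IND and thus land the value in the wrong bucket, which is exactly the subtlety discussed above.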
For those of you who don't know what the tinkerphones/OpenPhoenux GTA04 is: It is a
'professional hobbyist' hardware project (with at least public
schematics, even if not open hardware in the sense that editable
schematics and PCB design files are published) creating updated
mainboards that can be used to upgrade Openmoko phones. They fit into
the same enclosure and can use the same display/speaker/microphone.
What the GTA04 guys have been doing for many years is close to a miracle
anyway: Trying to build a modern-day smartphone in low quantities,
using off-the-shelf components available in those low quantities, and
without a large company with its associated financial backing.
Smartphones are complex because they are highly integrated devices. A
seemingly unlimited number of components is squeezed into the tiniest
form-factors. This leads to complex circuit boards with many layers
that take a lot of effort to design, and are expensive to build in low
quantities. The fine-pitch components mandated by the integration
density are another issue.
Building the original GTA01 (Neo1973) and GTA02 (FreeRunner) devices at
Openmoko, Inc. must seem like a piece of cake compared to what the GTA04
guys are up to. We had a team of engineers that were at least familiar
with feature phone design before, and we had the backing of a consumer
electronics company with all its manufacturing resources and expertise.
Nevertheless, a small group of people around Dr. Nikolaus Schaller has
been pushing the limits of what you can do in a small just-for-fun
project, and they have my utmost respect. Well done!
Unfortunately, there is bad news. Manufacturing of their latest
generation of phones (GTA04A5) has been stopped due to massive soldering
problems with the TI OMAP3 package-on-package (PoP).
Those PoPs are basically "RAM chip soldered onto the CPU, and the stack
of both soldered to the PCB". This is used to save PCB footprint and to
avoid having to route tons of extra (sensitive, matched) traces between
the SDRAM and the CPU.
According to the mailing list posts, it seems to be incredibly difficult
to solder the PoP stack due to the way TI has designed the packaging of
the DM3730. If you want more gory details, see the posts on their
mailing list.
It is very sad to see that what appears to be bad design choices at TI
are going to bring the GTA04 project to a halt. The financial hit by
having only 33% yield is already more than the small community can take,
let alone unused parts that are now in stock or even thinking about
further experiments related to the manufacturability of those chips.
If there's anyone with hands-on manufacturing experience on the DM3730
(or similar) TI PoP reading this: Please reach out to the GTA04 guys and
see if there's anything that can be done to help them.
UPDATE (March 8, 2017):
In an earlier post I was asserting that the GTA04 is open hardware
(which I actually believed up to that point), until some readers pointed
out to me that it isn't. It's sad it isn't, but it still comes with
public schematics.
The recent Amazon S3 outage should make a strong argument that centralized services have severe issues, technically but also from a business point of view (you don’t own the destiny of your own product!), and I wholeheartedly agree with “There is no cloud, it’s only someone else’s computer”.
Still, from time to time I like to see beyond my own nose (and I prefer the German version of that proverb!). The current exploration involves ReactJS (which I like), Tensorflow (which I don’t have enough time for), and generally looking at Docker/Mesos/Kubernetes to manage services with zero-downtime rolling updates. I have browsed and read the documentation over the last year, like the concepts (services, replication controllers, pods, agents, masters) and have planned how to use it, but because it doesn’t support SCTP I never looked into actually using it.
Microsoft Azure has the Azure Container Services and since end of February it is possible to create Kubernetes clusters. This can be done using the v2 of the Azure CLI or through the portal. I finally decided to learn some new tricks.
Azure asks for a clientId and password; I entered garbage and hoped the necessary accounts would be created. It turns out that, first, the portal does not create them and does no sanity check of these credentials, and second, when booting, the master will not start properly. The Microsoft support was very efficient and quick to point that out; I wish the portal would do a sanity check, though. So make sure to create a service principal first and use it correctly. I ended up creating it on the CLI.
I re-created the cluster and executed kubectl get nodes. It started to look better, but one agent was missing from the list of nodes. After logging in I noticed that kubelet was not running. Trying to start it by hand showed that docker.service was missing. Why it is missing is probably for Microsoft engineering to figure out, but the Microsoft support gave me:
sudo rm -rf /var/lib/cloud/instances
sudo cloud-init -d init
sudo cloud-init -d modules -m config
sudo cloud-init -d modules -m final
sudo systemctl restart kubelet
After these commands my system had a docker.service, kubelet would start and the agent was listed as a node. Commands like kubectl expose are well integrated and use a public IPv4 address that is different from the one used for ssh/management. So all in all it was quite easy to get a cluster up, and I am sure that some of the hick-ups will be fixed…
In May 2016 we got the GTP-U tunnel encapsulation/decapsulation
module developed by Pablo Neira, Andreas Schultz and myself merged into
the 4.8.0 mainline kernel.
During the second half of 2016, the code basically stayed untouched. In
early 2017, several patch series of (at least) three authors have been
published on the netdev mailing list for review and merge.
This poses the very valid question of how we test those (sometimes
quite intrusive) changes. Setting up a complete cellular network with
either GPRS/EGPRS or even UMTS/HSPA is possible using OsmoSGSN and
related Osmocom components. But it's of course a luxury that not many
Linux kernel networking hackers have, as it involves the availability of
a supported GSM BTS or UMTS hNodeB. And even if that is available,
there's still the issue of having a spectrum license, or a wired setup
with coaxial cable.
So as part of the recent discussions on netdev, I tested and described a
minimal test setup using libgtpnl, OpenGGSN and sgsnemu.
This setup will start a mobile station + SGSN emulator inside a Linux
network namespace, which talks GTP-C to OpenGGSN on the host, as well as
GTP-U to the Linux kernel GTP-U implementation.
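For readers unfamiliar with what actually travels on the GTP-U side of such a setup: the framing is just a small header in front of the inner user-plane IP packet. A minimal sketch of the mandatory 8-byte GTPv1-U header (my own illustration; field layout per 3GPP TS 29.281, real implementations also handle the optional sequence number and extension headers):

```python
import struct

def gtpu_encap(teid, inner_packet):
    """Prepend a minimal GTPv1-U header to an inner (user) IP packet.
    Flags 0x30: version=1, protocol type=GTP, no E/S/PN option bits.
    Message type 0xFF: G-PDU, i.e. a T-PDU carrying user data.
    The length field counts only the payload when no options follow."""
    return struct.pack('!BBHI', 0x30, 0xFF, len(inner_packet), teid) \
        + inner_packet

frame = gtpu_encap(0x1234, b'\x45' + b'\x00' * 19)  # dummy IPv4 header
assert frame[:2] == b'\x30\xff'
```

The TEID in that header is what the kernel module uses to map each tunneled packet to the right PDP context.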
In case you're interested, feel free to check the following wiki page:
This is of course just for manual testing, and for functional (not
performance) testing only. It would be great if somebody would pick up
on my recent mail containing some suggestions about an automatic
regression testing setup for the kernel GTP-U code. I have way
too many spare-time projects in desperate need of some attention to work
on this myself. And unfortunately, none of the telecom operators (who
are the ones benefiting most from a Free Software accelerated GTP-U
implementation) seems to be interested in at least co-funding or
otherwise contributing to this effort :/
Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.
So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.
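For those unfamiliar with the metric, WER is the word-level edit distance between the recognised text and the reference transcript, divided by the number of reference words. A small self-contained sketch of the standard definition (my own code, not the project's implementation):

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + insertions + deletions)
    divided by the number of words in the reference, computed via a
    rolling-array Levenshtein distance over words."""
    r, h = reference.split(), hypothesis.split()
    d = list(range(len(h) + 1))          # distances for the empty prefix
    for i in range(1, len(r) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(h) + 1):
            cur = d[j]
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[j] = min(cur + 1,              # deletion
                       d[j - 1] + 1,         # insertion
                       prev_diag + cost)     # substitution or match
            prev_diag = cur
    return d[len(h)] / len(r)

assert wer("the cat sat", "the bat sat") == 1 / 3  # one substitution
```

A 10% WER target thus means roughly one word-level error per ten reference words.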
You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.
The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.
We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need a powerful GPU(s) with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.
One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.
Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.
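As an aside, the core idea behind the quantisation experiment is simple enough to show in a few lines. This is a generic sketch of symmetric linear int8 quantisation (my own illustration, not the actual TensorFlow code), together with the kind of round-trip consistency check described above:

```python
import numpy as np

def quantize_int8(w):
    """Map a float weight array onto int8 with one shared scale
    (symmetric linear quantisation); returns (int8 array, scale)."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_int8(w)
# round-trip error must stay within half a quantisation step
assert np.max(np.abs(dequantize(q, scale) - w)) <= scale / 2 + 1e-6
```

Each weight shrinks from 32 bits to 8 plus one shared scale per tensor, which is where the model-size savings come from.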
Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.
Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.
I've recently attended a seminar that (among other topics) also covered
RF interference hunting. The speaker was talking about various
real-world cases of RF interference and illustrating them in detail.
Of course everyone who has any interest in RF or cellular will know
about the fundamental issues of radio frequency interference. For the
most part, you have
- cells of the same operator interfering with each other due to too
frequent frequency re-use, adjacent channel interference, etc.
- cells of different operators interfering with each other due to
intermodulation products and the like
- cells interfering with cable TV, terrestrial TV
- DECT interfering with cells
- cells or microwave links interfering with SAT-TV reception
- all types of general EMC problems
But what the speaker of this seminar covered was actually a cellular
base-station being re-broadcast all over Europe via a commercial
satellite.
It is a well-known fact that most satellites in the sky are basically
just "bent pipes", i.e. they consist of a RF receiver on one frequency,
a mixer to shift the frequency, and a power amplifier. So basically
whatever is sent up on one frequency to the satellite gets
re-transmitted back down to earth on another frequency. This is abused
by "satellite hijacking" or "transponder hijacking" and has been covered
for decades in various publications.
Ok, but how does cellular relate to this? Well, apparently some people
are running VSAT terminals (bi-directional satellite terminals) with
improperly shielded or broken cables/connectors. In that case, the RF
emitted from a nearby cellular base station leaks into that cable, and
will get amplified + up-converted by the block up-converter of that
VSAT terminal. The bent-pipe satellite subsequently picks this signal up and
re-transmits it all over its coverage area!
I've tried to find some public documents about this, and there's
surprisingly little public information about the phenomenon.
However, I could find a slide set from SES, presented at a meeting of
the Satellite Interference Reduction Group: Identifying Rebroadcast (GSM)
It describes a surprisingly manual and low-tech approach at hunting down
the source of the interference by using an old Nokia net-monitor phone
to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were
already open source projects such as airprobe that could have done the
job based on sampled IF data. And I'm not even starting to consider
It should be relatively simple to have a SDR that you can tune to a
given satellite transponder, and which then would look for any
GSM/UMTS/LTE carrier within its spectrum and dump their identities in a
fully automatic way.
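The detection stage of such a scanner is not much code. Here's a hedged sketch (my own illustration on synthetic data, no real SDR API assumed) of the first step, finding carriers that stick out of a transponder's noise floor; classifying and decoding each hit is where airprobe-style tooling would come in:

```python
import numpy as np

def find_carriers(iq, sample_rate, threshold_db=20.0, nfft=4096):
    """Return frequency offsets (Hz) of spectral bins that exceed the
    median noise floor by threshold_db. A real scanner would then try
    to classify each hit (GSM/UMTS/LTE) and decode its identity."""
    spectrum = np.fft.fftshift(np.fft.fft(iq[:nfft]))
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1.0 / sample_rate))
    return freqs[power_db > np.median(power_db) + threshold_db]

# synthetic check: one strong tone at +200 kHz buried in noise
rng = np.random.default_rng(0)
fs = 1_000_000
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 200_000 * t) + 0.01 * (
    rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
hits = find_carriers(iq, fs)
assert any(abs(f - 200_000) < 1000 for f in hits)
```

With real samples one would of course window and average multiple FFTs, but the principle stays the same.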
But then, maybe it really doesn't happen often enough after all to
justify such a development...
In the good old days, ever since the late 1980s - and to a surprising
extent even today - telecom signaling traffic has been carried over
circuit-switched SS7, with its TDM lines as the physical layer, rather
than over an IP/Ethernet based transport.
When Holger first created OsmoBSC, the BSC-only version of OpenBSC,
some 7-8 years ago, he needed to implement a minimal subset of SCCP
wrapped in TCP called SCCP Lite. This was due to the simple fact that
the MSC against which it had to operate implemented this non-standard
protocol stacking, which was developed + deployed before the IETF
SIGTRAN WG specified M3UA or SUA. But even after those were specified
in 2004, 3GPP didn't specify how to carry A over IP in a standard way
until the end of 2008, when a first A-interface-over-IP study appeared.
As time passes, more modern MSCs of course still implement classic
circuit-switched SS7, but appear to have dropped SCCPlite in favor of
real AoIP as specified by 3GPP meanwhile. So it's time to add this to
the osmocom universe and OsmoBSC.
A couple of years ago (2010-2013) I implemented both classic SS7
(MTP2/MTP3/SCCP) as well as SIGTRAN stackings (M2PA/M2UA/M3UA/SUA) in
Erlang. The result has been used in some production deployments, but
only with a relatively limited feature set. Unfortunately, this code
has not received any contributions in the time since, and I have to say
that as an open source community project, it has failed. Also, while
Erlang might be fine for core network equipment, running it on a BSC
really is overkill. Keep in mind that we often run OpenBSC on
really small ARM926EJ-S based embedded systems, much more resource
constrained than any smartphone of the past decade.
In the meantime (2015/2016) we also implemented some minimal SUA support
for interfacing with UMTS femto/small cells via Iuh (see OsmoHNBGW).
So in order to proceed to implement the required
SCCP-over-M3UA-over-SCTP stacking, I originally thought: well, take
Holger's old SCCP code, detach it from the IPA multiplex below, and
stack it on top of a new M3UA codebase partially copied from the SUA
code.
However, this falls short of the goals in several ways:
- The application shouldn't care whether it runs on top of SUA or SCCP,
it should use a unified interface towards the SCCP Provider.
OsmoHNBGW and the SUA code already introduce such an interface based on
the SCCP-User-SAP implemented using Osmocom primitives (osmo_prim).
However, the old OsmoBSC/SCCPlite code doesn't have such abstraction.
- The code should be modular and reusable for other SIGTRAN stackings
as required in the future.
So I found myself sketching out what needed to be done, and I ended up
with pretty much a re-implementation of large parts. Not quite fun, but
definitely worth it.
The strategy is:
And then finally stack all those bits on top of each other, rendering a
fairly clean and modern implementation that can be used with the IuCS of
the virtually unmodified OsmoHNBGW, OsmoCSCN and OsmoSGSN for testing.
Next steps in the direction of AoIP are:
- Implementation of the MTP-SAP based on the IPA transport
- Binding the new SCCP code on top of that
- Converting the OsmoBSC code base to use the SCCP-User-SAP for its A
interface
From that point onwards, OsmoBSC doesn't care anymore whether it
transports the BSSAP/BSSMAP messages of the A interface over
SCCP/IPA/TCP/IP (SCCPlite), SCCP/M3UA/SCTP/IP (3GPP AoIP), or even
something like SUA/SCTP/IP.
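The idea of that unified SCCP-User-SAP can be sketched as follows. This is illustrative Python, not the actual osmo_prim C API; all class and method names are hypothetical, chosen only to show that the BSC code talks to one abstract SAP while the stacking underneath is interchangeable:

```python
# Illustrative sketch (not the real osmo_prim API) of a unified
# SCCP-User SAP: the application issues the same primitives no matter
# whether the provider runs SCCP/M3UA, SUA or SCCPlite underneath.

class SccpProvider:
    """Abstract SCCP service access point (hypothetical)."""
    def n_connect_req(self, called, calling, data):
        raise NotImplementedError

class M3uaSccpProvider(SccpProvider):
    def n_connect_req(self, called, calling, data):
        # would hand the message to an SCCP/M3UA/SCTP stack here
        return ("SCCP/M3UA/SCTP", called, data)

class SuaProvider(SccpProvider):
    def n_connect_req(self, called, calling, data):
        # would hand the message to a SUA/SCTP stack here
        return ("SUA/SCTP", called, data)

def bsc_send_bssmap(provider: SccpProvider, msg: bytes):
    # The BSC code only ever talks to the SAP; the stacking below is
    # chosen at configuration time.
    return provider.n_connect_req(called="msc", calling="bsc", data=msg)

print(bsc_send_bssmap(M3uaSccpProvider(), b"\x00")[0])  # SCCP/M3UA/SCTP
print(bsc_send_bssmap(SuaProvider(), b"\x00")[0])       # SUA/SCTP
```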
However, the 3GPP AoIP specs (unlike SCCPlite) actually modify the
BSSAP/BSSMAP payload. Rather than using Circuit Identifier Codes and
then mapping the CICs to UDP ports based on some secret conventions,
they actually encapsulate the IP address and UDP port information for
the RTP streams. This is of course the cleaner and more flexible
approach, but it means we'll have to do some further changes inside the
actual BSC code to accommodate this.
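The difference can be illustrated with a toy example. The CIC-to-port convention shown here is a made-up placeholder, which is exactly the point: with SCCPlite both sides must share some secret convention, while with AoIP the endpoint travels in the message itself:

```python
# Sketch of the difference described above. With SCCPlite, the RTP/UDP
# port is derived from the Circuit Identifier Code by convention; with
# 3GPP AoIP, the BSSMAP message itself carries IP address and port.
# The base port and the formula are made-up placeholders.

RTP_BASE_PORT = 4000  # assumed site-specific convention, not a standard

def sccplite_rtp_endpoint(msc_ip, cic):
    # port implied by CIC -> both sides must share the secret convention
    return (msc_ip, RTP_BASE_PORT + 2 * cic)

def aoip_rtp_endpoint(transport_layer_address):
    # address and port arrive inside the BSSMAP payload itself
    return transport_layer_address

print(sccplite_rtp_endpoint("10.0.0.1", 3))       # ('10.0.0.1', 4006)
print(aoip_rtp_endpoint(("192.168.1.9", 16384)))  # ('192.168.1.9', 16384)
```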
When implementing any kind of communication protocol, one always dreams
of some existing test suite that one can simply run against the
implementation to check if it performs correctly in at least those use
cases that matter to the given application.
Of course in the real world, there rarely are protocols where this is
true. If test specifications exist at all, they are often just very
abstract texts for human consumption that you as the reader have to
interpret and implement yourself.
For some (by far not all) of the protocols found in cellular networks,
I have every so often seen formal/abstract machine-parseable test
specifications. Sometimes it was TTCN-2, and sometimes TTCN-3.
If you haven't heard about TTCN-3, it is basically a way to create
functional tests in an abstract description (textual + graphical), and
then compile that into an actual executable test suite that you can run
against the implementation under test.
However, when I last did some research into this several years ago, I
couldn't find any Free / Open Source tools to actually use those
formally specified test suites. This is not a big surprise, as even
much more fundamental tools for many telecom protocols are missing, such
as good/complete ASN.1 compilers, or even CSN.1 compilers.
To my big surprise I now discovered that Ericsson has released their
(formerly internal) TITAN TTCN3 Toolset
as Free / Open Source Software under EPL 1.0. The project is even part
of the Eclipse Foundation. Now I'm certainly not a friend of Java or
Eclipse by any means, but well, for just running tests I certainly won't
complain.
The project also doesn't seem like a one-time code drop, but appears
very active, with many repositories on github. For example, the core
module, titan.core, shows
plenty of activity on an almost daily basis. Also, binary releases for
a variety of distributions are made available. They
even have a video showing the installation ;)
If you're curious about TTCN-3 and TITAN, Ericsson also have made
available a great 200+ pages slide set about TTCN-3 and TITAN.
I haven't yet had time to play with it, but it definitely is rather high
on my TODO list to try.
ETSI provides a couple of test suites in TTCN-3 for protocols like
DIAMETER, GTP2-C, DMR, IPv6, S1AP, LTE-NAS, 6LoWPAN, SIP, and others at
http://forge.etsi.org/websvn/ (It's also the first time I've seen that
ETSI has an SVN server. Everyone else is using git these days, but yes,
a revision control system rather than periodic ZIP files is definitely
big progress. They should do that for their reference codecs and ASN.1
files, too.)
I'm not sure when I'll get around to it. Sadly, there is no TTCN-3 for
SCCP, SUA, M3UA or any SIGTRAN related stuff, otherwise I would want to
try it right away. But it definitely seems like a very interesting
technology (and tool).
Last weekend I had the pleasure of attending FOSDEM 2017. For many
years it has been probably the most exciting event dedicated
exclusively to Free Software.
My personal highlights (next to meeting plenty of old and new friends)
in terms of the talks were:
I attended, but was not so excited by, Georg Greve's OpenPOWER talk. It was a
great talk, and it is an important topic, but the engineer in me would
have hoped for some actual beefy technical stuff. But well, I was just
not the right audience. I had heard about OpenPOWER quite some time ago
and have been following it from a distance.
The LoRaWAN talk
couldn't have been any less technical, despite listing technical,
political and cultural aspects in its title. But then, just recently
33C3 had the most exciting LoRa PHY reverse engineering talk by Matt.
Other talks whose recordings I still want to watch one of these days:
I'm very happy that in 2017, we will have the first ever technical
conference on the Osmocom cellular infrastructure projects.
For many years, we have had a small, invitation only event by Osmocom
developers for Osmocom developers called OsmoDevCon. This was fine for
the early years of Osmocom, but during the last few years it became
apparent that we also need a public event for our many users. Those
range from commercial cellular operators to community based efforts like
Rhizomatica, and of course include the many
research/lab type users with whom we started.
So now we'll have the public OsmoCon on April 21st, back-to-back with
the invitation-only OsmoDevCon from April 22nd through 23rd.
I'm hoping we can bring together a representative sample of our user
base at OsmoCon 2017 in April. Looking forward to meeting you all. I
hope you're also curious to hear more from other users, and of course
from the developers.
A few days ago, Autodesk announced
that the popular EAGLE electronic design automation (EDA) software is
moving to a subscription-based model.
Where previously you paid once for a license and could use that
version/license as long as you wanted, there now is a monthly
subscription fee. Once you stop paying, you lose the right to use the
software. Welcome to the brave new world.
I have remotely observed this subscription model as a general trend in
the proprietary software universe. So far it hasn't affected me at all,
as the only two proprietary applications I use on a regular basis
during the last decade are IDA and EAGLE.
I already have ethical issues with using non-free software, but those
two cases have been the exceptions, in order to get to the productivity
required by the job. While I can somehow appease my conscience in
those two cases that it's OK, using software under a subscription model is
completely out of the question, period. Not only would I end up paying
for the rest of my professional career in order to be able to open and
maintain old design files, but I would also have to accept software that
"calls home" and has "remote kill" features. This is clearly not
something I would ever want to use on any of my computers. Also, I
don't want software to be associated with any account, and it's not the
bloody business of the software maker to know when and where I use
their software.
For me - and I hope for many, many other EAGLE users - this move is
utterly unacceptable and certainly marks the end of any business between
the EAGLE makers and myself and/or my companies. I will happily use
my current "old-style" EAGLE 7.x licenses for the near future, and then
see what kind of improvements I would need to contribute to KiCAD or
other FOSS EDA software in order to eventually migrate to those.
As expected, this doesn't only upset me, but many other customers, some
of whom have been loyal EAGLE users for many years if not decades,
back to the DOS version. This is reflected by some media reports (like
this one at hackaday)
and user posts at element14.com or eaglecentral.ca
which are similarly critical of this move.
Rest in Peace, EAGLE. I hope Autodesk gets what they deserve: A new
influx of migrations away from EAGLE into the direction of Open Source
EDA software like KiCAD.
In fact, the more I think about it, I'm actually very much inclined to
work on good FOSS migration tools / converters - not only for my own
use, but to help more people move away from EAGLE. It's not that I
don't have enough projects at my hand already, but at least I'm
motivated to do something about this betrayal by Autodesk. Let's see
what (if any) will come out of this.
So let's see it this way: what Autodesk is doing is raising the level
of pain of using EAGLE so high that more people will use and contribute
to FOSS EDA software. And that is actually a good thing!
I've just had the pleasure of attending all four days of 33C3 and have returned
home with somewhat mixed feelings.
I've been a regular visitor and speaker at CCC events since 15C3 in
1998, which among other things
means I'm an old man now. But I digress ;)
The event has come extremely far in those years. And to be honest, I
struggle with the size. Back then, it was a meeting of like-minded
hackers. You had the feeling that you know a significant portion of the
attendees, and it was easy to connect to fellow hackers.
These days, both the number of attendees and the size of the event make
you feel much rather that you're in general public, rather than at some
meeting of fellow hackers. Yes, it is good to see that more people are
interested in what the CCC (and the selected speakers) have to say, but
somehow it comes at the price that I (and I suspect other old-timers)
feel less at home. It feels too much like various other technology
events.
One aspect creating a certain feeling of estrangement is also the venue
itself. There are an incredible number of rooms, with a labyrinth of
hallways, stairs, lobbies, etc. The size of the venue simply makes it
impossible to _accidentally_ run into all of your fellow
hackers and friends. If I want to meet somebody, I have to make an
explicit appointment. That is an option that exists most of the rest of
the year, too.
While fefe is happy about the many small children attending
the event, to me this seems
somewhat alien and possibly inappropriate. I guess from the teenage
years onward it certainly makes sense, as they can follow the talks and
participate in the workshops. But below that age?
The range of topics covered at the event also becomes wider, at least I
feel that way. Topics like IT security, data protection, privacy,
intelligence/espionage and learning about technology have always been
present during all those years. But these days we have bloggers sitting
on stage and talking about bottles of wine (seriously?).
Contrary to many, I also really don't get the excitement about shows
like 'Methodisch Inkorrekt'. To me it seems like mainstream-compatible
entertainment in the spirit of the 1990s Knoff-Hoff-Show, without much
potential to make the audience want to dig deeper into (information)
technology.
Yesterday, together with Holger 'zecke' Freyther, I co-presented at 33C3 about
Dissecting modern (3G/4G) cellular modems.
This presentation covers some of our recent explorations into a specific
type of 3G/4G cellular modems, which next to the regular modem/baseband
processor also contain a Cortex-A5 core that (unexpectedly) runs Linux.
We want to use such modems for building self-contained M2M devices that
run the entire application inside the modem itself, without any external
needs except electrical power, SIM card and antenna.
Next to that, they also pose an ideal platform for testing the Osmocom
network-side projects for running GSM, GPRS, EDGE, UMTS and HSPA
networks.
You can find the Slides
and the Video recordings
in case you're interested in more details about our work.
The results of our reverse engineering can be found in the wiki at
http://osmocom.org/projects/quectel-modems/wiki together with links to
the various git repositories containing related tools.
As with all the many projects that I happen to end up doing, it would be
great to get more people contributing to them. If you're interested in
cellular technology and want to help out, feel free to register at the
osmocom.org site and start adding/updating/correcting information in
the wiki.
You can e.g. help by
- playing with the modem and documenting your findings
- reviewing the source code released by Qualcomm + Quectel and
documenting your findings
- help us to create a working OE build with our own kernel and rootfs
images as well as opkg package feeds for the modems
- help reverse engineering DIAG and QMI protocols as well as the open
source programs to interact with them
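As a starting point for anyone wanting to help with the DIAG reverse engineering: DIAG is commonly described as using HDLC-style framing (payload followed by a 16-bit FCS, 0x7D escaping, 0x7E frame terminator). The following hedged Python sketch is based on that common description, not on our wiki; treat the details as assumptions to be verified against real captures:

```python
# Hedged sketch of HDLC-style framing as commonly described for the
# Qualcomm DIAG protocol: payload + 16-bit FCS (RFC 1662 / CRC-16-X.25
# style), 0x7D escaping, 0x7E terminator. Verify against real captures.

def fcs16(data: bytes) -> int:
    """Reflected CRC-16 (poly 0x8408, init 0xFFFF, final XOR 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def diag_frame(payload: bytes) -> bytes:
    """Append FCS (LSB first), escape 0x7D/0x7E, terminate with 0x7E."""
    crc = fcs16(payload)
    raw = payload + bytes([crc & 0xFF, crc >> 8])
    escaped = bytearray()
    for b in raw:
        if b in (0x7D, 0x7E):
            escaped += bytes([0x7D, b ^ 0x20])
        else:
            escaped.append(b)
    return bytes(escaped) + b"\x7e"

# CRC-16/X-25 reference check value over "123456789" is 0x906E
print(hex(fcs16(b"123456789")))  # 0x906e
```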
In 2016, Osmocom gained initial 3.5G support with osmo-iuh and the Iu
interface extensions of our libmsc and OsmoSGSN code. This means you can
run your own small open source 3.5G cellular network for SMS, Voice and
Data services.
However, the project needs more contributors: Become an active
member in the Osmocom development community and get your nano3G
femtocell for free.
I'm happy to announce that my company sysmocom hereby issues a call for
proposals to the general public. Please describe in a short proposal
how you would help us improve the Osmocom project if you were to
receive one of those free femtocells.
Details of this proposal can be found at
Please contact mailto:firstname.lastname@example.org in case of any
questions.
When you work with GSM/cellular systems, the definitive resource is the
specifications. They were originally released by ETSI, later by 3GPP.
The problems start with the fact that there are separate numbering
schemes. Everyone in the cellular industry I know always uses the
GSM/3GPP TS numbering scheme, i.e. something like 3GPP TS 44.008.
However, ETSI assigns its own numbers to the specs, like ETSI TS
144008. Now in most cases, it is as simple as removing the '.' and
prefixing a '1'. However, that's not always true and there
there are exceptions such as 3GPP TS 01.01 mapping to ETSI TS
101855. To make things harder, there doesn't seem to be a
machine-readable translation table between the spec numbers, but there's
a website for spec number conversion at http://webapp.etsi.org/key/queryform.asp
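The happy-path conversion described above is trivial to script; the trouble is the exceptions, which is exactly why a machine-readable table would be so useful. A minimal sketch (the function name is mine, and the exception table below contains only the single case mentioned above, so treat it as incomplete by construction):

```python
# Sketch of the usual 3GPP -> ETSI spec number mapping: drop the dot and
# prefix a '1'. Exceptions exist (e.g. 3GPP TS 01.01 -> ETSI TS 101855,
# as noted above), so a real tool needs a proper exception table; this
# one holds only that single known case.

EXCEPTIONS = {"01.01": "101855"}

def gpp_to_etsi(spec: str) -> str:
    """Convert a 3GPP TS number like '44.008' to an ETSI TS number."""
    if spec in EXCEPTIONS:
        return EXCEPTIONS[spec]
    return "1" + spec.replace(".", "")

print(gpp_to_etsi("44.008"))  # 144008
print(gpp_to_etsi("01.01"))   # 101855
```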
When I started to work on GSM related topics somewhere between my work
at Openmoko and the start of the OpenBSC project, I manually downloaded
the PDF files of GSM specifications from the ETSI website. This was a
cumbersome process, as you had to enter the spec number (e.g. TS 04.08)
in a search window, look for the latest version in the search results,
click on that and then click again for accessing the PDF file (rather
than a proprietary Microsoft Word file).
At some point a poor girlfriend of mine was kind enough to do this
manual process for each and every 3GPP spec, and then create a
corresponding symbolic link, so that you could type something like evince
/space/openmoko/gsm-specs/by_chapter/44.008.pdf into your command line
and get instant access to the respective spec.
However, of course, this gets out of date over time, and by now almost a
decade has passed without a systematic update of that archive.
To the rescue, 3GPP started quite some time ago to not only provide
the obnoxious M$ Word DOC files, but also deep links to ETSI. So you
could go to http://www.3gpp.org/DynaReport/44-series.htm, click
on 44.008, and one further click gave you the desired PDF, served by
ETSI (3GPP apparently never provided PDF files).
However, in their infinite wisdom, at some point in 2016 the 3GPP
webmaster decided to remove those deep links. Rather than a nice long
list of released versions of a given spec,
http://www.3gpp.org/DynaReport/44008.htm now points to some crappy
page from which you then get a ZIP file with a single Word DOC file
inside. You can hardly make it any more inconvenient and cumbersome.
The PDF links would open directly in your favorite PDF viewer: a single
click to the information you want. But no, the PDF links had to go,
replaced with ZIP file downloads that you first need to extract and
then open in something like LibreOffice, which takes ages to load the
document and renders it improperly in a word processor. I don't want to
edit the spec, I want to read it, sigh.
So since the usability of this 3GPP specification resource had been
artificially crippled, I was sufficiently annoyed to come up with a
workaround:
- first create a complete mirror of all ETSI TS (technical
specifications) by using a recursive wget on
- then use a shell script that utilizes pdfgrep and awk to determine the
3GPP specification number (it is written in the title on the first
page of the document) and create a symlink. Now I have something
like 44.008-4.0.0.pdf -> ts_144008v040000p.pdf
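The symlink-naming step can be sketched as follows. This is not the actual script from the repository linked below, just an illustration of the idea: given the text of a PDF's first page (as pdfgrep would print it), pull out the 3GPP spec number and version and derive a friendly filename. The title format assumed here ("3GPP TS 44.008 version 4.0.0") is a guess:

```python
import re

# Sketch of the symlink-naming step described above: extract the 3GPP
# spec number and version from a PDF's first-page text and derive a
# friendly filename like '44.008-4.0.0.pdf'. The title format matched
# here is an assumption, not taken from the real script.

def symlink_name(first_page_text: str):
    """Return e.g. '44.008-4.0.0.pdf', or None if no spec title found."""
    m = re.search(r"3GPP TS (\d{2}\.\d{3}) version (\d+\.\d+\.\d+)",
                  first_page_text)
    if not m:
        return None
    return "%s-%s.pdf" % (m.group(1), m.group(2))

page = "ETSI TS 144 008 ... 3GPP TS 44.008 version 4.0.0 Release 4"
print(symlink_name(page))  # 44.008-4.0.0.pdf
```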
It's such a waste of resources to have to download all those files and
then write a script using pdfgrep+awk to re-gain the same usability that
the 3GPP chose to remove from their website. Now we can wait for ETSI
to disable indexing/recursion on their server, and easy and quick spec
access would be gone forever :/
Why does nobody care about efficiency these days?
If you're also an avid 3GPP spec reader, I'm publishing the rather
trivial scripts used at http://git.osmocom.org/3gpp-etsi-pdf-links
If you have contacts to the 3GPP webmaster, please try to motivate them
to reinstate the direct PDF links.
Many years ago, in the aftermath of Openmoko shutting down, fellow
former Linux kernel hacker Werner Almesberger
was working on an IEEE 802.15.4 (WPAN) adapter.
As a spin-off of that, the ATUSB device was
designed: A general-purpose open hardware (and FOSS firmware + driver)
IEEE 802.15.4 adapter that can be plugged into any USB port.
This adapter has received a mainline Linux kernel driver written by
Werner Almesberger and Stefan Schmidt, which was eventually merged into
mainline Linux in May 2015 (kernel v4.2 and later).
Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver
maintainer) approached me about the situation that ATUSB hardware was
frequently asked for, but currently unavailable in its
physical/manufactured form. As we run a shop with smaller electronics
items for the wider Osmocom community at sysmocom, and we also
frequently deal with contract manufacturers for low-volume electronics
like the SIMtrace device anyway, it was easy to say "yes, we'll do it".
As a result, ready-built, programmed and tested ATUSB devices are now
finally available from the sysmocom webshop.
Note: I was never involved with the development of the ATUSB hardware,
firmware or driver software at any point in time. All credits go to
Werner, Stefan and other contributors around ATUSB.
In a previous life I used to do a lot of IT security work, probably even
at a time when most people had no idea what IT security actually is. I
grew up with the Chaos Computer Club, as it was a great place to meet
people with common interests, skills and ethics. People were hacking
(aka 'doing security research') for fun, to grow their skills, to
advance society, to point out corporate stupidities and to raise
awareness about issues.
I've always shared any results worth noting with the general public.
Whether it was in RFID security, on GSM security, TETRA security, etc.
Even more so, I always shared the tools, creating free software
implementations of systems that - at that time - were very difficult or
impossible to access unless you worked for the vendors of the related
devices, who obviously had a different agenda than disclosing security
concerns to the general public.
Publishing security related findings at related conferences can be
interpreted in two ways:
On the one hand, presenting at a major event will add to your
credibility and reputation. That's a nice byproduct, but it shouldn't
be the primary reason, unless you're some kind of egocentric stage hog.
On the other hand, presenting findings or giving any kind of
presentation or lecture at an event is a statement of support for that
event. When I submit a presentation at a given event, I think carefully
if that topic actually matches the event.
The reason that I didn't submit any talks in recent years at CCC events
is not that I didn't do technically exciting stuff that I could talk
about - or that I wouldn't have the reputation required to make the
programme committee consider my submission. I just thought there
was nothing in my work relevant enough to bother the CCC attendees with.
So when Holger 'zecke' Freyther and I chose to present about our recent
journeys into exploring modern cellular modems at the annual Chaos
Communications Congress, we did so because the CCC Congress is the right
audience for this talk. We did so, because we think the people there
are the kind of community of like-minded spirits that we would like to
contribute to. Whom we would like to give something back to, after the
many years of excellent presentations and conversations.
So far so good.
However, in 2016, something happened that I haven't seen yet in my 17
years of speaking at Free Software, Linux, IT Security and other
conferences: A select industry group (in this case the GSMA) asking me
out of the blue to give them the talk one month in advance at a private
industry event.
I could hardly believe it. How could they? Who am I? Am I putting
sleepless nights and non-existent spare time into security research of
cellular modems just to give a free presentation to corporate guys at a
closed industry meeting? The same kind of industries that create the
problems in the first place, and who don't get their act together in
building secure devices that respect people's privacy? Certainly not.
I spend sleepless nights hacking because I want to share the results
with my friends. To share them with people who have the same passion,
whom I respect and trust. To help my fellow hackers understand
technology one step further.
If that kind of request to undermine the researcher's/author's initial
publication among friends is happening to me, I'm quite sure it must be
happening to other speakers at 33C3 or other events, too. And that
makes me very sad. I think the initial publication is something that
connects the speaker/author with his audience.
Let's hope the researchers/hackers/speakers have sufficiently strong
ethics to refuse such requests. If certain findings are initially
published at a certain conference, then that is the initial publication.
Period. Sure, you can ask afterwards if an author wants to repeat the
presentation (or a similar one) at other events. But pre-empting the
initial publication? Certainly not with me.
I offered the GSMA that I could talk on the importance of having FOSS
implementations of cellular protocol stacks as enabler for security
research, but apparently this was not of interest to them. Seems like
all they wanted was an exclusive heads-up on work they neither
commissioned nor supported in any other way.
And btw, I don't think what Holger and I will present about is all that
exciting in the first place. More or less the standard kind of security
nightmares. By now we are all so numbed down by nobody considering
security and/or privacy in the design of IT systems that it is hardly
any news. IoT as it is done so far might very well be the doom of
mankind. An unstoppable tsunami of insecure and privacy-invading
devices, built on ever more complex technology with way too many
security issues. We shall henceforth call IoT the Industry of Things.
I typically prefer to blog about technical topics, but the occasional
stupidity in every-day (business) life is simply too hard to resist.
Today I updated the shipping prices / zones in the ERP system of my
company to predict shipping rates based on weight and destination of
each shipment.
Deutsche Post, the German postal service, uses their DHL brand for
postal packages. They divide the world into four zones:
- Zone 1 (EU)
- Zone 2 (Europe outside EU)
- Zone 3 (World)
You would assume that "World" encompasses everything that's not part of
the other zones. So far so good. However, I then stumbled over Zone 4
(rest of world). See for yourself:
So the "World" according to DHL is a very small group of countries
including Libya and Syria, while countries like Mexico are "rest of
world".
Quite charming; I wonder which PR, communications or marketing guru came
up with such a disqualifying name. Maybe they should have called it 3rd
world and 4th world instead? Or even Discworld?
In 2006 I first visited Taiwan. The reason back then was Sean Moss-Pultz
contacting me about a new Linux and Free Software based Phone that he
wanted to do at FIC in Taiwan. This later became the Neo1973 and
the Openmoko project and finally became part
of both Free Software as well as smartphone history.
Ten years later, it might be worth sharing a bit of a retrospective.
It was about building a smartphone before Android or the iPhone existed
or even were announced. It was about doing things "right" from a Free
Software point of view, with FOSS requirements going all the way down to
component selection of each part of the electrical design.
Of course it was quite crazy in many ways. First of all, it was a
bunch of white, long-nosed western guys in Taiwan, starting a company
around Linux and Free Software, at a time where that was not really
well-perceived in the embedded and consumer electronics world yet.
It was also crazy in terms of the many cultural 'impedance mismatches',
and I think at some point it might even be worth writing a book about
the many stories we experienced. The biggest problem here is of course
that I wouldn't want to expose any of the companies or people in the
many instances where something went wrong. So probably it will remain a
secret to those present at the time :/
In any case, it was a great project and definitely one of the most
exciting (albeit busy) times in my professional career so far. It was
also great that I could involve many friends and FOSS-compatriots from
other projects in Openmoko, such as Holger Freyther, Mickey Lauer,
Stefan Schmidt, Daniel Willmann, Joachim Steiger, Werner Almesberger,
Milosch Meriac and others. I am happy to still work on a daily basis
with some of that group, while others have moved on to other areas.
I think we all had a lot of fun, learned a lot (not only about Taiwan),
and were working really hard to get the hardware and software into
shape. However, the constantly growing scope, the [by western standards]
quite unclear and constantly changing funding/budget situation, and the
many changes in direction ultimately led to missing the market
opportunity. At the time the iPhone and later Android entered the
market, it was too late for a small crazy Taiwanese group of
FOSS-enthusiastic hackers to still have a major impact on the landscape
of Smartphones. We tried our best, but in the end, after a lot of hype
and publicity, it never was a commercial success.
What's sadder to me than the lack of commercial success is the
lack of surviving free software that resulted. Sure, there were some
u-boot and Linux kernel drivers that got merged mainline, but neither
the three generations of UI stacks (GTK, Qt or EFL based), nor the GSM
modem abstraction gsmd/libgsmd, nor the middleware (freesmartphone.org)
has managed to survive the end of the Openmoko company, despite having
deserved to survive.
Probably the most important part that survived Openmoko was the
pioneering spirit of building free software based phones. This spirit
has inspired purely volunteer-based projects like
GTA04/Openphoenux/Tinkerphone, which have achieved extraordinary results -
but who are in a very small niche.
What does this mean in practice? We're stuck with a smartphone world in
which we can hardly escape any vendor lock-in. It's virtually
impossible in the non-free-software iPhone world, and it's difficult in
the Android world. In 2016, we have more Linux based smartphones than
ever - yet we have less freedom on them than ever before. Why?
- the amount of hardware documentation available on processors and
chipsets today is typically less than 10 years ago. Back then, you
could still get the full manual for the S3C2410/S3C2440/S3C6410 SoCs.
Today, this is not possible for the application processors of any vendor
- the tighter integration of application processor and baseband
processor means that it is no longer possible on most phone designs to
have the 'non-free baseband + free application processor' approach
that we had at Openmoko. It might still be possible if you designed
your own hardware, but it's impossible with any actually existing
hardware in the market.
- Google blurring the line between FOSS and proprietary code in the
Android OS. Yes, there's AOSP - but how many features are lacking?
And on how many real-world phones can you install it? Particularly
with the Google Nexus line being EOL'd? One of the popular exceptions
is the Fairphone 2 with its alternative AOSP operating system,
even though that's not the default of what they ship.
- The many binary-only drivers / blobs, from the graphics stack to wifi
to the cellular modem drivers. It's a nightmare and really scary if
you look at all of that, e.g. at the binary blob downloads for a
relatively current Qualcomm SoC based design, to get an idea of their
scale. That's 70 megabytes compressed, probably as large as all of the
software we had on the Openmoko devices back then.
So yes, the smartphone world is much more restricted, locked-down and
proprietary than it was back in the Openmoko days. If we had been more
successful then, that world might be quite different today. It was a
lost opportunity to make the world embrace more freedom in terms of
software and hardware, without single-vendor lock-in and proprietary
restrictions.
Early in 2016, a friend sent me a paper by Phillip Rogaway entitled “The Moral Character of Cryptographic Work”. I have read it many times this year. Here’s the abstract:
Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but, also, to the societal implications of our work.
The ability to take control of our lives, again, has been on my mind this month. Loss of control is often rooted in reframed language. Rogaway shows how privacy, anonymity, and even security are now associated with terrorism. His suggestion? Reframe the work of cryptography as building tools for anti-surveillance. Making “surveillance more expensive” is aligned with democracy and freedom. I think this is a great observation. Hopefully others will enjoy reading this paper as much as I did.