For the 10th anniversary of the legendary OpenMoko announcement at „Open Source in Mobile“ (7th of November 2006 in Amsterdam), I’ve been meaning to write an anthology or – as Paul Fertser suggested on #openmoko-cdevel – an obituary. I’ve been thinking about objectively describing the motivation, the momentum, how it all began and – sadly – ended. I even planned to include interviews with Sean, Harald, Werner, and some of the other veterans. But as with oh so many projects of (too) wide scope, this would probably never be completed.
As November 2016 passed without any progress, I decided to do something different instead. Something way more limited in scope, but something I can actually finish. My subjective view of the project, my participation, and what I think is left behind: My story, as OpenMoko employee #2. On top of that you will see a bunch of previously unreleased photos (bear with me, I’m not a good photographer and the camera sucked as well).
Prehistoric
I’ve always been a programmer. Well… not always, but for quite some time. I got into computer science when my dad brought a Commodore PET home from work for a few days. That was around 1980/1981, when I was 8 years old and massively impressed by the green text scrolling down a black monitor, depending on what you typed on the keyboard. He showed me how to write small programs in BASIC. It was cool.

Unfortunately he had to bring it back after a few weeks, but I was already infected and begged for a computer of my own. In 1982 – a few months before his sudden and completely unexpected death – he bought me a Commodore 64, which opened up a whole world for me. I learned to program in BASIC, SYSed, POKEed, and PEEKed my way through the hardware registers, and had way more fun than with any other toy I possessed.
Naturally, not all of that was programming. The Commodore 64 was an excellent gaming machine, in particular due to the massive amount of cough „free“ games available. The first game I actually bought was The Hobbit, a text adventure with the occasional graphic here and there. Rendering an image back then took minutes, and you could watch the computer painting line by line, and sometimes pixel by pixel. But I was patient. I was young and time seemed basically unlimited.

Fast forward to 1985, when the AMIGA was announced and the Commodore 64 suddenly seemed somewhat obsolete. This machine looked so much more powerful that it found its way into my dreams… until that one great day in 1986 when my mother surprised me with lots of white boxes, labelled Commodore AMIGA, in the hallway.
Apart from the natural amount of gaming (which was even more fun on the AMIGA), I learned Motorola 68K assembler and became part of the early demo scene – here’s more about my AMIGA history, if you’re interested.
Back in those days, it was completely normal for a demo to shut down the operating system (OS) and take over the whole hardware. Usually you had to reboot after quitting the demo. I did some early experiments with writing my own OS. Alas, my knowledge back then was not sufficient to really make one, though I quite liked the idea of being able to hack every part of a system.

This was one thing I started missing when I migrated to a (shock, horror) DOS/WINDOWS machine in the early 90s. I found my way doing C++, MS Foundation Classes, the Win32 API, and the lot, but it never felt the same as in the good ‚ole days of the AMIGA. (A bit of that came back a few years later when I installed Linux on a PC and got to know a lot of different UNIX flavors as part of my computer science studies.)
After completing my diploma thesis, I was asked by a professor (Prof. Drobnik at the Institute of Telematics, whom I deeply admire and thank for mentoring me!) whether I was interested in pursuing an academic career. I felt my computer science knowledge wasn’t complete enough for the „world outside“ yet, hence I agreed to work on a Ph.D. in his department.
Since by then I had quite a bit of Linux experience, one of my first tasks was to help a colleague flash Linux onto his COMPAQ IPAQ. His work involved routing algorithms and handover strategies, so they had a test WiFi network with a bunch of laptops and PDAs equipped with 802.11b PCMCIA cards. Since he ran into lots of problems with the locked-down Windows Mobile on the IPAQ, he wanted to give Linux a try.
So I learned about Familiar Linux, got into GPE and Opie, became maintainer of Opie, helped with OpenZaurus (which was an open source distribution for the soon-to-appear-on-the-stage SHARP ZAURUS), co-founded OpenEmbedded, and eventually received my Ph.D. for µMiddle, a component-based adaptive middleware for ad-hoc networks.
The birth of a project
Some months before my graduation though – it was mid 2006 – I was working on OpenEZX, an alternative Linux distribution for Motorola’s EZX series of Linux phones. Smartphones had just begun to render PDAs obsolete and were the new hot thing. I was adding EZX support to OpenEmbedded and Opie, when Linux hacker Harald ‚LaF0rge‘ Welte (who worked on OpenEZX kernel drivers) asked me one day whether I wanted to work on a new Linux-based smartphone project with a completely open distribution right from the start – as opposed to OpenEZX, which was based on a lot of reverse engineering due to the closed nature of that hardware platform (Motorola promised a proper EZX SDK for almost a decade, but never delivered…).

The mastermind behind the project was Sean Moss-Pultz, an American living in Taiwan, working as a product designer for First International Computer (FIC).
Naturally I was excited and started to work on it. I was supposed to be responsible for the Linux distribution build system aspect (e.g. OpenEmbedded integration) and some UI tasks, in particular creating something the Chinese engineers could base their applications on. It had already been decided that we would base the display subsystem on X11 and the UI on GTK+.
While I wasn’t happy with that decision, at that point in time I was not strong enough to question and discuss it. In hindsight I view this as my earliest (though unfortunately not my last) mistake in the project.
While I was doing the first work on some GOBJECTs, Sean Moss-Pultz came over to Germany and we met for one week to design the basic human interface guidelines and streamline his interface mockups. We decided that we wanted to distinguish two basic types of applications, so-called „finger apps“ and „stylus apps“. Here are some of the results of this phase. Note that these were designed before the actual dimensions of the device (hence the display) were finalized. Working on HIGs without being able to create actual paper prototypes (to check finger distances and gesture dimensions) must have been my 2nd mistake.

While (in my opinion) these mockups look beautiful – even by today’s standards – from the viewpoint of a developer who needs to implement them, they’re just crazy. Non-rectangular widgets, semi-transparency, shadows, gradients, etc. everywhere. Have fun doing that with GTK+ 2.6 in 2006. I’m not sure it’s even possible today with version 4 – let alone the necessary hardware requirements for blending and compositing.
Alas, I tried my best to come up with a UI framework that came close to the renderings, to give our Shanghai team (which was supposed to create the actual applications) the necessary tools. I even created a bunch of demo applications. Here’s a guitar toolkit application I programmed using an ARM development board:

Speaking of development boards… as I’ve mentioned before, a lot of the early UI concepts and prototyping code was written while the final device specifics and capabilities (and the housing!) were still unknown. This led to a series of expectations which the actual hardware would come nowhere near meeting.
Back then my idea of the ideal software stack for Openmoko looked like this:

If you’re curious about the code for the Openmoko Application Framework libraries, feel free to browse the Openmoko SVN.
Here is the first successful run of kdrive and matchbox on the 3rd development hardware revision of the Neo1973. The picture was taken on the 4th of November 2006 on my IKEA desk. Next to the PCB you can see a glimpse of a SHARP ZAURUS stylus.

The Announcement
In November 2006, the OpenMoko Core Team (which at that time consisted of Sean Moss-Pultz, Harald Welte, and me) flew to Amsterdam, where Sean made the legendary announcement of the Neo1973 as „mystery guest speaker“ at the „Open Source in Mobile“ conference. More details about that (including the slides of the presentation) can be found in the linuxdevices article „Cheap, hackable Linux smartphone due soon“.
After the announcement we received a lot of publicity. The mailing lists were flooded with many great ideas (many of them still waiting to be realized). That one small company had the balls to create something hackable from the start – against the ongoing trend of locking down mobile embedded devices – was very well received. Many interesting leads were made: universities and labs contacted us, hardware vendors approached us, etc.
I remember one great quote I picked up from Harald during a presentation that pretty much summed up our approach:
WARRANTY VOID WHEN NOT OPENED – Harald Welte
However, we had yet to deliver, and when the first hardware prototypes reached me, I was devastated. Things didn’t look good. The device was tiny and much slower than I had expected (the PXA270 in the MOTOROLA EZX series ran circles around our S3C2410), the resistive touchscreen needed too much pressure, and the display was framed by a massive bezel:

We were already struggling on many software layers (in particular to come anywhere near 80% of the mockups with GTK+), but the hardware constraints killed most of our early HIG ideas.
At this point, we brought the London-based OpenedHand (later acquired by Intel) on board to work on the launcher and PIM applications. They already had the Matchbox window manager and a set of applications based around an embedded port of the Evolution Data Server running on arm, and they were quite experienced with GTK+. They came up with a massively reduced version of our mockups, but at least something that worked on the Neo1973.
To handle various low-level device aspects (buttons, power management), I wrote neod.
Phase 0
One of the OpenMoko special features was the so-called phase 0, in which we sent out a dozen Neo1973 pre-production devices (for free, without any obligations) to well-known people with a history in open source. The idea was to get some early feedback, perhaps even some contributions, and have those people spread the news about it, hence acting as a kind of multiplier.
On the 14th of February 2007, the OpenMoko.org website went live, and with it all our source code and the tools necessary to build the current flash images. The mailing lists and IRC channels were also populated. For most of its lifetime, OpenMoko was really run much more like an open source project (with all implications, good and bad) than like a proprietary corporate project.
On the 25th of February 2007, we shipped the phase 0 developer devices – and even though we didn’t get as much feedback as we would have liked, we felt it was good practice to do this.
The phase 0 developer devices shipped with Openmoko 2007.1, a pretty bare-bones Linux distribution based on an OpenEmbedded matchbox+kdrive image with the Openmoko GTK+ theme and some rudimentary applications.
Neo1973 is shipping
We originally planned to ship the Neo1973 in January 2007, but both hardware and software issues made us postpone shipping until July 2007. By then we had decided that we would need a faster design to target the general audience. So the Neo1973 was repurposed as a developer’s device, and the Openmoko (by then someone had decided that the capital ‚M‘ had to be dropped) „Freerunner“ was announced for early 2008. Apart from a faster display subsystem, it was supposed to add WiFi (which was missing in the Neo1973) and LED buttons, and to replace the proprietary GPS chip with something that spoke NMEA. Unfortunately it had also been decided that there could be no change in the casing, hence we had to keep living with the bezel and the physical display size.
In June 2007 – shortly before the official launch of the Neo1973 – I had the opportunity to join Harald in visiting the OpenMoko office in Taipei, Taiwan. This is a picture from Computex 2007, where OpenMoko presented the Neo1973 with its slogan „Free Your Phone“:

This is our room in the FIC building, where Harald and I worked on the Neo1973 phase 1 (general availability through the webshop) release code:

The Hacker’s Lunchbox, an enhanced version of the Neo1973 release package with an additional battery, debug board, and some more goodies:

If for nothing else, this first visit to Taipei alone made joining the project worth it. I had never been further away from home, but I still felt very welcome.
Back in Germany, work continued – with a bunch of hackathons and presentations at conferences. Here’s the OpenMoko tent from the Chaos Communication Camp 2007:

Although we were hired by OpenMoko to make sure the software we wrote worked great on their devices, we always tried to do our work in the most generic way possible. We still had a soft spot for OpenEZX and tried to make the OpenMoko software work there:

On the 15th of September 2007, all the produced Neo1973s were sold out. Since we had already announced the Freerunner for 2008, we had nothing to sell for almost a whole year – and this in exactly the phase where Google and Apple had the opportunity to sell lots of devices. This was another major fault in the project.
Another OpenMoko speciality was the release of the CAD data. On the 4th of March 2008, we released the full CAD data to allow for 3rd party cases.
freesmartphone.org
By the time the Neo1973 was in the hands of developers, I was getting more and more frustrated with the non-UI part of the system: gsmd was unstable, there was still only my prototype neod to handle low-level device specifics, and I felt there could be much more experimentation if only a solid and uniform set of APIs were available. I asked Sean whether I could switch my focus to that, and he agreed, allowing me to work with my colleagues in Braunschweig – Stefan Schmidt, Daniel Willmann, and Jan Lübbe – as an independent unit.
By that time, dbus was spreading more and more and looked like a good choice for inter-process communication, hence I decided to specify a set of dbus APIs that would allow people to come up with all kinds of cool applications using whatever language they wanted. Since I did not want to tie it to the Openmoko devices, I created the FSO project (in analogy to the freedesktop.org project, which had done great work in standardizing desktop APIs before).
If there was anything I learned from the TCP/IP vs. ISO/OSI debate of the 1980s, it is that APIs without a reference implementation are worthless, and that it is often more important to create things that work (de facto) than to have huge committees negotiate (de jure) standards. Which meant I also had to come up with code. In order to get things up and running quickly (and to allow hacking directly on the device), I chose one of my favorite languages, Python. The choice of an interpreted (comparatively slow) language on an already slow device may sound odd, but back then I wanted to give people something to build on as fast as possible, hence development efficiency seemed way more important than runtime efficiency.
Within only a few months, we created a working set of APIs and a Python-based reference implementation that finally allowed us to a) make dozens of calls in a row without the modem hanging up, and b) build a slick EFL-based proof-of-concept-and-testing UI to evaluate our APIs (eating our own dogfood was very important to us).
Here’s the server code for an SMS echo service, which illustrates how simple it was to access the telephony parts:
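In the same spirit, here is a minimal Python sketch of such a service. Note that the D-Bus bus, object path, interface, and signal names below are written from memory and may well differ from the actual freesmartphone.org API; only the pure reply-building helper is meant to be exercised directly.

```python
def make_echo_reply(sender, text):
    """Build the (recipient, body) pair replying to an incoming SMS."""
    return sender, "Echo: %s" % text

def run_service():
    # Hypothetical wiring against an FSO-style GSM daemon over D-Bus.
    # Requires python-dbus and GLib; names here are assumptions.
    import dbus
    import dbus.mainloop.glib
    from gi.repository import GLib

    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    bus = dbus.SystemBus()
    device = bus.get_object("org.freesmartphone.ogsmd",
                            "/org/freesmartphone/GSM/Device")
    sms = dbus.Interface(device, "org.freesmartphone.GSM.SMS")

    def on_incoming(number, timestamp, contents):
        # Echo the message straight back to its sender.
        recipient, body = make_echo_reply(str(number), str(contents))
        sms.SendTextMessage(recipient, body, False)  # signature assumed

    sms.connect_to_signal("IncomingTextMessage", on_incoming)
    GLib.MainLoop().run()

if __name__ == "__main__":
    # The helper alone is testable without a modem or D-Bus daemon.
    print(make_echo_reply("+491701234567", "hello")[1])
```

Even allowing for my shaky memory of the exact names, the shape of the thing is the point: subscribe to a signal, call a method, and the whole telephony stack is at your fingertips from any language with dbus bindings.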

For a while, I did regular status updates (e.g. Status Update 5) to keep the community informed about how the framework project went on.
Some of this work was motivated by a very energetic group led by Michael ‚Emdete‘ Dietrich, which I dubbed the „Openmoko Underground“ effort. They showed me that a) Python was indeed fast enough to handle things like AT, NMEA, and echoing characters to sysfs files, and they also convinced me to give the Enlightenment Foundation Libraries a go.
Freerunner is shipping
By summer 2008, we finally shipped the Freerunner. In the meantime, we had been joined by Carsten ‚Rasterman‘ Haitzler, who worked on optimizing the Enlightenment (Foundation Libraries) for embedded platforms. Unfortunately the plans to make the Freerunner a much faster version of the Neo1973 didn’t quite work out: the SMEDIA gfx chip that was supposed to be a display accelerator turned out to be an actual decelerator. The chip could (under pressure) handle VGA, but ran much better with QVGA – hence its framebuffer interface was even slower than the Neo1973’s.
And there were more hardware problems. As the frameworks and APIs increased in stability, they exposed more and more – partly long-standing – issues with the hardware design, in particular the infamous bug #1024 (which yours truly originally found). This bug made the TI Calypso GSM chip lose network connection when in deep sleep mode. Although we later found a workaround, it massively damaged the reputation of the Openmoko devices, since it made them miss calls.
In September 2008, I visited Taipei again, joined by the FSO team plus Harald and Carsten, to present the current status, fix bugs, and teach the local engineers how to best use the framework APIs. For two weeks, we lived in an apartment rented by Openmoko and coded almost 24/7:

Here’s a view of the Openmoko office plus the room where we experimented with different case color & material combinations:

Trials and Tribulations
2008 was one of the most intense years in the project’s lifetime. With more Freerunner devices in the wild, a lot of people (finally) started building alternatives to the „official“ Openmoko distribution. As always, this was both good and bad. While I personally was satisfied that people tried out different ideas on all layers of the system – after all, this kind of freedom was one of the very reasons why I joined the project and created FSO in the first place – the multitude of possibilities scared many non-technical, but interested, buyers away. (The kind of people who don’t want to actively work on open source projects, but want to support them – in the same way as choosing Firefox over IE, or OpenOffice over Word.)
Here is a bunch of screenshots taken from a presentation I gave at FOSDEM 2010, where we had an Openmoko developer room:

Here’s a photo (courtesy of Josch) from the first Openmoko user meeting in Karlsruhe, which also shows a glimpse of the Openmoko devices’ diversity:

Freedom of choice truly is a double-edged sword: many people would have preferred one solid software stack rather than ten half-done ones, each of them lacking in different areas. In- and outside Openmoko, tensions arose over whether to continue with the Enlightenment-based route, switch to a Qtopia-based system, revive the GTK+-based stack, or just stop doing any work above the kernel and move to Android.
Personally, I think we should have limited our software contributions to a kernel, FSO, and a monolithic application that would have covered voice & messages and super-stable day-to-day operation.
Here are some more screenshots of 3rd-party distributions (Thanks, Walter!).

One distribution I particularly liked was the one from the SHR project, a community of skilled hackers working closely together with the freesmartphone.org framework team. We held FSOSHRUDCON (the FSO+SHR Users and Developers Conference) in May 2009, in the LinuxHotel in Essen, Germany. Even though it was officially „past-Openmoko“, we had an incredible time there. If any of you still has a group photo which shows all of us, please send it to me!
With a much smaller audience, we came back to the LinuxHotel in 2011 for the 2nd – and unfortunately also the last – FSOSHR conference. By the way… this was some weeks after my daughter Lara-Marie was born, and I enjoyed sleeping a bit longer than usual.
Freerunner in Space
One of the most remarkable, nah… frickin‘ coolest things ever done with an Openmoko device was to mount a Freerunner inside a rocket and send it into space. In early 2009, the German space agency (DLR: Deutsches Zentrum für Luft- und Raumfahrt e.V.) carried out an experiment to measure how accelerometers (and other parts of consumer electronics) react to massive changes in velocity. One of the DLR directors, Prof. Dr. Felix Huber, wrote the Freerunner software for this experiment himself. He based it on an FSO image (and also submitted a bunch of patches to FSO and Zhone, but that’s another story)! According to him, the Freerunner was the only smartphone where he trusted application software to have control over the GSM part, hence no other device could be used for this experiment. Images (C) DLR e.V.:

Afterwards, Prof. Huber invited me to the DLR in Oberpfaffenhofen and gave me a tour. It was a great experience which I’ll never forget. If you want to read more about the MAPHEUS experiments, please also see this paper.
GTA03 – Our last Hope for Freedom
After it was clear that the Freerunner was not the silver bullet we had hoped for, plans for a successor were launched. This one – codenamed GTA03 (and later 3D7K… for a reason I don’t want to disclose) – was supposed to be designed free of any existing legacy we had inherited from FIC’s stock.
GTA03 was supposed to contain a new S3C chip, a Cinterion modem, a capacitive (!) touchscreen, and a slick round design with a semitransparent cutout where you could see a part of the PCB (an elegant self-reference to the project’s transparency & openness). This device would have made a significant change and put Openmoko into another league.
Here’s a plastic prototype of the device:

Here’s the PCB:

Here’s me hacking FSO and OE to incorporate the GTA03-specifics using development boards:

Alas, due to a number of circumstances, the device was cancelled – although it was already 80% done.
Nails in the coffin
At OpenExpo 2009 in Switzerland, Sean announced that Openmoko was quitting smartphone development – to me, at the very point when, framework-wise, things had finally started to look good. The 2nd reference implementation of the freesmartphone.org middleware had just been started, and there were many promising side projects using the FSO API.
Here’s a screenshot of the freesmartphone.org website back then (thanks to the Wayback Machine):

For two more years, I continued to work on FSO in my spare time, trying to find an alternative hardware reference platform to run on, but nothing convincing showed up. All reverse-engineering-based efforts to replace other operating systems with our stack failed due to the short hardware lifetimes.
When I dropped out in June 2011 due to the birth of my daughter Lara-Marie, those projects more or less came to a full stop.
It’s not easy to pinpoint exactly what went wrong with the project. I think I can mention a couple of nails in the coffin though:
- Transparency – if you design hardware in the open, every single bug (even one that can perhaps be worked around in software) is immediately revealed and talked to death. This scares potential buyers.
- The financial crisis of 2008 – Openmoko’s venture capital dried up when some of the investors had serious cash problems.
- The competition – with Apple and Google, two major players came out very soon after our initial announcement. Apple’s iPhone made it tough to compete hardware-wise, and Google’s seemingly open Android dragged a lot of people who perceived it as being open enough out of our community.
- Not enough focus, not enough structure – As mentioned before, the basics (phone, messages, power management) were never stable enough to make the devices really work well as your main phone. I’m afraid we wanted too much too fast.
What’s left behind
During 2007–2011, I travelled a lot and presented at conferences. If I recall correctly, I’ve been to Aalborg (Denmark), Bern (Switzerland), Berlin, Brussels (Belgium), Birmingham (UK), Chemnitz, St. Augustin, Munich, Paris (France), Porto de Galinhas (Brazil) [Summerville Beach Resort, best conference venue ever], Taipei (Taiwan), Vienna (Austria), and Zürich (Switzerland). It was an incredible time and I rediscovered a spirit that I had first experienced 20 years earlier during the early days of the C64 and AMIGA demo scene. I’m glad to have met so many great people. I learned more about (prototype) hardware than I would ever have wanted.
Over its two years of operation, Openmoko sold ~13000 phones (3000 Neo1973, 10000 Freerunner). Openmoko Inc. grew from 4 people (Sean, Harald, me, Werner) to about 50 at its peak.
Alas, this concludes the positive aspects. I wish we could have left a bigger footprint in history, but as things stand, the mobile soft- and hardware landscape in 2017 is way more closed than it was ten years ago. I had really hoped for the opposite. Many projects we started are now either obsolete or on hiatus, waiting (forever?) for an open hardware platform to run on. However, they are still there and could be revived, if there were enough interest.
Two notable active projects are Dr. Nikolaus Schaller’s GTA04 – a completely new design fitting into the Freerunner’s case – and the Neo900 by ex-Openmoko engineer Jörg Reisenweber, which attempts to do the same with the Nokia N900.
Both of these projects share the approach of building upon an existing case and retrofitting most of the innards with a newer PCB (think „second life“). While the Neo900 is still in the conceptual phase (they announced the next prototype PCB just a couple of days ago), the GTA04 has already seen a number of shipping board revisions, and there is still a small, die-hard community following tireless Nikolaus’ progress to keep the Openmoko spirit alive.
Basic support for the GTA04 has already been added to FSO, and there are people actively working on kernel support as well as porting various userlands such as QtMoko and Replicant to it. I really suggest browsing the archives of gta04-owner to understand the incredible amount of problems a small series of custom smartphone hardware brings – including component sourcing, CAD programs, fighting against Linux mainline (people who are apparently not really interested in smartphone code), defending a relatively high price, etc. It’s a hell of a ride.
While I applaud all these efforts, something broke inside me when Openmoko shut down – and that’s one of the reasons why I find it pretty hard to motivate myself to work on FSO again. Another one is that Vala – the language of the 2nd reference implementation – has had its own set of problems, with parents and maintainers losing interest… although it seems to have recently found new love.
The Future?
By now, Android and iOS have conquered the mobile world. BlackBerry is dead, HP killed webOS and the Palm Pre, Windows Phone is fading into oblivion, and even big players such as Ubuntu or Mozilla are having a hard time coming up with an alternative to platforms that offer millions of apps running on a wide variety of the finest hardware.
IF YOU’RE SERIOUS ABOUT SOFTWARE, YOU HAVE TO DESIGN YOUR OWN HARDWARE – Alan Kay
Right now my main occupation is writing software for Apple’s platforms – and while it’s nice to work on apps using a massive set of luxury frameworks and APIs, you’re locked and sandboxed within the software layers Apple allows you to use. I’d love to be able to work on an open source Linux-based middleware again.
However, the sad truth is that there seems to be no business case anymore for a truly open platform based on custom-designed hardware, since people refuse to spend extra money on tweakability, freedom, and security – despite us living in times where privacy is massively endangered.
If anyone out there thinks differently and plans a project, please holler and get me on board!
Acknowledgements
Thanks to you for reading this far! Thanks to Sean Moss-Pultz for his crazy and ambitious idea. Thanks to Harald Welte for getting me on board. Thanks to Daniel, Jan, and Stefan for working with me on FSO. Thanks to the countless organizers and helpers at the conferences where we presented our work. Thanks to Nikolaus and Walter for commenting on an early draft of this. And finally… thanks to all the enthusiasts who used a Neo1973 and/or a Freerunner in the past, present, or future.
The post OpenMoko: 10 Years After (Mickey’s Story) first appeared on Vanille.de.
Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for six years and a bit, and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.
I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla was in the region of 500 or so employees then, I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself pretty much a novice at (C++).
I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.
Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).
Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.
Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.
I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I don’t feel I fit in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.
I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.
Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager’s – who was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.
I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.
If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.
The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.
Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.
The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.
Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!
Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!). So let’s begin.
1. No main-thread UI
The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done, of course: most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.
2. Contextually-aware compositor
This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note: initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).
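To make ‘enough context’ concrete, here’s a hypothetical sketch of compositor-side scrolling. The only context it needs pushed over from the main thread is the scrollable extent; with that cached, the first response to a scroll gesture happens entirely on the compositor, clamped correctly, with the main thread notified after the fact. The class and field names are made up for illustration.

```python
class CompositorState:
    """Layout context cached on the compositor thread for immediate input response."""

    def __init__(self, content_height, viewport_height):
        # Pushed over from the main thread whenever layout changes.
        self.content_height = content_height
        self.viewport_height = viewport_height
        self.scroll_y = 0.0

    def max_scroll(self):
        return max(0.0, float(self.content_height - self.viewport_height))

    def handle_scroll(self, delta_y):
        # The first response happens right here, against cached context;
        # the main thread only hears about the new offset asynchronously.
        self.scroll_y = min(max(0.0, self.scroll_y + delta_y), self.max_scroll())
        return self.scroll_y

state = CompositorState(content_height=2000, viewport_height=600)
state.handle_scroll(500)    # scroll down
state.handle_scroll(1500)   # overshoot: clamped against the cached extent
print(state.scroll_y)       # clamped to 1400
```

The same pattern extends to hover/active states and animation kick-off: cache just enough of the scene on the compositor that the *initial* visual response never waits on the main thread.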
3. Memory bandwidth budget
This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).
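A minimal sketch of the budgeting half of that idea, with entirely invented numbers and names: pending transfers (texture uploads and the like) are batched per composite, and once the estimated cost of a batch would exceed the per-frame budget, the remainder is deferred to the next frame. A real implementation would measure actual frame times and adapt the budget, as described above; this only shows the splitting.

```python
FRAME_BUDGET_MS = 4.0  # illustrative: time per frame allowed for large transfers

def schedule_transfers(pending, budget_ms):
    """Split pending transfers into per-frame batches within the budget.

    pending: list of (name, estimated_ms) tuples, in submission order.
    Returns a list of batches, one per composited frame.
    """
    frames = []
    batch, used = [], 0.0
    for name, ms in pending:
        if batch and used + ms > budget_ms:
            # Budget exhausted: defer the rest until the next composite.
            frames.append(batch)
            batch, used = [], 0.0
        batch.append(name)
        used += ms
    if batch:
        frames.append(batch)
    return frames

uploads = [("tile-a", 2.0), ("tile-b", 1.5), ("tile-c", 3.0), ("tile-d", 0.5)]
print(schedule_transfers(uploads, FRAME_BUDGET_MS))
# → [['tile-a', 'tile-b'], ['tile-c', 'tile-d']]
```

Note that a single transfer larger than the whole budget still gets its own frame rather than being starved forever – the `if batch and …` guard ensures every batch makes progress.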
It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).
4. Level-of-detail
This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.
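In the spirit of the half-resolution trick described above, here’s a tiny hypothetical level-of-detail policy: the renderer picks a resolution scale and decides whether to keep sub-pixel anti-aliasing based on the current scroll velocity. The thresholds are invented for illustration; in practice they’d be tuned per platform and form-factor.

```python
def choose_render_quality(scroll_velocity_px_s):
    """Return (resolution_scale, use_subpixel_aa) for the current frame.

    Thresholds are illustrative only; real values would be tuned per device.
    """
    v = abs(scroll_velocity_px_s)
    if v > 2000:
        # Very fast fling: render at half resolution, cheap anti-aliasing.
        return 0.5, False
    if v > 500:
        # Moderate scroll: full resolution, but drop sub-pixel AA.
        return 1.0, False
    # At rest (or nearly): full quality.
    return 1.0, True

print(choose_render_quality(0))      # → (1.0, True)
print(choose_render_quality(3000))   # → (0.5, False)
```

The key property is that quality degrades exactly when the user is least able to perceive it, and snaps back to full detail the moment the interaction settles.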
Pitfalls
I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.
You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.
Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this?” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.
One last point I’ll make: I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.