March 29, 2015 API docs live again

By Michael "mickeyl" Lauer

After the outage of the VM it had been hosted on, we are now almost fully back. I have integrated the DBus API documentation (which had been hosted on the doc subdomain) into the top-level documentation. The source code has already been moved, and the new mailing list has been alive for a few months at goldelico.

Now that the documentation is live again, I have plans for short, mid, and long term:

1. Short-term I’m working on completing the merge to libgee-0.8 and then cut the next point release.

2. Mid-term I want to discuss integrating the unmerged branches to the individual subprojects and continue cleaning up.

3. Long-term I’m looking for a new reference hardware platform, funding, and contributors, and deciding whether to move the existing reference platform to kdbus (or another IPC).

If you have any plans or questions with regards to the initiative and its subprojects, please contact me via the FSO mailing list (preferred) or personally.

The post API docs live again first appeared on

March 22, 2015

RFC: Future of SidPlayer, ModPlayer, PokeyPlayer for iOS

By Michael "mickeyl" Lauer

This is a post about the state of SidPlayer, ModPlayer, and PokeyPlayer on iOS.

Coming from the background of the C64 and AMIGA demo scenes, I always thought that every platform needs a way to play back the musical artwork created by those great musicians in the 80s and 90s on machines like the Commodore C64, the AMIGA, and the ATARI XL.

Fast forward to the iPhone: being excited about the new platform, another guy from the good ole’ AMIGA days and I started working on SidPlayer in 2008, shortly after Apple opened the developer program for European developers. After some months of work, we had the first version ready for the Apple review, standing on the shoulders of the great libsidplay and the HVSC. Due to libsidplay being GPL, we had to open source the whole iOS app. To our surprise, _this_ hasn’t been a problem with the Apple review.

SidPlayer for iOS was available for some months; then we developed adaptations for AMIGA .mod files (ModPlayer) and Atari XL pokey sound files (PokeyPlayer). In the meantime, iOS development went from being a hobby to our profession (we formed the LaTe App-Developers GbR), which unfortunately had a great impact on our pet projects. Being busy with paid projects, we could not find enough time to do serious updates to the players.

The original plan in 2008 was to create an app that has additional value around the core asset of a high quality retro computing player, such as a retro-museum-in-a-box (giving background information about those classic computing machines) and a community that shares playlists (important given the amount of songs), comments, statistics, and ratings. Alas, due to our time constraints during the lifetime of the apps, we could only do small updates in order to fix bugs with newer operating system versions. There was not enough time to add features, do an iPad adaptation, nor to unify the three distinct player apps. In the meantime, other apps came along that also could play some of those tunes, although we weren’t (and still aren’t) very excited about their user interfaces and sound quality.

The final nail in the coffin came in 2013, when – much to our surprise – out of the blue (not even due to reviewing an update), we received a letter from Apple claiming that our player apps violated the review guidelines, in particular the dreaded sections 2.7 / 2.8, which read “2.7: Apps that download code in any way or form will be rejected.” and “2.8: Apps that install or launch other executable code will be rejected”. Although we had gotten past this guideline for several years, it now turned into a showstopper – some weeks later, Apple removed our apps from the store.

Unfortunately, those sections really apply – at least for the Sid- and PokeyPlayer. Both players rely on emulating parts of the CPU and custom chip infrastructure of the C64 / Atari XL (hence run “executable” code, albeit for a foreign processor architecture), and said code gets downloaded from the internet (we didn’t want to ship the actual music files with the app for licensing reasons). ModPlayer actually was an exception, since the .mod format does not contain code but is a descriptive format; however, back then I did not have the energy to argue with Apple about that, so ModPlayer was removed without a valid reason.

In the meantime, my priorities have shifted a bit and we had to shut down our iOS company LaTe App-Developers for a number of reasons. Still, I have great motivation to work on the original goal for those players. Due to the improved hard- and software of the iOS platform, these days we could add some major improvements to the playing routines, such as using recent filter distortion improvements in libsidplay2, audio post-processing with reverb and chorus, etc.

The chance of the existing apps coming back into the store is – thanks to Apple – zero. It wouldn’t be a pleasant experience anyways, since the code base is very old and rather unmaintainable (remember, it was our first app for a new platform, and neither one of us had any Mac OS X experience to rely on).

Basically, three questions come to my mind now:

1. Would there be enough interest in a player that fulfills the original goal or is the competition on the store “good enough”?
2. Will it be possible to get past Apple’s review, if we ship the App with all the sound (code) files, thus not downloading any code?
3. How can I fund working on this app? To honor all the countless hours the original authors put into creating the music and the big community working on preserving the files, I want this app to be free for everyone.

As you may have guessed, I do not have any concrete answers (let alone a timeframe), but just some ideas and the track record of having created one of the most popular set of C64/AMIGA/Atari XL music player apps. So I wanted to use this opportunity to gather some feedback. If you have any comments, feel free to send them to me. If you even want to collaborate on such a project, I’m all ears. If there’s sufficient interest, we can create some project infrastructure, i.e. mailing list.


February 23, 2015

Wayback Machine

By Michael "mickeyl" Lauer

Thanks to the fabulous wayback machine, I have imported my blog from between 1999 and 2006. It’s not properly formatted and most of the images are missing, but it’s somewhat interesting to read the things my younger self wrote about 15 years ago.


January 26, 2015

To web or not?

By Michael "mickeyl" Lauer

I have pondered a long time whether to learn web programming for my customers’ services app, so that they can access their user & device statistics, crash logs, manage service messages, push messages, etc.

I have now decided not to pursue this path. Web technology is a mess, even more so than mobile technology. It lacks a clear separation of layers, and although many frameworks nowadays use MVC or similar patterns, I feel I have to do too many things at once (web service, HTML templating, CSS design, JavaScript for interactive stuff, etc.) to really make a professional web app.

I’m going to make a mobile client instead, using the technologies I already have mastered and in which I’m productive. Yes, I still want to learn something new, that’s why I’m working with a NoSQL database now for the first time.

Of course the downside is that my customers can no longer use their web browsers to manage all that, but since they always have their iPhones and iPads around anyway, I’m sure they can cope with that.


January 04, 2015

Printing in Polypropylene

By Talpadk

Bowl printed in PP

PP bowl printed on a piece of cutting board

I recently purchased a small sample of white polypropylene (PP) plastic from a shop in China.

While it was relatively expensive at ~$11 for 200g of plastic, it allowed me to try out printing in PP without buying an entire spool of filament.
That seemed wise, since PP isn’t supposed to be the easiest thing to print: its thermal contraction should make it more warp-prone than ABS, and it is additionally slightly slippery and doesn’t stick that well to other materials.

You may then ask why I would want to attempt to print in PP at all; after all, PLA prints just fine… well, sort of, anyway.

Polypropylene has the following properties:

  • Relatively heat resistant, plastic handles on dishwasher safe cutlery are for instance often made of it.
  • Good chemical resistance.
  • Handles bending and flexing relatively well, living hinges can be made of it.
  • Is relatively soft, not always a good thing.

Well back to the printing business…

For the experiments I used my trusty RepRapPro Huxley with a smaller 0.3mm nozzle; more on that later.
Yes, I really should get that Mendel90 built; that would have allowed me to borrow some 3mm PP welding rod from work.

Anyway, I’m by far not the first to print in polypropylene, but as with NinjaFlex I thought it could use another post on the internet about the material. (Some links to “prior art”: the RapMan wiki and a forum post.)

While it seemed that PP and especially HDPE are good candidates for the print bed, I had to try out polyimide and some generic blue masking tape as well.
As I expected, they didn’t seem to work too well for me, but I didn’t experiment too much with them.

I therefore proceeded to buy some cheap plastic cutting boards from Biltema.
They don’t specify the type of plastic, but they don’t feel very much like PP, so I assume they are made of HDPE.

Once it was cut to size, I actually managed to print unheated onto the 5mm thick sheet of plastic!
Some of the prints actually stuck too well to the print bed and got damaged while being removed.

I also encountered some problems with jams/the plastic coiling up inside the extruder.
This led me to increase the extrusion temperature to 235C and reduce the speed down to 15/20 mm/s for the perimeter/infill.
In an attempt to reduce the print bed adhesion I used a lower 225C for the first layer.
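For reference, the settings I ended up with could be written down as a Slic3r-style config fragment like this (a sketch only; the key names are Slic3r’s, and other slicers name these differently):

```ini
# PP settings described above (Slic3r-style key names)
temperature = 235              # extrusion temperature, C
first_layer_temperature = 225  # lowered to reduce print bed adhesion
perimeter_speed = 15           # mm/s
infill_speed = 20              # mm/s
```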

In hindsight, increasing the temperature might not have been necessary; at least when manually pushing PP at 235C and PLA at 215C, the pressure seems to be in the same range.
The extruder problems may simply be caused by the PP filament being softer than PLA.
Reducing the print speed alone might have been enough.

As some of the prints had left thin layers of PP on the print bed surface, and new prints stuck annoyingly well to those spots, I decided to try sanding the surface.
This removed both the PP residue and the grid of ridges left over from the plastic’s cutting board origin. After this, the surface seemed to be less problematic with regard to localized over-sticking.

I have yet to attempt heating this print surface, as I previously had a bad experience with an experiment using a SAN sheet that warped badly when heated.
Besides, using the print bed unheated actually looks quite promising.

While the chopping board isn’t that bad or expensive, it is 5mm thick, which is too much for my bulldog clips to handle.
I therefore looked for alternative sources of PP and PE.

The next experiment involved plastic wrap.
Here in Denmark, PVC-based wraps have fallen out of favour and been replaced by PE-based products (assumed to be LDPE, as it is soft).

The wrap was applied to a mirror surface and clamped onto the regular print bed.
The first unheated print had way too much warping.
I then cleaned the wrap using rubbing alcohol (which visually roughened the surface a little) and may have heated the bed to 90C. This resulted in a slightly better print, but not quite as good as the chopping board.
Plastic wrap may be promising, but I quickly stopped playing with it, as it would probably have to be glued to the glass, which would complicate the process.

PP printed on tape

Next up was packaging tape.
At least some of it is made of PP; Biltema has some brown tape that is. I did, however, just use some clear stuff I had lying in the drawer.

The first attempt, unheated on glass, had too little adhesion.
I then roughened the surface using a scouring pad and heated the bed to 70C, which made the part stick relatively well to the tape.

Thoughts and notes:

  • Running the extruder at 235C might be too warm (there was stringing in the bowl print).
  • Maybe the glass surface conducts heat away too fast; could that be why the cutting board sticks so well unheated?
    Perhaps experiment with a more insulating/lower heat capacity base material.
  • For flexible/soft materials I expect a thicker filament is better, as it is harder for it to curl up inside the extruder.
    (Note to self: find time to build that Mendel90.)
  • Also, for softer materials I probably ought to switch to my 0.5mm nozzle.
  • Tape-based print bed materials have an advantage over solid ones: if the print is really stuck, peeling the tape off might help to remove the part without damaging it.

November 06, 2014

OpenPhoenux Hard- & Software Workshop 2014

By SlyBlog

Soon this year’s OpenPhoenux Hard- & Software Workshop (OHSW) will take place at the TUM Campus in Garching (near Munich). There will be a lot of interesting topics to discuss and people to meet. Make sure to drop by if you find some time!

The agenda and further details are now available online:


October 23, 2014

Simplify your life

By Michael "mickeyl" Lauer

After 6 years of being co-director and CTO of LaTe App-Developers, I feel it is time to make some changes.

It’s not that mobile development is no longer interesting to me; however, after doing (too) many small (5-20 person-day) iOS projects, I need some new challenges. Project work has been limiting my creativity and enforcing too much regularity in my daily routine. Besides, there’s hardly any room to do compelling software architecture work in projects of that size. You’re rather constantly working against the clock in order to make some profit on those fixed-price projects.

This year I took three months off in order to decide what to do next, and I have finally made up my mind. As of the end of this year, I’m resigning as co-director and CTO of LaTe. I will still be involved as a freelance collaborator, though, in order to continue supporting our biggest client.

With the regained freedom, I plan to explore some new directions with regard to my own apps and services. I need to catch up with what has happened in (Embedded) Linux, and I also want to polish my almost rusty Python and Vala skills.

Last but not least, I’m not going to do 40 hours per week any more – instead I want to spend more time with my family.


August 17, 2014

HowTo: Linux Hard Disk Encryption With LUKS [ cryptsetup Command ]

By Xiangfu Liu

via HowTo: Linux Hard Disk Encryption With LUKS [ cryptsetup Command ].

July 29, 2014

New Site

By Michael "mickeyl" Lauer

Every 6 years or so I revamp my website. This is the 3rd incarnation now (yes, I started early), featuring a new WordPress theme, a clean layout, and – most importantly – serious content improvements.


July 20, 2014


By Xiangfu Liu



May 07, 2014

10 years ago I got write access to OpenEmbedded

By Marcin "hrw" Juszkiewicz

It was the 8th of May 2004 when I did my first push to the OpenEmbedded repository. It was BitKeeper at that time, but if someone wants to look, the commit can be seen in git.

I will not write about my OE history as there are several posts about it on my blog already:

It was nice to be there through all those years and see how it grew: from a tool used by a bunch of open source lovers who wanted to build stuff for their own toys/devices, to a tool used by more and more companies. First ones like OpenedHand and Vernier. Then SoC vendors started to appear: Atmel, Texas Instruments and more. New architectures were added. New rewrites, updates (tons of those).

Speaking of updates… According to the statistics, I am still in the top 5 contributors to OpenEmbedded and the Yocto project ;)

There were commercial devices on the market with OpenEmbedded-derived distributions running on them. I wonder how many Palm Pre users knew that they could build extra packages with OE. And that work was not lost — LG Electronics uses WebOS on their current TV sets and switched the whole development team to using OpenEmbedded.

Since 2006 we have had annual meetings, and this year we have two of them: the European one as usual, and a North American one for the first time (there was one a few years ago during ELC, but I do not remember whether it was official).

There is OpenEmbedded e.V., which is a non-profit organization that takes care of OE finances and infrastructure. I was one step from being one of its founders, but the birth of my daughter was more important ;)

And of course there is the Yocto project. Born from OpenedHand’s Poky, it helped to bring order into OpenEmbedded. Layers (which had been discussed since at least 2006) were created and enforced, so recipes are better organized than they were before. It also helped with visibility. Note that when I write OpenEmbedded, I mean both OpenEmbedded and the Yocto project, as they are connected.

I remember days when MontaVista was seen as a kind of competitor (“kind of” because they were big and expensive while we were just a bunch of guys). Then they moved to OpenEmbedded and dropped their own tools. Another company that made such a switch was Denx: 3 years ago they released ELDK 5.0, which was OE-based, and they have made several releases since then.

What will the future bring? No idea, but it will be bright. And I will still be somewhere nearby.

All rights reserved © Marcin Juszkiewicz
10 years ago I got write access to OpenEmbedded was originally posted on Marcin Juszkiewicz website

April 20, 2014

Measuring printbed temperatures on a RepRapPro Huxley

By Talpadk

I have finally gotten around to measuring the surface temperature of my Huxley.

Temperature as function of set-point

Method and instruments used

For measuring the temperature, an Agilent U1233A with a U11186A (K-type thermocouple) was used.

The ambient temperature was measured by waiting for the display to settle and then taking a readout.

The heat bed temperatures were measured on top of the aluminium print surface with the polyimide tape left in place.
The thermocouple was held in place by another piece of polyimide tape.

The thermocouple was left on the print bed for 1 minute for the temperature to stabilize; the temperature was then read on the multimeter using the “avg” function after a 2 minute sampling period.


The temperatures were measured at the centre and approximately 1cm from the edge.
The centre temperature was measured an additional time at the end of the measurement cycle.
The print bed was in its forward position with the print head to the left at the end stop (cooling fan running).

The ambient temperature was measured as 22.1C at the start of the surface scan, and 24.4C at the end.
The heat bed was maintained at 85C using the 3D printer firmware.

NA       71.2C   75.8C
77.6C    71.1C
75.6C    77.1C   72.8C

After this, the thermocouple was reapplied using a fresh piece of polyimide tape at the centre of the print bed and left there.
The print bed set point was then reduced and the surface temperature measured.

Set point [C]   Measured [C]   Percentage
85              76.2           90
70              63.1           90
55              50.2           91
40              37.8           95


Some of the variance in the measurements across the bed might be related to the probe’s mounting relative to the surface and cooling to ambient.
Using a piece of foam or another insulator might improve this.
The lower measurement points may simply be caused by bad thermal contact with the print bed.
Heat sink compound could perhaps have alleviated some of this as well (and made a lot of mess).

Also, even though the measurements were taken as a 2 minute average, the temperature swings of the heat bed regulation may have contributed some noise.

Also, a thermal camera would have made this much easier and quicker; too bad they are so expensive.
(And the Fluke VT02/VT04 visual thermometers have such bad resolution.)


I would consider the bed temperature constant across the print bed within the uncertainty of my measurements.

At “higher” temperatures the surface temperature seems to be roughly 90% of the set point.
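That rule of thumb can be read straight off the table with a quick shell one-liner; the numbers are the set points and measurements from above:

```shell
# Compute measured/set-point percentages from the table above
printf '85 76.2\n70 63.1\n55 50.2\n40 37.8\n' \
  | awk '{ printf "set %sC -> measured %sC (%.1f%%)\n", $1, $2, 100*$2/$1 }'
```

The higher set points come out at roughly 90%, while 40C lands closer to 95%, matching the table.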

March 23, 2014

A fast and beautiful terminal

By Talpadk

rxvt-unicode/urxvt has long been my favourite terminal; it is fast and it supports faked transparency.

rxvt terminal with transparency

One problem with using a darkened background, however, was that some terminal colours simply were a bit too dark.

After a quick googling and a short man page reading, it was clear that this could actually be resolved quite easily.
Additionally, I can store some extra settings, making my keyboard shortcut for launching the terminal nice and simple.


sudo apt-get install rxvt-unicode
sudo apt-get install tango-icon-theme

The last line is only needed for the terminal icon and is optional if you comment out the iconFile resource.

Configuring rxvt-unicode

In the file ~/.Xdefaults add the following lines:

!===== rxvt-unicode resource definitions =====!
!The number of scrollback lines
URxvt*saveLines: 5000

!Add fading for unfocused windows
URxvt*fading: 33

!Specify the icon for the terminal window, requires the "tango-icon-theme" package
URxvt*iconFile: /usr/share/icons/Tango/16x16/apps/terminal.png

!Transparency setting
URxvt*transparent: true
URxvt*shading: 25
URxvt*background: Black
URxvt*foreground: White

!Colour setup for the darker background
URxvt*color0:  Black
URxvt*color1:  #ffa2a2
URxvt*color2:  #afffa2
URxvt*color3:  #feffa2
URxvt*color4:  #a2d0ff
URxvt*color5:  #a2a2ff
URxvt*color6:  #a2f5ff
URxvt*color7:  #ffffff
URxvt*color8:  #000000
URxvt*color9:  #ffa2a2
URxvt*color10: #afffa2
URxvt*color11: #feffa2
URxvt*color12: #a2d0ff
URxvt*color13: #a2a2ff
URxvt*color14: #a2f5ff
URxvt*color15: White

!Colour notes from the man page
!color0       (black)            = Black
!color1       (red)              = Red3
!color2       (green)            = Green3
!color3       (yellow)           = Yellow3
!color4       (blue)             = Blue3
!color5       (magenta)          = Magenta3
!color6       (cyan)             = Cyan3
!color7       (white)            = AntiqueWhite
!color8       (bright black)     = Grey25
!color9       (bright red)       = Red
!color10      (bright green)     = Green
!color11      (bright yellow)    = Yellow
!color12      (bright blue)      = Blue
!color13      (bright magenta)   = Magenta
!color14      (bright cyan)      = Cyan
!color15      (bright white)     = White

The last comments can of course be left out, but they are handy if you need to find a particular colour that you want to change.

Also adjust the shading resource to your liking.

After saving the file you may start the terminal using urxvt or rxvt-unicode and enjoy its speed and good looks.
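If an X session is already running, the edited ~/.Xdefaults is not always picked up automatically; merging it with xrdb first is the usual workaround (standard xrdb/urxvt commands, nothing custom):

```shell
# Merge the updated resources into the running X server,
# then launch a fresh terminal that will read them
xrdb -merge ~/.Xdefaults
urxvt &
```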

March 01, 2014


By John Sullivan

Spritz seems like a very interesting way to read quickly. It's the opposite of everything I've read (slowly) about speed reading, which focuses on using peripheral vision and not reading word-by-word. You're supposed to do things like move your eyes straight down the page, taking in whole lines at a time.

Interruptions seem like a big problem; interruptions that make me look away, or interruptions in my brain, where I might realize I've not been paying attention for some amount of time. Maybe they should have navigation buttons similar to video players, so you can skip backward 15 seconds at a time. I also do want to go back and review previous pages sometimes for reasons that have nothing to do with interruption, so I wouldn't want word-by-word to be the only way to view a text -- especially when reading nonfiction. I might event want it to work in a mode where you hold down the button on the side of your phone or tablet in order to move the words, and then have them automatically pause when you release. It feels like I'd want a lot of short breaks when reading in this style.

It should also be free software, but unfortunately I'm guessing it won't be. I hope someone will make a free software application along these lines -- the basics seem pretty basic.

February 21, 2014

Antialiased openscad rendering

By Talpadk

Std. 512×512 OpenSCAD rendering

Recent versions of OpenSCAD are capable of rendering objects/assemblies to images.
To the right is an example of the default 512×512 image quality, produced by the command:

openscad -o render.png assembly.scad

Below it is an anti-aliased version of the same scad file.
I used the common trick of generating an oversized image and downscaling it.
It was created with the following two commands:

openscad -o render.png  --imgsize=2048,2048 assembly.scad
convert render.png -resize 512x512 render.png

If you update your project renderings using a makefile/script, I don’t consider it much of a hassle considering the improvement in image quality.
Also, at least on my laptop with the currently relatively simple scad file, rendering is still fast.

2048×2048 OpenSCAD render downscaled to 512×512

In case you are wondering, the assembly is a new CNC mill I’m designing, which hopefully is an improvement over the last design.

The old design is available HERE
The new design is being created HERE

Unlike the old design, the new one is being pre-assembled in OpenSCAD, hopefully preventing having to print parts that only fit together in my head, saving both time and plastic.

Both designs are hosted on Cubehero, my favourite site for sharing designs on.
It comes with built-in version control through git (it also has a web interface for “kittens”).
Wil, who runs the site, is a friendly and helpful guy, and it is not bogged down with stupid End User License Agreements like another site…
I highly recommend it…

February 11, 2014

It is 10 years of Linux on ARM for me

By Marcin "hrw" Juszkiewicz

It was somewhere between the 7th and 11th of February 2004 when I got the package with my first Linux/ARM device. It was a Sharp Zaurus SL-5500 (also named “collie”) and it all started…

At that time I had a Palm M105 (which I still own) and a Sony CLIE SJ30 (both running PalmOS/m68k) but wanted a hackable device. But I had no idea what this device would do with my life.

It took me about three years to get to the point where I could abandon my daily work as a PHP programmer and move to the somewhat risky business of embedded Linux consulting. But it was worth it. Not only from a financial perspective (I paid more tax in the first year than I had earned in the previous one) but also for my development. I met a lot of great hackers, people with knowledge which I did not have, and I worked hard to be a part of that group.

I was a developer in multiple distributions: OpenZaurus, Poky Linux, Ångström, Debian, Maemo, Ubuntu. My patches landed in many other embedded and “normal” ones as well. I patched a countless number of software packages to get them built and working. Sure, not all of those changes were sent upstream; some were just ugly hacks, but this started to change one day.

I worked as distribution leader of OpenZaurus. My duties (still in free time only) were user support and maintaining repositories and images. I organized testing of pre-release images with over one hundred users — we had all supported devices covered. There was an “updates” repository where we provided security fixes, kernel updates and other improvements. I also officially ended development of this distribution when we merged into Ångström.

I worked as one of the main developers of Poky Linux, which later became the Yocto project. I learnt about build automation, QA control, build-after-commit workflows and many other things. During my work with OpenedHand I also spent some time learning the differences between the British and American versions of English.

I worked with some companies based in the USA. This allowed me to learn how to organize teamwork with people in quite distant timezones (Vernier was based in Portland, so a 9 hour difference). It was useful then and still is, as most of the Red Hat ARM team is US based.

I remember moments when I had to explain to some people (including my mom) what I am doing at work. For the last 1.5 years I used to say “building software for computers which do not exist”, but this is slowly changing, as AArch64 hardware exists but is not on the mass market yet.

Now I have gotten to a point where I am recognized at conferences by random people, while at FOSDEM 2007 I knew just a few guys from OpenEmbedded (but connected many faces with names/nicknames there).

I have played with more hardware than I wanted. I still have some devices which I never booted (an FRI2 for example). There are boards/devices which I would like to get rid of, but most of them are so outdated that they may only go to electronic trash.

But if I had the option to go back those 10 years and think again about buying the Sharp Zaurus SL-5500, I would not change a thing, as it was one of the best things I did.


February 09, 2014

Welcome, 2014

By Michael "mickeyl" Lauer

So 2013 is finally over, and it’s been an energy-sapping year: business-, baby-, and building-wise.

Business. The stagnation that was present for pretty much the first half of the year, and which forced us to downsize a bit, was replaced by too many projects all at once in the 2nd half of the year. And while that was welcome, since it saved us from closing our doors, it prevented us from working on our private projects, i.e. our apps in the store, but also personal pet projects – let alone anything open source.

Baby. After 7 horrible months between Lara Marie’s 5th and 13th month, she finally began sleeping great, often 12 hours without waking up. She’s now 2.5 years old and everything is good. Still, she’s a demanding little one, enjoying being offered a selection of everything instead of deciding on her own. I love her.

Building. With a bit of (natural) delay, our new house was finished by November and we did move on 9th of December. We’re now 2 months in here and it’s feeling mostly great. We had to monitor and decide on a LOT of things during the construction phase, but apart from the usual minor issues, the building quality is good and we enjoy the comfort of having a dedicated room for Lara Marie. Being able to use the living room again after 20:00 is nice :)

I took the liberty of installing a dedicated server for the house, which lives in a 19″ rack in the utility room. I’m going to post about the networking infrastructure soon.

Referring to my last post, I’m still planning on doing the sabbatical, but due to some unforeseeable circumstances with my wife’s health, it had to be postponed for a bit. It’s going to happen in 2014 though, which is why I’m sure, 2014 is going to be better than 2013.


December 16, 2013

Linking CSS properties with scroll position: A proposal

By Chris Lord

As I, and many others have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, this will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
      and transition-stop        = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.
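Assuming linear interpolation between stops (the post doesn’t specify an interpolation model), the lookup such a declaration implies could be sketched in JavaScript; the function name and stop representation here are mine, not part of the proposal:

```javascript
// Each transition-stop becomes a [relativeScrollPos, value] pair; e.g. the
// opacity track above would be [[0, 0], [100, 100], [200, 0]] (values in %).
// Interpolate linearly between neighbouring stops, clamping at the ends.
function interpolateStops(stops, scrollPos) {
  if (scrollPos <= stops[0][0]) return stops[0][1];
  var last = stops[stops.length - 1];
  if (scrollPos >= last[0]) return last[1];
  for (var i = 1; i < stops.length; i++) {
    var prev = stops[i - 1], next = stops[i];
    if (scrollPos <= next[0]) {
      var t = (scrollPos - prev[0]) / (next[0] - prev[0]);
      return prev[1] + t * (next[1] - prev[1]);
    }
  }
}
```

So at a relative scroll position of 50px the opacity track above would yield 50%, rising to 100% at 100px and falling back to 0% at 200px.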

But then Paul Rouget made me aware that Anthony Ricaud had the same idea, but instead of this slightly arcane syntax, to tie it to CSS animation keyframes. I think this is more easily implemented (at least in Firefox’s case), more flexible and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive though, I suppose (though the interactions between them might mean as a platform developer, it’d be in my best interests to suggest that they should :)).

I’m not aware of any proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane, that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.
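The “distance along the vector, unaffected by distance to the vector” rule is just a projection of the scroll offset onto the bounds direction; a sketch (the function name and the 0–1 normalisation are my choices, not part of any spec):

```javascript
// Project the scroll offset onto the normalised bounds vector and return
// progress along it, clamped to [0, 1]. The perpendicular component of the
// scroll offset is discarded, as described above.
function progressAlongBounds(scrollX, scrollY, boundsX, boundsY) {
  var len = Math.sqrt(boundsX * boundsX + boundsY * boundsY);
  var distance = (scrollX * boundsX + scrollY * boundsY) / len;
  return Math.min(1, Math.max(0, distance / len));
}
```

With a bounds vector pointing straight down, e.g. (0, 400), the horizontal scroll position drops out of the dot product entirely, which is exactly the behaviour described.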

[Edit] Paul Rouget makes the suggestion that rather than having a prefixed copy of animation, that a new property be introduced, animation-controller, of which the default would be time, but a new option could be scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.
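Under that animation-controller variant, a declaration might hypothetically look like the following. None of this syntax exists in any engine; it is purely a sketch of the proposal:

```css
.hero {
  animation: fade-and-grow linear;
  animation-controller: scroll;        /* proposed; the default would be time */
  animation-scroll-bounds: 0px 200px;  /* proposed replacement for duration:
                                          a vector straight down the page */
}

@keyframes fade-and-grow {
  0%   { opacity: 0; transform: scale(0.5); }
  50%  { opacity: 1; transform: scale(1); }
  100% { opacity: 0; transform: scale(0.5); }
}
```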

What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.

November 29, 2013

Efficient animation for games on the (mobile) web

By Chris Lord

Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I’d like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I’ll try to improve in the future :)

There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it’s worth getting some of these things down, as there’s evidence to suggest (in popular and widely-used UI libraries, for example) that it isn’t necessarily common knowledge. I’d also love to know if I’m just being delightfully/frustratingly naive in my assumptions.

First off, let’s get the basic stuff out of the way.

Help the browser help you

If you’re using DOM for your UI, which I’d certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you’re unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can’t easily make when you’re manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it’s fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can’t easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and form a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).

Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it’s worth trying to stick to animating only transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayouting, or other expensive calculations. If you’re setting a transform on something that would overlap its container’s bounds, you may want to set overflow: hidden on that container for the duration of the animation.

Use requestAnimationFrame

When you’re animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you’re running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout, is that your animations must be time-based instead of frame-based. i.e. you must keep track of time and set your animation properties based on elapsed time. requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

var startTime = -1;
var animationLength = 2000; // Animation length in milliseconds

function doAnimation(timestamp) {
  // Calculate animation progress
  var progress = 0;
  if (startTime < 0) {
    startTime = timestamp;
  } else {
    progress = Math.min(1.0, (timestamp - startTime) /
                             animationLength);
  }

  // Do animation ...

  if (progress < 1.0) {
    requestAnimationFrame(doAnimation);
  }
}

// Start animation
requestAnimationFrame(doAnimation);

You’ll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don’t affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.

To save battery life, it’s best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren’t solid guarantees of what order things will be called in with requestAnimationFrame (though in my experience, it’s in the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

function redraw() {
  drawPending = false;

  // Do drawing ...
}

var drawPending = false;
function requestRedraw() {
  if (!drawPending) {
    drawPending = true;
    requestAnimationFrame(redraw);
  }
}

Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.

Remember that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it’s certainly overkill for simple games, you may want to consider using Worker threads. It’s worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.
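One way to keep that heavy work out of the frame callback is a small deferred-work queue; the structure and names below are my own sketch, not from any library:

```javascript
// Keep the rAF callback lean: update state and draw only, and push any heavy
// follow-up work (level loading, score recalculation, ...) onto a queue that
// is drained from a timeout, i.e. after the frame has been produced.
var deferredWork = [];

function defer(task) {
  if (deferredWork.length === 0) {
    setTimeout(flushDeferred, 0); // schedule one flush outside the frame
  }
  deferredWork.push(task);
}

function flushDeferred() {
  var tasks = deferredWork;
  deferredWork = [];
  tasks.forEach(function (task) { task(); });
}
```

Inside an animation callback you would then call defer(expensiveThing) instead of doing the work inline, so the browser can get the frame to the screen first.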

Measure performance

One of the reasons I bring this topic up, is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, drive all their animations completely individually, or other similar things that aren’t conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it’s almost there on Galaxy Nexus-class hardware) and playable on low-end (almost there on a Geeksphone Keon). I’d have liked to use as much third party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile.

How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I’d certainly recommend doing this). I assumed that my own, naive code was making the game slower than I’d like. To an extent, this was true; I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn’t quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I’ve mentioned in this post; my animation code had some corner cases where it could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens – that particular issue is now fixed), and some of the third party code I was using was poorly optimised.

A take-away

To help combat poor animation performance, I wrote Animator.js. It’s a simple animation library, and I’d like to think it’s efficient and easy to use. It’s heavily influenced by various parts of Clutter, but I’ve tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game’s drawing function onto the end of the callback, like so:

animator.requestAnimationFrame =
  function(callback) {
    requestAnimationFrame(function(t) {
      callback(t);
      redraw();
    });
  };

My game’s redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator’s activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn’t out yet, but there’s a little screencast of it running on a Nexus 5:

Alternative, low-framerate YouTube link.
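The single-redraw bookkeeping described a couple of paragraphs up (animation callbacks only update state; a redraw is requested from outside animations only when the animator is idle) can be sketched with the dependencies passed in. Only the activeAnimations property comes from the post; I’m assuming here that it is a count, and the rest of the structure is mine:

```javascript
// Build a function that requests a redraw only when no animations are
// running; running animations already drive a redraw every frame, so asking
// for another would draw the same frame twice.
function makeIdleRedraw(animator, requestFrame, redraw) {
  return function requestIdleRedraw() {
    if (animator.activeAnimations === 0) {
      requestFrame(redraw);
    }
  };
}

// In a browser this might be wired up as (illustrative only):
//   var requestIdleRedraw =
//     makeIdleRedraw(animator, window.requestAnimationFrame.bind(window), redraw);
```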

November 02, 2013

Introducing OpenPhoenux Neo900

By SlyBlog

The latest device in the OpenPhoenux open hardware family is the Neo900, the first true successor to the Nokia N900. The Neo900 is a joint project of Openmoko veteran Jörg Reisenweber and the creators of the GTA04/Letux2804 open hardware smartphone at Golden Delicious Computers. Furthermore, it is supported by the N900 Maemo5/Fremantle community, the Openmoko community and the OpenPhoenux community, who are working together to get closer to their common goal of providing an open hardware smartphone which is able to run 100% free and open source software, while being independent of any big hardware manufacturer.

OpenPhoenux Neo900

With the big ecosystem of free and open Maemo5/Fremantle applications, the hacker-friendly N900 with its excellent hardware keyboard, the variety of free operating systems from the Openmoko community (SHR, QtMoko, Replicant, …) and the OpenPhoenux community’s experience in designing and producing open hardware devices (e.g. GTA04), they want to bring the best of all worlds together in one single device, the Neo900.

The Neo900 is meant to be an upgraded N900 with a newly designed and more powerful motherboard, based upon the existing and tested OpenPhoenux GTA04 design. Together with the nice housing of the N900 (e.g. slider, hardware keyboard, big screen, …), the aim is to create “the hacker’s most beloved device”. In the same spirit of the OpenPhoenux community, which created unique cases for their GTA04 devices out of aluminium, wood or 3D printing, there is also an effort to build an aluminium housing for the N900, which might lead to personalized and self-produced cases for the Neo900 in the future, and thus independence from spare parts of existing N900 smartphones.


Due to the fact that the Neo900’s new motherboard is very similar to the GTA04, it is possible to reuse most of the low level software stack like development tools, the Bootloader and the Linux Kernel from the GTA04 project, with just minor modifications applied. This will speed up the software development process of this new open hardware platform a lot!

To fund the development and prototyping of this new open hardware device, which is made in Germany, a crowdfunding campaign was started a few days ago, in order to collect 25.000€ (which is by now already halfway reached!). Depending on the outcome of this fundraising, the project might be able to provide better hardware specs than the following minimum key-feature set:

  • TI DM3730 CPU (OMAP3 ARM Cortex A8) with 1+ GHz
  • 512+ MB RAM, 1+ GB NAND flash, 32+ GB eMMC, Micro-SD-Reader
  • 3.75G module for UMTS/CDMA; 4G (LTE) optional
  • USB 2.0 OTG High Speed
  • GPS, WLAN, Bluetooth
  • Accelerometer, barometric Altimeter, Magnetometer, Gyroscope
  • support of N900 camera module


If you want to see the N900 live on, help the independent open hardware community succeed, or are looking for a new, hacker-friendly smartphone, you should consider supporting the fundraising with a donation. If you donate 100€ or more, your donation will also serve as a rebate on a finished device, once they are ready.

[Update 2013-11-04] The goal of 25.000€ has now been reached, less than a week after the fundraiser started! Thanks to everybody who donated and spread the word, helping to make that happen. If you want to qualify for the rebate on the finished device, it is still possible to donate.

Let the OpenPhoenux fly on!


November 01, 2013

New hardware announced: Neo900 / GTA04 development

By openmoko-fr

Hello everyone!

It has been a long time since I last wrote on this blog, but that doesn’t mean activity around OpenMoko has died.

Indeed, Radek decided that QtMoko was stable enough to slow down the pace of development, and he explained on the community mailing list that, for him, his distribution is mainly useful while waiting for an Android port to the GTA04.

Speaking of Android, the Replicant project tried to port their version of Android to the GTA04, but they ran into trouble with the kernel, which has a few incompatibilities with Android. Since Replicant has only two developers, and the more active one doesn’t know kernel development well enough to port Replicant to the GTA04, it was decided to wait until the kernel is usable before continuing the effort.

That is why Golden Delicious is following Linux kernel development at each RC (their work is currently based on version 3.12), since Android is gradually merging into the mainline Linux kernel with each release. So, with a little more time, I hope we will be able to benefit from Golden Delicious’s kernel expertise and Replicant’s Android expertise to finally get a usable Android 4.x port on the GTA04 :)

Apart from that, Golden Delicious has decided not to give up on making hardware that is “as free as possible” and is proposing a new project together with the Nokia N900 community: the Neo900.

The goal of this project is to build on the GTA04’s development to revive the N900 community and offer them somewhat freer hardware (the N900 has a free OS, Maemo, but the hardware was never opened up by Nokia). The idea is to rework the GTA04 board to fit into the N900’s case, and to take the opportunity to add an LTE module.

Don’t worry about the multiplication of Golden Delicious projects: the goal is to share as many chipsets as possible between them, in order to place larger orders than a single project could.

The most exciting part of this project is that it brings the various open-source/free-software communities together around the development of a single piece of hardware, and can thus offer users even better support.

A new project also means funding: Golden Delicious launched its donation campaign on October 30th. Don’t hesitate to contribute!

Note also that the first milestone of €5,000 needed to start development was already reached yesterday, and at the time of writing this article the campaign has just passed €10,000.

See you soon!


October 28, 2013

Sabbatical Over

By Chris Lord

Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well and there are certainly lots of Firefox bugs I want to work on too, so perhaps it’s about that time now (also, it’s not that long till Christmas anyway!)

So, what did I do on my sabbatical?

As I mentioned in the previous post, I took the time off primarily to work on a game, and that’s pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we’d reckoned for, we decided to work on a smaller puzzle game too. I had a prototype working in a day; that prototype was rewritten in another day because DOM is slow, then rewritten again in a third because, as it turns out, canvas isn’t particularly fast either. After that, it’s been polish and refinement; it still isn’t done, but it’s fun to play and there’s promise. We’re not sure what the long-term plan is for this, but I’d like to package it with a runtime and distribute it on the major mobile app-stores (it runs in every modern browser, IE included).

The first project ended up being a first-person, rogue-like, dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I’m not sure what we expected, but yes, it’s a lot of work. In this time, we’ve gotten our idea of the game a bit more solid, designed some interaction, worked on various bits of art (texture-sets, rough monsters) and have an engine that lets you walk around an area, pick things up and features deferred, per-pixel lighting. It doesn’t run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink based browsers. IE11’s WebGL also isn’t complete enough to render it as it is, though I expect I could get a basic version of it working there. I’ve put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it’s recognisable as a game.

You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

What did I learn on my sabbatical?

Well, despite what many people are pretty eager to say, the web really isn’t ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn’t ready, I can say that it’s going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS makes this really easy to deploy too.

Given how positive that all sounds, why isn’t it ready? These are the top bugs I encountered while working on some games (from a mobile specific viewpoint):

WebGL cannot be relied upon

WebGL has finally hit Chrome for Android release version, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS lets you use it on iOS too, even. Availability isn’t the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop has often caused the browser to crash in my testing. I’ve had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed and the phone isn’t visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable, and annoying. This seems to vary a lot per phone, but is not something I’ve experienced with native development at this scale.

An aside, Chrome also has an odd bug that causes a security exception if you load an image (on the same domain), render it scaled into a canvas, then try to upload that canvas. This, unfortunately, means we can’t use WebGL on Chrome in our puzzle game.

Canvas performance isn’t great

Canvas ought to be enough for simple 2d games, and there are certainly lots of compelling demos about, but I find it’s near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn’t an option, but it has markedly worse performance.

Porting to Chrome is a pain

A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn’t your primary target, you’re going to have fun porting to it later. I don’t want to get into specifics, but I’ve found that Chrome often lays out differently (and incorrectly, according to specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is also quite buggy too, and often ignores set perspective. There’s also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3d transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented.

Another aside, touch input in Chrome for Android has unacceptable latency and there doesn’t seem to be any way of working around it. No such issue in Firefox.

Appcache is awful

Uh, seriously. Who thought it was a good idea that appcache should work entirely independently of the browser cache? Because it isn’t a good idea. Took me a while to figure out that I have to change my server settings so that the browser won’t cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and this is really not the case here.
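The post doesn’t say which server or which settings were involved; purely as an illustration, with Apache one might stop the HTTP cache from holding independent copies of appcache-managed assets with something like:

```apache
# Assumption: this is one Apache-flavoured way to do it, not necessarily
# what the author did. Keep the manifest and appcache-managed assets out
# of the ordinary HTTP cache so appcache update checks see fresh copies.
<FilesMatch "\.(appcache|html|png|jpg)$">
  Header set Cache-Control "no-cache, must-revalidate"
</FilesMatch>
```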

Aside, Firefox has a bug that means that any two pages that have the same appcache manifest will cause a browser crash when accessing the second page. This includes an installed version of an online page using the same manifest.

CSS transitions/animations leak implementation details

This is the most annoying one, and I’ll make sure to file bugs about this in Firefox at least. Because setting of style properties gets coalesced, animations often don’t run. Removing display:none from an element and setting a style class to run a transition on it won’t work unless you force a reflow in-between. Similarly, switching to one style class, then back again won’t cause the animation on the first style-class to re-run. This is the case at least in Firefox and Chrome, I’ve not tested in IE. I can’t believe that this behaviour is explicitly specified, and it’s certainly extremely unintuitive. There are plenty of articles that talk about working around this, I’m kind of amazed that we haven’t fixed this yet. I’m equally concerned about the bad habits that this encourages too.

DOM rendering is slow

One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are and how you can easily create user interfaces in it, visually tweak and debugging them. You would naturally want to use this in any app or game that you were developing for the web primarily. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try and improve rendering speed. This is no good at all, in my opinion, as this is the big advantage that the web has over native development. If you’re using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn’t make cross-device testing any easier and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception to this seems to be IE, which has absolutely stellar rendering performance. Well done IE.

This has been my experience with making web apps. Although those problems exist, when things come together, the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across varying size and specification of phone, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome beta. Being able to point someone to a URL to play a game, with no further requirement, and no limitation of distribution or questionable agreements to adhere to is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future’s bright.

October 23, 2013

3D printing using Ninja Flex filament

By Talpadk

Yesterday I received some of the relatively new “Ninja Flex” filament sold by Fennerdrives.

As the internet doesn’t seem to overflow with print reviews / settings for it yet I decided to post some words about it.

NinjaFlex Sapphire 1.75mm

The Filament

It is always difficult to measure a soft material, but using my calipers I measured the diameter to be 1.75mm, as it is supposed to be.
The filament also seems to be nice and round.

I ordered the “sapphire” version of the filament, and it has a nice matte blue color which turns glossy when printed.
It is also slightly translucent when printed thinly.

The filament is very flexible (I can tie a tight knot in it without it breaking).
The filament is also elastic, but not as much as a regular rubber band… perhaps 5-8 times harder, if I had to guess.

The material is not known to me, but I strongly suspect it is polyurethane (PUR) with a surface coating/treatment to make it less sticky.
Fennerdrives already produces PUR belting, which had been used in 3D printing before this material appeared, and the matte-to-glossy change also points that way.
(Update: it has been confirmed that it is polyurethane.)

The Fennerdrives recommended settings are:

Recommended extruder temperature: 210 – 225°C
Recommended platform temperature: 30 – 40°C

The filament isn’t exactly cheap: I would say roughly 3x the cost of the inexpensive PLA/ABS I normally buy, including shipping.
Then again, soft/specialty filaments don’t normally come cheap.
(Actually a lot of the cost comes from the somewhat expensive UPS shipping.)

Fennerdrives does ship both from the US and the UK, living in Denmark (inside the EU) this is a big plus for me.

3D model for the rubber feet

The test prints

As I’m currently designing and building a tabletop CNC mill I thought that I might as well print some rubber feet for it.

The print isn’t necessarily the simplest one due to the outward-sloping, unsupported walls.
However, the angle is quite close to vertical and wouldn’t normally cause problems.

The 3D model was created using FreeCAD which is my preferred open source CAD package.

I used Slic3r for generating the G-code.

And my printer is a RepRapPro Huxley, which has a Bowden extruder that might actually not be ideal for extruding a soft and springy filament.

Print 1

Was done using my regular PLA/ABS profile.

I had to abort the very first attempt, as the filament wasn’t being extruded continuously.

  • I increased the extruder temperature from the low value that had felt right while manually extruding the filament
  • Reduced the speed using the M220 command
  • And upped the heat bed temperature to 85 deg C

Much to my amazement, the rubber foot actually printed sort of okay.
It was, however, sticking so hard to the “Kapton” tape that removing it pulled the tape off the print bed!

Prints 1 through 4

Print 2

I then created a dedicated profile for printing the rubber filament.

  • Reduced the printing speeds to avoid having to scale them with the M220 command
  • Removed the “Kapton” tape, as it had become wrinkled anyway
  • Printed without heat on the bare aluminium print bed

It printed with roughly the same quality as the first print but was very, very easy to remove.

Print 3

I noticed that the hot end seemed quite “laggy”, probably due to the flexible nature of the filament, and I therefore made some additional changes.

  • All print speeds were set to 15 mm/s to avoid the extruder changing speed
  • Retraction was disabled, again to keep a constant pressure in the hot end
  • “Skirt loops” was increased to 4, to give the hot end more time to build up a constant pressure
  • Infill was reduced from 50% to 0% to see if vibrations were causing the surface defects
  • The heated bed was set to 40 °C

Just after starting the print I realized that setting the infill to 0% would cause some parts to be printed in mid-air, with nothing supporting them from below.
Out of curiosity I did however allow the print to continue.

The printer managed to print the part despite the fact that it was “unprintable”…
Also, the surface finish was very satisfying.

Due to the 0% infill the part was slightly softer, as was to be expected.

Print 4

Since I don’t like printing the impossible, as it may or may not succeed, I made one small change:

  •  I changed the infill back to 50%

I’m pleased to report that the surface finish seems to be just as good as before.

Printer settings

Please keep in mind that printer settings vary from printer to printer, and that the ones described here may not be optimal even for my own printer.

The following list is roughly sorted by what I think are the most important settings:

  • No retract
  • Uniform print speed (of 15 mm/s)
  • Multi-loop skirt (4 loops)
  • Hot end temperature 240 °C
  • Print bed temperature 40 °C
  • Travel speed 100 mm/s
  • Extrusion width 0.5 mm with a 0.5 mm nozzle
  • First layer 50% (might actually be a bad idea)
  • Layer height 0.3 mm
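As a sketch, the list above maps onto Slic3r config keys roughly like this. Key names follow a Slic3r ini export, I read “first layer 50%” as the first-layer height (which is my assumption), and this is not a complete, drop-in profile:

```ini
# Sketch only -- not a complete Slic3r profile
temperature = 240
bed_temperature = 40
perimeter_speed = 15
infill_speed = 15
travel_speed = 100
retract_length = 0
skirts = 4
extrusion_width = 0.5
layer_height = 0.3
first_layer_height = 50%   # "first layer 50%" -- might instead mean first_layer_speed
fill_density = 50%
```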

Again, while reading this keep in mind that I haven’t experimented much with the temperatures.

I had some undocumented failures after print 1 where the extruder/hot end seemed to jam, and I haven’t dared to reduce the temperature again, as I needed/wanted some functional prints.
The problems may, however, be related to too-fast extrusion, filament loading, and/or the filament being deformed by the retracts.

My prints were stringing slightly internally; lowering the temperature may be able to reduce this…



  • It has been confirmed by the friendly customer support at Fennerdrives that the material is actually polyurethane
  • Even without any heat on the heated bed, it still sticks very, very well to “Kapton”