August 31, 2015

Is anyone still sampling?

By Michael "mickeyl" Lauer

Hi fellow readers, this time we’re talking music. For quite some time, I’ve been wondering about the sound of a certain era which seems to have been abandoned by pretty much all bands. It was the era when sampling technology became affordable and a bunch of musicians adopted this (back then very limited) technology in creative ways to use any kind of noise in a musical context.


In this period (say, 1984-1990), bands like “The Art Of Noise”, “Jean-Michel Jarre”, “Depeche Mode”, “Moskwa TV”, and many more emerged, thrilling their audiences with sounds that had never been heard before. Obviously we lost this form of art somewhere along the way – I can’t think of any currently active band embracing this kind of instrument. Even the aforementioned pioneers have “developed” (why must every successful band “develop” and lose the distinct quality that made them big? but that’s a topic for another blog post) and turned their backs on it.


Is it because listeners grew tired of it? Or did the massive improvements in sampling quality and length (thanks to the vast price decline in computer memory, we now have hours of sampling time where back in the 80s we only had seconds), and the resulting limitless possibilities, strangle the creativity (again!)?


I still love to hear (and produce) those kinds of sounds. So let me ask: Is anyone still sampling? (except sample library vendors, that is)

Cheerio – Gone fishing, err… field recording!

The post Is anyone still sampling? first appeared on


April 24, 2015

Web Navigation Transitions

By Chris Lord

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing people’s thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.

Navigation Transitions specification proposal


An API is suggested that will allow transitions to be performed between page navigations, requiring only CSS. The API is intended to be flexible enough to allow animations on different pages to be performed in synchronisation, and to allow a particular transition state to be selected without needing to resort to JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation is operating in a backwards direction, by the user pressing the back button in browser chrome, or when initiated from JavaScript via manipulation of the location or history objects, animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced, and animations will run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced, and those animations also run in reverse.
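The direction rule above can be captured as a tiny pure function. This is only an illustrative sketch of the described behaviour (the proposal itself needs no JavaScript, and the function and property names here are mine, not part of any API):

```javascript
// Which transition stylesheet each document plays, per the rule above.
// "leaving" is the document being navigated away from, "entering" is
// the destination document.
function transitionSheets(backwards) {
  if (!backwards) {
    return {
      leaving: { rel: "transition-exit", reversed: false },
      entering: { rel: "transition-enter", reversed: false },
    };
  }
  // Backwards: each document replays its opposite sheet, in reverse.
  return {
    leaving: { rel: "transition-enter", reversed: true },
    entering: { rel: "transition-exit", reversed: true },
  };
}

console.log(transitionSheets(false).leaving.rel); // "transition-exit"
console.log(transitionSheets(true).leaving.rel);  // "transition-enter"
```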


Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  /* … */
}

I think this would indeed be nicer, though I think the exact naming might need some work.


When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, for the specified durations of each stylesheet.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.
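A quick sketch (illustrative only, not part of the proposal) that reproduces the direction table and the delay arithmetic from the example above:

```javascript
// animation-direction mapping when the timeline is reversed (table above)
const reverseDirection = {
  "normal": "reverse",
  "reverse": "normal",
  "alternate": "alternate-reverse",
  "alternate-reverse": "alternate",
};

// An animation originally running from `delay` to `delay + duration` must,
// when the transition plays backwards, end where it originally started:
// its new delay is measured back from the end of the transition window.
function reversedDelay(totalDuration, duration, delay = 0) {
  return totalDuration - (delay + duration);
}

console.log(reversedDelay(0.5, 0.25));      // 0.25
console.log(reversedDelay(0.5, 0.25, 0.1)); // ≈ 0.15
console.log(reverseDirection["alternate"]); // "alternate-reverse"
```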

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.


When a transition starts, a ‘navigation-transition-start‘ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end‘ event will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing for customising the transition based on request data.

JavaScript execution could potentially cause a navigation transition to run indefinitely; it is left to the user agent’s general-purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.
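These applicability rules amount to a simple predicate. The following is a sketch under the stated rules; the function name and the shape of the `nav` object are mine, purely illustrative:

```javascript
// Decide whether a navigation transition applies, per the rules above.
// `nav` is a hypothetical description of one navigation.
function transitionApplies(nav) {
  // Exception: first document load only needs an enter transition.
  if (nav.firstLoad) return nav.toHasEnter;
  // Cross-origin navigations never transition.
  if (nav.fromOrigin !== nav.toOrigin) return false;
  return nav.backwards
    ? nav.fromHasEnter && nav.toHasExit // backwards: roles swap
    : nav.fromHasExit && nav.toHasEnter;
}

console.log(transitionApplies({
  fromOrigin: "https://a.example", toOrigin: "https://a.example",
  backwards: false, fromHasExit: true, toHasEnter: true,
})); // true
```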

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.


Simple slide between two pages:


page-1.html:

<html>
  <head>
    <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
    <style>
      body {
        border: 0;
        height: 100%;
      }

      #bg {
        width: 100%;
        height: 100%;
        background-color: red;
      }
    </style>
  </head>
  <body>
    <div id="bg" onclick="window.location='page-2.html'"></div>
  </body>
</html>

page-1-exit.css:

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}


page-2.html:

<html>
  <head>
    <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
    <style>
      body {
        border: 0;
        height: 100%;
      }

      #bg {
        width: 100%;
        height: 100%;
        background-color: green;
      }
    </style>
  </head>
  <body>
    <div id="bg" onclick="history.back()"></div>
  </body>
</html>

page-2-enter.css:

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%); }
  to {}
}

I believe that this proposal is easier to understand and use for simpler transitions than Google’s; however, it becomes harder to express animations where one element is transitioning to a new position/size in a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode, Ctrl+Shift+M).

I would love to hear people’s thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true, and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

March 29, 2015

API docs live again

By Michael "mickeyl" Lauer

After the outage of the VM it has been hosted on, we are now almost fully back. I have integrated the DBus API documentation (which had been hosted on the doc subdomain) into the top-level documentation. The source code has already been moved as well, and the new mailing list has been alive for a few months at goldelico.

Now that the documentation is live again, I have plans for short, mid, and long term:

1. Short-term I’m working on completing the merge to libgee-0.8 and then cut the next point release.

2. Mid-term I want to discuss integrating the unmerged branches to the individual subprojects and continue cleaning up.

3. Long-term I’m looking for a new reference hardware platform, funding, and contributors, and deciding whether to move the existing reference platform to kdbus (or another IPC).

If you have any plans or questions with regards to the initiative and its subprojects, please contact me via the FSO mailing list (preferred) or personally.


March 22, 2015

RFC: Future of SidPlayer, ModPlayer, PokeyPlayer for iOS

By Michael "mickeyl" Lauer

This is a post about the state of SidPlayer, ModPlayer, and PokeyPlayer on iOS.

Coming from the background of the C64 and AMIGA demo scenes, I always thought that every platform needs a way to play back the musical artwork created by those great musicians in the 80s and 90s on machines like the Commodore C64, the AMIGA, and the ATARI XL.

Fast forward to the iPhone: Being excited about the new platform, me and another guy from the good ole’ AMIGA days started working on SidPlayer in 2008, shortly after Apple opened the developer program for European developers. After some months of work, we had the first version ready for the Apple review, standing on the shoulders of the great libsidplay and the HVSC. Due to libsidplay being GPL, we had to open-source the whole iOS app. To our surprise, _this_ hasn’t been a problem with the Apple review.

SidPlayer for iOS was available for some months, then we developed adaptations for AMIGA .mod files (ModPlayer) and Atari XL pokey sound files (PokeyPlayer). In the meantime, iOS development went from being a hobby to our profession (we formed the LaTe App-Developers GbR), which unfortunately had great impact on our pet projects. Being busy with paid projects, we could not find enough time to do serious updates to the players.

The original plan in 2008 was to create an app that has additional value around the core asset of a high quality retro computing player, such as a retro-museum-in-a-box (giving background information about those classic computing machines) and a community that shares playlists (important given the amount of songs), comments, statistics, and ratings. Alas, due to our time constraints during the lifetime of the apps, we could only do small updates in order to fix bugs with newer operating system versions. There was not enough time to add features, do an iPad adaptation, or to unify the three distinct player apps. In the meantime, other apps came along that also could play some of those tunes, although we weren’t (and still aren’t) very excited about their user interfaces and sound quality.

The final nail in the coffin came in 2013, when – much to our surprise – out of the blue (not even due to reviewing an update), we received a letter from Apple where they claimed that our player apps would violate the review guidelines, in particular the dreaded sections 2.7 / 2.8, which read “2.7: Apps that download code in any way or form will be rejected.” and “2.8: Apps that install or launch other executable code will be rejected”. Although we got past this guideline for several years, this turned into a showstopper – some weeks later, Apple removed our apps from the store.

Unfortunately, those sections really apply – at least for the Sid- and PokeyPlayer. Both players rely on emulating parts of the CPU and custom chip infrastructure of the C64 / Atari XL (hence run “executable” code, albeit for a foreign processor architecture), and said code gets downloaded from the internet (we didn’t want to ship the actual music files with the app for licensing reasons). ModPlayer actually was an exception, since the .mod format does not contain code but is a descriptive format; however, back then I did not have the energy to argue with Apple on that, hence ModPlayer was removed without a valid reason.

In the meantime, my priorities have shifted a bit and we had to shut down our iOS company LaTe App-Developers for a number of reasons. Still, I have great motivation to work on the original goal for those players. Due to the improved hard- and software of the iOS platform, these days we could add some major improvements to the playing routines, such as using recent filter distortion improvements in libsidplay2, audio post-processing with reverb and chorus, etc.

The chance of the existing apps coming back into the store is – thanks to Apple – zero. It wouldn’t be a pleasant experience anyways, since the code base is very old and rather unmaintainable (remember, it was our first app for a new platform, and neither one of us had any Mac OS X experience to rely on).

Basically, three questions come to my mind now:

1. Would there be enough interest in a player that fulfills the original goal, or is the competition on the store “good enough”?
2. Will it be possible to get past Apple’s review, if we ship the App with all the sound (code) files, thus not downloading any code?
3. How can I fund working on this app? To honor all the countless hours the original authors put into creating the music and the big community working on preserving the files, I want this app to be free for everyone.

As you may have guessed, I do not have any concrete answers (let alone a timeframe), but just some ideas and the track record of having created one of the most popular sets of C64/AMIGA/Atari XL music player apps. So I wanted to use this opportunity to gather some feedback. If you have any comments, feel free to send them to me. If you even want to collaborate on such a project, I’m all ears. If there’s sufficient interest, we can create some project infrastructure, e.g. a mailing list.


February 23, 2015

Wayback Machine

By Michael "mickeyl" Lauer

Thanks to the fabulous wayback machine, I have imported my blog from between 1999 and 2006. It’s not properly formatted and most of the images are missing, but it’s somewhat interesting to read the things my younger self wrote about 15 years ago.


January 26, 2015

To web or not?

By Michael "mickeyl" Lauer

I have pondered a long time whether to learn web programming for my customers’ services app, so that they can access their user & device statistics, crash logs, manage service messages, push messages, etc.

I now have decided not to pursue this path. Web technology is a mess, even more than mobile technology. It’s lacking a clear separation of layers, and although many frameworks nowadays are using MVC or similar patterns, I feel I have to do too many things at once (web service, HTML templating, CSS design, JavaScript for interactive stuff, etc.) to really make a professional web app.

I’m going to make a mobile client instead, using the technologies I already have mastered and in which I’m productive. Yes, I still want to learn something new, that’s why I’m working with a NoSQL database now for the first time.

Of course the downside is my customers can no longer use their web browsers to manage all that, but since they have their iPhones and iPads always around anyways, I’m sure they can cope with that.


January 04, 2015

Printing in Polypropylene

By Talpadk

Bowl printed in PP

PP bowl printed on a piece of cutting board

I recently purchased a small sample of white polypropylene (PP) plastic from a shop in China.

While it was relatively expensive at ~$11 for 200g worth of plastic, it allowed me to try out printing in PP without buying an entire spool of filament.
PP isn’t supposed to be the easiest thing to print: its thermal contraction should make it more warp-prone than ABS, and additionally it is slightly slippery and doesn’t stick that well to other materials.

You may ask why I would want to attempt to print in PP at all – after all, PLA prints just fine… well, sort of, anyway.

Polypropylene is/has the following features:

  • Relatively heat resistant; plastic handles on dishwasher-safe cutlery, for instance, are often made of it.
  • Good chemical resistance.
  • Handles bending and flexing relatively well; living hinges can be made of it.
  • Is relatively soft, which is not always a good thing.

Well back to the printing business…

For the experiments I used my trusty RepRapPro Huxley with a smaller 0.3mm nozzle – more on that later.
Yes, I really should get that Mendel90 built; that would have allowed me to borrow some 3mm PP welding rod from work.

Anyway, I’m by far not the first to print in polypropylene, but as with NinjaFlex I thought it could use another post on the internet about the material. (Some links to “prior art”: the RapMan wiki and a forum post)

While it seemed that PP and especially HDPE are good candidates for the print bed, I had to try out polyimide and some generic blue masking tape as well.
As I expected, they didn’t seem to work too well for me, but I didn’t experiment much with them.

I therefore proceeded and bought some cheap plastic cutting boards from Biltema.
They don’t specify the type of plastic, but they don’t feel very much like PP, so I assume they are made of HDPE.

Once cut to size, I actually managed to print unheated onto the 5mm thick sheet of plastic!
Some of the prints actually stuck too well to the print bed and got damaged while being removed.

I also encountered some problems with jams / the plastic coiling up inside the extruder.
This led me to increase the extrusion temperature to 235C and reduce the speed to 15/20 mm/s for the perimeter/infill.
In an attempt to reduce the print bed adhesion I used a lower 225C for the first layer.

In hindsight, increasing the temperature might not have been necessary; at least when manually pushing filament, PP@235C and PLA@215C seem to require pressure in the same range.
The extruder problems may simply be caused by the PP filament being softer than PLA.
Reducing the print speed might have been enough.

As some of the prints had left thin layers of PP on the print bed surface, and new prints stuck annoyingly well to those spots, I decided to try sanding the surface.
This removed both the PP residue and the grid of ridges in the plastic from its cutting board origin. After this, the surface seemed to be less problematic with regard to local over-sticking.

I have yet to attempt to heat up this print surface, as I have previously had a bad experience with an experiment using a SAN sheet that warped badly when heated.
Besides, it actually looks quite promising to use the print bed unheated.

While the chopping board isn’t that bad or expensive, it is 5mm thick, which is too much for my bulldog clips to handle.
I therefore looked for alternative sources of PP and PE.

The next experiment involved plastic wrap.
Here in Denmark, PVC based wraps have fallen out of favour and been replaced by PE based products (assumed to be LDPE, as it is soft).

The wrap was applied to a mirror surface and clamped onto the regular print bed.
The first unheated print had way too much warping.
I then cleaned the wrap using rubbing alcohol (which visually roughened the surface a little) and may have heated the bed to 90C. This resulted in a slightly better print, but not quite as good as the chopping board.
Plastic wrap may be promising, but I quickly stopped playing with it as it would probably have to be glued to the glass, which would complicate the process.

PP printed on tape

Next up was packaging tape.
At least some of it is made of PP – Biltema has some brown tape that is – but I just used some clear stuff I had lying in the drawer.

The first attempt, unheated on glass, had too little adhesion.
I then roughened the surface using a scouring pad and heated the bed to 70C, which made the part stick relatively well to the tape.

Thoughts and notes:

  • Running the extruder at 235C might be too warm (there was stringing in the bowl print).
  • Maybe the glass surface conducts heat away too fast; that might be why the cutting board sticks so well unheated?
    Perhaps experiment with a more insulating / lower heat capacity base material.
  • For flexible/soft materials I expect a thicker filament works better, as it is harder for it to curl up inside the extruder.
    (Note to self: find time to build that Mendel90.)
  • Also, for softer materials I probably ought to switch to my 0.5mm nozzle.
  • Tape based print bed materials have an advantage over solid ones: if the print is really stuck, peeling the tape off might help to remove the part without damaging it.

November 06, 2014

OpenPhoenux Hard- & Software Workshop 2014

By SlyBlog

Soon this year’s OpenPhoenux Hard- & Software Workshop (OHSW) will take place at the TUM Campus in Garching (near Munich). There will be a lot of interesting topics to discuss and people to meet. Make sure to drop by if you find some time!

The agenda and further details are now available online:


October 23, 2014

Simplify your life

By Michael "mickeyl" Lauer

After 6 years as co-director and CTO of LaTe App-Developers, I feel it is time to make some changes.

It’s not that mobile development is no longer interesting to me, but after doing (too) many small (5-20 person days) iOS projects, I need some new challenges. Project work has been limiting my creativity and enforcing too much regularity in my daily routine. Besides, there’s hardly any room to do compelling software architecture work in projects of said size. You’re rather constantly working against the clock in order to make some profit on those fixed-price projects.

This year I took three months off in order to decide on what to do next, and finally I have made up my mind. At the end of this year, I’m resigning as co-director and CTO of LaTe. I will still be involved as a freelance collaborator, though, in order to continue supporting our biggest client.

With the regained freedom, I plan to explore some new directions with regard to my own apps and services. I need to catch up with what has happened in (Embedded) Linux, and I also want to polish my almost rusty Python and Vala skills.

Last but not least, I’m not going to do 40 hours per week any more – instead I want to spend more time with my family.


July 29, 2014

New Site

By Michael "mickeyl" Lauer

Every 6 years or so I revamp my website. This is the 3rd incarnation now (yes, I started early), featuring a new WordPress theme, a clean layout, and – most importantly – serious content improvements.


May 07, 2014

10 years ago I got write access to OpenEmbedded

By Marcin "hrw" Juszkiewicz

It was the 8th of May 2004 when I did my first push to the OpenEmbedded repository. It was BitKeeper at that time, but if someone wants to look, the commit can be seen in git.

I will not write about my OE history as there are several posts about it on my blog already:

It was nice to be there through all those years and see how it grew. From a tool used by a bunch of open source lovers who wanted to build stuff for their own toys/devices, to a tool used by more and more companies. First ones like OpenedHand and Vernier. Then SoC vendors started to appear: Atmel, Texas Instruments, and more. New architectures were added. New rewrites, updates (tons of those).

Speaking of updates… According to the statistics, I am still in the top 5 contributors to OpenEmbedded and the Yocto project ;)

There were commercial devices on the market with OpenEmbedded-derived distributions running on them. I wonder how many Palm Pre users knew that they could build extra packages with OE. And that work was not lost — LG Electronics uses WebOS on their current TV sets and switched the whole development team to OpenEmbedded.

Since 2006 we have had annual meetings, and this year we have two of them: the European one as usual, and a North American one for the first time (there was one a few years ago during ELC, but I do not remember whether it was official).

There is OpenEmbedded e.V., a non-profit organization that takes care of OE finances and infrastructure. I was one step from being one of its founders, but the birth of my daughter was more important ;)

And of course there is the Yocto project. Born from OpenedHand’s Poky, it helped to bring order into OpenEmbedded. Layers (which had been discussed since at least 2006) were created and enforced, so recipes are better organized than they were before. It also helped with visibility. Note that when I write OpenEmbedded, I mean both OpenEmbedded and the Yocto project, as they are connected.

I remember the days when MontaVista was seen as a kind of competitor (“kind of” because they were big and expensive while we were just a bunch of guys). Then they moved to OpenEmbedded and dropped their own tools. Another company with such a switch was Denx. Three years ago they released ELDK 5.0, which was OE based, and they have made several releases since then.

What will the future bring? No idea, but it will be bright. And I will still be somewhere nearby.

April 20, 2014

Measuring print bed temperatures on a RepRapPro Huxley

By Talpadk

I have finally gotten around to measuring the surface temperature of my Huxley.

Temperature as function of set-point

Method and instruments used

For measuring the temperature, an Agilent U1233A with a U11186A (K-type thermocouple) was used.

The ambient temperature was measured by waiting for the display to settle and then taking a readout.

The heat bed temperatures were measured on top of the aluminium print surface with the polyimide tape left in place.
The thermocouple was held in place by another piece of polyimide tape.

The thermocouple was left on the print bed for 1 minute for the temperature to stabilize; the temperature was then read from the multimeter using the “avg” function after a 2 minute sampling period.


The temperatures were measured at the centre and approximately 1cm from the edge.
The centre temperature was measured an additional time at the end of the measurement cycle.
The print bed was in its forward position with the print head to the left at the end stop (cooling fan running).

The ambient temperature was measured as 22.1C at the start of the surface scan, and 24.4C at the end.
The heat bed was maintained at 85C using the 3D printer firmware.

NA      71.2C   75.8C
77.6C           71.1C
75.6C   77.1C   72.8C

After this, the thermocouple was reapplied using a fresh piece of polyimide tape at the centre of the print bed and left there.
The print bed set point was then reduced and the surface temperature measured.

Set point [C]   Measured [C]   Percentage
85              76.2           90
70              63.1           90
55              50.2           91
40              37.8           95
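The percentage column is simply the measured temperature over the set point. A throwaway sketch reproducing it (values copied from the table; one decimal shown here, where the post rounds to whole percents):

```javascript
// Surface temperature as a fraction of the firmware set point.
const readings = [
  { setPoint: 85, measured: 76.2 },
  { setPoint: 70, measured: 63.1 },
  { setPoint: 55, measured: 50.2 },
  { setPoint: 40, measured: 37.8 },
];

for (const { setPoint, measured } of readings) {
  const pct = ((measured / setPoint) * 100).toFixed(1);
  console.log(`${setPoint}C -> ${measured}C (${pct}%)`);
}
// 85C -> 76.2C (89.6%)
// 70C -> 63.1C (90.1%)
// 55C -> 50.2C (91.3%)
// 40C -> 37.8C (94.5%)
```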


Some of the variance in the measurements across the bed might be related to probe mounting relative to the surface, and to cooling to ambient.
Using a piece of foam or another insulator might improve this.
The lower measurement points may simply be caused by bad thermal contact with the print bed.
Heat sink compound could perhaps have alleviated some of this as well (and made a lot of mess).

Also, even though the measurements were taken as a 2 minute average, the temperature swings of the heat bed regulation may have contributed some noise.

Also, a thermal camera would have made this much easier and quicker – too bad they are so expensive.
(And the Fluke VT02/VT04 visual thermometers have such a bad resolution.)


I would consider the bed temperature constant across the print bed within the uncertainty of my measurements.

At “higher” temperatures the surface temperature seems to be roughly 90% of the set point.

March 23, 2014

A fast and beautiful terminal

By Talpadk

rxvt-unicode/urxvt has long been my favourite terminal; it is fast and it supports faked transparency.

rxvt terminal with transparency

One problem with using a darkened background was however that some terminal colours were simply a bit too dark.

After a quick googling and a short man page reading, it was however clear that this can easily be resolved.
Additionally, I can store some extra settings, making my keyboard shortcut for launching the terminal nice and simple.


sudo apt-get install rxvt-unicode
sudo apt-get install tango-icon-theme

The last line is only for getting the terminal icon, and is optional if you comment out the iconFile resource below.

Configuring rxvt-unicode

In the file ~/.Xdefaults add the following lines:

!===== rxvt-unicode resource definitions =====!
!The number of scrollback lines
URxvt*saveLines: 5000

!Add fading for unfocused windows
URxvt*fading: 33

!Specify the icon for the terminal window, requires the "tango-icon-theme" package
URxvt*iconFile: /usr/share/icons/Tango/16x16/apps/terminal.png

!Transparency setting
URxvt*transparent: true
URxvt*shading: 25
URxvt*background: Black
URxvt*foreground: White

!Colour setup for the darker background
URxvt*color0:  Black
URxvt*color1:  #ffa2a2
URxvt*color2:  #afffa2
URxvt*color3:  #feffa2
URxvt*color4:  #a2d0ff
URxvt*color5:  #a2a2ff
URxvt*color6:  #a2f5ff
URxvt*color7:  #ffffff
URxvt*color8:  #000000
URxvt*color9:  #ffa2a2
URxvt*color10: #afffa2
URxvt*color11: #feffa2
URxvt*color12: #a2d0ff
URxvt*color13: #a2a2ff
URxvt*color14: #a2f5ff
URxvt*color15: White

!Colour notes from the man page
!color0       (black)            = Black
!color1       (red)              = Red3
!color2       (green)            = Green3
!color3       (yellow)           = Yellow3
!color4       (blue)             = Blue3
!color5       (magenta)          = Magenta3
!color6       (cyan)             = Cyan3
!color7       (white)            = AntiqueWhite
!color8       (bright black)     = Grey25
!color9       (bright red)       = Red
!color10      (bright green)     = Green
!color11      (bright yellow)    = Yellow
!color12      (bright blue)      = Blue
!color13      (bright magenta)   = Magenta
!color14      (bright cyan)      = Cyan
!color15      (bright white)     = White

The last comments can of course be left out, but they are handy if you need to find a particular colour that you want to change.

Also adjust the shading resource to your liking.

After saving the file you may start the terminal using urxvt or rxvt-unicode and enjoy its speed and good looks.

March 01, 2014


By John Sullivan

Spritz seems like a very interesting way to read quickly. It's the opposite of everything I've read (slowly) about speed reading, which focuses on using peripheral vision and not reading word-by-word. You're supposed to do things like move your eyes straight down the page, taking in whole lines at a time.

Interruptions seem like a big problem; interruptions that make me look away, or interruptions in my brain, where I might realize I've not been paying attention for some amount of time. Maybe they should have navigation buttons similar to video players, so you can skip backward 15 seconds at a time. I also do want to go back and review previous pages sometimes for reasons that have nothing to do with interruption, so I wouldn't want word-by-word to be the only way to view a text -- especially when reading nonfiction. I might even want it to work in a mode where you hold down the button on the side of your phone or tablet in order to move the words, and then have them automatically pause when you release. It feels like I'd want a lot of short breaks when reading in this style.

It should also be free software, but unfortunately I'm guessing it won't be. I hope someone will make a free software application along these lines -- the basics seem pretty basic.

February 21, 2014

Antialiased openscad rendering

By Talpadk

Std. 512×512 OpenSCAD rendering

Recent versions of OpenSCAD are capable of rendering objects/assemblies to images.
To the right there is an example of the default 512×512 image quality produced by the command:

openscad -o render.png assembly.scad

Below it is an anti-aliased version of the same scad file.
I used the common trick of generating an oversized image and downscaling it.
It was created with the following two commands:

openscad -o render.png  --imgsize=2048,2048 assembly.scad
convert render.png -resize 512x512 render.png

If you update your project renderings using a makefile/script, I don’t consider it much of a hassle given the improvement in image quality.
Also, at least on my laptop, rendering of the currently relatively simple scad file is still fast.

2048×2048 OpenSCAD render downscaled to 512×512
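If you want to automate it, a minimal makefile rule could look like this (a sketch using the file names from the commands above; the intermediate image name is my own choice):

```makefile
# Render an oversized image and downscale it for anti-aliasing
render.png: assembly.scad
	openscad -o render-big.png --imgsize=2048,2048 assembly.scad
	convert render-big.png -resize 512x512 render.png
	rm render-big.png
```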

In case you are wondering, the assembly is a new CNC mill I’m designing, which hopefully is an improvement over the last design.

The old design is available HERE
The new design is being created HERE

Unlike the old design the new one is being pre-assembled in OpenSCAD, hopefully avoiding having to print parts that only fit together in my head, saving both time and plastic.

Both designs are hosted on Cubehero, my favourite site for sharing designs on.
It comes with built-in version control through git (it also has a web interface for “kittens”).
Wil, who runs the site, is a friendly and helpful guy, and it is not bogged down with stupid End User License Agreements like another site…
I highly recommend it.

February 11, 2014

It is 10 years of Linux on ARM for me

By Marcin "hrw" Juszkiewicz

It was somewhere between the 7th and 11th of February 2004 when I got a package with my first Linux/ARM device. It was a Sharp Zaurus SL-5500 (also named “collie”) and it all started…

At that time I had a Palm M105 (which I still own) and a Sony CLIE SJ30 (both running PalmOS/m68k) but wanted a hackable device. I had no idea what this device would do to my life.

It took me about three years to get to the point where I could abandon my daily work as a PHP programmer and move to the somewhat risky business of embedded Linux consulting. But it was worth it. Not only from a financial perspective (I paid more tax in the first year than I had earned in the previous one) but also for my personal development. I met a lot of great hackers, people with knowledge I did not have, and I worked hard to become a part of that group.

I was a developer in multiple distributions: OpenZaurus, Poky Linux, Ångström, Debian, Maemo, Ubuntu. My patches also landed in many other embedded and “normal” ones. I patched a countless number of software packages to get them built and working. Sure, not all of those changes were sent upstream, some were just ugly hacks, but this started to change one day.

I worked as distribution leader of OpenZaurus. My duties (still in free time only) were user support and maintaining repositories and images. I organized testing of pre-release images with over one hundred users — we had all supported devices covered. There was an “updates” repository where we provided security fixes, kernel updates and other improvements. I also officially ended development of this distribution when we merged into Ångström.

I worked as one of the main developers of Poky Linux, which later became Yocto Linux. I learnt about build automation, QA control, build-after-commit workflows and many other things. During my work with OpenedHand I also spent some time learning the differences between British and American English.

I worked with some companies based in the USA. This allowed me to learn how to organize teamwork with people in quite distant timezones (Vernier was based in Portland, so a 9 hour difference). It was useful then and still is, as most of the Red Hat ARM team is US based.

I remember moments when I had to explain what I do at work to some people (including my mom). For the last 1.5 years I used to say “building software for computers which do not exist”, but this is slowly changing as AArch64 hardware exists, it is just not on the mass market yet.

Now I have got to a point where I am recognized at conferences by random people, whereas at FOSDEM 2007 I knew just a few guys from OpenEmbedded (but connected many faces with names/nicknames there).

I have played with more hardware than I wanted. I still have some devices which I never booted (the FRI2 for example). There are boards/devices which I would like to get rid of, but most of them are so outdated that they can only go to electronic trash.

But if I had the option to go back those 10 years and think again about buying the Sharp Zaurus SL-5500, I would not change a thing, as it was one of the best things I ever did.

February 09, 2014

Welcome, 2014

By Michael "mickeyl" Lauer

So 2013 is finally over and it’s been an energy-sapping year – business-, baby-, and building-wise.

Business. The stagnation that was present for pretty much the first half of the year, and which forced us to downsize a bit, was replaced by too many projects all at once in the second half. And while that was welcome, since it saved us from closing our doors, it prevented us from working on our private projects, i.e. our apps in the store, but also personal pet projects – let alone anything open source.

Baby. After 7 horrible months between Lara Marie’s 5th and 13th month, she finally began sleeping great, often 12 hours without waking up. She’s now 2.5 years and everything is good. Still, she’s a demanding little one, enjoying being offered a selection of everything instead of deciding on her behalf. I love her.

Building. With a bit of (natural) delay, our new house was finished by November and we did move on 9th of December. We’re now 2 months in here and it’s feeling mostly great. We had to monitor and decide on a LOT of things during the construction phase, but apart from the usual minor issues, the building quality is good and we enjoy the comfort of having a dedicated room for Lara Marie. Being able to use the living room again after 20:00 is nice :)

I took the liberty to install a dedicated server for the house which is living in a 19″ rack in the utility room. I’m going to post about the networking infrastructure soon.

Referring to my last post, I’m still planning on doing the sabbatical, but due to some unforeseeable circumstances with my wife’s health, it had to be postponed for a bit. It’s going to happen in 2014 though, which is why I’m sure, 2014 is going to be better than 2013.

Der Beitrag Welcome, 2014 erschien zuerst auf

December 16, 2013

Linking CSS properties with scroll position: A proposal

By Chris Lord

As I, and many others have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, this will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
      and transition-stop        = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.

But then Paul Rouget made me aware that Anthony Ricaud had the same idea, but instead of this slightly arcane syntax, to tie it to CSS animation keyframes. I think this is more easily implemented (at least in Firefox’s case), more flexible and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive though, I suppose (though the interactions between them might mean as a platform developer, it’d be in my best interests to suggest that they should :)).

I’m not aware of any proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane, that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.

[Edit] Paul Rouget makes the suggestion that rather than having a prefixed copy of animation, that a new property be introduced, animation-controller, of which the default would be time, but a new option could be scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.

What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.