Eternal Editor Search

Finding the perfect code editor

In my eternal search for a resource-light (Python) code editor that helps you get things done easily and quickly, I started using SublimeText at its 2.0 version (despite it not being an open source project).

I have never used TextMate on Mac OS X, but some people describe SublimeText as a valid alternative to that editor, and I have always heard good things about TextMate.

After using SublimeText for almost 4 months, I find myself really happy with the pleasant working experience it provides and with its capabilities.

What I have found most interesting in these months is:

  • Loads of plugins
  • Code completion (through one of its many plugins)
  • JSON based configuration files
  • Great project support, with almost instant project switch through the command palette
  • A very responsive command palette
  • Integration with static code analysis tools
  • Powerful editing capabilities (multi-selection editing above all)
  • Possibility to integrate the Python debugger
  • Integrated Python console
  • A “distraction free” mode (it is now called like that, but it is just full screen editing)

It has extensive documentation, and also nifty video tutorials to set up a dev environment with ease.
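
Since all of its configuration is plain JSON, tweaking the editor is just a matter of editing a text file. As a taste, a user preferences file (Packages/User/Preferences.sublime-settings) for Python work could look like the following; the values are just my example, not defaults:

```json
{
    // Use spaces instead of tabs, PEP 8 style
    "translate_tabs_to_spaces": true,
    "tab_size": 4,
    // Draw a ruler at the PEP 8 line-length limit
    "rulers": [79],
    // Keep files tidy on save
    "trim_trailing_white_space_on_save": true,
    "ensure_newline_at_eof_on_save": true
}
```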

In the next months I will also polish my vim-fu: I want to do some tests using the Chromebook with its Chrome OS for development purposes. Almost all of my dev activities happen online through a Google hosted account, and Chrome OS has a decent shell with SSH support (plus different SSH plugins available in the Chrome Web Store). I just need a VM or a VPS somewhere to store the code and all the development tools, and then see how it plays out.

Think Like a Programmer

Another quick post for a small promotion, as always from the lovely No Starch Press. This time it is the turn of the book “Think Like a Programmer”, by V. Anton Spraul.

As with the previous promotion, you can get 40% off the paper version, plus the DRM-free ebook versions. The promotion lasts one week, and here is your chance to get it:

I’m reading the book now; No Starch asked me to write a review, and I have to say it is an enjoyable and challenging read.

Go get it: for new programmers and seasoned ones alike, it is always interesting to challenge your mind and your knowledge.

Multiple Boards and Bootloaders on a Single Hardware Pack


Finally, Linaro Image Tools has support for multiple boards and multiple bootloaders in a single configuration file and hardware pack.

Linaro Image Tools

Linaro Image Tools is a set of command line utilities that help in the creation and installation of Linaro built operating system images so they can be run on ARM based computers.

With Linaro Image Tools you can take a generic ARM Ubuntu or Android operating system image and customise it with the hardware specific packages needed to make it run on a specific board. These hardware specific packages are found in a hardware pack, which is itself generated using a tool, linaro-hwpack-create, and a configuration file.

The old days

The Old Days of Computer

In the “old days”, this configuration file (an INI-style file) and hardware pack held information for just one ARM board: it was not possible to define a single configuration file for multiple boards that shared most, if not all, of the same configuration, and as a consequence the resulting hardware pack could only be used with a single device.

Starting with the new 2012.07 Linaro Image Tools release, it is now possible to support multiple boards/devices and multiple bootloaders with a single hardware pack. This should speed up development because one hardware pack can be used for several boards running similar hardware, reducing the number of hardware packs that need to be created to test new code on multiple devices.

A new configuration file format has been created: now based on YAML, it enables engineers to express more complex scenarios, and Linaro Image Tools has been expanded to support this new format.
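
To give an idea of the direction, a multi-board configuration in the new format could look something like the sketch below; the field names and packages here are only illustrative, so check the Linaro Image Tools documentation for the exact schema:

```yaml
# Illustrative sketch of a version 3, YAML-based configuration:
# one hardware pack covering two boards and two bootloaders.
format: 3.0
name: linaro-lt-panda
architectures:
  - armhf
boards:
  panda:
    bootloaders:
      u_boot:
        package: u-boot-linaro-omap4-panda
      uefi:
        package: uefi-linaro-panda
  panda-es:
    bootloaders:
      u_boot:
        package: u-boot-linaro-omap4-panda
```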

Backward compatibility is maintained: the old version 2 format is still supported, but it will be deprecated in the Linaro Image Tools 2012.08 release, when the new version 3 format will have had enough use to have any bugs found and fixed.

The very old version 1 format is now completely unsupported and the ability to read these files will be dropped from Linaro Image Tools with the 2012.08 release.

With these changes, a new command line tool has been written to help engineers convert an old version 2 configuration file into a version 3 one: running linaro-hwpack-convert <config-file> will create a version 3 configuration file called <config-file>.yaml (you can remove the suffix afterwards, it is not necessary; the old file is kept for you). This file is then used by linaro-hwpack-create to create a new style hardware pack. The procedure for creating a hardware pack has not changed, and Linaro Image Tools will automatically detect and use the new format.

The only thing that changes when you use linaro-media-create is that you can now specify a bootloader for a hardware pack that provides more than one. Predictably, this option is --bootloader <bootloader name>, and if you want to know what your options are, you can query a hardware pack using the --read-hwpack option:

linaro-media-create --hwpack hwpack_linaro-lt-panda_1_armhf_v3.tar.gz --read-hwpack
Supported boards                       | Supported bootloaders
linaro-lt-panda                        | uefi,u_boot

linaro-media-create --hwpack hwpack_linaro-lt-panda_1_armhf_v3.tar.gz --bootloader u_boot …

Unfortunately, not all of the features described here are available yet in the 2012.07 release of Linaro Image Tools. What is available at the moment is:

  • A converter from the old configuration file format to the new one.
  • Support for the new YAML syntax.

The code with all the features was merged after the release, but it is already in place for the 2012.08 one. And if you feel adventurous, you can get the development version and test it out.

If you find any bugs, or want to suggest improvements, please do so in the Linaro Image Tools Launchpad page.

This post was written by James Tunnicliffe and me.

Ubuntu Made Easy Promo Code

For the English speaking audience (but not only): if you are interested in a new book on the latest LTS version of Ubuntu, No Starch Press is promoting “Ubuntu Made Easy” for one week, with 40% off the paper version, and you get the DRM-free ebook formats (PDF, mobi, epub) with it.

The link to the promotional code is here:

Spread the word, and grab it as fast as you can!

The book is really worth it if you are approaching Linux for the first time, but it is still an interesting read for everybody.

PS: I technically reviewed the book.

Kindle 4 PC Under Linux

If you are trying to install or use Kindle for PC under Linux: I had a problem with the version of Wine shipped by default in Ubuntu 12.04 (that is Wine version 1.4).

After installing the Wine PPA and upgrading to version 1.5 I had another problem, but this one is easily solvable: it is necessary to rename or remove one file from the Wine installation directory. The file is:


and Kindle 4 PC will work in all its glory. I am just saying it here since I found different results on the Internet, with different solutions, none of which really worked. Somebody also reports the necessity of having ttf-mscorefonts installed to make it work; I didn’t install them, or maybe they were installed by default.

Why use Kindle for PC? I’m trying to export books bought via the Kindle Store, but without the DRM. It looks like Calibre is able to do it, but I had no luck. There are plugins that should help you with that, but I still get errors when trying to import a DRMed book.

What should be needed is a Kindle PID (not the serial number), which can be found out easily, plus your Kindle serial number. Even with both of them, though, nothing changed for me. I do not know if Amazon changed something in their encryption mechanism with the latest Kindle generation…

If anybody out there had more luck, fancy sharing your experience?

Panoramix or half-Gpixel Panorama

Before heading to UDS-Q, my girlfriend and I went to Paris for a long weekend (in the year I have lived in France, I had never visited it properly). We spent 4 fantastic days there, hopping in and out of the Parisian metro and walking our way throughout the city, even in less touristy places.

As usual, coming back home I had something like 5 GB of pictures in my camera, and with just a few days left before leaving for the USA, I didn’t have the time to process them. Back from UDS, here is the Paris photostream.

One picture was still missing though, since it required a little bit of work, and yesterday night I eventually managed to “compose” it. Compose, because it is a “small” panorama made of 7 pictures, taken while sitting on the Seine banks close to Notre Dame. The view goes from Notre Dame on the left to the Hotel de Ville on the right, plus other buildings facing the river. The original TIFF version of the panorama weighs in at 2.6 GB of disk space, measuring 46366×14910 pixels; it takes something like 5 minutes to open in Gimp on my machine, and it took me something like 4 hours of work: loading the 7 TIFF images in Hugin, processing them a first time, manually adding as many matching points as possible, waiting for the final result, and finally opening it in Gimp to play with it in different ways (next time I will use ImageMagick).

This is a small-size result:

Paris Panorama

A little bit bigger image can be found on my gallery.

I’m happy with the outcome: it’s the biggest panorama I have ever created. I think my girlfriend and I will print it out (not at the original size, because it would be close to 7 meters!) and hang it somewhere around the house.


Report from UDS-Q Day 1

Here I am, writing from San Francisco: the first report for the (ongoing) day 1 of the Ubuntu Developer Summit (UDS) that will shape Ubuntu 12.10.

It all started in a very good way: the flight was (almost) on time, Alessio was blocked for a couple of hours at immigration, Leo was stopped and had to open up all of his bags, but eventually we made it to the hotel safe and sound.

Oakland, on the other side of the San Francisco Bay, looks like a nice city to hang around: there are small restaurants around the corner from our hotel, some local breweries, a board games shop just in front of us, and sunny, warm weather. Everything that you need!

Quite a lot of interesting stuff has already been heard and discussed: Mark and Calxeda showcasing the first Ubuntu ARM server, numbers of Ubuntu installations around the world, HP talking about its certification for 12.04, a lot of chats about juju and charms, and devops, Linaro… It looks like cloud is the big word around here (tomorrow there will be a cloud summit too).

Interesting week ahead.

Revamping Launchpad Translators

Hey, you! Yes you!

You are a translator, right?

Would you like to bring new life and new force to a wonderful (small) group of people?

Who are they, you ask? The Launchpad Translators Coordinators, of course!

To get a little bit more serious, this is one of those “help needed” kind of posts. If you are a translator with some proven experience, either running (or being part of) an Ubuntu translation team, or one of the many other translation groups in Launchpad, you can help us. New life, new people, new forces and new ideas are always welcome! And probably needed…

What we do is not hard or difficult, and you don’t need to be an astrophysicist or somebody who scored 677 on the TOEFL: we deal with “questions” (or is it “answers”? I never get it…) in Launchpad from people needing help setting up a new translation team (we have documentation to help us out too), we try to spread the word (and the world) about translatable software, and we help developers set up translations for their projects and understand the different translation policies Launchpad offers. It is not a busy team, nor a demanding task.

If you are interested, hop by the Launchpad Translators team, join the mailing list, and express your interest!

JavaMail Session to the Rescue

There might come a time when you need to send emails to your user base from within your Java application, deployed in Glassfish. So you start coding some simple Java mail classes, and you find yourself hardcoding host names, user names, passwords and all the other sensitive information in your code, which is open source.

This is the situation I found myself in while doing some maintenance on our code base: subscription emails, or any other emails for that matter, were sent out using one simple Java mail class that had everything hardcoded in it. Not good.

But we are using Glassfish, and this is good (well, it depends on who you ask, but in this case it is as good as probably any other app server out there). We can use Glassfish to handle our “mail session” and inject the necessary values into our class when needed, leaving us free from storing sensitive data in our Java code.

Obviously this is all good in theory; in practice it works out of the box if your Java code is managed directly by your app server, and ours is not, since we do not need that. But do not despair, all is possible.

With non-managed code, you need to access the “context” of your Java application, where you have objects bound to exclusive names.

So, let’s make this work.

Creating a new JavaMail session in Glassfish is very simple, either through the admin interface or via the command line. There are two important aspects to keep in mind: whether you need SSL enabled or not, and your JNDI name. Since we are using Gmail, and we want to use its SMTP server, we are going to use SSL for this. The other piece of information to keep in mind is the last value on the command line: that is the JNDI name, the one you will use in your code to retrieve the JavaMail session. The command line is very simple, you can find it on github.
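
For reference, with the asadmin command line tool the session creation looks roughly like the following; the host, user, address and the JNDI name (the final argument) are placeholders to adapt:

```shell
# Create a container-managed JavaMail session bound to the JNDI name
# "mail/gmailSession"; the user and addresses here are placeholders.
asadmin create-javamail-resource \
    --mailhost smtp.gmail.com \
    --mailuser myuser \
    --fromaddress myuser@gmail.com \
    --property mail.smtp.auth=true:mail.smtp.port=465:mail.smtp.ssl.enable=true \
    mail/gmailSession
```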

Now you need to retrieve the JavaMail session from Glassfish, so that it can be used with a MimeMessage Java object. The code to do that is again very simple, and you can find an example here on github.
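
As a minimal sketch of what that retrieval looks like (assuming a session registered under the hypothetical JNDI name mail/gmailSession; this only runs inside the container, since the lookup needs Glassfish’s naming context):

```java
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import javax.naming.InitialContext;

public class Mailer {

    public void send(String to, String subject, String body) throws Exception {
        // Look up the JavaMail session that Glassfish manages for us:
        // no host names or credentials anywhere in the code.
        InitialContext context = new InitialContext();
        Session session = (Session) context.lookup("mail/gmailSession");

        // Build and send the message using the container-provided session.
        MimeMessage message = new MimeMessage(session);
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);
        message.setText(body);
        Transport.send(message);
    }
}
```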

So, all in all, the situation is now better: we do not store values in the code, and the code is a little bit more flexible and can handle different JavaMail sessions in order to send emails from different accounts. We started with one Java class that handled everything; now we have four classes and one interface, and all the values that need to be retrieved (the JNDI names) are stored in a separate Java properties file. While I was at it, I also added attachment support to the email creation, you never know when it might come in handy. 😉

The last step in this work: create HTML templates to send nice emails instead of boring black-character ones, and handle internationalization and localization of the templates. Another fun task ahead. :)

Mobile Web & Internationalization

For my work, we are building the trending trend of the mobile world: mobile web applications. Web applications, or whatever you prefer to call them, designed and optimized for being used through a mobile device. This is all great and cool: you can exploit your HTML5-CSS-JavaScript-fu, and you do not have to learn to program natively on the various mobile platforms out there. It is more or less a win-win situation: write once, use on every device. There are drawbacks of course: no real power from your device, you are doomed by the Lord of the Internet Connections, offline access to the data is not really good, and you lose a little bit of that native feeling. Even with all of these, you are still able to create great mobile experiences: the available tool-kits are really well done and actively developed (jQuery Mobile, Sencha, KendoUI), there are tools to help you build a “native” app by converting your HTML5 code, and you are even able to access (with some tricks) some of the hardware resources.

But there is always one problem that people sometimes forget to think about: providing users with content in their own language (or at least trying to get close to that very language).

The problem we are facing now is exactly this: how to do it? How to provide users with localized content? How to better handle the localization process?

Since we are on the web, we can get language information from different sources, and which one to trust is open to debate: should we trust the web browser? Should we get the language via the geolocation of the user, or should we get, in some way or another, the information from the underlying operating system?
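
If we do end up trusting the browser, at least its Accept-Language header can be matched properly against the locales we support. Java (which we already have in the backend) can do this with the standard library alone; the header value and the supported list below are made up for the example:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class LanguageNegotiation {
    public static void main(String[] args) {
        // A hypothetical Accept-Language header sent by an Italian browser
        // that also accepts English as a fallback.
        String header = "it-IT,it;q=0.9,en;q=0.8";

        // Parse it into weighted language ranges (RFC 4647 syntax).
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(header);

        // The locales our application actually ships translations for.
        List<Locale> supported = Arrays.asList(
                Locale.ENGLISH, Locale.ITALIAN, Locale.FRENCH);

        // Pick the best supported match for the user's preferences.
        Locale best = Locale.lookup(ranges, supported);
        System.out.println(best.getLanguage()); // it
    }
}
```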

I usually consider my own case: I’m Italian, I live in France, my desktop environment is in Italian, but I prefer, where possible, to read websites in English (because websites tend to read better in English when they are not properly translated).

OK, deciding that is a little bit tricky: you can get into nasty discussions about how to render times and dates, monetary currencies, the direction of the text, and plural forms, leaving aside the cultural changes if you embrace a broader user base (colors, icons…).

But, once we know which source to trust, how can we “easily” extract the text to be localized, translate it, and reconstruct everything afterwards? Our software stack is composed of HTML + PHP, JavaScript (which comes from jQuery Mobile and Sencha), and Java.

Java provides our backend, and some messages come from it too: error messages if something goes kaput, email messages for authenticating a user, plus other small things. But with Java we are more or less safe: there is support for gettext in Java, or we can use the Java built-in features (message bundles and properties files, which I do not like much). PHP has gettext support, so even here we are safe.
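
Even the built-in route I am not fond of works; a minimal sketch, with the message bundles defined inline as classes for the sake of the example (in a real application they would be Messages_en.properties and Messages_it.properties files):

```java
import java.text.MessageFormat;
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

// Inline bundles for illustration only; normally these live
// in .properties files next to the application classes.
class Messages_en extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {{"greeting", "Welcome, {0}!"}};
    }
}

class Messages_it extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {{"greeting", "Benvenuto, {0}!"}};
    }
}

public class I18nDemo {
    public static void main(String[] args) {
        // Pick the bundle matching the user's locale, then fill in
        // the placeholder with MessageFormat.
        ResourceBundle bundle = ResourceBundle.getBundle("Messages", Locale.ITALIAN);
        String message = MessageFormat.format(bundle.getString("greeting"), "Simone");
        System.out.println(message); // Benvenuto, Simone!
    }
}
```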
JavaScript seems a little bit more problematic. Around the web there are a lot of different approaches one can take, even if they all share a small common idea. jQuery Mobile seems to have some sort of internationalization support; for Sencha I wasn’t able to find any, but there is a JavaScript implementation of the gettext library (it is not clear if it supports MO file loading).

All these JavaScript approaches look like they are made with the idea of loading the translations dynamically (à la gettext), but what if we want to create the final translated page on the server, and send it already translated to the user? Caching the pages directly on the server side and serving content a little bit faster? These, and probably others, are questions I will have to find answers to in the coming months, and they look interesting.