Monday, March 7, 2011

LibreOffice on Maverick

I installed LibreOffice a few weeks ago, leaving OpenOffice in the past -- it was a great thing, but it is going to change, so I opt for LibreOffice, which keeps its spirit of freedom.
These instructions install from an official PPA, so updates will come from there until LibreOffice makes it into the main Ubuntu repos. This comes from novatillasku. In short:
  • sudo apt-get purge "openoffice*.*"
  • sudo add-apt-repository ppa:libreoffice/ppa
  • sudo apt-get update
  • sudo apt-get install libreoffice libreoffice-gnome libreoffice-l10n-es
You might want to use libreoffice-kde instead of the -gnome version, and a different language pack instead of -es as well.

Wednesday, February 23, 2011

Bits from the past

A few weeks ago my father found, on a backup disk, some old texts he had written years ago: composed in WordStar for DOS, likely on our first computer, a PC XT (8-12 MHz!). Digging into these files we concluded they were from about 1991 or 1992.... Too long ago! All I can say in my defense is that... I was younger? A child? The fact is that I used that WordStar version a lot.

Unfortunately these files were unreadable by every word processor I tried. Import filters promised a lot, and none of them worked for me. So the problem deserved some deeper digging.... First of all, for nostalgia's sake, this is how WS looked (now within a "modern" Win XP, running virtualized in my Linux):

Now back to importing these files: I found the site wordstar.org with plenty of information, but most downloads were for Windows, and most were not free. Here is a list of downloads. I tried some of them (under Wine), like WS-Con, WSRTF and a few more. None of them fully worked; most had problems with accents, or whatever encoding these files used.

Fortunately I found a text on that site which describes the file format. The format is quite simple, and has a nice design that allows extracting "most" of the text by just looking at the lowest 7 bits of each byte and discarding everything with the 8th bit set. If you want formatting you have to interpret those high-bit bytes too, though they are not too complex, and we used very few in our old texts.
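The core of that extraction can be sketched like this (a minimal sketch in today's Python 3, not my original Python 2 script; the exact set of control bytes to keep is my assumption):

```python
def wordstar_to_text(data: bytes) -> bytes:
    """Recover plain text from WordStar bytes by masking the high bit."""
    out = bytearray()
    for b in data:
        b &= 0x7F  # WordStar uses the 8th bit for its own markup
        # keep printable ASCII plus newline, carriage return and tab;
        # everything else is inline formatting and is simply dropped
        if 0x20 <= b < 0x7F or b in (0x0A, 0x0D, 0x09):
            out.append(b)
    return bytes(out)
```

For example, `wordstar_to_text(b'hell\xef,\x02 world')` yields `b'hello, world'`: the high bit of `\xef` is masked off to give `o`, and the stray control byte disappears.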

So I read this and wrote a Python script to process them... The first few attempts mangled, again, all my accents and the 'ñ' character (the texts are in Spanish), so I had to start digging into character codes. I have 'ñ' = 164 almost tattooed in my memory, after typing "Alt-1-6-4" so many times in DOS. (There were only US keyboards back then... and I still use them.) But character 164 means something else in Python or in today's encodings... sometimes as bad as:

>>> chr(164)
'\xa4'
>>> chr(164).encode('utf-8')
Traceback (most recent call last):
  File "", line 1, in
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa4 in position 0: ordinal not in range(128)
while the 'ñ' has different codes today:

>>> 'ñ'
'\xc3\xb1'   ### the UTF-8 byte sequence
>>> 'ñ'.decode('utf-8')
u'\xf1'      ### the Unicode code point U+00F1
So which was the correct encoding? There is a small note on that page saying "how to type in Microsoft Windows", and later a note saying DOS used "codepage 437". That's cool: I had already found the list of all encodings Python is bundled with in /usr/lib/python2.6/encodings (and there is indeed a file cp437.py).

So the real key was to do something like chr(164).decode('cp437'). That returns the unicode string u'\xf1', which is the "real" 'ñ'. That did the trick, and the script was done.
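In today's Python 3 the same step reads more directly, since file contents arrive as bytes and decode straight to str:

```python
# byte 164 in the old DOS codepage 437 is 'ñ' (U+00F1)
dos_bytes = bytes([164])
print(dos_bytes.decode('cp437'))  # prints 'ñ'
```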

As a side note: I found some more characters I could not filter out initially: two-byte codes like ESC-'4' or ESC-'5' around words... what was that? Some sleepy neuron remembered that we used to have a Star NX-1001 (multifont!), and I suspected those were printer codes. In fact, the manual (which still exists! go Star!) says they mark the start of italicized text and the return to the normal font face. So that's not part of the WordStar format; it's another problem: the way we handled formatting on our printer. (It was good to remember that; our best printer, again!)
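Since the script also emits basic HTML, those printer escapes can be translated instead of dropped. A sketch, assuming only the two codes from the Star manual (the function name is mine):

```python
ESC = '\x1b'

def printer_italics_to_html(text: str) -> str:
    # Star NX-1001 manual: ESC '4' starts italics, ESC '5' returns to normal
    return text.replace(ESC + '4', '<i>').replace(ESC + '5', '</i>')
```

So `printer_italics_to_html('a \x1b4word\x1b5 here')` gives `'a <i>word</i> here'`.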

Now if you read this far you must be either a nostalgic looking for old memories, or really looking to translate a WordStar file. I uploaded the script to http://pastebin.com/pfY8Dbgv - it converts to plain text or basic HTML. I hope it helps you!

Sunday, February 13, 2011

Automatically set VGA output as "primary" in Gnome

I always liked that gnome-display-properties, with just a click on its tray icon, will enable (or disable) the VGA output of my laptop depending on whether my "secondary" LCD is connected. I switch from laptop-LCD-only to laptop-plus-external-LCD daily. Recently I started using this external monitor as "primary", with an external keyboard and mouse too, and wanted the GNOME panels (menu, taskbar, etc.) to appear there.

However, gnome-display-properties assumes the monitor at the "left" is primary, so menus are always displayed there. It has no UI, no gesture to configure it differently. Fortunately, xrandr (which I think gnome-display-properties uses) allows configuring everything related to displays freely: enable/disable, geometry, layout, etc. After experimenting a little I came up with this little script:
# count connected outputs
OUT=$(xrandr | grep connected | grep -v disconnected | wc -l)
if [ "$OUT" -gt 1 ]; then
  xrandr --output LVDS1 --auto --output VGA1 --auto --right-of LVDS1 --primary
else
  xrandr --output LVDS1 --auto --output VGA1 --off
fi
which enables my VGA output when a monitor is connected, setting it as primary (the command line is self-explanatory). I bound this script to an unused key (the blue "ThinkVantage" key on my ThinkPad) and now I switch configurations exactly as I wanted: with one key press.

Note: I played a little with XFCE 4.8 and found a different (though related) issue: panels not moving from one display to the other automatically. I asked in the forums; this is unfortunately not supported, though it was scheduled for 4.10 as soon as I posted a ticket (as Nick from XFCE recommended).