Unix/Linux

There are 5 entries in this category.

Raw graphics output in Linux: Part 2


In Part 1 of this series, we set up a command-line Linux in the VirtualBox emulator with support for direct frame buffer access, the git version control system and the clang compiler. Now let’s use this to draw graphics to the screen “by hand”.

Getting the code

The code we’ll be using is on my GitHub. So check it out, e.g. by doing:

mkdir ~/Programming
cd ~/Programming
git clone 'https://github.com/uliwitness/winner.git'

Now you’ll have a ‘winner’ folder in a ‘Programming’ folder inside your home folder. Let’s build and run the code:

cd winner
make
sudo ./winner

[Screenshot]

This code just drew a few shapes on the screen and then immediately quit. The Terminal was rather surprised by that, so it just prints its prompt on top of them.

How to access the screen

It took me a bit of googling, but eventually I found out that, to draw on the screen in Linux, you use the framebuffer. Like most things in Linux, the frame buffer is a pseudo-file that you can just open and write to. This pseudo-file resides at /dev/fb0, and it is the whole reason for the extra hoops we jumped through in Part 1: a minimal Ubuntu doesn’t have this file.

So if you look at the file linux/framebuffer.hpp in our winner git repository, you’ll see that it simply opens that file and maps it into memory, using the ioctl() function and some selector constants defined in the system header linux/fb.h to find out how large our screen is and how the pixels are laid out.
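The gist of it looks roughly like this (a condensed sketch of the idea, not the repository’s exact code; error handling omitted):

#include <fcntl.h>
#include <cstdint>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

// Open the framebuffer, ask the driver how the screen is laid out,
// and map the screen's bytes into our address space:
uint8_t *open_framebuffer(fb_var_screeninfo &vinfo, fb_fix_screeninfo &finfo) {
    int fd = open("/dev/fb0", O_RDWR);       // the framebuffer pseudo-file
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  // resolution, bits per pixel, x/y offsets
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  // bytes per row, total buffer size
    return (uint8_t *)mmap(nullptr, finfo.smem_len,
                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}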

This is necessary because, at this low level, a screen is simply a long chain of bytes: the third row chained after the second row after the first. Each row consists of pixels, each of which consists of R, G, B and optionally alpha components.

By mapping it into memory, we can use the screen just like any other block of memory and don’t have to resort to seek() and write() to change pixels on the screen.

Esoterica

Computers are sometimes faster when memory is aligned on certain multiples of numbers, and you also sometimes want to provide a frame buffer that is a subset of a bigger one (e.g. if a windowed operating system wanted to launch a framebuffer-based application and just trick it into thinking that the rectangle occupied by its window was the screen). For these reasons, the frame buffer includes a line length, an x-offset and a y-offset.

The X and Y offsets effectively shift all coordinates, and thus define the upper left corner of your screen inside the larger buffer. They’re usually 0 for our use case.

The line length is the number of bytes in one row of pixels. It may be larger than the number of pixels in a row times the number of bytes in one pixel, because it may include additional, unused “filler” bytes that help the computer access the memory more quickly (some computers access memory faster if it is e.g. on an even-numbered address).
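Putting these three fields together, finding the first byte of the pixel at (x, y) works out to something like this (reusing the vinfo and finfo structs from the sketch above; pixels is the pointer returned by open_framebuffer()):

// line_length already includes any "filler" bytes at the end of a row:
size_t offset = (y + vinfo.yoffset) * finfo.line_length
              + (x + vinfo.xoffset) * (vinfo.bits_per_pixel / 8);
uint8_t *pixel = pixels + offset;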

Actually drawing into the frame buffer

The actual drawing code is in our image class, which doesn’t know about frame buffers. It just knows about a huge block of memory containing pixels, and its layout.

The main method in this class is set_pixel(), which calculates a pointer to the first byte of the pixel at a given coordinate, and then, depending on the bit depth of the pixels in the bitmap, composes a 2-byte (16-bit) or 4-byte (32-bit) color value by filling in the corresponding bits of our buffer.
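Stripped down to the 32-bit case, it amounts to something like the following sketch (the pixels and line_length members and the BGRA byte order are assumptions for illustration; the real method also handles 16-bit pixels):

void image::set_pixel(int x, int y, uint8_t red, uint8_t green, uint8_t blue) {
    uint8_t *p = pixels + y * line_length + x * 4;  // first byte of this pixel
    p[0] = blue;   // assuming BGRA byte order
    p[1] = green;
    p[2] = red;
    p[3] = 0xff;   // fully opaque
}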

All other drawing methods depend on this one:

Drawing rectangles

If you look at fill_rect, it simply takes a starting point (the upper left corner of the rectangle) and then fills row after row of pixels with the given color.

Drawing a frame around a rectangle is almost the same: we fill as many top and bottom rows as our line width dictates, and in the rows in between we fill a pixel’s worth (or whatever our line width is) at the left and right edges of our rectangle.
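In sketch form (hypothetical signature, simplified from the real method), filling is just two nested loops over set_pixel():

void image::fill_rect(int left, int top, int width, int height,
                      uint8_t red, uint8_t green, uint8_t blue) {
    for (int y = top; y < top + height; ++y)       // each row of the rectangle...
        for (int x = left; x < left + width; ++x)  // ...gets filled pixel by pixel
            set_pixel(x, y, red, green, blue);
}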

Drawing lines

Drawing one-pixel lines involves a tad of basic maths, but it’s nothing that you couldn’t get from a quick glance at Wikipedia: you take the line equation in “point-slope form”.

Then you calculate the line’s slope based on your start and end point. If the line is more horizontal than vertical, you loop over the X coordinate from start to end and use that and the slope to calculate the corresponding Y. If it is more vertical than horizontal, you loop over the Y coordinate to get the X instead.

Now, if you use this naïve approach, you may get small gaps in the line, because the line equation works with fractional numbers, while our computer screen only has whole, integer pixels. This is why this example uses a variation on the same process invented by someone named “Bresenham”, which keeps track of the loss of precision and adds pixels in as needed.
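For reference, the textbook integer form of Bresenham’s algorithm looks like this (a generic sketch from the literature, not necessarily line-for-line what the repository does):

void image::draw_line(int x0, int y0, int x1, int y1,
                      uint8_t red, uint8_t green, uint8_t blue) {
    int dx = abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;  // tracks the accumulated loss of precision
    while (true) {
        set_pixel(x0, y0, red, green, blue);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step horizontally
        if (e2 <= dx) { err += dx; y0 += sy; }  // step vertically
    }
}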

Drawing a line of more than one pixel in width is a little harder. You see, lines are really infinitely thin and don’t have a width. When you draw a line of a certain width, what computers usually do is either draw a rotated rectangle centered over the line, as long as the line and as wide as your line width, or simply rubber-stamp a filled square or circle of the line width centered over each point on the line, which gives a similar look.

I essentially go with the latter approach in this example, but since I plan to eventually support different opacity for pixels, I do not want to draw whole boxes each time, because they would overlap and a 10% opaque line would end up 20% opaque in every spot where they overlap. So I just detect whether a line is mainly horizontal or vertical, then draw a horizontal or vertical 1 pixel line of the line width through each point.

This isn’t quite perfect and gives diagonal lines a slanted edge, and makes them a bit too wide, so I eventually plan to at least change the code so the small lines are drawn at a 90° angle to the actual line you’re drawing. But that’s not done yet.

Drawing circles

Again, I just get the equation for circles off Wikipedia. It says that r² = (x - centerX)² + (y - centerY)², where “r” is the radius of the circle you want to draw, x and y are the coordinates of any point you want to test for being on the circle, and centerX and centerY are the center of the circle.

Once you know that, you can draw a circle like you draw a rectangle: you calculate the enclosing rectangle of the circle (by subtracting/adding the radius from/to the center point) and then, instead of just drawing every point in the rectangle, you insert each point into the circle equation. If the right-hand side comes out as r² or less, the point is in the circle and you draw it; otherwise you skip it.

Drawing the outline of a circle is just a specialized version of filling it. In addition to checking whether the equation comes out as r² or less, you also check whether it is greater than (r - lineWidth)². So essentially you’re checking whether a point lies between two circles: the inner edge of your outline and the outer edge of it.
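In sketch form (hypothetical signature; pass lineWidth equal to the radius and the ring test degenerates into a full fill):

void image::frame_circle(int centerX, int centerY, int radius, int lineWidth,
                         uint8_t red, uint8_t green, uint8_t blue) {
    int outer = radius * radius;
    int inner = (radius - lineWidth) * (radius - lineWidth);
    for (int y = centerY - radius; y <= centerY + radius; ++y)
        for (int x = centerX - radius; x <= centerX + radius; ++x) {
            int d2 = (x - centerX) * (x - centerX) + (y - centerY) * (y - centerY);
            if (d2 <= outer && d2 >= inner)  // between the inner and outer circle
                set_pixel(x, y, red, green, blue);
        }
}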

This is probably not the optimal way to draw a circle, but it looks decent and is easy enough to understand. There are many tricks: for example, you could calculate only the upper right quarter of the circle, then flip each coordinate horizontally and vertically around the center and thus draw 4 points with every calculation. Bresenham even came up with an algorithm where you only calculate 1/8th of a circle’s pixels.

Ovals

The library doesn’t do ovals yet, but I think they could be implemented by using the circle equation and multiplying the coordinate of the longer side of the surrounding rectangle by the ratio between width and height. That way, your coordinates are “projected onto a square”, in which you can use the circle equation.
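As a sketch of that idea (my own guess at an implementation, since the library doesn’t have one yet): scale one coordinate so the oval becomes a circle, then apply the circle test.

// rx and ry are the oval's half-width and half-height:
bool point_in_oval(int x, int y, int centerX, int centerY, int rx, int ry) {
    double px = x - centerX;
    double py = (y - centerY) * (double(rx) / ry);  // project onto a circle of radius rx
    return px * px + py * py <= double(rx) * rx;
}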

There are probably more efficient ways to do this.

Drawing bitmaps and text

To draw a bitmap (or rather, a pixel map) is basically a special case of rect drawing again. You take a buffer that already contains the raw pixels (like letterA in our example main.cpp). For simplicity, the code currently assumes that all images that you want to draw to the screen use 32-bit pixels. That also allows us to have a transparency value in the last 8 bits.

It simply draws a rectangle that is the size of the image, but instead of calling set_pixel() with a fixed color, it reads the color from the corresponding pixel in the pixel buffer we are supposed to draw. It also only draws pixels that are 100% opaque.
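That loop might look like this sketch (draw_image and its parameter layout are hypothetical, and it assumes the transparency value sits in the low byte of each 32-bit pixel):

void image::draw_image(int left, int top, const uint32_t *src, int width, int height) {
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            uint32_t px = src[y * width + x];
            if ((px & 0xff) != 0xff)
                continue;  // skip pixels that aren't 100% opaque
            set_pixel(left + x, top + y,
                      (px >> 24) & 0xff,   // red
                      (px >> 16) & 0xff,   // green
                      (px >> 8) & 0xff);   // blue
        }
}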

Text drawing is now simply a special case of this. You create a bitmap for every letter, then when asked to draw a certain character, load the corresponding bitmap and draw that. Of course, serious text processing would be more complex than that, but that is the foundational process as far as a drawing engine is concerned.

You’d of course need a text layout engine on top of that to handle wrapping, and other code to e.g. combine decomposed characters. Also, if you wanted to support the full Unicode character set (or even just all Chinese glyphs), you’d probably want to make your look-up happen in a way that you don’t need to load all bitmaps immediately, but can rather lazy-load them as they are used.

Clipping

When we later implement our own window manager, we will need to be able to have windows overlap. To do that, we need to be able to designate areas as “covered” and have set_pixel() just not draw when asked to draw into those.

This is not yet implemented. The general approach is to have a bitmap (i.e. a pixel buffer whose pixels only occupy 1 bit, on or off) of the same size as our pixel buffer that indicates which pixels may be drawn into (usually that’s called a “mask”).
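Sketched out, the test at the top of set_pixel() might look like this (the mask member is hypothetical, and uses one byte per pixel instead of one bit for simplicity):

// At the very start of set_pixel():
if (mask && !mask[y * width + x])
    return;  // this pixel is covered by another window, so don't draw it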

There are of course various optimizations you can apply to this. The original Macintosh’s QuickDraw engine used a compressed form of a bitmap called a “Region”, which simply contained run-length entries for each line, e.g. “5 pixels off, 10 pixels on”. Some graphics engines only allow clipping to rectangles (which can be described by 4 coordinates); if all your windows are rectangular, that is sufficient.

The only clipping the image class currently implements is that circles that fall off any of the edges get clipped, and that rectangles and bitmaps that fall off the bottom or right edges get clipped. The way rectangles are currently specified, it is impossible to have them fall off the left or top, as that would require negative coordinates.

If you currently try to draw outside the image’s defined area using set_pixel(), you will corrupt memory. For a shipping drawing system you’d want to avoid this, and we’ll get to this once we implement a higher-level drawing system on top of this one that deals with clipping, coordinate systems and transformations.

Raw graphics output on Linux: Part 1


In my quest to understand better how my computer works, I decided I want to write a very minimal window server. The first step in that is to create something that performs raw graphics output to the screen, directly to its back buffer.

So, as a test bed, I decided to grab the VirtualBox emulator and install Ubuntu Minimal on it. Ubuntu Minimal is a (comparatively) small Linux that is still easy to install, and will provide the graphics drivers we’ll be talking to, and a file system and a loader to load the code to run.

If you just want to know how drawing itself works, feel free to skip to Part 2 in this blog series.

Setting up the virtual machine

Setting up a VM is fairly self-explanatory with the setup assistant in VirtualBox. It has presets for Linux and even for various Ubuntus, and most of the time the defaults are fine for us:

[Screenshots of the VirtualBox setup assistant]

I’m choosing to name the VM “Winner”, short for window server, but you can choose whatever name you like:

[Screenshot]

Now you have a nice emulated empty computer:

[Screenshot]

Now, we need to tell it to pretend that the mini.iso Linux disk image file we downloaded from Ubuntu is a CD inserted in its optical drive. Do this by selecting the “Empty” entry under the CD, then clicking the little disc icon next to the popup on the right to select a file:

[Screenshot]

Note that you would have to use the “Choose Virtual Optical Disk File…” item; I have the mini.iso entry in here already because I previously selected the file.

[Screenshots]

Now you can close the window using the “OK” button and click the green “Start” arrow toolbar icon to boot the emulated computer.

Installing Ubuntu Minimal

[Screenshot]

Ubuntu will boot up. Choose “Command-Line install” and use the arrow and return keys to navigate through the set-up. Pick your language, country and keyboard layout (if you’re on a Mac, choose to tell it instead of having it detect, and pick the “Macintosh” variant they offer):

[Screenshot]

It will then churn a bit:

[Screenshot]

And then it will ask you to name your computer:

[Screenshot]

You can pick pretty much any name for your emulated home computer; it doesn’t really matter for what we are doing. I picked “winner”.

Then it will ask you to choose the country you are currently in, so it can pick the closest server for downloading additional components:

[Screenshot]

And if they have several servers in your country, they’ll offer a choice. Just pick whatever it offers you; it’ll be fine.

[Screenshot]

Then it will ask you if you need to use a proxy. Unless you’re in a weird restrictive company or university network or trying to get around an oppressive government’s firewall, you can just leave the field empty and press return here to indicate no proxy is needed:

[Screenshot]

Then it will churn some more, downloading stuff off the internet etc.:

[Screenshot]

Now it’s time to set up your user account, password (twice) etc.:

[Screenshots]

In this emulator, we don’t need an encrypted hard disk (If you need it, your computer’s hard disk is probably already encrypted, and your emulated computer’s files are all stored on that anyway).

[Screenshot]

Then it will ask you about some system clock settings (the defaults should all be fine here):

[Screenshot]

Then it will ask how to partition and format the hard disk. You’re not dual-booting anything, the emulated computer is for Linux only, so just let it use the entire disk:

[Screenshot]

And don’t worry about selecting the wrong disk, it will only offer the emulated hard disk we created. Tell it to create whatever partitions it thinks are right:

[Screenshot]

And it will churn and download some more:

[Screenshot]

Since we may want to keep using this for a while, let’s play it safe and tell it to apply any important updates automatically:

[Screenshot]

And when it asks if it is OK to install the boot loader in the MBR, just say yes:

[Screenshot]

Again, there is no other operating system inside this emulation; they’re just being overly cautious because so many Linux users have weird setups.

For the same reason, you can just let it run the emulator with a UTC system clock as it suggests:

[Screenshot]

That’s pretty much all. Tell it to restart, and quickly eject the CD disk image by un-checking it from your “Devices” menu:

[Screenshot]

Setting up Ubuntu

Ubuntu is pretty much ready to go; you now have a neat command-line OS. However, for our purposes we want graphics card drivers, and since this is the minimal Ubuntu, a lot is turned off. So let’s turn that back on again and install some missing parts that we need for our experiments. Log in with your username and password and edit the configuration file /etc/default/grub, which tells the bootloader what to do:

[Screenshot]

If you’re unfamiliar with the Unix Terminal, just type sudo nano /etc/default/grub and enter your password once it asks. sudo means pretend you’re the computer’s administrator (as we’re changing basic system settings, that’s why it wants your password). nano is a small but fairly easy to use text editor. It shows you all the commands you can use at the bottom in little white boxes, with the keyboard shortcuts used to trigger them right in them (“^” stands for the control key there):

[Screenshot]

Most of the lines in this file are deactivated (commented out) using the “#” character. Remove the one in front of GRUB_GFXMODE to tell it to use a graphical display of that size instead of the usual text mode that we’re currently using.
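In other words, the line that reads #GRUB_GFXMODE=640x480 (the exact resolution in your copy of the file may differ) should end up looking like this:

GRUB_GFXMODE=640x480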

Save and close the file (WriteOut and Exit, i.e. Ctrl+O, Ctrl+X in nano).

Now usually this would be enough, but Ubuntu Minimal is missing a few components. So now type sudo apt-get install v86d. This tells Ubuntu to install the v86d package, a little helper that the framebuffer driver we’re about to load uses to run the graphics card’s BIOS code. If you left out this step, the next step would fail with an error message telling you that v86d doesn’t work. Confirm that you want to install these whopping 370kB of code by pressing “y” when asked. It will churn a bit.

Type in sudo modprobe uvesafb. The graphics drivers on Linux all implement the so-called “framebuffer” commands. That’s what “fb” here stands for. VirtualBox emulates a “VESA” display, and “uvesafb” is the modern version of the “vesafb” graphics driver you’d want for that. So we’re telling our Kernel to load that module now.

If all works, all that you should see is that your screen resizes to 640×480, i.e. becomes more square-ish:

[Screenshot]

Now we don’t want to manually have to activate the frame buffer every time, so let’s add it to the list of modules the Kernel loads automatically at startup. Type sudo nano /etc/initramfs-tools/modules to edit the module list and add “uvesafb” to the end of the list (in my case, that list is empty):

[Screenshot]

The professionals also suggest that you check the file /etc/modprobe.d/blacklist-framebuffer.conf to make sure it doesn’t list “uvesafb” as one of the modules not to load. If it does, just put a “#” in front of it to deactivate it.

[Screenshot]

Now run sudo update-initramfs -u which tells the system to re-generate some of the startup files that are affected by us adding a new module to the list. It will churn for a moment.

Now we need a nice compiler to compile our code with. There’s probably a copy of GCC already on here, but just for kicks, let’s use clang instead, which gives nicer error messages. Enter sudo apt-get install clang:

[Screenshot]

Finally, we need a way to get our source code on this machine, so let’s install the git version control system:

sudo apt-get install git

OK, now pretty much everything we need is set up. Part 2 in this series will get us to actually running some code against this graphics card driver.

You can shut down your virtual Linux box until you’re ready to try Part 2 by typing sudo poweroff.

Playing with Objective C on Debian

[Debian showing Objective C source code in gedit]

I felt like playing with Linux a bit, so I went and installed Debian on a partition. Apart from a few failed attempts to install it on an external drive (something that works fine with Mac OS X, so I was spoiled) and a bit of confusion when it asked me for my drive name (how would I know a cryptic character combination like hdb4? And I selected that drive in your UI before; can’t you let me use that same selector again?), it went pretty smoothly.

Once I had installed Debian, I wanted to play a little with GCC on there. However, like a nice desktop OS, it doesn’t install the developer tools by default, so I opened a root Terminal window and typed in

apt-get install gcc cpp binutils libc6-dev make gobjc

Most of the items after install are what any web site on Debian will tell you is needed to use GCC: GCC itself, the C preprocessor, binutils (mainly for the ld linker and the as assembler), the C standard library and the make command line tool for making it easier to build complex build commands (think a command line version of Xcode project files).

But the last one, gobjc, installs the GNU Objective-C runtime and compiler. This is only the runtime with the old Object base class, i.e. without -retain and -release. You get Object by including objc/Object.h. You’ll also want to add the -lobjc option to your GCC command line, or you’ll get lots of error messages about missing objc_msgSend() etc.

These headers get installed in /usr/lib/gcc/x86_64-linux-gnu/4.3/include (or whatever GCC version you installed, and whatever architecture your machine has), in case you want to find out what Object looks like.
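Here’s a minimal example of what a program against this old runtime looks like (my own sketch, note -free instead of -release); build it with something like gcc hello.m -lobjc -o hello:

#include <objc/Object.h>
#include <stdio.h>

@interface Greeter : Object
- (void)greet;
@end

@implementation Greeter
- (void)greet
{
    printf("Hello from the GNU Objective-C runtime!\n");
}
@end

int main(void)
{
    Greeter *greeter = [Greeter new];  // the old runtime's combined alloc-and-init
    [greeter greet];
    [greeter free];                    // no -retain/-release on this runtime
    return 0;
}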

To get a more familiar Foundation “framework”, I built and installed libNuFound, a Foundation clone that works alongside the GNU runtime, written for use with the Nu programming language. The basic installation is detailed on their GitHub page: essentially the traditional

./configure
make
make install

dance, except that you need to copy the headers yourself:

cp -r Foundation /usr/local/include

Then, if you wanted to build a Foundation tool whose source code is in a file main.m, what you need to do is:

export LD_RUN_PATH="$LD_RUN_PATH:/usr/local/lib"
gcc main.m -lobjc -lNuFound -lm -ldl -fconstant-strings-class=NSConstantString

The thing with LD_RUN_PATH is needed so the linker can write the full path of the library into the executable. Otherwise you get an error like

error while loading shared libraries: libNuFound.so.0: cannot open shared object file: No such file or directory

There are other options to solve this problem, which make will tell you about when you build/install libNuFound. The -lobjc option pulls in the GNU ObjC library, -lNuFound grabs the Foundation library we just built, and -lm and -ldl grab the standard C math library and a code-loading library needed by libNuFound. The last parameter tells the ObjC compiler that Objective C string constants like @"Cool" should generate NSConstantString objects, not the old String flavor.

But hey, now I have a Debian installation that runs Objective-C code. Neat :-) No UI libraries, though.

Update: Note that you will probably want to install this on a 32-bit CPU with a 32-bit Debian. With some investigation, Mr. Mo and I found that libNuFound seems to have a few bugs on 64-bit CPUs at the moment.


Porting to the Macintosh

It comes up a lot on the mailing lists, so I thought I’d write a little piece on the best approach to port an application from another platform (like Windows or a Linux) to the Macintosh. I won’t go into much detail, but outline the general way and the most common pitfalls.

Do not use Carbon or Cocoa-Java

Apple has, for historical and political reasons, provided several different programming APIs. One of them is Carbon, a C-based API that many cross-platform developers want to use because it’s more like MFC and other frameworks they already know. At this year’s Worldwide Developers Conference (WWDC), a long-running fight inside Apple finally came to a close, and Apple effectively announced it was killing Carbon. There is still some old documentation up on Apple’s developer web site that says otherwise, but don’t let that fool you.

If you are just getting started programming the Mac, use Cocoa to write your end-user application with a graphical user interface. A few of Carbon’s prettiest cherries have been “rescued”, but look for a Cocoa solution first. In particular, any GUI code written in Carbon (control manager, HIToolbox) is not a good investment of your time at this point.

There are also a number of bridges that allow you to use Cocoa with other languages. They’re a great technology, but most of them are fairly new, and you will face issues. Also, since they map Cocoa/Objective C concepts to another language, there will always be something lost in the translation. You will have to know how Objective C does things to understand these oddities. So why not go for Objective C in the first place, where Apple spends its main effort? All the documentation is for Objective C, too.

Don’t even try to use the Cocoa-Java bridge. Apple has already made clear it will see no more development.

You can use C++ with Cocoa

Many people think they will have to rewrite their application in Objective C to make it use Cocoa. That’s not true. Apple has taken great care to make it possible to mix Objective C and C++ in a project. The key here is the “Objective C++ compiler”, which is part of Xcode. You will still have to write the Mac-specific parts in Objective C, but the rest of your application can stay in C++ and can be easily shared with your Windows or Linux developers.
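For illustration (hypothetical class names, manual retain/release as was current at the time of writing), a file with a .mm extension gets compiled as Objective-C++ and can mix both languages freely:

#import <Foundation/Foundation.h>
#include <string>

// A plain C++ class, e.g. from your shared cross-platform model layer:
class Document {
public:
    std::string title() const { return "Untitled"; }
};

// An Objective-C class wrapping the C++ object for the Cocoa side:
@interface DocumentController : NSObject
{
    Document *_doc;  // Objective-C objects can hold C++ pointers just fine
}
- (NSString *)title;
@end

@implementation DocumentController
- (id)init
{
    if ((self = [super init]))
        _doc = new Document;
    return self;
}

- (void)dealloc
{
    delete _doc;
    [super dealloc];
}

- (NSString *)title
{
    return [NSString stringWithUTF8String:_doc->title().c_str()];
}
@end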

Don’t be afraid of Objective C; Objective C is essentially C with a few simple extensions to the language, which are quickly learned. These extensions may seem like a fancy way of duplicating C++ just for the heck of it, but actually they’re completely different. Objective C essentially is a very elegant and simple way of wrapping up features that other platforms provide separately through COM or CORBA, Qt’s preprocessor, and many other mechanisms. Moreover, the Cocoa frameworks have been designed with Objective C’s particular strengths and weaknesses in mind. Porting Cocoa to C++ would result in such an ugly mess of nested template code that you wouldn’t want to use it.

There is no MFC on the Mac

If you’ve never before ported an application to another platform (and I don’t count Windows CE as a different platform than desktop Win32), don’t think you can just look at each function and map it to “the Mac equivalent”. Each platform has been designed at a different time, and thus incorporated the currently accepted programming practices. To create a usable application, you will have to do things the way they are intended to be done on a particular platform. For the Mac, this means making sure your application is split up into a model-, a view- and a controller-layer, according to the MVC design pattern. This is not a Mac-ism: It is a standard design pattern listed in the GOF book, and commonly accepted as the best way to structure a cross-platform application.

If you find that something seems impossibly hard to do in Cocoa, chances are that you’re doing something that you’re either not supposed to be doing, or that you are supposed to be doing in a completely different way. The benefit of this is that you will find yourself being boosted by the framework in ways you didn’t think possible, instead of finding yourself fighting it every step of the way.

Cross-platform frameworks

There are frameworks that have been designed to sit on top of a platform’s native frameworks and hide away the details, for example Qt, Quaqua or wxWidgets. In short: If you aren’t porting a game, where all user interface is custom and you’re full screen without a menu bar anyway, don’t do it. Most of these frameworks don’t really behave like Mac users expect them to. Menus look subtly wrong, don’t scroll and don’t indicate overflows correctly. Keyboard shortcuts don’t work as expected. Pushbuttons are used with the wrong style, disabled icons look black-and-white instead of faded out…

The long story: There are ways to make them work, but in the end, you can’t really share much code regarding UI behaviour, so you might as well go fully native on each platform. Most cross-platform toolkits get the look and the basic behaviour right, but at some point fall into an uncanny valley where they frustrate Mac users. However, of course you can wrap the Mac’s drawing APIs to share code for some of your custom views and displays.

Resources for Mac programmers

I already mentioned Apple’s developer web site above, which contains a lot of resources: documentation, sample code etc. A lot of this stuff gets installed on your Mac automatically when you install the Xcode tools (you’ll find much of the sample code, though not all of it, in /Developer/Examples; the documentation can be found in Xcode’s Help menu, and Xcode will periodically download the newest version automatically).

There is a whole section on porting from other platforms on Apple’s developer web site. Just keep in mind that anything suggesting Carbon is probably outdated.

Apple also runs a bunch of mailing lists for developers. These are mainly a place to meet other developers, and are not an official support channel. Nonetheless, make sure you post on the right list: Many people post Cocoa questions on the Objective-C mailing list, which is mainly about the language itself and the language standard, and rarely the one you want to post on as a Mac developer.

Finally, if you find issues, use Apple’s bug reporter, also known as RADAR. You need a free “ADC Online” account, but that’s just so you don’t have to enter your info twice (you can use the same account in Apple’s store and iTunes, BTW). Do not post your bug reports to the mailing lists. You can ask if someone has found a workaround, but the mailing lists aren’t an official bug report channel; unless you bother filing a bug, Apple will just think you’re venting and that it’s not important enough to you, and will focus on the bugs somebody actually filed and thus indicated matter to them.

The state of Desktop Linux… or so

Today I had the urge to try installing Linux on my Intel Mac’s external hard disk. I’m still busy doing this, and I’ll add to this entry as I progress. I picked Kubuntu, a version of Ubuntu that’s preconfigured to use KDE as its desktop environment. I downloaded the live CD and applied Apple’s Firmware Update 1.0.1, and after burning the live CD under OS X I was able to boot into Linux. It worked pretty well. There are a few snags I hit, though:

Startup

It’s a little odd that there’s a frozen, unmoving progress bar at startup. Okay, I didn’t think the machine had hung, because that check-list of tasks it does at startup scrolled past, but having a general idea how long it’ll take would be better (even if it’s the number of tasks and not the actual time it’ll take).

The Installer on the Live CD

The Live CD includes a neat installer that you can use to install Kubuntu onto your computer as soon as you’ve tried it out. This is neat, and the installer is pretty simple. Except for the partition selection, that is. I just don’t understand why:

  • The language selector list doesn’t have keyboard focus from the start? Instead of just hitting the up arrow to select my language, I need to reach for the mouse or wildly tab around. Now, I’m not disabled, but for those who are, this could be a real annoyance.
  • The option to select an existing partition is hidden behind the option to “manually edit the partition table”. I didn’t choose that as I thought it’d drop me into fdisk at best. It stayed in the GUI, in a fairly rudimentary partition editor, that requires use of a contextual menu to reformat a drive. Why not give me a nice UI like QParted, the partition editor included?
  • The only other understandable option it offered was to completely flatten the external hard disk. Sadly, when I clicked Next after choosing that, it hung without a progress indicator or anything. I killed it after an hour.
  • Only on the next page after that does it show me all partitions it can make sense of and let me pick one of them. It even shows the drive names of the volume formats it can make sense of (though UFS and HFS+ aren’t among them), so I can see whether I’m trashing the right partition. The silly part is that the menu to select the drive is hidden behind a tiny chevron in the upper right of the window. But why is this page so well-hidden? I never expected to find it in a section on “manually editing the partition table”. Be sure to remember the device “names” of the partitions you want to use here, e.g. /dev/hdb5 or whatever…
  • On the page after that, there are oodles of popups for the different purposes for which you can attach partitions. Of course, all these partitions only show Unix device names, not the actual partition names, so I have to go back and look up what device the partition I named “Swap” is at… If they offer partition names (or “labels” as they call them), they really should show them for every volume. Also, they need to change these popups. I have no idea whether I have to set all of them, or just the minimal three (what to mount at “/”, the “SystemPartition” and “Swap”?), and what I do to indicate I don’t want to mess with one of these. It’d really help here if you had checkboxes in front of those that can be turned off because they’re optional. I just selected the “empty popup item” for those where I didn’t know what to do… was that right?
  • The different pages need to interact more. I originally forgot to reformat my system partition to ext3 (I left it as MS-DOS) but selected the option to reformat it next to its popup, and got all the way to the end only to be told FAT32 isn’t supported for the boot volume. This warning should already come when I select the wrong volume format, maybe with an offer to retroactively change my settings to a more sensible one.
  • Every time I complete a page, a little progress window named “Installer” (aren’t I already in the installer?) comes up. Once that’s finished, there is a moment where I can still mess with the current page’s GUI, but it’s already in the process of switching to the next. Trouble is, changes I make at that time stick in the GUI, but aren’t actually used. So, when I go back, it’ll act as if I had only just changed this setting and only then it gets applied. Either lock the GUI at this time, or write code to actually carry over the changes right away. The way it is now sends wrong messages.
  • The installer is Denglisch, i.e. half the stuff is localised German as I requested, but other parts aren’t, often in the same window. I downloaded a “final” release, and KDE has lots of developers and supporters in Germany, so I’m a little surprised to get buttons named “Cancel” next to a German description of what to do and a “Zurück” (“Back”) button.

Networking

I entered my AirPort connection info in the network settings for the WLAN interface, but still only got very quick “server not found” messages.

After a while, I found the Wifi Assistant in the K Menu’s “Internet” submenu that let me discover my WLAN and the ‘net is working now. Why they couldn’t put a button for that in System Settings’ Network pane, I don’t know.

Also, when I click “Apply” in System Settings, it doesn’t remember I already saved my changes. I get asked again and again whether I want to save my changes to this panel.

Konqueror/Desktop

I guess this is a detail of the new Intel Macs that’s not yet supported, but why does an “EFI” partition show up on my desktop in KDE?

Also, my external USB hard disk has an icon of a USB stick. If you can’t reliably detect what kind of USB device it is, use a generic icon. Also, the icon for a mounted partition (with a green triangle next to it) is really un-intuitive. The concept of “mounting” a partition (especially if it’s a non-ejectable thing like a hard disk) is hard enough for newbies to understand as is, but if the icons are so completely off the mark and don’t illustrate it correctly, it just gets confusing.

After the installer finishes

Once the installer has finished, I get an error about it being unable to install GRUB, which is some sort of boot loader that’s supposed to start up the right system when the computer is turned on. I googled a while until I found a tutorial on installing Ubuntu on a MacBook. According to that, this is OK, and I have to do some command-line magic to get it working on my Mac.

For one, it tells me to install rEFIt. There’s no information on what it actually is on their site, but a chat on IRC and a Google search on a different topic eventually informed me that rEFIt apparently is a “shell” for EFI, which Macs by default don’t include. Once that’s installed, you can supposedly do the equivalent of “booting into Open Firmware” or “booting into BIOS” that other computers offer.

According to bin-false, I need to execute the following commands in a Terminal window now, however those only worked for me in a root shell, so I added the first line:

sudo /bin/bash
mkdir /mnt/ubuntu
mount /dev/sdb5 /mnt/ubuntu/
mount -t proc none /mnt/ubuntu/proc
mount -o bind /dev /mnt/ubuntu/dev
chroot /mnt/ubuntu /bin/bash

apt-get install lilo lilo-doc linux-686-smp linux-restricted-modules-2.6.15-23-686 linux-kernel-headers

My limited understanding of Linuxery tells me this mounts the new Linux partition we just created (which is why I wrote sdb5 here instead of sda3 as the original text said) and its proc and dev directories, and then opens a root shell that thinks that this partition was at / for me to work in. Then it uses apt-get to download lilo (the good old “Linux Loader” boot loader), and some Kernel sources. Why we need to do that? No clue, but I guess that proves that rEFIt isn’t a boot loader…?!

As apt-get is wont to do, this will spend some time downloading and installing stuff for us. Then we’ll be shown a short message from LILO, which tells us we’ll have to run liloconfig(8) and /sbin/lilo afterwards.

It will do some more setup, and then it’ll drop you in the console. Now I’m entering

liloconfig

And it asks me a few questions. Yes, I want a partition boot record, and yes I want LBA32 for large disks (I guess?). Then it asks for the kernel image bitmap I want. WTF? It offers me sarge, sid, coffee and debianlilo …