Running the Puppet Learning VM on Mac OS X

This post describes how to get the Puppet Learning VM running on a Mac OS X system. It uses Parallels as the VM host (for reasons which will become apparent).

Puppet is a popular infrastructure automation tool, and the learning environment they provide can be downloaded from here.

VirtualBox Fail (Oh no it didn’t)

The recommendation for the VM download, which is an OVA archive, is to use either VMware or VirtualBox as the host. As I have a Mac, the VMware product is VMware Fusion, which is not free. VirtualBox is free for personal use, so I decided to use that.

I imported the OVA into VirtualBox (version 5) but found that when I started the VM it threw errors about not finding the SCSI disk. I played around with different hardware configs in the VirtualBox settings but it didn’t seem to make any difference.

UPDATE: I emailed the Puppet learning team to let them know about my issues and they asked me to gather some stats on the problem. However, wouldn’t you know it, I re-ran the import and it all worked fine in VirtualBox. Looking into it, I think running the VM with 2 CPUs on my 2-core iMac was just a bit too much of a strain, so it was losing CPU cycles and lost its connection to the virtual disk.

As I normally use Parallels for VM hosting on my Mac I decided to see if there was a way to import the Puppet Learning VM into Parallels.

Parallels isn’t free either, but as I have already paid for it and use it to run other systems, it made sense for me to try it once VirtualBox failed.

Converting OVA files into Parallels

There is a very handy Knowledge Base article here on how to convert OVA files into VMX files for Parallels to then convert.

Following that KB article as a guide, I first downloaded the OVF Tool from the VMware site (you’ll need to register for an account on the VMware site, but it is free).

Run the installer for the OVF Tool and you are then ready to create the VMX and VMDK files from the OVA archive you previously downloaded and unzipped.

Open a Terminal session and change directory to where the OVA file is, then run the following command:

/Applications/VMware\ OVF\ Tool/ovftool --lax puppet-2015.2.0-learning-2.30.ova puppet.vmx
Opening OVA source: puppet-2015.2.0-learning-2.30.ova
The manifest validates
Opening VMX target: puppet.vmx
- Hardware compatibility check is disabled.
Writing VMX file: puppet.vmx
Transfer Completed
- No manifest entry found for: 'puppet-2015.2.0-learning-2.30.ovf'.
- File is missing from the manifest: 'puppet-2015.2.0-learning-2.30.ovf'.
Completed successfully

Then launch Parallels Desktop, go to File -> Open and choose the puppet.vmx file. A message comes up saying it needs to convert the file.

Convert Puppet

From here click Convert and then choose the location where you want to store the converted VM.

You will see a warning like the one below saying Parallels cannot determine the VM guest OS, but you can ignore that and just continue.

Convert Puppet 2

The conversion process takes a few minutes, and at the end you will be asked if you want to start the VM to complete the conversion, i.e. install Parallels Tools.

Convert Puppet 3

Click No here as you want to change some settings on the network card before starting the VM.

Then choose Actions->Configure from the Puppet VM window (or click on the Gear in the top right, or go to the Parallels Desktop Control Center (sic) and click the gear there).

This will bring up the hardware config window for the VM. Confirm it has 2 CPUs and 2048 MB of memory, then click the Network tab, change the network card type to “virtio” and set the network type to Bridged to the default adapter (or choose a specific adapter if you know what you need for your Mac). Finally, click the Generate button by the MAC address and generate a new one, just to be on the safe side.

Convert Puppet 4

You can now start your VM and it should pick up its own IP address from your DHCP server, using the same network settings as your Mac.

When the VM has started it will display its IP address, and you can use this in a browser to access the first quest. You can also ssh into it from another terminal session on your Mac.

Convert Puppet 5

If there is no IP address shown after the http:// on the screen, double-check the network settings in Parallels for the VM, as it means the VM hasn’t acquired an IP address. (You’ll need to shut down the VM to change most settings.)

At this point you could (and maybe should) install Parallels Tools; however, as I don’t want to mess with the VM, I have left it until I feel it really needs them.

Setting up a Red Hat / CentOS 7 yum repository with vsftpd, firewalld and SELinux


This post describes how to set up your Red Hat or CentOS 7 server to be a yum repository for both the local server and also to serve other servers on the network via FTP using vsftpd. It uses the distro ISO as the source for the packages.

You need to be the root superuser to set this up.

These instructions create a local repo first, then use it to install vsftpd and set up a remote repo available via FTP.

Mount the ISO

Create a mount point and mount the iso image using a loopback mount.

# mkdir /mnt/iso
# mount -t iso9660 -o loop,ro rhel-server-7.1-x86_64-dvd.iso /mnt/iso
# df /mnt/iso
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop0 3798292 3798292 0 100% /mnt/iso

Create the repo directory and copy the packages to it.

mkdir -p /var/yum/repos.d/rhel7
cp -rpv /mnt/iso/Packages/ /var/yum/repos.d/rhel7

The cp command will take a while; the -v flag shows what it is doing.

Note: Instead of creating the repo in /var/yum/repos.d you could create it directly in the public FTP directory; see the steps for vsftpd. However, that assumes you can already install vsftpd from somewhere and are happy to have the files directly in /var/ftp/pub. See the note in the section on configuring vsftpd.

Create the local repo with the createrepo command

# createrepo /var/yum/repos.d/rhel7
Spawning worker 0 with 1093 pkgs
Spawning worker 1 with 1093 pkgs
Spawning worker 2 with 1093 pkgs
Spawning worker 3 with 1092 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Again this will take a few minutes as it analyses all the packages.

If you don’t have the createrepo command installed, you can install it with yum (if you currently have access to a remote repo on the internet) or install the RPM from the Packages directory you just created:

yum install createrepo


# cd /var/yum/repos.d/rhel7/Packages/
# ls createrepo*
# rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:createrepo-0.9.9-23.el7 ################################# [100%]

Set up your local repository

Now you have the repo created you can use it on the local system by setting up a repo conf file for it. Use your editor of choice (which is vi, of course) to create the repo file.

vi /etc/yum.repos.d/rhel7.repo

[rhel7]
name=Repo of installation iso packages
baseurl=file:///var/yum/repos.d/rhel7
enabled=1
gpgcheck=0

Note the three /s in the file URI (the file:// scheme followed by an absolute path). gpgcheck is set to zero so that yum will not look for package signatures.

Confirm the repo is now available locally.

# yum clean all
Loaded plugins: langpacks, product-id, subscription-manager
Cleaning repos: rhel7
Cleaning up everything
# yum repolist enabled
Loaded plugins: langpacks, product-id, subscription-manager
rhel7 | 2.9 kB 00:00:00
rhel7/primary_db | 3.4 MB 00:00:00
repo id repo name status
rhel7 Repo of installation iso packages 4,371
repolist: 4,371

Install and configure vsftpd

Now the repo is available you can install vsftpd with yum. Then set the service to start automatically and allow it to operate through your firewall if one is running.

# systemctl start vsftpd
# systemctl status vsftpd
vsftpd.service - Vsftpd ftp daemon
Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled)
Active: active (running) since Sat 2015-09-05 14:14:58 BST; 14s ago
Process: 17389 ExecStart=/usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf (code=exited, status=0/SUCCESS)
Main PID: 17390 (vsftpd)
CGroup: /system.slice/vsftpd.service
└─17390 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf

Sep 05 14:14:58 systemd[1]: Starting Vsftpd ftp daemon...
Sep 05 14:14:58 systemd[1]: Started Vsftpd ftp daemon.
# systemctl enable vsftpd
ln -s '/usr/lib/systemd/system/vsftpd.service' '/etc/systemd/system/'

We are going to use the default anonymous FTP configuration, so the repo needs to be made available via /var/ftp/pub. You could have installed the packages into that directory directly, but these instructions assume you have the repo set up elsewhere and want to be able to “link” it to /var/ftp/pub. You can’t use a symbolic link, as vsftpd specifically disallows following links out of the chroot dir of the ftp user. So instead you can bind mount it.

Before all that, though, we have to test that vsftpd is working and set up the firewall rules if applicable.

# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Sat 2015-09-05 11:10:16 BST; 3h 12min ago
Main PID: 12625 (firewalld)
CGroup: /system.slice/firewalld.service
└─12625 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

If you are not using a firewall then you can skip the commands below that allow the FTP service.

# firewall-cmd --get-default-zone
# firewall-cmd --query-service=ftp
# firewall-cmd --query-service=ftp --permanent

If the service is not allowed then add it both in the runtime config and the permanent config.

# firewall-cmd --add-service=ftp
# firewall-cmd --add-service=ftp --permanent
# firewall-cmd --query-service=ftp
# firewall-cmd --query-service=ftp --permanent

You can now test vsftpd by going to a remote server and using an FTP client to log in anonymously. (You can also test it locally.) If you don’t have an FTP client you can install a basic command line one using

yum install ftp

You should be able to log in and see the root directory (which is chrooted to /var/ftp/ by default).

# ftp
Connected to (
220 (vsFTPd 3.0.2)
Name ( anonymous
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> pwd
257 "/"
ftp> ls
227 Entering Passive Mode (192,168,0,16,201,135).
150 Here comes the directory listing.
drwxr-xr-x 3 0 0 18 Sep 05 13:41 pub
226 Directory send OK.
ftp> quit
221 Goodbye.

Now we need to create a directory for the repository to be mounted and do a local bind mount of the local repo.

# mkdir /var/ftp/pub/rhel7
# mount --bind /var/yum/repos.d/rhel7/ /var/ftp/pub/rhel7/
# ls -l /var/ftp/pub/rhel7/
total 300
dr-xr-xr-x. 2 root root 229376 Feb 19 2015 Packages
drwxr-xr-x. 2 root root 4096 Sep 5 13:40 repodata

This only mounts the directory temporarily, so we need to unmount it, add an entry to /etc/fstab and check it can be mounted automatically.

# umount /var/ftp/pub/rhel7/
# vi /etc/fstab

Append the following line

/var/yum/repos.d/rhel7/ /var/ftp/pub/rhel7/ none defaults,bind 0 0

Save the file and try the mount

# mount /var/ftp/pub/rhel7/
# ls -l /var/ftp/pub/rhel7/
total 300
dr-xr-xr-x. 2 root root 229376 Feb 19 2015 Packages
drwxr-xr-x. 2 root root 4096 Sep 5 13:40 repodata

Now at this point the only thing stopping FTP from accessing these files is SELinux, if you have it running.

Check whether it is in enforcing mode and what the contexts are for /var/ftp/pub and /var/yum/repos.d/rhel7/.

# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 28
# ls -lZ /var/ftp/
drwxr-xr-x. root root system_u:object_r:public_content_t:s0 pub
# ls -lZ /var/ftp/pub/rhel7/
dr-xr-xr-x. root root unconfined_u:object_r:var_t:s0 Packages
drwxr-xr-x. root root unconfined_u:object_r:var_t:s0 repodata

Change the context type of the rhel7 dir and all its contents to be publicly readable (note that a chcon change will not survive a full filesystem relabel; semanage fcontext followed by restorecon is the permanent equivalent):-

# chcon -R -t public_content_t /var/ftp/pub/rhel7/
# ls -lZ /var/ftp/pub/rhel7/
dr-xr-xr-x. root root unconfined_u:object_r:public_content_t:s0 Packages
drwxr-xr-x. root root unconfined_u:object_r:public_content_t:s0 repodata

Now when I connect with anonymous ftp I can see the contents of the directories.

ftp> pwd
257 "/"
ftp> ls
227 Entering Passive Mode (192,168,0,16,201,135).
150 Here comes the directory listing.
drwxr-xr-x 3 0 0 18 Sep 05 13:41 pub
226 Directory send OK.
ftp> cd pub
250 Directory successfully changed.
ftp> ls
227 Entering Passive Mode (192,168,0,16,131,200).
150 Here comes the directory listing.
drwxr-xr-x 4 0 0 36 Sep 05 12:40 rhel7
226 Directory send OK.
ftp> ls rhel7
227 Entering Passive Mode (192,168,0,16,134,133).
150 Here comes the directory listing.
dr-xr-xr-x 2 0 0 229376 Feb 19 2015 Packages
drwxr-xr-x 2 0 0 4096 Sep 05 12:40 repodata
226 Directory send OK.
ftp> quit
221 Goodbye.

The final step is now to log on to the remote client that wants to use this repo and set up its repo conf file.

# vi /etc/yum.repos.d/remote.repo

[remote]
name=Remote Repo from fitpc4
baseurl=ftp://fitpc4/pub/rhel7
enabled=1
gpgcheck=0

Now you can install from this remote repo, e.g.

# yum clean all
Loaded plugins: langpacks, product-id
Cleaning repos: remote
Cleaning up everything
# yum repolist
Loaded plugins: langpacks, product-id
remote | 2.9 kB 00:00:00
remote/primary_db | 3.4 MB 00:00:01
repo id repo name status
remote Remote Repo from fitpc4 4,371
repolist: 4,371
# yum install ftp
Loaded plugins: langpacks, product-id
Resolving Dependencies
--> Running transaction check
---> Package ftp.x86_64 0:0.17-66.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
ftp x86_64 0.17-66.el7 remote 61 k

Transaction Summary
Install 1 Package

Total download size: 61 k
Installed size: 96 k
Is this ok [y/d/N]: y
Downloading packages:
ftp-0.17-66.el7.x86_64.rpm | 61 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ftp-0.17-66.el7.x86_64 1/1
Verifying : ftp-0.17-66.el7.x86_64 1/1

Installed:
ftp.x86_64 0:0.17-66.el7

Complete!

If you have problems, check the SELinux logs in /var/log/audit.
If you get really stuck, try temporarily disabling the firewall and see if that helps. Similarly, try temporarily putting SELinux into permissive mode (setenforce 0 takes effect immediately; fully disabling SELinux in its config file needs a reboot).
See the references below for how to do those things.
These measures should only be temporary, to let you diagnose where the issue is.


I used lots of sources whilst I was trying to set this up. None of them quite covered all the steps (hence writing this blog post, to put it all in one place for RHEL 7) but the ones below helped a lot.

FTP & SE Linux
SELinux and vsftpd on CENTOS
Creating a repo using vsftpd
Mount --bind and fstab
Disable firewall
SELinux Permissive Mode

Internet Hotkeys – Amarok dcop play/pause

Well the solution to getting my Play/Pause button to actually work as a play/pause toggle was pretty easy.

Amarok supports a playPause() method that is registered with the dcop server, so in my hotkeys.conf file the command for the Play button became

dcop amarok player playPause

dcop is the command line client used to talk to the dcop server; amarok is of course the application I want to talk to.

player is the section of the amarok services and playPause is the function/method I want to call.

To find this out I used kdcop, the graphical interface, and explored what it offered under the amarok application.

Internet Hotkeys

My keyboard is a Logitech Internet Pro and it has 7 keys at the top for special functions that I’ve never really made use of in Kubuntu, but now I’ve got them all functioning using the handy application “hotkeys”.

Here’s what I did to get them working:

First, the actual keys are labelled:

Media, Play/Pause, Mute, Vol +, Vol -, Favorites, Email, WWW

I tried to use KDE keyboard variants to get them working but this didn’t really work that well, so I installed the application hotkeys:-

sudo apt-get install hotkeys

The hotkeys application intercepts keys and processes actions according to a couple of configuration files.

The first config file is the definition of the keycodes that are generated by your keyboard and what hotkeys command name to map them to. There is one definition file for each type of keyboard that hotkeys supports. To see the list of supported keyboards run the command

hotkeys -l

This actually reads the contents of various .def files from the config directory (/usr/share/hotkeys in Kubuntu).

The second config file defines what action or programs are run when the various keys are pressed. By default in Kubuntu this is installed as /etc/hotkeys.conf. The best way to customise this is to create a directory in your homedir called .hotkeys and copy /etc/hotkeys.conf into there.

mkdir ~/.hotkeys

cp /etc/hotkeys.conf ~/.hotkeys/hotkeys.conf

The hotkeys.conf file consists of simple key/value pairs and you can edit it to launch the applications you require. The setting for Kbd defines what keyboard definition file is loaded when you run hotkeys.

Here is the final version of my hotkeys.conf (note that Kbd is set to logitech-internet-pro, which is not a standard definition; it’s one I created myself. More of which anon.)

# Global configuration for hotkeys #

# These are the default values.
# A line starting with # is a comment.

### Specify the default keyboard (without the .def extension) so you
### don't need to specify -t every time

#using my own definition based on itouch
Kbd=logitech-internet-pro

Play=amarok --pause
WebBrowser=firefox

# osd_font=-arphic-ar pl kaitim big5-bold-i-normal--0-250-0-0-c-0-*-*
### For the color, you can either use the strings in /etc/X11/rgb.txt,
### or use the RGB syntax #RRGGBB, e.g. #A086FF
# osd_color=LawnGreen
# osd_timeout=3
### osd_position is either 'top' or 'bottom'
# osd_position=bottom
# osd_offset=25

The syntax is pretty obvious: when the WebBrowser key is pressed, the command firefox is executed. To test the config just run the command

hotkeys

You'll see a splash screen appear briefly and the application is now running. Press a key and you see an on screen display in green telling you what is happening, and the relevant action will be executed.

You'll notice I haven't mapped anything for mute, volume etc.; these all work with the defaults.

To get hotkeys to always be loaded when I am running KDE, I added a link to the hotkeys executable in the .kde/Autostart directory:-

cd ~/.kde/Autostart

ln -s /usr/bin/hotkeys hotkeys

The Play/Pause key executes "amarok --pause"; unfortunately, from the command line this is not a toggle, so I can press the Play/Pause key and it will pause Amarok, but a second press won't restart it. The command for that is "amarok --play", so I have mapped this to my Media key at the moment.

The "Media" key is not a standard hotkeys command name, but the keyboard definition files allow you to define commands to be executed directly in there. The "key" (pun intended) to the keyboard definition files is the keycodes your keyboard generates.

I started with the itouch.def file and found it worked for most of the keys. Using the excellent application "xev" I was able to discover what keycodes my keyboard was generating and create my own variant called logitech-internet-pro.def; the contents are reproduced below. I shall have to see if there is anywhere appropriate I can upload the file to make it available for others.

<?xml version="1.0"?>

<config model="Logitech Internet Pro">

<Play keycode="162"/>

<VolUp keycode="176" adj="2"/>
<VolDown keycode="174" adj="2"/>
<Mute keycode="160"/>

<WebBrowser keycode="178"/>
<Email keycode="236"/>
<Favorites keycode="230"/>

<!-- Feel free to customize this - the media key -->

<userdef keycode="237" command="amarok -p">Amarok</userdef>

</config>

<name>Simon Stanford</name>
<email>sjs at raetsel dot co dot uk</email>

My next task is to see if I can get the Play/Pause button to actually act as a toggle and for this I think I am going to need to use dcop to interrogate the state of amarok and/or pass it the appropriate command.

Linux – For all your hardware driver needs

I’ve recently bought myself a new desktop PC (more of which in another post, perhaps). The plan is to give my old desktop to my dad, and he uses Windows rather than Linux. (One day maybe I’ll convert him; I’m sure he’d be amenable to giving it a try.)
I wiped the partitions on my old desktop’s hard drive and booted from my XP install CD.

Unfortunately when it tried to install it said there were no hard drives detected. I thought maybe I needed to reset the SATA RAID JBOD but this made no difference. I booted from an Ubuntu live CD and the hard drive was detected no problem. It transpires that my ASUS motherboard with its ULi SATA RAID controller is not supported by the standard Windows install CD. You have to get the drivers on a floppy disk and press F6 during the install to be able to include the drivers off the floppy.

Now, as is the way with a lot of desktops, there is no floppy drive fitted; however, I managed to track down an old USB floppy drive. The next challenge was to find a floppy disk. This proved even harder, but I found one in the back of a drawer. So I put the drivers on the disk, booted the Windows CD and duly pressed F6.

All to no avail, as the USB floppy drive was not detected, and a check of the Microsoft knowledge base confirmed only a couple are supported.

I thought it was game over at this point, but then I came across a product called nLite that enables you to roll your own Windows installations and customise them, including adding in additional drivers. You can then create an ISO that you can burn to disc.

This got me running, and the Teletubby green hill is now showing nicely on my old PC.

Two things spring to mind about this.

Firstly, nLite is a cool little programme and is at least free as in beer.

Secondly, it shows the power of open source. Linux is able to detect my hard drive from the installation CD no problem, presumably because the community decided this driver was important enough to be included, or maybe because of the way the kernel works the standard drivers just work better with a wider range of hardware.

Either way score another one for the penguin.

Snmpd filling up /var/log/messages

Update May 2009: This post has generated lots of alternative ideas in the comments, so make sure you read through them to see what might work for your server.
At work we have a central monitoring system for servers called SolarWinds Orion Network Manager; this uses standard SNMP connections to servers to get their status, disk usage and CPU performance.
On my RHEL5 Linux servers the standard snmpd daemon works well with SolarWinds, but the monitoring server seems to make a lot of connections to the system, and each one gets logged via the syslog daemon to /var/log/messages, giving rise to lots of lines saying things like

snmpd[345435]: Connection from UDP: []:135

last message repeated 8 times

last message repeated 13 times

These are only informational messages saying a connection has been established. This is rather annoying when you are trying to read other things in /var/log/messages. The way to turn off these messages is to change the logging options of the snmpd daemon.

On Red Hat (and Ubuntu) the default logging options (the -L options) are:-

-Ls d

Meaning log to syslog using the daemon facility (see syslogd and syslog.conf for more information on what that means in detail; for now suffice it to say all these messages end up written to /var/log/messages).
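For reference, the stock syslog.conf rule that catches these daemon-facility messages typically looks like this (a common Red Hat default; your file may differ):

```
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
```

Anything at info priority or above from any facility (bar the excluded ones) lands in /var/log/messages, which is why the snmpd connection messages end up there.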

The man pages for snmpcmd (common to all net-snmp programmes) explain that you can set this to only log messages above a certain priority.

Using priorities 0-4 means emergency, alert, critical, error and warning messages are logged, but the lower-priority notice, info and debug messages are ignored.

The manual pages are not that clear, to me at least at first, hence this blog.

So if we change the -Ls d to the following, this will stop those messages but still allow important messages to get through:-

-LS 0-4 d

The capital S is crucial to the syntax.

So where and how do we set these options? Well, the snmpd daemon is started by a standard init script, /etc/init.d/snmpd.

In both RHEL5 and Ubuntu the scripts have some default options but also read in settings from a config file. In Ubuntu the relevant portion of the script is:-

SNMPDOPTS='-Lsd -Lf /dev/null -p /var/run/'
TRAPDOPTS='-Lsd -p /var/run/'
# Reads config file (will override defaults above)
[ -r /etc/default/snmpd ] && . /etc/default/snmpd

So this sets the variable SNMPDOPTS to the default value and then if the file /etc/default/snmpd is readable it “sources” the content of that file.
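You can see the mechanism in miniature; this sketch uses a file in /tmp rather than /etc/default/snmpd so it can be run as a normal user, but the sourcing step is identical:

```shell
# default value, as set near the top of the init script
SNMPDOPTS='-Lsd -Lf /dev/null'

# write an override file and source it, exactly as the init script does
cat > /tmp/snmpd.default <<'EOF'
SNMPDOPTS='-LS 0-4 d -Lf /dev/null'
EOF
[ -r /tmp/snmpd.default ] && . /tmp/snmpd.default

# the sourced assignment has replaced the default
echo "$SNMPDOPTS"
```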

Thus if /etc/default/snmpd contains the line

SNMPDOPTS='-LS 0-4 d -Lf /dev/null -p /var/run/'

Then stopping and starting the snmpd daemon will make it run with the new logging options we want.

sudo /etc/init.d/snmpd restart

In RHEL5 the equivalent file is /etc/snmp/snmpd.options and the equivalent variable is OPTIONS rather than SNMPDOPTS.
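So a RHEL5 /etc/snmp/snmpd.options would contain something like the line below (the -Lf and -p values here are assumptions based on the usual defaults; keep whatever your existing file has and just change the -L option):

```shell
OPTIONS="-LS 0-4 d -Lf /dev/null -p /var/run/snmpd.pid"
```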

Now, there could be security implications to not recording the IP address of every SNMP request on your server, in case some other system that shouldn’t be is connecting, but there are ways with community strings and other SNMP authentication options to reduce that risk.

All in all, I think the risk of missing an important message in /var/log/messages outweighs the risk of not logging the snmpd messages.

Hey look a whole post and I never mentioned FTP once :o)

Samba Shares, Spaces and fstab (With a bit of Octal thrown in)

It is a necessary evil at work that I have to get my laptop, which runs Kubuntu, to interact with the rest of the Windows systems at work. In order to show that Linux can hold its own, I’ve not asked for any special changes to be made to the way the Windows servers are set up; I just make Linux work with what the Windows PCs use.

The main area of interaction is the mounting of Samba shares to get at my network storage.

In general this is fine but I have found one little gotcha if you are using /etc/fstab to mount shares at boot up and the share names in question have spaces in them.

The problem is that spaces are delimiters in /etc/fstab, and trying to avoid getting the space interpreted by using quotes or backslashes won’t work with /etc/fstab.

The answer is to use the octal code for the ASCII value of the space character. (Wow, so much jargon in one short sentence.)

So first, here are two lines from an /etc/fstab for mounting two Windows shares. The Windows shares are on a server called nas001 and the share names are “Backup” and “My Documents”.

# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
//nas001/Backup /mnt/backup cifs credentials=/home/raetsel/creds 0 0
//nas001/My\040Documents /mnt/mydocs cifs credentials=/home/raetsel/creds 0 0

So, after the comments, the first line shows mounting a share without a space; the second line shows mounting a share with a space, where the space is replaced with \040.

So what’s with \040? Well, a backslash followed by a three-digit code is interpreted as the ASCII value of a character in octal (base 8).

In a Linux command shell type man ascii to see a list of ASCII codes and their octal, decimal and hexadecimal equivalents.

Space is decimal 32, which is octal 40 (but we need three digits for the interpretation to work, so it is 040).

In a similar vein, \134 is the octal code for a \ backslash, so if you want to have a domain\username pair in the options of the fstab line you could do it with username=mydomain\134raetsel.
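You can preview these escapes from the shell, because printf interprets the same \NNN octal sequences (this works with the bash builtin and GNU printf):

```shell
# \040 is the octal escape for a space, as used in the fstab line above
printf 'My\040Documents\n'

# \134 is a literal backslash, for domain\username pairs
printf 'mydomain\134raetsel\n'

# going the other way: print a character code as three-digit octal
printf '%03o\n' 32    # space is decimal 32, octal 040
```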

Lug Radio Live 2007 Part 3

Day Two………

Michael Sparks

Michael is a research engineer at the BBC and he gave a talk about the Kamaelia component framework. This is a framework and a set of tools for making concurrent programming, for things like content handling and scalable network services, easy.

The talk was pretty hardcore and covered a lot of technical detail, though as with many great ideas the core principle is simple: you build systems from components that have an inbox and an outbox, so you don’t have to worry about whether the generator or sink is ready; you just work on the components you need, essentially in isolation.

The presentation itself was given on an interactive whiteboard application that was built using Kamaelia. It’s a fascinating technology that could well end up being the glue that sticks together a whole swathe of distributed, media-based content systems.

Nat Friedman

Nat Friedman’s talk was very wide-ranging and, as he himself said, it was just a real grab bag of slides.

He started with the story of how the Oxford English Dictionary got started and how it was in essence an open source project: community based with citations solicited from anyone. There was a strong leader and then a team of lieutenants ( 26 editors one for each letter ). The work was divided in such a way as to allow people to work on small units (i.e. a single word) and contribute when they had a little time to spare.

The parallel with FLOSS was really striking.

Then Nat put up some stats to show the millions of lines of code that Novell has contributed back to the Flosscomm. (Lest we forget)

Nat also discussed the patent situation at some length, and I think he was surprised he was not given a hard time about it in the debate the day before. I guess he wasn’t aware how “nice” and polite a British audience will be. He discussed different aspects of intellectual property, namely copyright, trade mark, trade secret and patent.

The whole software patent situation in the US is pretty scary but also rather boring. Maybe I’ve been over exposed to it in the last few months on the various blogs and feeds I read.

Nat, however, certainly wasn’t boring and did give some interesting views on how in the end Microsoft will end up on the same side as us on patents, because they are more and more the target of claims themselves.

The Hour of Power

One hour of cool visual demos. There were a few technical hitches with getting laptops to talk to the AV kit but we got to see some cool stuff. There’s not much point in going into too much detail in this area as you had to see them to appreciate them so here are the runners and riders:-

Zahir from Fluendo : Istanbul screencast software and Elisa media centre

Neuro from Linden Labs : Second Life

Juski : MythTV

Alan Pope from Ubuntu : Ubuntu screen casts

Joe Shaw from Novell : Banshee Web UI (christened Webshee by an audience member). Written as part of Novell Hack Week.

The final hour of the day I saw two of the lightning talks upstairs ( prior to that I had been a total Main Stage whore).

Michael Barker

Michael spoke about the groupware application Meldware Buni, which is written in Java, aims to be cross-platform and ultimately rivals Microsoft Exchange.

The cool thing about the Buni Meldware software is that it treats email and calendar information like any other sort of data and stores it in a relational database.

The talk covered the ideas behind the project and in particular its use of Java. The old days of Java being slow, it seems, are over.

Peter Stean

Peter gave an interesting talk about the UK government’s digital challenge competition, which encouraged local authorities to find ways of bringing the benefits of the information age to people who are socially excluded and at risk of being left further and further behind.

Unfortunately he wasn’t able to provide details of how much open source software was involved in this though he knew some was.

He spoke about some interesting projects to make digital set top boxes much more interactive, so they could be used as a medium for providing services to local authority service users electronically without the need for a full-blown PC and Internet connection. Not only would this be cheaper, but it would also mean the user interface and learning curve would be less daunting than that of a traditional PC.

So that was Lug Radio Live 2007. I had a great time, and a great big thank you to all the people involved in making it happen.

Lug Radio Live 2007 Part 2

The Mass Debate

This was an open Q&A session chaired by Jono with four open source luminaries, or rather three open source luminaries and one guy from Microsoft. The panel members were Chris DiBona of Google, Nat Friedman of Novell, Becky Hogge from the Open Rights Group and Mr X from Microsoft. Unfortunately I did not catch the Microsoft guy’s name, but he described himself as an evangelist, which was kind of ironic given that Alan Cox’s talk earlier had noted how open source people used to shun the word marketing and talk about being evangelists, but now that big corporations were catching on it was becoming a dirty word.

As you might expect, the guy from Microsoft got quite a “shoeing”, particularly in relation to the Open XML standard, and he played it quite well. When the question (or sometimes the speech masquerading as a question) was largely rhetorical, he would just smile and not comment.

Other topics included the panel’s views on the BBC’s decision to provide the iPlayer, initially at least, for no platform other than Windows. On the whole they saw it as a poor decision, but all said the way to get something done was to engage in dialogue with the organisation rather than attempting some sort of boycott. An interesting comment from Nat Friedman was that these large corporations are not monoliths, and there may well be people inside the organisation whose sympathies lie closer to your position. You should seek these people out and enter into a dialogue with them.

Nat Friedman also made an interesting comment about the future of the Linux desktop: as more applications move to the web, the underlying platform in a sense becomes less important, and this could be the point where Linux starts to gain real ground in the market, which has a certain irony to it. However, he also suggested this could mean the Linux desktop heads in some new direction, providing a user experience that the web can’t provide.

On the whole it was a reasoned and fun debate.

Chris DiBona

The final session of the day was from Chris, who is Google’s code manager, responsible for licence compliance, releasing Google code and the Summer of Code.

After some entertaining slides about the hardware Google has used over the years, he went on to explain that the main reason Google uses FLOSS is that they always want to be masters of their own destiny and not beholden to any other software provider.

He finished by explaining how the Summer of Code works from both the students’ side and the projects’ side.

Chris is a really entertaining speaker and clearly very knowledgeable across a range of FLOSS issues.

Lug Radio Live 2007 Part 1

Well, just under a year ago I wrote my first blog post, which was a review of LRL 2006. How time flies.

So today was the first day of Lug Radio Live 2007 held at the Lighthouse Media Centre in Wolverhampton. Here’s a review of what I’ve seen and heard today.

The Venue

A quick word about the venue. It’s the first time I’ve been to the Lighthouse Media Centre and I really like it. The central glass-roofed atrium had a great light and airy feel to it, helped by the unseasonably good weather: clear skies and sunshine.

The main stage is in fact the cinema so there are plenty of seats and they are nice and comfy. As you would hope the Audio/Visual set up was first class.

The whole effect made it feel like a really homely community event, without the soulless institutional feel you can get from some places, while still providing a high quality experience.

Ted Haeger

Ted Haeger gave a talk about the new start-up company he has joined, Bungee Labs, which aims to provide a software-as-a-service offering for developers. The basic concept is a browser-based development environment that offers a full set of developer services, from IDE to source control and deployment, with tools to help you link in easily to other WSDL-based services such as those offered by Google Maps and Flickr.

The concept is that development is free and you then pay on an as-used basis for the final deployment of production software. It’s an ambitious project, and the company is pretty much betting the farm on the belief that software as a service is the way of the future and that, for small developers, this will be the way to go.

Ted was as entertaining and as American as ever.

Alan Cox

Alan‘s talk, entitled “but I don’t write code”, looked at the different ways people can contribute to open source software even if they are not developers. It drew comparisons with the functions that proprietary software houses have and how open source pretty much needs most of them as well.

The big four were probably testing, translation, marketing and documentation.

Of these, documentation seems to be something of an intractable problem but then this is true of the proprietary world and something the IT industry as a whole has been struggling with for as long as I have been involved in it (some 20 years now).

To draw an analogy with addictive behaviours, I’d say we are past the denial stage and can at least admit that documentation is a problem. Mind you, I did nearly buy a mug at one of the exhibitor stands that said “Document my code? There’s a reason they call it code”.

Matthew Garrett

After lunch I was keen to hear Matthew speak about the latest position of laptop support on Linux but I almost didn’t recognize the clean cut, bright eyed individual standing at the lectern. Last year his talk was the first slot on the Sunday and he admitted to being very hung over.

Matthew looked at the reasons why people have laptops and what that means they want from an operating system. This comes down to portability (and hence battery life), external monitors (for presentations or when in use at the office or home) and connectivity (which means wireless).

Matthew spoke at some length about the different areas that can affect power consumption and how some things are more effective than others. For example halving the speed of a CPU halves the power usage but halving the voltage quarters the power usage.
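Those frequency and voltage figures follow from the usual approximation for dynamic CPU power, P ≈ C·V²·f: power scales linearly with clock frequency but with the square of the core voltage. A minimal sketch of that relationship (the capacitance, voltage and frequency values are illustrative, not figures from the talk):

```python
# Dynamic CPU power is commonly approximated as P = C * V^2 * f,
# where C is the effective switched capacitance, V the core voltage
# and f the clock frequency.
def dynamic_power(c, voltage, freq):
    """Approximate dynamic power draw of a CPU in watts."""
    return c * voltage ** 2 * freq

base = dynamic_power(1.0, 1.2, 2.0e9)       # baseline: 1.2 V at 2 GHz
half_freq = dynamic_power(1.0, 1.2, 1.0e9)  # halve the clock speed
half_volt = dynamic_power(1.0, 0.6, 2.0e9)  # halve the core voltage

print(half_freq / base)  # 0.5: halving frequency halves power
print(half_volt / base)  # ~0.25: halving voltage quarters power
```

In practice voltage and frequency are lowered together (dynamic voltage and frequency scaling), which is why voltage reduction is the bigger win.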

It comes down to how many watts of power your laptop uses, especially when it is theoretically idle, and tools like Powertop seem to go a long way towards helping people discover that their applications or drivers are a lot more insistent that the CPU wake up than they need to be.

The problems with the use of external monitors, suspend/resume and wireless ultimately come down to the fact that hardware manufacturers are not as forthcoming with assistance for Linux kernel hackers as the hackers would like, but Matthew seemed to think things were getting better.

The use of standards like the d80211 stack for wifi means that in the long run things will improve by leaps and bounds, though it may be a couple of steps back first as existing support is re-written (and temporarily broken) to fit the standards.

In summary, to paraphrase (and Bowdlerise) Matthew: “Linux on laptops is a bit crap, but it’s less crap than it was 18 months ago”.

Still to come on day one are “The Mass Debate” and Chris DiBona of Google, but it’s past my bedtime and I need to check a few details and get people’s names right before I post, so watch this space.