Running the Puppet Learning VM on Mac OS X

This post describes how to get the Puppet Learning VM running on a Mac OS X system. It uses Parallels as the VM host (for reasons which will become apparent).

Puppet is a popular infrastructure automation tool and the learning environment they provide can be downloaded from the Puppet website.

VirtualBox Fail (Oh no it didn’t)

The recommendation for the VM download (an OVA archive) is to use either VMware or VirtualBox as the host. As I have a Mac, the VMware product is VMware Fusion, which is not free. VirtualBox is free for personal use so I decided to use that.

I imported the OVA into VirtualBox (version 5) but found that when I started the VM it threw errors about not finding the SCSI disk. I played around with different hardware configs in the VirtualBox settings but it didn’t seem to make any difference.

UPDATE: I emailed the Puppet Learning Team to let them know about my issues and they asked me to gather some diagnostics from the problem. However, wouldn’t you know it, I re-ran the import and it all worked fine in VirtualBox. Looking into it, I think running the VM with 2 CPUs on my 2-core iMac was just a bit too much of a strain, so it was losing CPU cycles and lost its connection with the virtual disk.

As I normally use Parallels for VM hosting on my Mac I decided to see if there was a way to import the Puppet Learning VM into Parallels.

Parallels isn’t free either, but as I have already paid for it and use it to run other systems, it made sense for me to try it once VirtualBox failed.

Converting OVA files into Parallels

There is a very handy Knowledge Base article on the Parallels site covering how to convert OVA files to VMX files for Parallels to then convert.

Following that KB article as a guide, I first downloaded the OVF Tool from the VMware site (you’ll need to register for an account on the VMware site, but it is free).

Run the installer for the OVF Tool and you are then ready to create the VMX and VMDK files from the OVA archive you previously downloaded and unzipped.

Open a Terminal session and change directory to where the OVA file is, then run the following command:

/Applications/VMware\ OVF\ Tool/ovftool --lax puppet-2015.2.0-learning-2.30.ova puppet.vmx
Opening OVA source: puppet-2015.2.0-learning-2.30.ova
The manifest validates
Opening VMX target: puppet.vmx
- Hardware compatibility check is disabled.
Writing VMX file: puppet.vmx
Transfer Completed
- No manifest entry found for: 'puppet-2015.2.0-learning-2.30.ovf'.
- File is missing from the manifest: 'puppet-2015.2.0-learning-2.30.ovf'.
Completed successfully

Then launch Parallels Desktop, go to File -> Open and choose the puppet.vmx file. A message comes up saying it needs to convert the file. (Click the pic to embiggen)

Convert Puppet

From here click Convert and then choose the location where you want to store the converted VM.

You will see a warning like the one below saying Parallels cannot determine the VM guest OS, but you can ignore that and just continue.

Convert Puppet 2

The conversion process takes a few minutes and at the end you will be asked if you want to start the VM to complete the conversion, i.e. install Parallels Tools.

Convert Puppet 3

Click No here as you want to change some settings on the network card before starting the VM.

Then choose Actions->Configure from the Puppet VM window (or click on the Gear in the top right, or go to the Parallels Desktop Control Center (sic) and click the gear there).

This will bring up the hardware config window for the VM. Confirm it has 2 CPUs and 2048 MB of memory, then click the Network tab and change the network card to be “virtio” and the network type to be Bridged to the default adapter (or choose a specific adapter if you know what you need for your Mac). Finally click the Generate button next to the MAC address and generate a new one, just to be on the safe side.

Convert Puppet 4

You can now start your VM and it should pick up its own IP address via DHCP, using the same network settings as your Mac.

When the VM has started it will display the IP address it has and you can use this in a browser to access the first quest. You can also ssh into it from another terminal session on your Mac.

Convert Puppet 5

If there is no IP address shown after the http:// on the screen then double-check the network settings in Parallels for the VM, as it means the VM hasn’t acquired an IP address. (You’ll need to shut down the VM to change most settings.)

At this point you could (and maybe should) install Parallels Tools; however, as I don’t want to mess with the VM, I have left it until I feel it really needs them.


Setting up a Red Hat / CentOS 7 yum repository with vsftpd, firewalld and SELinux


This post describes how to set up your Red Hat or CentOS 7 server to be a yum repository, both for the local server and for other servers on the network via ftp using vsftpd. It uses the distro ISO as the source for the packages.

You need to be the root superuser to set this up.

These instructions create a local repo first and then, using that, install vsftpd and set up a remote repo available via ftp.

Mount the ISO

Create a mount point and mount the iso image using a loopback mount.

# mkdir /mnt/iso
# mount -t iso9660 -o loop,ro rhel-server-7.1-x86_64-dvd.iso /mnt/iso
# df /mnt/iso
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop0 3798292 3798292 0 100% /mnt/iso

Create the repo directory and copy the packages to it.

mkdir -p /var/yum/repos.d/rhel7
cp -rpv /mnt/iso/Packages/ /var/yum/repos.d/rhel7

The cp command will take a while so the -v flag will show what it is doing.

Note: Instead of creating the repo in /var/yum/repos.d you could create it directly in the public ftp directory; see the steps for vsftpd. However that assumes you can already install vsftpd from somewhere and are happy to have the files directly in /var/ftp/pub. See the note in the section on configuring vsftpd.

Create the local repo with the createrepo command

# createrepo /var/yum/repos.d/rhel7
Spawning worker 0 with 1093 pkgs
Spawning worker 1 with 1093 pkgs
Spawning worker 2 with 1093 pkgs
Spawning worker 3 with 1092 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Again this will take a few minutes as it analyses all the packages.

If you don’t have the createrepo command installed then you can install it with yum (if you currently have access to a remote repo on the internet), or you can install the rpm from the Packages directory you just copied.

yum install createrepo


# cd /var/yum/repos.d/rhel7/Packages/
# ls createrepo*
# rpm -ivh createrepo-0.9.9-23.el7.noarch.rpm
Preparing... ################################# [100%]
Updating / installing...
1:createrepo-0.9.9-23.el7 ################################# [100%]

Set up your local repository

Now you have the repo created you can use it on the local system by setting up a repo conf file for it. Use your editor of choice (which is vi of course) to create the file:

# vi /etc/yum.repos.d/rhel7.repo

[rhel7]
name=Repo of installation iso packages
baseurl=file:///var/yum/repos.d/rhel7/
enabled=1
gpgcheck=0

Note the three /s in the file URI. gpgcheck is set to zero so that it will not look for signatures.

Confirm the repo is now available locally.

# yum clean all
Loaded plugins: langpacks, product-id, subscription-manager
Cleaning repos: rhel7
Cleaning up everything
# yum repolist enabled
Loaded plugins: langpacks, product-id, subscription-manager
rhel7 | 2.9 kB 00:00:00
rhel7/primary_db | 3.4 MB 00:00:00
repo id repo name status
rhel7 Repo of installation iso packages 4,371
repolist: 4,371

Install and configure vsftpd

Now the repo is available you can install vsftpd with yum. Then set the service to start automatically and allow it to operate through your firewall if one is running.

# yum install vsftpd
# systemctl start vsftpd
# systemctl status vsftpd
vsftpd.service - Vsftpd ftp daemon
Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled)
Active: active (running) since Sat 2015-09-05 14:14:58 BST; 14s ago
Process: 17389 ExecStart=/usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf (code=exited, status=0/SUCCESS)
Main PID: 17390 (vsftpd)
CGroup: /system.slice/vsftpd.service
└─17390 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf

Sep 05 14:14:58 systemd[1]: Starting Vsftpd ftp daemon...
Sep 05 14:14:58 systemd[1]: Started Vsftpd ftp daemon.
# systemctl enable vsftpd
ln -s '/usr/lib/systemd/system/vsftpd.service' '/etc/systemd/system/multi-user.target.wants/vsftpd.service'

We are going to use the default anonymous ftp configuration, so the repo needs to be made available via /var/ftp/pub. You could have installed the packages into that directory directly, but these instructions assume you have it set up elsewhere and want to be able to “link” it to /var/ftp/pub. You can’t use a symbolic link as vsftpd specifically disallows following links out of the chroot dir of the ftp user. So instead you can bind mount it locally.

Before all that though, we have to test vsftpd is working and set up the firewall rules if applicable.

# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Sat 2015-09-05 11:10:16 BST; 3h 12min ago
Main PID: 12625 (firewalld)
CGroup: /system.slice/firewalld.service
└─12625 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

If you are not using a firewall then you can skip the commands below that allow the ftp service.

# firewall-cmd --get-default-zone
# firewall-cmd --query-service=ftp
# firewall-cmd --query-service=ftp --permanent

If the service is not allowed then add it both in the runtime config and the permanent config.

# firewall-cmd --add-service=ftp
# firewall-cmd --add-service=ftp --permanent
# firewall-cmd --query-service=ftp
# firewall-cmd --query-service=ftp --permanent

You can now test vsftpd by going to a remote server and using an ftp client to log in anonymously. (You can also test it locally.) If you don’t have an ftp client you can install a basic command line one using

yum install ftp

You should be able to log in and see the root directory (which is chrooted to /var/ftp/ by default).

# ftp <server>
Connected to <server> (<ip>).
220 (vsFTPd 3.0.2)
Name (<server>:<user>): anonymous
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> pwd
257 "/"
ftp> ls
227 Entering Passive Mode (192,168,0,16,201,135).
150 Here comes the directory listing.
drwxr-xr-x 3 0 0 18 Sep 05 13:41 pub
226 Directory send OK.
ftp> quit
221 Goodbye.

Now we need to create a directory for the repository to be mounted on and do a bind mount of the local repo.

# mkdir /var/ftp/pub/rhel7
# mount --bind /var/yum/repos.d/rhel7/ /var/ftp/pub/rhel7/
# ls -l /var/ftp/pub/rhel7/
total 300
dr-xr-xr-x. 2 root root 229376 Feb 19 2015 Packages
drwxr-xr-x. 2 root root 4096 Sep 5 13:40 repodata

This only mounts the directory temporarily, so we need to umount it, add an entry to /etc/fstab and check it mounts from there.

# umount /var/ftp/pub/rhel7/
# vi /etc/fstab

Append the following line

/var/yum/repos.d/rhel7/ /var/ftp/pub/rhel7/ none defaults,bind 0 0
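For reference, the fields in that line are the source directory, the mount point, the filesystem type (ignored for a bind mount, hence none), the mount options, and the dump and fsck pass flags:

```
# <source>               <mount point>        <type> <options>      <dump> <pass>
/var/yum/repos.d/rhel7/  /var/ftp/pub/rhel7/  none   defaults,bind  0      0
```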

Save the file and try the mount

# mount /var/ftp/pub/rhel7/
# ls -l /var/ftp/pub/rhel7/
total 300
dr-xr-xr-x. 2 root root 229376 Feb 19 2015 Packages
drwxr-xr-x. 2 root root 4096 Sep 5 13:40 repodata

Now at this point the only thing stopping ftp from accessing these files is SELinux, if you have it running in enforcing mode.

Check to see if it is in enforcing mode and what the contexts are for /var/ftp/pub and /var/yum/repos.d/rhel7/.

# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 28
# ls -lZ /var/ftp/
drwxr-xr-x. root root system_u:object_r:public_content_t:s0 pub
# ls -lZ /var/ftp/pub/rhel7/
dr-xr-xr-x. root root unconfined_u:object_r:var_t:s0 Packages
drwxr-xr-x. root root unconfined_u:object_r:var_t:s0 repodata

Change the context type of the rhel7 dir and all its contents to be publicly readable:-

# chcon -R -t public_content_t /var/ftp/pub/rhel7/
# ls -lZ /var/ftp/pub/rhel7/
dr-xr-xr-x. root root unconfined_u:object_r:public_content_t:s0 Packages
drwxr-xr-x. root root unconfined_u:object_r:public_content_t:s0 repodata
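One caveat worth noting: chcon changes only last until the next full filesystem relabel. To make the context permanent you can record it in the local policy instead; this is a sketch assuming RHEL 7, where semanage comes from the policycoreutils-python package:

```shell
# Record a persistent context rule, then apply it with restorecon.
# Guarded so it is a no-op on systems without the SELinux tools installed.
if command -v semanage >/dev/null 2>&1; then
    semanage fcontext -a -t public_content_t "/var/yum/repos.d/rhel7(/.*)?"
    restorecon -Rv /var/yum/repos.d/rhel7/
fi
```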

Now when I connect with anonymous ftp I can see the contents of the directories.

ftp> pwd
257 "/"
ftp> ls
227 Entering Passive Mode (192,168,0,16,201,135).
150 Here comes the directory listing.
drwxr-xr-x 3 0 0 18 Sep 05 13:41 pub
226 Directory send OK.
ftp> cd pub
250 Directory successfully changed.
ftp> ls
227 Entering Passive Mode (192,168,0,16,131,200).
150 Here comes the directory listing.
drwxr-xr-x 4 0 0 36 Sep 05 12:40 rhel7
226 Directory send OK.
ftp> ls rhel7
227 Entering Passive Mode (192,168,0,16,134,133).
150 Here comes the directory listing.
dr-xr-xr-x 2 0 0 229376 Feb 19 2015 Packages
drwxr-xr-x 2 0 0 4096 Sep 05 12:40 repodata
226 Directory send OK.
ftp> quit
221 Goodbye.

The final step is to log on to the remote client that wants to use this repo and set up the repo conf file.

# vi /etc/yum.repos.d/remote.repo

[remote]
name=Remote Repo from fitpc4
baseurl=ftp://fitpc4/pub/rhel7/
enabled=1
gpgcheck=0

Now you can install from this remote repo, e.g.

# yum clean all
Loaded plugins: langpacks, product-id
Cleaning repos: remote
Cleaning up everything
# yum repolist
Loaded plugins: langpacks, product-id
remote | 2.9 kB 00:00:00
remote/primary_db | 3.4 MB 00:00:01
repo id repo name status
remote Remote Repo from fitpc4 4,371
repolist: 4,371
# yum install ftp
Loaded plugins: langpacks, product-id
Resolving Dependencies
--> Running transaction check
---> Package ftp.x86_64 0:0.17-66.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Package Arch Version Repository Size
ftp x86_64 0.17-66.el7 remote 61 k

Transaction Summary
Install 1 Package

Total download size: 61 k
Installed size: 96 k
Is this ok [y/d/N]: y
Downloading packages:
ftp-0.17-66.el7.x86_64.rpm | 61 kB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ftp-0.17-66.el7.x86_64 1/1
Verifying : ftp-0.17-66.el7.x86_64 1/1

Installed:
  ftp.x86_64 0:0.17-66.el7

Complete!

If you have problems, check the SELinux logs in /var/log/audit.
If you get really stuck, try temporarily disabling the firewall and see if that helps. Similarly, try temporarily putting SELinux into permissive mode (setenforce 0 takes effect immediately; only fully disabling SELinux requires a reboot).
See the references below for how to do those things.
These measures should only be temporary, to let you diagnose where the issue is.


I used lots of sources whilst I was trying to set this up. None of them quite covered all the steps (hence writing this blog post to put it all in one place for RHEL 7) but the ones below helped a lot.

FTP & SE Linux
SELinux and vsftpd on CENTOS
Creating a repo using vsftpd
Mount –bind and fstab
Disable firewall
SELinux Permissive Mode

Embedded codes in the echo command (\c etc.)

Following on from a post a long time ago about migrating from HP-UX to RHEL, I have found another “gotcha” in relation to the use of the echo command.

In HP-UX you can use terminal codes like \c to continue on the same line and \n for a new line, etc.

In RHEL these codes are not interpreted by default. To get them interpreted you need to use echo -e.

Here is a before and after:

echo "Here is\t some\b \n\nembedded code"
Here is\t some\b \n\nembedded code
echo -e "Here is\t some\b \n\nembedded code"
Here is  som

embedded code
If you only use \c codes you could consider removing them and just using echo -n to suppress the newline, but I think echo -e is more flexible.
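A portable alternative is printf, which interprets these escapes on every system without needing any flags. A quick sketch (the strings here are just examples):

```shell
# printf always interprets backslash escapes, unlike echo, whose
# behaviour differs between HP-UX, bash and dash.
printf 'Enter name: '             # stays on the same line, like \c on HP-UX
printf 'line one\nline two\n'     # \n gives explicit newlines
```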

Script Naming for run-parts for /etc/cron.daily

I recently added a new script to the directory /etc/cron.daily so it would be run once a day along with the other scripts in there, but for some reason it wasn’t being run. After much messing about I discovered it was because the name of the script had a . in it.

Specifically the script had a .sh extension, as is common with shell scripts. I changed the name to be just get_xferlog and that is now working ok.

The files in /etc/cron.daily are executed as part of an entry in /etc/crontab:-

25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts -v /etc/cron.daily )

So it uses the default settings of run-parts to execute all scripts in the directory /etc/cron.daily, and the man page for run-parts says:

If neither the --lsbsysinit option nor the --regex option is given then the names must consist entirely of upper and lower case letters, digits, underscores, and hyphens.
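You can check a filename against that rule yourself with grep, using the same character class the man page describes (the names below are just examples; the dotted one is the kind that gets silently skipped):

```shell
# A name is only run if it matches this pattern; a dot fails the match.
valid='^[A-Za-z0-9_-]+$'
for name in get_xferlog get_xferlog.sh 99-logrotate; do
    if echo "$name" | grep -Eq "$valid"; then
        echo "$name: would run"
    else
        echo "$name: skipped"
    fi
done
```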

If you want to check what run-parts will do, you can use the --test flag which just lists the scripts that would be executed without actually executing them. Thus:

run-parts -v --test /etc/cron.daily

This proved very handy in debugging the issue I was having without having to wait a day between tests for the cron job to run.

I should add that the above applies to Ubuntu-based servers; Red Hat servers using run-parts don’t seem to care about a dot in the filename.

Mac OS X Snow Leopard & Cisco AnyConnect VPN

I’ve just upgraded to Mac OS X version 10.6 Snow Leopard. The results are impressive with increases in speed and reduction in memory usage.

However, to connect to work I use the Cisco AnyConnect VPN and that wouldn’t run; it just instantly quit.

I uninstalled it using the uninstaller in the Applications folder and then connected to our web portal at work via Safari and downloaded the AnyConnect installer. This installed fine and I can now run the local application like I used to.

The version of AnyConnect we run is 2.3.0185.

Some Logic

Recently on #logic via twitter Pete Lewis asked:-

a || b -> !(!a || !b) ? #logic makes my brain hurt

I did a quick truth table on paper and said yes, but it turns out I was wrong as I had looked at the wrong columns. Doh! Here’s the truth table so you can see for yourself:


A  B  | A || B | !A  !B | !A || !B | !(!A || !B)
T  T  |   T    |  F   F |    F     |     T
T  F  |   T    |  F   T |    T     |     F
F  T  |   T    |  T   F |    T     |     F
F  F  |   F    |  T   T |    T     |     F

The columns A || B and !(!A || !B) are not equivalent.
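If you’d rather let the machine do the table, shell arithmetic (0 = false, 1 = true) can brute-force all four cases; !(!A || !B) simplifies to A && B by De Morgan’s law, which is why it differs from A || B:

```shell
# Evaluate both expressions for every combination of A and B.
for a in 0 1; do
    for b in 0 1; do
        echo "A=$a B=$b  A||B=$(( a || b ))  !(!A||!B)=$(( !(!a || !b) ))"
    done
done
```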

Kubuntu Hardy Heron Upgrade

On Friday afternoon I upgraded my desktop PC to the KDE4 version of Kubuntu Hardy Heron from the KDE 3.5 version of Gutsy Gibbon.

Overall I have to say the process was very smooth and by far the most trouble free upgrade I have done. There were one or two funnies and these are outlined below.

Note: I decided to upgrade by downloading the alternate CD images and doing a cdromupgrade rather than doing an upgrade over the ‘net. I did this as I thought the Kubuntu sites might be a bit busy still, it being only one day after Hardy was released.

Overall the process took just 40 minutes including one false start.

Allow upgrades from the network hung

One of the options at the start of the CD-ROM upgrade is to allow the system to connect to the ‘net to get the latest downloads. I decided to allow this, figuring there wouldn’t be many updates to get. However, maybe because the site was busy, the upgrade just seemed to hang, so after 10 minutes I cancelled and restarted it and chose not to get the upgrades from the net.

This restart initially hung with an error saying it could not get the lock file:


This was because the aborted upgrade had left the lock file behind. I deleted this file with

sudo rm /var/lib/apt/lists/lock

The upgrade process then started itself automatically without me having to go back out.

Remove the CD before rebooting

At the end of the upgrade the system says it is going to reboot once you press OK. However I didn’t notice any warning to remove the CD before doing this. As my system is set to boot from CD-ROM first, the result was that my system started the live CD on reboot and asked me to select a language.

I ejected the CD and rebooted my machine and all was fine.

KDE4 Not Installed when upgrading from KDE 3.5

After the upgrade was complete and the login screen came up I checked the available sessions and only KDE was listed. There was no option for KDE4 so I thought maybe it would automatically log in to KDE4 and there was no KDE 3.5 option.

However when I logged in all I saw was the KDE 3.5.9 desktop (upgraded from 3.5.8).

Thinking about this, it kind of makes sense. Although I was using the KDE4 CD the system is an upgrade and since I’ve never had KDE4 on this machine before it just upgraded what was there.

I was able to easily solve the problem by using Adept to install the package kubuntu-kde4-desktop; from the command line the same can be achieved with:

sudo apt-get install kubuntu-kde4-desktop

The upgrade took about 10 minutes and interestingly used the alternate CD (which I had re-inserted after the reboot). I was fully expecting it to start pulling down the packages from the ‘net but it didn’t.

This just leaves me with the lingering doubt that I’m not going to get updates for KDE4 over the ‘net. I need to check my sources.list to see if there is anything else I should be adding in there to get the KDE4 updates.

The installation of kubuntu-kde4-desktop asked me what login manager I wanted to use, KDM or KDE4-KDM. I chose the KDE4-KDM version.

Once the kubuntu-kde4-desktop package was installed I logged out and back in again and under the options for sessions I had KDE and KDE4.

Choosing KDE4 did exactly what it says on the tin.

No Sound

In both KDE 3.5.9 and KDE4 initially I had no sound at all. After a couple of dead ends with installing the pulse audio server the problem turned out to be the channel to my speakers was muted in kmix.

I had to choose Kmix from the Multimedia menu and then click on the speaker icon that appeared in the status bar and choose “mixer” to bring up the full mixer panel. For some reason there were two “Front” channels showing and one of them was muted. Un-muting this gave me my sound back. (Click the pic below to embiggen)

The second Front channel was initially muted

No Sound in Firefox for Realplayer plugin

Although sound was now working in KDE4 in general, in Firefox the BBC Radio Player was going through the motions of playing but not producing any sound using RealPlayer. This turned out to be because the plugins directory had changed for Firefox 3 and I had to copy in the relevant plugins from /usr/lib/mozilla/plugins to /usr/lib/firefox-3.0b5/plugins

sudo cp /usr/lib/mozilla/plugins/nphelix* /usr/lib/firefox-3.0b5/plugins

NB: This is a bit of a sloppy way of doing this; I should really use soft links to the original plugin files rather than making a copy. Also, it should be possible to set this up in the .mozilla directory in your home directory rather than the global /usr/lib.
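The soft-link version would look something like this, using the same paths as the copy above (firefox-3.0b5 was the beta directory at the time; the guard makes it a no-op if those directories don’t exist):

```shell
# Link each RealPlayer (helix) plugin instead of copying it, so future
# updates to the originals are picked up automatically.
srcdir=/usr/lib/mozilla/plugins
dstdir=/usr/lib/firefox-3.0b5/plugins
if [ -d "$srcdir" ] && [ -d "$dstdir" ]; then
    for p in "$srcdir"/nphelix*; do
        if [ -e "$p" ]; then
            ln -s "$p" "$dstdir/$(basename "$p")"
        fi
    done
fi
```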

VirtualBox

I use VirtualBox to run an XP virtual machine for connecting to the VPN and work. When I fired this up after the upgrade I got an error message about the VirtualBox kernel drivers not being loaded. The new version of the main Linux kernel was the reason.

Cleverly the error message told you exactly what to do, run “/etc/init.d/vboxdrv setup” as root, so for Kubuntu this just meant:-

sudo /etc/init.d/vboxdrv setup

I really like VirtualBox and much prefer it to VMWare server. The way it handled this error message just confirms it’s the best choice for me for running a VM.

Hotkeys not loaded by KDE Autostart

The hotkeys application I use to set up my multimedia keys was not loaded when I logged in to KDE4. This was because the Autostart directory for KDE4 is in a different place to the KDE 3.5 one.

In KDE 3.5 it is ~/.kde/Autostart but for KDE4 it is ~/.kde4/Autostart

So all I had to do was recreate my soft links:

cd ~/.kde4/Autostart

ln -s /usr/bin/hotkeys hotkeys

I am not sure if .kde4 is the official directory for KDE4 files or if this has been set up by Kubuntu because they allow you to run both KDE 3.5 and KDE4.

Skype Not Working

This is the only issue I have yet to resolve. After the upgrade Skype was completely uninstalled. I tried installing it with apt-get but this gave an error saying there was no valid install candidate.

I still had the .deb package I had downloaded from the Skype website so I just re-installed it using dpkg -i.

This gave me Skype back on the menu and it ran ok but whenever I try to make a call it just fails.

I suspect this might be something to do with the sound system and the fact that in fixing my lack of volume I installed the pulse audio server.

I will try un-installing pulse audio and see if it makes any difference. Though I would like to use pulse audio to see what it is like and what all the fuss is about.