Named routes in Laravel 4

Saturday, January 26, 2013

I've been doing some work with PHP & Laravel 4, and had an idea to make naming routes a bit nicer syntactically. Currently, to name a route you wrap the 2nd parameter in an array and add an item with an 'as' key, as in:

Route::get('user/profile', array('as' => 'profile', function(){// code here..}));

I think it'd be more natural to set the name using a method on the route (similar to adding a where() condition), something like:

Route::get('user/profile', function(){// code here..})
    ->named('profile');

After a bit of poking around, I found this could be done fairly easily with a couple small additions to Route and Router to allow for renaming a route:

--- a/vendor/laravel/framework/src/Illuminate/Routing/Route.php Fri Jan 25 15:35:04 2013 -0600
+++ b/vendor/laravel/framework/src/Illuminate/Routing/Route.php Sat Jan 26 16:37:08 2013 -0600
@@ -411,4 +411,16 @@
                return $this;
        }

-}
+    /**
+     * Change the name for this route.
+     *
+     * @param string $name New Name
+     * @return Illuminate\Routing\Route
+     */
+    public function named($name)
+    {
+        $this->router->rename($this, $name);
+
+        return $this;
+    }
+}
\ No newline at end of file
diff -r 50adf81e2f0f vendor/laravel/framework/src/Illuminate/Routing/Router.php
--- a/vendor/laravel/framework/src/Illuminate/Routing/Router.php        Fri Jan 25 15:35:04 2013 -0600
+++ b/vendor/laravel/framework/src/Illuminate/Routing/Router.php        Sat Jan 26 16:37:08 2013 -0600
@@ -1159,4 +1159,21 @@
                $this->container = $container;
        }

+    /**
+     * Change the name of a route
+     *
+     * @param $route
+     * @param $name
+     * @return void
+     */
+    public function rename($route, $name)
+    {
+        foreach($this->routes->getIterator() as $n => $r) {
+            if ($r === $route) {
+                $this->routes->remove($n);
+                $this->routes->add($name, $r);
+                return;
+            }
+        }
+    }
 }

It's not the most efficient thing in the world to be iterating over all the previously-added routes. A small optimization would be to check the last-added route first, since in the example usage that's the one you're renaming. But doing that would also require patching Symfony's RouteCollection class.

Integrated Windows Authentication with Apache on a Linux box

Sunday, April 15, 2012

Integrated Windows Authentication (IWA) is a useful feature for intranets, where a web browser on a Windows client joined to Active Directory (AD) can seamlessly pass authentication information to a web server - without needing to prompt the user for a password. It's supported by IE, Firefox and Chrome on the client side, and naturally by IIS on the server side. With just a little bit of effort, it can also be supported by Apache on a Linux or other Unix-type OS, and I'll take a look at doing that here.

IWA is a generic term that covers a few different protocols. One is the older NTLM authentication, which can be set up on a Linux server with mod_auth_ntlm_winbind, but that's awfully clunky and requires setting up and running Samba's winbindd daemon. Another is Kerberos, which is fairly well supported in most Linux/Unix distros, and is an integral part of Active Directory.

There are a lot of writeups on integrating a Linux box with AD, but most of them get very complicated, trying to integrate everything (login, file sharing, group mapping, etc.) and relying deeply on Samba. I'm going to focus on just the one simple task of HTTP authentication in Apache, without Samba, and be as explicit as possible about what needs to be done on both the Linux and Windows Active Directory sides of the setup.

Prerequisites

I'm going to do this for Ubuntu 10.04 and assume you have root access and are familiar with general Apache configuration. Other Linux distros, or perhaps the BSDs, should be very similar.

Some other things you're going to need to be able to do, or at least get someone in your organization to do for you, are: creating a User object in Active Directory, registering a Service Principal Name for it with setspn, and generating a keytab with ktpass on a Windows machine joined to the domain.

Examples

For the rest of this, we're going to assume the Linux web server is named test.foobar.edu, and the Active Directory domain (and Kerberos realm) is AD.FOOBAR.EDU.

AD Setup

Firstly, we need a User object in Active Directory that will represent the Apache service, and will hold a password which Kerberos tickets will be based on.

In the Active Directory Users and Computers utility, create a User object. The name doesn't matter much, so I'll go with Test-HTTP.

User object creation

After hitting Next >, on the password page uncheck User must change password... and check Password never expires. Go ahead and enter anything as a password; it'll get changed to something random in a later step.

User object password

Go ahead and finish that up.

Next, we need to associate a Service Principal Name (SPN) with the User object we just created. Kerberos principals are usually <protocol>/<domain-name>@<kerberos-realm>. Since we're doing a web server, it'll be known in Kerberos as HTTP/test.foobar.edu@AD.FOOBAR.EDU. Run this in a Command Prompt window:

setspn -A HTTP/test.foobar.edu Test-HTTP

(note that we left off the @AD.FOOBAR.EDU part, setspn knows to put that in)
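You can double-check that the SPN was registered against the right account by listing the SPNs for it:

setspn -L Test-HTTP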

Lastly, we're going to create a keytab file (I'll call it test-http.keytab), which holds encryption keys based on the User object's password. When a client requests a ticket to access our Linux box, AD will locate the User object based on the SPN we associated with it, and use the same encryption keys to create the Kerberos tickets our Linux box's Apache will be set up to require.

(This is a one-line command, but I'm going to display it below as several lines for readability)

ktpass -out test-http.keytab 
    -princ HTTP/test.foobar.edu@AD.FOOBAR.EDU 
    -mapuser Test-HTTP 
    -mapOp set 
    +rndPass 
    -crypto All 
    -ptype KRB5_NT_PRINCIPAL

The +rndPass changes the User object's password to something random; you don't need to know what it is - the keytab is the thing you really care about here.

Securely copy that test-http.keytab to the Linux box, and delete it off the Windows machine. We're done with AD now, back to the real world...

Linux setup

Move the keytab file somewhere handy, such as /etc/apache2/test-http.keytab, and set the permissions so that the Apache process (and nobody else) has access:

chmod 440 test-http.keytab
chown www-data:www-data test-http.keytab

Install the Apache Kerberos module

aptitude install libapache2-mod-auth-kerb

You'll need an /etc/krb5.conf file. A minimal one that leaves it up to Kerberos to discover what it needs might be:

[libdefaults]
default_realm = AD.FOOBAR.EDU

Here's a more explicit one that specifies the Active Directory KDCs (Key Distribution Centers) by IP:

[libdefaults]
default_realm = AD.FOOBAR.EDU
default_keytab_name = FILE:/etc/krb5.keytab

[realms]
AD.FOOBAR.EDU = {
    kdc = 1.2.0.1
    kdc = 1.2.0.2
    kdc = 1.2.0.3
    default_domain = AD.FOOBAR.EDU
    }

[domain_realm]
.foobar.edu = AD.FOOBAR.EDU

That sort of thing is documented on the MIT website.
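Before involving Apache, it's worth verifying that the keytab and krb5.conf actually work together. Assuming the MIT Kerberos client tools are installed (the krb5-user package on Ubuntu), something like this, run as a user that can read the keytab, should acquire a ticket without prompting for a password:

kinit -k -t /etc/apache2/test-http.keytab HTTP/test.foobar.edu@AD.FOOBAR.EDU
klist
kdestroy

If kinit comes back with an error instead, fix that before going any further.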

Apache Setup

We're in the home stretch now. Apache directives to protect a cgi-bin directory, for example, might look like:

ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
    AllowOverride None
    Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
    Order allow,deny
    Allow from all

    AuthName "FOOBAR Active Directory"
    AuthType KerberosV5
    KrbServiceName HTTP
    Krb5Keytab /etc/apache2/test-http.keytab
    require valid-user
</Directory>

Those last five lines inside the <Directory> block are the key here. The KrbServiceName of HTTP corresponds to what we entered as the protocol part of the principal name back in the setspn and ktpass commands. The AuthName is what will be displayed if the browser falls back to basic authentication when Kerberos is not available.

Test it out

Here's a super-simple CGI test.sh script we can put in our Kerberos-protected cgi-bin directory; be sure to make it executable.

#!/bin/sh
echo 'Content-Type: text/plain'
echo
echo "You are: $REMOTE_USER"

Go to a Windows client signed into Active Directory. To get IE or Chrome to attempt Kerberos authentication, you'll have to add test.foobar.edu to the Local Intranet zone in the Internet Options control panel. Here are some shots of where to go:

Local Intranet settings screenshots

For Firefox, you'll want the NTLMAuth add-on, which lets you specify the domains Firefox should attempt Kerberos authentication with.

Once you've got the browser fixed up, try accessing http://test.foobar.edu/cgi-bin/test.sh, and if everything works out, you should be rewarded with something like:

You are: bob.smith@AD.FOOBAR.EDU

If you didn't follow the steps to configure the browser to attempt Kerberos auth with the site, it should pop up a userid/password box, and if you enter the correct info, it should show the same thing.

Conclusion

So there you have it, after 15 minutes of work you can now have a webpage tell you what your own AD userid is. OK, it's probably more useful than that - you can now write webapps proxied behind Apache, such as a Django app, that just have to look at the REMOTE_USER variable to tell who's on the other end of the connection.

You'll probably not want to require Kerberos auth for the whole app; all you really need is to require it for one particular login URL that sets your userid into a session, and then leave it up to the framework to check the session for authentication on the rest of the site.
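As a rough sketch of that idea (the /login path here is just an example), the Kerberos directives could be scoped to a single location, with the rest of the site left unprotected as far as Apache is concerned:

<Location "/login">
    AuthName "FOOBAR Active Directory"
    AuthType KerberosV5
    KrbServiceName HTTP
    Krb5Keytab /etc/apache2/test-http.keytab
    require valid-user
</Location>

Your app's /login view would then record REMOTE_USER in the session and redirect back into the site.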

Debian GNU/kFreeBSD in a FreeBSD Jail - part 2

Wednesday, February 29, 2012

Previously I wrote about getting Debian GNU/kFreeBSD working in a jail. I've worked on it a bit more, polishing things up so I've got it working pretty seamlessly with my existing ezjail FreeBSD jails, so everything starts automatically, and you can use the ezjail commands to stop/restart the jail.

Here are a few more notes about how things got set up for my jail, which I named debian:

Kernel Modules

In /boot/loader.conf, I added these lines:

fdescfs_load="YES"
linprocfs_load="YES"
linsysfs_load="YES"
tmpfs_load="YES"

Mounting Filesystems

Created /etc/fstab.debian and populated it with:

linproc     /jails/debian/proc      linprocfs       rw 0 0
linsys      /jails/debian/sys       linsysfs        rw 0 0
tmpfs       /jails/debian/lib/init/rw   tmpfs       rw 0 0

ezjail Config

Created /usr/local/etc/ezjail/debian with these contents:

export jail_debian_hostname="debian"
export jail_debian_ip="127.0.0.6"
export jail_debian_interface="lo0"
export jail_debian_rootdir="/jails/debian"
export jail_debian_mount_enable="YES"
export jail_debian_devfs_enable="YES"
export jail_debian_devfs_ruleset="devfsrules_jail"
export jail_debian_fdescfs_enable="YES"
export jail_debian_exec_start="/etc/init.d/rc 3"
export jail_debian_flags="-l -u root"

I also tried adding an IPv6 address to the jail, and that seems to work OK.

So you can now stop/start the jail with:

service ezjail.sh stop debian
service ezjail.sh start debian

Connect to the jail console

If you create a symlink for login (so that from the jail's POV there's a /usr/bin/login, like there would be in a FreeBSD jail)

cd /jails/debian/usr/bin/
ln -s ../../bin/login .

then you can use the ezjail-admin command to get a console in the jail, with:

ezjail-admin console debian

Otherwise, I've been using my own script to get a console (which assumes bash is installed in the jail), named /usr/local/sbin/jlogin

#!/bin/sh
#
# log into a jail, running bash
#
# look up the jail ID for the given jail name
JID=`jls | grep " $1 " | awk '{print $1}'`
# start a login shell in the jail with a minimal, known environment
exec jexec $JID env -i PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin TERM=$TERM EDITOR=$EDITOR LANG=$LANG HOME=/root bash -l

That runs as:

jlogin debian

Debian GNU/kFreeBSD in a FreeBSD Jail

Sunday, February 26, 2012

I've been a FreeBSD user for quite some time, going back to 3.3 or so, and for the last several years have also been working a lot with Ubuntu Linux. So when I ran across Debian GNU/kFreeBSD, which provides a Debian environment on top of a FreeBSD kernel, I was somewhat intrigued. It got even more interesting when I found a tutorial on setting up GNU/kFreeBSD in a jail. The notion of having a Debian environment on my home FreeBSD server without having to get something like VirtualBox running was just too good to pass up.

I got it running fairly decently, but along the way ran into some small problems - and thought I'd jot down what they were and what the fixes were.

FreeBSD Update

At first, I was using FreeBSD 8.2-RELEASE, and used debootstrap to install Debian Squeeze, as the tutorial showed. Once inside the jail, things sort of worked, but most commands, aptitude especially, would die with:

User defined signal 1

It turns out you need a newer kernel than 8.2 to run kFreeBSD in a chroot, as is mentioned in the FAQ. I upgraded my FreeBSD kernel/world to 8.3-PRERELEASE (2012-02-22), and the "signal 1" problem went away.

Debian Update

The next problem was that aptitude would still die, with:

Uncaught exception: Unable to read from stdin: Operation not permitted

After reading about this bug in cwidget, it seemed an upgrade to Wheezy was needed to fix the problem - and sure enough that problem went away.

kbdcontrol and /dev/console

The upgrade to Wheezy didn't go entirely smoothly, mainly due to the kbdcontrol package (required by sysvinit) being unable to access /dev/console in the jail. I wasn't worried about keeping things in the jail isolated for security reasons, so I went ahead and added /dev/console on-the-fly to the running jail by running, outside the jail:

devfs -m /jails/debian/dev rule add path 'console*' unhide
devfs -m /jails/debian/dev rule applyset

After that, the kbdcontrol package was able to be upgraded, and I seem to have a Wheezy FreeBSD jail now. Very cool.
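Those devfs commands only affect the running system, so the unhidden console won't survive the jail's devfs being remounted. One way to make it stick (a sketch; the ruleset name and number here are arbitrary) would be a custom ruleset in /etc/devfs.rules that builds on the stock jail ruleset:

[devfsrules_jail_debian=100]
add include $devfsrules_jail
add path 'console*' unhide

and then pointing the jail's devfs ruleset setting at devfsrules_jail_debian.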

UPDATE: A followup talks more about the actual file changes made to run as an ezjail

VM Serial Console part 2

Wednesday, November 9, 2011

Fooling around a bit more with accessing a VM's serial console from a KVM hypervisor with

virsh console mymachine

I found one thing that doesn't carry over from the host to the VM is the terminal window size, so if you try to use something like vim through the console connection, it seems to assume an 80x25 or so window, and when vim exits your console is all screwed up.

It looks like a serial connection doesn't have an out-of-band way of passing that info the way telnet or ssh does, so you have to set it manually. You can discover your settings on the host machine with

stty size

which should show something like:

60 142

on the VM, the same command probably shows

0 0

zero rows and columns, no wonder it's confused. Fix it by setting the VM to have the same rows and columns as the host with something like:

stty rows 60 columns 142

and you're in business.
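To save doing that by hand, you can have the host print the exact command to paste into the VM's console:

stty size | awk '{print "stty rows " $1 " columns " $2}'

which for the example above would output stty rows 60 columns 142, ready to run on the VM.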

Enabling VM serial console on stock Ubuntu 10.04 server

Wednesday, November 2, 2011

So I've been running Ubuntu 10.04 server virtual machines on a host running KVM as the hypervisor, and thought I should take a look at accessing the VM's console from the host, in case there's a problem with the networking on the VM.

The host's libvirt definition for the VM shows a serial port and console defined with

<serial type='pty'>
  <source path='/dev/pts/1'/>
  <target port='0'/>
  <alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/1'>
  <source path='/dev/pts/1'/>
  <target type='serial' port='0'/>
  <alias name='serial0'/>
</console>

and within the stock Ubuntu 10.04 server VM, dmesg | grep ttyS0 shows:

[    0.174722] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    0.175027] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A

So the virtual hardware is all set up on both ends, but ps aux | grep ttyS0 doesn't show anything listening.

We need a process listening on that port. To do that, create a file named /etc/init/ttyS0.conf with these contents:

# ttyS0 - getty
#
# This service maintains a getty on ttyS0 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -L 38400 ttyS0 xterm-color

and then run

initctl start ttyS0
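If the job came up, there should now be a getty attached to the port, which you can confirm with:

initctl status ttyS0
ps aux | grep ttyS0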

Back on the host machine, run virsh list to find the name or id number of your VM, and then

virsh console <your-vm-name-or-number>

to connect. Hit return and you should see a login prompt.

Customizing cloned Ubuntu VMs

Sunday, October 30, 2011

I was playing with creating and cloning Ubuntu virtual machines the other day, and got to the point where I had a nicely set up reference image that I could just copy to fire up additional VMs that would be in a pretty usable state.

There are a few things within a cloned VM that you'd want to change if you were going to keep the new instance around, such as the hostname, SSH host keys, and disk UUIDs. I threw together a simple shell script to take care of these things automatically.

#!/bin/sh
#
# Updates for cloned Ubuntu VM
#

#
# Some initial settings cloned from the master
#
ROOT=/dev/vda1
SWAP=/dev/vdb1
LONG_HOSTNAME=ubuntu.local
SHORT_HOSTNAME=ubuntu

if [ -z "$1" ]
then
    echo "Usage: $0 <new-hostname>"
    exit 1
fi

# 
# Update hostname
#
shorthost=`echo $1 | cut -d . -f 1`
echo $1 >/etc/hostname
hostname $1
sed -i -e "s/$LONG_HOSTNAME/$1/g" /etc/hosts
sed -i -e "s/$SHORT_HOSTNAME/$shorthost/g" /etc/hosts

#
# Generate new SSH host keys
#
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server

#
# Change root partition UUID
#
OLD_UUID=`blkid -o value -s UUID $ROOT`
NEW_UUID=`uuidgen`
tune2fs -U $NEW_UUID $ROOT
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab /boot/grub/grub.cfg

#
# Change swap partition UUID
#
OLD_UUID=`blkid -o value -s UUID $SWAP`
NEW_UUID=`uuidgen`
swapoff $SWAP
mkswap -U $NEW_UUID $SWAP
swapon $SWAP
sed -i -e "s/$OLD_UUID/$NEW_UUID/g" /etc/fstab

#
# Remove udev lines forcing new MAC address to probably show up as eth1
#
sed -i -e "/PCI device/d"     /etc/udev/rules.d/70-persistent-net.rules
sed -i -e "/SUBSYSTEM==/d" /etc/udev/rules.d/70-persistent-net.rules

echo "UUID and hostname updated, udev nic lines removed,  be sure to reboot"

I'd then run it on the cloned machine with something like

update_clone.sh mynewmachine.foobar.com

This is somewhat particular to my specific master VM, in that it expects one disk dedicated to root and one disk dedicated to swap, and that the VM was created with ubuntu.local as the hostname. Hopefully, though, it'll give some ideas about what to look for and how to script those changes.

WTF!, when did those files get deleted ?!

Tuesday, October 25, 2011

A guy I work with recently showed me a bad situation he had with iPhoto: some family videos had gone missing from his hard disk. The thumbnails were still in iPhoto, but when he clicked on them, they wouldn't play because the files were gone. He had Time Machine backups, but the videos were missing even from the oldest copies. Apparently the files had been deleted quite a while ago.

This got me thinking about a huge problem with backups: you can be very diligent about keeping them, but if you have no idea that something's missing, they don't do you much good.

What you need is something that would alert you to unexpected deletions. Thinking about my friend's experience, I whipped together a small shell script, to be run periodically, that takes an inventory of the iPhoto originals; if something has been removed compared to the last run, it places a file on my desktop listing a diff of the changes, which hopefully I'd notice.

I saved this on my disk as /Users/barryp/bin/inventory_iphoto.sh

#!/bin/bash
#
# Check if anything has been deleted from the iPhoto Originals
# folder, and if so, place a file on the Desktop listing what's
# gone missing
#

CHECK_FILE=~/Library/Logs/com.diskcompare.inventory_iphoto.txt

find ~/Pictures/iPhoto\ Library/Originals -type f | sort >$CHECK_FILE.new
if [ -e $CHECK_FILE ]
then
    diff -u $CHECK_FILE $CHECK_FILE.new >$CHECK_FILE.diff
    grep '^-/' $CHECK_FILE.diff >$CHECK_FILE.gone
    if [ -s $CHECK_FILE.gone ]
    then
        mv $CHECK_FILE.diff "$HOME/Desktop/DELETED iPhoto files-`date "+%Y-%m-%d %H%M%S"`.txt"
    else
        rm $CHECK_FILE.diff
    fi
    rm $CHECK_FILE.gone
fi
mv $CHECK_FILE.new $CHECK_FILE

and made it executable with

chmod +x /Users/barryp/bin/inventory_iphoto.sh

Other than the directory name to check, there's nothing iPhoto or even Mac specific about this; it could easily be adapted for other uses.

You can also run the script manually anytime you want. To test it out, run it once, edit ~/Library/Logs/com.diskcompare.inventory_iphoto.txt to add a line (starting with a /), and then run the script again; a diff file should pop up on your desktop showing that the line you manually added is gone from the updated inventory.

Next, I set up the Mac to run this once a day or so, by creating a launchd job saved as /Users/barryp/Library/LaunchAgents/com.diskcompare.inventory_iphoto.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.diskcompare.inventory_iphoto</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/barryp/bin/inventory_iphoto.sh</string>
    </array>
    <key>StartInterval</key>
    <integer>86400</integer>
</dict>
</plist>

(You'll have to change the path to the script to suit your setup; unfortunately it doesn't seem you can use tilde expansion in a launchd job.)

and then activated it in launchd with this command at the command prompt:

launchctl load ~/Library/LaunchAgents/com.diskcompare.inventory_iphoto.plist
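To confirm launchd actually picked the job up, look for its label in the loaded job list:

launchctl list | grep com.diskcompare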

Fortunately my friend found a really old harddisk that happened to have his missing videos on it, but he's even more lucky to have noticed the problem in the first place.

With a periodic inventory as described above, hopefully a person would become aware of a problem within a day or two, in plenty of time to get the files out of a backup system.

iPXE on OpenBSD

Saturday, October 1, 2011

I got a chance to try Enhanced PXE booting with iPXE on an OpenBSD box and found a couple things that don't work...

Firstly, the stock DHCP daemon does not support if statements in its configuration, so this bit to keep iPXE from being told to load iPXE (a loop) didn't work:

if exists user-class and option user-class = "iPXE" {
    filename "http://10.0.0.1/pxelinux.0";
    } 
else {
    filename "undionly.kpxe";
    }

To get it to work I had to follow the alternate advice on the chainloading documentation about Breaking the loop with an embedded script.
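For reference, the embedded script is just a tiny iPXE script that does what the DHCP if statement would have done (the URL is the same example server as above, and myscript.ipxe is whatever you choose to name the file):

#!ipxe
dhcp
chain http://10.0.0.1/pxelinux.0

which gets baked into the image at build time with:

make bin/undionly.kpxe EMBED=myscript.ipxe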

However, recompiling undionly.kpxe on OpenBSD 4.9 failed, with the error:

net/fcels.c: In function 'fc_els_prli_detect':
net/fcels.c:1108: internal compiler error: Segmentation fault: 11
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://gcc.gnu.org/bugs.html> for instructions.
gmake: *** [bin/fcels.o] Error 1

(this was GCC 4.2.1). FreeBSD 8.2 also has the same version of GCC and comes up with the same error.

I ended up using an Ubuntu 10.04.3 box, which I believe was GCC 4.4.x, and that worked fine.

Automounting ISO images in FreeBSD

Monday, September 26, 2011

Since I've been playing with ISO images a lot lately (see posts tagged: pxe), I thought I'd take a look at making it easier to access their contents, since manually mounting and unmounting them gets to be a drag. It turns out that an automounter is just what the doctor ordered: a service that will mount a filesystem on demand.

Typically, you'd see automounters mentioned in conjunction with physical CD drives, floppy drives, or NFS mounts, but the idea works just as well for ISO files. This way you can have both the original ISO image and its contents available, without the contents taking up any additional space.

For FreeBSD, the amd utility will act as our automounter. On Linux systems amd is an option too, but another system called autofs seems to be more widely used there; perhaps I'll take a look at that in another post.

Let's start with the desired end result ...

Directory Layout

On my home server I'd like to have this directory layout:

/data/iso/
    images/
        openbsd-4.9-i386.iso                    
        ubuntu-10.04.3-server-amd64.iso
        ubuntu-11.04-server-amd64.iso                    
            .
            .
            .

/data/iso/contents will be where the image contents will be accessible on-the-fly, in directories named after the ISO files, for example:

/data/iso/
    contents/
        openbsd-4.9-i386/    
            4.9/
            TRANS.TBL
            etc/
        ubuntu-10.04.3-server-amd64/
            README.diskdefines
            cdromupgrade
            dists/
            doc/
            install/
            isolinux/
            md5sum.txt
            .
            .
            .
        ubuntu-11.04-server-amd64/             
        .
        .
        .

Mount/Unmount scripts

amd on FreeBSD doesn't deal directly with ISO files, so we need a couple of very small shell scripts that can mount and unmount the images. Let's call the first one /local/iso_mount:

#!/bin/sh
# attach the ISO (first argument) as an md device, and mount it at the mountpoint (second argument)
mount -t cd9660 /dev/`mdconfig -f $1` $2

It does two things: first it creates an md device based on the given ISO filename (the first argument), then it mounts that md device at the specified mountpoint (the second argument). Example usage might be:

/local/iso_mount /data/iso/images/ubuntu-11.04-server-amd64.iso /mnt

The second script we'll call /local/iso_unmount:

#!/bin/sh
# figure out which md unit the given ISO file (first argument) is attached to
unit=`mdconfig -lv | grep $1 | cut -f 1`
num=`echo $unit | cut -d d -f 2`
# unmount it, give the system a moment to let go of the device, then destroy it
umount /dev/$unit
sleep 10
mdconfig -d -u $num

It takes the same parameters as iso_mount. (The sleep call is a bit hackish, but umount seems a bit asynchronous, and it doesn't seem you can destroy the md device immediately after umount returns; you have to give the system a bit of time to finish with the device.) To undo our test mount above:

/local/iso_unmount /data/iso/images/ubuntu-11.04-server-amd64.iso /mnt

amd Map File

amd is going to need a map file, so that when given the name of a directory something is attempting to access, it can look up where to mount it from. For our needs, this can be a one-liner we'll save as /etc/amd.iso-file:

*   type:=program;fs:=${autodir}/${key};mount:="/local/iso_mount /local/iso_mount /data/iso/images/${key}.iso ${fs}";unmount:="/local/iso_unmount /local/iso_unmount /data/iso/images/${key}.iso ${fs}"

A map file is a series of lines with

<key> <location>[,<location>,<location>,...]

In our case we've got the wildcard key *, so it'll apply to anything we try to access in /data/iso/contents/, and the location is a semicolon-separated series of directives. type:=program indicates we're specifying mount:= and unmount:= commands to handle this location, and ${key} is expanded by amd to be the name of the directory we tried to access. (The script path appears twice in each command because, for type:=program, amd treats the first word as the program to execute and the remaining words as its argument list, starting with argv[0].)

amd Config File

I decided to use a config file to set things up rather than doing it all as commandline flags, so this is my /etc/amd.conf file:

[ global ]
log_file = syslog

[ /data/iso/contents ]
map_name = /etc/amd.iso-file

Basically this tells amd to watch the /data/iso/contents/ directory and handle attempts to access it based on the map file /etc/amd.iso-file. It also sets logging to go to syslog (typically you'd look in /var/log/messages).

Enable and start it

Add these lines to /etc/rc.conf:

amd_enable="YES"
amd_flags="-F /etc/amd.conf"

Fire it up with:

service amd start

You should be in business. Unfortunately, if you try

ls /data/iso/contents

the directory will initially appear empty, but if you try

ls /data/iso/contents/openbsd-4.9-i386

you should see a listing of the image's top-level contents (assuming you have a /data/iso/images/openbsd-4.9-i386.iso file). Once an image has been automounted, you will see it in ls /data/iso/contents

Check the mount

If you try:

mount | grep amd

you'll probably see something like:

/dev/md0 on /.amd_mnt/openbsd-4.9-i386 (cd9660, local, read-only)

The cool thing is, after a couple minutes of inactivity, the mount will go away, and /data/iso/contents will appear empty again.

Manually unmount

The amq utility lets you control the amd daemon; one possibility is to request an unmount right away, for example:

amq -u /data/iso/contents/openbsd-4.9-i386

Conclusion

That's the basics. Now if you're setting up PXE booting and point your Nginx server, for example, to share /data/iso, you'll be able to reference files within the ISO images, and they'll be available as needed.
