Category Archives: Unix

Hard Drive Sustainability

Your hard drive with very important family pictures has just failed, and now all data is lost forever. Could you have prevented this from happening? This article is a quick walk-through of how to detect hard drive errors before the disk becomes unusable.

Stephen Dunn

Installing Ubuntu: A Trial and Error Account

Recently I decided it was time to grab up a spare computer that I could use for tinkering as well as back up files from my other machine in the event that it goes down. The one big thing I wanted to do was to install a Linux OS and experience everything that comes with it. This would be my first time installing a Linux OS. I chose to install Ubuntu since it is the most widely used and has the most extensive documentation and help available.

The Download
When I bought this machine it had a fresh install of Windows XP and came with the recovery disk, which was excellent because I had to use it multiple times before I got things just the way I wanted them. Since this computer didn’t have any files, I didn’t have to worry about backing anything up, but that would be a must if you’re considering putting Ubuntu on an everyday machine.

To install Ubuntu you need the install CD. The Ubuntu community will mail you one if you request it online, but why not be a DIYer and burn it yourself? I downloaded the Ubuntu 9.10 Desktop version for a graphical install and went straight to burning it onto a CD. This was a mistake. I didn’t figure the part of the installation instructions about running the checksum was all that important, but it absolutely is. If the download is the least bit corrupt, the installation will not work, and I burned several CDs of a bad image. Eventually I followed the installation documentation more closely and downloaded winMd5Sum. With this free tool I was able to compare the checksum of the downloaded image against the correct checksum from the Ubuntu site. It took several attempts, and a switch to a Canadian mirror, before I got a successful download. Finally I could burn it to a disc.
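On Linux the same verification is built in via md5sum. A small helper sketch of my own (the function name and messages are not from any official tool; the expected sum comes from the MD5SUMS file published on the Ubuntu mirror):

```shell
#!/bin/sh
# Hypothetical helper: compare a downloaded image's MD5 sum against the
# published one, and report a match or a mismatch.
verify_md5() {
    file=$1
    expected=$2
    actual=$(md5sum "$file" | awk '{ print $1 }')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file matches"
    else
        echo "MISMATCH: $file has $actual, expected $expected" >&2
        return 1
    fi
}

# Usage: verify_md5 ubuntu-9.10-desktop-i386.iso <sum-from-MD5SUMS-file>
```

Had I run something like this after the first download, I’d have saved a stack of coasters.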


Ubuntu Live Network Boot using PXE

Requirements

  • Linux server with NFS (or compatible)
  • TFTP server
  • DHCP server
  • syslinux / pxelinux files

To simplify these instructions we are going to make the following assumptions.

  • DHCP server is 10.0.0.2
  • TFTP server is 10.0.0.3
  • NFS is a Ubuntu server at 10.0.0.4

In reality your TFTP and NFS server will likely be the same machine, but referring to each role by its own IP should make these instructions easier to follow.
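To tie the pieces together, the DHCP server (10.0.0.2) has to point PXE clients at the TFTP server and the pxelinux boot loader. A minimal ISC dhcpd.conf sketch, where the subnet mask and address range are my own assumptions:

```
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    next-server 10.0.0.3;      # the TFTP server from the list above
    filename "pxelinux.0";     # boot loader shipped with syslinux/pxelinux
}
```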

Unplugging an LVM partitioned USB drive

Recently I had the heartbreaking experience of having to reboot a Linux server. Normal usage should almost never require you to reboot the OS the way you so frequently have to in Windows. In this case I had an external USB drive partitioned with LVM humming along on a Linux server. I needed to pull the drive, so, as I’ve done with other drives, I unmounted all partitions on it and then unplugged it from the USB port. All well and good. But when I plugged it back in, the lvs command showed error messages on the partitions and I was unable to mount them.

Some Google searches later, I found that the OS keeps references to LVM partitions unless you explicitly tell it to unhook them; only then can you tell it to hook them back up once the drive is plugged back in. In my case I had to resort to rebooting the server to get the OS to reassemble the LVM partitions. Short of that, I would have had to manually delete certain files and move things around to get them working again. So here are the magic incantations that will save you the headache.

Before you unplug an LVM partitioned USB drive, you must run the following commands:

#!/bin/bash
# Deactivate every logical volume in the group, then export the group
lvchange -an /dev/your_volume_group_name
vgexport -a   # exports all inactive volume groups

Use the man command to explore what these commands do.
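Before unplugging, you can also sanity-check that no logical volume in the group is still active. The helper below is a sketch of my own, not an LVM command; it reads the output of `lvs --noheadings -o lv_attr your_volume_group_name`, in which the fifth character of the attribute string is 'a' while an LV is active:

```shell
#!/bin/sh
# Hypothetical check: succeed only if every LV in the piped-in
# `lvs --noheadings -o lv_attr <vg>` output is inactive.
all_lvs_inactive() {
    while read -r attr; do
        [ -z "$attr" ] && continue
        case "$attr" in
            ????a*) echo "active LV found: $attr"; return 1 ;;
        esac
    done
    echo "all LVs inactive"
}

# Usage: lvs --noheadings -o lv_attr your_volume_group_name | all_lvs_inactive
```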

Now you should be able to unplug the drive. When you are ready to plug it back in, stick it back in the USB port and run the following commands:

#!/bin/bash
vgimport -a   # re-import the exported volume groups
lvchange -ay /dev/your_volume_group_name   # reactivate its logical volumes

You should now be able to run lvs and see your LVM partitions on the USB drive without any errors, and proceed to mount the partitions.

Hope you found this useful. Are there other or different ways of doing this? Please add your comments below and Happy Holidays!

GlusterFS Replication for Clustering

I was recently searching for a way to simulate shared physical storage in a VPS environment for clustering purposes. In an enterprise data center we can expect some type of SAN to be available to provide shared physical storage, and in that case GFS is a simple way to create a shared file system that can be used as a cluster resource. GlusterFS lets us provide the same kind of functionality to multiple nodes when we have no means of giving them access to the same physical storage.

The Gluster community site at http://www.gluster.org is a great resource for anyone wanting to implement the file system.

For the remainder of this post I will be referring to an environment consisting of two CentOS VPS nodes.

Preparing Ext3 File System for Sharing

Gluster will not share raw devices; instead it uses an already mounted file system. I will assume a dedicated ext3 file system on the mount point /replicator. If you can’t provide a dedicated storage device for this purpose, you can just use a directory on the root file system for testing.
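If you do have a dedicated device, mounting it persistently is a one-line /etc/fstab entry; a sketch, where the device name /dev/xvdb1 is an assumption for a typical VPS:

```
/dev/xvdb1    /replicator    ext3    defaults    0 0
```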

Installing GlusterFS Server and Client

The following commands need to be executed on each node to grab and install the necessary RPMs.

wget -r -l 1 http://ftp.gluster.com/pub/gluster/glusterfs/3.0/3.0.0/CentOS/
cd ftp.gluster.com/pub/gluster/glusterfs/3.0/3.0.0/CentOS/
rpm -Uvh glusterfs-*-3.0.0-1.x86_64.rpm

Execute the following on either node to generate the necessary configuration files in the current working directory. This creates a client configuration file named along the lines of replicator-tcp.vol, plus a server configuration file for each node, prefixed with the appropriate node hostname.

glusterfs-volgen --name replicator --raid 1 node1:/replicator node2:/replicator

Move the client file to /etc/glusterfs/glusterfs.vol on each node.  Also move the appropriate server file to /etc/glusterfs/glusterfsd.vol for each node.

Mounting GlusterFS Volumes

The simplest way to configure mounting of the volumes is via /etc/fstab.  Place a line in fstab on each node.

/etc/glusterfs/glusterfs.vol    /data   glusterfs   defaults  0 0

This will mount the shared volumes to /data.  Try writing a file to one node and watch it appear on the other!

cd /data
dd if=/dev/zero of=/data/test bs=1M count=32

High Availability Implications

At this point I am still vetting Gluster’s reliability as an HA solution. It will most definitely keep data intact during planned maintenance: if we properly stop the client/server on one node, changes can continue to occur on the other, and when we rejoin a node to active shared storage, synchronization is automatic.

The real test is whether Gluster will hold up in less routine situations. Some crude tests involving yanking network connectivity from a node that is replicating changes seem to cause issues. For example, if I start the dd operation above on node1 and kill the connection to node2, one way or another, before it finishes, node1 still completes the operation fine. When I reattach node2, even the active mount on /data seems to synchronize with node1 just fine. Where differences start to appear is in the /replicator directory on node2: it gets out of whack, and neither client pays attention to that server any longer.

If gluster can hold up to software and hardware failures without data corruption it can certainly be used as shared storage for clustering.  I’ll continue to explore these options and report back later.

Spring Roo Sample App Tutorial – Part 1

In this blog, I will start creating a web application used to organize bookmarks. Because only certain bookmarks are of interest to specific groups of people, I will use groups in our LDAP server to control which users see which groups of bookmarks.

The entire blog will be released in posts staggered over time. Part 1 will focus on initial setup of Roo, the core web application and authentication with a directory server. Subsequent posts will refine the Spring Roo application.

What is Roo?

Roo is a great rapid prototyping tool because a prototype that proves itself doesn’t need to be scrapped; you can proceed to flesh it out into the full application.

Roo gives you Spring best practices, Rails-like scaffolding, an interactive shell, no additional run-time dependencies, and a big productivity boost while not locking you into yet another framework. You can re-use your existing Spring/JPA/Hibernate knowledge, while getting the productivity gains from Roo.

Setting up Roo

  • wget http://s3.amazonaws.com/dist.springframework.org/milestone/ROO/spring-roo-1.0.0.RC3.zip
  • unzip spring-roo-1.0.0.RC3.zip
  • sudo ln -s ~/Frameworks/spring-roo-1.0.0.RC3/bin/roo.sh /usr/bin/roo
  • mkdir ~/Workspaces/intranetlinks; cd ~/Workspaces/intranetlinks

Starting our Project

Once in your new project directory, type ‘roo’. Then once in the Roo shell, execute these commands. See this guide for an explanation of what these commands do:

project --topLevelPackage com.sourceallies.links
persistence setup --provider HIBERNATE --database MYSQL
database properties set --key database.password --value password
database properties set --key database.username --value username
database properties set --key database.url --value jdbc:mysql://localhost:3306/intranetlinks
 
entity --name ~.domain.LinkCategory
field string name --notNull --sizeMin 1 --sizeMax 255
 
entity --name ~.domain.Link
field string name --notNull --sizeMin 1 --sizeMax 60
field string url --notNull --sizeMin 1 --sizeMax 255
field string ldapSecurityGroup --notNull --sizeMin 1 --sizeMax 60
field reference --class ~.domain.Link --fieldName category --type ~.domain.LinkCategory
 
logging setup --level DEBUG
 
controller scaffold --name ~.web.LinkCategoryController --entity ~.domain.LinkCategory
controller scaffold --name ~.web.LinkController --entity ~.domain.Link
 
finder list --class com.sourceallies.links.domain.Link
finder add --finderName findLinksByCategory --class ~.domain.Link
 
security setup
test integration
perform test
perform eclipse

Then of course, create your local database inside the MySQL shell:

create database intranetlinks;
create user 'username'@'localhost' IDENTIFIED BY 'password';
grant all privileges on intranetlinks.* to 'username'@'localhost' with grant option;

Next, unless you’re using Roo 1.0.0.RC4 (not yet available at the time of this blog post), you’ll need to add the following configuration near the bottom of your pom.xml (to fix this bug).

<profiles>
    <profile>
        <id>jaxb</id>
        <activation>
            <jdk>1.5</jdk>
        </activation>
        <dependencies>
            <dependency>
                <groupId>javax.xml.bind</groupId>
                <artifactId>jaxb-api</artifactId>
                <version>2.1</version>
            </dependency>
            <dependency>
                <groupId>com.sun.xml.bind</groupId>
                <artifactId>jaxb-impl</artifactId>
                <version>2.1.3</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>

Then pull the JAXB JAR into your build by executing this maven command (outside of the Roo shell):

mvn clean package

Finally, per a prior blog, replace the body of your src/main/resources/META-INF/spring/applicationContext-security.xml with this:

<http>
    <form-login login-processing-url="/static/j_spring_security_check" login-page="/login" authentication-failure-url="/login?login_error=t"/>
    <logout logout-url="/static/j_spring_security_logout"/>
    <intercept-url pattern="/admin/**" access="ROLE_ADMIN"/>
    <intercept-url pattern="/member/**" access="IS_AUTHENTICATED_REMEMBERED" />
    <intercept-url pattern="/resources/**" access="IS_AUTHENTICATED_ANONYMOUSLY" />
    <intercept-url pattern="/static/**" access="IS_AUTHENTICATED_ANONYMOUSLY" />
    <intercept-url pattern="/images/**" filters="none" />
    <intercept-url pattern="/styles/**" filters="none" />
    <intercept-url pattern="/link/form" access="ROLE_INTRANETLINKS-ADMINS" />
    <!-- We're doing REST, only allow GETs to normal users -->
    <intercept-url pattern="/link/**" access="ROLE_INTRANETLINKS-ADMINS" method="DELETE"/>
    <intercept-url pattern="/link/**" access="ROLE_INTRANETLINKS-ADMINS" method="POST"/>
    <intercept-url pattern="/link/**" access="ROLE_INTRANETLINKS-ADMINS" method="PUT"/>
    <intercept-url pattern="/link/**" access="IS_AUTHENTICATED_REMEMBERED" />
    <intercept-url pattern="/login/**" filters="none" />
    <intercept-url pattern="/**" access="ROLE_USERS" />
    <anonymous />
</http>
 
    <ldap-server id="ldapServer" url="ldap://yourdirectoryserver:389/" />
 
   <authentication-manager>
    <ldap-authentication-provider server-ref="ldapServer"  
       user-search-base="ou=people,dc=sourceallies,dc=com" 
       user-search-filter="(uid={0})"
       group-role-attribute="cn"
       group-search-base="ou=groups,dc=sourceallies,dc=com"
       group-search-filter="(memberUid={1})"
       role-prefix="ROLE_" />
   </authentication-manager>

Note that in Spring Security 3.0, Authentication Providers must now be declared from within the authentication-manager element (more information here).

Then add a few more dependencies to your pom.xml

    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>org.springframework.security.ldap</artifactId>
        <version>3.0.0.RC1</version>
    </dependency>
 
    <dependency>
        <groupId>org.springframework.ldap</groupId>
        <artifactId>spring-ldap-core</artifactId>
        <version>1.3.0.RELEASE</version>
    </dependency>
 
    <dependency>
        <groupId>org.springframework.ldap</groupId>
        <artifactId>spring-ldap-core-tiger</artifactId>
        <version>1.3.0.RELEASE</version>
    </dependency>

This will allow you to use Spring LDAP and also conditionally render pieces of your application like this:

<security:authorize ifAllGranted="ROLE_SUPERVISOR">
    <li id="finder_findlinksbycategory_menu_item">
        <c:url value="/link/find/ByCategory/form" var="finder_findlinksbycategory_menu_item_url"/>
        <a href="${finder_findlinksbycategory_menu_item_url}">
            <spring:message arguments="Category" code="global.menu.find"/>
        </a>
    </li>
</security:authorize>

Finally, run the following command to start up Tomcat and begin refining your UI.

mvn tomcat:run

Stay tuned for Part 2 of this series!

Vim splits, an introduction.

First off, let’s get some test files:

for i in foo bar cat dog ; do echo $i > $i ; done;

This creates 4 files named foo, bar, cat and dog. Each file has a single line that contains the file’s own name.

Let’s open the first file:

vim foo

[Screenshot: vim with a single file]

This is the familiar view of vim with one file open. Now to open a new split with the bar file inside it:

:sp bar
[Screenshot: vim with two splits]

Focus lands in the new split initially. To move between splits, first press Ctrl-w (I remember this as “Control Window”; I’m not sure what the official mnemonic is), then press a directional key to move the cursor to the split you’re interested in. The directional key can be one of the arrow keys or, my preference, the home-row keys h, j, k and l.

We can split again and open the cat file:

:sp cat
[Screenshot: vim with three splits]

By now you may have noticed that every time you open a new split, all splits get an equal amount of screen real estate. The size of the current split can be adjusted with Ctrl-w + and Ctrl-w - (+ increases the split size by one line, - reduces it by one line). If resizing one line at a time doesn’t sit well with you, prefix + or - with a multiplier. For example, to increase our current split (the cat split) by 5 lines, run the following:

Ctrl-w 5+
[Screenshot: vim with adjusted split size]

To quickly “maximize” the current split:

Ctrl-w _
[Screenshot: vim with 3rd split maximized]

And to return to equalized splits:

Ctrl-w =
[Screenshot: vim with three equalized splits]

So far we have only been working with horizontal splits. Vim also supports vertical splits. To split the current split again, only vertically (and at the same time open the file named “dog”) run:

:vsp dog
[Screenshot: vim with a vertical split]

Of course you can keep splitting until your head hurts. Vim even allows you to split the same file multiple times and it will automatically keep the contents in sync. This is very handy for referencing one section of a file while editing another.
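If you split often, two settings control where new splits appear. A .vimrc fragment (these are standard Vim options; the defaults open splits above and to the left):

```
" Put new horizontal splits below, and new vertical splits to the right,
" of the current window.
set splitbelow
set splitright
```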

[Screenshot: many nested vim splits]

Split related commands:

Command                   Action
:sp filename              Open filename in a horizontal split
:vsp filename             Open filename in a vertical split
Ctrl-w h / Ctrl-w ←       Shift focus to the split left of the current
Ctrl-w l / Ctrl-w →       Shift focus to the split right of the current
Ctrl-w j / Ctrl-w ↓       Shift focus to the split below the current
Ctrl-w k / Ctrl-w ↑       Shift focus to the split above the current
Ctrl-w n+                 Increase size of current split by n lines
Ctrl-w n-                 Decrease size of current split by n lines