Author Archives: Max Kuipers

First Glance at PowerShell

A couple of days ago I had the surprisingly excellent opportunity to learn and use Windows PowerShell… What? Don’t look at me like that. I disapprove of Microsoft just as much as the next Linux fanboy, but seriously, this was cool. Just give me a chance to explain. I swear, I was forced into the situation – one of the projects I was working on required that a simple script be written to rename files on a Windows server, but for various reasons, I couldn’t use Cygwin. After a brief panic attack caused by the realization that I would have to be separated from my beloved Bash, I looked into which scripting language would be best. After an exhaustive, comprehensive, and fully extensive 30-second Google search, I found myself with a choice between PowerShell and classic Batch… Naturally, I chose PowerShell.


How RAID 5 Works at a Bitwise Level

RAID 5 is a pretty magical thing overall, though a large portion of its magic lies in how it works at a bitwise level. But before I get into the bitwise sorcery, I’d like to briefly explain what RAID 5 is. RAID stands for Redundant Array of Inexpensive (or Independent) Disks. There are a number of different types of RAID, such as RAID 0, RAID 1, RAID 5, and RAID 6, each of which stores data in a different way and has its own space efficiency and fault tolerance.
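As a quick preview of the bitwise part, the parity at the heart of RAID 5 is just XOR. Here’s a minimal Java sketch of the idea (my own illustration, not code from the post): the parity block is the byte-wise XOR of the data blocks, and because XOR is its own inverse, any single lost block can be rebuilt by XORing everything that survives. Note that the layout here (two data disks plus a dedicated parity block) is a simplification; real RAID 5 rotates the parity block across all the disks.

```java
public class Raid5Parity {

    // The parity block is the byte-wise XOR of all the data blocks.
    static byte[] parity(byte[][] blocks) {
        byte[] p = new byte[blocks[0].length];
        for (byte[] block : blocks) {
            for (int i = 0; i < p.length; i++) {
                p[i] ^= block[i];
            }
        }
        return p;
    }

    public static void main(String[] args) {
        byte[] d0 = {0b0110, 0b0101};              // data block on disk 0
        byte[] d1 = {0b0011, 0b0110};              // data block on disk 1
        byte[] p  = parity(new byte[][] {d0, d1}); // parity block on disk 2

        // Pretend disk 1 died: XOR the survivors to rebuild its block,
        // because d0 ^ d1 = p implies d0 ^ p = d1.
        byte[] rebuilt = parity(new byte[][] {d0, p});
        System.out.println(java.util.Arrays.equals(rebuilt, d1)); // prints true
    }
}
```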

Rendering Global t:messages After Redirect

A common problem when working with JSF is getting global info messages (via <t:messages globalOnly="true"> or <h:messages globalOnly="true">) to display messages that were set in the previous request. When you have a <redirect/> in your faces-config for a particular page, you will not see the <t:messages> that were set on the previous page. This happens because faces messages live in the FacesContext of a single request, and a <redirect/> tells the browser to start a brand-new request.

The Problem

For instance, say you have two pages – page1.xhtml and page2.xhtml. In your faces-config.xml, you will have 2 entries.
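For illustration, the navigation entry for page1.xhtml might look something like this (a minimal sketch; the outcome name is made up for this example):

```xml
<navigation-rule>
    <from-view-id>/page1.xhtml</from-view-id>
    <navigation-case>
        <from-outcome>toPage2</from-outcome>
        <to-view-id>/page2.xhtml</to-view-id>
        <!-- The redirect issues a fresh GET for page2.xhtml, so any global
             messages queued while handling page1.xhtml are already gone
             by the time page2.xhtml renders. -->
        <redirect/>
    </navigation-case>
</navigation-rule>
```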


Hibernate Embeddable Objects

Hibernate Embeddable Objects are a really neat way to organize your data model, especially if you have the same few columns in a number of different tables that all pertain to the same thing. The example commonly used is addresses. You may have a number of tables that each have fields pertaining to address information, and you don’t want to have to do all the mappings for each entity again and again. If the column names are the same across each table, you can just add an @Embeddable annotation.
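As a minimal sketch of the address example (the class and field names here are my own illustration, not from the post):

```java
import javax.persistence.Embeddable;
import javax.persistence.Embedded;
import javax.persistence.Entity;
import javax.persistence.Id;

// Address.java: a reusable value type, not an entity of its own.
@Embeddable
public class Address {
    private String street;
    private String city;
    private String state;
    private String zip;
    // getters and setters omitted for brevity
}

// Customer.java (separate file): the address columns are stored
// directly in the customer table itself.
@Entity
public class Customer {
    @Id
    private Long id;

    @Embedded
    private Address address;
}
```

Any other entity (a Vendor, a Warehouse, and so on) can embed the same Address class and pick up the same four columns without repeating the mapping.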

Monte-Carlo Localization in a Nutshell

If you’re a nerd like most of us here at Source Allies, you probably think robots are cool. One of the most important parts of robotics is teaching the robot to find its location on a geographic map – a process known as “localization.” One algorithm for solving this problem is known as Monte Carlo Localization. When talking about this algorithm, we typically use a notion commonly referred to as “particles.” These particles can be thought of as virtual manifestations of the robot within the computer: postulations about the robot’s location and orientation on a geographic map, along with how certain we are about that information. With that in mind, the Monte Carlo Localization algorithm in plain English is as follows:

  1. Initialize a set of particles (or beliefs about the robot’s location). Depending on the problem you’re trying to solve, the initial set can either be random or already localized.
  2. Gather data about the physical environment by interacting with it (taking in sensor information, moving around).
  3. Look at each particle in the set and assign it a weight based on how well that particle fits the data gathered in step 2. Basically, our certainty that a particle actually represents the robot’s location and orientation determines its “weight.”
  4. Create a new set of particles by resampling from the particles with greater weights. This is sort of Darwinian survival of the fittest: particles with higher weights repopulate the set for the next round.
  5. Replace the old set of particles with the new set and start again at step 2.

So that was a very basic, watered-down version of the algorithm that omits many important statistical calculations, but hopefully it gets the main idea across.
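To make the loop above concrete, here is a minimal sketch in Java (my own illustration, not code from the original post). It localizes a robot on a tiny one-dimensional map where the only sensor reading is whether the current cell is in front of a door; the map, motion, and sensor model are all made-up assumptions chosen to keep the sketch short.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class MonteCarloLocalization {
    static final Random RNG = new Random(42);

    // A toy one-dimensional map: true means "this cell is in front of a door".
    static final boolean[] MAP = {false, true, false, false, true,
                                  false, false, false, true, false};

    public static void main(String[] args) {
        int n = 1000;

        // Step 1: initialize particles uniformly at random over the map.
        List<Integer> particles = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            particles.add(RNG.nextInt(MAP.length));
        }

        int robot = 0; // the true position, which the filter never sees directly
        for (int step = 0; step < 5; step++) {
            // Step 2: move one cell to the right and sense whether we see a door.
            robot = (robot + 1) % MAP.length;
            boolean sawDoor = MAP[robot];

            // Step 3: apply the same motion to every particle, then weight each
            // particle by how well it explains the sensor reading.
            double[] weights = new double[n];
            for (int i = 0; i < n; i++) {
                int p = (particles.get(i) + 1) % MAP.length;
                particles.set(i, p);
                weights[i] = (MAP[p] == sawDoor) ? 0.9 : 0.1; // crude sensor model
            }

            // Steps 4 and 5: resample a new set in proportion to the weights.
            particles = resample(particles, weights);
        }

        System.out.println("true position: " + robot);
        System.out.println("sample of surviving particles: " + particles.subList(0, 10));
    }

    // Draw a new particle set where each survivor is picked with probability
    // proportional to its weight (the "survival of the fittest" step).
    static List<Integer> resample(List<Integer> particles, double[] weights) {
        double total = 0;
        for (double w : weights) {
            total += w;
        }
        List<Integer> next = new ArrayList<>(particles.size());
        for (int i = 0; i < particles.size(); i++) {
            double r = RNG.nextDouble() * total;
            int j = 0;
            while (j < weights.length - 1 && r > weights[j]) {
                r -= weights[j];
                j++;
            }
            next.add(particles.get(j));
        }
        return next;
    }
}
```

After a few iterations, the surviving particles pile up on the cells whose door pattern matches the sequence of readings, which is exactly the clustering behavior the example below describes.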

In case there is any confusion, let’s walk through an example. Let’s say I am a robot. For obvious reasons, I was kidnapped by some ninjas. The ninjas then released me somewhere in downtown Des Moines, and at first I have no idea where I am. Fortunately, I am a robot and have a perfect map of downtown Des Moines, so I initialize in my head a set of postulations about my position and orientation. These beliefs are completely random, distributed fairly evenly over all of downtown Des Moines, and all have different random orientations.

The first thing I do is open my eyes and look around; perhaps I’ll take a few steps in some direction and continue gathering visual information. I see a Smokey D’s. Then I look at all the particles in my set and determine which of them would also see a Smokey D’s. I decide that those particles are better than all the others and assign them greater weights. Then I go through my set of particles again and do some ninja statistics that I learned while I was kidnapped to decide which particles make it to round two and which do not. Since there are two Smokey D’s on my map, what I’d end up with is two clusters of particles around those two points, appropriately titled “A” and “C.”

[Figure: clusters of particles around points A and C on the map]

Then I’d go back to step 2, interact with my environment some more, and find out that I’m actually indoors, which fits better with the particles around A – the Smokey D’s in the skywalks. Perhaps I’d repeat from step 2 a few more times and eventually weed out particles until the full set of particles sits in a very similar spot around the Smokey D’s in the skywalks. Then I know where I am, and I’m a happy robot.