In a previous post, I mentioned that in order to have a successful DevOps experience, there were some key components and principles that need to be implemented. In this post, I’ll cover those components in more detail.
What I want to cover in this post is my experience transitioning from a traditional development role to DevOps and what I found useful in that transition. One of the nice things about DevOps is that it pushes developers to take more ownership of their application: they live through the pains and difficulties of running it, which in turn pushes them to make the app easier to run.
Wikipedia defines DevOps as:
DevOps (a clipped compound of “development” and “operations”) is a software development method that stresses communication, collaboration, integration, automation, and measurement of cooperation between software developers and other information-technology (IT) professionals.
I like to simplify this definition by saying that DevOps means you are responsible not only for developing the application but also for running and supporting it in your testing and production environments. This is in contrast to the traditional model, where you have the luxury of developing the application and then throwing it over the wall to the Ops team.
Amazon just announced general availability of their Elastic Container Service, providing a platform for launching Docker images in the cloud. Let’s say your team is developing software on Windows and Mac OS X, but Docker requires Linux kernel features such as namespaces and cgroups to work. By now, you have likely discovered that Vagrant and/or boot2docker provide nice ways to run Linux on your local PC or Mac as a Docker deployment platform.
But with so many options for configuring how your Docker containers talk to each other, how do you get started? In this article, we will look at a basic set of containers needed to stand up your own Docker registry (a must if you want to share your images somewhere other than the public docker.io or the paid private quay.io) and examine four different ways to launch your containers:
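The simplest launch method, a plain `docker run` from the command line, gives a taste of what standing up a registry involves. The following is a hedged sketch, not this article's exact setup: `registry:2` and port 5000 are the official registry image's defaults, and `my-app` is a made-up image name.

```
# Pull and run the official registry image, publishing its default port.
docker run -d --name registry -p 5000:5000 registry:2

# Tag a local image against the new registry and push it there.
docker tag my-app:latest localhost:5000/my-app:latest
docker push localhost:5000/my-app:latest
```

With the registry running, any machine that can reach it over the network can pull those images back down with `docker pull`.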
ActiveMQ is a great messaging broker. However, using the default configuration is not recommended. This article will explain how I determined the appropriate ActiveMQ memory settings for one of our clients.
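The settings in question live in the nested `<systemUsage>` element of `activemq.xml`. Below is a hedged sketch of its shape; the 64 MB / 10 GB / 1 GB limits are illustrative placeholders, not the values determined for this client.

```xml
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <!-- Heap available for in-flight messages before producers are throttled. -->
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
    <storeUsage>
      <!-- Disk space for the persistent message store. -->
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <!-- Disk space for non-persistent messages spooled under memory pressure. -->
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```

The right numbers depend on message volume, message size, and how far consumers can fall behind producers, which is exactly what had to be measured.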
What is load balancing?
Load balancing is the practice of distributing a workload across multiple computers for improved performance. Depending on the algorithm used, work is spread among resources so that no single resource is overloaded and each can perform better. Network traffic, SSL requests, database queries, and even hardware resources such as memory can all be load balanced. The practice is common in server farms, where multiple physical boxes are coordinated to fulfill the requests of many end users.
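As a toy illustration of the simplest such algorithm, round-robin, here is a minimal POSIX shell sketch that hands out backends from a pool in strict rotation (the addresses are made up for the example):

```shell
# Pool of backend servers (hypothetical addresses).
servers="10.0.0.11 10.0.0.12 10.0.0.13"
i=0

# Round-robin: each call puts the next server, in rotation, into $SERVER.
next_server() {
  set -- $servers          # load the pool into positional parameters
  idx=$(( i % $# + 1 ))    # 1-based index of the next backend
  i=$(( i + 1 ))
  eval "SERVER=\${$idx}"
}

next_server; echo "$SERVER"   # 10.0.0.11
next_server; echo "$SERVER"   # 10.0.0.12
```

Real load balancers layer health checks, weighting, and session affinity on top of this basic rotation, but the core idea is the same.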
If you’re like me, receiving 30-40 emails is par for the day. Because Source Allies provides consulting services for companies wishing to implement or better take advantage of Zimbra, it is also the mail server we use at our company. Zimbra has incredible search capabilities, but my OCD tendencies still require that my email is nicely filed away in its designated folder. However, if statisticians say we spend an average of three years of our lives waiting at red stop lights, I certainly don’t want to spend that much time, or more, dragging emails from my inbox into my IMAP folders.
This blog post lets you manage all of your email in OCD detail with just two keyboard shortcuts: u and s.
- Linux server with NFS (or compatible)
- TFTP server
- DHCP server
- syslinux / pxelinux files
To simplify these instructions, we are going to make the following assumptions:
- DHCP server is 10.0.0.2
- TFTP server is 10.0.0.3
- NFS is an Ubuntu server at 10.0.0.4
In reality, your TFTP and NFS servers will likely be the same machine; however, referring to each by a distinct IP should make these instructions easier to follow.
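Given those addresses, the PXE-relevant portion of the DHCP server's `dhcpd.conf` might look like the following sketch. The subnet and address range are illustrative; the PXE essentials are `next-server` (the TFTP server) and `filename` (the pxelinux boot loader).

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;

  # Point PXE clients at the TFTP server and the pxelinux boot loader.
  next-server 10.0.0.3;
  filename "pxelinux.0";
}
```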
OpenSolaris has, by far, one of the best service management interfaces I have used. Below I will go over a simple way to turn a shell script into a service managed by the OS.
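The heart of such a setup is an SMF manifest. Here is a minimal hedged sketch for a hypothetical script at `/opt/scripts/myservice.sh`; the service name, category, and paths are made up for illustration.

```xml
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="manifest" name="myservice">
  <service name="site/myservice" type="service" version="1">
    <!-- Start the service automatically once imported. -->
    <create_default_instance enabled="true"/>
    <single_instance/>
    <exec_method type="method" name="start"
                 exec="/opt/scripts/myservice.sh start" timeout_seconds="60"/>
    <!-- ":kill" tells SMF to terminate the process contract on stop. -->
    <exec_method type="method" name="stop"
                 exec=":kill" timeout_seconds="60"/>
  </service>
</service_bundle>
```

The manifest is then loaded with `svccfg import myservice.xml`, after which the service can be controlled with `svcadm enable site/myservice` and inspected with `svcs -l site/myservice`.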
Anyone who has worked with Perl is probably familiar with CPAN.pm, the bundled module that handles downloading and installing modules from the CPAN repository. It usually works flawlessly, but I’ve noticed that on OpenSolaris the process can be a bit spotty.
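When CPAN.pm misbehaves, one common first step is to re-run its mirror configuration rather than relying on autodetection. A hedged sketch of that workflow:

```
# Start the interactive CPAN shell.
perl -MCPAN -e shell

# Then, at the cpan> prompt, re-pick download mirrors explicitly:
#   o conf init urllist
#   o conf commit
```

Whether this resolves the OpenSolaris flakiness depends on the failure; the specifics are covered below.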
I have been seeing this error from Tomcat every time I reboot my Windows development box:

“Port 8080 required by Tomcat v6.0 Server at localhost is already in use. The server may already be running in another process, or a system process may be using the port. To start this server you will need to stop the other process or change the port number(s).”
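On Windows, the process holding the port can be tracked down from a command prompt. This is a general sketch, not necessarily the fix described below; the PID 1234 is just a placeholder for whatever `netstat` reports.

```
REM Find the PID listening on port 8080 (last column of the output).
netstat -ano | findstr :8080

REM Identify, then kill, the offending process (1234 is a placeholder PID).
tasklist /FI "PID eq 1234"
taskkill /PID 1234 /F
```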