Articles: Deployment, InstaDMG, OS X | May 21, 2013, 5:49 am

Deployment: A Pedagogical Manifest(o)

Last Fall, a discussion arose on the MacEnterprise list about deployment terminology, with the term “thin imaging” coming in as the winner for variety of (conflicting) definitions. (You can read the thread from the beginning or jump on when the discussion turns to terminology.) I was starting to prepare my session for MacIT 2013 at the time and it made me think more deeply about how we talk about what we do. Much of our deployment terminology is anchored in the past by relating everything to “imaging.” That is a useful point of reference for those transitioning to more modern methods, but I would argue that we have reached the level of maturity in our MacAdmin community where we need to start taking care of pedagogy. We need to define what we are talking about in a way that is consistent and literate — that both experienced and neophyte MacAdmins can understand. This article is meant as a starting point for that pedagogy.

I suspect some of you will not have the appetite to read a 3500-word treatise on terminology. You may be thinking things like, “I already know that, [derogatory reference],” or, “sounds like this could be pretty dry,” or “I’m not seeing any helpful tips that will make my job easier.” You’re probably right — essentially, I am documenting the deployment process from scratch. I give you permission to skip ahead to the Big Picture section or even the Glossary at the very end and see if you can live with the terminology I’ve come up with. Just promise me you won’t put anything in the comments (and I hope there will be comments) until you go back and read the how and why behind those conclusions.

Finally, I want to make it clear that I am limiting my scope to deploying Macs, even though some of the same principles might apply to deploying iOS devices or computers that run other operating systems.

From Unknown to Known to Desired

The first thing to define is deployment itself. While it is generally agreed that we are talking only about installing software and making other alterations to the file system, people often use deployment and imaging interchangeably. This conflation is inaccurate when describing more modern deployment methods. So let’s avoid the word imaging in our definition and look at it more holistically. When we deploy a Mac, we are taking its boot drive from its current state and moving it into a desired state. That current state could be unknown (e.g., a shared lab machine with abandoned user files), known but not bootable (e.g., a new blank hard drive), or known and bootable (e.g., a new Mac out of the box). We can simplify this a bit by treating unknown and known but not bootable the same way, since both require some sort of processing before they can move to a known bootable state. We can also acknowledge that somewhere in the deployment process, regardless of how we do it, the machine will be in a known bootable state.

So I like to say we are taking the machine from an unknown state to a known bootable state to a desired state, or unknown to known to desired for short. Sometimes we won’t need that unknown-to-known stage at all, or we’ll need to do very little to get to the known state, but we will pass through a known state at some point in the process. Moving from known to desired could be as little as setting the machine’s sharing name or as much as installing every application and setting that will be used on that machine. The path we take to get to that desired state is a deployment workflow.

So What’s Imaging, Then? (Workflow #1)

Imaging a machine, defined in this context, is changing a boot drive from an unknown state (or any other state we are willing to erase) to a known bootable state by erasing the target drive/volume and copying a bootable file system onto the now-blank drive/volume. The source is almost always a disk image these days. Some people affectionately call this process “nuke and pave.” I’m intentionally going to dodge the question of types of imaging (e.g., monolithic, thin) for now. Let’s just agree that whatever qualifier we apply, this is what we mean by imaging a machine. This definition also implies that imaging per se is not the only way to get a machine from an unknown state to a known bootable state.
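As a sketch of that nuke-and-pave step, the commands below show how Apple’s asr tool can erase a target volume and block-copy a prepared image onto it. The image and volume paths are assumptions for illustration, and the script only prints the command rather than executing it.

```shell
#!/bin/sh
# "Nuke and pave" sketch. SOURCE and TARGET are illustrative assumptions;
# the image must have been scanned with `asr imagescan` beforehand.
SOURCE="gold.dmg"
TARGET="/Volumes/Macintosh HD"

# Print rather than execute, so the sketch is safe to run anywhere;
# drop the `echo` to perform the restore for real (as root).
run() { echo "+ $*"; }

# --erase wipes the target volume before the block copy begins.
run asr restore --source "$SOURCE" --target "$TARGET" --erase
```

Run for real, this single command is the entire “imaging” stage as defined here: everything on the target volume is replaced by the contents of the image.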

Confusing matters further, disk image creation is related but different. If you are going to use imaging to go from unknown to known, you need to create an image somehow. A common method has been to manually install all the software required onto a “Golden Machine,” tweak that machine’s settings as needed, and then copy that Golden boot drive to a disk image using tools like Carbon Copy Cloner or DeployStudio. (I’m not going to discuss the pros and cons of methods — remember, this discussion is about pedagogy, not best practices.) More recent disk image creation techniques centre around automated installation of software directly onto a disk image with tools like InstaDMG and System Image Utility’s NetRestore function.
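To make the Golden Machine capture route concrete, here is a minimal sketch using Apple’s built-in tools rather than Carbon Copy Cloner or DeployStudio. The volume and output paths are assumptions, and the script prints the commands instead of running them.

```shell
#!/bin/sh
# Image-creation sketch: capture a prepared "Golden" volume to a compressed
# disk image, then scan it so asr can block-restore it later.
# GOLD_VOLUME and IMAGE are illustrative assumptions; a real capture of a
# bootable system would typically be done with a dedicated cloning tool.
GOLD_VOLUME="/Volumes/Gold"
IMAGE="gold.dmg"

run() { echo "+ $*"; }  # print only; drop the echo to execute for real

# Create a compressed (UDZO), read-only image from the Golden volume.
run hdiutil create -srcfolder "$GOLD_VOLUME" -format UDZO "$IMAGE"
# Prepare the image for block restore (asr requires a scanned image).
run asr imagescan --source "$IMAGE"
```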

Until the release of Lion, the most common path to get from unknown to desired would have been to create an image (regardless of method), deliver that payload through block copying to the target computers and then make the necessary tweaks for each station at the end (manually and/or with automated tools like DeployStudio or first-launch scripts). So that’s our first deployment workflow, where imaging is still king.

Customizing

By my definition, the “imaging” part of that process ended when the block copying was done, as it was in a known bootable state at that point (often very close to the desired state). So to make things a little easier to discuss, I describe the stage that gets us from known to desired as customizing the machine. This is the part of the process where you make machine-specific changes, including adding software that might not be part of the main payload delivered in the previous stage. Anything that is installed by a first-boot script, a software update mechanism (e.g., Software Update, Munki), or by manual means (including via Apple Remote Desktop) is part of the customizing stage by this definition.
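As a tiny example of the kind of machine-specific change a first-boot script might make, the sketch below derives a sharing name from a serial number and prints the scutil commands that would set it. The “LAB-” naming scheme and the hard-coded serial are assumptions; on a real Mac the serial would be read from ioreg and the commands run as root.

```shell
#!/bin/sh
# First-boot customizing sketch. The "LAB-" prefix and the serial value are
# assumptions; on a deployed Mac the serial would come from:
#   ioreg -l | awk -F'"' '/IOPlatformSerialNumber/ {print $4}'
SERIAL="C02XXXXXXXXX"
NAME="LAB-$SERIAL"

run() { echo "+ $*"; }  # print only; drop the echo to apply the settings

# Set the names the machine advertises and uses on the network.
run scutil --set ComputerName "$NAME"
run scutil --set LocalHostName "$NAME"
```

In practice such a script would be triggered once by a LaunchDaemon (or by a deployment tool like DeployStudio) and then remove itself.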

The trend recently has been to do most of the work in the customizing stage, since we can often use standard packages and drag-installed applications without modifying or repackaging. Munki is arguably the most evangelized tool by those using this methodology, although DeployStudio will also do the initial installations for you.

Another Path From Unknown to Desired (Workflow #2)

So that leads us to another common path through the deployment process: we create a (usually smaller) image, block-copy that image to the target machines, then have a number of packages installed (and settings set via script) before and/or after the machine is first booted. This happens to be the workflow I used in January when I did the most recent software redeployment in my Labs. I used InstaDMG to build an image that held about four-fifths of the 100 GB payload that needed to be delivered. I had DeployStudio erase the target volume and block copy the image. I also had DeployStudio do most of the customizing work, which included setting the sharing name & IP address, delivering local MCX settings, and installing a number of apps upon first boot, including Adobe Creative Suite (using an AAMEE-created package). The handful of stations that needed a specialized app (e.g., scanning software) were handled manually (although automation would have been possible). So that’s our second workflow, which some call a hybrid deployment, where we use imaging to get to a known bootable state but layer on more software in the customizing phase.

The Shortest Path (Workflow #3)

In rethinking our deployment methodologies, many have questioned why we “nuke and pave” machines that are brand new in the box. If we have built good customization routines, we can skip the “imaging” and just install the software (and user accounts) that we need. This gives us our third path or workflow: start at known and just customize. When you think about it, this is how virtually every individual Mac user sets up their new machine (übergeeks excluded): they start up their machine (from a known bootable state), the setup assistant takes them through the key settings, they transfer data and apps if appropriate, establish user account(s), and they can manually adjust or install whatever they like once they are dropped into the Finder. As MacAdmins, it makes sense to mimic that normal installation process as closely as possible to eliminate problems associated with other methods, but automate it to provide consistency and efficiency.

Install Rather Than Image (Workflow #4)

Once this idea of installing rather than imaging started to take hold in our community, the next logical question to ask was: can we eliminate imaging on machines that are not fresh out of the box? Clearly, the answer is yes, as that’s how Apple reinstalls an OS from a Recovery partition in Lion and later. This is why I said earlier that imaging is not the only way to get a machine from an unknown state to a known bootable state. This gives us our fourth deployment workflow. We can generalize and say that, from the deployment perspective, going from unknown to known is about delivering a bootable OS, whether we copy it or install it. If we copy it, we need to create an image beforehand. If we are installing it, we can use Apple’s OS X installer if we want to do it manually, or we can convert that installer (Lion or later) to a package (e.g., using CreateOSXPackage) and install it with a tool like DeployStudio. The only variation would be whether we erase the target volume first (thus mimicking a “fresh out of the box” experience) or whether we apply the OS install directly onto the existing system as an upgrade. This latter variation would require having a system in a sort-of-known bootable state, so it behaves like the fresh-out-of-the-box workflow but is configured the same as if the volume was blank. Pedagogically, I’ve categorized it as the former, but that’s open for debate.
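A minimal sketch of the erase-and-install variation, assuming the OS installer has already been converted to a package: erase the target volume, then install the OS package onto it. The volume and package names are assumptions, and the script prints the commands rather than running them.

```shell
#!/bin/sh
# Erase-and-install sketch. TARGET and OS_PKG are illustrative assumptions;
# OS_PKG would be produced from the OS X installer ahead of time.
TARGET="/Volumes/Macintosh HD"
OS_PKG="InstallOSX.pkg"

run() { echo "+ $*"; }  # print only; drop the echo to execute for real

# Erase the target volume first (mimicking "fresh out of the box")...
run diskutil eraseVolume JHFS+ "Macintosh HD" "$TARGET"
# ...then install the OS onto the now-blank volume.
run installer -pkg "$OS_PKG" -target "$TARGET"
```

Skipping the first command gives the upgrade-in-place variation described above.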

The Big Picture

Based on these workflows, we can now describe a general path that every software deployment follows, regardless of how much is automated and how much is done manually. I’ve pictured this in the diagram below.

Deployment Overview

Here it is in prose: As a necessary adjunct to moving from an unknown state to a known bootable state, we need to create or collect a bootable operating system. This can take the form of a disk image (made with modular tools or by capturing an existing volume) or an OS installer (modified slightly to allow for automation). This payload may contain software and settings in addition to just the OS. Once we have that in hand, we can move any number of machines from unknown to known by delivering that payload to each machine, usually erasing the target volume just prior to delivery. This brings the machine into a known state. From here, we can customize the target machines in a variety of ways, including installing software, to get them to the desired state. A machine that comes to us already in a known state (e.g., new in the box) can simply be customized, although some may still choose to treat it as if it were in an unknown state.

Keeping machines current (i.e. patch and upgrade management) is essentially perpetual customization, since it is accomplished using many of the same tools as the customizing stage (e.g., Munki, Apple Remote Desktop, manual installation). It also explains why the deployment load is gradually moving away from imaging and towards customizing — once you’ve got customizing automated, your ongoing management may already be automated (e.g., using Munki for both).

Naming Common Workflows

Now that I have laid out a pedagogical description of deployment, let me try to define (or clarify) how we might use existing terminology in a more consistent way that fits this model, since most of our current terminology is workflow-centric, not process-centric.

We defined imaging earlier, but I did not discuss the different types of imaging, which are directly tied to the type of image we create. When we mention monolithic imaging, we are generally referring to creating and delivering a disk image that gets us as close to the desired state as possible. We will still need to individualize some settings like the sharing name (which we might even automate using a tool like DeployStudio), but we probably won’t be running first-boot scripts or having our deployment tool install dozens of packages after the fact (with the possible exception of pushing out patches because the image is no longer current). I understand that some people use the term monolithic for any type of image with more than just a base OS on it, but I think what I’ve described is the most common definition.

As I mentioned off the top, thin imaging is the most disputed term in use. Some mean that the image created is the OS and a few apps (not the whole payload), some mean that the image has the OS and the absolute minimum required to kickstart the installation of software in the customizing phase, and yet others mean the OS is installed directly using an OS X Installer package. Probably the best thing we could do is to drop the term thin imaging altogether, no matter how evocative it seems to be.

We certainly need another term for that first case, where all the machines need a certain base set of software, but, for example, the graphics department needs Creative Suite/Cloud, the administrative offices need FileMaker Pro, etcetera. So if you are creating an image (whether modularly or by Golden Master), you put the software that everyone needs on that image and then push out the other software in the customizing phase. Thinking back to grade school math, I call this kind of image a Lowest Common Denominator Image, or Common Image for short. Even if your image has less than the full payload for other reasons (e.g., Adobe’s AAMEE package doesn’t play well with InstaDMG, so you choose to post-install it), this is a term that makes sense. It doesn’t describe the heft of the image — one person’s common image could be bigger than another’s monolithic one — it describes the intent of it. An image with not much more than the OS is still a common image, so we need never describe imaging as “thin” if we choose. You might be better served by calling it a “minimal common image” or “small common image” if you feel you need to describe the lack of heft. As Snow Leopard and its predecessors gradually leave our deployed fleet of machines, we will probably not be generating many more minimal/thin images in any case, so hopefully the question will become moot.

That suggests we need new terminology for the remaining two workflows: installing the OS (generally using CreateOSXPackage or NetInstall) followed by installing software in the customizing stage; and customizing an out-of-the-box Mac through software installs only. Greg Neagle likes to call both of these “No Imaging” — he posits that you shouldn’t use the word “image” if there is none involved. (This call to literacy is yet another reason to drop the “thin” moniker from our vocabulary.) For lack of better terminology, I refer to these two workflows as Erase and Install and Customize Only respectively (I could be persuaded to use Install Only for the latter).

Glossary

Let me sum up by collecting the terms I used/re-used/coined in this article for easy reference. I encourage you to debate (and improve!) these suggested definitions in the comments. Hopefully, then we can come to some level of consensus about how we talk about deployment so that we can understand each other better and bring new MacAdmins up to speed more quickly.

Deployment
The process of taking a Mac’s software and settings from an unknown state or known bootable state to the desired state.
Deployment Workflow
A particular path or set of steps taken to deploy a Mac (i.e., to get it to the desired state).
Unknown State
Describes the status of a Mac’s boot volume/drive where the deployment administrator cannot be certain that a Customize Only workflow will bring the boot volume to the desired state. Successful deployment workflows from this state will require the erasure of the boot volume/drive. Note: Can also refer to a boot volume/drive in any state if the deployment administrator chooses not to consider whether the boot volume is in a known bootable state or not.
Known State
Short for Known Bootable State, this describes the status of a Mac’s boot volume/drive when it is ready to be customized. The Mac is bootable at this point and any dependencies related to the customizing stage of the workflow have been dealt with. Every deployment workflow will either start at or pass through this state.
Desired State
When the software on a particular Mac is ready for the end user(s) according to the deployment administrator.
Imaging
The process of taking a Mac boot drive/volume from an unknown (or any) state to a known bootable state by erasing the drive/volume and block-copying a disk image (or volume) with a bootable system onto that volume. Colloquially known as “nuke and pave.”
Customizing
The process of taking a Mac boot volume from a known bootable state to the desired state through the installation of packages and other payloads.
Image Creation
The process of creating a disk image to be used in the imaging process. Such an image can be created modularly (e.g., with InstaDMG) or can be captured from an existing bootable volume (e.g., with DeployStudio).
Monolithic Imaging
The process of creating and delivering a disk image that brings the target disk/volume as close to the desired state as possible. Such a deployment workflow would have a minimal customization stage, limited to station individualization (e.g., sharing name) and perhaps some recent patches/updates. An image created for this purpose is often referred to as a Monolithic Image. Note: Since a Monolithic Image is also a kind of Common Image, it is possible that this term may become deprecated in the future.
Common Image
Short for Lowest Common Denominator Image, this refers to an image of any size used in deployment that has a bootable system and any other applications, files and/or settings the deployment administrator may wish to include. Note 1: While this definition encompasses every deployment image type, the term Monolithic Image/Imaging has been retained as defined above for that specific case and is deprecated for other uses. Note 2: Modifiers can be added to describe the size of the image (e.g., Minimal Common Image as a replacement for Thin Image).
Thin Image/Imaging
These terms are deprecated due to the conflicting ways they are used in the MacAdmin community. The terms Common Image/Imaging, Erase and Install, and Customize Only should be used instead as applicable.
No Image/Imaging
Refers to deployment workflows that do not copy the desired OS from a disk image or volume. Rather, the currently installed OS is accepted or updated, or a new OS is installed in its place. (See: Erase and Install, Customize Only.)
Erase and Install
A deployment workflow where the operating system and all software, accounts and settings are installed on a blank volume (erased as necessary). The installation of the OS brings the machine to a known bootable state and the customization stage moves the machine to the desired state.
Customize Only
A deployment workflow, most commonly used on machines that are just out of the box, where a bootable volume is already in a known state and is moved to the desired state by installation of software, accounts and settings. No imaging is done; the entire payload is delivered in the customizing stage.

Special thanks to everyone who contributed to the discussion on the MacEnterprise list in October 2012, particularly Greg Neagle and Nate Walck, who have obviously done some thinking about this. The reference from the Dark Side cited by William Smith was also useful. I know that the terminology I use changed due to that discussion.

About Anthony Reimer

Anthony has been supporting Macs at the University of Calgary (Canada) since 1996, specifically the labs for the visual and performing arts. He is a musician and teacher by training, using those skills to conduct a community concert band and to give presentations on technology.

3 Comments

  • Great article!! I’m still doing the golden master image. I find it easier at our environment along with pushing it out with Deploy Studio and a few scripts for renaming, etc. Then Apple updates happen automatically after I ok them in Apple SUS every Friday and build packages and deploy with Apple Remote Desktop if needed. Have other options like Munki, etc, but to me the command line is for scripting, etc, not for running apps to create things and running clunky commands, etc. The thin imaging or common imaging I can see in large environments, but I really have no use here.

  • Anthony, thank you for this article, very informative, and this very topic, has been on my mind for a while. I’ve been fighting it, and part of me doesn’t want to give up my golden master image. With that said, this is definitely the future of computer setups, and where my company wants to go. I just need to buy into more, and figure out the best solution that gets the Macs out of my office, and into the hands of my clients, as fast as possible.
So, one of my concerns is speed. With both scenarios, Erase and Install, and Customize Only, machine setup will take longer. I can image a Mac in under 10 minutes with DeployStudio, and pretty much bind, and deploy it. Pushing packages, and customization scripts, even if everything is automated, is going to push that 10 minutes back a bit. CS6 package alone takes 10-15 minutes to install. I plan to test out my roadmap, and see how long the Customize Only will really take. Any feedback would be appreciated.

    Thanks,
    Peter

  • A bit of a response and extension of these ideas…
    http://themacadmin.com/?p=770
