Catching Up
When we last saw our subject matter experts in 2022(!), we were discussing how to craft queries so that compliance objectives could be reached with Zentral’s ‘checks’, which have since expanded to include scripted results from any interpreter you can target when wrapping a Munki run.

For folks behind the times (a lot has changed since then!), not only did Zentral (the company) start hosting SaaS Zentral (the service) for customers, they also shipped the first vendor-supported interface for voting on Santa rules (as introduced at MacSysadmin.se and on YouTube). All of the long-common moving parts of managing the config of osquery/Santa/Munki and MDM are of course configurable via a first-party-supported Terraform provider. There’s even a vendor-maintained best-practices ‘starter kit’ repo of Terraform configs so that the kind of ‘rough around the edges’ doc story for Zentral can have its gaps smoothed over – what Mac Admins tend to want to accomplish across various verticals and environments is relatively similar, and hopefully it seems intuitive once you can view a proven, working example.
About that Terraform & GitOps Goodness
We’re going to discuss the concept of blueprints and artifacts in MDM as a parallel to how the Terraform provider also manages Munki (sub)manifests, separately from the actual repo where pkginfo files are meticulously maintained with their corresponding catalog references – the generically (if somewhat misleadingly) named InstallEnterpriseApplication packages that Zentral’s MDM pushes can (and should) also be stored in your Munki repo. For those familiar with Puppet’s configuration-as-code model of a ‘control’ repo, we call the repo containing the .tf files in the ‘starter-kit’ model mentioned above the ‘zentral_config’ repo. How a rollout moves through the lifecycle of software in Munki, MDM configs, and the ‘DEP bootstrap’ package (with Outset, swiftDialog, our management Python, etc. baked in) when practically managed in Zentral via GitOps could use some explanation, so we’ll expand on that to introduce practical workflows and hopefully make the moving parts more approachable.
Pre-requisite Learnings
You’ll want to understand Munki and MDM concepts like pkginfos/manifests/catalogs and mobileconfig payloads/DEP/bootstrap packages. Zentral (like other MDMs) uses ‘blueprint’ as the specific term for a collection of packages and settings, and the components it contains are referred to as ‘artifacts’ (leaving aside how Declarative Device Management has its own specific moving parts and commands it can be responsible for). More esoteric and harder-to-generalize topics, like percentage-based rollouts driven by a cron job or an automated process such as CI/CD scheduling + runners, we’ll leave out; instead we’ll touch on two basic workflows: introducing a new Munki software ‘title’, with tags on a submanifest allowing you to target devices independently of testing tracks, and the same for MDM artifacts, with a side note about the dicey prospect of retiring old/unused components (vs. the relatively more straightforward path of retiring software titles/versions in Munki).
Zentral-Specific Munki
The ‘Monolith’ service (as opposed to the plain ‘Munki’ service, which lets you migrate whatever ‘bucket of files + auth’ setup you may have while still getting script checks and Zentral’s inventory visibility via its pre/postflight wrapping) is where you’d configure Manifests, which should stay generic/broad across use cases or your whole tenant/deployment. Catalogs Zentral compiles (it only needs to read in the ‘all’ catalog) get tags you designate your various testing tracks of devices with, which e.g. Okta push groups/SCIM can keep in sync. You’re encouraged to break specifics/relevant use cases down into submanifests, which can interact with tags so you only offer a subset of software to specific computers, or manage titles differently – this is where the rubber meets the road for designating software as e.g. managed/optional_install/updates. We even gate licensed software this way, and since every catalog/manifest is dynamically built per client, there’s no need for middleware gymnastics to limit visibility, nor worry that a plain ‘bucket of files’ server permits untracked/unauthorized changes or lets a client peer into the full contents of the repository. On top of that, Zentral generates shard values per item and per device, inspectable in a computer’s inventory details so you know which shard it was put in for a given push/version – nobody is ‘unfairly’ chosen to be bleeding-edge because a device-side shard value dumped them into the first bucket on every rollout.
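To make those moving parts a bit more concrete, here’s a minimal sketch of what that Monolith plumbing can look like in Terraform. The general shape follows the starter-kit, but the attribute names, the tag/track names, and the meta business unit reference are illustrative assumptions on my part – double-check against the provider docs before copying anything.

```hcl
# Sketch only: names and attributes are assumptions modeled on the starter-kit.
resource "zentral_tag" "testing" {
  name = "munki-testing" # hypothetical testing-track tag, kept in sync via Okta/SCIM
}

resource "zentral_monolith_catalog" "testing" {
  name = "testing"
}

resource "zentral_monolith_manifest" "workstations" {
  name                  = "workstations"
  meta_business_unit_id = zentral_meta_business_unit.default.id # assumed defined elsewhere
}

# Only computers carrying the testing tag get the testing catalog.
resource "zentral_monolith_manifest_catalog" "workstations_testing" {
  manifest_id = zentral_monolith_manifest.workstations.id
  catalog_id  = zentral_monolith_catalog.testing.id
  tag_ids     = [zentral_tag.testing.id]
}

# Use-case-specific submanifest, attached to the broad 'workstations' manifest.
resource "zentral_monolith_sub_manifest" "soe" {
  name = "standard_operating_environment"
}

resource "zentral_monolith_manifest_sub_manifest" "workstations_soe" {
  manifest_id     = zentral_monolith_manifest.workstations.id
  sub_manifest_id = zentral_monolith_sub_manifest.soe.id
}
```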
Most of the behavioral specifics that make Munki software management work live in a separate, designated repository. It can be Zentral-hosted as a ‘virtual server’, or you can even ask for patches to be provided and just distribute them if you’re hoping to rely near-entirely on Zentral for your Monolith service (or you can mix and match). If you’re comfortable with AutoPkg and cloud storage like an S3 bucket (e.g. paired with a CloudFront signing key), you have complete control over when versions are loaded in and how they’re made available. Other resources and presentations exist covering how that part of a Munki service is maintained and how it can be managed in git (+ git-lfs/git-fat, etc.).
When you’re ready to manage your software via a submanifest (like ‘standard_operating_environment’ to designate official apps, versus however else you might want to assign things to your overarching ‘workstations’ manifest), you’d use the zentral_monolith_sub_manifest_pkg_info resource type to map the submanifest, the name key in the applicable pkginfo file, installable conditions as applicable, and the managed/optional_install/update key’s value, among other options. This is also where you’d use array(s) of excluded_tag_ids or tag_shards to… exclude or include specifically-tagged computers, or gate rollouts accordingly, separate from the all-or-nothing inclusion in a submanifest or catalog access. Tagging at the submanifest level prevents warning messages about software being offered to a computer without being available in its catalog.
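As an example, offering a title as a managed install from that submanifest while excluding one tag and sharding the rollout to another might look roughly like this. The attribute names are my best reading of the provider’s schema (verify against the docs), and the ‘Firefox’ title and ‘opted_out’ tag are made up for illustration.

```hcl
# Illustrative sketch: confirm attribute names against the Zentral provider docs.
resource "zentral_monolith_sub_manifest_pkg_info" "soe_firefox" {
  sub_manifest_id = zentral_monolith_sub_manifest.soe.id
  pkg_info_name   = "Firefox"          # matches the 'name' key in the pkginfo
  key             = "managed_installs" # vs. optional_installs / managed_updates

  excluded_tag_ids = [zentral_tag.opted_out.id] # hypothetical opt-out tag

  # Gate the rollout: roughly half of the machines carrying the testing tag
  # (per-item, per-device shard value under 50 out of 100) get it first.
  shard_modulo  = 100
  default_shard = 0
  tag_shards = [ # may be expressed as nested blocks in some provider versions
    {
      tag_id = zentral_tag.testing.id
      shard  = 50
    }
  ]
}
```

Raising the shard toward 100 (or dropping tag_shards entirely) is then how you’d widen the rollout without touching the submanifest membership itself.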
You can add or remove software versions (paired with tag_shards/excluded_tag_ids) per push when a riskier version change should be rolled out more deliberately; Munki still does its job of always offering the latest version a catalog allows, and removing the sub_manifest_pkg_info resource means you can purge that software from your Munki repo at will. A very flexible system that just takes a few loosely coupled parts to orchestrate!
Hopefully that wasn’t too hand-wavy; feel free to ask for clarification or other examples in the MacAdmins Slack #zentral channel, but for now we’ll move on to MDM.
Packages and Configs over UDP, AKA MDM
Zentral’s Terraform starter-kit shows how you’d track plaintext XML mobileconfig payloads/files in git with no bother, meaning you can use output straight from ProfileConfigurator, iMazing, or even other MDMs (once you strip the signing/update fields as applicable). You can even turn dynamic values or (ahem) secrets into inline variables that Terraform inserts for you so they don’t get committed to git… not like that’s ever happened to us! (See the sketch below.)
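Here’s a minimal sketch of that variable trick using nothing but stock Terraform: the mobileconfig checked into the repo carries a placeholder, and templatefile() renders the real value at plan/apply time. The file path, variable name, and Wi-Fi use case are hypothetical.

```hcl
# The checked-in wifi.mobileconfig contains a ${wifi_psk} placeholder;
# the actual secret is supplied at plan/apply time and never lands in git.
variable "wifi_psk" {
  type      = string
  sensitive = true
}

locals {
  wifi_profile_source = templatefile(
    "${path.module}/mdm_artifacts/profiles/wifi.mobileconfig",
    { wifi_psk = var.wifi_psk }
  )
}
```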
Continuing with the starter-kit model, new mobileconfig files get added to a path/folder in your Terraform repo and referenced by a version-tagged zentral_mdm_profile resource, paired with a ‘container’ zentral_mdm_artifact that always maps to the (arbitrarily set by you) highest whole-integer profile version. You’d then ‘attach’ the zentral_mdm_artifact to a blueprint associated with enrolled computer(s) via the zentral_mdm_blueprint_artifact resource (which, echoing how the submanifest is the more appropriate place to do it in Munki, is where initial/applicable sharding belongs). New versions are then inherited automagically unless/until you say otherwise, keeping what you need to touch modular and separate. (There are even innovations found nowhere else, like re-delivering a payload to mimic the old-school ‘often’ MCX frequency, or after every OS update, or once only, which would be common for a bootstrap package – flexible!)
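Putting those three resources together, a sketch along starter-kit lines might look like the following. Treat it as illustrative: the blueprint resource is assumed to be defined elsewhere, and whether source expects raw XML or a base64-encoded payload depends on the provider version, so check the docs.

```hcl
# Sketch following the starter-kit pattern; attribute names are assumptions.
resource "zentral_mdm_artifact" "wifi" {
  name      = "Corp Wi-Fi" # hypothetical artifact name
  type      = "Profile"
  channel   = "Device"
  platforms = ["macOS"]
}

resource "zentral_mdm_profile" "wifi_v2" {
  artifact_id = zentral_mdm_artifact.wifi.id
  source      = local.wifi_profile_source # rendered in the earlier templatefile sketch
  version     = 2                          # bump this integer for each new revision
  macos       = true
}

# Attach the container artifact to a blueprint; newer profile versions are
# then picked up by enrolled devices without touching this resource.
resource "zentral_mdm_blueprint_artifact" "wifi" {
  blueprint_id = zentral_mdm_blueprint.default.id # assumed to exist elsewhere
  artifact_id  = zentral_mdm_artifact.wifi.id
  macos        = true
}
```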
Now for a weirder aspect of this config-as-code setup, perhaps most noticeable with the bootstrap package (but shared with mobileconfigs): Zentral won’t purge an artifact from the system, even if you stop pushing it completely, as long as one device has received it (and sometimes even after all devices have seemingly acknowledged a newer version and no longer have the old one). For cleanliness’ sake you can file everything but the most recent version into an ‘archive’ subfolder to make it clearer what the full set of applicable/latest versions is. Once it’s completely aged out and properly processed, the GUI will show that an artifact can be deleted (with the inline trash icon), and you can then confirm with a terraform plan that removing the applicable resources won’t cause errors or leave your Terraform state/queue of MDM changes out of sync. This is slightly more disjointed with the bootstrap InstallEnterpriseApp: it’s recommended to check the pkg itself into a dedicated folder in the Munki repo/cloud storage bucket (when applicable), since Zentral automatically gets access to it once that bucket is configured as a repository… you just need to bump versions and the s3:// file path in concert, and should only purge when Zentral truly believes the artifact is free to be released.
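For the bootstrap package specifically, a heavily hedged sketch of that ‘bump the version and the path together’ idea could look like this. The enterprise app resource’s attribute names and the s3:// path are guesses for illustration, not confirmed provider schema.

```hcl
# Purely illustrative: verify resource/attribute names against the provider docs.
resource "zentral_mdm_artifact" "bootstrap" {
  name      = "DEP bootstrap"
  type      = "Enterprise App" # exact type string may differ per provider version
  channel   = "Device"
  platforms = ["macOS"]
}

resource "zentral_mdm_enterprise_app" "bootstrap_v3" {
  artifact_id = zentral_mdm_artifact.bootstrap.id
  # The pkg lives in a dedicated folder of the same bucket as the Munki repo,
  # which Zentral can already read once it's configured as a repository.
  package_uri = "s3://example-munki-bucket/bootstrap/bootstrap-3.pkg" # hypothetical path
  version     = 3 # bump together with the package path
  macos       = true
}
```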
Otsukaresama (Good Work)! Besides That, How Was The Play, Mrs. Lincoln?
Maybe that was overly convoluted, and yet still too vague? I hope it was at least intriguing enough to help describe how Zentral once again sets what I’d say is the standard paradigm for how a management tool can aspire to offer true governance while empowering Mac Admins to get shit done. Its configs allow teams to responsibly manage thousands of computers and orchestrate changes accurately and intentionally. If you’ve experienced a certain vendor’s ‘distribute to all or NEW ONLY’ two-buttons-sweating meme, or whackadoodle :yolo: $latest implementations that don’t seem to be in use by actual customers at any kind of scale, this battle-tested way to do GitOps should hopefully be a real relief. Cheers ✊