Episode 332: GitHub Actions and Automating MDM Infrastructure

Automation is important for all of us, especially a margin business like an MSP. In today's episode we'll chat with Bobby Hardaway and Ryon Riley of Advisory Solutions to hear about how they automate Jamf infrastructure with GitHub Actions.


  • Charles Edge, CTO, Bootstrappers.mn – @cedge318
  • Marcus Ransom, Senior Sales Engineer, Jamf – @marcusransom


  • Bobby Hardaway, Sr Systems Engineer, Advisory Solutions – LinkedIn
  • Ryon Riley, Director of Technology, Advisory Solutions – LinkedIn



Please note that this transcript was generated automatically

Speaker 2 (00:01:18):
Hello and welcome to the Mac Admins Podcast. I'm your host today, Marcus Ransom, as Tom once again is yak herding. Those yaks are starting to require a little more effort than Tom ever really thought when he decided that was going to be his side hustle. So Charles, do you think you are going to move into yak herding at all anytime soon?

Speaker 3 (00:01:39):
No. No, I don't think I can move to Tibet right this second. So that's part of it. Or fly there with frequent flyer miles. But other than that, I mean, who doesn't want to herd yaks? Right?

Speaker 2 (00:01:57):
Well, maybe I'm just going to put this out there. Maybe something that, because this is starting to take up more and more of Tom's time, maybe something he can look into is some automation perhaps. So we know automation is important for all of us, especially a margin business like an MSP, which is herding a very different kind of yak. So in today's episode, we're going to chat with Bobby Hardaway and Ryon Riley of Advisory Solutions, and we're going to hear about how they automate Jamf infrastructure with GitHub Actions. So welcome to the podcast.

Speaker 4 (00:02:30):
Thank you. Nice to meet both of you as well.

Speaker 3 (00:02:34):
And have you guys tried to automate yaks with GitHub Actions? Just throwing that out there in case Tom, in case it can help. Tom, is there an API for that?

Speaker 2 (00:02:45):
Isn’t that what the Y in YAML stands for?

Speaker 3 (00:02:51):
There’s a great shearing my sheep joke that makes me think about yaks, but it’s a New Zealand joke and I don’t think, and I heard it from Duncan McCracken, so I don’t think it’s appropriate for the podcast. Yeah, probably not. We’ll leave that for another time. But one of the things that we will get into is we love to start episodes with a little bit of a background story. So how did each of you get into managing Apple devices? Sure. I’ll let Bobby go first.

Speaker 5 (00:03:19):
Okay, sure. So I think I started the way a lot of Mac admins do, as I've heard on the show quite a bit, which was sort of by necessity: no one else wanted to volunteer for being the Mac admin for our new Jamf rollout. And so I was working for the state of Delaware and their community college system, which spans the entire state. And like I said, we decided, hey, we've got about a thousand devices that are unmanaged or in various states of management, and our director wanted to get that wrapped up. So what I ended up doing is just saying, hey, I'll take this on in addition to my other responsibilities. And that's sort of how it started for me. Prior to that point, I was straight Windows endpoint and server management. That was my wheelhouse. And then here we go. I'm now a Jamf and Mac admin.

Speaker 5 (00:04:34):
After my role with the state of Delaware, I went to the University of Pennsylvania 10 days before the start of the pandemic. And I mention that because this is an organization, a very large, well-known research institution, that was not built with the idea of remote work. So my task within the first month was to figure out a way to turn a non-mobile workforce into a remote-capable workforce. So I spent a month using Jamf to really drive all of that, and it was very much a success. Next thing you know, I see a posting at Advisory Solutions for a senior engineer role, and that's where I am now. And we've been doing good work since,

Speaker 3 (00:05:42):
And just for the listener, since it was an awkward moment when I held up a shirt: I did a few semesters at Penn and I have a Penn shirt, and it happened to be sitting right beside me because I haven't folded the laundry that I washed.

Speaker 4 (00:06:00):
And as for me, I can basically say that I fell into it to some degree. When I first graduated college, I started working at an MSP with a very small Mac footprint. And the subsequent two companies that I worked for after that were also Windows-based MSPs with a small Mac footprint. I would say the management was very light. I wouldn't even call it management per se; it was more of a, we have an RMM solution that we're using for the Windows devices, let's kind of manage our Macs through that and do very basic things through the RMM. So it wasn't until 2019 that I actually started working for Advisory Solutions as a systems engineer, as a freelancer. And it wasn't until then that I actually started getting more into the actual management of Apple and Jamf in particular. So it's been a very fast adoption of learning Macs and learning how to manage them and all that stuff, but I really enjoy it and I find it interesting. So it was kind of an easy transition into that world.

Speaker 3 (00:07:10):
Nice. And for the listeners who maybe haven't used an RMM, or something that they would consider an RMM: at my old MSP, because I had one before I went to work at Jamf, we used Kaseya for a while and then we used Continuum. And the real difference between these RMMs and a Jamf is really the depth of management. Even 15 years ago, maybe I could send a script to a Mac through Kaseya; however, it was a basic bash script that I would concoct, and there wasn't that depth of management I think that we've grown to expect. And then comes along this whole MDM thing and it kind of semi-changed everything, ish. So I feel like that RMM thing is, and in last week's episode with Ben, we talked about bringing the Windows world into our way of doing things, but I definitely never felt like the Mac was, I hate to use a buzzy wordy type thing, but I never felt like the Mac was really a first-class citizen.

Speaker 3 (00:08:30):
It was like, with Kaseya, it was easy for me to say, oh, this registry hive, I want to add this key. And there was kind of a bit of depth, not as much as I'd have with, like, an SMS if I was to key in on that timeline, I guess. But yeah, RMM, remote monitoring and management, is the acronym. And then there's a lot more automation in terms of billing; those events could flow to Autotask or whatever else. So I guess that brings up an interesting question that wasn't in the script, so sorry to blindside you, and feel free to not answer if you so choose: what RMMs have you had experience with, and I guess what types of billing events? Was it all monthly, like this device is supported, or was there also hourly?

Speaker 4 (00:09:28):

Speaker 4 (00:09:29):
So the primary RMM tool that I've had the pleasure of working with, if you want to call it that, is SolarWinds RMM. They had the N-able RMM, I believe it was called at the time; they might've rebranded it by now. But we did things both hourly and fully managed in a sense. I've worked at a few MSPs now, four to be exact, and each of them does something different. So most of the MSPs I've worked at, like I said, were Windows-centric, and their fleet was primarily on-prem stuff. So all the servers were Windows Server 2012 and 2016 on-prem. A lot of AD management, much like Bobby had alluded to with the Delaware stuff, but the RMM tool really was just a way to kind of keep track of the servers and also the computers that were out there. And we also had some cloud infrastructure as well, but that was mostly servers that were sitting in a data center. So not necessarily cloud, but sort of cloud. But the primary tool that we used was SolarWinds RMM. And we've dabbled in a few others. Kaseya is definitely one of those that most people have probably touched in their career if they've been involved with managing devices at some point.

Speaker 3 (00:10:59):
Yeah, absolutely. I mean, I do feel like this was very much support-oriented. Everything that we did in those RMM tools was about reducing our cost to support a workstation over the course of its lifecycle. And at this point, I feel like there's a whole lot of new stuff where we try to actually make users happy, and therefore maybe get contracts awarded for year two. It seems simple to think about it that way to me, but it's not all that simple. I don't know if Ben's episode will definitely be up before this one, but I feel like we talked about that a little bit last week. I do feel like at this point we have been talking about CI/CD pipelines a lot more. So do you mind taking us through what that means to systems administrators, like at MSPs, as opposed to just standard support?

Speaker 5 (00:12:14):
Sure, yeah, I'll take this one. So the role of CI/CD in any organization is to automate some process or processes. At an MSP, we have a lot of systems that we interact with that are the same across, or the same system across, many different organizations, but may be used in a different way or have different workflows and things like that. So the thing that we have to solve for is: what is the same? And so all of those things that are the same and the things that are repeatable, let's make those as easy to manage as possible so that we can focus on the places where we can add value to a customer with their unique workflows and solving their specific issues. And so it's things like monitoring, and what is your desired state of certain server configurations, and different maintenance workflows, security baselines, and I think probably the most basic of things, OS and app patch management. So just things like that, making sure that we have a very consistent and easy way to manage those things. That's where we can really get a lot of value out of CI/CD tools.

Speaker 2 (00:13:50):
And one of the things that I sort of realized when I started looking into CI/CD as well was most of it was stuff I was already doing. It was more just that it wasn't automated in the way that it should have been, using things like AutoPkg manually on my desktop to generate packages. And AutoPkg is an example of CI/CD that a lot of people, I found, aren't even really aware is CI/CD: that they have set up an AutoPkg server somewhere automatically looking for and downloading applications, and whether they're uploading them or just leaving them in a repository somewhere, they're not even aware that that's CI/CD that they're actually using there to do those tasks.
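Picking up on that: a minimal sketch of what moving a desktop AutoPkg habit into a scheduled GitHub Actions run could look like. The recipe name, the recipe repo, and the installer URL placeholder are illustrative assumptions on our part, not anything described in the episode:

```yaml
# .github/workflows/autopkg.yml -- hypothetical sketch
name: Nightly AutoPkg run
on:
  schedule:
    - cron: "0 6 * * *"     # every night at 06:00 UTC
  workflow_dispatch: {}     # allow manual runs too
jobs:
  autopkg:
    runs-on: macos-latest   # AutoPkg needs a macOS host
    steps:
      - uses: actions/checkout@v4
      - name: Install AutoPkg
        run: |
          # placeholder URL -- substitute the current AutoPkg release pkg
          curl -L -o autopkg.pkg "<AutoPkg release pkg URL>"
          sudo installer -pkg autopkg.pkg -target /
      - name: Run recipes
        run: |
          autopkg repo-add recipes           # example recipe repo
          autopkg run -v GoogleChrome.pkg    # example recipe name
```

The `workflow_dispatch` trigger is there so a one-off run doesn't have to wait for the nightly schedule.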

Speaker 4 (00:14:42):
Right. And CI/CD typically is used in software development; that's where it originated from. So when you think about it as a Mac admin, or even just as a systems administrator in general, you don't typically think about the things that you do as CI/CD. So when it comes to GitHub Actions in the context of MSPs, I think it all boils down to keeping all of our environments up to date without a lot of manual intervention. That's sort of the ideology that we took with using GitHub Actions. And when you're managing 50-plus Jamf instances, you don't really want to do anything that would obviously require you to log into the instance. So CI/CD is that sort of foundational tooling that you can use to make sure that everything is kept up to date, while also doing things like checking for errors and making sure that all of your logging is centralized and all that good stuff.
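For the "50-plus Jamf instances" case, one common GitHub Actions pattern (a sketch, not necessarily Advisory Solutions' actual setup) is a matrix strategy, where one job definition fans out across every instance; the instance names and script below are placeholders:

```yaml
# Hypothetical sketch: fan one job out across many Jamf instances
name: Check all instances
on:
  schedule:
    - cron: "0 12 * * 1"    # weekly, Monday 12:00 UTC
jobs:
  check:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false      # one broken instance shouldn't stop the rest
      matrix:
        instance: [customer-a, customer-b, customer-c]  # placeholder names
    steps:
      - uses: actions/checkout@v4
      - name: Check instance health
        env:
          JAMF_URL: https://${{ matrix.instance }}.jamfcloud.com
        run: python3 check_instance.py   # hypothetical script
```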

Speaker 2 (00:15:34):
It's interesting. I just had the thought that one of the first presentations I saw Ben Greiner, from last week, give was one on checklists and lists. And in a way, CI/CD is really just that workflow of giving an engineer a checklist to go through for a task, and then actually taking it out of the hands of the engineer and having code go through that checklist and make sure all of these things have been done, so that you eliminate risk and mistakes and forgetting to do things.

Speaker 4 (00:16:06):
Absolutely. Yeah,

Speaker 3 (00:16:07):
Truly. That's actually a point I made in the history book when it comes to the maturity lifecycle of technology: first we make a checklist, and then we automate the things on the checklist, and then we automate the automation of the things on the checklist, and use the telemetry gained to kind of be able to expand the business kind of asymmetrically, no pun intended when it comes to the symmetric keys used to secure those transactions. But I guess speaking of business: you mentioned that CI/CD came out of development, but how does it differ from how other businesses use the idea of automation pipelines?

Speaker 4 (00:17:10):
So I would say it varies by business, obviously, depending on what kind of business you're in. So at an MSP, when we think about our automation, we think about the systems that we're supporting. Typically, we want to make sure that they're all following the same sort of guideline. The easiest way that I've found personally to do that is by leveraging preexisting automation out there. So a lot of the automations that we build with GitHub Actions actually leverage preexisting GitHub Actions that were built by the community. So when you're building your workflow file out using YAML, which we'll talk about in a little bit, you're using a lot of preexisting automation that's already built for you, and just building on top of that. So when you're thinking about things from a tech stack perspective and you're thinking about, okay, how can we manage all these different systems that are out there in the ecosystem, but make sure that they're all kept in the same sort of way?

Speaker 4 (00:18:07):
I think you have to look at what are you trying to automate? To your point, Charles, going through the list one by one, okay, can this be automated? Can it not be automated? And what’s the benefit of automation? Are there diminishing returns if we automate this versus just doing it manually? And then you kind of apply that same logic down the line to other items in your list, and you get to a point where you’re like, okay, we’ve automated basically everything that we can automate within reason, and now we’re set for success to some degree. I know that didn’t really exactly answer your question, but I think that it’s all relevant to some degree.

Speaker 3 (00:18:47):
Oh yeah, for sure. And I feel like "within reason" is an interesting turn of phrase. I have noticed at least three or four outages at massive places, like in AWS, that were based on automation failures that then came back to bite 'em. Have you seen, where you're adding logic... I remember the first AD bind script back in the days when we did that, before Joel fixed that for us. Thanks, Joel. Fixed, I don't know, fixed-ish. But I remember adding a thousand extra lines of bash to fix all these different problems that you'd run into: the machine name was too long, or we didn't have an IP address, we couldn't talk to the interwebs, whatever. Do you find that sometimes with hyperautomation, which is kind of what we're getting into when we get to that level of automation, I think, do you find that sometimes we can over-engineer things and create race conditions, whatever, that makes the, I don't know if I want to say, gives us a false sense of security that it'll work, but that makes it fail in and of itself?

Speaker 4 (00:20:16):
I would say yes. Go ahead, Bobby.

Speaker 5 (00:20:20):
So no, no, really you can, and I think I’m one of those engineers that has the propensity to over-engineer at times. I have to talk myself out of some things a lot of times actually. And so you look at things like what are you logging and what offshoot scenarios can possibly happen? And trying to account for all of those things. And so that’s what my mind goes to when I come across these things. And so I do have to actively say, it isn’t worth it to actually solve this issue right here, but at least have something that says, Hey, this happened and you need to fix this in maybe a manual way because it’s not going to happen often and it’s not going to affect many people and things like that. So just making sure that I just have to make sure that I am not putting too much energy into just these offshoot cases because once you start doing that, you lose the benefit of automation, which is to A, make things consistent so you’re not making mistakes, and B, reduce your time spent doing these repeatable tasks.

Speaker 3 (00:21:47):
Yeah, I think another interesting look at that specifically is customizing the level of automation to the scale of the endeavor. We did one thing where we had to re-image, back in the good old battle days of imaging, a hundred thousand machines in one night, and we only had the people there for that one night. So the amount of logic to trap for and correct issues: even if something only hit 0.5% of the populace, that was still enough machines to where we really needed to actually fix that. Whereas for most of my actual MSP customers, because that was an hourly engagement, and we did both at that company, for most of my MSP customers, 0.5% I wouldn't have cared about at all. Even if I had to send someone up to Pasadena from Irvine or something, it would've been worth the hour drive not to do all this scripting or what have you. So

Speaker 2 (00:22:56):
I think the best example I've had of that scenario was, as a consultant, being asked to build an automation to solve a particular race condition that was happening at a customer. It took, I think, two or three days to get this working, and then we presented it to them, only to find out it was impacting one user and they'd resigned the day before. And that sort of comes into: are there other ways to solve that problem by making the problem go away? Obviously we don't want to

Speaker 2 (00:23:33):
They fire people to get rid of problems, but if it's a network issue, can you actually fix the network? Sometimes, no. But

Speaker 3 (00:23:41):
Yeah, and I think this is almost language-dependent as well. At this point, I write Swift code in a way where I'm writing my test cases first, whereas if I was doing this in Python, like a microservice that I'm kicking off with a GitHub Action or something like that, I probably wouldn't do that in the same way. And when you start writing test cases for unit testing, you're like, oh, well, what if two plus two equals five somehow in my code? Then I'm trying to trap for that and say, don't compile when it does that. And maybe that's just the difference between compiled and interpreted languages, like a bash or a Python. But yeah, another difference I think is kind of around security. So I feel like we did hear a lot about CircleCI's pipeline when it was compromised. Do you find that you're putting in special protections? I mean, imagine a supply chain attack against Jamf servers in a specific environment like this; that's just insanely frightening. Or any other MDM, Microsoft, if we want to make it even wider than that, and not just MDM: any script runner, because effectively that's what all these are, or anti-malware. If your anti-malware got compromised, that would be really embarrassing. But do you find that there are certain special protections that you'd put into these types of systems?

Speaker 5 (00:25:25):
So I would say the way you manage your secrets is probably the most important thing: making sure that these things are unique to each environment so that you're not recycling the same things, and also being sure that you have a way to make changes to them separately from your CI/CD tool, so that you can quickly get things to a point where, hey, they are now secure from whatever vector of attack is coming from this tool, and we can just sever that link right there and then deal with the fallout. The immediate thing is just to say, hey, cut this off, and then we'll, like I said, fix what we need to fix.
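One concrete way to get that per-environment uniqueness in GitHub Actions is environment-scoped secrets: the same secret name can hold a different value for each customer environment, so rotating one customer's credentials doesn't touch the others. A minimal sketch, with placeholder environment, secret, and script names:

```yaml
# Hypothetical sketch: same workflow, different secret value per environment
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: customer-a        # selects which secret values are used
    steps:
      - uses: actions/checkout@v4
      - name: Talk to this customer's Jamf
        env:
          JAMF_API_USER: ${{ secrets.JAMF_API_USER }}
          JAMF_API_PASSWORD: ${{ secrets.JAMF_API_PASSWORD }}
        run: python3 sync.py       # hypothetical script; reads creds from env
```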

Speaker 3 (00:26:24):
Yeah, that's interesting. Okay, so secrets management. So with the Jamf API-ish, you have a username, a password, let's say a JWT for interacting with the API. And then you maybe have passwords on webhooks, or codes or whatever you want to call them, but they're tokens. So those are three distinct secrets, and you encounter something, now you've got to roll all three, right? Is what you're getting at, right? Yeah, that's a lot to keep in sync, I think. And then there's also the secret around unlock codes per user, but that would be in the Jamf, in the

Speaker 5 (00:27:09):
Jamf instance; that would be something you'd be extracting with the API if you're going to be interacting with that anyway. And maybe this is a bit, I don't know, an archaic way of doing it, but being able to do that same process from a local machine, something that is off of that pipeline, we'll just say it like that, something that is separate from that, being able to interact with those things directly from a system that is safe. That's sort of what I get at with making sure that we can roll our secrets in the event that something catastrophic does happen with GitHub Actions.

Speaker 4 (00:28:03):
And worst-case scenario too, you also have your API permissions set so that the user or account that's interacting with the endpoints has the least amount of privilege. So for example, if you're reading computer objects from Jamf, you want to make sure that that API user only has read permissions to actually read the computer object and nothing else. So no deleting, nothing like that. That way the downstream application, the thing you're interacting with, isn't compromised if you are compromised at the top level, so to speak.
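As a rough sketch of that read-only pattern: the Jamf Pro API issues a bearer token from basic-auth credentials, and the account behind those credentials would be granted only read access to computer objects. The endpoint paths are real Jamf Pro API routes, but the helper functions below are our own illustration, and the network call is only sketched, not something to run as-is:

```python
import base64
import urllib.request


def basic_auth_header(user: str, password: str) -> dict:
    """Build the Basic auth header used to request a Jamf bearer token.

    The token itself would come from:
      POST {jamf_url}/api/v1/auth/token  (with this header)
    """
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}


def read_computers(jamf_url: str, token: str) -> bytes:
    """Read computer objects only; this account should hold no other rights."""
    req = urllib.request.Request(
        f"{jamf_url}/JSSResource/computers",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    # Network call, sketched only -- requires a reachable Jamf instance.
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

If this read-only account ever leaks, the blast radius is limited to reading inventory, which is exactly the point Ryon is making.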

Speaker 3 (00:28:37):
Got it. Yeah, I mean, so how did GitHub Actions fit into that picture? And I guess as an addition, how much custom script did you have to bring into the picture to make your GitHub Actions work?

Speaker 5 (00:28:56):
A lot. So how does it fit into the picture? So I guess one of the first things is monitoring: making sure you can proactively pull information from a Jamf instance, and do it en masse, so you can react to any kind of issues, like a token that needs to be restarted. Previously we had a workflow where the notifications would generate a ticket. We found that that wasn't always reliable and we weren't capturing all the information that we needed. So now we have a workflow in place that basically takes the notifications each week and throws them into a Slack channel, and we can see, hey, these are the things that need to be actioned. That helps us to set priorities for the week as well. So it's a nice little workflow. And other things like your desired state, so inventory collection, your check-in settings and things like that, just making sure that those are all what they need to be. If they have been changed at some point in time for whatever reason, we then reset 'em. And we just have something that runs weekly to make sure that happens.
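A weekly notifications-to-Slack job like the one Bobby describes could be wired up roughly like this; the cron schedule, script name, and secret name are all placeholders we've invented for illustration:

```yaml
# Hypothetical sketch of a weekly notifications-to-Slack workflow
name: Weekly Jamf notifications
on:
  schedule:
    - cron: "0 13 * * 1"    # Mondays, 13:00 UTC
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull notifications and post to Slack
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: python3 gather_notifications.py   # hypothetical script
```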

Speaker 5 (00:30:36):
Also, things like: in our environments, we have scripts that we use to do certain processes within Jamf, and those scripts also exist in our GitHub. And so whenever a change is made to one of those scripts, it does a push and then updates all of the Jamf instances that have that particular script. We have automation that identifies which Jamfs have what, and that way we only have to make the change in one place. And I should make a note about that: one of the things that we do is make sure that the scripts that we build are all very generic, and they can basically plug into a policy, and the customizations will happen either from the policy itself or from some configuration file that exists within GitHub. Yeah,
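A push-triggered sync along those lines might look roughly like this; the `paths` filter keeps the workflow from firing on unrelated commits. The branch, directory, script, and secret names are all placeholder assumptions:

```yaml
# Hypothetical sketch: push a script change, fan it out to every Jamf using it
name: Sync scripts
on:
  push:
    branches: [main]
    paths:
      - "scripts/**"         # only fire when a managed script changes
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Update script in each Jamf instance
        env:
          JAMF_API_PASSWORD: ${{ secrets.JAMF_API_PASSWORD }}
        run: python3 push_scripts.py   # hypothetical: maps scripts to instances
```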

Speaker 1 (00:31:49):
This week's episode of the Mac Admins Podcast is also brought to you by Kolide. Our sponsor, Kolide, has some big news. If you are an Okta user, they can get your entire fleet to a hundred percent compliance. How? If a device isn't compliant, the user can't log into your cloud apps until they've fixed the problem. It's that simple. Kolide patches one of the major holes in zero trust architecture: device compliance. Without Kolide, IT struggles to solve basic problems like keeping everyone's OSes and browsers up to date. Unsecured devices are logging into your company's apps because there's nothing to stop them. Kolide is the only device trust solution that enforces compliance as part of authentication, and it's built to work seamlessly with Okta. The moment Kolide's agent detects a problem, it alerts the user and gives them instructions to fix it. If they don't fix the problem within a set time, they're blocked. Kolide's method means fewer support tickets, less frustration, and most importantly, a hundred percent fleet compliance. Visit kolide.com/macadminspodcast to learn more or book a demo. That's K-O-L-I-D-E dot com slash macadminspodcast. Thanks to Kolide for sponsoring this episode of the Mac Admins Podcast.

Speaker 3 (00:33:16):
What is a GitHub Action? And I guess to put this in context: like a webhook runner, like an IFTTT or a Jenkins, to oversimplify it? Or is there a bunch more that you like to think of it as?

Speaker 4 (00:33:35):
So it could be a lot of different things depending on your use case, but in our typical workflow, a GitHub Action is essentially a workflow file that is built leveraging YAML to do subsequent steps in order, depending on what you're trying to do. And you can leverage what's called jobs. So you can have different jobs that do different things. You can segment them out so that you have one job doing the build, you have one job doing the test, the unit testing, and then you have one job that's actually doing something like uploading a package to Jamf, for example. So in terms of CI/CD, you can leverage technology like Jenkins, which is actually setting up your pipelines and doing the automations and stuff like that. Whereas in GitHub, since you're already leveraging GitHub for your version control, your scripts, things like that that you're storing in GitHub, you can actually tap into those scripts.

Speaker 4 (00:34:32):
You can leverage the repos to do something outside of the repo. So for example, if you want to make a GitHub Action that runs whenever you push a change to that repo, you can do that, or on pull requests to that repo, you can do that. So really it's just a way to leverage GitHub in a certain way, but also decentralize your automation in some regard, so that you're not leveraging things like launch agents and launch daemons on local machines; you're leveraging cloud-based technology. And inside of the actual GitHub Actions themselves, the workflow files, you can leverage what's called runners. So runners are hosted servers that GitHub basically spins up and spins down after your automation is done. So that's kind of the gist of GitHub Actions.
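Putting that description together, a minimal workflow file with the build, test, and upload jobs split out might look like this; the job contents are placeholders, and `needs` is what chains the jobs in order:

```yaml
# Hypothetical sketch of a build -> test -> upload job chain
name: Package pipeline
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest    # GitHub-hosted runner, spun up per run
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh       # placeholder build step
  test:
    needs: build              # waits for the build job
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run_tests.sh   # placeholder test step
  upload:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: echo "upload the package to Jamf here"   # placeholder
```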

Speaker 3 (00:35:21):
So a collection of events is triggered that, for those who have experience with Lambdas or other microservice-oriented architectures like that, then run in a semi-object-oriented fashion based on a YAML document that is saying, this is what's about to happen. Correct.

Speaker 4 (00:35:45):
Your YAML is like your recipe, basically, in AutoPkg-speak.

Speaker 3 (00:35:50):
Got it. And I have no love for YAML. How do y'all feel about YAML?

Speaker 5 (00:35:58):
So I don’t hate it.

Speaker 3 (00:36:02):
Nice or

Speaker 5 (00:36:03):
Admit to hating it

Speaker 5 (00:36:06):
Yourself. There are things about it that are a little annoying, but I think with any technology you’ll find things like that. But one of the things I dislike most about it is multi-line strings.

Speaker 3 (00:36:23):
That’s why I don’t like it. Not that I don’t like it. If I didn’t like it, that would be why I didn’t like it. But go ahead.

Speaker 5 (00:36:30):
It just makes it so that you just don't want to do it that way ever. I think the most recent example of that was trying to mess with a Slack webhook integration before I discovered there's a better API Slack integration built for GitHub Actions. But trying to get the multi-line strings to create the webhook call itself, trying to get all that in order and passed to the GitHub environment, it was painful. It was very painful, and I was like, okay, well, I'll just do this through a script instead. And then I realized, oh, okay, actually I can do this through this API integration. So I was actually using the wrong method; that was the big problem. But I also in that time discovered that it is very painful to work with multi-line strings. It's just not something I want to do ever.
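For listeners who haven't hit this: YAML has two block-scalar styles for multi-line strings, and the interaction between them, indentation, and shell quoting inside a workflow `run:` step is exactly the pain being described. A small illustration (the webhook payload is invented):

```yaml
# "|" keeps newlines literally -- each line stays its own line
literal: |
  line one
  line two

# ">" folds newlines into spaces -- this becomes "line one line two"
folded: >
  line one
  line two

# Inside a workflow "run:" step, a JSON webhook payload gets awkward fast:
# every quote and newline has to survive both YAML and the shell.
run_example: |
  curl -s -X POST -H 'Content-type: application/json' \
    --data '{"text": "multi-line\npayloads get painful"}' \
    "$SLACK_WEBHOOK_URL"
```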

Speaker 3 (00:37:44):
And regrettably, or luckily, all encryption keys are multi-line strings. So everything that requires authentication ends up being that way. I mean, luckily not computer names, but maybe URI structures, whatever. You mentioned Slack, and I think that's become the classic triggered event that we talk about a lot. But what other workflows have you looked into or scripted? And I guess how much time might each have taken, and is that dependent on how well their APIs are documented? Like, oh, I've got a Swagger doc I can import into Postman and muck around with this stuff. But sending an email, or even daisy-chaining other tasks, I don't know, getting a billing event into an MSP's Autotask or whatever you use for billing, or what have you.

Speaker 4 (00:38:53):
Yeah, so some of the workflows that we've implemented through GitHub Actions are things like uploading new scripts to various Jamfs, like different Jamfs across our ecosystem, as Bobby mentioned earlier. But another one would be updating our white-labeled version of Munki. That's something that we do anytime there's a new release of Munki, and we don't want to leverage Installomator for it, because while Installomator does have a label for Munki now, we white-label our Munki, so we have to use custom branding, a custom name, all that stuff. And that's easily done leveraging GitHub Actions, because you can leverage the GitHub releases from Munki, you can pull it down, you can do the white labeling through a Python script, and then you can upload it to an S3 bucket, which is hosted public-read. That way we can use scripts in our Jamfs to actually pull that down onto the machines. And that saves us probably a good 10 minutes or so of time each time there's a new release. But that adds up very quickly as you think about how many releases there are. So a very simple example of that.
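That white-labeling pipeline could be sketched as a workflow along these lines. The rebranding script, package name, bucket name, and manual trigger are assumptions on our part; only the release source (the `munki/munki` GitHub repo) comes from what Ryon describes:

```yaml
# Hypothetical sketch of the Munki white-label pipeline
name: White-label Munki
on:
  workflow_dispatch: {}     # run manually when a new Munki release ships
jobs:
  rebrand:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fetch latest Munki release metadata
        run: |
          # GitHub's releases API exposes the newest munki/munki release;
          # asset-URL parsing and download are omitted from this sketch
          curl -s https://api.github.com/repos/munki/munki/releases/latest \
            -o latest.json
      - name: Re-brand
        run: python3 whitelabel.py        # hypothetical custom-branding script
      - name: Upload to S3 (public-read)
        run: |
          aws s3 cp munkitools-branded.pkg \
            s3://example-bucket/ --acl public-read   # placeholder bucket
```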

Speaker 2 (00:40:05):
So we've spoken about how we can use GitHub Actions to solve problems, and you've mentioned some of the limitations you've found. What are some of the other limitations you'd love to be able to address about the workflows, or what are some other vendors that you'd love to see help address those workflows?

Speaker 5 (00:40:22):
Sure. So I guess the one thing with GitHub Actions is you just have to make sure that you're writing your logging very well, because if you don't, you'll find yourself in a situation where you're wasting a lot of time trying to test various runs of different workflows. And so it can be very easy to just miss something that's very, very simple. I don't have a direct example off the top of my head here, but it's just one of those things where it reminds you that you need to follow your best practices with the way you're writing your code. And you should be doing that anyway, but it reinforces that.

Speaker 4 (00:41:10):
I think from a vendor perspective, it would be nice if vendors provided all the necessary API endpoints so that you can properly leverage their technology and automate as much as you can. For example, it wasn’t until recently that Jamf added an endpoint for interacting with the JCDS to actually upload files to it, and that wasn’t a reality beforehand. You could leverage Python scripts, Graham Pugh’s come to mind, for uploading files to Jamf, but it wasn’t until recently, like I said, that Jamf actually added that endpoint. So your options for actually uploading packages were pretty limited.

Speaker 2 (00:41:53):
That was something many customers had been asking for, not just MSPs. And the danger of using undocumented APIs or workflows is that if it stops working, there’s no SLA.

Speaker 2 (00:42:12):
But sometimes you got to do what you got to do. Exactly. Private APIs are fine.

Speaker 5 (00:42:19):
I think there’s another example I have, which is still an issue, and it’s a minor annoyance for me. It adds an additional step in my workflow. So when we do a brand new Jamf build, I have a nice little workflow that just layers on everything that we consider standard, and one of those things is getting the patch policy titles set up. And there is an endpoint for it. This is the part that’s a little annoying: there is an endpoint for it, but it doesn’t work. The create command will go through and run, but it’ll tell you that the software title already exists or needs to exist before you can use it, which defeats the purpose of a create. So it’s just little things like that that are a bit of an annoyance, but for the most part, things do work.
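[Editor's note: a common way to paper over a create endpoint that rejects existing objects is an "ensure it exists" wrapper: try the create, and on an "already exists" error fall back to looking the object up. This sketch uses stub callables rather than real Jamf Pro API calls, since the exact endpoints and error shapes vary by instance and version.]

```python
def ensure_exists(name, create, find_by_name):
    """Create-or-fetch: call create(name); if the API reports the
    object already exists, return the existing one instead.
    create and find_by_name are stand-ins for real API calls."""
    try:
        return create(name)
    except RuntimeError as err:
        if "already exists" in str(err).lower():
            return find_by_name(name)
        raise

def _create_stub(name):
    # Simulates the API rejecting a duplicate title.
    raise RuntimeError("Software title already exists")

# Duplicate case: the create fails, so we fall back to the lookup.
existing = ensure_exists("Google Chrome", _create_stub,
                         lambda name: {"id": 7, "name": name})

# Fresh case: the create succeeds and is returned directly.
created = ensure_exists("Firefox",
                        lambda name: {"id": 1, "name": name},
                        lambda name: {"id": 99, "name": name})
```

The same pattern makes workflow reruns idempotent, which matters when a GitHub Actions run fails halfway and has to be retried.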

Speaker 2 (00:43:26):
One of the other API endpoints that I would’ve loved, and I can kind of understand why it doesn’t exist, is having an API into Apple Business Manager. Apple Business Manager and Apple School Manager have really good secrets that you can use, and they make us jump through hoops to create those secrets. But to be able to transfer licenses of applications or move devices into different MDMs would be awesome, because there’s a lot of slow, arduous clicking and slowly refreshing websites required to do a lot of that.

Speaker 4 (00:44:12):
It’s funny that they don’t have one, or don’t have the endpoints, I should say, because if you think about how a browser functions, where you’re effectively just interacting with the backend server through the GUI, it’s funny that you don’t have those endpoints readily available for people to tap into via an API. So all those manual actions that you’re doing in the browser could be automated, but it’s up to developers to actually create the necessary CRUD and all that stuff to do the actions on the automation side.

Speaker 3 (00:44:45):
I’ve been begging for some of those for years, but I think, to Marcus’s point, it is very complicated what you can do in those kinds of environments. So I would never second-guess the developer who chooses not to give me access to an API. Absolutely. Even though they gave the exact same access to macOS Server via a private API, so it exists code-wise somewhere. I’m not judging.

Speaker 5 (00:45:21):
There’s one.

Speaker 2 (00:45:21):
And also, not having access to a documented API today doesn’t mean that the developer doesn’t know that you’d like one, know why you’d like one, or have a desire themselves to give you one. It’s just, yeah, sometimes software’s hard.

Speaker 5 (00:45:38):
So this actually reminds me, there’s another thing that it would be good to have a better API set up for: it’s GSX. And whether it’s Apple giving it to us, or whoever wants to tap in and make these things easier to work with, we find that we can get a lot of value from GSX for our service desk support tasks, and for our customer success managers when they’re doing any kind of reporting or QBRs with customers and things like that. Having access to that information more easily would be nice. Right now we have it set up in our instances, but with Jamf in particular, it’s a manual process to get that data. And I feel like that should be something that we could simply have automated: have the system itself just run and pull that information dynamically, or make it so that I can have something that pulls that information on a regular cadence. So yeah,

Speaker 2 (00:46:58):
GSX is a really interesting one, because it’s using a system for something that it’s not designed for. So for the listeners that aren’t aware, GSX, Global Service Exchange, is Apple’s portal that’s used for managing repairs of devices, which is very much not what an integration with Jamf is used for; it’s generally just used for checking warranty status, so you can be aware of warranty status for a device. And I suppose, looking back at supply chain, if Apple got popped, I think that’d probably be the worst-case scenario for most of the listeners that we have. And seeing some of the things GSX has been used for in the past, where people will find a rogue repairer who will start querying devices and getting information out of there, I guess I understand why Apple maybe hasn’t made it as easy for us to get the information we want, because maybe providing that access is going to potentially open a door for somebody to do something else. That’s not what we want, and very much not what they want. But yeah, it’s very frustrating, and especially the jumping through hoops required to actually get access to that GSX information for an organization is not a fun process, is it? No.

Speaker 4 (00:48:25):
Yeah, that’s something that we actually had to do pretty recently, and it was very painstaking to prove, to some degree, that we repair Apple devices, even though we technically don’t repair them ourselves. It’s an interesting kind of hoop that you have to jump through, to prove that you manage Macs and Apple devices and you’re only going to leverage it for pulling that information, like you said, Marcus: the AppleCare status, the purchase date, things like that. The stuff that we care about as an MSP versus a repair shop.

Speaker 2 (00:49:02):
Is there still that limitation where you’ve got to create a GSX account, and you need to log in with that GSX account, not using the API but through their portal, once every 30 days? Which is not something I really wanted to do when I was using it.

Speaker 5 (00:49:22):
No, you don’t have to do that every 30 days. So the way it works now is you have one account that’s your account, and then for any subsequent customer accounts, you create an individual account for them, and then you have to pull the API key for their instance. And that doesn’t expire every 30 days. I think it’s annually, or I can’t remember actually what it is, but it’s longer than 30 days. And so these things last for a lot longer, so it’s

Speaker 2 (00:50:04):
A little nicer. That’s good. Yeah. GSX and I had a very special relationship when I was working at a university, where every 32 days I’m like, can you please unlock this account? And they’re like, how about you log into it every 30 days? It’s like, well, if you want me to stop asking you every 32 days, maybe. Until they waited, until I no longer needed it, before they solved that problem.

Speaker 3 (00:50:26):
I chose to screen scrape GSX. And as a developer, I find screen scraping disgusting. It’s just, talk about things that are worse than private APIs. It’s going to break, period. It’s like when I used to script Warcraft: they’ll change the maps. It’s guaranteed within the next 60 days they will change the maps, and I will have no gold when I wake up in the morning, and it will suck to be me. Sometimes you’ve got to do what you’ve got to do, because you need gold, or GSX information, as the case may be. I don’t know. But yeah, that’s one of those places where I got fed up with trying to do API keys or whatever, so I’m like, oh yeah, there’s no synthetic blocking of a web portal event, at least until I get rate limited, and then it all goes away. Yeah, they’re not going to rate limit it to two events a month, though, so whatever.

Speaker 3 (00:51:35):
But then two-factor keys break all that. So then you’re like, okay, fine. Anyways. So Jamf has APIs that take care of some of the automation steps, and I feel like this is common for all vendors: you have this large atomic operation that you’re trying to complete, so you bake in certain pieces of the automation. But do you find that there’s any best practice stuff you do with each environment that gets set up? You mentioned API permission sets; that would be one, I would assume, but any additional beyond that, or even digging into that deeper, if you feel like it?

Speaker 5 (00:52:20):
The thing that I find is critical to all of this is having good naming conventions. If you have good naming conventions, it lets you interact with your infrastructure in the same way every single time, for every single environment, because you know what things are going to be called, how to write things out, and so on. And I mention this in particular because in the case of Jamf, when you do something like create a policy, it requires having an ID for the category and for the smart groups, excuse me, smart groups and static groups, having an ID assigned to those things. So if you have good naming conventions for everything, you can get around the problem of needing those IDs by being able to call to that smart group, pull the information dynamically, and then throw that into the policy itself to create the policy. Because if you didn’t have something like that together, you wouldn’t be able to actually create the policy in place with its group assignments and its category assignments together. And also, when you’re going through and reading what everything is supposed to be doing, if your naming conventions are all over the place, it will be just impossible to work through what things do at a very basic level. So that’s why I have those up there as the most important things to have in place.
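[Editor's note: the naming-convention trick described above can be sketched as a small lookup: pull the full id/name listing once, for example from a Jamf group or category list endpoint, then filter by prefix so policies can be built without hard-coding IDs. The group names and IDs below are made up for illustration.]

```python
def ids_by_prefix(listing, prefix):
    """Map name -> id for every object whose name starts with prefix.
    `listing` has the shape [{'id': 1, 'name': '...'}, ...], roughly
    what Jamf's group/category list endpoints return."""
    return {item["name"]: item["id"]
            for item in listing
            if item["name"].startswith(prefix)}

groups = [
    {"id": 12, "name": "Teachers - macOS"},
    {"id": 13, "name": "Teachers - iPadOS"},
    {"id": 21, "name": "Students - macOS"},
]
teacher_ids = ids_by_prefix(groups, "Teachers -")
```

With consistent names, the same lookup works identically across every customer instance, which is exactly what makes the multi-Jamf automation repeatable.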

Speaker 3 (00:54:16):
Interesting. So what you’re saying is, if I start all of my naming conventions for things about teachers with “Teachers-”, then I can just quickly go through and grab all the IDs for “Teachers-” whatever, load those into an array, and then loop through that array and knock all those out, or

Speaker 5 (00:54:39):
Whatever. Exactly, exactly. Got it.

Speaker 3 (00:54:42):
Yeah, I like it.

Speaker 4 (00:54:43):
Yeah. From a security perspective, trying to adhere to principles of least privilege is always a good best practice, because there’s going to be a lot of people interacting with this Jamf, and you want to make sure that, based on what they actually do in your organization, they have the appropriate access levels. So if you have an IdP, an identity provider, for example Okta, you want to make sure that you’re leveraging that to its best ability: you’re tapping into Okta via LDAP and pulling down groups, and you’re making sure those groups have the right permission sets and things like that. That’s something we always keep in mind when we’re building out our Jamfs.

Speaker 5 (00:55:23):
And then I think, finally, your documentation. Definitely document, but I would highly recommend always documenting as you go; it will save you a lot of trouble. And I’m not perfect with it at all, I’m a work in progress for sure, but I find that if you are just taking notes on everything you’ve built, even if it’s not the final format of your documentation, at least enough that you can reference it to build that final format, it will make your life a lot easier.

Speaker 3 (00:56:04):
And when you say document, do you mean a standalone separate document, or do you mean, oh, I have a stanza of code, I’m going to add whack-whack in front of it and type some documentation?

Speaker 5 (00:56:19):
Both. Having your inline stuff, and then also having your separate documentation of how this was built, why it was built this way, what things are supposed to be doing and accomplishing, and just having that information together so that when someone comes along and they need to make a change or understand what is happening in a workflow in a relatively short timeframe, it’s a lot easier for them to do that than having to go through and parse code, and maybe it’s code that’s referencing a lot of different places, and they have to weave things together to be able to figure out something simple. And so my ultimate goal with having good documentation is just making it so that the next person can come in and do what they need to do. And also, when I revisit something in a year and I’ve got to update it or make a change, I don’t have to figure out everything that I’ve already figured out. I can just say, oh, this is why I did this, this is what I need to do.

Speaker 2 (00:57:33):
So when you’re saying “the next person”, I’ve often found that, with me, it could be you,

Speaker 2 (00:57:38):
Me. I’m looking: why is this not working? Why is this not working? Oh, that’s right, when I built that, I’d done X, Y, Z, and that’s why it’s not working. Whereas if it’s documented, just being able to, before embarking on a task, quickly grab the documentation and go, oh, that’s right, I’d forgotten that was in there for a very good reason. And automating my own avoidance of my own poor choices, maybe.

Speaker 5 (00:58:07):
I like to think of it as past me looking out for future me.

Speaker 2 (00:58:15):
I also find that writing some of those things down, especially if it’s, as you’re saying, documentation at the start, where I’m writing a plan of how I’m going to go about something. Sometimes the act of writing that down, a bit like rubber-ducking code, is thinking on paper or out loud, and sometimes it brings up better ways of doing things or organizing things. Or I start looking at something and go, yeah, that’s awful. That’s dreadful. That’s really not going to be the cleanest way of doing this.

Speaker 3 (00:58:50):
And yet that’s also a classic developer trap, because we call that technical debt: to think of everything as, oh, well, I have 10 better ways to do that today than I had a year ago. And it’s like, yeah, but are you going to refactor everything every time you touch something?

Speaker 2 (00:59:08):
To-do list, backlog, backlog? That’s what it’s called.

Speaker 3 (00:59:11):
Yeah. Yeah. Although pretty soon, probably, some AI thing will be able to correct your code when you say, hey, go fix all my, I didn’t name all these variables right, can you go fix that for me? And it’ll just be like, okay, make all of this old stuff work like all of this new stuff now. Yeah, and I guess that’s an interesting question. Have you used Copilot to build any of these actions?

Speaker 5 (00:59:43):
No, but I do make liberal use of GPT just to get code, to figure certain small things out. But no, I have not used Copilot to actually build anything out.

Speaker 3 (01:00:01):
What I really like about GPT doing that is not the code that it spits out, because it’s not incredibly efficient, I’m finding, but it seems to parse out the really crappy packages to import. So for me, so far, it seems to surface good packages. You know what I mean? Yeah. Because the first, what, four lines of most things that you write these days are import something, import something else, import something else, and sometimes it’s really hard to find the good things to import. You just search a package manager and you’re like, well, there’s 80 packages for this, so how do I know? And you can say, well, use the number of stars, use whatever. But even then, something might be seven years old and have a lot of stars, but there’s something newer, and I’ve been finding that it tells me just by the example code that it spits out.

Speaker 3 (01:01:05):
I’m like, oh, yeah, that’s what they’re doing. I’ll try it that way. At the very least, it gives you a different perspective on how to solve a problem. Yeah. Yeah, that is a very good point. I wrote an Xcode extension that will use ChatGPT to try to write Swift for you, basically, and have it comment everything. And I’m trying to be very intentional in doing so, by saying, oh, I want you to actually comment line by line what you’re doing, because it can be incredibly dangerous to do that. But yeah, I guess there are a lot of things involved in modern workflows, not just importing packages, but also how we integrate things together. So I guess, any webhook integrations from Jamf environments that maybe trigger actions? Or are you further upstream, orchestrating the creation and management of the Jamf environment?

Speaker 5 (01:02:16):
So there’s a few that we use. The first thing I’ll say is we don’t have strong use cases for all of Jamf’s webhooks. Some of them are just too broad, like the computer check-in one. That’s just too much noise. Goodness,

Speaker 2 (01:02:36):
That would be so many

Speaker 5 (01:02:38):
Actions kicking off. But the ComputerAdded webhook, when something enrolls, that’s good information to know. There’s a limitation with that where I think if the device already exists and it’s a re-enrollment, it doesn’t trigger on the re-enrollment. So that’s a little bit,

Speaker 2 (01:03:00):
Unless it’s a new serial number, I think,

Speaker 5 (01:03:03):
Right? That would mean that there was a hardware replacement or something.

Speaker 2 (01:03:09):
Might be UDID anyway. Yeah, yeah, yeah.

Speaker 5 (01:03:12):
So: ComputerAdded, when devices get added to DEP; the REST API operation webhook, making sure that you can see when something has hit your API, that’s useful; and also smart group membership changes, that’s another useful one. And actually, something I’m working on now is that limitation with the computer check-in webhooks. What I’d like to know is when a device hits a certain threshold of not having checked in for some amount of time. That’s when we can see, hey, there’s a problem, we need to reach out to the POCs and let them know proactively: hey, just contact this user. And so I want to be able to track that information. So whenever a device hits a certain number of days, it becomes a member of a smart group, and then when it’s a member of that smart group, it says, hey, you’re here now, throw a notification out to Slack or some other place, and we can just get that information. So that’s something I’m working on. So yeah, those are the four.

Speaker 2 (01:04:37):
Love it. And we mentioned managing secrets with APIs, so how are you finding Jamf’s new API roles and clients, separating those off from the user access in Jamf?

Speaker 5 (01:04:52):
I like it, but I have not implemented anything with it. I’m still in that discovery process of figuring out how to best leverage these components. We will have to do a lot of backtracking to get things right. So that’s going to be an extended project. I’ll say that

Speaker 2 (01:05:19):
Definitely for me, wrapping my head around it at first to sort of go, hang on, this is different. Why? What? And it really took a fair bit of kicking the tires for the penny to drop. Ah, I get it now. Yeah.

Speaker 6 (01:05:38):
Here at the Mac Admins Podcast, we want to say a special thank you to all of our Patreon backers. The following people are to be recognized for their incredible generosity. Stu Baca, thank you. Adam Selby, thank you. Nate Walk, thank you. Michael Sy, thank you. Rick Goody, thank you. Mike Boylan, you know it, thank you. Melvin Vives, thank you. Bill Stites, thank you. Anoush d’Orville, thank you. Jeffrey Compton, M. Marsh, Hamlin Krewson, Adam Burg, thank you. A.J. Potrebka, thank you. James Stracey, Tim Perfitt of Twocanoes, thank you. Nate Cinal, William O’Neal, Sebastian Nash, the folks at Command Control Power, Stephen Weinstein, Chad Swarthout, Daniel MacLaughlin, Justin Holt, Bill Smith, and Weldon Dodd. Thank you all so much, and remember that you can back us if you just head on out to patreon.com/macadminspodcast. Thanks, everybody.

Speaker 3 (01:06:34):
Thank you so much for taking us through all this. We specifically tried to be, I don’t want to say ideation-wise, but we tried to be broader and higher level with this. We could have gotten much deeper, of course, but there’s a Mac Admins conference video, which I didn’t realize was going to be given until we sat down to write the script for this one, that gets into the particulars around here’s how to build the YAML. And that kind of stuff just doesn’t translate all that well to a podcast episode, to be honest. The ideation is so much more fun. But we’ll include a link to that in the show notes. Thank you so much for taking us through all this. And Marcus, you thought of a great bonus question while we were going. So fire

Speaker 2 (01:07:31):
Away. It was definitely the talk about secrets being compromised. And so I thought no names to protect the guilty, but what’s the worst or silliest example you’ve ever seen of a secret being compromised?

Speaker 4 (01:07:44):
A secret in particular? I’ve definitely seen, well, not necessarily a secret, but I’ve definitely seen API usernames and passwords being used in both plain text and also Base64-encoded strings. And if you know anything about Base64, you can decode as easily as you can encode. So there

Speaker 3 (01:08:04):
Are websites for that. Yep,

Speaker 2 (01:08:06):
There you go.

Speaker 3 (01:08:07):
You could

Speaker 2 (01:08:07):
Probably even automate it,

Speaker 4 (01:08:10):
Definitely. But as far as secrets go, if you’re crafty enough, you can get them. But I haven’t seen any examples of that yet.
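[Editor's note: to underline the point above that Base64 is encoding, not encryption, reversing it takes one standard-library call and no key. The credential string here is obviously made up.]

```python
import base64

# Base64 round-trip: decoding needs no secret, so a Base64-"protected"
# credential sitting in a script is effectively plain text.
secret = "apiuser:hunter2"  # made-up example credential
encoded = base64.b64encode(secret.encode()).decode()
decoded = base64.b64decode(encoded).decode()
```

Anything that must not be readable belongs in a real secrets store (GitHub Actions secrets, a password manager, a vault), not in an encoded string in the repo.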

Speaker 2 (01:08:24):
I know Ross Derewianko’s favorite pastime is

Speaker 3 (01:08:29):
Going, I’m looking at the

Speaker 2 (01:08:30):
script on GitHub and going, yeah, and doing ethical disclosure of those secrets: you might want to take this down and rotate all of these secrets. What about you, Bobby? What sort of things have you seen out there?

Speaker 5 (01:08:52):
I think it’s really just passing things in a way that’s insecure. So throwing something into a Slack channel, for example: why do that when you have 1Password? So it’s just things like that. I haven’t seen anything that was majorly, this is a serious, serious issue that’s going to break everything and we need to change our practices altogether. But just little things like that, where people aren’t being cognizant of the fact that these are technically insecure channels, so let’s do it the right way.

Speaker 3 (01:09:37):
Aren’t they all? What about, how about you, Marcus?

Speaker 2 (01:09:41):
Well, probably the funniest one I’ve seen was when lockdown started here in Australia, and all of a sudden everybody had to go and work from home. And that was where the idea of having your MDM or RMM, or however you were managing these devices, rely on internal networks and not be publicly facing proved to be one of the worst ideas ever. And seeing an organization in that scenario and struggling. But of course, they had an admin account on all of those machines with the same password. And so there’s this big discussion taking place as to how can we work out how to use this to try and solve problems without exposing that secret. And then we discovered that all the users already knew that password, because they’d asked one of their team leaders, who had exited the company two years ago, and they’d just text him whenever they needed the password, because he knew it. So this secret we were trying to work out how to protect was not a secret, and it had not been a secret for some time. And everyone had their own version of it.

Speaker 3 (01:10:54):
Over-engineering things.

Speaker 2 (01:10:59):
And that’s the reason why we don’t have every machine having the same admin account on it everywhere. Kids. What about you, Charles?

Speaker 3 (01:11:08):
Oh, I got one from last night. So last night I was up at 1:30 to go to the restroom, and I see this light from under my kid’s door. So I knock and open the door, and the kid’s on Snapchat, of course, quickly hides the phone under the pillow, yada, yada, yada. I’m like, I have that blocked for this hour in Screen Time. Well, their mom decided to just give them the Screen Time password so she didn’t have to go type it in, and for the last few weeks it’s been Snapchat whenever they want, which includes 1:30 according to the Screen Time thing, 3:30 as well. Wow. Yeah, just bypassing all the things. So not only can we put the secrets in whatever, but we can just give the secrets out. To your point, Marcus, about the boss just being like, oh yeah, here it is. I’m like, I built this wonderfully curated experience that Apple exposes to me through the Screen Time app, and I think everything’s cool, but no, because someone gave the kid the password.

Speaker 4 (01:12:42):
Well, thanks to Jamf for implementing the LAPS endpoints recently. That’s amazing for all of us that keep similarly named admin accounts on client machines. So we’re equally as guilty of maintaining secrets in a way that is kind of insecure.

Speaker 2 (01:13:02):
And look, people did it because they needed to, and they needed to get stuff done. But providing more secure and less foolish, less reckless ways of being able to do things is awesome to see.

Speaker 4 (01:13:22):
Absolutely. And

Speaker 3 (01:13:24):
We will go ahead and put a link in, because Jamf released a paper on those back in early August, I think, when they released those. And we’ll add that to the show notes as well. And

Speaker 4 (01:13:37):
HCS has a good white paper on it as well, if you’re familiar with them.

Speaker 2 (01:13:44):
The HCS guides were an enormous part of my arsenal when I was a Jamf integrator, and still, as an SE, they’re very useful tools to have out there. They do a great job of providing that documentation.

Speaker 3 (01:14:00):
Craig plays a mean bass, so there’s,

Speaker 2 (01:14:06):
Alright, well, thanks very much to our sponsors this week, that’s Kandji, and thanks very much to our Patreon subscribers, but also Bobby and Ryon, thanks so much for joining us here on the Mac Admins Podcast. If people want to find you on the internet, where can they go?

Speaker 4 (01:14:24):
On the MacAdmins Slack is probably the easiest way. For all the listeners that are on MacAdmins, you can find me; my name is Advisory Ryon, but all together, AdvisoryRyon, with the “on” at the end of it. And Bobby’s also on MacAdmins.

Speaker 5 (01:14:40):
Yes, you can find me under Mac in Black. Mac in black, just as it sounds.

Speaker 2 (01:14:48):
Awesome. Well, this has been great hearing about GitHub Actions, and as Charles said, we’re going to put a bunch of stuff in the show notes for people who want to find out more about it. And listeners, we’re interested in hearing what you are using for automation, whether it be GitHub Actions or anything else. Let us know. We’d love to hear about it and maybe share that with our listeners. So thanks, everybody, and we’ll see you next time.

Speaker 4 (01:15:15):
Thanks, Marcus. Thanks,

Speaker 5 (01:15:16):
Charles. Thank you

Speaker 6 (01:15:16):
Guys. The Mac Admins Podcast is a production of Mac Admins Podcast LLC. Our producer is Tom Bridge. Our sound editor and mixing engineer is James Smith. Our theme music was produced by Adam Codega the first time he opened GarageBand. Sponsorship for the Mac Admins Podcast is provided by the macadmins.org Slack, where you can join thousands of Mac admins in a free Slack instance. Visit macadmins.org. And also by Technolutionary LLC: technically, we can help. For more information about this podcast and other broadcasts like it, please visit podcast.macadmins.org. Since we’ve converted this podcast to APFS, the funny metadata joke is at the end,

Speaker 2 (01:16:00):
So it’s great to be able to use GitHub actions to solve problems, but what are some of the other limitations you’d love to address in the workflows? Or what are some of you, do you

Speaker 7 (01:16:08):
Live in Minnesota? How about you own your own home but still rent your electricity? If you pay more than $70 a month in electricity, then you might,

Speaker 3 (01:16:15):
I was pulling up the link for the Penn State GitHub Actions talk in GitHub, and YouTube wouldn’t shut the hell up. So I’m so sorry. This is why I try to, sorry, James. Yeah, put the links in before we start recording, but as

Speaker 2 (01:16:31):
We tried to say, it’s really actually a shit show before it’s edited, but now you

Speaker 3 (01:16:35):
Get to see, sorry, James.

Speaker 4 (01:16:37):
Fair enough. Alright,

Speaker 2 (01:16:38):
I’ll start that again. So we’ve spoken about how we can use GitHub now I’m completely ruined. Sorry, James. I.



Patreon Sponsors:

The Mac Admins Podcast has launched a Patreon Campaign! Our named patrons this month include:

Rick Goody, Mike Boylan, Melvin Vives, William (Bill) Stites, Anoush d’Orville, Jeffrey Compton, M.Marsh, Hamlin Krewson, Adam Burg, A.J. Potrebka, James Stracey, Timothy Perfitt, Nate Cinal, William O’Neal, Sebastian Nash, Command Control Power, Stephen Weinstein, Chad Swarthout, Daniel MacLaughlin, Justin Holt, William Smith, and Weldon Dodd

Mac Admins Podcast Community Calendar, Sponsored by Watchman Monitoring

Event Name | Location | Dates | Format | Cost
XWorld | Melbourne, AUS | 30-31 March 2023 | TBA | TBA

Upcoming Meetups
Event Name | Location | Dates | Cost
Houston Apple Admins | Saint Arnold Brewing Company | 5:30pm, 4th March 2024 | Free

Recurring Meetups
Event Name | Location | Dates | Cost
London Apple Admins | Pub, online weekly (see #laa-pub in MacAdmins Slack for connection details), sometimes in-person | Most Thursdays at 17:00 BST (UTC+1), 19:00 BST when in-person | Free
#ANZMac Channel Happy Hour | Online (see #anzmac in MacAdmins Slack for connection details) | Thursdays 5 p.m. AEST | Free
#cascadia Channel Happy Hour | Online (see #cascadia channel in Mac Admins Slack) | Thursdays 4 p.m. PT (US) | Free

If you’re interested in sponsoring the Mac Admins Podcast, please email sponsor@macadminspodcast.com for more information.

Social Media:

Get the latest about the Mac Admins Podcast, follow us on Twitter! We’re @MacAdmPodcast!


Back MAP on Patreon

Support the podcast by becoming a backer on Patreon. All backer levels get access to exclusive content!