Thursday, 16 May 2024

File Naming Tips (dates)

It is common for users (all of us) to want to put a date in the name of a file or folder to indicate the period it refers to, and to ensure this is preserved even if the file's modification date gets changed.

There is nothing wrong with this; however there are some simple, basic things you should consider which will make it clearer and work better.

As you may be aware, the 'standard' UK date format is DDMMYY and the standard US date format is MMDDYY. Neither of these works well for file names, for a simple reason: computers sort file names alphabetically and numerically, so these two formats do not sort the way you expect, which is in ascending or descending date order.

For example - to a human (using UK date formats in this example) -

180121 (i.e. 18th, January, 2021)

Is older than -

130224 (i.e. 13th, February, 2024)

However to a computer the 'number' 130224 is lower than 180121, and hence the computer will not sort these in date order as you might intend. Even worse, a date like 060622 is completely ambiguous - it could be 6 June 2022 (DDMMYY) or 22 June 2006 (YYMMDD).

There is a very, very simple solution. Use the following date format in file names -

YYMMDD

This will always sort in the intended date order. Using the same two example dates -

210118 (i.e. 18th, January, 2021)

is older and lower than -

240213 (i.e. 13th, February, 2024)


Note: writing dates with separators, as in 18/01/21 (UK) or 01-18-21 (US) or (if you are Italian) 18.01.21, makes no difference - it will still sort wrongly. So stick to 210118, i.e. YYMMDD.
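This is easy to demonstrate in a shell: sort shows the two orderings, and the date command (on macOS or Linux) will generate a YYMMDD stamp for you. The file names here are just the example dates from above:

```shell
# DDMMYY names: 18 Jan 2021 (180121) sorts AFTER 13 Feb 2024 (130224) - wrong order
printf '180121\n130224\n' | sort
# 130224
# 180121

# YYMMDD names: the alphabetical sort now matches chronological order
printf '210118\n240213\n' | sort
# 210118
# 240213

# Generate today's date in YYMMDD form, ready to use in a file name
date +%y%m%d
```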

Tuesday, 14 February 2023

UK ISPs and (lack of) IPv6

TCP/IP is the protocol used to access the Internet. Users may be familiar with IPv4 style numeric addresses which look like 192.168.0.1 - that is four numbers each of which can be from 0 to 255. An example IPv6 address looks like this 2001:db8:3333:4444:5555:6666:7777:8888

Whilst there is a huge amount of waste - some organisations have more IPv4 addresses than they need, and some possible addresses are reserved - the world officially ran out of available IPv4 addresses in November 2019. Fortunately, by using Network Address Translation (NAT), the impact of this on the average user is minimal. Nevertheless it was rightly deemed necessary for an official solution to be created, and this is the newer address protocol known as IPv6.

Both IPv4 and IPv6 addresses come from finite pools of numbers. For IPv4, this pool is 32 bits (2^32) in size and contains 4,294,967,296 IPv4 addresses. The IPv6 address space is 128 bits (2^128) in size, containing 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses.

A lot of websites already fully support IPv6 and so do all computer/device operating systems such as macOS, iOS, Linux and Windows along with nearly all currently used network equipment. Unfortunately the sad reality is that the overwhelming majority of UK ISPs still do not support IPv6. 😦

See - Update on IPv6 Plans for Virgin Media, TalkTalk, Plusnet and Vodafone

Note: British Telecom is the main exception to this as they do support IPv6. (The mobile phone 5G networks also support IPv6, as the use of IPv6 was part of the 5G design process.)

Since I could not rely on the majority of UK ISPs to provide me IPv6 connectivity and since I am a hardcore techie, I decided to solve this myself. This was done by obtaining a 6in4 tunnel which allows sending IPv6 over an IPv4 connection.

When IPv6 was first being rolled out there were a number of free 6in4 tunnel providers, but most have now ceased to be available, because they assumed most ISPs would be able to offer native IPv6 by now, or that we customers would beat up our providers to get it. (Fat chance!)

The most well known remaining 6in4 tunnel provider is Hurricane Electric, and I did indeed use them successfully to create and use a 6in4 tunnel; all the IPv6 tests then passed. However, as Hurricane Electric are based in the US, this had an unintended side effect: some IPv6 websites considered me to also be located in the US. This has recently become more and more of a problem, with a number of TV streaming services blocking my access as a result.

As mentioned, most other tunnel providers no longer offer a service, but fortunately I have been able to find one that, unlike Hurricane Electric, does offer choices as to where the tunnel appears to be located. This one - TunnelBroker.ch - therefore enabled me to create a 6in4 tunnel that is located in the UK, and hence the TV streaming services are now happy. 😃
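For reference, a 6in4 tunnel of this kind can be brought up by hand on a Linux box using iproute2. Every address below is a documentation placeholder - your tunnel provider gives you the real server endpoint and routed /64 - and this is only a sketch of the general shape of a 6in4 setup, not the Draytek configuration I actually use:

```shell
# Create the 6in4 (IP protocol 41) tunnel interface
# remote = the tunnel provider's IPv4 endpoint, local = your own public IPv4 address
ip tunnel add tun6in4 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
ip link set tun6in4 up

# Assign your side of the provider-allocated point-to-point /64
ip addr add 2001:db8:1234::2/64 dev tun6in4

# Send all IPv6 traffic through the tunnel
ip -6 route add default dev tun6in4
```

Requires root, and your router/firewall must allow protocol 41 through for the tunnel to work behind NAT.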

For those interested this site https://test-ipv6.com/ is a good one to test if you have working IPv6 connectivity.

This site https://whatismyipaddress.com/ is a good one to show what your public IPv4 and IPv6 addresses are.

This one https://tools.keycdn.com/geo shows your presumed geographic location for IPv6.

Note: Whilst it is possible to configure macOS, Linux and Windows themselves to establish the 6in4 tunnel connection, it is not possible to do this on iOS, Apple TV, or other sealed-configuration devices. I therefore set the tunnel up in my own Draytek Vigor ADSL router, and it then provides IPv6 addresses to all devices on my home network, including my Apple TV box.

As a bonus since the IPv6 tunnel belongs to me and is nothing to do with my ISP, if/when I change ISP the tunnel settings will be completely unaffected and my IPv6 addresses will also be unchanged.

Monday, 11 November 2019

macOS Catalina - How to use imaging even though Apple don't want you to

Apple have tightened the security with each new version of macOS, and in general this is clearly a good thing.

Apple have also removed a number of historically available functions - including ones used in the past by many Mac administrators. This arguably is a mixture of good and bad.

The latest casualty in macOS Catalina is the loss of the --volume option in startosinstall.

Losing the --volume option means you can no longer boot from an external drive and automate the installation on to the internal drive, along with (optionally) flags to erase the internal drive and install packages. Now you can only do this by booting from the internal drive itself and then running the startosinstall command, which in turn means going through the Apple Setup Assistant at least once. This could be workable for wiping and reusing an existing Mac, but only if you have a valid login when the Mac is returned by the previous user.
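For context, the lost workflow looked roughly like this when booted from an external drive - the installer path, volume name and package path here are illustrative, and some of the optional flags require newer 10.13/10.14 installers:

```shell
# Pre-Catalina: boot from an external drive and target the internal volume
# --volume is the option Apple removed from Catalina's startosinstall
"/Applications/Install macOS Mojave.app/Contents/Resources/startosinstall" \
    --agreetolicense \
    --volume "/Volumes/Macintosh HD" \
    --eraseinstall \
    --installpackage /path/to/extra-config.pkg
```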

This seems an extremely petty change, since the GUI macOS installer still lets you boot from an external drive, run the installer and specify a different drive to install on to. Clearly there cannot be any technical reason for this change. 😕

Ironically the 'solution' to the loss of the --volume option is to go back in time and return to using AutoDMG and an image restoration process e.g. like DeployStudio (run locally).

It should be noted that, due to the now extremely aggressive implementation of Security & Privacy in Catalina, one can no longer run normal DeployStudio workflows to configure a Mac unless you also install DeployStudio Runtime on the target Mac and give it - and Terminal/bash/scripts - full disk access permission. Clearly you would not do this on a Mac you are configuring.

It is however possible to do the following.

  1. Use Mager Valp's AutoDMG (currently a beta version for Catalina compatibility) to build a Catalina image
    1. The source macOS Installer must be inside a disk image; I happen to use Greg Neagle's installinstallmacos.py script to download the macOS Installer and this automatically puts it in a disk image
    2. Make sure you have no other volumes called 'Macintosh HD' mounted, as otherwise AutoDMG gets 'confused' as to which to use
    3. This includes the normally invisible 'Macintosh HD - Data' volume now included with Catalina; I therefore have my USB boot drive named differently
  2. Use Rich Trouton's old first-boot-package tool to run scripts and installers during the first boot of the restored image
  3. Use a DeployStudio server to host the AutoDMG image
  4. Use a USB boot stick with a full install of Catalina and use Disk Utility to erase the target (internal) drive if needed
  5. Use DeployStudio Runtime to restore the AutoDMG created image
  6. On first boot the restored Mac will then run the scripts/installers provided by Rich Trouton's tool. In my case I run an installer created using Greg Neagle's pycreateuserpkg to create an initial local admin account, Mager Valp's SkipAppleSetupAssistant pkg, my own script to set initial preferences, and then Greg Neagle's munkitools installer. I also run another of my own scripts to replace the DeployStudio function of automatically naming restored computers.

I could have included an installer to enrol into our MDM, e.g. a Jamf QuickAdd.pkg; however I intend to use DEP for Catalina.

The above therefore pretty much restores past 'imaging' capabilities.

Thursday, 17 October 2019

Auto-naming Mac computers using values from a database

Long time Mac admins may have used a tool like DeployStudio to 'build' Macs before issuing them to users.

DeployStudio can install the operating system, set various settings and install files and programs. One of the tasks it can perform as part of its workflow is to automatically set the name of the computer based on a 'database' stored within DeployStudio.

I always found this auto-naming of computers useful as it allowed using a name format that was completely under the control of the Mac administrator and therefore could be for example based on asset numbers rather than the computer serial number.

Unfortunately it seems with macOS Catalina Apple have finally put the last nail in the coffin for using DeployStudio as a tool. (I had been able to devise a way to use it for macOS Mojave even with T2 chip equipped Macs.)

Equally unfortunately it seems most other Mac management tools e.g. Jamf do not have a similar facility and at best leave you to write a script which typically names the computer based on its serial number.

Whilst using a serial number is a possibility, and achieves the main goal of being a unique value that can be used to track computers on a network, it is not the format I prefer. And whilst macOS is perfectly happy with a serial number as a computer name, that format would not work as well in other operating systems - especially Windows - which would lead to a loss of consistency in naming computers.

I have therefore devised a script of my own to auto-name Macs using a database-sourced value, i.e. the way DeployStudio works. This script could be run via Jamf after enrolment. The example script listed below uses the database from DeployStudio. Clearly it would be massive overkill to set up and run a DeployStudio server solely for the purpose of hosting the database of computer names, but if you have an existing DeployStudio server you can continue to use it for just this purpose. In theory this approach could be relatively easily modified to use an alternate database, although directly using something like MySQL would then require having the MySQL client installed. In my case I am also considering using the free open source IT asset management system 'Snipe-IT', which like DeployStudio has a REST API.

Note: In order to make this script as robust as possible, and in particular more suitable for Jamf, I went to considerable effort to process the XML returned by DeployStudio in a way that avoided having to write the results to a file - that is, I have managed to do all the processing using pipes and stdin. This precluded using the defaults command, for example. I was also careful to only use tools built in to macOS.

#!/bin/sh

# DeployStudio connection settings
host='https://deploystudio.domain.com:60443'
adminuser='deploystudiouser'
adminpass='deploystudiopass'

# Get the Mac serial number
# Your choice to use ioreg or system_profiler
# MAC_SERIAL_NUMBER=$(/usr/sbin/ioreg -l | /usr/bin/grep IOPlatformSerialNumber | /usr/bin/awk '{print $4}' | /usr/bin/cut -d \" -f 2)
MAC_SERIAL_NUMBER=$(/usr/sbin/system_profiler SPHardwareDataType | /usr/bin/grep 'Serial Number (system)' | /usr/bin/awk '{print $NF}')

# Get the Mac hostname from DeployStudio
# DeployStudio's REST API returns a binary plist; this is converted to text XML and the host name ('cn') key read from it, all via pipes/stdin
# It is assumed your DeployStudio uses the default option of indexing by Mac serial number; if you used MAC addresses this will not work
result=$(/usr/libexec/PlistBuddy -c "Print $MAC_SERIAL_NUMBER:cn" /dev/stdin 2> /dev/null <<< "$(/usr/bin/curl -s -k -u "$adminuser:$adminpass" "$host/computers/get/entry?id=$MAC_SERIAL_NUMBER" | /usr/bin/plutil -convert xml1 -r -o - -- -)")

# If a result is returned from DeployStudio then use it, else fall back to the model name
# LocalHostName may not contain spaces, so the fallback model name is hyphenated
if [ $? -eq 0 ]; then
    echo "$result"
else
    result=$(/usr/sbin/system_profiler SPHardwareDataType | /usr/bin/grep "Model Name" | /usr/bin/awk '{for(i=3;i<=NF;++i)printf $i""FS; print ""}' | /usr/bin/sed 's/ *$//; s/ /-/g')
    echo "$result"
fi

# Set Bonjour and Computer names
/usr/sbin/scutil --set LocalHostName "$result"
/usr/sbin/scutil --set ComputerName "$result"
/usr/bin/dscacheutil -flushcache

exit 0

Friday, 9 November 2018

Apple REALLY don't want you to use Imaging anymore!

Apple have for quite some time been warning Mac admins to switch to using DEP as a means of configuring Macs, instead of various forms of disk imaging workflow. Linked to using DEP, they clearly also assume everyone will get a brand new Mac, or that they or their admins will use RecoveryHD or Internet Recovery to wipe and reinstall. (It is necessary to wipe and reinstall the operating system in order to trigger DEP enrolment.)

Whilst there are indeed some advantages to the DEP approach there are also some disadvantages - something Apple seem blinkered to. In particular, contrary to what Apple seem to believe, it is not the case that every new employee gets a brand new Mac fresh out of the box; in reality it is far, far more common that they will be issued a previously used laptop that needs wiping and rebuilding.

Yes, it is possible to do this with DEP, using RecoveryHD or worse Internet Recovery to first wipe and reinstall the operating system, but this is orders of magnitude slower than a local disk imaging system. This is made worse by the fact that Apple have not provided a means of 'caching' Internet Recovery images. With Recovery images now over 6GB in size, even organisations with generous high speed Internet links will find this a pain.

Imagine the torture suffered by Mac admins in countries with far less advanced Internet links or worse still capped usage levels!

So, I maintain there still is a case for having a disk imaging solution. (Using a disk imaging approach does not prevent then using DEP after imaging a clean copy of the operating system.)

Apple, as mentioned, have been discouraging disk imaging and possibly thought they had managed to completely disable this approach in High Sierra, because they removed the --volume option from the startosinstall command. Fortunately for me, somehow the way I used this via a High Sierra based DeployStudioRuntime image still worked, even though it is not supposed to. Sadly DeployStudio has not been updated to allow successfully creating Mojave DeployStudioRuntime images.

Trying to run the equivalent script under Mojave does not work, because with this approach the --volume option definitely is killed off. The startosinstall command will therefore only target the active boot drive, which is no help.

I therefore started to consider previous approaches that had worked for older OS releases, for example the old approach of restoring a previously installed boot drive - an approach commonly referred to as 'thick' imaging. This approach is far from desirable but I might have been driven to it. Before I tried that however I decided to look at my previous 'thin' imaging approach which was based on creating a thin install image using the popular AutoDMG tool and then using a DeployStudio workflow to restore that to an APFS volume.

Well, lucky me and ya boo sucks to you Apple! It turns out AutoDMG does now support making a Mojave thin image, and it also turns out that by booting from a full working Mojave disk and running the DeployStudioRuntime utility you can then run the workflow to restore this thin image.

Note: To use an external drive on new Macs so you can boot in to a copy of Mojave and run the DeployStudioRuntime tool you need to turn off SecureBoot.

This approach, which I had previously abandoned for High Sierra, historically does not include triggering any firmware updates. So far, however, the only models of Mac I need to use this approach for - i.e. Macs that can only boot in to Mojave, e.g. the Mac mini Late 2018 - do not yet have any firmware updates. Older Macs, even the MacBook Pro 15" 2018, can boot in to High Sierra and use my startosinstall based approach even to install Mojave.

Tuesday, 16 October 2018

UK - Dumb Boilers vs Smart Thermostats

It is increasingly common these days for home owners to buy a 'smart' thermostat to control their central heating. Indeed arguably smart thermostats are the number one category of smart home device. The leading member of this category is of course the Nest Learning Thermostat. (Now version 3.)

Originally such smart thermostats, whilst indeed having various additional smartness, actually worked in the same way as the original dumb thermostats: they basically sent a signal to the boiler asking for heat, or saying 'stop, I am warm enough', i.e. a basic on or off control. This approach involves the boiler running at either 100% power or 0% power, i.e. fully on or fully off.

However newer models of smart thermostat, including the aforementioned Nest Learning Thermostat v3, also support an alternative approach which allows setting a target temperature so that the boiler can adjust the level it needs to run at to hold that target. This means that instead of constantly starting and stopping, the boiler will run continuously at a lower power level to keep the temperature more even. This can create additional energy savings on top of more efficient schedules, perhaps an additional 5%. This approach is referred to as modulating control.

Figure 1 - Traditional on/off control


Figure 2 - Modulating control


As you can see from these diagrams, with traditional on/off control the boiler runs at 100% until it reaches the desired temperature and then turns off. It will, however, overshoot the desired temperature as the heat is released by your radiators, and then undershoot as the radiators cool down whilst waiting for the boiler to heat them up again. With modulating control the amount of power (heat) the boiler produces is reduced as it approaches the desired temperature, meaning it does not overshoot and instead reduces power to the level needed to stay there.
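The difference between the two modes can be sketched as a toy calculation. This is purely illustrative proportional control (temperatures in tenths of a degree C, and the target and band values are invented), not the actual OpenTherm or eBus algorithm:

```shell
# Toy modulating control: boiler power as a function of room temperature
# Target 21.0C (210 tenths), proportional band 2.0C (20 tenths)
target=210
band=20

power_for() {
    temp=$1
    error=$((target - temp))
    if [ "$error" -ge "$band" ]; then
        echo 100                       # far below target: full power (on/off mode always does this)
    elif [ "$error" -le 0 ]; then
        echo 0                         # at or above target: off
    else
        echo $((error * 100 / band))   # inside the band: modulate power down
    fi
}

power_for 180   # 3.0C below target -> 100
power_for 200   # 1.0C below target -> 50
power_for 210   # at target -> 0
```

An on/off thermostat only ever produces the 100 and 0 cases; the middle branch is what modulating control adds, and it is that gradual reduction which avoids the overshoot in Figure 1.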

Now, in order to benefit from this more efficient modulating control you need both a smart thermostat and a boiler that support the feature. As mentioned, the Nest Learning Thermostat v3 supports this, as do various Honeywell Evohome smart thermostats and the Tado Thermostat. There is an official open standard called OpenTherm, which was originally devised by Honeywell and later released as an open standard. This OpenTherm standard is supported by the Nest v3, Evohome and Tado amongst others; even Drayton offer an OpenTherm compatible thermostat. There also seems to be another alternative standard generally referred to as eBus, aka energy Bus; however only Tado support this as well as OpenTherm. (It is not supported by Nest or Evohome.)

Unfortunately here in the UK many of the boiler manufacturers are proving very unhelpful. Most do now provide at least some boiler models that support modulating control as well as the traditional on/off control, but only with their own proprietary thermostats. Whilst they do not say so, it seems their proprietary thermostats are using the eBus standard. As such this precludes using the Nest etc. in modulating mode, although the Tado would still work.

What is even more annoying is that Vaillant, a leading brand, actually sell their boilers with OpenTherm support in the Netherlands; they do this by selling their own eBus to OpenTherm bridge module - the VR33 - to convert their eBus signals to OpenTherm signals. However Vaillant do not sell this module in the UK, and if you get one and have it fitted, even by an official Vaillant engineer, they will invalidate the warranty on your entire Vaillant system. Remember, this is an official Vaillant part and one that does work on UK boilers.

Worcester-Bosch are a little better; they have their own proprietary variation on eBus called EMS. Bosch own several brands throughout Europe: Worcester-Bosch in the UK, Nefit in the Netherlands, Junkers in (I believe) Portugal and of course Bosch in Germany. Since OpenTherm is very common in the Netherlands, Nefit have produced a module to convert Bosch's EMS to OpenTherm. See - this. However there is also another interesting possibility. Worcester-Bosch also sell an adapter to allow connecting their EasyControl smart thermostat, which speaks only their proprietary EMS protocol, to OpenTherm boilers; it should also work with their older Wave smart thermostat. This adapter is described as bi-directional, which might mean it can also do the reverse and allow an OpenTherm smart thermostat to connect to a Worcester-Bosch EMS boiler. The Nefit module is not sold in the UK but the Worcester-Bosch adapter is officially available.

Both OpenTherm and eBus have additional benefits: they can provide error diagnostics to your smart thermostat, so that you can be far better informed of either a potential problem or an actual fault, and they allow a smart thermostat to control not only the central heating but also your hot water scheduling. I have not seen anything official, but Tado at least suggest that the eBus standard is technically superior to the much older OpenTherm standard. I also get the impression eBus may be a purely European standard at this point - hence the fact that Nest and Honeywell aka Evohome do not support it.

To summarise -

  • UK boiler manufacturers try and lock you in to their own proprietary 'smart' thermostat
  • Most UK boilers do not support OpenTherm and do not say they support eBus (but in reality many do)
  • Vaillant, who at least in the Netherlands do support OpenTherm, are deliberately refusing to do this in the UK and even go as far as punishing anyone who fits their own OpenTherm bridge module

So either you have to run your boiler in old fashioned, dumber on/off mode, or get the Tado Thermostat, or accept being locked in to the boiler manufacturer's own proprietary 'smart' thermostat.

Note: If your Vaillant boiler is out of warranty you could consider using that VR33 module.

A list of potentially OpenTherm compatible boilers is available here.

Sunday, 5 August 2018

Extracting EFI firmware for standalone install in High Sierra

When Apple released High Sierra they included an EFI firmware updater built in to the Install macOS High Sierra.app. This was mainly to add support for booting from APFS volumes, but also as part of a plan to continuously check that the Mac firmware had not been infected by malware, and as a way of delivering potentially regular EFI firmware updates.

Unfortunately, since this firmware update was not available separately, and because it could not be automated as part of a traditional disk image based imaging process, e.g. DeployStudio, this caused some difficulties for Mac admins. As a result Mac admins quickly created a workaround in the form of 'extracting' the EFI firmware updater from a standard Install macOS High Sierra.app so it could be run separately. This indeed worked fine for High Sierra 10.13.1, but Apple changed things again in subsequent versions - at least in 10.13.3 (I don't have a copy of 10.13.2 to check) - and the suggested approach became broken.

Note: Because correctly deploying High Sierra with the built-in firmware updates is effectively impossible with a disk imaging approach Apple say you should instead use the DEP - Device Enrolment Program approach instead. This has its own complexities hence why some Mac admins came up with the original means of extracting the EFI firmware updater.

This script gets round the change introduced in 10.13.3 once more, and works for 10.13.6; I would expect it also works for 10.13.3 to 10.13.5 inclusive. Basically it includes a copy of a sub-script that is no longer included by Apple as of 10.13.3 and later. I also use the munkipkg tool rather than pkgutil, so you will need to download munkipkg from here https://github.com/munki/munki-pkg and install it in /usr/local/bin
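One way to do that install step - a sketch, assuming the munkipkg script still sits at the top level of the munki-pkg repository as it does at the time of writing:

```shell
# Fetch the munki-pkg repository and put the munkipkg script on the PATH
git clone https://github.com/munki/munki-pkg.git
sudo install -m 755 munki-pkg/munkipkg /usr/local/bin/munkipkg

# Confirm it is in place
ls -l /usr/local/bin/munkipkg
```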

It should be noted that the change Apple made in (presumably) 10.13.3 was to add further firmware updaters to the same mechanism, in addition to the original EFI firmware update that is of most concern to Mac admins. Some of the additional updaters cover SSD firmware and USB-C firmware. It is, to me at least, impossible to tell if my 'fixed' version happens to install those as well - I would suspect not.

Therefore, as Apple say, you should not do this. However, at your own risk, here is my fixed script.

https://github.com/jelockwood/extract-firmware

If the script completes successfully the custom built installer package is available at /tmp/FirmwareUpddateStandalone/FirmwareUpdateStandalone.pkg

You may also want to check your Mac to see if it has the correct EFI firmware; this is most easily done by downloading and running this free tool - https://github.com/duo-labs/EFIgy-GUI