Home IoT – Part 4 – The ‘final’ solution and the things I learned

This blog post is part of a series of posts where I’ve automated, and Internet of Things (IoT) enabled, my garage door, swing gate and Bluetooth-enabled garden irrigation system. Links below to each part!

Last changes from lessons learnt

So it’s been a good couple of weeks now with the home IoT setup, and everything has been running well. However, it hasn’t all been perfect, and some changes have been made since I last blogged.

Firstly, a weird issue kept cropping up where the Holman BTX-8 Bluetooth server would stop working after a couple of days. I suspect this is because the ESP32 is always connected to it, unlike the phone app, which connects only when it writes a characteristic. To mitigate this, I’m simply calling ESP.restart() every night at 2 am via the webserver to give the Holman device some time alone, which seems to do the trick.
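
Nothing fancy is needed for the nightly kick, either. Here’s a sketch of the idea, run from a 2 am scheduled task on an always-on box; the '/restart' route and esp32.local hostname are illustrative, not my actual endpoint:

# Hit the ESP32 nightly so it calls ESP.restart(). '/restart' is a
# hypothetical route name; esp32.local stands in for the real address.
try {
    Invoke-WebRequest -Uri "http://esp32.local/restart" -TimeoutSec 10 | Out-Null
    Write-Host "ESP32 restart requested."
}
catch {
    # The board may reboot before responding, so a failed call is expected.
    Write-Warning "ESP32 restart call failed: $_"
}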

Another significant change since the last write-up is the way I handle the webserver on the ESP32 itself. While part 1’s example code gave me a great starting point, it became annoying when wanting to make small front-end tweaks to the webserver, as every one meant recompiling the C++ code. To fix this, I have since moved the front-end code into a standalone .Net Core web app that makes HTTP GETs to the ESP32.

.Net Core WebApp communicating with HTTP Get to the ESP32

This approach has made life so much better. Not only does the ESP32 now essentially work like an API service, but I can keep improving the .Net Core web app and allow external access to it, backed by Azure AD authentication. Now, with Azure AD permissions, I can grant just me and the stakeholder access to the app, which can be reached from anywhere in the world.

For those interested in why I deployed the .Net Core app on-premises on an IIS server vs. using Azure: it was a cost-saving exercise, plus I get to use my own private DNS and PKI infrastructure.

The .Net Core app has been super valuable. Not only do I no longer need to carry my garage and gate remotes around with me, but over the past few weeks I’ve come to find how hit and miss IFTTT can be. Sometimes there is a lengthy 2-3 minute delay making the call to the PowerShell runbook, which is not great when you’re sitting in your car, waiting for the gate and garage to open. The .Net Core app doesn’t have any of these delays!

Below is a code snippet of the ActionResult in the .Net Core web app which makes the HTTP GET calls. All I’m doing is providing this action with the full URL (base64 encoded) from the front-end with some simple JS and jQuery. Simple, and does the job.

        // Decodes the base64-encoded URL passed in from the front-end JS.
        public static string Base64Decode(string base64EncodedData)
        {
            var base64EncodedBytes = System.Convert.FromBase64String(base64EncodedData);
            return System.Text.Encoding.UTF8.GetString(base64EncodedBytes);
        }

        // Proxies an HTTP GET to the ESP32 and returns the response body.
        public async Task<ActionResult> GetFromExternal(string url)
        {
            string urlDecode = Base64Decode(url);

            // Note: in a busier app, reuse a single static HttpClient
            // rather than creating one per request.
            var client = new HttpClient();

            string contents;
            try
            {
                contents = await client.GetStringAsync(urlDecode);
            }
            catch
            {
                contents = "Something went wrong.";
            }
            return Content(contents);
        }
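
If you want to poke the action without the front-end, a quick test from PowerShell works too. A hedged sketch; the app hostname and the Home/GetFromExternal route are assumptions based on my description above:

# Base64-encode the ESP32 URL and call the MVC action with it.
$espUrl  = "http://esp32.local/gate"   # hypothetical ESP32 endpoint
$encoded = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($espUrl))

# URL-escape the base64 string ('+', '/' and '=' aren't query-string safe).
Invoke-RestMethod -Uri ("https://iot.example.local/Home/GetFromExternal?url=" + [Uri]::EscapeDataString($encoded))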

Making the garden irrigation timing smarter using BOM data

Another improvement I’ve made is to the irrigation timings, adding some smarts that change the running times based on the actual weather.

Like many, every summer I tend to forget to bump up the watering times, or leave it too late, and my garden suffers because of it. To stop this from ever happening again, I have put together two PowerShell scripts which also run locally to save on cost.

The first script pulls the last few days’ temperature and evaporation data from the Bureau of Meteorology (BOM). Using that data with a few switch statements, the script creates a JSON file with the timings to run the stations for that day. The range could be from not at all right up to an extreme 40 minutes per station, which would only happen if there were maximum temperatures over 50 degrees for a whole week!

The second script reads the JSON file and waters the garden based on that schedule if it’s one of the permitted watering days.
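
A trimmed-down sketch of that second script is below. It ignores the staggered start times in the JSON for brevity, and the /irrigation query-string parameter names are placeholders for my PARAM_INPUT_* values on the ESP32:

# Read the timings JSON and, on a permitted watering day, run each station.
# Parameter names on the /irrigation endpoint are placeholders.
$schedule = Get-Content (Join-Path $PSScriptRoot "payload.json") -Raw | ConvertFrom-Json

$wateringDays = @("Wednesday", "Saturday")   # example two-day roster
if ((Get-Date).DayOfWeek.ToString() -notin $wateringDays) { return }

foreach ($entry in $schedule | Where-Object { $_.runtime -gt 0 }) {
    $uri = "http://esp32.local/irrigation?action=1&station=$($entry.station)&hours=0&minutes=$($entry.runtime)"
    Invoke-WebRequest -Uri $uri | Out-Null
    Write-Host "Station $($entry.station) queued for $($entry.runtime) minutes."
}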

Two PowerShell scripts using BOM data to trigger irrigation run on the ESP32

Get BOM Data PowerShell script.

<#
        .SYNOPSIS
        Downloads BOM data and creates a JSON file.

        .DESCRIPTION
        This script creates a JSON file with irrigation timings to be invoked with another PowerShell script.
        The script is designed to work out the best irrigation timings based on the temperature over the last X days and some other factors, like how often the irrigation is allowed to run per WaterCorp regulations.

        .PARAMETER JSONFile
        Specifies the file name.

        .OUTPUTS
        JSON File used for the other PowerShell script to schedule irrigation.

        .EXAMPLE
        C:\PS> Invoke-IrrigationTimings.ps1 -JsonFile "payload.json"

        .NOTES
        Requires access to the internet.
#>
[CmdletBinding()]
param (
    [Parameter(Mandatory=$true)]
    [string]$JSONFile
)

Write-Host "`r`nInvoke-IrrigationTimings.ps1 is running" -ForegroundColor Cyan 
Write-Host "---------------------------------------`r`n" -ForegroundColor Cyan 

$a = "http://www.bom.gov.au/climate/dwo/{0}/text/IDCJDW6111.{0}.csv"

$datelast = (Get-Date).AddMonths(-1);
$datelast = $datelast.ToString("yyyyMM");

$date = Get-Date -f "yyyyMM"

$combined = @{};

$a -f $date
$data = (Invoke-WebRequest -Uri ($a -f $date)).Content
$combined = $data

$a -f $datelast
$data = (Invoke-WebRequest -Uri ($a -f $datelast)).Content
$combined += "`r`n"
$combined += $data

$combined = $combined -replace "�", "" -split "`r`n" | Where-Object { $_ -like ",*" -and $_ -notlike ",`"*" } 

$data = foreach ($point in $combined) {

    $data = $point -split ","
    New-Object psobject -Property @{
        Date                  = [datetime]::Parse($data[1])
        "Maximum temperature" = $data[3]
        "Minimum temperature" = $data[2]
        "Rainfall"            = $data[4]
        "Evaporation"         = $data[5]
    }
}
$data = $data | Sort-Object Date 

# Month-based lookup: how many days of history to consider (July = winter, none).
$daystocheckdata = switch ( Get-Date -f "%M") {
    { 11..12 -contains $_ -or 1..2 -contains $_ } { 5 }
    { 9..10 -contains $_ -or 3..5 -contains $_ } { 7 }
    { 6 -contains $_ -or 8 -contains $_} { 5 }
    { 7 -contains $_ } { 0 }
}
Write-Host -ForegroundColor Cyan "`r`nHow many days to check =" $daystocheckdata 

$data = ($data | Select-Object -Last ($daystocheckdata + 1)) | Select-Object -First $daystocheckdata

$data | Format-Table

$rainfall = ($data."Rainfall" | Measure-Object -Sum).sum
$evaporation = ($data."Evaporation" | Measure-Object -Sum).sum
$mintemp = ($data."Minimum temperature" | Measure-Object -Sum).sum
$maxtemp = ($data."Maximum temperature" | Measure-Object -Sum).sum


Write-Host "Rainfall =" ([math]::Round($rainfall, 2))
Write-Host "Evaporation =" ([math]::Round($evaporation, 2))
Write-Host -ForegroundColor Cyan "Net water balance (rain - evap) =" ([math]::Round($rainfall - $evaporation, 2)) "`r`n"
Write-Host "Sum Min Temp =" $mintemp
Write-Host "Sum Max Temp =" $maxtemp
Write-Host -ForegroundColor Cyan "Sum of Temps =" ($mintemp + $maxtemp) "`r`n"


$totalloss = ([math]::Round($rainfall - $evaporation, 2))
$totaltemp = ($mintemp + $maxtemp)

$calc1 = switch ($totaltemp) {
    { 401..1000 -contains $_ } { 40 }
    { 351..400 -contains $_ } { 30 }
    { 301..350 -contains $_ } { 25 }
    { 251..300 -contains $_ } { 20 }
    { 201..250 -contains $_ } { 15 }
    { 151..200 -contains $_ } { 10 }
    { 0..150 -contains $_ } { 0 }
}

$calc2 = switch ($totalloss) {
    { $_ -le -70 } { 40 }
    { $_ -le -55 } { 30 }
    { $_ -le -45 } { 25 }
    { $_ -le -35 } { 20 }
    { $_ -le -20 } { 15 }
    { $_ -gt -20 -and $_ -lt 10 } { 10 }
    { $_ -ge 10 } { 0 }
}

$calc = ([array]$calc1, $calc2 | Measure-Object -Maximum).Maximum
$calc_int = [convert]::ToInt32($calc, 10)
Write-Host -ForegroundColor Cyan "Recommended station watering =" $calc "minutes `r`n"

$startTime = "07:00 AM"

$arrayStations = @(1, 2, 3, 4)

[int]$count = 0;
$Object = @()

foreach ($station in $arrayStations) {
    $properties = @{
        station = $station
        time    = [datetime]::ParseExact($startTime, "hh:mm tt", $null).AddMinutes($calc_int * $count);
        runtime = $calc_int
    }
    $count++;

    $Object += New-Object psobject -Property $properties;
}
$json = $Object | ConvertTo-Json 

$json
$path = Join-Path $PSScriptRoot -ChildPath $JSONFile

Write-Host "`r`nWriting JSON file..." $path

$json | Set-Content $path

Write-Host "`r`nScript completed."

The end solution

The below diagram gives a great overview of how the whole solution has come together over these last few weeks.

The entire solution

So what did I learn through this project?

  • There are a lot of great tutorials out there that will get you started. They definitely helped me so I’d like to think anyone with IT and some coding experience should be able to pick this up. Caution though, some examples do naughty things like unencrypted MQTT!
  • Arduino sketches are pretty easy to put together once you’ve got the IDE configured correctly. E.g. Libraries, example sketches etc. Much like anything, make sure your tooling works first, before doing anything big.
  • Azure IoT Hub is probably a bit too big for this project. I’m contemplating switching MQTT over to Adafruit.io as it would probably make integration with IFTTT a lot easier.
  • Make sure to check out HomeAssistant and ESPHome first as it may cater to all your needs. I learnt about these services mid-way through the project so I kept on the same course of doing my own sketch on the ESP32.
  • Think about your ESP32 as an API endpoint or as something that just listens to MQTT subscriptions to do an action. Having it be a front-end webserver became far too unwieldy after a while, especially with no over-the-air (OTA) updates.
  • IFTTT recently announced a paid pro model, which limits what you can do with the free version. Consider and factor in running costs of your IoT solution, especially if actions can be triggered simply using on-prem scripts and cron jobs like I did.

In closing, I absolutely loved this challenge and learned so much on this journey. If you’re not sure about doing your first IoT project, I hope this series has helped you jump right in!

Home IoT – Part 3 – What about my garden irrigation system

This blog post is part of a series of posts where I’ve automated, and Internet of Things (IoT) enabled, my garage door, swing gate and Bluetooth-enabled garden irrigation system. Links below to each part!

I was about to fall asleep, and then it hit me

So after a late night just completing part 2, I was lying in bed, my mind wandering over all the things I could do with my ESP32. Hrmm, could I convince my key stakeholder to wire up the lights in the house to this controller? Nah, she was pretty clear with the rule: nothing inside the house. Hrmm, what is there outside the house? My mind went blank, and as I started to drift off to sleep, I remembered I’d forgotten to water the pots in the patio. Crap! I’ll do it tomorrow.

Just as I’m about to nod off, it hits me like a train. WAIT A SECOND, I hate that darn Android app for my Bluetooth garden irrigation system. The ESP32 has a Bluetooth radio! Maybe I can make the same calls to replace the app and water those pots? Maybe I can say to Google, hey, water my garden for 10 minutes. I was excited about the challenge! Okay, after some long, painful nights, I have successfully got my ESP32 making Bluetooth Low Energy (BLE) calls to the Holman BTX-8 Outdoor Garden Irrigation System. As for the Holman unit itself, it’s just one that you can find at your local Bunnings store.

Here’s how I IoT enabled my garden irrigation system.

Capturing BLE packets on my Android Phone

So to start with this, I first needed to capture the BLE calls the phone was sending and receiving. If I was to have any chance of replacing the app, I first needed to know what the app does. Turning on developer mode and capturing HCI logs was relatively straightforward on my Samsung S9 phone. I followed this guide and was able to get the logs into WireShark, where I could start seeing the important parts that make up Bluetooth client-to-server communication, like the Service UUID and the Characteristic UUID.

WireShark view of the logs

Not long after finding the right write characteristic, I could see the hexadecimal calls being made. Thankfully this write characteristic didn’t look to be based on a read value of something else, so the challenge was simplified a bit. Being honest though, this step took a while, with lots of trial and error calling stop and start on stations so I could pick up the patterns in the captured logs. I could see numbers changing, but I was only 90% sure I had the logic right. At this point, I felt I had a chance of not making a bad write to the Holman device, which could, though unlikely, scramble its brain. To test locally on my PC first before going all-in on coding it on the ESP32, I ended up getting an app called Bluetooth LE Lab. This app was great; I recommend it to anyone trying to reverse engineer Bluetooth calls.

Bluetooth BLE Lab App

After some fun figuring out some of the other values, I concluded that there was a 10-byte hex value that made the Holman device do something. Broken down, this value looks like:

  • Turn off all the solenoids:
    • 00 00 00 00 00 00 00 00 00 00
  • Run a station (open a solenoid):
    • 01 (run)
    • 00 (station 1, counting from 0)
    • 13 (19 hrs in hex)
    • 12 (18 mins in hex)
    • 00 00 00 00 00 00 (used for scheduling a station at a day/time instead of running it immediately)

To state the obvious, no, I did not run a station for 19 hours and 18 minutes! During the testing and validation, I also decided that I didn’t need the scheduling bytes, as I had a plan to manage those smarts outside of the controller itself.
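
You can sanity-check the byte layout on a PC before touching the ESP32. A quick PowerShell sketch, with 10 minutes on station 1 as the example:

# Build the 10-byte Holman payload for "run station 1 for 0h 10m":
# action, station (0-based), hours, minutes, then six scheduling bytes
# left at zero for an immediate run.
[byte[]]$payload = 0x01, 0x00, 0x00, 0x0A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
($payload | ForEach-Object { '{0:X2}' -f $_ }) -join ' '

# The 19h18m values from my capture, as decimal-to-hex:
'{0:X2} {1:X2}' -f 19, 18   # prints "13 12"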

Coding it up in the ESP32


// The remote service we wish to connect to.
static BLEUUID serviceUUID("C521F000-0D70-4D4F-X-X");
// The characteristic of the remote service we are interested in.
static BLEUUID charUUID("0000F006-0000-1000-X-X");

static boolean doConnect = false;
static boolean connected = false;
static boolean doScan = false;
static BLERemoteCharacteristic* pRemoteCharacteristic;
static BLEAdvertisedDevice* myDevice;

class MyClientCallback : public BLEClientCallbacks {
    void onConnect(BLEClient* pclient) {
    }

    void onDisconnect(BLEClient* pclient) {
      connected = false;
      Serial.println("onDisconnect");
    }
};

void connectToServer() {
  Serial.print("BLE - Forming a connection to ");
  Serial.println(myDevice->getAddress().toString().c_str());

  BLEClient*  pClient  = BLEDevice::createClient();
  Serial.println("BLE - Created client.");

  pClient->setClientCallbacks(new MyClientCallback());

  // Connect to the remote BLE Server.
  pClient->connect(myDevice);  // if you pass BLEAdvertisedDevice instead of address, it will be recognized type of peer device address (public or private)
  Serial.println("BLE - Client connected.");

  delay(1000);

  // Obtain a reference to the service we are after in the remote BLE server.
  BLERemoteService* pRemoteService = pClient->getService(serviceUUID);
  if (pRemoteService == nullptr) {
    Serial.print("BLE - Failed to find our service UUID: ");
    Serial.println(serviceUUID.toString().c_str());
    connected = false;
    return;  // bail out rather than carrying on without a service
  }
  Serial.println("BLE - Found our service");

  pRemoteCharacteristic = pRemoteService->getCharacteristic(charUUID);
  if (pRemoteCharacteristic == nullptr) {
    Serial.print("BLE - Failed to find our characteristic UUID: ");
    Serial.println(charUUID.toString().c_str());
    connected = false;
    return;  // bail out rather than carrying on without a characteristic
  }
  Serial.println("BLE - Found our characteristic");
  connected = true;
}

class MyAdvertisedDeviceCallbacks: public BLEAdvertisedDeviceCallbacks {
    void onResult(BLEAdvertisedDevice advertisedDevice) {
      Serial.print("BLE - Advertised Device found: ");
      Serial.println(advertisedDevice.toString().c_str());

      if (advertisedDevice.haveServiceUUID() && advertisedDevice.isAdvertisingService(serviceUUID)) {
        BLEDevice::getScan()->stop();

        Serial.println("BLE - Correct device found.");
        myDevice = new BLEAdvertisedDevice(advertisedDevice);
        doConnect = true;
        doScan = true;
      }
    }
};

bool makeBLECall(uint8_t* value)
{
  char dataString[30] = {0};
  sprintf(dataString, "%02X %02X %02X %02X", value[0], value[1], value[2], value[3]);
  String output = dataString;

  Serial.print("BLE - ");
  Serial.println(output);

  if (connected) {
    Serial.println("BLE - Call made.");
    pRemoteCharacteristic->writeValue(value, 4);
    return true;
  }
  return false;
}

void BLEEnable(){
  if (BLEDevice::getInitialized() == false){
    esp_bt_controller_mem_release(ESP_BT_MODE_CLASSIC_BT);
    BLEDevice::init("Cortana Design IoT");
    BLEDevice::setPower(ESP_PWR_LVL_P9);

    Serial.println("BLE - Enabling.");
    BLEScan* pBLEScan = BLEDevice::getScan();
    pBLEScan->setAdvertisedDeviceCallbacks(new MyAdvertisedDeviceCallbacks());
    pBLEScan->setActiveScan(true);
    pBLEScan->start(5, false);

    Serial.println("BLE - Delaying for 3 seconds.");
    delay(3000);

    if (doConnect == true) {
      connectToServer();
      doConnect = false;
    }

    Serial.println("BLE - Delaying for 3 seconds.");
    delay(3000);
  }
}

void BLEDisable(){
  if (BLEDevice::getInitialized() == true){
    Serial.println("BLE - Disabling.");
    BLEDevice::deinit(false);
  }
}


void setup() {
  server.on("/irrigation", HTTP_GET, [] (AsyncWebServerRequest *request) {
    String inputMessage0;
    String inputParam0;
    String inputMessage1;
    String inputParam1;
    String inputMessage2;
    String inputParam2;
    String inputMessage3;
    String inputParam3;
    if (request->hasParam(PARAM_INPUT_3) && request->hasParam(PARAM_INPUT_4) && request->hasParam(PARAM_INPUT_5) && request->hasParam(PARAM_INPUT_6)) {
      inputMessage0 = request->getParam(PARAM_INPUT_3)->value();
      inputParam0 = PARAM_INPUT_3;
      inputMessage1 = request->getParam(PARAM_INPUT_4)->value();
      inputParam1 = PARAM_INPUT_4;
      inputMessage2 = request->getParam(PARAM_INPUT_5)->value();
      inputParam2 = PARAM_INPUT_5;
      inputMessage3 = request->getParam(PARAM_INPUT_6)->value();
      inputParam3 = PARAM_INPUT_6;

      // action, station (0-based), hours, minutes
      uint8_t value[4] = {(uint8_t)inputMessage0.toInt(), (uint8_t)(inputMessage1.toInt() - 1), (uint8_t)inputMessage2.toInt(), (uint8_t)inputMessage3.toInt()};
      if (makeBLECall(value)){
        request->send(200, "text/plain", "OK");
      }
      else{
        request->send(400, "text/plain", "Something went wrong.");
      }
    }
    else {
      inputMessage0 = "BLE - Incorrect message sent.";
      inputParam0 = "none";
      Serial.println(inputMessage0);
      request->send(400, "text/plain", "Bad Request");
    }
  });
}

When it came to coding this up, I learnt a tough lesson about the importance of keeping your ESP32 codebase as small, efficient and optimised as possible. It turns out the BLE library for ESP32 is quite big, and when trying to run it alongside WiFi, a webserver and MQTT subscribing/publishing to Azure IoT Hub, I was overflowing the 4 MB of flash on the controller. To solve this, I had to change the memory partitions, but this meant that over-the-air (OTA) updates were no longer possible.

This change meant updating my C++ code, and the webserver inside it, was only possible over USB, which made progress tedious and a lot slower. I also ran into another issue: when I moved the ESP32 back to where I wanted it installed, it couldn’t see or scan for the Holman device. This was quite problematic to troubleshoot, as away from my PC I had no serial output. But when I brought it back, which subsequently meant it was nearer to the Holman device (located outside on the office wall), it worked fine! It took a good day to realise I needed to bump up the transmit power settings on the ESP32 to reach the distance I wanted.

With a few changes to the webserver, I now had the irrigation controlled via this much simpler interface, which actually works. Did I mention the Holman app is horrible?! This webserver was still only reachable on the local network, though, so I also created a new subscriber in Azure IoT Hub too.

It’s now possible, through the ESP32, to control my Bluetooth irrigation system from anywhere in the world!

ESP32 Webserver with the irrigation controls
Integration of the irrigation using IFTTT, the WebHook + Runbook and Azure IoT Hub

In the final post of this series, I’ll share the complete solution and recap some learnings that I’ve taken away from this project. Stay tuned!

Home IoT – Part 2 – Putting it together and integration

This blog post is part of a series of posts where I’ve automated, and Internet of Things (IoT) enabled, my garage door, swing gate and Bluetooth-enabled garden irrigation system. Links below to each part!

Okay, time to put this together

So not long after following Rui Santos’s blog post and video (see part 1), I had my ESP32 controller on our home Wi-Fi network, running a webserver that would switch a relay on/off with a toggle function.

To start, I had only wired up the first relay to the appropriate GPIO, but I had compiled the C++ code to make use of all four relays. Yep, I only have the garage and gate to automate, but why not think big!

ESP32 dev-board wired to one relay
Webserver on the ESP32

The first problem with the example code was that I needed my gate and garage door buttons to be a momentary press rather than a switch. This was a necessary change because I planned to hook the relays up to the normally open (NO) loop of both the garage and gate; with a relay potentially stuck in the closed position, the conventional remotes would essentially be locked out from any other action. It also didn’t make sense to switch on and then immediately off to get the functionality I needed. So with a bit of HTML, JS and CSS changes, plus giving the GPIOs nicer labels in the C++ code, I had what I was after.

At this point, I was pretty excited. I had a web server that gave me the ability to control my gate and garage while on the network. However, I faced another problem the very next day. A bit more back story: the gate I’ve installed blocks any access to the front of the house and, subsequently, the meter boxes. This design decision became a sticking point when the meter reader rang my Ring doorbell on the gate and I couldn’t let him in, because no one was home or on the network. At that point I thought, huh, maybe I could connect my Ring Doorbell to this? Perhaps get it to send a push notification to my phone when it’s rung, that I can acknowledge, and make the call to trigger the momentary button press on the gate?!

The gate in question

In comes Azure IoT Hub and the power of MQTT. Essentially, I needed my ESP32 to be listening to Azure IoT Hub for the call to action and do the thing I needed it to do.

Azure IoT Hub integration with the ESP32

The best way to describe this is a boy (the ESP32) constantly nagging for the chocolate bar (the action) at the checkout aisle by pestering mum (Azure IoT Hub). When Azure IoT Hub finally says yes after persistently saying no, as in the value changes from 0 to 1, little Jimmy ESP32 gets his chocolate (opens the gate).

To call it out, as it’s sometimes misunderstood: there is no backdoor (inbound access) to the ESP32 from the outside world. MQTT is all about subscribing and publishing messages, so by the nature of that principle, it’s always outbound traffic.

Setting up the subscribing and publishing of values was relatively straightforward. In my C++ code, all I needed to do was connect to Azure IoT Hub, check in the loop() whether the subscribed value had changed, run the function that opens the gate (write the pin HIGH, then back to LOW), and then publish 0 back to the topic to reset it.

// Publish a value back to an IoT Hub topic.
void publishAzuredata(char* event, const char* value){
  client.publish(event, value);
}
// Subscribe to an IoT Hub topic.
void subscribeAzuredata(char* event){
  client.subscribe(event);
}

void reconnect() {
   while (!client.connected()) {
    Serial.print("Attempting MQTT connection...");
    if (client.connect(iothub_deviceid, iothub_user, iothub_sas_token)) {
      Serial.println("connected");
  
    } else {
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println("try again in 5 seconds");
      // Wait 5 seconds before retrying
      delay(5000);
    }
  }
}

void callback(char* topic, byte* payload, unsigned int length) {
  // Rebuild the payload bytes into a String we can compare against.
  String value;
  for (unsigned int i = 0; i < length; i++) {
    value += (char)payload[i];
  }

  if (String(topic) == "device/deviceID/gate") {
    if (value == "1") {
      digitalWrite(gatePin, HIGH);  // momentary press
      delay(2000);
      client.publish(topic, "0");   // reset the topic value
      digitalWrite(gatePin, LOW);
    }
    else if (value == "0") {
      // do nothing really
      //digitalWrite(gatePin, LOW);
    }
  }
}

void setup(){
  client.setServer(mqttServer, mqttPort);
  client.setCallback(callback);
  reconnect();  // connect before subscribing
  subscribeAzuredata("device/deviceID/gate");
  subscribeAzuredata("device/deviceID/garage");
}
void loop(){
  if (!client.connected()) {
    reconnect();
  }
  client.loop();
}

To interface with Azure IoT Hub itself and change the value from 0 to 1, I had a few options, like a Logic App or Function App in Azure. However, to keep it simple, I decided to stick with what I know and write a PowerShell script hosted in a runbook that I could call with a WebHook. That way I could do an HTTP POST with a JSON body that says open the gate, the garage, or even both at the same time.
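
A skeleton of that runbook might look like the below. Azure Automation hands a webhook-triggered runbook a $WebhookData object whose RequestBody holds the JSON; the actual IoT Hub publish is left as a placeholder comment because that part is specific to my hub, and the gate/garage property names are my assumed schema:

# Runbook sketch: parse the webhook JSON body and act on each request.
param (
    [object]$WebhookData   # populated automatically on webhook invocation
)

$body = $WebhookData.RequestBody | ConvertFrom-Json

if ($body.gate -eq 1) {
    # Placeholder: publish '1' to the device/deviceID/gate topic via IoT Hub.
    Write-Output "Gate trigger requested."
}
if ($body.garage -eq 1) {
    # Placeholder: publish '1' to the device/deviceID/garage topic via IoT Hub.
    Write-Output "Garage trigger requested."
}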

Now that I have the WebHook, I could theoretically call it from anywhere in the world, but I wanted a nicer way of doing it than Postman. Here is where I leveraged IFTTT applets: one for my phone (push notification with the IFTTT app), and another for Google Assistant.
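
For ad-hoc testing without Postman, firing the webhook straight from PowerShell also works (URL truncated here, and the body properties follow my assumed schema above):

# POST the JSON body to the Automation webhook to open both doors.
$webhookUrl = "https://s1events.azure-automation.net/webhooks?token=..."   # truncated
$body = @{ gate = 1; garage = 1 } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $webhookUrl -Body $body -ContentType "application/json"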

Integration with IFTTT and the WebHook

The end result of it all coming together…

ESP32 dev-board and the 4ch relay in a box to be installed

Stay tuned for the next part where I integrate my Bluetooth garden irrigation system with my ESP32 so I can smarten up my irrigation system!

Home IoT – Part 1 – The start of the journey

Over the past few weeks, I’ve automated, and Internet of Things (IoT) enabled, my garage door, swing gate and, never one to be finished, my Bluetooth-enabled garden irrigation system, which required some reverse engineering. If you’re interested to know more about the challenges I faced, keep reading!

Background

So to give you some context and background, custom IoT is relatively new to me. Of course, we all have smartwatches and gadgets at home these days, but I’d never worked with micro-controllers and integration, or protocols and services like Adafruit.io, Azure IoT Hub and MQTT, so I was a bit nervous I’d get tripped up here. To be really honest, I had to google MQTT to know what it was for, so you could say my experience was very much beginner.

As for automation, I’ve got some pretty good experience with pipelines, scripting, logical flows (Logic Apps, Power Automate, IFTTT, Runbooks) and function apps. Enough knowledge and know-how to know I would be relying heavily on these skills to help with the IoT learning curve I was about to embark on.

On top of some knowledge around automation, and to call it out early, you’ll definitely need some coding experience to customise your solution. However, there are great examples out there that give you the full solution for controlling relays, sensors and displays with a micro-controller. By the end of my journey I had ended up writing C++, HTML, CSS, Javascript, .Net Core and PowerShell, but really, you could do a lot of this by copying code examples from reputable sources (e.g. the Arduino ESP32 example library), plugging in your variables and hitting the compile button.

Lastly, full disclaimer here: when I started with this project, I jumped right into it without much of a plan. This series of blog posts is more about a journey of tinkering that tested my knowledge, my patience and my sanity! There are probably a million better ways to do this yourself, such as Home-Assistant and ESPHome, but I wanted a deep understanding of the ins and outs of IoT, so when I do this the next time, for reals, I’d know 😉

Challenges

Like any project, this one not only had technical challenges but business challenges too. Yep, a business challenge in the home! This challenge was working with the key stakeholder, my wife. While she’s a lover of tech (though she doesn’t admit it), she’s not a big fan of seeing home automation on the news and how it’s spying on and mining all our conversations (data) for advertising.

So with the stakeholder already uneasy about what I was setting out to do, or at least her perception of it, I had to set out some ground rules for myself.

Rules

Knowing the challenges I would face with the key stakeholder, I gave myself some rules and boundaries for this project. They were:

  1. I can’t control anything inside the house. The stakeholder was very clear about that.
  2. I wouldn’t be able to install any third-party apps on the stakeholder’s phone to assist with the home automation.
  3. No Google Home, no Alexa, no anything that is listening in our home.

Let’s get started

Alright, to get started we need some hardware to run this on. Something like an Arduino, Raspberry Pi, ESP8266 or an ESP32. I wanted something cheap that did all the things, so here is where I made my first mistake and bought something from eBay.

The custom PCB that I fried

My recommendation if you’re starting out with IoT: buy a hardware option from a known manufacturer. Custom PCBs, while sounding like a great idea (“Hey, this does everything, I can’t go wrong”), are very lacking in documentation. I learned my lesson by frying the eBay custom PCB, sending 5v somewhere I shouldn’t have because the PCB schematic was wrong. Turns out I had got a different iteration of the PCB from the doco.

What I ended up with was something much more manageable, buying my ESP32 development board (with integrated USB) and a separate 4-ch relay controller from a local store. Not only did I have documentation of the pin layouts (trust me, knowing your pin layouts comes in handy when you start putting it all together), but the people at my local store were super helpful about how I should tackle the project.

Take two, using the ESP32 dev-kit board and 4ch relay

Links to the hardware:

So with the hardware sorted, I needed to start putting it together. Thankfully, following the tip from the guys at my local store, there are plenty of ESP32 how-to blogs covering relay control. My need was a webserver (noting I couldn’t use third-party apps on the stakeholder’s phone), so a quick Google later and I landed on Rui Santos’s end-to-end example here:

The code you see in his post is C++, which you need to compile with an IDE. Arduino IDE is an excellent starting point with the ESP32 library, but you can also do this in VS Code (it still requires the Arduino IDE).

The ESP32 sketch in Arduino IDE

With my IDE ready, it was time to get started. In the next part, I’ll talk about how I took Rui Santos’s tutorial and got my first iteration working! Stay tuned!

Is DevOps the plight of those who can do both Infrastructure (Ops) and Development?

There is something that is bothering me, something I want to share… I look at the industry today and I worry that the divide between Developers and Operators doesn’t appear to be closing. If anything, it appears to be expanding, and I’m lost in the middle of it!

For those who haven’t, I highly recommend reading The Phoenix Project. It’s an amazing book that talks about IT in general, DevOps and setting up a business for success. It gave me a lot to think about and a good understanding of where I live in the world of IT.

So full disclaimer, I am at times a Brent. For those who haven’t read the book, Brent is a perfectionist and someone who must pull all the levers on everything he can get his hands on. The fact that the names Trent and Brent are similar is one thing, but the nickname of Trenticles, which was bestowed on me many moons ago, does imply that I have fingers in a lot of pies. Some will say that’s me taking on too much or being unable to say no, but I must divulge, I think it’s because I’m simply misunderstood. Here is my story…

My journey in IT started like most, on a help-desk. This help-desk was actually a one-man shop where I was responsible for looking after a public high school in Perth. I got that job because I had in fact graduated from that very high school and had been causing a lot of pain for the Managed Service Provider looking after their systems (hacking forward proxies was fun!). I did that job for about a year, and when an opportunity came up in sleepy Fremantle (6 mins from home, a dream for 19-year-old me), I moved to Notre Dame University to join their IT team.

It wasn’t long at Notre Dame (9 months) before I was one of the third-tier System Administrators. I was given the chance of a lifetime and taken under the wing of someone amazing who taught me the ropes of being a great System Administrator: doing IT for the business’s needs, being responsible with costs, being security mindful and thinking automation-first on everything I do.

Automation first… If I could automate my entire life, I would consider it!

Unlike what the industry would consider a ‘conventional’ System Administrator, with an automation-first mindset I learnt to code super early. It was something I never thought I’d need, and it certainly wasn’t in my JDs as a required skill, but without any doubt, it has changed the way I work for the better. It’s helped me build some awesome solutions that I’m very proud of, which without code I’d still be trying to put together. I even loathe myself like any other developer when I look back at some of the code I’ve written.

Some examples of learning to develop.

So here we are, with the skills of both Infrastructure and Development, and it’s a complicated mess. We’re seen by some as both (fingers in too many pies), neither (he can’t help us here) or siloed into one (he’s only a developer).

So I have to ask, what are we in this DevOps world, where we have both ‘Dev’ and ‘Ops’ knowledge? Is there a role for us or are we in a lost void? I would love to hear what others have to say about this!

My thoughts right now? I’m a fraudulent Developer, wanting to continue to automate my applications and infrastructure code. I feel the power to automate is resting in the hands of Developers alone, and not both!

Some good reads:

Migrating Certificate Services from Server 2012 R2 to Server 2019, the right way.

So firstly, it’s been about 4 years since my last blog post, so I think an apology is in order. Honestly, I thought the Ubuntu Server guest VM I had this site running on was deleted years ago when I moved house. Turns out it wasn’t; I just had it misconfigured with the wrong DNS settings (reverse proxy; yada, yada, yada). I can’t believe this server has been here all this time with no one (including myself) ever seeing it, particularly when I’m periodically looking at my Hyper-V console.

Anyways, as the title suggests, I’m currently in the midst of a server refresh, because you know, you have to do that every once in a while. This task in particular is for Active Directory Certificate Services, moving it to Server 2019 Core.

The server I’m moving from is Server 2012 R2 Core. So you can already get a sense that the instructions everyone uses, because they work with any edition of Windows Server, won’t work here because, yep, I don’t have a GUI. I shouldn’t need to tell you why, in 2019, a server GUI is a bad, lazy way to do server stuff.

Because I’m already running a SHA256 root CA, the process is a bit more straightforward. If for whatever reason you’re still running SHA1, then I’d suggest moving the Certificate Services database first, then doing the changes and certificate reissue for the new root. Tip: The GUI steps in this link are done on the command line below!

So let’s start with building our new Server 2019. Get the operating system installed and go ahead and join it to the domain. On top of that, install the like-for-like Certificate Services roles from the old server on the new server, but don’t configure them just yet! You can easily do a side-by-side comparison by running “Get-WindowsFeature” on the old box and then just installing those roles with “Install-WindowsFeature” on the new, as sketched below.
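
That comparison can be as mechanical as this (run the first block on the old server, the second on the new one; the C:\roles.txt path is just a scratch file):

# On the old 2012 R2 server: capture the installed ADCS features.
Get-WindowsFeature | Where-Object { $_.Installed -and $_.Name -like "ADCS*" } |
    Select-Object -ExpandProperty Name | Set-Content C:\roles.txt

# On the new 2019 server: install the same features (don't configure yet!).
Install-WindowsFeature -Name (Get-Content C:\roles.txt) -IncludeManagementTools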

When your new server has the new roles, Server Manager will show it like this. Leave it as is.

Moving on to the Root Certificate and Certificate Services database backup phase now. Get onto your old server and start up an administrative PowerShell window (TIP: just type powershell.exe). Run the following PS cmdlets:

cd C:\
mkdir C:\CertificateServicesBackup
Backup-CARoleService c:\CertificateServicesBackup -Password (Read-Host -prompt "Password:" -AsSecureString)

When prompted, provide a password that you’ll remember. You’ll need it later.

https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/component-updates/ca-backup-and-restore-windows-powershell-cmdlets

Okay, so with the backup of the certificate root and database done, now to back up some important registry settings. In the same PS window, let’s export the registry now.

reg export HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc C:\CertificateServicesBackup\backup.reg

Great! Now take a copy of the C:\CertificateServicesBackup folder and keep it safe, maybe even on the new server. (TIP: “xcopy /e /s C:\CertificateServicesBackup \\newserver.fqdn.com\c$”)

At this point we’ve got what we need; everything is backed up. Now for what people would think is the scary bit, and that’s the removal of the old server! This is an important step, because doing it later or not at all will seriously screw up Active Directory… so don’t do that. Let’s remove it now and be done with it.

Safely remove the old certificate server roles with the “Remove-WindowsFeature” cmdlet. Once that’s done, remove the server from the domain.

Now it’s onto the new. With Server Manager or Windows Admin Center, let’s now click that link to complete the setup we said we would never touch. Tricked you! Go through the wizard until you get to this screen…

Look familiar? It should, because we’re following the same steps as the blog everyone uses!

As that blog article suggests, we’re going to provide an existing certificate and private key (protected by a password). That certificate is the one you backed up earlier, and the password is the one you remembered.

Now, continue through the wizard with all the defaults, including the questions about the database to use, as we’ll restore over the new database with the backed-up data. When the wizard is done, jump onto the server and launch an administrative PowerShell window again. This time we’re running the restore cmdlets.

Stop-Service certsvc  
Restore-CARoleService c:\CertificateServicesBackup -Password (read-host -prompt "Password:" -AsSecureString) -Force

Again, the password is the one we remembered. With that done, we just need to import the registry settings. Before you do this, I suggest you open the .reg file in notepad.exe and just check to make sure there are no FQDNs, hostnames or IPs that need updating. If there are, fix that before importing the registry file by running the below.

reg import C:\CertificateServicesBackup\backup.reg

At this point you’re basically done! Start the service and make sure it comes up.

Start-Service certsvc
Get-EventLog -LogName Application -EntryType Error -Newest 20

Your last task is to re-issue your certificate templates into Active Directory. It’s easiest to do this with the certsrv.msc management console: go to “Certificate Templates (right click) > New > Certificate Template to Issue”.

The last step! Hooray!

That’s it from me, have a splendid day!

Cheers,

Trent

Windows 10 1511 update on WSUS error (Retry Loop)

A quick post today for those who may be struggling with WSUS and Windows 10 1511 updates. It turns out there is one missing component/step in order to get WSUS to deliver this new type of update.
 
Let's assume you are the administrator, knowing enough about WSUS to have already completed the following:
 
  1. On the WSUS server(s), running Windows Server 2012 or later, installed the required manual update. https://support.microsoft.com/en-us/kb/3095113
  2. On the same WSUS server(s), made the appropriate changes to see Upgrade classification patches from the Windows catalog.
  3. Approved the appropriate Windows 10 1511 Upgrade patches and had them successfully download onto the WSUS server(s).
  4. Attempted a client-side Windows Update.
 
On step 4, non-1511 Windows 10 clients are seeing this error with a simple retry button.
 
There were problems installing some updates, but we'll try again later. If you keep seeing this and want to search the web or contact support for information, this may help:
 
The solution comes from a bit of digging in WindowsUpdate.log/Get-WindowsUpdateLog. It appears that the Windows Update client is unable to find a file with the suffix *.esd. For me this was 6F5CDF12827FAE0E37739F3222603EAF38808H12.esd.
 
Looking at the WSUS server, and in particular the IIS component of WSUS, I could see this file was in fact in the directory, so the client should get it… Hrmm, let's try the direct URL to the file. Ah! 404! That's no good.
 
Let's check the MIME types to see if this file type can be downloaded from IIS. Nope! IIS is unable to dish out the file because *.esd is a new MIME type that is not configured in IIS.
 
Okay, I'll quickly add this and give it another go.
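
I made the change through IIS Manager, but if you'd rather script it, something like this should be the equivalent (treat it as a sketch rather than the exact steps I took):

# Sketch: add the .esd MIME type to IIS so WSUS can serve the files.
Import-Module WebAdministration

Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter "system.webServer/staticContent" -Name "." `
    -Value @{ fileExtension = '.esd'; mimeType = 'application/octet-stream' }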


Sure enough success!


Exchange 2007 and 2013 environment – Public Folders on 2007 “Get-PublicFolder” cmdlet fails.

Recently I was tasked with a Public Folder migration project, moving the public folders from Exchange 2007 to 2013. This particular company had been on Exchange 2013 for pretty much everything except Public Folders for well over a year, which provides a good segue into this issue! When the administrators tried to manage Public Folders on Exchange 2007, they were getting nowhere! The Exchange 2007 Public Folder Management Console would not load any of the public folders (500+ of them), and the Get-PublicFolder cmdlet would fail with:

There is no existing PublicFolder that matches the following Identity: '\'. Please make sure that you specified the correct PublicFolder Identity and that you have the necessary permissions to view PublicFolder.

Yikes! Well that's no good! So how is it possible that Public Folders are online and working for end users, yet the administrators can't manage them? Let the investigation begin. Funnily enough, "Get-PublicFolderStatistics" still worked!

After talking to the engineers previously tasked with the Exchange 2013 migration work, it turned out Public Folder management broke when the last remaining mailbox database in Exchange 2007 was removed. This immediately led me to believe one of two things:

  1. It's a permission issue with the administrative accounts they're using.
  2. Something isn't quite right with objects in AD referencing Public Folders in Exchange 2007.

After a quick look around, I confirmed that while the permissions for managing Exchange infrastructure were a bit all over the place, they wouldn't have been the root cause. Their accounts had the right permissions via Organizational Management and/or the Public Folders Owner security group in AD.

Taking a dive into AD with ADSI/LDP (if you're using ADSI, connect with a non-domain-admin account to prevent accidental changes), I could again see things were a bit messy with two administrative groups, but everything was in order. The Folder Hierarchy CN was there and its references were correct.

Hitting TechNet with searches on "Exchange 2007" led me to issues around the HomeMDB attribute being nulled on each Exchange 2007 server's Microsoft System Attendant object when the last mailbox database is removed. Sure enough, this attribute was null, so I immediately tested by creating a fresh Exchange 2007 mailbox database to see if that was the issue. HomeMDB was instantly populated on the Microsoft System Attendant object when I created the database, but alas, still the above issue!

At this point I was really clutching at straws and therefore started checking the 'interwebs', finding all sorts of whacky recommendations and fixes! There are some shockers out there, and it does scare me that someone with no background experience in Exchange/AD could follow them and make a real mess of their environment. Sadly, you can't help those who can't help themselves!

None of them really fitted the issue I had and/or made practical sense to even action. So with that in mind, I made a call to Microsoft.

After a bit of coming and going, as you do with Microsoft tier 1 support, the suggestion was to populate the Exchange 2013 servers' Microsoft System Attendant objects with a HomeMDB attribute pointing to the Exchange 2007 mailbox databases. As you can appreciate from my comments above, I found this a bit baffling and refuted it. However, that didn't stop me from looking at it a bit more…

On one of the Exchange 2007 servers, I fired up the troubleshooting tool and selected Trace Control. From there I proceeded beyond the warning messages about only running traces when recommended by an Exchange support engineer.


Leaving the trace file configuration pretty much default, I did want to capture only Store trace errors (similar to https://support.microsoft.com/en-us/kb/971878), so I made this selection. For trace types, I selected all of them.

For the trace tags, I took a stab in the dark a couple of times to see what I wanted to check against. After much trial and error, I got it down to the three which found me the issue… These were:

  • tagDSError
  • tagInformation
  • tagRpcIntfLogon


While the trace was running, I opened up an Exchange 2007 management shell and ran "Get-PublicFolder" to let it fail. Stopping the trace and running the report on it highlighted my issue!

—-

tagDSError – Mailbox /o=Y/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=XXX/cn=Microsoft System Attendant, does not have either a Home MDB or a GUID

tagInformation – EcConnect2: User /o=Y/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=XXX/cn=Microsoft System Attendant, does not have a Home MDB/GUID attribute

tagRpcIntfLogon -Connect as /o=Y/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Configuration/cn=Servers/cn=XX/cn=Microsoft System Attendant failed; connect flags were 0x1

—-

The XXX CN was an Exchange 2013 server! Aha, Microsoft tier 1 support were onto something. Okay, well, I still didn't believe them about the Exchange 2007 mailbox database though! So I thought, what's stopping me from putting in an Exchange 2013 mailbox database? At least that way the information would be accurate and not legacy!

So I jumped into ADSI Edit, this time as Domain Admin, and found one of the mailbox databases in Exchange 2013. This is under ADSI -> Configuration -> Services -> Microsoft Exchange -> Administrative Groups -> Exchange Administrative Groups -> Databases. I copied out the DN attribute and then went to each of the Exchange 2013 servers' System Attendant objects and set the homeMDB to that DN. Note that if you have multiple AD sites, pick a mailbox database local to that site's server.
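
If you'd rather script the edit than drive ADSI Edit by hand, a hedged sketch with the ActiveDirectory module (both DNs are placeholders for your environment):

# Sketch: stamp an Exchange 2013 System Attendant object with a valid
# homeMDB. Replace both placeholder DNs with the real ones from ADSI.
Import-Module ActiveDirectory

$dbDN = "CN=DB01,CN=Databases,...,CN=Configuration,DC=contoso,DC=com"                             # placeholder
$saDN = "CN=Microsoft System Attendant,CN=EX2013-01,CN=Servers,...,CN=Configuration,DC=contoso,DC=com"   # placeholder

Set-ADObject -Identity $saDN -Replace @{ homeMDB = $dbDN }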

I gave it 15 mins to replicate in AD (that's the replication time for this client), ran the "Get-PublicFolder" cmdlet and sure enough… it worked! Now we're ready to migrate to Exchange 2013!

That's it for now, until next time.

Invalid namespace – EventID 906 – AD Azure Sync Tool (WAAD)

Today I came across an interesting issue where the AD Azure Sync Tool via Microsoft Online alerted me that AD Azure Sync had failed to run for some time!

errorADAzure

This was quite odd, as there had been no changes to the Office 365 or AD instances that provide the identity sources for this AD Azure Sync. There had been some power outages in their server room that caused a few other services to not come up cleanly, so I thought it could be that the service failed to start, etc. Monitoring by SCOM said otherwise though! Okay, time to check the event logs, and within a few seconds I found this.

errorADAzure2

Interesting! EventID 906, "Invalid Namespace"… That's the same issue that appeared with the old DIRSYNC.exe when the WMI object had unregistered itself. Common, for example, if you have the SCCM client installed on the server, or something else goes through and manipulates the WMI classes where it probably shouldn't. Okay, let's fix this quickly without having to reinstall anything… something you had to do with DIRSYNC.exe. A total pain in the backside, and if you weren't careful, you could end up with a boatload of disconnected objects in the metaverse!

At this point I had a choice: do these steps manually, or create a dirty batch script that would do the work for me and, if needed in the future, on demand. I decided to do both; I ran the steps manually and, once I was happy with them, saved my work into a batch file for future use!

So below is my script, where I first ran the commands by hand (or enacted the same thing the script would do with the GUI). Once I actually had it all set up and AD Azure sync working again, I ran the new script again (over the top of my manual work) to confirm that it was safe. And sure enough, it was, and everything was working fine after the script ran.

To break down what the script does, here is what each part does:

  1. 'mofcomp' parses the MMS (FIM) WMI file and goes through the process of adding the classes etc. to the WMI repository.
  2. 'regsvr32' then registers the WMI .dll file on the server.
  3. 'net stop winmgmt /y' stops the WMI management service and its dependencies.
  4. The following 'net start' commands then start the services stopped when we fired off the 'net stop', in the correct order.
  5. Finally, we run AD Azure Sync manually by calling "DirectorySyncClientCmd.exe".

mofcomp "D:\Program Files\Microsoft Azure AD Sync\Bin\mmswmi.mof"
regsvr32 /s "D:\Program Files\Microsoft Azure AD Sync\Bin\mmswmi.dll"

net stop winmgmt /y
net start winmgmt
net start "IP Helper"
net start "User Access Logging Service"
net start "Microsoft Azure AD Sync"

"D:\Program Files\Microsoft Azure AD Sync\Bin\DirectorySyncClientCmd.exe"

As you can see, the directory AD Azure Sync has been installed to is on the D: drive. You can change this batch file to use %ProgramFiles% if you're using your system drive (C:).

That's it from me for now. I hope this helps others in the future using the AD Azure Sync Tool! 

Azure Active Directory (WAAD) Sync Tool – Password Sync issue and the importance of running the entire wizard

Today I came across an interesting find with the Azure AD Sync Tool that I thought I would share. The issue was rather easy to identify, but someone with an untrained eye might find it confusing or a little misleading. When I say untrained, this article is really for those who may be going it alone with an Office 365 migration for the first time for their business, or in a test lab… though the semi-professional administrator may find this interesting.

To give you a bit of history about my knowledge on this, I come from the Forefront Identity Manager 2010 (FIM) head space, in that when myself and Lain rebuilt a major university's IT infrastructure back in 2010-2011, we didn't have the freedom that comes with Office 365 / Azure AD today. I.e. it was FIM, FIM or, and you've guessed it, FIM. So when we built our management agents and the subsequent metaverse, it was staggered in order so that once everything was in place and we were happy with it, we'd go back and implement Password Change Notification Services (PCNS) as the last step.

Coming back to today, however, there are three options out there for us to use for directory synchronisation. Those being, of course:

  • DirSync. The widely used directory sync tool out there for many clients. It has served its purpose and proven to be a suitable solution for many organisations who just need basic Office 365 attribute flow etc. 
  • Azure AD Sync. The now standard and recommended solution for those looking to directory sync with Azure and Office 365. For those about to begin with directory sync, this is the current recommended tool from Microsoft. Steer away from using DirSync, as the eventual plan is for that tool to be put out to pasture. The great thing about Azure AD Sync is that it supports multiple AD forests and password write-back. Password write-back shouldn't be confused with Password Sync, by the way! They are two different things with two very different outcomes, especially around security implications!
  • FIM. Lastly, FIM is for those who need the ability to customise management agents (MAs) to the needs of their business. An example where FIM is very powerful is where you have, say, a HR system (based on SQL) that flows into an on-premise AD, meaning your HR system is in fact the definitive source for identity management within your organisation. From AD, however, you then establish lots of different AD LDS instances for proprietary systems that you might not want talking to, or bloating, AD itself (e.g. Cisco CUCM and Avaya UCM are two strong candidates for AD LDS). Then of course you configure the Office 365 MA and PCNS for provisioning to the Azure cloud.

There is actually a fourth option to the list above, but it's only in Public Preview at the moment. Going by the name Azure AD Connect, this product makes massive inroads in making ADFS federation a hell of a lot easier for the inexperienced administrator. Essentially you only need to provide it with the right certificate and create some DNS records, and it configures ADFS for you. Comparing the four, it's evident that DirSync and Azure AD Sync look somewhat primitive, and that Microsoft with this new tool are really pushing the customer base away from Same Sign-On to Single Sign-On (SSO). Unfortunately, it's hard to compare it with FIM, which is much more powerful in terms of configuration, and also because Microsoft haven't really stated what FIM's future is at the moment… I guess we'll know more when Azure AD Connect becomes GA.

So now back to my issue (sorry, got sidetracked). When I was going through the process of installing the Azure AD Sync Tool in a test lab, I thought I'd be smart about making sure my metaverse wouldn't be filled up with unnecessary and unneeded objects (Users and Groups) from my on-premise AD. When I say unneeded, I don't, for example, need groups such as "Domain Admins" synchronised to the cloud. Why? Well, Office 365 doesn't need them and realistically, neither does your organisation. No one would be using "Domain Admins" as an email distribution group, and administrative privileges, in Office 365 at least, are done at the user level, not by groups! Of course, if you want Lync and Exchange RBAC controls you may have a need for "Domain Admins", but again, you probably don't either.

So when it came to the final screen of the wizard, I unselected the "Synchronize now" checkbox. No problem, as I'll change the filter on my on-premise AD management agent first (to pick a subset of OUs) and then run my first Full Import sync on both MAs manually. Done that, great… Now run the first sync by calling DirectorySyncClientCmd.exe in the install location's 'Bin' folder. Done. Awesome! I had only the users and groups I needed in Azure AD and there were no initial 'disconnectors' in my metaverse. Time to assign licenses and get users on Office 365!

Now at this point I thought everything was great and I was ready to go… I attempt to login to the Office 365 portal with one of my synchronised accounts, but I'm told it's not the right username and password combination. Hrmm, turns out I'm not as close as I thought…. So what's gone wrong?

Well after a quick look at Event Viewer connected to the server running the Azure AD Sync tool I knew my problem straight away…. 

eventvwr-ADAzure

Event 652! But I told the wizard back at Step 4 that I wanted Password Synchronisation. What's going on?!

Well, it turns out that if you choose not to run that first initial sync with the wizard, your PCNS is not registered on the server. In a way that does make sense, because for PCNS to work you really do need accounts in the cloud to password sync to. But on the other hand, it should be able to wait until the first 'Delta Imports and Exports' are run to go, okay, time to grab the password hashes and sync them, because there is something there that is new and synced… much as is the case with a new user you create in on-premise AD that eventually syncs to the cloud. For those who don't know, PCNS runs completely independently of the sync tool and the scheduled task I'm about to talk about, so I thought that latter behaviour would make the most sense. However, it's clear from the application event log that Microsoft have built the tool in a way that is contrary to that.

So what's the lesson here? Well, if you want to change your on-premise AD MA filters before you run the first 'Full Imports' manually, you really need to kick off the wizard first with 'Password Synchronization' on step 4 (Optional Features) unchecked. Once you've done what you need to do with the OU filters etc., disable (don't delete) the newly created task in Task Scheduler, run the wizard again, enable 'Password Synchronization' and save. Just to remind you, you don't need to run the sync again, because PCNS is independent of this.
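
The task juggling itself is quick with the ScheduledTasks cmdlets (the task name below is my guess at what the wizard creates, so confirm it with the first command before touching anything):

# Find the sync tool's scheduled task; the name used below is an assumption.
Get-ScheduledTask | Where-Object TaskName -like "*Azure AD Sync*"

Disable-ScheduledTask -TaskName "Azure AD Sync Scheduler"
# ...re-run the wizard with 'Password Synchronization' ticked, then:
Enable-ScheduledTask -TaskName "Azure AD Sync Scheduler"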

Once that's all done, confirm the task is enabled again and, if it hasn't been, enable it manually. You should then find your event log full of Event 657, informing you that PCNS is syncing passwords properly.

Finally, and this is just a quick note: if you have already gone through the wizard with Password Synchronisation enabled but didn't let the wizard run the first sync, then you'll need to add an additional step prior to the steps above, going through the wizard first and disabling Password Synchronisation. This is because the wizard does not go off and run everything again if it determines there are no changes to be made, i.e. if you haven't made any selection changes.

Happy syncing!