Using Liquid transformations in Logic Apps… for free!

Microsoft offers a few different solutions to perform message transformations in Logic Apps; I described one of them before in ‘Translating JSON messages with Logic Apps’. Liquid is considered the way forward for translating JSON and XML messages. XSLT still has strong support if you’re working with XML documents, but if you’re working with the JSON message format, Liquid is your friend.

Liquid is an open source template language created by Shopify. Microsoft uses the .NET implementation DotLiquid under the hood of its Liquid connectors. The usage of Liquid transforms in Logic Apps requires an integration account. Integration accounts are not available as a serverless / consumption-based service, and have a fixed price tag that forms a serious barrier for those who only need a handful of message transformations.

Although integration accounts offer more functionality than merely Liquid transforms (EDI, B2B), among the few customers I’ve engaged with recently their main use seems to be Liquid transformations.

Since DotLiquid is open source, I took on the challenge of building a serverless Azure Function that performs Liquid transforms with functionality on par with Logic Apps’ Liquid transformations (JSON and XML to JSON, XML or plain text). The main objective was to no longer rely on an integration account, saving $5,000 (!) on an annual basis. Combine this with serverless Logic Apps and the API Management consumption tier and you can potentially run an iPaaS for pennies. Take that, MuleSoft…

I’ve published my solution on GitHub, accompanied by documentation on how to use it. Usage isn’t restricted to Logic Apps. The solution is available as a V1 Azure Function, mainly due to the missing Swagger support in Azure Functions V2. Azure Functions with a Swagger definition make for excellent integration with Logic Apps, as documented by Microsoft here. The Logic Apps designer interprets the Swagger automatically, making consumption of the function a drag-and-drop experience.

The actual Liquid transforms are stored in the storage account associated with the Azure function, further reducing the amount of code required thanks to function bindings:

[FunctionName("LiquidTransformer")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "liquidtransformer/{liquidtransformfilename}")] HttpRequestMessage req,
    [Blob("liquid-transforms/{liquidtransformfilename}", FileAccess.Read)] Stream inputBlob,
    TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // The Liquid template named in the URL is supplied by the blob input binding above;
    // the request body holds the message to transform. The transform step itself is
    // omitted here for brevity (a sketch of it follows below).
    string requestBody = await req.Content.ReadAsStringAsync();
    return req.CreateResponse(HttpStatusCode.OK);
}

The Azure Function takes the message to transform as the function’s request body. The filename of the Liquid transform as stored in the storage account is part of the URL path, making it possible to perform the function binding with the blob.
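To give an idea of the core of the transform, here is a minimal sketch using the DotLiquid API. This is not the published implementation; it assumes the request content has already been converted into plain dictionaries (which the actual solution takes care of for nested JSON and XML), and it exposes the input under a ‘content’ variable, matching the {{content.name}} syntax used in the template example further below.

using System.Collections.Generic;
using System.IO;
using DotLiquid;

public static class LiquidRunner
{
    // Renders a Liquid template (the stream handed to us by the blob binding)
    // against the already-parsed request content.
    public static string Transform(Stream templateStream, IDictionary<string, object> content)
    {
        string templateText = new StreamReader(templateStream).ReadToEnd();
        Template template = Template.Parse(templateText);
        return template.Render(Hash.FromDictionary(new Dictionary<string, object> { { "content", content } }));
    }
}

The rendered output is then returned as the HTTP response body, with the output format driven by the Accept header as described below.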

Once the Swagger is enabled we can simply use the Azure Function connector in a Logic App and provide the necessary details of the transformation we want to execute:

The function supports JSON and XML as input types, specified in the Content-Type HTTP header. The expected output type has to be specified in the Accept header.

Using the following HTTP payload

POST /api/liquidtransformer/customext.liquid HTTP/1.1
Host: localhost:7071
Content-Type: application/json
Accept: application/json

{
	"name": "olaf"
}

With the following Liquid template:

{
    "fullName": "{{content.name | upcase }}"
}

Results in the following JSON output:

{
	"fullName": "OLAF"
}

The benefits don’t stop at simple cost savings. Thanks to the DotLiquid extensible framework you can create your own filters, tags, etc. If you’ve ever tried to perform basic character padding with standard Liquid templates you’ll realise the power of custom filters…
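As a hypothetical example (not taken from the published code), a custom filter that left-pads a value could look like this:

using DotLiquid;

public static class CustomFilters
{
    // Left-pads the input value to the given width with the supplied pad character,
    // so a template can write {{ content.id | pad_left: 8, '0' }}.
    public static string PadLeft(object input, int totalWidth, string padding)
    {
        return (input ?? string.Empty).ToString().PadLeft(totalWidth, padding[0]);
    }
}

Registering it once at start-up with Template.RegisterFilter(typeof(CustomFilters)) is enough; DotLiquid’s default (Ruby) naming convention then exposes the PadLeft method as the pad_left filter.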

Source code of this solution is published under the MIT license, so feel free to use and redistribute. Enjoy!

Asynchronous Logic Apps with Azure API Management

One of the many great features of Logic Apps is its support for long-running asynchronous workflows through the ‘202 async’ pattern. Although not standardised in any official specification as far as I know, the ‘202 async’ pattern is commonly used to interact with APIs in an asynchronous way through polling. In summary, this pattern informs an API consumer that an API call has been accepted (HTTP status 202), accompanied by a callback URL where the API consumer can regularly check for the actual response payload. The callback URL is provided in the HTTP location response header. In addition, Logic Apps provides a recommended polling interval through a ‘Retry-After’ HTTP response header. To demonstrate how this magic works in action I’ve created a small Logic App with a built-in delay of one minute. Here’s the basic message flow:

If we call this Logic App from an HTTP client we’ll only receive a response after a minute. Not ideal, and scenarios like this will likely cause timeouts if the operation takes even longer. Luckily Logic Apps makes it very easy to resolve this challenge with a simple flick of a switch in the settings of the HTTP response connector:

When we call the Logic App API with an HTTP client like Postman we receive a completely different response. Instead of the expected response payload we get a bunch of Logic App run properties. The status property indicates that the workflow is still running.

{
    "properties": {
        "waitEndTime": "2018-12-12T00:50:09.7523799Z",
        "startTime": "2018-12-12T00:50:09.7523799Z",
        "status": "Running",
        "correlation": {
            "clientTrackingId": "08586570310757255241595533212CU31"
        },
        "workflow": {
            "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/versions/08586586828895236601",
            "name": "08586586828895236601",
            "type": "Microsoft.Logic/workflows/versions"
        },
        "trigger": {
            "name": "manual",
            "inputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc",
                "contentVersion": "HnyZbRBXXZ5RxDoTJydztQ==",
                "contentSize": 28,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "HnyZbRBXXZ5RxDoTJydztQ=="
                }
            },
            "outputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A",
                "contentVersion": "DdHqLk/lmUN0iUdPwWVQ8A==",
                "contentSize": 277,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "DdHqLk/lmUN0iUdPwWVQ8A=="
                }
            },
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "endTime": "2018-12-12T00:50:09.7506184Z",
            "originHistoryName": "08586570310757255241595533212CU31",
            "correlation": {
                "clientTrackingId": "08586570310757255241595533212CU31"
            },
            "status": "Succeeded"
        },
        "outputs": {},
        "response": {
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "correlation": {},
            "status": "Waiting"
        }
    },
    "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31",
    "name": "08586570310757255241595533212CU31",
    "type": "Microsoft.Logic/workflows/runs"
}

If we look at the response HTTP headers we notice a status of ‘202 Accepted’, a location header and a suggested polling retry interval.
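In raw form the relevant headers look something like this (the signature is truncated and the Retry-After value is indicative):

HTTP/1.1 202 Accepted
Retry-After: 10
Location: https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/operations/1d19015e-5f32-4d72-a92c-04a5da36084d?api-version=2016-10-01&sp=...&sv=1.0&sig=...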

If we immediately follow the link from the location header we’ll keep getting the same response message until, after about a minute, the Logic App workflow has completed its run and we finally receive our response payload:

Pretty cool. Although I’m no huge fan of polling, given how chatty it is and that it doesn’t result in a timely response, there are certainly valid use cases for this scenario. API consumers could provide a callback channel in the request payload or a header, but that has the drawback of every API consumer all of a sudden becoming a provider too. Not feasible in most cases.

Azure API Management

As a general rule of thumb I don’t expose Logic App APIs directly to consumers, but mediate them through either an Azure Functions proxy or Azure API Management. Azure API Management has built-in integration with Logic Apps, and especially with the recent addition of the consumption tier it’s a great way of abstracting your API implementations. Let’s create a basic API by adding our Logic App to our Azure API Management instance:

I’ve simply selected my Logic App ‘longrunning’ and associated it with my API product ‘Anonymous’, which doesn’t require a subscription and makes testing our API even easier. Next we’ll call our API through the Azure API Management test console.

Our API is successfully called through Azure API Management, which hides the Logic App trigger URL and exposes it via the more static URL https://kloud.azure-api.net/longrunning/manual/paths/invoke

Very neat, but if we have a closer look at the API response we notice that the location header and the trigger URIs in the response payload still expose our Logic App endpoint. Call it paranoia or API OCD, but wouldn’t it be better to have the subsequent polling API calls mediated through Azure API Management as well? This allows us to enforce policies like throttling, gives us the ability to identify our caller, and captures traffic in Application Insights together with all other API calls. Bonus.

Run API, run!

Let’s have a closer look at the URL in the location header:

https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/operations/1d19015e-5f32-4d72-a92c-04a5da36084d?api-version=2016-10-01&sp=%2Fruns%2F08586570310757255241595533212CU31%2Foperations%2F1d19015e-5f32-4d72-a92c-04a5da36084d%2Fread&sv=1.0&sig=oU1qLxgdECjumvTai2mXMDn-0gKl0d3xJAM785JBMcI

And the trigger input/output URLs:

https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc
https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A

The URLs all have a similar pattern of /workflows/<WorkflowId>/runs/<RunId>. The workflow ID is static and corresponds to our Logic App, while the run ID is unique to each instance of a running Logic App. So what can we do to:

  • Rewrite URLs in our response headers and payload
  • Route calls through our Azure API Management instance to the correct Logic App instance

That’s where Azure API Management policies come to the rescue. First we’ll define an outbound processing policy on the ‘All operations’ level:

<policies>
    <inbound>
        <base />
        <set-backend-service id="apim-generated-policy" backend-id="LogicApp_longrunning_rg-logicdemo" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <set-header name="Location" exists-action="override">
            <value>@(context.Response.Headers.GetValueOrDefault("Location")?.Replace("prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7","kloud.azure-api.net/longrunning"))</value>
        </set-header>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

When we call the API again we’ll notice that both the location header and the URLs in the response payload now point to Azure API Management.

The next step is to map any calls to /longrunning/runs to a corresponding backend URL. Let’s define a new operation ‘runs’:

We’ll also need to override the backend URL and remove the <base /> policy, which points to the Logic App trigger instead of the run endpoint:

<policies>
    <inbound>
        <set-backend-service base-url="https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Last but not least, let’s test whether the added operation actually works by performing an HTTP GET on the URL specified in the location header. The first call results in a ‘202 Accepted’:

Let’s try again after waiting for another minute:

The HTTP GET requests to our added API operation are successfully routed to the backend Logic App instance, and we receive the expected payload after some asynchronous polling.

To summarise, these are the steps required to abstract our Logic App backend URLs:

  • Configure your Logic App to be asynchronous in the HTTP response connector
  • Perform URL rewriting on the API level with a find-and-replace and location header override
  • Add a /runs operation to map the rewritten URLs to the corresponding Logic Apps backend URLs 

Securing APIs through RBAC with Azure API management and Azure AD

One of Azure API Management’s great features is the ability to secure your APIs through policies, thereby separating authorisation logic from the actual APIs. There’s plenty of guidance available on how to integrate Azure API Management with Azure Active Directory or other OAuth providers, but very little information on how to apply fine-grained access control to your APIs. Yes, it’s easy to set up OAuth to grant access to API consumers (authorisation code grant) or machine-to-machine communication (client credentials grant). With the ‘validate JWT’ policy we can validate the authenticity of an access token and perform claims-based authorisation. However, unless we implement further controls, anyone from our Azure AD tenant can access our APIs by default. So what can we do to restrict access to certain groups or roles within our application?

Option 1: Graph API

One option would be to interrogate the Graph API from within our application and check for AD group membership. However, I prefer to keep this authorisation logic outside the actual API, and repeating this task for every API becomes cumbersome. Besides, keeping strangers out at the front door would be my preference: ideally, unauthorised users are kept at bay in Azure API Management.

Option 2: AAD group claims

Add group claims to our AAD JWT token. This can easily be configured in the app manifest of the API application registration in AAD by setting the groupMembershipClaims property:
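In the manifest this is a one-line change, shown here in isolation (use ‘SecurityGroup’ for security groups, or ‘All’ to include all group types):

{
    "groupMembershipClaims": "SecurityGroup"
}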

Although this results in group claims being added to the OAuth JWT token, the groups are provided as GUIDs. That’s not very user friendly for applying access controls, as we’d have to know the exact GUID corresponding to each group. Besides, we don’t always want to rely on AD-wide group definitions to control access within our application.

Option 3: Role Based Access Control with JWT validation 

A third, and in my opinion the neatest, option is to define application-specific roles in our API application manifest in AAD. Users can be assigned to these application-specific roles, and we can check for role claims in an Azure API Management policy. In addition to allowing users to be assigned to roles, we’ll enable application assignment for application-to-application communication as well, by including ‘Application’ in the allowedMemberTypes of each role (see the sketch below):
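A sketch of what the appRoles section of the manifest could look like; the GUIDs are placeholders that you generate yourself:

"appRoles": [
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "displayName": "Admin",
        "description": "Administrators can read and update data through the API",
        "id": "11111111-1111-1111-1111-111111111111",
        "isEnabled": true,
        "value": "Admin"
    },
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "displayName": "Reader",
        "description": "Readers have read-only access to the API",
        "id": "22222222-2222-2222-2222-222222222222",
        "isEnabled": true,
        "value": "Reader"
    }
]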

For the purpose of this demo I’ve defined two application registrations in the AAD tenant:

  • Kloud API M (API)
    This represents our backend API, and will contain the application roles and user assignments. For the sake of simplicity in this demo I’ll use a mocked response in Azure API management instead of standing up an API.
  • API M client (API client application)
    This is the application accessing our backend API on behalf of the signed-in API users.  This application will need a delegated permission to access the API, as well as delegated permissions to initiate the user sign-in and read a user’s profile:

You’ll also notice that the API M client has the custom application permissions ‘Admin’ and ‘Reader’ assigned to it. This allows the application to access our backend API directly using the OAuth client credentials grant.

In addition, we’ll require users to be specifically assigned to access our application through one of these roles by enabling the ‘user assignment required’ option in our AAD Enterprise Application definition:

We can assign our application specific roles to a user from our AAD tenant:

Next we’ll look at how to perform authorisation based on role claims in Azure API Management. Let’s first have a look at the JWT validation policy, which can be applied at the operation, API or global level:
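A validate-jwt policy that checks for the Admin role claim could look roughly like this; the tenant ID and audience are placeholders for your own values:

<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorised. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/.well-known/openid-configuration" />
    <audiences>
        <audience>{application-id-uri-of-the-backend-api}</audience>
    </audiences>
    <required-claims>
        <claim name="roles" match="any">
            <value>Admin</value>
        </claim>
    </required-claims>
</validate-jwt>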

The easiest way to test our setup is by enabling OAuth user authorisation in the developer console, as per the instructions here. This allows us to use the API M developer console as our client application, accessing our API on behalf of a signed-in user. The demonstration below shows the API returning a ‘200 OK’ when we provide a JWT token containing the Admin role:

And here’s the JWT that was returned by the authorization server:

The JWT policy specifically checks for an Admin role, so let’s try calling the API with an account that only has the role of Reader:

This time the JWT policy returns a ‘401 unauthorised’ due to the absence of the Admin role claim. We’ve successfully demonstrated how we can grant access to a backend API based on role membership in our AAD application.

Lastly, I want to show how we can enable machine to machine communications in a similar way. Let’s have a look again at the permissions assigned to the API M Client application. As you can see below we’ve assigned the application permission ‘Admin’ to the API M Client:

After giving administrator consent and enabling the client credentials grant in Azure API Management we can verify our policy through the developer console:
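For reference, the token request behind the client credentials grant looks roughly like this (tenant ID, application IDs and the secret are placeholders); the returned JWT then carries the application’s Admin role claim:

POST https://login.microsoftonline.com/{tenant-id}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id={api-m-client-application-id}
&client_secret={api-m-client-secret}
&resource={application-id-uri-of-the-backend-api}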

As demonstrated, the combination of Azure Active Directory and Azure API Management offers great capabilities for applying RBAC to APIs, without having to implement any authorisation logic in the actual API.

Putting SQL to REST with Azure Data Factory

Microsoft’s integration stack has slowly matured over the past years, and we’re on the verge of finally breaking away from BizTalk Server, or are we? In this article I’m going to explore Azure Data Factory (ADF). Rather than showing the usual out of the box demo I’m going to demonstrate a real-world scenario that I recently encountered at one of Kloud’s customers.
ADF is a very easy to use and cost-effective solution for simple integration scenarios that can be best described as ETL in the ‘old world’. ADF can run at large scale, and has a series of connectors to load data from a data source, apply a simple mapping and load the transformed data into a target destination.
ADF is limited in terms of standard connectors, and (currently) has no functionality to send data to HTTP/RESTful endpoints. Data can be sourced from HTTP endpoints, but in this case we’re going to read data from a SQL database and write it to an HTTP endpoint.
Unfortunately ADF tooling isn’t available in VS2017 yet, but you can download the Microsoft Azure DataFactory Tools for Visual Studio 2015 here. Next we’ll use the extremely useful third-party library ‘Azure.DataFactory.LocalEnvironment’ that can be found on GitHub. This library allows you to debug ADF projects locally, and eases deployment by generating ARM templates. The easiest way to get started is to open the sample solution and modify it accordingly.
You’ll also need to set up an Azure Batch account and storage account according to the Microsoft documentation. Azure Batch runs your execution host engine, which effectively runs your custom activities on one or more VMs in a pool of nodes. The storage account will be used to deploy your custom activity, and is also used for ADF logging purposes. We’ll also create an Azure SQL AdventureWorksLT database to read some data from.
Using the VS templates we’ll create the following artefacts:

  • AzureSqlLinkedService (AzureSqlLinkedService1.json)
    This is the linked service that connects the source with the pipeline, and contains the connection string to connect to our AdventureWorksLT database.
  • WebLinkedService (WebLinkedService1.json)
    This is the linked service that connects to the target pipeline. ADF doesn’t support this type as an output service, so we only use it to refer to from our HTTP table so it passes schema validation.
  • AzureSqlTableLocation (AzureSqlTableLocation1.json)
    This contains the table definition of the Azure SQL source table.
  • HttpTableLocation (HttpTableLocation1.json)
    The tooling doesn’t contain a specific template for HTTP tables, but we can manually tweak any table template to represent our target (JSON) structure.

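To give an impression of what these artefacts look like, here is a minimal sketch of the Azure SQL linked service definition; the server name and credentials are placeholders:

{
    "name": "AzureSqlLinkedService1",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<yourserver>.database.windows.net,1433;Database=AdventureWorksLT;User ID=<user>;Password=<password>;Encrypt=True"
        }
    }
}
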
Furthermore, we’ll adjust the DataDownloaderSamplePipeline.json to use the input and output tables that are defined above. We’ll also set our schedule and add a custom property to define a column mapping that allows us to map between input columns and output fields.
The grunt work of the solution is performed in the DataDownloaderActivity class, where custom .NET code ‘wires together’ the input and output data sources and performs the actual copying of data. The class uses a SqlDataReader to read records, and copies them in chunks as JSON to our target HTTP service. For demonstration purposes I’m using the Request Bin service to verify that the output data made its way to the target destination.
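To give an idea of the shape of such a custom activity, here is a simplified sketch rather than the actual DataDownloaderActivity code; the connection string, target URL, query and batch size are hard-coded placeholders that would normally come from the linked services and the pipeline’s extended properties:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Http;
using System.Text;
using Microsoft.Azure.Management.DataFactories.Models;
using Microsoft.Azure.Management.DataFactories.Runtime;
using Newtonsoft.Json;

public class DataDownloaderSketchActivity : IDotNetActivity
{
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices,
        IEnumerable<Dataset> datasets,
        Activity activity,
        IActivityLogger logger)
    {
        // Placeholders: in the real activity these come from the linked services and extended properties.
        const string connectionString = "<azure-sql-connection-string>";
        const string targetUrl = "<http-endpoint-url>";
        const int batchSize = 100;

        var batch = new List<Dictionary<string, object>>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT ProductID, Name, ListPrice FROM SalesLT.Product", connection))
        using (var client = new HttpClient())
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Map each SQL record onto a dictionary that serialises to the target JSON structure.
                    var record = new Dictionary<string, object>();
                    for (var i = 0; i < reader.FieldCount; i++)
                    {
                        record[reader.GetName(i)] = reader.GetValue(i);
                    }
                    batch.Add(record);

                    if (batch.Count == batchSize)
                    {
                        PostBatch(client, targetUrl, batch, logger);
                        batch.Clear();
                    }
                }
            }
            if (batch.Count > 0)
            {
                PostBatch(client, targetUrl, batch, logger);
            }
        }
        return new Dictionary<string, string>();
    }

    private static void PostBatch(HttpClient client, string url, List<Dictionary<string, object>> batch, IActivityLogger logger)
    {
        var content = new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json");
        var response = client.PostAsync(url, content).Result;
        logger.Write("Posted batch of {0} records: {1}", batch.Count, response.StatusCode);
    }
}
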
We can deploy our solution via PowerShell, or the Visual Studio 2015 tooling if preferred.
After deployment we can see the data factory appearing in the portal, and use the monitoring feature to see our copy tasks spinning up according to the defined schedule.
In the Request Bin that I created I can see the output batches appearing one at a time.
As you might notice, it’s not all that straightforward to compose and deploy a custom activity, and having to rely on Azure Batch can incur significant cost unless you adopt the right auto-scaling strategy. Although the solution requires us to write code and implement our connectivity logic ourselves, we’re able to leverage some nice platform features such as a reliable execution host, retry logic, scaling, logging and monitoring, all accessible through the Azure portal.
The complete source code, including the various ADF artefacts and the custom .NET activity, can be found here.

The Internet of Things with Arduino, Azure Event Hubs and the Azure Python SDK

In the emerging world of the Internet of Things (IoT) we see more and more hardware manufacturers releasing development platforms to connect devices and sensors to the internet. Traditionally these kinds of platforms are built around microcontrollers, and the Arduino platform can be considered the standard in (consumer) physical computing, home automation, DIY and the ‘makers community’.

Most Arduinos come with an 8-bit AVR RISC-based microcontroller running at 16 MHz with the modest amount of 2 kilobytes of memory. These devices are perfectly capable of calling REST services with the Ethernet library and Arduino Ethernet shield. However, we do face some challenges when it comes to encrypting data, generating Azure shared access signatures and communicating over HTTPS due to a lack of processing power and memory. The Arduino platform has no SSL libraries and therefore cannot securely transmit data over HTTPS. This article shows a solution to this problem by using a secondary device as a bridge to the Internet.

Microsoft Azure allows us to store and process high volumes of sensor data through Event Hubs, currently still in preview. More information on Event Hubs, their architecture and how to publish and consume event data can be found here. In this article I focus on how to publish sensor data from these ‘things’ to an Azure Event Hub, using a microcontroller with a field gateway that is capable of communicating over HTTPS using the Azure Python SDK.

Azure Event Hubs

Before we start we need to create an Azure Service Bus Namespace and an Event Hub. This can be done in the Azure management portal:

Creating an Azure Event Hub

When creating the event hub we need to specify the number of partitions. The link provided earlier will describe partitioning in detail, but in summary this will help us to spread the load of publishing devices across our event hub.

Event Hub Partitioning

We can also define policies that can be used to generate a shared access signature on our devices that will be sending event data to the hub:

Event Hub Policies

Arduino Yun

The Arduino Yun combines a microcontroller and a ‘Wi-Fi System on Chip’ (WiSOC) on a single board. The microcontroller allows us to ‘sense’ the environment through its analogue input sensors, whereas the WiSOC runs a full Linux distribution with rich programming and connectivity capabilities. The WiSOC can be considered the field gateway for the microcontroller and is able to send data to the Azure Event Hub. For other Arduino development boards that only have a microcontroller you can, for example, use a Raspberry Pi as the field gateway.
For the purpose of this demo we’ll keep the schematics simple and just use a temperature sensor and a couple of LEDs to report back a status:

Wiring diagram: Arduino Yun with temperature sensor and status LEDs

The Arduino sketch reads the voltage signal from the temperature sensor and converts it to a temperature in degrees Celsius as our unit of measurement. The microcontroller communicates with the Yun’s Linux distribution via the bridge library, and blinks either the green or red LED depending on the HTTP status that is returned from the Linux side.

The Arduino bridge library is used to run a Python script within the Linux environment by executing a shell command to send the temperature to the Azure Event Hub. Next we’ll have a look at how this Python script actually works.

Python SDK

The Microsoft Azure Python SDK provides a set of Python packages for easy access to Azure storage services, service bus queues, topics and the service management APIs. Unfortunately there is no support for Event Hubs at this stage yet. Luckily Microsoft is embracing the open source community these days and is hosting the Python SDK on GitHub for public contribution, so hopefully this will be added soon. Details on how to install the Azure Python SDK in a Linux environment can be found on http://azure.microsoft.com/en-us/documentation/articles/python-how-to-install/. You can use a package manager like pip or easy_install to install the Python package ‘azure’.

The complete Python script to send event data to an Azure Event Hub is fairly compact; a simplified sketch of it follows a little further below.

The script can be called with a series of sensor values in the following format:

python script.py temperature:22,humidity:20 deviceid

Multiple key-value pairs with a sensor type and sensor value can be provided, and these will be nicely stored in the JSON message.

By using the ServiceBusSASAuthentication class from the Python SDK we can easily generate the shared access signature token that is required by the various services in Azure Service Bus, including Event Hubs.
Sending the actual event data is done with a simple REST call. Although Event Hubs allow any arbitrary message to be sent, we send JSON data which is easy to parse by potential consumers. Event data will be sent to a specific partition in the Event Hub; the hostname of the Arduino Yun is used as the partition key. Azure takes care of assigning an actual Event Hub partition, but by using the hostname as the partition key it’s more likely that traffic from different devices is spread across the Event Hub for better throughput. The Python script creates the appropriate REST HTTP request according to the Azure Event Hub REST API.
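As an illustration, a simplified Python 2.7 sketch of such a script could look like this. It computes the shared access signature directly with the standard library (rather than using the SDK’s ServiceBusSASAuthentication class mentioned above) and posts a JSON message to the event hub’s publisher endpoint. The namespace, event hub, policy and key are placeholders, and the JSON message shape is purely illustrative:

import base64
import hashlib
import hmac
import httplib
import json
import sys
import time
import urllib

# Placeholders: use your own namespace, event hub, policy name and key.
SERVICEBUS_NAMESPACE = '<yournamespace>'
EVENTHUB_NAME = '<youreventhub>'
POLICY_NAME = '<yourpolicy>'
POLICY_KEY = '<yourkey>'


def create_sas_token(uri, key_name, key):
    # Standard Service Bus shared access signature: an HMAC-SHA256 over the
    # URL-encoded resource URI and an expiry timestamp, signed with the policy key.
    expiry = int(time.time() + 3600)
    string_to_sign = urllib.quote_plus(uri) + '\n' + str(expiry)
    signature = base64.b64encode(hmac.new(key, string_to_sign, hashlib.sha256).digest())
    return 'SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}'.format(
        urllib.quote_plus(uri), urllib.quote_plus(signature), expiry, key_name)


def main():
    # Called as: python script.py temperature:22,humidity:20 deviceid
    readings = dict(pair.split(':') for pair in sys.argv[1].split(','))
    device_id = sys.argv[2]

    host = SERVICEBUS_NAMESPACE + '.servicebus.windows.net'
    # The device id doubles as the publisher (partition key).
    path = '/{0}/publishers/{1}/messages'.format(EVENTHUB_NAME, device_id)
    token = create_sas_token('https://{0}/{1}'.format(host, EVENTHUB_NAME), POLICY_NAME, POLICY_KEY)

    body = json.dumps({'deviceid': device_id, 'readings': readings, 'timestamp': int(time.time())})
    headers = {'Authorization': token, 'Content-Type': 'application/json'}

    connection = httplib.HTTPSConnection(host)
    connection.request('POST', path + '?api-version=2014-01', body, headers)
    response = connection.getresponse()
    print response.status, response.reason
    # Event Hubs returns 201 Created on a successful send.
    return 0 if response.status == 201 else 1


if __name__ == '__main__':
    sys.exit(main())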

When we deploy the Arduino sketch it will start sending the temperature to the Azure Event Hub continuously, at one-second intervals. We can confirm successful transmission by consuming the Event Hub data with Service Bus Explorer:

Service Bus Explorer

Conclusion

I’ve demonstrated how we can combine the Arduino microcontroller platform for reading sensor data with a more powerful computing environment that runs Python. This allows our ‘things’ to leverage Azure Event Hubs for event processing, with the potential to scale to millions of devices.