A first look at Azure Integration Service Environments (ISE)

Introduction

Microsoft has released its long-awaited Integration Service Environments (ISE) Azure service in public preview. Time to have a sneak peek! An Integration Service Environment is a dedicated environment in which to execute your integration workloads, as opposed to the serverless compute model used by traditional Logic Apps. Although the serverless model is great for many purposes, an ISE can be the better fit in some scenarios, which I'll outline further in this article.

Why use ISE?

The benefit of dedicated resources is the lack of the noisy neighbour effect, resulting in more reliable performance. Although different workloads within your ISE may still impact each other, you'll no longer experience the drawbacks of shared resources in terms of latency and throughput. In addition, an ISE has auto-scaling capability: if resource-intensive integration workloads (temporarily) put extra strain on resources, this can be alleviated by scaling out.

As with most cloud services there's a financial aspect to consider. Using an ISE comes at a significant cost, but it's a fixed and predictable cost as opposed to consumption-based Logic Apps billing. There's also the inclusion of one standard integration account and one enterprise connector. Usage of all available enterprise connectors is covered by this, although currently not all enterprise connectors have a VNET-integrated ISE equivalent. For the SAP connector, for instance, you'll still be required to use the on-premises data gateway, though usage of the connector will still be included in the allowance of your ISE. The SAP ISE connector is in the works and will be released later this year.

Pricing

ISE is currently billed at A$5.16 an hour when hosted in Australian regions. This includes a 50% public preview discount. Once it reaches general availability an ISE will set you back approximately A$7,500 a month.

It will take a standard integration account, 100,000 enterprise connector executions, 300,000 standard connector executions and 300,000 action executions a month combined to break even with Logic Apps consumption-based billing once we're out of public preview pricing. This is based on a single ISE base unit; more compute power can be purchased at additional cost. That's a significant amount, but considering that a single ISE base unit provides 75 million executions a month, this could make perfect economic sense for large-scale operations.

What else? ISE supports connectivity to VNETs! Both VPN and ExpressRoute are supported. We no longer have to rely on the on-premises data gateway for hybrid cloud integration scenarios, and we can finally tackle some on-premises integration scenarios without data having to flow outside of your VNET at all.

In addition, an ISE offers a static inbound IP address and isolated storage. The latter can help to comply with data sovereignty constraints, as your run history remains within the boundaries of your VNET.

Creating an ISE

Creating an ISE is well documented by Microsoft here; don't forget to perform the steps to assign permissions in your VNET to the Logic Apps resource. Within half an hour you should be up and running:

An ISE contains a workflow runtime used to execute your Logic Apps and built-in steps, as well as a connector runtime to execute all your connectors. Both can be monitored in the dashboard, allowing you to keep an eye on resource utilisation and plan capacity appropriately.

Creating an integration account

As mentioned before an ISE includes one standard integration account. Curiosity took over and I wanted to find out if an additional free account could also be added:

The answer is no, but with a maximum of 1000 maps and schemas a single standard account should suffice. Besides, the free tier does not provide any SLAs and should only be used for development and test purposes.

Auto scaling

If a single base unit is not sufficient for your workloads you can easily add additional compute instances to your ISE. There’s a maximum of 3 additional units out of the box, but Microsoft is happy to cater for more capacity if required.

Similar to App Services, ISE supports auto-scaling. Based on performance metrics of your workflow or connector instances you can scale out or back according to workload demand.

Connectivity

The built-in connectors (e.g. HTTP) will always run within your ISE's workflow runtime, allowing you to connect to any resource within the boundaries of your VNET and NSGs. In addition there are specific standard ISE and enterprise ISE connectors. These carry the label ISE and always run in the same ISE as your Logic Apps, albeit in the connector runtime. Their equivalents without the ISE label run in the global Logic Apps service and therefore have no access to your VNET.

For my demos I created a VNET with a point-to-site connection, connecting a development machine to a VNET in Azure. My development environment hosts both an HTTP Web API and a SQL server. Let's have a look at what's involved in connecting to those.

On premises connectivity with the SQL connector

Assuming you've spun up an ISE and have connectivity to your VNET, we can jump straight into connecting to an on-premises SQL Server database. For this demo I'm running a SQL Server Express edition with a Northwind database and remote access enabled.

The ISE SQL connector can easily be configured in the Logic Apps designer (or API connection subsequently) with SQL connection details:

Upon a successful connection you’ll be able to use the connector in the same manner as the SQL Azure connector:

Alternatively I could have connected to my SQL server with the on-premises data gateway, but given the mechanics of the underlying Service Bus relay you'll experience much worse performance than connecting through a VNET / ExpressRoute.

On premises connectivity with the HTTP connector

The HTTP connector is built-in, and does not require special configuration to connect to on-premises resources:
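
In code view the action is just a plain HTTP action. A minimal sketch, assuming a Web API listening on a private address within the VNET (the IP address and path below are illustrative):

"Get_Customer": {
    "type": "Http",
    "inputs": {
        "method": "GET",
        "uri": "http://10.0.1.4/api/customers/1"
    },
    "runAfter": {}
}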

Summary

ISE is a great addition to the Azure iPaaS family and further increases Microsoft's capability in the integration space. If you're concerned about security and data governance around your integration solutions, want predictable performance, or need lower latency between your Logic Apps and on-premises resources, ISE might be a good fit for you. A price tag of A$7,500 per month in Australian regions may sound like a lot, but given the huge amount of compute power available this is more economical than other iPaaS offerings.

Using Liquid transformations in Logic Apps… for free!

Microsoft offers a few different solutions to perform message transformations in Logic Apps. One of them I described before in 'Translating JSON messages with Logic Apps'. Liquid is considered the new way forward to translate JSON and XML messages. XSLT still has strong support if you're working with XML documents, but if you're working with the JSON message format Liquid is your friend.

Liquid is an open source template language created by Shopify. Microsoft uses the .NET implementation DotLiquid under the hood of its Liquid connectors. The usage of Liquid transforms in Logic Apps requires an integration account. Integration accounts are not available as a serverless / consumption-based service, and have a fixed price tag that forms a serious barrier for those that only need a handful of message transformations.

Although integration accounts offer more functionality than merely Liquid transforms (EDI, B2B), among the few customers I've engaged with recently their main use seems to be Liquid transformations.

Given DotLiquid is open source, I took on the challenge of building a serverless Azure Function that performs Liquid transforms with on-par functionality compared to the Logic Apps Liquid transformations (JSON and XML to JSON, XML or plain text). The main objective was to no longer rely on an integration account, saving $5,000 (!) on an annual basis. Combine this with serverless Logic Apps and the API Management consumption tier and you can potentially run an iPaaS for pennies. Take that, MuleSoft…

I've published my solution on GitHub, accompanied by documentation on how to use it. Usage isn't restricted to Logic Apps. The solution is available as a V1 Azure Function, mainly due to the missing Swagger support in Azure Functions V2. Azure Functions with a Swagger definition make for excellent integration with Logic Apps, as documented by Microsoft here. The Logic Apps designer interprets the Swagger automatically, making consumption of the function a drag-and-drop experience.

The actual Liquid transforms are stored in the storage account associated with the Azure function, further reducing the amount of code required thanks to function bindings:

[FunctionName("LiquidTransformer")]
public static async Task<HttpResponseMessage> Run(
    // The request body contains the content to transform; the route parameter selects the Liquid template
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "liquidtransformer/{liquidtransformfilename}")] HttpRequestMessage req,
    // The Liquid template is bound directly from blob storage based on the route parameter
    [Blob("liquid-transforms/{liquidtransformfilename}", FileAccess.Read)] Stream inputBlob,
    TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Read the request content, apply the DotLiquid transform read from inputBlob
    // and return the result (transformation logic omitted here for brevity).
    return req.CreateResponse(HttpStatusCode.OK);
}

The Azure Function takes the content to transform as the function's request body. The filename of the Liquid transform as stored in the storage account is part of the URL path, making it possible to perform the function binding with the blob.

Once the Swagger is enabled we can simply use the Azure Function connector in a Logic App and provide the necessary details of the transformation we want to execute:

The function supports JSON and XML as input types, specified in the Content-Type HTTP header. The expected output type has to be specified in the Accept header.

Using the following HTTP payload

POST /api/liquidtransformer/customext.liquid HTTP/1.1
Host: localhost:7071
Content-Type: application/json
Accept: application/json

{
	"name": "olaf"
}

With the following Liquid template:

{
    "fullName": "{{content.name | upcase }}"
}

Results in the following JSON output:

{
	"fullName": "OLAF"
}

The benefits don't stop at simple cost savings. Thanks to the extensible DotLiquid framework you can create your own filters, tags, etc. If you've ever tried to perform basic character padding with standard Liquid templates you'll realise the power of custom filters…
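
As an illustration, a custom filter in DotLiquid is just a static method on a class you register with the template engine. A minimal sketch (the filter below is my own example, not part of the published solution, and assumes DotLiquid's default Ruby naming convention, which exposes the method as pad_left_zeros):

using DotLiquid;

public static class CustomFilters
{
    // Pads the input on the left with zeros, e.g. {{ content.id | pad_left_zeros: 8 }}
    public static string PadLeftZeros(object input, int totalWidth)
    {
        return (input?.ToString() ?? string.Empty).PadLeft(totalWidth, '0');
    }
}

// Register once at startup so the filter becomes available to all templates:
Template.RegisterFilter(typeof(CustomFilters));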

Source code of this solution is published under the MIT license, so feel free to use and redistribute. Enjoy!

Asynchronous Logic Apps with Azure API Management

One of the many great features of Logic Apps is its support for long-running asynchronous workflows through the '202 async' pattern. Although not standardised in any official specification as far as I know, the '202 async' pattern is commonly used to interact with APIs in an asynchronous way through polling. In summary, this pattern informs an API consumer that an API call has been accepted (HTTP status 202), accompanied by a callback URL where the API consumer can regularly check for the actual response payload. The callback URL is provided in the HTTP Location response header. In addition, Logic Apps provides a recommendation regarding the polling interval through a 'Retry-After' HTTP response header. To demonstrate how this magic works in action I've created a small Logic App with a built-in delay of one minute. Here's the basic message flow:

If we call this Logic App from an HTTP client we'll receive a response message after a minute. Not ideal, and scenarios like this will likely result in timeouts if the operation takes even longer. Luckily Logic Apps makes it very easy to resolve this challenge with a simple flick of a switch in the settings of the HTTP response connector:

When we call the Logic App API with an HTTP client like Postman we receive a completely different response. Instead of the expected response payload we get a bunch of Logic App run properties. The status property indicates that the workflow is still running.

{
    "properties": {
        "waitEndTime": "2018-12-12T00:50:09.7523799Z",
        "startTime": "2018-12-12T00:50:09.7523799Z",
        "status": "Running",
        "correlation": {
            "clientTrackingId": "08586570310757255241595533212CU31"
        },
        "workflow": {
            "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/versions/08586586828895236601",
            "name": "08586586828895236601",
            "type": "Microsoft.Logic/workflows/versions"
        },
        "trigger": {
            "name": "manual",
            "inputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc",
                "contentVersion": "HnyZbRBXXZ5RxDoTJydztQ==",
                "contentSize": 28,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "HnyZbRBXXZ5RxDoTJydztQ=="
                }
            },
            "outputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A",
                "contentVersion": "DdHqLk/lmUN0iUdPwWVQ8A==",
                "contentSize": 277,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "DdHqLk/lmUN0iUdPwWVQ8A=="
                }
            },
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "endTime": "2018-12-12T00:50:09.7506184Z",
            "originHistoryName": "08586570310757255241595533212CU31",
            "correlation": {
                "clientTrackingId": "08586570310757255241595533212CU31"
            },
            "status": "Succeeded"
        },
        "outputs": {},
        "response": {
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "correlation": {},
            "status": "Waiting"
        }
    },
    "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31",
    "name": "08586570310757255241595533212CU31",
    "type": "Microsoft.Logic/workflows/runs"
}

If we look at the response HTTP headers we notice a status of ‘202 Accepted’, a location header and a suggested polling retry interval.
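
Stripped down to the relevant headers, the response looks roughly like this (the Retry-After value is illustrative, and the Location URL is shown in full further below):

HTTP/1.1 202 Accepted
Retry-After: 10
Location: https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/operations/...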

If we immediately follow the link from the location header we’ll get the same response message, until after about a minute. When the Logic App workflow has completed its run we’ll finally receive our Logic App response payload:

Pretty cool. Although I'm not a huge fan of polling, given that it's so chatty and doesn't result in a timely response, I think there are certainly valid use cases for this scenario. API consumers could provide a callback channel in the request payload or header, but this has the drawback of every API consumer all of a sudden becoming a provider too. Not feasible in most cases.

Azure API Management

As a general rule of thumb I don't expose Logic App APIs directly to consumers, but mediate them through either an Azure Function proxy or Azure API Management. Azure API Management has built-in integration with Logic Apps, and especially with the recent addition of the consumption tier it's a great way of abstracting your API implementations. Let's create a basic API by adding our Logic App to our Azure API Management instance:

I’ve simply selected my Logic App ‘longrunning’ and associated it with my API product ‘Anonymous’, which doesn’t require a subscription and makes testing our API even easier. Next we’ll call our API through the Azure API Management test console.

Our API is successfully called through Azure API Management, hiding the Logic Apps trigger URL and exposing it via the more static URL https://kloud.azure-api.net/longrunning/manual/paths/invoke.

Very neat, but if we have a closer look at the API response we notice the location header and trigger URIs in the response payload exposing our Logic App endpoint. Call it paranoia or API OCD, but wouldn't it be better to have the subsequent polling API calls mediated through Azure API Management as well? This allows us to enforce policies like throttling, gives us the ability to identify our caller, and captures traffic in Application Insights together with all other API calls. Bonus.

Run API, run!

Let's have a closer look at the URL in the location header:

https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/operations/1d19015e-5f32-4d72-a92c-04a5da36084d?api-version=2016-10-01&sp=%2Fruns%2F08586570310757255241595533212CU31%2Foperations%2F1d19015e-5f32-4d72-a92c-04a5da36084d%2Fread&sv=1.0&sig=oU1qLxgdECjumvTai2mXMDn-0gKl0d3xJAM785JBMcI

And the trigger input/output URLs:

https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc
https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A

The URLs all follow a similar pattern of /workflows/<WorkflowId>/runs/<RunId>. The workflow ID is static and corresponds to our Logic App; the run ID is unique to each run of the Logic App. So what can we do to:

  • Rewrite URLs in our response headers and payload
  • Route calls through our Azure API Management instance to the correct Logic App instance

That’s where Azure API Management policies come to the rescue. First we’ll define an outbound processing policy on the ‘All operations’ level:

<policies>
    <inbound>
        <base />
        <set-backend-service id="apim-generated-policy" backend-id="LogicApp_longrunning_rg-logicdemo" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <set-header name="Location" exists-action="override">
            <value>@(context.Response.Headers.GetValueOrDefault("Location")?.Replace("prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7","kloud.azure-api.net/longrunning"))</value>
        </set-header>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

When we call the API again we'll notice that both the location header and all URLs in the response payload are now redirected to Azure API Management.

The next step is to map any calls to /longrunning/runs to a corresponding backend URL. Let’s define a new operation ‘runs’:

We'll also need to override the backend URL and remove the <base /> policy, which points to the Logic App trigger instead of the run endpoint:

<policies>
    <inbound>
        <set-backend-service base-url="https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Last but not least, let's test whether the added operation actually works by performing an HTTP GET on the URL specified in the location header. The first call results in a '202 Accepted':

Let’s try again after waiting for another minute:

The HTTP GET requests to our added API operations get successfully routed to our backend Logic App instance, and we’re receiving the expected payload after some asynchronous polling.

To summarise the required steps in order to abstract our Logic Apps backend URLs:

  • Configure your Logic App to be asynchronous in the HTTP response connector
  • Perform URL rewriting on the API level with a find-and-replace and location header override
  • Add a /runs operation to map the rewritten URLs to the corresponding Logic Apps backend URLs 

Securing APIs through RBAC with Azure API management and Azure AD

One of Azure API Management's great features is the ability to secure your APIs through policies, thereby separating authorisation logic from your actual APIs. There's plenty of guidance available on how to integrate Azure API Management with Azure Active Directory or other OAuth providers, but very little information on how to apply fine-grained access control to your APIs. Yes, it's easy to set up OAuth to grant access to API consumers (authorisation code grant) or machine-to-machine communication (client credentials grant). With the 'validate JWT' policy we can validate the authenticity of an access token and perform claims-based authorisation. However, unless we implement further controls, anyone from our Azure AD tenant can access our APIs by default. So what can we do to restrict access to certain groups or roles within our application?

Option 1: Graph API

One option would be to interrogate the Graph API within our application and check for AD group membership. However, I prefer to keep this authorisation logic outside my actual API, and repeating this task for every API becomes cumbersome. Besides, keeping strangers out at the front door would be my preference; ideally unauthorised users are kept at bay in Azure API Management.

Option 2: AAD group claims

Add group claims to our AAD JWT token. This can be easily configured in the App manifest of the API application registration in AD by configuring the groupMembershipClaims property:

Although this results in group claims being added to the OAuth JWT tokens, the groups are provided as GUIDs. That's not really user friendly for applying access controls, as you'd have to know the exact GUID corresponding to each group. Besides, we don't always want to rely on AD-wide group definitions to control access within our application.

Option 3: Role Based Access Control with JWT validation 

A third and, in my opinion, the neatest option is to define application-specific roles in our API application manifest in AAD. Users can be assigned to these application-specific roles, and we can check for role claims in an Azure API Management policy. In addition to allowing users to be assigned to roles, we'll enable application assignment for application-to-application communication as well (line 10 of the manifest):
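
The manifest itself isn't reproduced here, but the appRoles section of such a manifest looks roughly like the sketch below (the IDs and descriptions are made up); including "Application" in allowedMemberTypes is what enables the application-to-application assignment:

"appRoles": [
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "description": "Administrators can read and write",
        "displayName": "Admin",
        "id": "c20e145e-5459-4a6c-a074-b942bbd4cfe1",
        "isEnabled": true,
        "value": "Admin"
    },
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "description": "Readers have read-only access",
        "displayName": "Reader",
        "id": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f",
        "isEnabled": true,
        "value": "Reader"
    }
]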

For the purpose of this demo I’ve defined two application registrations in the AAD tenant:

  • Kloud API M (API)
    This represents our backend API, and will contain the application roles and user assignments. For the sake of simplicity in this demo I’ll use a mocked response in Azure API management instead of standing up an API.
  • API M client (API client application)
    This is the application accessing our backend API on behalf of the signed-in API users.  This application will need a delegated permission to access the API, as well as delegated permissions to initiate the user sign-in and read a user’s profile:

You'll also notice that the API M client has the custom application permissions 'Admin' and 'Reader' assigned to it. This allows the application to access our backend API directly using the OAuth client credentials grant.

In addition, we’ll require users to be specifically assigned to access our application through one of these roles by enabling the ‘user assignment required’ option in our AAD Enterprise Application definition:

We can assign our application specific roles to a user from our AAD tenant:

Next we’ll look at how to perform authorisation based on role claims in Azure API Management. Let’s first have a look at the JWT policy. You can apply this at the operation, API or global level.
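
The policy used in this demo isn't reproduced here; a minimal sketch of a validate-jwt policy that checks for the Admin role could look like the following (the tenant and audience values are placeholders):

<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/{tenant}.onmicrosoft.com/.well-known/openid-configuration" />
    <audiences>
        <audience>{application-id-uri-of-the-backend-api}</audience>
    </audiences>
    <required-claims>
        <claim name="roles" match="any">
            <value>Admin</value>
        </claim>
    </required-claims>
</validate-jwt>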

The easiest way to test our setup is by enabling OAuth user authorisation in the developer console, as per instructions here. This allows us to use the API M developer console as our client application, accessing our API on behalf of a signed in user. The demonstration below shows the API returning a ‘200 OK’ when we provide a JWT token containing the Admin role:

And here’s the JWT that was returned by the authorization server:

The JWT policy specifically checks for an Admin role, so let's try calling the API with an account that only has the role of Reader:

This time the JWT policy returns a '401 Unauthorized' due to the absence of the Admin role claim. We've successfully demonstrated how we can grant access to a backend API based on role membership in our AAD application.

Lastly, I want to show how we can enable machine to machine communications in a similar way. Let’s have a look again at the permissions assigned to the API M Client application. As you can see below we’ve assigned the application permission ‘Admin’ to the API M Client:

After giving administrator consent and enabling the client credentials grant in Azure API Management we can verify our policy through the developer console:

As demonstrated, the combination of Azure Active Directory and Azure API Management offers great capabilities for applying RBAC to APIs, without having to implement any authorisation logic in the actual API.

Building deployment pipelines for Azure Function proxies and Logic Apps

Azure Logic Apps offers a great set of tools to rapidly build APIs and leverage your existing assets through a variety of connectors. Whether in a more ad-hoc scenario or in a well-designed microservice architecture, it's always a good idea to introduce some form of decoupling through the mediator pattern. If you don't have the budget for a full-blown API Management rollout and your requirements don't extend further than a basic proxy as a mediator, keep on reading.

One of the intricacies of working with the Logic Apps HTTP trigger is its dynamic input URL. When recreating your Logic Apps via ARM templates you'll notice that these input URLs change once you've removed the existing Logic App. This, amongst other reasons, makes Logic Apps unsuitable for direct exposure to API consumers. Azure API Management offers a great way of building an API gateway between your consumers and Logic Apps, but comes with a serious price tag until the consumption tier finally becomes available later this year. Another way of introducing a mediator for your Logic Apps is Azure Function App proxies. Although very lightweight, we can consider a Function App proxy as an API layer with the following characteristics:

  • Decouple API consumer from API implementation
    By virtue of decoupling we can move our API implementation around in the future, or introduce versioning without impacting the API consumer
  • Centralised monitoring with Application Insights
    Rich out of the box monitoring capabilities through one-click deployment

In this post we’ll look at fully automating resource creation of a microservice, including the following components:

  • Application Insights instance
    Each App Insights instance has its unique subscription key. The ARM template will resolve the key during deployment for Application Insights integration with the Function App Proxy.
  • Logic App
    As mentioned before the input URL will be dynamic, so we’ll need to resolve this during deployment.
  • Function App
    The function app and its hosting plan can be easily created with an ARM template, together with some application settings including the reference to the Logic App backend URL which is determined during deployment time.
  • Function App Proxy
    Last but not least, proxies are defined as part of the application content. The proxies.json file in the wwwroot contains the actual proxy service definition, establishing the connection with the Logic App.

ARM deployment

Below is the single ARM template that contains the resource definition for the AppInsights instance, Logic App and Function App. The Logic App contains a simple workflow with HTTP trigger and response, outputting a supplied URL parameter in a JSON message body.

The important bit to point out here is the LogicAppBackendUri setting in the Function App:

[skip(listCallbackURL(concat(resourceId('Microsoft.Logic/workflows/', variables('logicAppName')), '/triggers/manual'), '2016-06-01').value,8)]

This expression strips the 'https://' prefix from the dynamically retrieved Logic App callback URL so we can refer to it from our proxy definition below. The proxy definition also copies a URL query string parameter to the backend service.
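
The proxy definition itself is published with the solution; a minimal sketch of what such a proxies.json can look like, assuming the app setting is called LogicAppBackendUri and the query string parameter is called name (the route and parameter name are illustrative):

{
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
        "LogicAppProxy": {
            "matchCondition": {
                "methods": [ "GET" ],
                "route": "/api/hello"
            },
            "backendUri": "https://%LogicAppBackendUri%&name={request.querystring.name}"
        }
    }
}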

Deployment

PowerShell

The following PowerShell script (with a little help from the Azure CLI) deploys the ARM template, and then deploys the Function App proxy by uploading the host.json and proxies.json files to the Function App using the Azure CLI. DeployAzureResourceGroup.ps1 is the out-of-the-box script that Visual Studio scaffolds in ARM template projects.
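
The original script is embedded as a gist; a rough sketch of the same idea is shown below (resource group, app name and paths are illustrative, and the exact parameters of the scaffolded script and upload step may differ):

# Deploy the ARM template containing the Logic App, Function App and App Insights instance
.\DeployAzureResourceGroup.ps1 -ResourceGroupLocation 'australiaeast' -ResourceGroupName 'rg-funcproxy-demo'

# Package the proxy definition and push it to the Function App with the Azure CLI
Compress-Archive -Path .\proxy\host.json, .\proxy\proxies.json -DestinationPath .\proxy.zip -Force
az functionapp deployment source config-zip --resource-group rg-funcproxy-demo --name funcproxy-demo --src .\proxy.zip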

The above PowerShell and Azure CLI scripts are an excellent way of creating your assets from scratch. In addition we’ll show how to use an Azure DevOps pipeline to perform true CI/CD.

Azure DevOps pipeline

With Azure DevOps pipelines we can easily set up a CI/CD pipeline with just three simple steps.

The first step performs an Azure resource group deployment to deploy the Logic App, Function App and AppInsights instance.

Next we’ll package the Function App proxy definition into a zip file.

The last step will deploy the packaged proxy definition to our Function App:

After a successful deployment with either PowerShell or Azure DevOps we can finally test our function app:

Happy days. The above demonstrates how we can utilise Azure to create a very cost effective and neat solution to provide an API and proxy whilst leveraging Application Insights to monitor incoming traffic.

Translating JSON messages with Logic Apps

One of the key components of an integration platform is message translation. The Microsoft Azure iPaaS Logic Apps service offers message translation with the out-of-the-box 'compose' operation. Alternatively, message translation can be achieved with Liquid transforms. The latter requires an Azure integration account, which comes at additional cost. In this article we'll look at the two transformation options and compare them in terms of cost, performance and usability. For my demo purposes I created two Logic Apps with HTTP input triggers and response outputs.

Performance

For the purpose of testing the performance of Logic App execution, I generated a sample JSON payload on https://www.json-generator.com/ containing an array of 1500 objects with a payload of about 1MB.

Although you can increase the compose performance by increasing the degree of parallelism of your for-each loop, the Liquid transform performs significantly better in all cases.

Input payload                   Liquid        Compose
800KB array with 1500 items     1.5 seconds   12 seconds
1.6MB array with 3000 items     3 seconds     22 seconds
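
For reference, the degree of parallelism mentioned above is controlled through the runtimeConfiguration element of the for-each action in code view (50 is the maximum number of parallel repetitions at the time of writing):

"runtimeConfiguration": {
    "concurrency": {
        "repetitions": 50
    }
}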

Needless to say, if you're working with large arrays of data and performance is a criterion, Liquid maps are the way to go.

Winner: Liquid

Cost

For development scenarios you can use the free tier, which comes without an SLA and with limitations on the maximum number of maps (25) and instances (1 account per region).

For production scenarios you have the choice of either a basic or a standard account. The main difference between the two is the number of EDI partners and agreements that can be set up, which has no relevance to message translation scenarios. A basic integration account falls just short of $400 AUD a month but allows 500 maps to be created. Execution is still charged at the price of a standard action, so there's no additional cost over the JSON compose transformation. In fact, the cost of the JSON compose solution can rapidly increase due to the for-each construct that's needed to iterate through an array. If you're mapping large arrays on a frequent basis you may even come close to justifying the integration account from a cost perspective.

Winner: Compose

Assembly

JSON Compose

Compose mappings can be created directly in the Logic App designer. The compose editor is a bit fiddly and flaky, both in the browser and in VS2017, and I found myself reverting to the code editor at times. Overall it's fairly easy to construct basic mappings.
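
As an illustration of what this looks like in code view: a Compose action inside a for-each loop, mapping a few fields from the sample payload (a sketch, not the exact mapping from my demo):

"For_each": {
    "type": "Foreach",
    "foreach": "@triggerBody()",
    "actions": {
        "Compose": {
            "type": "Compose",
            "inputs": {
                "Id": "@item()?['_id']",
                "Name": "@item()?['name']",
                "Email": "@item()?['email']"
            },
            "runAfter": {}
        }
    },
    "runAfter": {}
}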

Liquid transforms

Liquid transforms can easily be created in your text editor of choice. Visual Studio Code has some nice extensions that give you snippets or even output preview.

[
{% for item in content %}
    {
        "Id": "{{ item._id }}",
        "Name": "{{ item.name }}",
        "Email": "{{ item.email }}",
        "Phone": "{{ item.phone }}",
        "Address": "{{ item.address }}",
        "About": "{{ item.about }}"
    }{% unless forloop.last %},{% endunless %}
{% endfor %}
]

Winner: tie

Deployment

Liquid transforms are managed and deployed separately from your Logic Apps, greatly enhancing the ability to reuse maps. Although easy to upload or update in the Azure Portal, in CI/CD pipelines you’ll find yourself writing separate PowerShell scripts in addition to your Logic App ARM templates.
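
Such a script usually boils down to one cmdlet call per map; a sketch using the Az.LogicApp module (the names are illustrative, and the exact parameter set may differ between module versions):

# Create a Liquid map in the integration account (use Set-AzIntegrationAccountMap to update an existing one)
New-AzIntegrationAccountMap -ResourceGroupName 'rg-integration' `
    -Name 'my-integration-account' `
    -MapName 'CustomerToContact' `
    -MapType 'Liquid' `
    -MapFilePath '.\maps\CustomerToContact.liquid'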

Compose transformations are deployed as part of your Logic Apps, and therefore can be deployed as part of the Logic App ARM templates.

Winner: Compose

Conclusion

Whether to use Liquid transforms or Compose largely comes down to your own needs around performance, cost and the mapping complexity required (e.g. nested loops, which can get ugly with Compose). If you require transformations involving XML you've got another strong argument for Liquid maps. When doing enterprise integration at scale there's probably no way around Liquid maps, and with the ability to create up to 500 maps per integration account the cost can be shared amongst many integrations.

Putting SQL to REST with Azure Data Factory

Microsoft's integration stack has slowly matured over the past years, and we're on the verge of finally breaking away from BizTalk Server, or are we? In this article I'm going to explore Azure Data Factory (ADF). Rather than showing the usual out-of-the-box demo I'm going to demonstrate a real-world scenario that I recently encountered at one of Kloud's customers.

ADF is a very easy to use and cost-effective solution for simple integration scenarios that can best be described as ETL in the 'old world'. ADF can run at large scale, and has a series of connectors to load data from a data source, apply a simple mapping and load the transformed data into a target destination.

ADF is limited in terms of standard connectors, and (currently) has no functionality to send data to HTTP/RESTful endpoints. Data can be sourced from HTTP endpoints, but in this case we're going to read data from a SQL server and write it to an HTTP endpoint.

Unfortunately ADF tooling isn't available in VS2017 yet, but you can download the Microsoft Azure DataFactory Tools for Visual Studio 2015 here. Next we'll use the extremely useful third-party library 'Azure.DataFactory.LocalEnvironment' that can be found on GitHub. This library allows you to debug ADF projects locally, and eases deployment by generating ARM templates. The easiest way to get started is to open the sample solution and modify it accordingly.

You'll also need to set up an Azure Batch account and storage account according to the Microsoft documentation. Azure Batch runs your execution host engine, which effectively runs your custom activities on one or more VMs in a pool of nodes. The storage account will be used to deploy your custom activity, and is also used for ADF logging purposes. We'll also create an Azure SQL AdventureWorksLT database to read some data from.
Using the VS templates we’ll create the following artefacts:

  • AzureSqlLinkedService (AzureSqlLinkedService1.json)
    This is the linked service that connects the source with the pipeline, and contains the connection string to connect to our AdventureWorksLT database.
  • WebLinkedService (WebLinkedService1.json)
    This is the linked service that connects to the target pipeline. ADF doesn’t support this type as an output service, so we only use it to refer to from our HTTP table so it passes schema validation.
  • AzureSqlTableLocation (AzureSqlTableLocation1.json)
    This contains the table definition of the Azure SQL source table.
  • HttpTableLocation (HttpTableLocation1.json)
    The tooling doesn't contain a specific template for Http tables, but we can manually tweak any table template to represent our target (JSON) structure.

AzureSqlLinkedService

AzureSqlTable

Furthermore, we'll adjust the DataDownloaderSamplePipeline.json to use the input and output tables defined above. We'll also set our schedule and add a custom property to define a column mapping that allows us to map between input columns and output fields.

The grunt work of the solution is performed in the DataDownloaderActivity class, where custom .NET code 'wires together' the input and output data sources and performs the actual copying of data. The class uses a SqlDataReader to read records, and copies them in chunks as JSON to our target HTTP service. For demonstration purposes I am using the Request Bin service to verify that the output data made its way to the target destination.
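
The full class is published with the source code; for context, a bare-bones skeleton of an ADF (v1) custom .NET activity looks like this, with the actual SQL-to-HTTP copy logic omitted:

using System.Collections.Generic;
using Microsoft.Azure.Management.DataFactories.Models;
using Microsoft.Azure.Management.DataFactories.Runtime;

public class DataDownloaderActivity : IDotNetActivity
{
    // Azure Batch invokes Execute for every scheduled slice of the pipeline
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices,
        IEnumerable<Dataset> datasets,
        Activity activity,
        IActivityLogger logger)
    {
        logger.Write("Custom activity started.");

        // Resolve connection details from the linked services and datasets,
        // read rows with a SqlDataReader and POST them in chunks as JSON
        // to the target HTTP endpoint (omitted for brevity).

        return new Dictionary<string, string>();
    }
}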

We can deploy our solution via PowerShell, or via the Visual Studio 2015 tooling if preferred:

NewADF

After deployment we can see the data factory appear in the portal, and use the monitoring feature to see our copy tasks spin up according to the defined schedule:

ADF Output

In the Request Bin that I created I can see the output batches appearing one at a time:

RequestBinOutput

As you might notice it's not all that straightforward to compose and deploy a custom activity, and having to rely on Azure Batch can incur significant cost unless you adopt the right auto-scaling strategy. Although the solution requires us to write code and implement our connectivity logic ourselves, we're able to leverage some nice platform features such as a reliable execution host, retry logic, scaling, logging and monitoring, all accessible through the Azure portal.
The complete source code can be found here. The below gists show the various ADF artefacts and the custom .NET activity.

The custom activity C# code:

Moving resources between Azure Resource Groups

The concept of resource groups has been around for a little while, and is adequately supported in the Azure preview portal. Resource groups are logical containers that allow you to group individual resources such as virtual machines, storage accounts, websites and databases so they can be managed together. They give a much clearer picture of which resources belong together, and can also give visibility into consumption/spending in a grouped manner.

However, when resources are created in the classic Azure portal (e.g. virtual machines, storage accounts, etc.) there is no support for resource group management, which results in a new resource group being created for each resource that you create. This can lead to a large number of resource groups that are unclear and tedious to manage. Also, if you do tend to use resource groups in the Azure preview portal there is no way to perform housekeeping or management of these resource groups.

With the latest Azure PowerShell cmdlets (v0.8.15.1) we now have the ability to move resources between resource groups. You can install the latest version of the PowerShell tools via the Web Platform Installer:

wpi azure powershell

After installation of this particular version we now have the following PowerShell commands available that will assist us in moving resources:

  • New-AzureResourceGroup
  • Move-AzureResource
  • Remove-AzureResourceGroup
  • Get-AzureResource
  • Get-AzureResourceGroup
  • Get-AzureResourceLog
  • Get-AzureResourceGroupLog

Switch-AzureMode AzureResourceManager

After launching a Microsoft Azure Powershell console we need to switch to Azure Resource Manager mode in order to manage our resource groups:

Switch-AzureMode AzureResourceManager

Get-AzureResourceGroup

Without any parameters this cmdlet gives a complete list of all resource groups that are deployed in your current subscription:

When resources are created in the classic Azure portal they will appear with a new resource group name that corresponds to the name of the object that was created (e.g. virtual machine name, storage account name, website name, etc.).

Note that we have a few default resource groups for storage, SQL and some specific resource groups corresponding to virtual machines. These were automatically created when I built some virtual machines and an Azure SQL server database in the classic Azure portal.

New-AzureResourceGroup

In order to group our existing resources we’re going to create a new resource group. It’s important to note that resource groups reside in a particular region which needs to be specified upon creation:

You’d think that resources can only be moved across resource groups that reside in the same region. However, I’ve successfully moved resources between resource groups that reside in different regions. This doesn’t affect the actual location of the resource so I’m not sure what the exact purpose of specifying a location for a resource group is.

Get-AzureResourceGroup

The Get-AzureResourceGroup cmdlet allows you to view all resources within a group, including their respective types and IDs:

Move-AzureResource

To move resources from the existing resource groups we need to provide the Move-AzureResource cmdlet with a list of resource IDs. The cmdlet accepts the resource ID(s) as pipeline input, so we can use the Get-AzureResource cmdlet to feed it the list of resource IDs. The following script moves a cloud service, virtual machine and storage account (all residing in the same region) to the newly created resource group:
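
The script isn't reproduced here; in essence it pipes the resource IDs into the move cmdlet, along these lines (the group names are illustrative, and parameter names in this old module release may differ slightly):

# Move all resources from the auto-created group into the new resource group
Get-AzureResource -ResourceGroupName 'kloud-vm1' |
    Move-AzureResource -DestinationResourceGroupName 'Kloud-Production'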

The Get-AzureResource cmdlet allows you to filter further based on resource type or individual resource name. The Move-AzureResource cmdlet automatically removes the original resource group in case there are no resources associated with it after the move.

Unfortunately at the time of writing there was an issue with moving SQL database servers and databases to other resource groups:

Trying to move the SQL server only does not raise any errors, but doesn’t result in the desired target state and leaves the SQL server and database in the original resource group:

The cmdlets Get-AzureResourceLog and Get-AzureResourceGroupLog provide a log of all the operations performed on resources and resource groups, but couldn't provide any further information regarding the failure to move resources to the new group.

Now that we have successfully moved our virtual machine and storage account to the new resource group, we can get insight into these resources through the resource group:

Resource Group

Command and control with Arduino, Windows Phone and Azure Mobile Services

In most of our posts on the topic of IoT to date we’ve focussed on how to send data emitted from sensors and devices to centralised platforms where we can further process and analyse this data. In this post we’re going to have a look at how we can reverse this model and control our ‘things’ remotely by utilising cloud services. I’m going to demonstrate how to remotely control a light emitting diode (LED) strip with a Windows Phone using Microsoft Azure Mobile Services.

To control the RGB LED strip I’m going to use an Arduino Uno, a breadboard and some MOSFETs (a type of transistor). The LED strip will require more power than the Arduino can supply, so I’m using a 9V battery as a power supply which needs to be separated from the Arduino power circuit, hence why we’re using MOSFET transistors to switch the LEDs on and off.

The Arduino Uno will control the colour of the light by controlling three MOSFETs – one each for the red, blue and green LEDs. The limited programmability of the Arduino Uno means we can’t establish an Azure Service Bus relay connection, or use Azure Service Bus queues. Luckily Azure Mobile Services allow us to retrieve data via plain HTTP.

A Windows Phone app will control the colour of the lights by sending data to the mobile service. Subsequently the Arduino Uno can retrieve this data from the service to control the colour by using a technique called 'pulse width modulation' on the red, green and blue LEDs. Pulse width modulation allows us to adjust the brightness of the LEDs by quickly turning a particular LED colour on and off, thus artificially creating a unique colour spectrum.

For the purpose of this example we won’t incorporate any authentication in our application, though you can easily enforce authentication for your Mobile Service with a Microsoft Account by following these two guides:

A diagram showing our overall implementation is shown below.

Command and Control diagram

Mobile service

We will start by creating an Azure Mobile Service in the Azure portal. For the purpose of this demonstration we can use the service's free tier, which provides data storage up to 20MB per subscription.

Navigate to the Azure portal and create a new service:

Creating a Mobile Service 1

Next, choose a name for your Mobile Service, your database tier and geographic location. We’ll choose a Javascript backend for simplicity in this instance.

Creating a Mobile Service 2

Creating a Mobile Service 3

In this example we’ll create a table ‘sensordata’ with the following permissions:

Mobile Service Permissions

These permissions allow us to insert records from our Windows Phone app with the application key, and have the Arduino Uno retrieve data without any security. We could make the insertion of new data secure by demanding authentication from our Windows Phone device without too much effort, but for the purpose of this demo we'll stick to this very basic form of protection.

In the next section we’re going to create a Windows Phone application to send commands to our mobile service.

Windows Phone Application

To control the colour in a user friendly way we will use a colour picker control from the Windows Phone Toolkit, which can be installed as a NuGet package. This toolkit is not compatible with Windows Phone 8.1 yet, so we’ll create a Windows Phone Silverlight project and target the Windows Phone 8.0 platform as shown below.

Visual Studio Create Project 1

Visual Studio Create Project 2

Next, we’ll install the ‘Windows Phone Toolkit’ NuGet package as well as the mobile services NuGet package:

Install Windows Phone Toolkit Nuget

Install Mobile Services NuGet

For the purpose of this demo we won't go through all the colour picker code in detail here. Excellent guidance on how to use the colour picker can be found on the Microsoft Mobile Developer Wiki.

The code that sends the selected colour to our mobile service table is as follows.
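
The listing is embedded as a gist in the original post; a minimal sketch of the same idea with the Mobile Services client is shown below (the service URL, application key and the SensorData class are placeholders matching the table shown later):

using System.Threading.Tasks;
using System.Windows.Media;
using Microsoft.WindowsAzure.MobileServices;

// Client pointing at the mobile service; the URL and application key are placeholders
private static readonly MobileServiceClient MobileService =
    new MobileServiceClient("https://myiotservice.azure-mobile.net/", "YOUR-APPLICATION-KEY");

private async Task SendColourAsync(Color colour)
{
    var item = new SensorData
    {
        DeviceId = "PhoneEmulator",
        SensorId = "ColorPicker",
        EventType = "RGB",
        // Event data is the RGB colour separated by semicolons
        EventData = string.Format("{0};{1};{2}", colour.R, colour.G, colour.B)
    };

    await MobileService.GetTable<SensorData>().InsertAsync(item);
}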

The event data consists of colour data in the RGB model, separated by semicolons.

The complete working solution can be found in this Github repository. Make sure you point it to the right Azure Mobile Service and change the Application Key before you use it!

Run the application and pick a colour on the phone as shown below.

Phone ScreenShot

Now that we have a remote control that is sending out data to the Mobile Service it’s time to look at how we can use this information to control our LED strip.

Arduino sketch

In order to receive commands from the Windows Phone app we are going to use OData queries to retrieve the last inserted record from the Azure Mobile Service, which exposes table data via OData out of the box. We can easily get the last inserted record in JSON format via an HTTP GET request to a URL similar to the following:

https://myiotservice.azure-mobile.net/tables/sensordata?$top=1&$orderby=__createdAt%20desc

When we send an HTTP GET request, the following HTTP body will be returned:

[
  {
    "id":"A086CE3F-5FD3-45B6-A967-E0928E3C5A96",
    "DeviceId":"PhoneEmulator",
    "SensorId":"ColorPicker",
    "EventType":"RGB",
    "EventData":"0;0;255"
  }
]

Notice how the colour is set to blue in the RGB data.

The Arduino schematics for the solution:

Arduino Command Control Schematic

For illustrative purposes I've drawn a single LED. In reality I'm using an LED strip that needs more voltage than the Arduino can supply, hence the 9V battery and the MOSFET transistors. Don't attach a 9V battery to a single LED or it will have a very short life…

The complete Arduino sketch:

When we run the sketch the JSON data will be retrieved, and the colour of the LED strip set to blue:

The Working Prototype!

In this article I've demonstrated how to control a low-end IoT device that does not have any HTTPS/TLS capabilities. This scenario is far from perfect, and ideally we want to take additional security measures to prevent unauthorised access to our IoT devices and the data in transit. In a future article I will showcase how we can resolve these issues by using a much more powerful device than the Arduino Uno with an even smaller form factor: the Intel Edison. Stay tuned!

Microsoft Windows IoT and the Intel Galileo

You might have seen one of these headlines a while back: ‘Microsoft Windows now running on Intel Galileo development board’, ‘Microsoft giving away free Windows 8.1 for IoT developers’. Now before we all get too excited, let’s have a closer look beyond these headlines and see what we’re actually getting!

Intel Galileo

With a zillion devices being connected to the Internet by the year 2020 a lot of hardware manufacturers want to have a piece of this big pie, and Intel got into the game by releasing two different development boards / processors: the Intel Galileo and more recently the Intel Edison.

Intel Galileo
Intel Galileo

Intel Edison
Intel Edison

The Galileo is Intel’s first attempt to break into consumer prototyping, or the ‘maker scene’. The board comes in two flavours, Gen 1 and Gen 2 with the latter being a slightly upgraded model of the first release.

Like many other development platforms the board offers hardware and pin compatibility with a range of Arduino shields to catch the interest from a large number of existing DIY enthusiasts. The fundamental difference between boards like the Arduino Uno and the Intel Galileo is that Arduino devices run on a real-time microcontroller (mostly Atmel Atmega processors) whereas the Galileo runs on a System on Chip architecture (SoC). The SoC runs a standard multi-tasking operating system like Linux or Windows, which aren’t real-time.

Both Gen 1 and Gen 2 boards contain an Intel Quark 32-bit 400 MHz processor, which is compatible with the Intel Pentium processor instruction set. Furthermore we have a full-sized mini-PCI Express slot, a 100 Mb Ethernet port, a microSD slot and a USB port. The Galileo is a headless device, which means you can't connect a monitor via VGA or HDMI, unlike the Raspberry Pi for example. The Galileo effectively offers Arduino compatibility through hardware pins, and software simulation within the operating system.

The microSD card slot makes it easy to run different operating systems on the device as you can simply write an operating system image on an SD card, insert it into the slot and boot the Galileo. Although Intel offers the Yocto Poky Linux environment there are some great initiatives to support other operating systems. At Build 2014 Microsoft announced the ‘Windows Developer Program for IoT’. As part of this program Microsoft offers a custom Windows image that can run on Galileo boards (there’s no official name yet, but let’s call it Windows IoT for now).

Windows on Devices / Windows Developer Program for IoT

Great, so now we can run .NET Framework applications, and for example utilise the .NET Azure SDK? Well, not really, yet… The Windows image is still in Alpha release stage; it only runs a small subset of the .NET CLR and is not able to support larger .NET applications of any kind. Although a simple "Hello World" application will run flawlessly, applications will throw multiple exceptions as soon as functionality beyond System.Core.dll is called.

So how can we start building our things? You can write applications using the Wiring APIs in exactly the same way as you program your Arduino. Microsoft provides compatibility with the Arduino environment through a set of C++ libraries that are part of a new Visual Studio project type, available once you set up your development environment according to the instructions on http://ms-iot.github.io/content/.

We’ll start off by creating a new ‘Windows for IoT’ project in Visual Studio 2013:

New IoT VS Project

The project template will create a Visual C++ console application with a basic Arduino program that turns the built-in LED on and off in a loop:

Now let’s grab our breadboard and wire up some sensors. For the purpose of this demo I will use the built-in temperature sensor on the Galileo board. The objective will be to transmit the temperature to an Azure storage queue.

Since the Arduino Wiring API is implemented in C++ I decided to utilise some of the other Microsoft C++ libraries on offer: the Azure Storage Client Library for C++, which in turn uses the C++ REST SDK. They're hosted on GitHub and CodePlex respectively and can both be installed as NuGet packages. I was able to deliver messages to a storage queue with the C++ library in a standard C++ Win32 console application, so I assumed this would work on the Galileo. Here's the program listing of the 'main.cpp' file of the project:

The instructions mentioned earlier explain in detail how to set up your Galileo to run Windows, so I won't repeat that here. We can deploy the Galileo console application to the development board from Visual Studio. This simply causes the compiled executable to be copied to the Galileo via a file share. Since it's a headless device we can only connect to the Galileo via good old Telnet. Next, we launch the deployed application on the command line:

Windows IoT command line output

Although the console application is supposed to write output to the console, none of it is shown. I wonder if there are certain Win32 features missing in this Windows on Devices release, since no debug information is written to the console for most commands that are executed over Telnet. When I tried to debug the application from Visual Studio I was able to extract some further diagnostics:

IoT VS Debug Output

Perhaps this is due to a missing Visual Studio C++ runtime on the Galileo board. When I tried to perform an unattended installation of this runtime it did not seem to install at all, although the lack of command line output makes this guesswork.

Conclusion

Microsoft's IoT offering is still in its very early days. That doesn't only apply to the Windows IoT operating system, but also to Azure platform features like Event Hubs. Although this is an Alpha release of Windows IoT I can't say I'm overly impressed. The Arduino compatibility is a great feature, but the lack of easy connectivity makes it just a 'thing' without Internet. Although you can use the Arduino Ethernet / HTTP library, I would have liked to benefit from the available C++ libraries to securely connect to APIs over HTTPS, something which is impossible on the Arduino platform.

The Microsoft product documentation looks rather sloppy at times and is generally lacking, and I'm curious to see what the next release will bring. According to Microsoft's FAQ they're focussing on supporting the universal app model. The recent announcements around open-sourcing the .NET Framework will perhaps enable us to use some .NET Framework features in a Galileo Linux distribution in the not-too-distant future.

In a future blog post I will explore some other scenarios for the Intel Galileo using Intel’s IoT XDK, Node JS and look at how to connect the Galileo board to some of the Microsoft Azure platform services.