A first look at Azure Integration Service Environments (ISE)

Introduction

Microsoft has released its long-awaited Integration Service Environments (ISE) Azure service in public preview. Time to have a sneak peek! An Integration Service Environment is a dedicated environment in which to execute your integration workloads, as opposed to the serverless compute model used by traditional Logic Apps. Although the serverless model is great for many purposes, an ISE can be the better fit in some scenarios, which I'll outline further in this article.

Why use ISE?

The benefit of dedicated resources is the absence of the noisy neighbour effect, resulting in more reliable performance. Although workloads within your ISE may still impact each other, you'll no longer experience the drawbacks of shared resources in terms of latency and throughput. In addition, an ISE has auto-scaling capability: if resource-intensive integration workloads (temporarily) put extra strain on resources, this can be alleviated by scaling out.

As with most cloud services there's a financial aspect to consider. Using an ISE comes at a significant cost. However, it's a fixed and predictable cost, as opposed to consumption-based Logic Apps billing. There's also the inclusion of one standard integration account and one enterprise connector, which can be any of the available enterprise connectors. Note that currently not all enterprise connectors have a VNET-integrated ISE equivalent: for the SAP connector, for instance, you'll still be required to use the on-premises data gateway, although usage of the connector will be included in the allowance of your ISE. The SAP ISE connector is in the works and will be released later this year.

Pricing

ISE is currently billed at A$5.16 an hour when hosted in Australian regions. This includes a 50% public preview discount. Once it reaches general availability an ISE will set you back approximately A$7,500 a month.

It will take a standard integration account, 100,000 enterprise connector executions, 300,000 standard connector executions and 300,000 action executions a month combined to break even with consumption-based Logic Apps billing once we're out of public preview pricing. This is based on a single ISE base unit; more compute power can be purchased at additional cost. A significant amount, but considering that a single ISE base unit provides 75 million executions a month, this could make perfect economic sense in large-scale operations.

What else? An ISE supports connectivity to VNETs! Both VPN and ExpressRoute are supported. We no longer have to rely on the on-premises data gateway for hybrid cloud integration scenarios, and we can finally tackle some on-premises integration scenarios without data having to flow outside of your VNET at all.

In addition an ISE offers a static inbound IP address and isolated storage. The latter can help to comply with data sovereignty constraints with your run history remaining within the boundaries of your VNET.

Creating an ISE

Creating an ISE is well documented by Microsoft; don't forget to perform the steps of assigning permissions in your VNET to the Logic Apps resource. Within half an hour you should be up and running:

An ISE contains a workflow runtime used to execute your Logic Apps and built-in steps, as well as a connector runtime to execute all your connectors. Both can be monitored in the dashboard, allowing you to keep an eye on resource utilisation and plan capacity appropriately.

Creating an integration account

As mentioned before an ISE includes one standard integration account. Curiosity took over and I wanted to find out if an additional free account could also be added:

The answer is no, but with a maximum of 1000 maps and schemas a single standard account should suffice. Besides, the free tier does not provide any SLAs and should only be used for development and test purposes.

Auto scaling

If a single base unit is not sufficient for your workloads you can easily add additional compute instances to your ISE. There’s a maximum of 3 additional units out of the box, but Microsoft is happy to cater for more capacity if required.

Similar to App Services, an ISE supports auto-scaling. Based on performance metrics of your workflow or connector instances you can scale out or back according to workload demand.

Connectivity

The built-in connectors (e.g. HTTP) always run within your ISE's workflow runtime, allowing you to connect to any resource within the boundaries of your VNET and NSGs. In addition there are specific standard and enterprise ISE connectors. These carry the label ISE and always run in the same ISE as your Logic Apps, albeit in the connector runtime. Their equivalents without the ISE label run in the global Logic Apps service and therefore have no access to your VNET.

For my demos I created a VNET with a point-to-site connection, connecting a development machine to a VNET in Azure. My development environment hosts both an HTTP Web API and a SQL Server instance. Let's have a look at what's involved in connecting to those.

On premises connectivity with the SQL connector

Assuming you've spun up an ISE and have connectivity to your VNET, we can jump straight into connecting to an on-premises SQL Server database. For this demo I'm running a SQL Server Express edition with a Northwind database and remote access enabled.

The ISE SQL connector can easily be configured in the Logic Apps designer (or API connection subsequently) with SQL connection details:

Upon a successful connection you’ll be able to use the connector in the same manner as the SQL Azure connector:

Alternatively I could have connected to my SQL Server with the on-premises data gateway, but given the mechanics of the underlying Service Bus relay you'll experience much worse performance than connecting through a VNET / ExpressRoute.

On premises connectivity with the HTTP connector

The HTTP connector is built-in, and does not require special configuration to connect to on-premises resources:

Summary

ISE is a great addition to the Azure iPaaS family and further increases Microsoft's capability in the integration space. If you're concerned about security aspects and data governance around your integration solutions, want predictable performance, or need lower latency between your Logic Apps and on-premises resources, ISE might be a good fit for you. A price tag of A$7,500 per month in Australian regions may sound like a lot, but given the huge amount of compute power available this is more economical than other iPaaS offerings.

Asynchronous Logic Apps with Azure API Management

One of the many great features of Logic Apps is its support for long running asynchronous workflows through the ‘202 async’ pattern. Although not standardised in any official specification as far as I know, the ‘202 async’ pattern is commonly used to interact with APIs in an asynchronous way through polling. In summary, this pattern informs an API consumer that an API call is accepted (HTTP status 202) accompanied by a callback URL where the API consumer can regularly check for an actual response payload. The callback URL is provided in the HTTP location response header. In addition Logic Apps provides a recommendation regarding the polling interval through a ‘Retry-After’ HTTP response header. To demonstrate how this magic works in action I’ve created a small Logic App with a built-in delay of one minute. Here’s the basic message flow:

If we call this Logic App from an HTTP client you'll only receive a response message after a minute. Not ideal, and scenarios like this will likely result in timeouts if the operation takes even longer. Luckily Logic Apps makes it very easy to resolve this challenge with a simple flick of a switch in the settings of the HTTP response connector:

When we call the Logic App API with an HTTP client like Postman we receive a completely different response. Instead of the expected response payload we get a bunch of Logic App run properties. The status property indicates that the workflow is still running.

{
    "properties": {
        "waitEndTime": "2018-12-12T00:50:09.7523799Z",
        "startTime": "2018-12-12T00:50:09.7523799Z",
        "status": "Running",
        "correlation": {
            "clientTrackingId": "08586570310757255241595533212CU31"
        },
        "workflow": {
            "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/versions/08586586828895236601",
            "name": "08586586828895236601",
            "type": "Microsoft.Logic/workflows/versions"
        },
        "trigger": {
            "name": "manual",
            "inputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc",
                "contentVersion": "HnyZbRBXXZ5RxDoTJydztQ==",
                "contentSize": 28,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "HnyZbRBXXZ5RxDoTJydztQ=="
                }
            },
            "outputsLink": {
                "uri": "https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A",
                "contentVersion": "DdHqLk/lmUN0iUdPwWVQ8A==",
                "contentSize": 277,
                "contentHash": {
                    "algorithm": "md5",
                    "value": "DdHqLk/lmUN0iUdPwWVQ8A=="
                }
            },
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "endTime": "2018-12-12T00:50:09.7506184Z",
            "originHistoryName": "08586570310757255241595533212CU31",
            "correlation": {
                "clientTrackingId": "08586570310757255241595533212CU31"
            },
            "status": "Succeeded"
        },
        "outputs": {},
        "response": {
            "startTime": "2018-12-12T00:50:09.7506184Z",
            "correlation": {},
            "status": "Waiting"
        }
    },
    "id": "/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31",
    "name": "08586570310757255241595533212CU31",
    "type": "Microsoft.Logic/workflows/runs"
}

If we look at the response HTTP headers we notice a status of ‘202 Accepted’, a location header and a suggested polling retry interval.

If we immediately follow the link from the location header we'll get the same response message, until, after about a minute, the Logic App workflow has completed its run and we finally receive our Logic App response payload:
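The client-side half of this pattern is straightforward to sketch. Below is a rough Python illustration of the polling loop; the `get` callable is a stand-in for whatever HTTP client you use, assumed to return a status code, a header dictionary and a body:

```python
import time

def poll_until_complete(get, url, max_attempts=20):
    """Follow the '202 async' pattern: keep polling the Location URL
    until a non-202 response arrives, honouring the Retry-After header."""
    for _ in range(max_attempts):
        status, headers, body = get(url)
        if status != 202:
            return status, body                     # the actual response payload
        url = headers.get("Location", url)          # where to poll next
        time.sleep(float(headers.get("Retry-After", "1")))
    raise TimeoutError("operation did not complete in time")
```

The key design point is that the server, not the client, drives both the polling target (Location) and the cadence (Retry-After), which is exactly what the Logic Apps runtime provides.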

Pretty cool. Although I'm not a huge fan of polling, given that it's so chatty and doesn't result in a timely response, I think there are certainly valid use cases for this pattern. API consumers could provide a callback channel in the request payload or header instead, but this has the drawback that every API consumer all of a sudden becomes a provider too. Not feasible in most cases.

Azure API Management

As a general rule of thumb I don’t expose Logic App APIs directly to consumers and mediate them through either an Azure function proxy or Azure API Management. Azure API Management has built-in integration with Logic Apps, and especially with the recent addition of the consumption tier in Azure API Management it’s a great way of abstracting your API implementations. Let’s create a basic API by adding our Logic App to our Azure API Management instance:

I’ve simply selected my Logic App ‘longrunning’ and associated it with my API product ‘Anonymous’, which doesn’t require a subscription and makes testing our API even easier. Next we’ll call our API through the Azure API Management test console.

Our API is successfully called through Azure API Management, hiding the Logic Apps trigger URL and exposing it via the more static URL https://kloud.azure-api.net/longrunning/manual/paths/invoke.

Very neat, but if we have a closer look at the API response we notice the location header and trigger URIs in the response payload still expose our Logic App endpoint. Call it paranoia or API OCD, but wouldn't it be better to have the subsequent polling API calls mediated through Azure API Management as well? This allows us to enforce policies like throttling, gives us the ability to identify our caller, and captures traffic in Application Insights together with all other API calls. Bonus.

Run API, run!

Let's have a closer look at the URL in the location header:

https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/operations/1d19015e-5f32-4d72-a92c-04a5da36084d?api-version=2016-10-01&sp=%2Fruns%2F08586570310757255241595533212CU31%2Foperations%2F1d19015e-5f32-4d72-a92c-04a5da36084d%2Fread&sv=1.0&sig=oU1qLxgdECjumvTai2mXMDn-0gKl0d3xJAM785JBMcI

And the trigger input/output URLs:

https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerInputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerInputs%2Fread&sv=1.0&sig=xD7xM2pvbs-_RgX1FVcsq2MImu5rlt_WCfLq4tE9Qqc

https://prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7/runs/08586570310757255241595533212CU31/contents/TriggerOutputs?api-version=2016-10-01&se=2018-12-12T04%3A00%3A00.0000000Z&sp=%2Fruns%2F08586570310757255241595533212CU31%2Fcontents%2FTriggerOutputs%2Fread&sv=1.0&sig=TpIrjRJqorJLs621AEWFrDNzHcRfYKORA8YuVXq6e9A

The URLs all follow a similar pattern of /workflows/&lt;WorkflowId&gt;/runs/&lt;RunId&gt;. The workflow ID is static and corresponds to our Logic App; the run ID belongs to each instance of a running Logic App. So what can we do to:

  • Rewrite URLs in our response headers and payload
  • Route calls through our Azure API Management instance to the correct Logic App instance

That’s where Azure API Management policies come to the rescue. First we’ll define an outbound processing policy on the ‘All operations’ level:

<policies>
    <inbound>
        <base />
        <set-backend-service id="apim-generated-policy" backend-id="LogicApp_longrunning_rg-logicdemo" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <find-and-replace from="prod-18.australiaeast.logic.azure.com:443/workflows/04c74a7043984ffd8bda2dc2437a6bf7" to="kloud.azure-api.net/longrunning" />
        <set-header name="Location" exists-action="override">
            <value>@(context.Response.Headers.GetValueOrDefault("Location")?.Replace("prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7","kloud.azure-api.net/longrunning"))</value>
        </set-header>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

When we call the API again we'll notice that both the location header and all URLs in the response payload are now redirected to Azure API Management.

The next step is to map any calls to /longrunning/runs to a corresponding backend URL. Let’s define a new operation ‘runs’:

We’ll also need to override the backend URL and remove the <base /> policy which points to the Logic App trigger instead of run endpoint:

<policies>
    <inbound>
        <set-backend-service base-url="https://prod-18.australiaeast.logic.azure.com/workflows/04c74a7043984ffd8bda2dc2437a6bf7" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

Last but not least, let's test whether the added operation actually works by performing an HTTP GET on the URL specified in the location header. The first call results in a '202 Accepted':

Let’s try again after waiting for another minute:

The HTTP GET requests to our added API operations get successfully routed to our backend Logic App instance, and we’re receiving the expected payload after some asynchronous polling.

To summarise the required steps in order to abstract our Logic Apps backend URLs:

  • Configure your Logic App to be asynchronous in the HTTP response connector
  • Perform URL rewriting on the API level with a find-and-replace and location header override
  • Add a /runs operation to map the rewritten URLs to the corresponding Logic Apps backend URLs 

Securing APIs through RBAC with Azure API management and Azure AD

One of Azure API Management's great features is the ability to secure your APIs through policies, thereby separating authorisation logic from your actual APIs. There's plenty of guidance available on how to integrate Azure API Management with Azure Active Directory or other OAuth providers, but very little information on how to apply fine-grained access control to your APIs. Yes, it's easy to set up OAuth to grant access to API consumers (authorisation code grant) or machine-to-machine communication (client credentials grant). With the 'validate JWT' policy we can validate the authenticity of an access token and perform claims-based authorisation. However, unless we implement further controls, anyone from our Azure AD tenant can access our APIs by default. So what can we do to restrict access to certain groups or roles within our application?

Option 1: Graph API

One option would be to interrogate the Graph API within our application and check for AD group membership. However, I prefer to keep authorisation logic outside my actual API, and repeating this task for every API becomes cumbersome. Besides, keeping strangers out at the front door would be my preference: ideally, unauthorised users are kept at bay in Azure API Management.

Option 2: AAD group claims

Add group claims to our AAD JWT token. This can easily be configured in the app manifest of the API application registration in AD by setting the groupMembershipClaims property:
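The relevant manifest fragment is a one-liner; 'SecurityGroup' emits security group claims, while 'All' would also include distribution lists:

```json
{
    "groupMembershipClaims": "SecurityGroup"
}
```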

Although this results in group claims being added to OAuth JWT tokens, group claims are provided as GUIDs. Not really user-friendly for applying access controls, as you'd have to know the exact GUID corresponding to each group. Besides, we don't always want to rely on AD-wide group definitions to control access within our application.

Option 3: Role Based Access Control with JWT validation 

A third, and in my opinion neatest, option is to define application-specific roles in our API application manifest in AAD. Users can be assigned to these application-specific roles, and we can check for role claims in an Azure API Management policy. In addition to allowing users to be assigned to roles, we'll enable application assignment for application-to-application communication as well:
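The appRoles section of the manifest could look something like the sketch below. The GUIDs are placeholders, and allowedMemberTypes includes 'Application' to enable the client credentials scenario alongside user assignment:

```json
"appRoles": [
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "description": "Admins can read and write",
        "displayName": "Admin",
        "id": "00000000-0000-0000-0000-000000000001",
        "isEnabled": true,
        "value": "Admin"
    },
    {
        "allowedMemberTypes": [ "User", "Application" ],
        "description": "Readers have read-only access",
        "displayName": "Reader",
        "id": "00000000-0000-0000-0000-000000000002",
        "isEnabled": true,
        "value": "Reader"
    }
]
```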

For the purpose of this demo I’ve defined two application registrations in the AAD tenant:

  • Kloud API M (API)
    This represents our backend API, and will contain the application roles and user assignments. For the sake of simplicity in this demo I’ll use a mocked response in Azure API management instead of standing up an API.
  • API M client (API client application)
    This is the application accessing our backend API on behalf of the signed-in API users.  This application will need a delegated permission to access the API, as well as delegated permissions to initiate the user sign-in and read a user’s profile:

You'll also notice that the API M client has the custom application permissions 'Admin' and 'Reader' assigned to it. This allows the application to access our backend API directly using the OAuth client credentials grant.

In addition, we’ll require users to be specifically assigned to access our application through one of these roles by enabling the ‘user assignment required’ option in our AAD Enterprise Application definition:

We can assign our application specific roles to a user from our AAD tenant:

Next we'll look at how to perform authorisation based on role claims in Azure API Management. Let's first have a look at the JWT validation policy, which you can apply at the operation, API or global level.
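A validate-jwt policy along these lines should do the trick; the tenant name and audience GUID below are placeholders for your own values:

```xml
<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://login.microsoftonline.com/yourtenant.onmicrosoft.com/.well-known/openid-configuration" />
    <audiences>
        <audience>00000000-0000-0000-0000-000000000000</audience>
    </audiences>
    <required-claims>
        <claim name="roles" match="any">
            <value>Admin</value>
        </claim>
    </required-claims>
</validate-jwt>
```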

The easiest way to test our setup is by enabling OAuth user authorisation in the developer console, as per Microsoft's documented instructions. This allows us to use the API M developer console as our client application, accessing our API on behalf of a signed-in user. The demonstration below shows the API returning a '200 OK' when we provide a JWT containing the Admin role:

And here’s the JWT that was returned by the authorization server:
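Decoded, the token payload contains the roles claim our policy checks for; the other claims shown here are illustrative stand-ins for the usual AAD claims:

```json
{
    "aud": "00000000-0000-0000-0000-000000000000",
    "iss": "https://sts.windows.net/your-tenant-id/",
    "name": "Demo Admin",
    "roles": [ "Admin" ],
    "upn": "admin@yourtenant.onmicrosoft.com"
}
```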

The JWT policy specifically checks for an Admin role, so let's try calling the API with an account that only has the Reader role:

This time the JWT policy returns a '401 Unauthorized' due to the absence of the Admin role claim. We've successfully demonstrated how to grant access to a backend API based on role membership in our AAD application.

Lastly, I want to show how we can enable machine to machine communications in a similar way. Let’s have a look again at the permissions assigned to the API M Client application. As you can see below we’ve assigned the application permission ‘Admin’ to the API M Client:

After giving administrator consent and enabling the client credentials grant in Azure API Management we can verify our policy through the developer console:

As demonstrated, the combination of Azure Active Directory and Azure API Management offers great capabilities for applying RBAC to APIs, without having to implement any authorisation logic in the actual API.

Building deployment pipelines for Azure Function proxies and Logic Apps

Azure Logic Apps offer a great set of tools to rapidly build APIs and leverage your existing assets through a variety of connectors. Whether in a more ad-hoc scenario or in a well-designed micro service architecture, it’s always a good way to introduce some form of decoupling through the mediator pattern. If you don’t have the budget for a full blown API Management rollout and your requirements don’t extend further than a basic proxy as a mediator, keep on reading.

One of the intricacies of working with the Logic Apps HTTP trigger is its dynamic input URLs. When recreating your Logic Apps via ARM templates you'll notice that these input URLs change once you've removed your existing Logic App. This, amongst other reasons, makes Logic Apps unsuitable for direct exposure to API consumers. Azure API Management offers a great way of building an API gateway between your consumers and Logic Apps, but comes with a serious price tag until the consumption tier becomes available later this year. Another way of introducing a mediator for your Logic Apps is Azure Function App proxies. Although very lightweight, we can consider a Function App proxy as an API layer with the following characteristics:

  • Decouple API consumer from API implementation
    By virtue of decoupling we can move our API implementation around in the future, or introduce versioning without impacting the API consumer
  • Centralised monitoring with Application Insights
    Rich out of the box monitoring capabilities through one-click deployment

In this post we’ll look at fully automating resource creation of a microservice, including the following components:

  • Application Insights instance
    Each App Insights instance has a unique instrumentation key. The ARM template will resolve the key during deployment for Application Insights integration with the Function App Proxy.
  • Logic App
    As mentioned before the input URL will be dynamic, so we’ll need to resolve this during deployment.
  • Function App
    The function app and its hosting plan can be easily created with an ARM template, together with some application settings including the reference to the Logic App backend URL which is determined during deployment time.
  • Function App Proxy
    Last but not least, proxies are defined as part of the application content. The proxies.json file in the wwwroot contains the actual proxy service definition, establishing the connection with the Logic App.
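To give an idea of what such a proxy definition looks like, here's a minimal proxies.json sketch. It assumes the Logic App callback URL (minus the scheme) is held in an app setting named LogicAppBackendUri, which proxies.json can reference with the %...% syntax; the route is illustrative:

```json
{
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
        "LogicAppProxy": {
            "matchCondition": {
                "methods": [ "GET", "POST" ],
                "route": "/api/myservice"
            },
            "backendUri": "https://%LogicAppBackendUri%"
        }
    }
}
```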

ARM deployment

Below is the single ARM template that contains the resource definition for the AppInsights instance, Logic App and Function App. The Logic App contains a simple workflow with HTTP trigger and response, outputting a supplied URL parameter in a JSON message body.

The important bit to point out here is the LogicAppBackendUri setting in the Function App:

[skip(listCallbackURL(concat(resourceId('Microsoft.Logic/workflows/', variables('logicAppName')), '/triggers/manual'), '2016-06-01').value,8)]

This expression strips the 'https://' prefix from the dynamically retrieved Logic App callback URL so we can refer to it from our proxy definition below. In addition it will copy a URL query string parameter to the backend service.
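The skip(..., 8) is simply dropping the first eight characters, i.e. the length of 'https://'. In other words (the URL in the test below is a made-up example):

```python
# skip(value, 8) in ARM template functions drops the first 8 characters
# of a string -- here, the 'https://' scheme prefix of the callback URL,
# so the proxy definition can re-add the scheme itself.
SCHEME = "https://"

def strip_scheme(url):
    # mirrors: [skip(listCallbackURL(...).value, 8)]
    return url[len(SCHEME):]
```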

Deployment

PowerShell

The following PowerShell script (with a little help from Azure CLI) deploys the ARM template, then deploys the Function App proxy by uploading the host.json and proxies.json files to the Function App using Azure CLI. DeployAzureResourceGroup.ps1 is the out-of-the-box script that Visual Studio scaffolds in ARM template projects.

The above PowerShell and Azure CLI scripts are an excellent way of creating your assets from scratch. In addition we’ll show how to use an Azure DevOps pipeline to perform true CI/CD.

Azure DevOps pipeline

With Azure DevOps pipelines we can easily set up a CI/CD pipeline in just three simple steps.

The first step performs an Azure resource group deployment to deploy the Logic App, Function App and AppInsights instance.

Next we’ll package the Function App proxy definition into a zip file.

The last step will deploy the packaged proxy definition to our Function App:

After a successful deployment with either PowerShell or Azure DevOps we can finally test our function app:

Happy days. The above demonstrates how we can utilise Azure to create a very cost effective and neat solution to provide an API and proxy whilst leveraging Application Insights to monitor incoming traffic.

Translating JSON messages with Logic Apps

One of the key components of an integration platform is message translation. The Microsoft Azure iPaaS Logic Apps service offers message translation with the out-of-the-box 'compose' operation. Alternatively, message translation can be achieved with Liquid transforms. The latter requires an Azure Integration account, which comes at additional cost. In this article we'll look at the two transformation options and compare them in terms of cost, performance and usability. For demo purposes I created two Logic Apps with HTTP input triggers and response outputs.

Performance

For the purpose of testing the performance of Logic App execution, I generated a sample JSON payload on https://www.json-generator.com/ containing an array of 1500 objects with a payload of about 1MB.

Although you can increase the compose performance by increasing the degree of parallelism of your for-each loop, the Liquid transform performs significantly better in all cases.

Input payload                  Liquid        Compose
800KB array with 1500 items    1.5 seconds   12 seconds
1.6MB array with 3000 items    3 seconds     22 seconds

Needless to say, if you're working with large arrays of data and performance is a criterion, Liquid maps are the way to go.

Winner: Liquid

Cost

For development scenarios you can use a free tier, which comes without an SLA and limitations around the maximum number of maps (25) and instances (1 account per region).

For production scenarios you have the choice of either a basic or standard account. The main difference between the two is the number of EDI partners and agreements that can be set up, which has no relevance to message translation scenarios. A basic integration account falls just short of AUD $400 a month but allows 500 maps to be created. Execution is still charged at the price of a standard action, so there's no additional execution cost over the JSON compose transformation. In fact, the cost of the JSON compose solution can rapidly increase due to the for-each construct that's needed to iterate through an array. If you're mapping large arrays on a frequent basis you may even come close to justifying the integration account from a cost perspective.

Winner: Compose

Assembly

JSON Compose

Compose mappings can be created directly in the Logic App designer. The compose editor is a bit fiddly and flaky in both the browser and VS2017, and I found myself reverting to the code editor at times. Overall it's fairly easy to construct basic mappings.
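In code view, the compose side of such a mapping is a Compose action sitting inside a For each loop over the input array. A per-item action could look roughly like this (the action and loop names shown are the designer defaults, assumed here):

```json
"Compose": {
    "type": "Compose",
    "inputs": {
        "Id": "@items('For_each')?['_id']",
        "Name": "@items('For_each')?['name']",
        "Email": "@items('For_each')?['email']",
        "Phone": "@items('For_each')?['phone']",
        "Address": "@items('For_each')?['address']",
        "About": "@items('For_each')?['about']"
    }
}
```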

Liquid transforms

Liquid transforms can easily be created in your text editor of choice. Visual Studio Code has some nice extensions that give you snippets or even output preview.

[
{% for item in content %}
    {
        "Id": "{{ item._id }}",
        "Name": "{{ item.name }}",
        "Email": "{{ item.email }}",
        "Phone": "{{ item.phone }}",
        "Address": "{{ item.address }}",
        "About": "{{ item.about }}"
    }{% unless forloop.last %},{% endunless %}
{% endfor %}
]

Winner: tie

Deployment

Liquid transforms are managed and deployed separately from your Logic Apps, greatly enhancing the ability to reuse maps. Although easy to upload or update in the Azure Portal, in CI/CD pipelines you’ll find yourself writing separate PowerShell scripts in addition to your Logic App ARM templates.

Compose transformations are deployed as part of your Logic Apps, and therefore can be deployed as part of the Logic App ARM templates.

Winner: Compose

Conclusion

Whether to use Liquid transforms or Compose largely comes down to your own needs around performance, cost and the mapping complexity required (e.g. nested loops, which can get ugly with Compose). If you require transformations involving XML you've got another strong argument for Liquid maps. When doing enterprise integration at scale there's probably no way around Liquid maps, and with the ability to create up to 500 maps per integration account, the cost can be shared amongst many integrations.

Reducing the size of an Azure Web Role deployment package

If you've been working with Azure Web Roles and deployed them to an Azure subscription, you've likely noticed the substantial size of a simple web role deployment package. Even with the vanilla ASP.NET sample website the deployment package seems quite bloated. This is not such a problem if you have decent upload bandwidth, but in Australia bandwidth is scarce like water in the desert, so let's see if we can compress this deployment package a little bit. We'll also look at the consequences of this large package within the actual Web Role instances, and how we can reduce the footprint of a Web Role application.

To demonstrate the package size I have created a new Azure cloud service project with a standard ASP.NET web role:

Packaging up this Azure Cloud Service project results in a ‘CSPKG’ file and service configuration file:

As you can see the package size for a standard ASPX web role is around 14MB. The CSPKG is created in the ZIP format, and if we have a look inside this package we can have a closer look at what’s actually deployed to our Azure web role:

The ApplicationWebRole_….. file is a ZIP file itself and contains the following:

The approot and sitesroot folders are of significant size, and if we have a closer look, they both contain the complete web role application including all content and DLL files! These contents are copied to the local storage disk within the web role instances. When you're dealing with large web applications this could potentially lead to issues due to the limited local disk space within web role instances, which is around the 1.45 GB mark.

So why do we have these duplicate folders? The approot is used during role start-up by the Windows Azure Host Bootstrapper and could contain a class derived from RoleEntryPoint. In this folder you can also include a start-up script which you can use to perform customisations within the web role environment, such as registering assemblies in the GAC.

The sitesroot contains the actual content that is served by IIS from within the web role instances. If you have defined multiple virtual directories or virtual applications these will also be contained in the sitesroot folder.

So is there any need for all the website content to be packaged up in the approot folder? No, absolutely not. The only reason we have this duplicate content is that the Azure SDK packages up both the approot and sitesroot folders due to the behaviour of the Azure Web Role Bootstrapper.

The solution to this is to tailor the deployment package a little bit and get rid of the redundant web role content. Let’s create a new solution with a brand new web role:

This web role will just hold the RoleEntryPoint-derived class (WebRole.cs), so we can safely remove all other content, NuGet packages and unnecessary referenced assemblies. The web role will not contain any of the web application bits that we want to host in Azure. This results in the StartupWebRole looking like this:

Now we can add the web application that we want to publish to an Azure Web Role to the Visual Studio solution. The key point is to not include this as a role in the Azure Cloud Service project, but to add it as a 'plain web application' to the solution. The only web role we're publishing to Azure is the 'StartupWebRole', and we're going to package up the actual web application in a slightly different way:

The 'MyWebApplication' project does not need to contain a RoleEntryPoint-derived class, since this is already present in the StartupWebRole. Next, we open up the ServiceDefinition.csdef in the Cloud Service project and make some modifications in order to publish our web application alongside the StartupWebRole:

There are a few changes that need to be made:

  1. The name attribute of the Site element is set to the name of the web role containing the actual web application, which is ‘MyWebApplication’ in this instance.
  2. The physicalDirectory attribute is added and refers to the location where the ‘MyWebApplication’ will be published prior to creating the Azure package.
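Put together, the relevant part of the ServiceDefinition.csdef ends up looking something like the sketch below; the service name, endpoint names and publish path are illustrative:

```xml
<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="StartupWebRole" vmsize="Small">
    <Sites>
      <Site name="MyWebApplication" physicalDirectory="..\..\Publish\MyWebApplication">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```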

Although this introduces the additional step of publishing the web application to a separate physical directory first, we immediately notice the reduced size of the deployment package:

When you’re dealing with larger web applications that contain numerous referenced assemblies the savings in size can add up quickly.