Beginning Logic Apps course

I have made two presentations available online at the Integration Monday website.

Part 1: Logic Apps a Beginner’s guide

I made this in April 2017 and it starts from the beginning. There are some prereqs, which I cover in this post. The post is written for VS 2015, but it works for 2017 as well.

This starts from the very basics. If you don’t know JSON, no problem. If you don’t know REST, same thing. If you do not know why you should use Logic Apps, I will explain it to you.

The session is made for you to follow along if you want to; just pause the video if I am moving too fast. Oh, here is the link to the video.

Part 2: Beginning Logic Apps – Part 2

This was recorded on December 4th, 2017 and picks up where the first one finished. I walk you through a lot of concepts in Logic Apps, like loops, variables, basic architecture and much more. This session works entirely in a browser, so you do not need to use Visual Studio.

You can follow along in this session and build your own solutions, but the focus is more to tell you about a specific topic. If you know about that topic you can fast forward to the next.

Here is the link to the video.

Contact me?

My contact info is in the session if you need to ask me something or if I was unclear.


Authenticating an API call to Azure

This one is more for me personally than an attempt to put something new out there. A while back I struggled to get something simple and basic to work. The reason is that there is usually too much information about “options” and things “you have to decide”. I took it upon myself to document the simplest of authentication flows, when authenticating your call to an Azure service.

Note that not all Azure services authenticate this way. Azure Key Vault does its own thing, and so does Azure Storage.

This article is not a full walkthrough, more of a condensed “walk this way”.

The call is a POST to https://login.microsoftonline.com/{TenantID}/oauth2/token and should look like this:

BODY Encoding type: application/x-www-form-urlencoded

Keys and values:
grant_type : client_credentials
client_id : {your azure client ID}
client_secret : {your azure client secret}
resource : {the service you want to access, e.g. https://management.azure.com/}
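For reference, here is a sketch of building that token request in Python. The endpoint URL is the standard Azure AD (v1) token endpoint, and the helper name and default resource are my own assumptions:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret,
                        resource="https://management.azure.com/"):
    """Build the URL and form-encoded body for the Azure AD (v1) token call."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    # Body must be sent with Content-Type: application/x-www-form-urlencoded
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })
    return url, body

url, body = build_token_request("{tenant ID}", "{client ID}", "{client secret}")
```

In a real call you would POST that body to the URL and read access_token out of the JSON response.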

Successful response:

{
    "token_type": "Bearer",
    "expires_in": "3599",
    "ext_expires_in": "0",
    "expires_on": "[numeric value]",
    "not_before": "[numeric value]",
    "resource": "[guid]",
    "access_token": "[loooong secure string]"
}

From postman

The collection can be found here.

What is all this?

Down here, I can fill in some information. Basically you need three things:

  1. The Tenant ID of the subscription you want to access.
  2. The Client ID.
  3. The Client Secret.

Getting the Tenant ID

There are a lot of ways to do this. My favorite way is to use an API call. The call will fail, but the tenant ID can be found in the response headers.

Issue a GET to https://management.azure.com/subscriptions/{AzureSubscriptionID}?api-version=2015-01-01

In the result, look at the headers and find WWW-Authenticate. In the value of that header there is a GUID: that is the tenant ID. The call can be found in the Postman collection I uploaded for this post.
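As a sketch, fishing the GUID out of that header value can be done with a regular expression (the function name and the sample header here are made up for illustration):

```python
import re

# Rough shape of the header on the failed call (made-up GUID):
# WWW-Authenticate: Bearer authorization_uri="https://login.windows.net/<tenant-guid>"
GUID_PATTERN = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

def tenant_from_www_authenticate(header_value):
    """Return the first GUID found in the WWW-Authenticate header value, or None."""
    match = GUID_PATTERN.search(header_value)
    return match.group(0) if match else None

sample = 'Bearer authorization_uri="https://login.windows.net/11111111-2222-3333-4444-555555555555"'
print(tenant_from_www_authenticate(sample))  # 11111111-2222-3333-4444-555555555555
```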

Getting the Client ID

This is a bit hairy, as there are several steps and some concepts you need to understand. The short version is this: you create a “client” in Azure. This “client” is an identity, much like a regular user; the old “service user” might be a good way of describing it. In the end you will have a GUID, and that is the client ID. The best instructions on how to create a client in Azure can be found here.

Getting the Client Secret

This is just a bit further down the same page on how to create a client. Make sure you save the key (the secret) properly.

Full information

If you need more information on how to authenticate an API call, a very good place to start is on the Azure Rest API reference page.


Getting the DateTime from unix epoch in Logic Apps

Relating to my other post on Get unix epoch from DateTime in Logic Apps, there is a much simpler way to calculate a DateTime from a Unix timestamp. There is no real math involved.

All you do is make use of the built in function addToTime.

Here is the expression:
addToTime('1970-01-01', 1508852960, 'second')

So if you receive a JSON body with a tag called UnixTimeStamp, containing the Unix timestamp, it will look like:
addToTime('1970-01-01', int(triggerBody()?['UnixTimeStamp']), 'second')
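Under the hood this is plain date arithmetic. Here is a Python sketch of what the expression computes (the function name is mine):

```python
from datetime import datetime, timedelta

def from_unix(timestamp):
    """Add the Unix timestamp, in seconds, to 1970-01-01, which is the same
    thing addToTime('1970-01-01', timestamp, 'second') does in a Logic App."""
    return datetime(1970, 1, 1) + timedelta(seconds=timestamp)

print(from_unix(1508852960))  # the timestamp used in the expression above
```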

Hope you can make use of it


Get unix epoch from DateTime in Logic Apps

This was posted as a question on the forums a while back. I thought it was a very interesting question, as dates, math and the combination of the two intrigue me.

There is a very easy way to achieve this using C# and Azure Functions:

Int32 unixTimestamp = (Int32)(DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1))).TotalSeconds;

But I wanted to solve it using only Logic Apps functionality or at least see if it was possible.

How to get the value (math)

To make it work we need to use the functionality called “ticks”. Ticks are part of the Windows OS (and .NET): a tick count is a large number meaning “the number of 100-nanosecond intervals that have passed since January 1st, year 1, UTC” (in the western Christian calendar). Unix time is the same idea, but counts “the number of seconds that have passed since January 1st, 1970, UTC”. These constants in time, and their relation to each other, can be used to calculate the value we need.

One second is 10 000 000 ticks.

TTN is the number of ticks from the start until now. TT1970 is the number of ticks from the start until 1970. That constant is 621355968000000000.

The calculation looks like (TTN-TT1970) / 10 000 000.

Calculating the Unix value for “now” (October 24th 2017, 13:29 UTC) looks like:
(636444485531778827 - 621355968000000000) = 15088517531778827
15088517531778827 / 10 000 000 = 1508851753
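The constants and the calculation can be sanity-checked in Python (a sketch; the names are mine, and Python’s datetime conveniently starts at year 1, just like .NET ticks):

```python
from datetime import datetime

TICKS_PER_SECOND = 10_000_000

# Ticks from 0001-01-01 to 1970-01-01: the TT1970 constant from the text.
TICKS_TO_1970 = int((datetime(1970, 1, 1) - datetime(1, 1, 1)).total_seconds()) * TICKS_PER_SECOND

def unix_from_ticks(ticks_now):
    """(TTN - TT1970) / 10 000 000, using integer division."""
    return (ticks_now - TICKS_TO_1970) // TICKS_PER_SECOND

print(TICKS_TO_1970)  # 621355968000000000
print(unix_from_ticks(636444485531778827))
```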

How to get the value (Logic App)

  1. Set up a new Logic App that can be triggered easily. I usually use an HTTP Request / Response.
  2. You need two variables, so create two “Initialize variable” actions.
  3. Name the first TicksTo1970, set the type to Integer and set the value to ticks('1970-01-01').
  4. Name the second TicksToNow, set the type to Integer and set the value to ticks(utcNow()).
  5. Now you are ready to do the calculation. If you have used a Request / Response, set the Response Body to div(sub(variables('TicksToNow'),variables('TicksTo1970')),10000000).
  6. Save your flow, execute it to receive the value, and validate it against https://www.unixtimestamp.com/
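If you prefer not to use variables at all, the whole calculation can be collapsed into a single expression built from the same functions (a sketch; it should be equivalent to the Response Body in step 5):

```
div(sub(ticks(utcNow()), ticks('1970-01-01')), 10000000)
```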

The flow can be downloaded here.


Use of a saved Azure login in PowerShell


Using PowerShell to make things happen in Azure is awesome. Personally, I put together little snippets based on one or two simple actions, like “start the development VM” or “disable the polling Logic App”. There is one downside to this, though, and that is the constant barrage of login windows. You might not succeed with your new script the first time, and after updating it you have to log in again … this makes Mikael a dull boy.

You can work around this by saving the credentials to disk using PowerShell. Here is how you do it.


There is a PowerShell command that saves your credentials locally in a JSON file. You can then use that file to log in. Of course, you need to make sure you protect that file.

Simply execute the command and point it to the path where you want the file saved (cmdlet names here are from the AzureRM module of the time):

Save-AzureRmContext -Path "C:\MySecretFolder\azurecredentials.json"

This creates a JSON file that contains all the information you need to log in to your Azure subscription. Take a look after you have saved it and you will see that it contains a veeeery long key.


Now it is time to use the saved credential, and that is very easy as well.

Here is a script that uses a saved credential and starts a virtual machine (cmdlet names from the AzureRM module):

Import-AzureRmContext -Path "C:\MySecretFolder\azurecredentials.json"
Select-AzureRmSubscription -SubscriptionId "<subscription GUID>"
Start-AzureRmVM -Name "Devmachine" -ResourceGroupName "Devmachine"

Looking through the script, the first line does the actual logging in. Then a subscription is selected; if you only have one subscription you can skip this step. Then the machine is started.

The magic is really in the first two rows, and these are the two rows I reuse in all my short action-focused scripts.


Testing the Azure Eventgrid response time

What is Azure Eventgrid

Azure Eventgrid is a new technology in Azure aimed at connecting different applications, much like other integration technologies such as Azure Service Bus or Logic Apps. However, Eventgrid wants to turn traditional integration patterns on their head. Traditionally, you poll for data until data arrives. An event-based model is the other way around: you do not poll, you wait for another system to send you an event. That event might contain all the necessary data, or just enough for you to ask for the new data. An example might be in order.

Say you have a database that does some heavy number-crunching. You need the crunched data. The database exposes a service (or a stored procedure) for you to get the data once it’s done. In a traditional integration you would poll that service once every x minutes to get the data as soon as possible. In an event based integration, the database sends an event to the Eventgrid telling you that the number crunching is done. That event tells you to get the data. No polling is needed.

This is not new. It can be done using a simple Logic App that the database calls instead, to send the event. So why use Azure Eventgrid? Logic Apps can do so much more and are therefore not as cheap. They might not even be quick enough, and you might need to handle a lot of events with very low latency. This is where Eventgrid fits in.

For more information about Eventgrid, including very nice usable demos and capabilities like routing and filtering, read this post by Eldert Grootenboer.

What kind of performance?

What do I want out of Eventgrid? I would like it to forward events quickly, without added latency, and to react quickly to me sending an event even if there are long periods of inactivity between events. I decided to test this.

I would like the response time and forwarding time to be “short enough” and consistent. Not like “half a second, 2 seconds, half a second, one minute”.

The test design

First, a short disclaimer: the Eventgrid service is in preview, which means that response times and availability are not covered by any SLA. The test is not meant to measure maximum speed but to find out whether Azure Eventgrid has consistent response times.

Here is a picture of the communication architecture:

The flow

A command line C# program, running on an Azure VM, sends custom events to the Eventgrid. The Eventgrid forwards the events (using a subscription) to an Azure Function that writes timestamp information into a database. All resources are in West US 2. Both timestamps use UTC to negate any time zone problems.

The sending application

The C# program worked like a product database might: when a product is changed, an event is sent. The program waited a random number of seconds between sends, to simulate my imagined workload; people are not consistent. The program sent 5 messages every 1 to 600 seconds.

The message consisted of a light data body and I used the eventTime property to mark the start time of the flow.

The Azure Function

To make sure the function would not be the bottleneck, I used the App Service Plan option and scaled it to two instances. The function code was written in csx (not compiled). It simply received the event message, read the starting timestamp, added its own timestamp to act as “time received”, and then saved the result to the Azure SQL Server database.
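The original function was csx, but its core logic can be sketched in Python (the names and event shape here are my assumptions, and the database write is left out):

```python
from datetime import datetime, timezone

def to_db_row(event):
    """Turn an Eventgrid event into the (EventSentTime, EventReceivedTime)
    pair that gets written to the database."""
    # eventTime marks the start of the flow; it arrives as an ISO 8601 string.
    sent = datetime.fromisoformat(event["eventTime"].replace("Z", "+00:00"))
    received = datetime.now(timezone.utc)  # the "time received" stamp
    return sent, received

event = {"eventTime": "2017-09-13T07:31:00Z", "data": {"productId": 42}}
sent, received = to_db_row(event)
print(sent)
```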

If you think this might be inefficient I can say that when I did initial bulk testing (200+ messages per second) I flooded the Azure SQL Server database, but the Azure Functions were fine.

The database

It was a simple Azure SQL database with a simple table consisting of three columns: ID, EventSentTime and EventReceivedTime.

Test Execution

The test ran between 2017-09-13 07:31 UTC and 2017-09-13 10:04 UTC. During that time a total of 110 events were sent, on a total of 24 occasions.

The test results

The overall results are good! The Eventgrid lives up to my expectations of quickly responding and sending messages even after long periods of inactivity.

Timestamp trouble

Sadly, the timestamps did not line up. Due to different clocks on the VM and in Azure Functions I got negative latencies, as low as -240 milliseconds (ms). Coupled with a maximum time of 1304 ms, the results do not lend themselves to statistics.

In conclusion

Even with the timestamp trouble, there is a clear pattern: the reaction times are quick (the whole flow took about 500 ms to execute after a longer period of inactivity) and consistent, exactly what I wanted out of Azure Eventgrid. I am looking forward to being able to use this technology in production.

Further study

I would like to try running more instances of the messaging program.


Timeout and parallel branches in Logic Apps

Back when I was a BizTalk developer I used something called sequential convoys. Two main features had to be implemented to use that pattern: a timeout shape and a parallel branch. The flow either received a new message or it “timed out”, executing some other logic, perhaps sending an aggregated batch to a database.

Looking at Logic Apps, the same pattern does not match 100%, but there are still very good uses for parallel actions and clever use of the delay action.

Can a Logic App timeout?

The question is quite fair: How can we get a behavior that makes the Logic App send back a timeout if a run does not complete within a given time frame? In order to illustrate this I have set up a Logic App that takes inspiration from the sequential convoy pattern:

  1. Receives a request
  2. Starts a delay (the timeout) in one branch.
  3. Starts a call to an external service in the other branch.
  4. If the service responds back before the delay (timeout) is done, HTTP 200 is sent back.
  5. If the service does not respond back in time, HTTP 504 (Gateway timeout) is sent back.

For demo purposes I have added another delay action to the “call the service” branch, to make the call take too long and trigger the timeout.
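The same race between a timeout branch and a service call can be sketched outside Logic Apps, for example in Python with a future whose result is awaited with a timeout (the service functions and durations are made up):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def call_with_timeout(service, timeout_seconds):
    """Run the service call in one 'branch' and a timeout in the other;
    whichever finishes first decides the response code."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(service)
        try:
            return 200, future.result(timeout=timeout_seconds)  # service won
        except TimeoutError:
            return 504, None  # Gateway timeout: the delay branch won

fast = lambda: "done"
slow = lambda: (time.sleep(0.5), "done")[-1]  # simulates the extra delay action

print(call_with_timeout(fast, 1.0))   # (200, 'done')
print(call_with_timeout(slow, 0.1))   # (504, None)
```

Note that, just like in the Logic App, the slow service call still runs to completion even after the timeout response has been chosen.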

The Logic App

If the TimeoutResponse triggers, the Logic App engine will recognize this and will not try to send the other response; that action will be skipped. If you execute the Logic App and this happens, the run will be marked as “Failed” in the run history, which in turn points to a timeout.

The json for the Logic App can be downloaded here.

Some caveats and perhaps better solutions

Note that in this Logic App, the HTTP call to the service will still be executed even if TimeoutResponse executes. That can be fixed using the Terminate action.

Lastly, you should also think about why you need to implement a timeout in a Logic App. Can’t the calling client set a timeout on their end? If not, why? Can this be solved in some other way? If there is a risk of timing out, can you rebuild the messaging paths in a more async (webhook) manner? One call from the client starts the process, and another Logic App sends the result when processing is done.


I might find scenarios where this is very useful. I have yet to find one, but it is nice to know that it’s there and how it behaves.


Remove DTA orphans in BizTalk

Standard disclaimer: This is officially not supported and you should never update a BizTalk database using T-SQL *wink* *nudge*

What are orphans?

When tracking is performed in BizTalk, it happens in several steps. The first thing that happens is that a row is created in the DTA database, in the dta_ServiceInstances table to be specific. There is a bug in BizTalk that can make the “DTA Purge and Archive” job unable to delete the row. This creates an orphan, and if it happens a lot the table fills with junk data.

But why?

When tracking starts, the current date and time is placed in the column dta_ServiceInstances.dtStartTime. In the same row there is a column called dtEndTime; when tracking starts, this column gets a null value. When tracking completes, the column is updated with the date and time of completion. If this does not happen, the job will not remove the row, as it is considered active.

How to find out how many orphans there are

You are running BHM (BizTalk Health Monitor), so you know how many there are, right? But how many are too many? If your queries using the BizTalk Administration Console time out, there are too many.

Here is another way to find out how many there are, using SQL.


SELECT
    count(*) as 'NoOfOrphans'
FROM
    [BizTalkDTADb].[dbo].[dta_ServiceInstances] WITH (NOLOCK)
WHERE
    dtEndTime is NULL and [uidServiceInstanceId] NOT IN
    (
        SELECT
            [uidInstanceID]
        FROM
            [BizTalkMsgBoxDb].[dbo].[Instances] WITH (NOLOCK)
        UNION
        SELECT
            [uidInstanceID]
        FROM
            [BizTalkMsgBoxDb].[dbo].[TrackingData] WITH (NOLOCK)
    )
This query clearly shows the number of unmarked instances but also matches the result to what instances are still active in the MessageBox.

If you want more information on each instance, replace count(*) with a simple * in the first row of the SQL script. If you do this, you can easily see that the data has a dtStartTime but no dtEndTime.

How do I remove them?

BizTalk Terminator tool

This is the supported way to remove the data. There is one very important caveat to using the tool: you must completely stop the environment. If that is not an option, you can run the script the Terminator tool executes yourself.

T-SQL Script

A very simple script that updates the table by setting an end time for all orphans, making it possible for the purge job to delete them.


USE [BizTalkDTADb]

BEGIN TRAN

UPDATE
    [dbo].[dta_ServiceInstances]
SET
    [dtEndTime] = GetUTCDate()
WHERE
    dtEndTime is NULL
AND
    [uidServiceInstanceId] NOT IN
    (
        SELECT
            [uidInstanceID]
        FROM
            BizTalkMsgBoxDb.[dbo].[Instances] WITH (NOLOCK)
        UNION
        SELECT
            [uidInstanceID]
        FROM
            BizTalkMsgBoxDb.[dbo].[TrackingData] WITH (NOLOCK)
    )

-- If it works: uncomment and run this row.
-- Commit tran

-- If it does NOT work: uncomment and run this row.
-- Rollback tran

The script will handle the exact same rows as in the first query. In order to make this update behave in the best way, use a transaction by following these steps:

  1. Run the first script that gets the number of rows, note the result.
  2. Run the second script (make sure to include the BEGIN TRAN at the start).
  3. Note the number of rows affected.
  4. If the numbers match up the script has run correctly, uncomment and run the COMMIT TRAN row.
  5. If the numbers do not match up, something went wrong. Uncomment and run the ROLLBACK TRAN row to cancel the transaction.

NOTE! It is very important to run the COMMIT/ROLLBACK in the same query window as the main script.

The purge job

The next time the purge job runs, the number of orphans should decrease. Run the first script to make sure.



Logic App for testing return codes

Return codes are very useful when communicating between services, such as Azure Functions and Logic Apps. I found that, in some cases, testing different return codes and their behavior in a Logic App can be tedious, as you need to update and redeploy code. Therefore, I made a little Logic App that sends back a response with the desired return code. A very basic mocking service, basically.

If you need to check how a 200 OK works with your workflow, you call it and ask it to return a 200.

The Logic App

The request simply takes in a POST with an “EchoCode”, like:

{ "EchoCode": "200" }

The response part is a little trickier, as the designer only allows you to set strings and “Status Code” is an integer. It is not hard though: simply enter @int(triggerBody()?['EchoCode']) and it will convert the “EchoCode” from a string to an integer. I did it using the Expression Editor.

So if you send in a 429, the Logic App will respond with “429 – Too many requests”. If you send in a 202, it will respond with “202 – Accepted”.
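Outside of Logic Apps, what the mock does is tiny. Here is a Python sketch (the function name is mine; the reason phrases come from Python’s standard HTTP status table):

```python
from http.client import responses  # maps status codes to reason phrases

def echo_status(body):
    """Return the status code the caller asked for, plus its reason phrase."""
    code = int(body["EchoCode"])  # EchoCode arrives as a string
    return code, responses.get(code, "Unknown")

print(echo_status({"EchoCode": "429"}))  # (429, 'Too Many Requests')
print(echo_status({"EchoCode": "202"}))  # (202, 'Accepted')
```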

The code

Here is the json for the Logic App. I hope you find it as useful as I did.