Getting the DateTime from unix epoch in Logic Apps

Related to my other post, Get unix epoch from DateTime in Logic Apps, there is a much simpler way to calculate a DateTime from a Unix timestamp. There is not even any real math involved.

All you do is make use of the built-in function addToTime.

Here is the expression:
addToTime('1970-01-01', 1508852960, 'second')

So if you receive a JSON body with a property called UnixTimeStamp, containing the Unix timestamp, the expression will look like this:
addToTime('1970-01-01', int(triggerBody()?['UnixTimeStamp']), 'second')
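If you want to sanity-check the conversion outside of Logic Apps, the built-in DateTimeOffset helper in C# gives the same answer for the example timestamp above:

// Sanity check: convert the example Unix timestamp to a DateTime in C#.
using System;

class UnixToDateTimeCheck
{
    static void Main()
    {
        long unixTimestamp = 1508852960; // same value as in the expression above
        DateTimeOffset converted = DateTimeOffset.FromUnixTimeSeconds(unixTimestamp);
        Console.WriteLine(converted.UtcDateTime.ToString("yyyy-MM-dd HH:mm:ss")); // 2017-10-24 13:49:20
    }
}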

Hope you can make use of it.

Get unix epoch from DateTime in Logic Apps

This was posted as a question on the forums a while back. I thought it was a very interesting question, as dates, math, and the combination of the two intrigue me.

There is a very easy way to achieve this using C# and Azure Functions:

Int32 unixTimestamp = (Int32)(DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1))).TotalSeconds;
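If you are on .NET Framework 4.6 or later (or .NET Core), DateTimeOffset has a built-in helper that gives the same result without the manual subtraction:

long unixTimestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds();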

But I wanted to solve it using only Logic Apps functionality or at least see if it was possible.

How to get the value (math)

To make it work we need to use the functionality called “ticks”. Ticks are part of the Windows OS and .NET: a tick value is a large number that means “the number of 100-nanosecond intervals that have passed since January 1st of year 1 (0001-01-01) UTC” in the Gregorian calendar. Unix time works the same way but is “the number of seconds that have passed since January 1st 1970 UTC”. These fixed points in time, and their relation to each other, can be used to calculate the value we need.

One second is 10 000 000 ticks.

TTN is the number of ticks from the start (0001-01-01) until now. TT1970 is the number of ticks from the start until 1970-01-01; this constant is 621355968000000000.

The calculation looks like (TTN-TT1970) / 10 000 000.

Calculating the Unix value for “Now” (October 24th 2017, 13:29 UTC) looks like this:
(636444485531778827 - 621355968000000000) = 15088517531778827
15088517531778827 / 10 000 000 = 1508851753
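If you want to double-check the math, a small C# sketch confirms both the TT1970 constant and the calculation:

// Verify the ticks math: ticks to 1970, ticks to now, and the resulting Unix timestamp.
using System;

class TicksToUnixCheck
{
    static void Main()
    {
        long ticksTo1970 = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
        Console.WriteLine(ticksTo1970); // 621355968000000000 (TT1970)

        long ticksToNow = DateTime.UtcNow.Ticks; // TTN
        long unixTimestamp = (ticksToNow - ticksTo1970) / 10000000;
        Console.WriteLine(unixTimestamp); // e.g. 1508851753 on October 24th 2017, 13:29 UTC
    }
}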

How to get the value (Logic App)

  1. Set up a new Logic App that can be triggered easily. I usually use an HTTP Request / Response.
  2. You need two variables, so create two “Initialize variable” actions.
  3. Name the first TicksTo1970, set the type to Integer and set the value to ticks('1970-01-01').
  4. Name the second TicksToNow, set the type to Integer and set the value to ticks(utcNow()).
  5. Now you are ready to do the calculation. If you have used a Request / Response, set the Response Body to div(sub(variables('TicksToNow'),variables('TicksTo1970')),10000000)
  6. Save your flow, execute it to receive the value and validate it against https://www.unixtimestamp.com/

The flow can be downloaded here.

Use of a saved Azure login in PowerShell

Introduction

Using PowerShell to make things happen in Azure is awesome. Personally, I put together small little snippets based on one or two simple actions, like “start the development VM” or “disable the polling Logic App”. There is one downside to this though, and that is the constant barrage of login windows. You might not succeed with your new script the first time, and after updating it you have to log in again … this makes Mikael a dull boy.

You can work around this by saving the credentials to disk using PowerShell. Here is how you do it.

Save-AzureRmProfile

There is a PowerShell command that saves your key locally in a JSON file. You can then use that file to log in. Of course, you need to make sure you protect that key.

Simply execute the command and point to a path where you want to save it.

Save-AzureRmProfile -Path "c:\MySecretFolder\azurecredentials.json"

This creates a JSON file that contains all the information you need to log in to your Azure subscription. Take a look after you have saved it to see that it contains a veeeery long key.

Select-AzureRmProfile

Now it is time to use the saved credential and that is very easy as well.

Here is a script that makes use of a saved credential and starts a virtual machine:

Select-AzureRmProfile -Path "C:\MySecretFolder\azurecredentials.json"

Select-AzureRmSubscription -SubscriptionId "<subscription GUID>"

Start-AzureRmVM -Name "Devmachine" -ResourceGroupName "Devmachine"

Looking through the script, the first line does the actual logging in. Then a subscription is selected; if you only have one subscription you can skip this step. Then the machine is started.

The magic is really in the first two rows, and these are the two rows I reuse in all my short action-focused scripts.

Testing the Azure Eventgrid response time

What is Azure Eventgrid

Azure Eventgrid is a new technology in Azure, aimed at connecting different applications, much like other integration technologies such as Azure Service Bus or Logic Apps. However, Eventgrid wants to turn the traditional integration pattern on its head. Traditionally, you poll for data until data arrives. An event-based model is the other way around: you do not poll, you wait for another system to send you an event. That event might contain all the necessary data, or just enough for you to ask for the new data. An example might be in order.

Say you have a database that does some heavy number-crunching. You need the crunched data. The database exposes a service (or a stored procedure) for you to get the data once it’s done. In a traditional integration you would poll that service once every x minutes to get the data as soon as possible. In an event based integration, the database sends an event to the Eventgrid telling you that the number crunching is done. That event tells you to get the data. No polling is needed.

This is not new. It can be done using a simple Logic App that the database calls to send the event. So why use Azure Eventgrid? Logic Apps can do so much more and is therefore not as cheap. It might not even be quick enough, and you might need to handle a lot of events with very low latency. This is where Eventgrid fits in.

For more information about Eventgrid, including very nice usable demos and capabilities like routing and filtering, read this post by Eldert Grootenboer.

What kind of performance?

What do I want out of Eventgrid? I would like it to react quickly and forward events without added latency, even if there are long periods of inactivity between events. I decided to test this: does Azure Eventgrid have the consistency I am looking for?

I would like the response time and forwarding time to be “short enough” and consistent. Not like “half a second, 2 seconds, half a second, one minute”.

The test design

First a short disclaimer: the Eventgrid service is in preview, which means that response times and availability are not covered by any SLA. The test is not meant to focus on getting the maximum speed, but to find out whether Azure Eventgrid has consistent response times.

Here is a picture of the communication architecture:

The flow

A command-line C# program, running on an Azure VM, sends custom events to the Eventgrid. The Eventgrid forwards the events (using a subscription) to an Azure Function that writes timestamp information into a database. All resources are in West US 2. Both timestamps are in UTC to negate any time zone problems.

The sending application

The C# program behaved like a product database might: when a product is changed, an event is sent. To simulate my imagined workload, the program waited a random number of seconds between sends; people are not consistent. It sent 5 messages every 1 to 600 seconds.

The message consisted of a light data body and I used the eventTime property to mark the start time of the flow.
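The sender itself is not included in the post, but a minimal sketch of what it could look like is shown below. The topic endpoint and the aeg-sas-key value are placeholders, the product payload is made up, and Newtonsoft.Json is used for serialization:

// Sketch of a custom-event sender. Endpoint, key and payload are placeholders.
using System;
using System.Net.Http;
using System.Text;
using Newtonsoft.Json;

class EventSender
{
    static void Main()
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("aeg-sas-key", "<topic key goes here>");

        var events = new[]
        {
            new
            {
                id = Guid.NewGuid().ToString(),
                subject = "products/42",
                eventType = "productUpdated",
                eventTime = DateTime.UtcNow.ToString("o"), // marks the start time of the flow
                data = new { productId = 42 },
                dataVersion = "1.0"
            }
        };

        var content = new StringContent(JsonConvert.SerializeObject(events), Encoding.UTF8, "application/json");
        var response = client.PostAsync("https://<your-topic-endpoint>/api/events", content).GetAwaiter().GetResult();
        Console.WriteLine(response.StatusCode);
    }
}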

The Azure Function

To make sure the function would not be the bottleneck, I used the App Service plan option and scaled it to two instances. The function code was written in csx (not compiled) and simply received the event message, read the starting timestamp, added its own timestamp to act as “time received” and then saved both to the Azure SQL database.

If you think this might be inefficient: when I did initial bulk testing (200+ messages per second) I flooded the Azure SQL database, but the Azure Functions were fine.
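The function code is not listed here either, but a rough csx sketch of the idea could look like this, assuming a hypothetical Timings table (matching the columns described below) and a TimingDb connection string setting, and leaving out the Eventgrid subscription validation handshake:

#r "System.Data"

using System.Net;
using System.Data.SqlClient;
using Newtonsoft.Json.Linq;

// Rough sketch: read eventTime from each delivered event and store send/receive times in SQL.
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    var body = await req.Content.ReadAsStringAsync();
    var events = JArray.Parse(body); // Eventgrid delivers a JSON array of events

    foreach (var ev in events)
    {
        DateTime eventSentTime = (DateTime)ev["eventTime"]; // set by the sender
        DateTime eventReceivedTime = DateTime.UtcNow;       // "time received"

        // Table and connection string names are assumptions; ID is assumed to be an identity column.
        using (var conn = new SqlConnection(Environment.GetEnvironmentVariable("TimingDb")))
        using (var cmd = new SqlCommand(
            "INSERT INTO Timings (EventSentTime, EventReceivedTime) VALUES (@sent, @received)", conn))
        {
            cmd.Parameters.AddWithValue("@sent", eventSentTime);
            cmd.Parameters.AddWithValue("@received", eventReceivedTime);
            await conn.OpenAsync();
            await cmd.ExecuteNonQueryAsync();
        }
    }

    log.Info($"Stored timestamps for {events.Count} event(s)");
    return req.CreateResponse(HttpStatusCode.OK);
}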

The database

It was a simple Azure SQL database with a simple table consisting of three columns: ID, EventSentTime and EventReceivedTime.

Test Execution

The test ran between 2017-09-13 07:31 UTC and 2017-09-13 10:04 UTC. During that time a total of 110 events were sent on 24 occasions.

The test results

The overall results are good! The Eventgrid lives up to my expectations of quickly responding and sending messages even after long periods of inactivity.

Timestamp trouble

Sadly, the timestamps did not line up. Due to different clocks on the VM and in Azure Functions I got negative numbers, as low as -240 milliseconds (ms). This, coupled with a maximum time of 1304 ms, means the results do not lend themselves to statistics.

In conclusion

Even with the timestamp trouble, there is a clear pattern: The reaction times are quick (the whole flow took about 500 ms to execute after a longer period of inactivity), and consistent, exactly what I wanted out of Azure Eventgrid. I am looking forward to being able to use this technology in production.

Further study

One thing I would like to try is running more instances of the messaging program.

Timeout and parallel branches in Logic Apps

Back when I was a BizTalk developer I used something called sequential convoys. Two main features that had to be implemented to use that pattern were a timeout shape and a parallel branch. The flow either received a new message or it “timed out”, executing some other logic, perhaps sending an aggregated batch to a database.

Looking at Logic Apps, the same pattern does not map 100%, but there are still very good uses for parallel actions and clever use of the Delay action.

Can a Logic App time out?

The question is quite fair: How can we get a behavior that makes the Logic App send back a timeout if a run does not complete within a given time frame? In order to illustrate this I have set up a Logic App that takes inspiration from the sequential convoy pattern:

  1. Receives a request
  2. Starts a delay (the timeout) in one branch.
  3. Starts a call to an external service in the other branch.
  4. If the service responds back before the delay (timeout) is done, HTTP 200 is sent back.
  5. If the service does not respond back in time, HTTP 504 (Gateway timeout) is sent back.

For demo reasons I have added another delay shape to the “call the service”-branch to make the call take too long, and to trigger the timeout.

The Logic App

If the TimeoutResponse triggers, the Logic App engine will recognize this and it will not try to send the other response. That action will be skipped. If you execute the Logic App and this happens the run will be marked as “Failed” in the run history, which then in turn points to a timeout.

The json for the Logic App can be downloaded here.

Some caveats and perhaps better solutions

Note that in this Logic App, the HTTP call to the service will still be executed, even if TimeoutResponse executes. That can be fixed using the Terminate action.

You should also think about why you need to implement a timeout in a Logic App. Can’t the calling client set a timeout on its end? If not, why? Can this be solved in some other way? If there is a risk of timing out, can you rebuild the messaging paths in a more asynchronous (webhook) manner? One call from the client starts the process and another Logic App sends the result when the processing is done.

Lastly

I might find scenarios where this is very useful. I have yet to find one, but it is nice to know that the option is there and how it behaves.

Remove DTA orphans in BizTalk

Standard disclaimer: This is officially not supported and you should never update a BizTalk database using T-SQL *wink* *nudge*

What are orphans?

When tracking is performed in BizTalk, it happens in several steps. The first thing that happens is that a row is created in the DTA database, the dta_ServiceInstances table to be specific. There is a bug in BizTalk that makes the “DTA Purge and Archive” job unable to delete the row. This creates an orphan and if this happens a lot the table will be filled with junk data.

But why?

When tracking starts the current date and time is placed in column dta_ServiceInstances.dtStartTime. In the same row, there is a column called dtEndTime. When tracking starts, this column gets a null-value. When tracking completes, the column is updated and the date and time of completion is set. If this does not happen, the job will not remove the row as it is considered active.

How to find out how many orphans there are

You are running BHM (BizTalk Health Monitor), so you know how many there are, right? But how many are too many? If your queries using the BizTalk Administration Console time out, then there are too many.

Here is another way to find out how many there are, using SQL.

select
    count(*) as 'NoOfOrphans'
from
    [BizTalkDTAdb].[dbo].[dta_ServiceInstances]
where
    dtEndTime is NULL and [uidServiceInstanceId] NOT IN
    (
    SELECT
        [uidInstanceID]
    FROM
        [BizTalkMsgBoxDb].[dbo].[Instances] WITH (NOLOCK)
    UNION
    SELECT
        [StreamID]
    FROM
        [BizTalkMsgBoxDb].[dbo].[TrackingData] WITH (NOLOCK)
    )

This query clearly shows the number of unmarked instances, while excluding instances that are still active in the MessageBox.

If you want more information on each instance, replace count(*) with a simple * at the top of the SQL script. If you do this, you can easily see that the data has a dtStartTime but no dtEndTime.

How do I remove them?

BizTalk Terminator tool

This is the supported way to remove the data. There is a very important caveat to using the tool: you must completely stop the environment. If that is not an option, you can run the script executed by the Terminator tool yourself.

T-SQL Script

A very simple script that will update the table by setting an end time for all orphans, making it possible for the purge job to delete them.

USE [BizTalkDTAdb]

BEGIN TRAN

UPDATE
    [dbo].[dta_ServiceInstances]
SET
    [dtEndTime] = GetUTCDate()
WHERE
    dtEndTime is NULL
    AND
    [uidServiceInstanceId] NOT IN
    (
    SELECT
        [uidInstanceID]
    FROM
        BizTalkMsgBoxDb.[dbo].[Instances] WITH (NOLOCK)
    UNION
    SELECT
        [StreamID]
    FROM
        BizTalkMsgBoxDb.[dbo].[TrackingData] WITH (NOLOCK)
    )

-- If it works: uncomment and run this row.
-- COMMIT TRAN

-- If it does NOT work: uncomment and run this row.
-- ROLLBACK TRAN

The script will handle the exact same rows as in the first query. In order to make this update behave in the best way, use a transaction by following these steps:

  1. Run the first script that gets the number of rows, note the result.
  2. Run the second script (make sure to include the BEGIN TRAN at the start).
  3. Note the number of rows affected.
  4. If the numbers match up the script has run correctly, uncomment and run the COMMIT TRAN row.
  5. If the numbers do not match up, something went wrong. Uncomment and run the ROLLBACK TRAN row to cancel the transaction.

NOTE! It is very important to run the COMMIT/ROLLBACK in the same query window as the main script.

The purge job

The next time the purge job runs, the number of orphans should decrease. Run the first script to make sure.

 

Logic App for testing return codes

Return codes are very useful when communicating between services, such as Azure Functions and Logic Apps. I found that, in some cases, testing different return codes and their behavior in a Logic App can be boring, as you need to update and redeploy code. Therefore, I made a little Logic App that sends a response with the desired return code. A very basic mocking service, in other words.

If you need to check how a 200 OK works with your workflow, you call it and ask it to return a 200.

The Logic App

The request simply takes in a POST with an “EchoCode” property, like:

{"EchoCode":429}

The response part is a little trickier, as the designer only allows you to set strings and “Status Code” is an integer. It is not hard though: simply enter @int(triggerBody()?['EchoCode']) and it will convert the “EchoCode” value from a string to an integer. I did it using the Expression Editor.

So if you send in a 429, the Logic App will respond with “429 - Too Many Requests”. If you send in a 202, the app will respond with “202 - Accepted”.
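To show how the mock can be used from code, here is a small hypothetical C# caller; the URL is a placeholder for the callback URL generated by the Request trigger:

// Call the mock Logic App and print the status code it echoes back.
using System;
using System.Net.Http;
using System.Text;

class EchoCodeTester
{
    static void Main()
    {
        var client = new HttpClient();
        var body = new StringContent("{\"EchoCode\":429}", Encoding.UTF8, "application/json");

        // Placeholder URL - replace with the URL from the Request trigger.
        var response = client.PostAsync("https://<your-logic-app-url>", body).GetAwaiter().GetResult();

        Console.WriteLine((int)response.StatusCode); // 429
    }
}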

The code

Here is the json for the Logic App. I hope you find it as useful as I did.

Securing passwords in Logic Apps

The history

In the beginning, storing passwords, usernames and so on in a Logic App was not very secure. You had to store the value in clear text in the Logic App “code behind”, or do some magic using a file on a blob storage and a function. That was before the wonderful service called KeyVault.

If you have experience with BizTalk, you can view a KeyVault much the same way as the SSODB: an encrypted place to store keys and values. You are not limited to just storing user credentials; you can store other things like endpoint addresses, settings and certificates, and even access it using REST, but this post focuses on using the KeyVault as storage for user credentials. More information about the KeyVault service can be found here, and here is how to get started.

NOTE: You do not need any KeyVault preconfigured. The instructions below will set up a basic one.

The scenario

You are calling an API that uses Basic Auth (aka username and password). The authentication is therefore per call and must be supplied every time the service is called. You do not want to store the username and password in clear text in the Logic App.

The Logic App is only a wrapper for the call to the service (for demo purposes).

You are using Visual Studio to develop and deploy your Logic App.

NOTE: It is possible to achieve the same result without using Visual Studio but this post does not cover that.

The Logic App without KeyVault


As you can see, the password and username are clearly visible to anyone accessing the Logic App, so any Logic App contributor can access them. The JSON for it can be downloaded here. Just paste it into a new project, replacing the old text in the LogicApp.json file.

This is how you do it

Logic Apps tooling in Visual Studio comes prepared to use KeyVault; the only tricky part is adding parameters that will make the tooling use it. We are going to make use of this together with the Logic Apps ARM template in Visual Studio. There is a lot of text here, but I am sure that if you do this once you will fully understand it. Take your time to get it right.

Add parameters

Open the Logic App as JSON and scroll to the top. The Logic App always starts with a parameter for the Azure Resource Manager: the name of the logic app. Here we will add two new parameters: ExternalSupplierUsr and ExternalSupplierPwd. Add the following JSON before the first parameter.

"ExternalSupplierUsr": { "type": "securestring" },
"ExternalSupplierPwd": { "type": "securestring" },

Note the type: securestring. This will tell the tooling that we would like to use the KeyVault.

The updated JSON file can be downloaded here.

Configure the parameters file and create the KeyVault

Next we need to make room for the keys. Save the Logic App, right-click the project in the Solution Explorer, choose Deploy and then the name of your project. The usual dialog appears. Fill it out and then click the Edit Parameters button. The new dialog should look something like this:

See those little keys to the right? These are shown because we used the securestring type. Click the top one.

If you already have a KeyVault, you can use that. Let’s create a new one.

Click the link saying Create KeyVault using PowerShell. This will point you to this page on github.

Open the PowerShell ISE and copy and paste the PowerShell code into it. Update the code to your needs. I will create a KeyVault called SuperSecretVault in West Europe, in its own resource group. I highly recommend this: use a separate resource group for your KeyVaults.

My finished script will look like this:

#Requires -Module AzureRM.Profile
#Requires -Module AzureRM.KeyVault

#Login and select the default subscription if needed
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName 'your subscription name goes here'

#Change the values below before running the script
$VaultName = 'SuperSecretVault'             #Globally unique name of the KeyVault
$VaultLocation = 'West Europe'              #Location of the KeyVault
$ResourceGroupName = 'KeyVaultGroup'        #Name of the resource group for the vault
$ResourceGroupLocation = 'West Europe'      #Location of the resource group if it needs to be created

New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Force
New-AzureRmKeyVault -VaultName $VaultName -ResourceGroupName $ResourceGroupName -Location $VaultLocation -EnabledForTemplateDeployment

Execute it and wait.

Use the KeyVault

Now go back to Visual Studio. You have to close down all dialogs except the first one. Click the little key icon again, next to the parameter called ExternalSupplierUsr

Select your subscription, select your vault and choose <Create New>

Give it a name, I will use SecretExternalSupplierUsr, and then set the value “SuperSecretUserName” for the username. Click Ok and repeat the process for the ExternalSupplierPwd (all the way back and press the little key again). Name your Logic App SecurePasswordsInLogicApps and it should look something like this:

Click Save to save the configuration into the parameters.json file. We are not going to deploy it yet but you can look at it to see what was updated.

Use the parameters in the Logic App

Here is the tricky part. You must add parameters in the JSON behind the Logic App. This is pretty hardcore, so make sure you know where to type what.

Start by opening the JSON file for the Logic App, not in the designer but the whole file. Scroll down to the bottom. Here you will find the first parameters clause. This is where you supply values for the parameters declared at the top. At deploy time, the Resource Manager will take the values given in the KeyVault and paste them in here. Since this part is never shown in the Logic App code behind, this is OK. Think of this as values being compiled into “the DLL of your Logic App”.

Make sure you use good names for these parameters. They do not have to be the same as those at the top, but the names must stay the same from now on. I updated my JSON file to look like this:

"parameters": {
    "SupplierAPIUsr": {
        "value": "[parameters('ExternalSupplierUsr')]"
    },
    "SupplierAPIPwd": {
        "value": "[parameters('ExternalSupplierPwd')]"
    }
}

 

My updated JSON file can be downloaded here.

Set up the Logic App with parameters

If you simply pasted [parameters('ExternalSupplierUsr')] into your Logic App, a deployment would replace the parameter with its value and therefore make it visible in the Logic App code behind. We have to send the value into the Logic App as a secure string.

Scroll up to the next parameters clause. Mine is at row 87. Here you declare two new parameters, with the same names as the parameters you just declared at the bottom of the file. After the update, my file looks like this:

"parameters": {
    "SupplierAPIUsr": {
        "type": "SecureString"
    },
    "SupplierAPIPwd": {
        "type": "SecureString"
    }
},

 

My updated JSON file can be downloaded here.

We have now set up parameters that receive the values passed in from the parameters clause at the bottom of the file.

Use the parameters

The last step is to use the parameters in the Logic App. This is very simple, since the Logic App has a parameters collection that you can reference with @parameters().

Scroll up and find the username and password for the external API. Mine are at rows 69 and 70. Update the values to use the parameters.

I updated the file to look like this:

"authentication": {
    "type": "Basic",
    "username": "@parameters('SupplierAPIUsr')",
    "password": "@parameters('SupplierAPIPwd')"
}

The final file can be downloaded from here.

Deploy and test

Deploy your Logic App just like you usually do and then test it using Postman. We get an error back because the called service does not exist.

Look at the results

If you look at the run, you will see that this has a downside. Not all values sent as secure strings are sanitized.

But at least the password is not in clear text.

Now open the code behind of the Logic App and you can see that the values of the parameters are never shown! This is awesome!

The good thing

This gives you and your team a uniform, and secure, way to keep your passwords in check. Use different KeyVaults for different environments (one for test and one for prod) and you will be good to go.

The bad thing

Since the KeyVault is only read when the Logic App is deployed, you must redeploy the Logic App if you need to update a value in the KeyVault. For instance, say we need to update the password used in the Logic App here: first update the KeyVault (use the portal) and then redeploy the Logic App. That way the new value is picked up by the Azure Resource Manager and the Logic App is updated.

Why Do I Integrate?

I got a question from a colleague: “Why should I go to Integrate? Give me a reason.”

First off: If you need convincing to go to London for three days, have fun and meet new people, then you are not conference material. Bye, bye and see you when I get home.

News?

Once, we went to conferences to get a heads-up on news: what is coming and what is important. Nowadays we get the news over Twitter or Yammer, so that is not the reason.

Educational?

Once this was the only way to get information about how to use new features and what features to use, when. Nowadays the sessions are online within an hour, so that is not the reason.

Social?

Once, we were wary of speaking to “the competition”. We stayed within our designated groups, fearful of saying something that might reveal too much about a client or a project. I remember very well trying to get two guys who had “reprogrammed the ESB Toolkit” to say why and what. I might just as well have asked them for the nuclear launch codes.

But we are getting better at this, and after a while we realized we could talk about other things besides work: we did things together, had dinner, beer and a good time.

This is one of the reasons but not the main one.

The passion <3

I am, as some know, a passionate guy. I…love…doing what I do for work. I love people who feel the same, and at Integrate I know I will meet my fellows. It is the place where I can be myself for three days. The only place where I can discuss the merits of JSON vs XML for an hour, hear a crazy Italian guy passionately talking about his latest project, or shake the hand of that Kiwi guy who helped me get on board the Logic Apps train.

Then, you meet the people from the team in Redmond and you realize: they are just like you. Just as passionate and just as social.

Integrate is News, Integrate is Educational and most certainly Social, but most of all: It is the passion.

Hope to see you there, I will be the guy in the front row, asking questions and arranging dinner.