
BizTalk 2020 quiet release

There is a new version of BizTalk out. This time it is called BizTalk 2020, and I was pleasantly surprised that the release contains genuinely new and interesting features, not just a platform alignment.

This time it was released without any marketing whatsoever, a so-called quiet release. I can only speculate as to why, but my guess is that this is the last version and Microsoft does not want to onboard new customers.

So, what do you get? Here is the complete list, but let me go through the ones that are most interesting. And do not forget that some previously key features have actually been removed.

My top 5 new features

1 – Operational Data Monitoring and Analytics

You can send tracking data to Azure and get a Power BI dashboard out of the box(!), without any additional monitoring software. You also get access to the storage capabilities available in Azure, and can store years of data rather than days.

2 – API Management

People know of my love for API Management, and being able to publish BizTalk orchestrations as APIs directly, and to use APIM policies to alter the messages before sending them to BizTalk, is very powerful.

3 – Auditing

Finally! The age-old question of “who stopped the receive port” can be answered by simply looking into logs. As it should always have been.

4 – XSLT 3.0

Building powerful maps using custom XSLT will be easier and better than ever.

5 – Support for Always Encrypted

BizTalk is built on SQL Server, of course, and this support will make sure that BizTalk remains an on-prem force and an integration tool for those very, very secret things.

My top 3 good riddance

There are also some things that have been removed from BizTalk, and these are my top three to which I say good riddance. Some are only marked as deprecated, meaning “in the release, but don’t use them”.

1 – SOAP Adapter

If you built something new with it, shame on you. It is 32-bit old school, with functionality covered by the WCF-BasicHttp adapter.

2 – BAM Portal

I was once forced to present the BAM portal as the viable option to a client. I still cringe.

3 – Samples

Have you heard of “The Internet”? You do not need to download static versions of it anymore.

Happy 2020 and Happy 2020 version.

Using the HTTP connector for other things

There are a lot of connectors in Logic Apps, and they usually make your life a lot easier, but sometimes there might be even better ways to connect to an Azure service.

The “problem”

This is not a fix-that-bug post, so there really is no problem, but I think you should consider using another approach sometimes. This became evident when the team could not use the Azure Table Storage connector some weeks ago. For security reasons we had to use the HTTP connector and call the Table Storage API directly, and in the end it solved a very big problem for us.

Azure Services APIs

A lot of Azure services have APIs. You can find documentation for them here. They include Cosmos DB, MySQL, maintenance, subscriptions and much more. If there is no connector for the thing you need to do in Azure, perhaps there is an API that you can call. Sometimes the APIs can be much more granular and have a little more finesse than the connector.

I therefore suggest you check out the possibilities when using Logic Apps (and even Functions). If you feel the connector lacks a bit of refinement, or behaves in unwanted ways, take a look at the APIs.

Azure Table storage

I will use Azure Table Storage as an example. There is a Table Storage connector that does the job, but it does not do it very well. Take a look at this flow that was built using the original connector:

The original flow has been lost to time, but the important thing here is to look at the removal of metadata. Every call to the storage responds with three additional properties: odata.etag, PartitionKey and RowKey. We did not want to return that data to the caller, so it was removed. However, this was done using the “RemoveProperty” operation, and for some strange reason the combination of that and the “Add to response” at the bottom meant every row took between 2 and 5 seconds(!) to process. When returning rowsets of 30 rows, we were talking minutes to respond.

What can be done using the connector?

First off, you have to ask: what can I do using just the connector? In the case above, the developer could use the parameter called Select Query to return only the columns needed, omitting PartitionKey and RowKey, but the connector would still return odata.etag. Therefore the “remove metadata” step and the “Add to response” action would still be needed, and those were the most time consuming.

Sequential vs parallel

The next thing you can look at is the flow control. In this case the data manipulation was done in a loop. Try changing the Degree of parallelism to one and run the flow again, and then try the max value. In our case it made little to no change.

Using the API directly

To start off, there is an inherent problem with using the API directly: the security model and the recycling of SAS keys. You have to be aware of it; that is basically it.

Going into this part of the plan we knew we had one issue: to return only the data we needed to send back to the caller. This meant returning only the columns they wanted, and removing the odata.etag.

Looking at the documentation for querying Table Storage for entities, we found three things to use:

Authorization

According to the documentation this is supposed to be a header, but you can just as easily use the query string, i.e. the string you copy from the storage account to give you access.

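As a minimal sketch, with a made-up storage account, table and SAS token, appending the copied string to the query entities URI looks like this:

GET https://mystorageaccount.table.core.windows.net/customers()?sv=2019-02-02&ss=t&srt=o&sp=r&sig=signaturegoeshere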

$select

The API supports asking for only a subset of the columns, giving you the same capability as the connector.

Ask for no metadata back

By setting the Accept header to application/json;odata=nometadata you can omit any metadata.
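To illustrate with a hypothetical entity: with metadata included, each returned entity carries the odata.etag property mentioned above, roughly like this:

{"value":[{"odata.etag":"W/\"datetime'2020-01-01T10%3A00%3A00Z'\"","PartitionKey":"mypartition","RowKey":"1","Name":"Anna","Email":"anna@example.com"}]}

With odata=nometadata the same entity comes back with only its own properties:

{"value":[{"PartitionKey":"mypartition","RowKey":"1","Name":"Anna","Email":"anna@example.com"}]}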

The resulting call

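A sketch of the call, with a made-up storage account, table, columns, filter and SAS token:

GET https://mystorageaccount.table.core.windows.net/customers()?$filter=PartitionKey%20eq%20'mypartition'&$select=Name,Email&sv=2019-02-02&ss=t&srt=o&sp=r&sig=signaturegoeshere
Accept:application/json;odata=nometadata
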
Note the URL escaping. We did not succeed in using the Queries part of the HTTP action, and I think that is due to how the values are URL-encoded when sent to the service, so we had to put everything in the URI field.

Result

By combining these we could make sure the caller would get the correct data and we did not have to manipulate it before returning the payload. This resulted in calls that responded in milliseconds instead of a full minute.

Logic Apps, storage and VNETs

Recently I had the opportunity to use Logic Apps in a much more “locked down” Azure environment than I am used to, and I found some interesting things.

Logic Apps support for VNET

Famously, your Logic Apps share the underlying servers with other customers, as it is a shared service. This makes it very easy to maintain and very cheap to run enterprise-grade stuff. But, also famously, a Logic App cannot be assigned to a particular VNET. This does not hold true for the Logic Apps ISE, but that was off the table in this case.

This does not mean that it is insecure, and this client made it possible to use Logic Apps despite the locked-down environment, as long as we:

  • Accessed all Logic Apps through another service connected to a VNET. In this case we used Azure API Management Premium.
  • Whitelisted only the APIM instance’s IP address for each Logic App, unless
  • The Logic App was called by another Logic App, in which case we used that option.

Limits of Logic Apps

First off, you should always have the Logic Apps Limits and Config page in your favorites; not because you often hit the limits, but because you should be aware that they exist. One section is particularly interesting in this case: the one on firewall configuration and IP addresses.

Allowing access to a resource

When you want to open a firewall for Logic Apps deployed in a particular region, you look up the IP addresses in this list and configure the firewall/network security group. This means that the resource is then potentially available to all Logic Apps in that region. Therefore, you need to protect the resource with an additional layer, such as a SAS-key.

This is how we allowed access between our Logic Apps and the Azure SQL Server instance. In that case we also used credentials as an additional layer.

To allow access you simply need to find your region in the list and then allow exceptions for the IP-addresses listed.

Allowing access to a storage

Now here is when things started to “head south”.

Thanks to a support case I generated, the text has been updated and it now reads (my formatting for emphasis):

Logic Apps can’t directly access storage accounts that use firewall rules and exist in the same region. However, if you permit the outbound IP addresses for managed connectors in your region, your logic apps can access storage accounts that are in a different region except when you use the Azure Table Storage or Azure Queue Storage connectors. To access your Table Storage or Queue Storage, you can use the HTTP trigger and actions instead.

What you need to do

If you are using blob or file storage, you do not need the last step, but if you are using Table Storage or Queue Storage, you need to do all these steps.

The Storage and Logic App cannot be in the same region

Move the Logic App accessing the storage to the paired region. In our case, the storage is in North Europe and the Logic Apps are in West Europe.

Update the storage firewall to allow IPs from Logic Apps

Finding all the IPs for your Logic App is easy. Just go to this link: https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config#firewall-configuration-ip-addresses and scroll down to find the outbound addresses. You need to add all of those IPs, as well as the managed connector IPs.

Here is a tip: since you need to add IP ranges using the CIDR format in the storage firewall, and some IPs are just listed as ranges, you can visit this page to convert them.
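For example (a made-up range): a listed range of 52.178.150.68 - 52.178.150.71 covers four addresses and therefore translates to 52.178.150.68/30 in CIDR notation.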

Here is another tip: you can find the IP addresses of the affected Logic App under Properties for the Logic App.

Here is my updated storage firewall after adding everything:

If you are using blob or file storage, you are done.

Update your Logic Apps for Table Storage

We did not use queue storage, so I have no input on that. However, my guess is that it is basically the same.

The connector for Table Storage will still not work, so you need to call the API directly. As a matter of fact, I really liked that way much better as it gives a granularity that the connector does not support. The ins and outs of this will be covered in a separate post.

We changed the action from the Table Storage connector to the HTTP connector and configured it like this:
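The original configuration used our real values; as a sketch, with a made-up storage account, table, columns and SAS token, the HTTP action looked roughly like this:

Method: GET
URI: https://mystorageaccount.table.core.windows.net/customers()?$select=Name,Email&sv=2019-02-02&ss=t&srt=o&sp=r&sig=signaturegoeshere
Headers:
Accept:application/json;odata=nometadata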

The documentation on how to use the API directly can be found here: https://docs.microsoft.com/en-us/rest/api/storageservices/query-entities

To summarize

Enabling the firewall on an Azure storage account might be necessary. Logic Apps, as well as other Azure services, have issues with this. To solve it for Azure Table Storage you need to:

  1. Place the Logic App in another region (datacenter).
  2. Use the API directly with the HTTP connector.

Storage is a bit strange in some respects, not only for Logic Apps, and strange things can happen.

Simple How-to: Upload a file to Azure Storage using Rest API

There are a lot of different ways to make this happen but, like before, I was looking for the “quick and easy way” to just get it done. So here is a condensed version. Please send me feedback if you find errors or need clarification in any areas. I would also like to point to the official Azure Storage API documentation.

Later update

Since I wrote this post, Microsoft has done a lot of work on the permission side of the file service. This means that this post does not cover the latest version. The simple and easy way I propose is still usable; you just need to add this header: x-ms-version:2018-11-09. All the examples below use this header.

Tools

For testing the REST APIs I recommend using Postman.

Create a file storage

First you need to create a file storage in Azure. More information can be found here.

For this I created a storage account called bip1diag306 (fantastic name, I know), added a file share called “mystore”, and lastly added a subdirectory called “mysubdir”. This is important for understanding the HTTP URIs later in this post.

Create a SAS key

In order to give access to your files you can create a SAS key using the Azure Portal. The SAS key is very useful since it is secure, dependable, easy to use and can be set to expire at a given time if you need it.

At the moment, a SAS key created in the portal can only be set for the entire storage account. It is possible to set a particular key for a folder but in that case, you have to use code.

To create an SAS key using the portal, open the overview for the storage account and look in the menu to the left. Find “Shared Access Signature” and click it.

Select the access option according to the image. This will make sure you can create and upload a file.

Make sure the Start date and time is correct, including your local (calling) time zone. I usually set the start date to “yesterday” just to be sure and then set the expiration to “next year”.

Click the “Generate SAS” button. The value in “SAS Token” is very important. Copy it for safekeeping until later.

Create and then upload

The thing that might be confusing is that the upload must happen in two steps: first you create the space for the file, and then you upload the file content. I was looking for an “upload file” API, but this is the way to do it.

There are a lot more things you can configure when calling this API. The full documentation can be found here. Note that the security model in that documentation differs from the one in this article.

Create

First you need to call the service to make room for your file.
Use Postman to issue a call configured like this:

PUT https://[storagename].file.core.windows.net/[sharename][/subdir]/[filename][Your SAS Key from earlier]
x-ms-type:file
x-ms-content-length:[file size in bytes]
x-ms-version:2018-11-09

Example

If I were tasked with uploading a 102-byte file called myfile.txt to the share above, the call would look like this:

PUT https://bip1diag306.file.core.windows.net/mystore/mysubdir/myfile.txt?sv=2020-08-04&ss=f&srt=so&sp=rwdlc&se=2021-12-08T21:29:12Z&st=2021-12-08T13:29:12Z&spr=https&sig=signaturegoeshere
x-ms-type:file
x-ms-content-length:102
x-ms-version:2018-11-09

Upload

Now, it is time to upload the file, or to fill the space we created in the last call. Once again there is a lot more you can set when uploading a file. Consult the documentation.

Use Postman to issue a call configured like this:

PUT https://[storagename].file.core.windows.net/[sharename][/subdir]/[filename]?comp=range&[Your SAS Key from earlier] (remove the ?-sign you got when copying from the portal).
x-ms-write:update
x-ms-range:bytes=[startbyte]-[endbyte]
content-length:[empty]
x-ms-version:2018-11-09

Note the added parameter comp=range.

Looking at the headers, the first one means that we want to “update the data on the storage”.

The second one is a bit trickier. It tells what part of the space on the storage account to update, or what part of the file if you will. Usually this is the whole file, so you set the start byte to 0 and the end byte to the length of the file in bytes minus 1.

The last one is content-length. This is the length of the request body in bytes. In Postman, this value cannot be set manually; it is filled in for you automatically based on the size of the request body, so you can simply omit it. If you are using some other method of sending the request, you have to calculate the value.

If you are using PowerShell, it seems that this value is calculated as well, and you should not define a content-length header. You get a very strange error about the content-type if you try to send the content-length:

The cmdlet cannot run because the -ContentType parameter is not a valid Content-Type header. Specify a valid Content-Type for -ContentType, then retry.

Example

Returning to the 102-byte file from earlier, the call would look like this:

PUT https://bip1diag306.file.core.windows.net/mystore/mysubdir/myfile.txt?comp=range&sv=2020-08-04&ss=f&srt=so&sp=rwdlc&se=2021-12-08T21:29:12Z&st=2021-12-08T13:29:12Z&spr=https&sig=signaturegoeshere
x-ms-write:update
x-ms-range:bytes=0-101
content-length: 
x-ms-version:2018-11-09

The request body is the file content in clear text.

Limitations

There are limitations to the storage service, one of which impacted me personally: you can only upload 4 MB “chunks” per request. So if your file exceeds 4 MB, you have to split it into parts. If you are a good programmer you can make use of tasks and await to upload several chunks in parallel. Please consult the Azure limits documentation to see if any other restrictions apply.
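As a sketch (made-up file name, sizes and SAS token), uploading a 6,291,456-byte file would mean creating it with that x-ms-content-length and then filling it with two range uploads, each carrying the corresponding part of the file as the request body:

PUT https://bip1diag306.file.core.windows.net/mystore/mysubdir/bigfile.bin?comp=range&sv=2020-08-04&ss=f&srt=so&sp=rwdlc&sig=signaturegoeshere
x-ms-write:update
x-ms-range:bytes=0-4194303
x-ms-version:2018-11-09

PUT https://bip1diag306.file.core.windows.net/mystore/mysubdir/bigfile.bin?comp=range&sv=2020-08-04&ss=f&srt=so&sp=rwdlc&sig=signaturegoeshere
x-ms-write:update
x-ms-range:bytes=4194304-6291455
x-ms-version:2018-11-09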

Timeout and parallel branches in Logic Apps

Back when I was a BizTalk developer I used something called sequential convoys. The two main features that had to be implemented to use that pattern were a timeout shape and a parallel branch. The flow either received a new message or it “timed out”, executing some other logic, perhaps sending an aggregated batch to a database.

Looking at Logic Apps, the same pattern does not translate 100%, but there are still very good uses for parallel branches and clever use of the Delay action.

Can a Logic App timeout?

The question is quite fair: How can we get a behavior that makes the Logic App send back a timeout if a run does not complete within a given time frame? In order to illustrate this I have set up a Logic App that takes inspiration from the sequential convoy pattern:

  1. Receives a request
  2. Starts a delay (the timeout) in one branch.
  3. Starts a call to an external service in the other branch.
  4. If the service responds back before the delay (timeout) is done, HTTP 200 is sent back.
  5. If the service does not respond back in time, HTTP 504 (Gateway timeout) is sent back.

For demo reasons I have added another delay to the “call the service” branch, to make the call take too long and trigger the timeout.

The Logic App

If the TimeoutResponse triggers first, the Logic App engine will recognize this and will not try to send the other response; that action will be skipped. If this happens, the run will be marked as “Failed” in the run history, which in turn points to a timeout.

The json for the Logic App can be found here.

Some caveats and perhaps better solutions

Note that in this Logic App, the HTTP call to the service will still be executed even if TimeoutResponse executes. That can be fixed using the Terminate action.

Lastly, you should also think about why you need to implement a timeout in a Logic App. Can’t the calling client set a timeout on their end? If not, why? Can this be solved in some other way? If there is a risk of timing out, can you rebuild the messaging paths in a more asynchronous (webhook) manner? One call from the client starts the process and another Logic App sends the result when the processing is done.

Lastly

I might find scenarios where this is very useful. I have yet to find one, but it is nice to know that the option is there and how it behaves.