Find and use Diagnostic Settings for a resource

The basics

I will assume you know what diagnostic settings are within Azure and that you know how to create and deploy Bicep. This post aims at showing you how to connect Azure diagnostic settings to your resources using Bicep.

The problem

When deploying a diagnostic setting you might not always know which metrics are available to you, and in some cases the metric names differ between the portal and the APIs Azure uses for deployment. So using the names from the portal might trigger strange errors complaining about metrics not being available.

Another problem is that diagnostic settings are not exported as a part of the resource, so finding the settings can be really tricky.

The solution

It is actually not that hard. You can access the JSON/ARM representation before you create the setting.

Getting the ARM from the portal

  1. Start by navigating to the resource you want to create diagnostics for. I am using an Azure Function. Find Diagnostic settings in the menu to the left:

  2. On the new page, click Add diagnostic setting.

  3. Fill in the settings you need:

  4. Then look up to the right. Way up there you can find a link that says JSON View. Click it.
     BOM! The ARM template for the diagnostic settings.

{
    "id": "/subscriptions/GUIDHERE/resourceGroups/RG-NAME/providers/Microsoft.Web/sites/FUNCTION_NAME/providers/microsoft.insights/diagnosticSettings/myDiagnosticSetting",
    "name": "myDiagnosticSetting",
    "properties": {
        "logs": [
            {
                "category": "FunctionAppLogs",
                "categoryGroup": null,
                "enabled": true,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                }
            },
            {
                "category": "AppServiceAuthenticationLogs",
                "categoryGroup": null,
                "enabled": false,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                }
            }
        ],
        "metrics": [
            {
                "enabled": true,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                },
                "category": "AllMetrics"
            }
        ],
        "workspaceId": "/subscriptions/GUIDHERE/resourceGroups/RG-NAME/providers/Microsoft.OperationalInsights/workspaces/LogAnalyticsName-here",
        "logAnalyticsDestinationType": null
    }
}

Converting it into Bicep

  1. Open a new Bicep file in VS Code.
  2. Copy the ARM from the Azure portal.
  3. Press Ctrl+Shift+P.
  4. Find Paste JSON as Bicep.
  5. Bom! Your ARM has been converted into Bicep.
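For reference, the decompiled output for the template above should end up roughly like the sketch below. I am assuming an existing function app and passing the Log Analytics workspace ID as a parameter; adjust the names to your own resources.

```bicep
param logAnalyticsWorkspaceId string

// Reference the existing function app the setting should attach to
resource functionApp 'Microsoft.Web/sites@2022-03-01' existing = {
  name: 'FUNCTION_NAME'
}

// Diagnostic settings are an extension resource, attached via the scope property
resource myDiagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'myDiagnosticSetting'
  scope: functionApp
  properties: {
    logs: [
      {
        category: 'FunctionAppLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
    workspaceId: logAnalyticsWorkspaceId
  }
}
```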

Finishing up

Bicep is really useful when deploying infrastructure in Azure, but sometimes you need a little help to find all the settings you need to make things work. The JSON View is available in many places when creating resources; use it.

Setting a function app key using Bicep

If you search Google for “azure function app key deploy arm”, or even Bicep, you will get some older results saying that it is not possible, and that you have to enter app keys manually after deployment. That is not true anymore.

As always, just scroll down to the Bicep if that is what you are looking for.

Why do this?

When you call an Azure Function with just the minimum level of security, you supply a key, either as a query parameter named code or as a header named x-functions-key. You can easily get a key from the function and just use the _master or default key. However, from a maintenance perspective it is useful to have separate app keys for every consumer (such as your organization’s API Manager).

You simply add a key and name it something that tells you how the key is used, such as ourAPIm-DEV. Now you need to deploy this app key to TEST and PROD as well, so you want to use Bicep for that. Here is how you do it:

The solution

@secure()
param functionAppKey string

var functionAppName     = 'MyFunctionName'
var functionAppKeyName  = 'MyAppKeyName'


resource FunctionAppName_default_keyName 'Microsoft.Web/sites/host/functionKeys@2022-03-01' = {
  name: '${functionAppName}/default/${functionAppKeyName}'
  properties: {
    name: functionAppKeyName
    value: functionAppKey
  }
}

The secret is that the function key setting is not under Microsoft.Web/sites but under Microsoft.Web/sites/host, which is really confusing, especially since the host segment is always default. The way you achieve this is to use the slash-separated name instead of the parent property.

My Bicep is shortened for this post; you should use parameters instead of hard-coded variables for values such as the key name and the function name.
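As a sketch, a parameterized version might look like this (the parameter names are my own):

```bicep
@secure()
param functionAppKey string
param functionAppName string
param functionAppKeyName string

// The slash-separated name attaches the key under sites/host/functionKeys
resource functionAppKeyResource 'Microsoft.Web/sites/host/functionKeys@2022-03-01' = {
  name: '${functionAppName}/default/${functionAppKeyName}'
  properties: {
    name: functionAppKeyName
    value: functionAppKey
  }
}
```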

Lastly, use different key values for different environments, then give the key to the consumer.

Find application registrations used by Logic Apps

This is an edge case but boy was I happy to find the solution.

Some time ago I was tasked with finding which Logic Apps were using a particular application (or app registration) as an authentication mechanism. The reason was that the secret was expiring and we needed to know which Logic Apps to update.

There are several ways to solve this. I tried using the code search option in Azure DevOps to find references to it. That did not turn up many results, depending on how the codebase was configured; we usually inject connection settings from the release pipeline in Azure DevOps.

Enter Azure Resource Graph Explorer

This is a tool that uses KQL to query Azure resources: list all the VMs, show all IP addresses in use, and so on. It is very, very useful. Particularly to me, looking for application references.

Access

First off, you need access to the resources you want to query. That might go without saying, but I thought I would just point it out.

Finding it

Use the search box in Azure (at the top of the page) and type resource graph. The service will show up in the result.

Using it

There are a number of predefined queries, and there is also an explorer to the left, showing you all types of Azure resources grouped by type. You can click any of these and they will show up in the query window.

Using it for Logic Apps

Sadly, there is very little in the way of help for Logic Apps and connectors, but the resource type is very easy to find. Just pull up a resource of the type you want the query to be about and look under its properties. There is always a property called Resource ID, and that contains the resource type.

Finding all connectors using the application

First off, you need the client ID of the application you are looking for. It can be found on its overview page in Azure Entra. If you want to filter your results to one particular subscription, you need the subscription ID as well.

Here is the KQL query.

resources
| where type == "microsoft.web/connections" and subscriptionId == "your subscription ID here"
| where properties.parameterValues["token:clientId"] == "application client ID"

This will give you a list of all connections that are using that application to authenticate.

Note the strange syntax for ["token:clientId"]. This is because KQL does not allow colons in bare property names, so you have to use a string literal inside brackets for it to work.

If the property you are looking for does not contain any colon, you do not need it. Here is an example looking for connections with a particular display name.

resources
| where type == "microsoft.web/connections" and subscriptionId == "your subscription ID here"
| where properties.displayName == "the displayname"

Happy hunting.

Bypassing cache in APIm

The good thing about caching

One very strong feature of Azure API Management is the ability to cache data. When caching is implemented, the response is picked up from an in-memory store and returned to the caller within milliseconds. It all depends on the type of data returned, but not all data needs to be fetched fresh from the backend systems all the time.

Caching the response to creating an order is probably a bad idea, but a list of the company offices might be a good candidate.

There are a million articles on how to implement caching in APIm including the official documentation.

Here is an example that stores a response for an hour, with a separate cache for each developer/caller.

<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="true" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <cache-store duration="3600" />
    </outbound>
</policies>

The trouble with caching

In some cases you need to force the call to get data from the backend system and not use the cache. One such case is during development of the backend system. If something is updated, the data is not updated until the cache has timed out, and you need that new data now!

I did not find any ready made examples of how to achieve this and that is why I wrote this post.

How to control the cache

The official documentation points to using a header called Cache-Control. More information can be found here. In fact, if you test your API from the portal, the tester always sets this header to no-store, no-cache. It is up to you how to handle this header in your API.

Example of no-cache

This is what I did to implement the following case: “If someone sends a Cache-Control header containing no-cache, the call should get the data from the backend system and ignore the cache.”

Using the same example from above, I added some conditions.

<policies>
    <inbound>
        <base />
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Cache-Control","").Contains("no-cache") == false)">
                <cache-lookup vary-by-developer="true" vary-by-developer-groups="false" />
            </when>
        </choose>
        <!-- Call your backend service here -->
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <choose>
            <when condition="@(context.Request.Headers.GetValueOrDefault("Cache-Control","").Contains("no-cache") == false)">
                <cache-store duration="3600" />
            </when>
        </choose>
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>

It is very straightforward: find the header, and if it does not contain no-cache, use the cache.
If the cache is used, the call is answered by the cache-lookup policy in the inbound section and whatever is in the store is returned; the backend is never called.

Using OAuth 2.0 in Logic Apps

OAuth 2.0

There are many scenarios where you need to call a service that implements OAuth 2.0. It has support for roles and claims, for instance. Out-of-the-box support for OAuth 1.0 is really easy and there are many walkthroughs on that topic. I will show how to configure a Logic App to use OAuth 2.0.

This is also related to my earlier post Getting a bearer token from AAD using Logic Apps, where I show you how to get an OAuth 2.0 token using Logic Apps.

The scenario

Someone has set up a service that handles sensitive data. Simply protecting it with an API key is considered too low a level of security. The provider has set up an Application Registration in Azure AD and provided you with a Client ID, a Client Secret, and the Scope. All are needed to authenticate.

How this is set up in AAD is out of scope for this post.

The solution

We decide to use Logic Apps and the HTTP connector. It has built-in support for OAuth 1.0, but we are going to use 2.0.
Here is a mock-up of the settings; let's go through them.

  • URI: The URI of the service you need to call.
  • Header: api-key: You usually need to provide an API key when calling an API. This setting is specific to the service you need to call.
  • Authentication Type: Choose Active Directory OAuth
  • Authority: Set to https://login.microsoftonline.com when using Azure AD
  • Tenant: Your Azure AD TenantID
  • Audience: Provide the Scope you have been sent. Make sure to omit the /.default at the end of the scope string, if present.
  • Client ID: The Client ID you have been sent.
  • Client Secret: The Client Secret you have been sent.
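If you switch to the Logic App code view, the HTTP action should end up with an authentication block along these lines (all values here are placeholders):

```json
{
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://service.example.com/api/data",
    "headers": {
      "api-key": "the-api-key-for-the-service"
    },
    "authentication": {
      "type": "ActiveDirectoryOAuth",
      "authority": "https://login.microsoftonline.com",
      "tenant": "your-tenant-id",
      "audience": "api://your-application-id",
      "clientId": "the-client-id-you-were-sent",
      "secret": "the-client-secret-you-were-sent"
    }
  }
}
```

In a real deployment the secret should of course come from a secure parameter or Key Vault, not be pasted into the definition.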

That is actually it!

Some notes

Scope

The strange and hard part for me was figuring out how to configure the Scope. First off, you put the Scope in the Audience field, which feels strange. Then you must provide the base Scope, which was different for me.

When you use Postman to get an OAuth 2.0 token, you send the scope with /.default at the end of it to say “give me claims for the default scope”. When I set the Audience property like that, I got an error. You need to remove the suffix.
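To illustrate the difference, a raw client credentials request (the kind Postman sends under the hood) keeps /.default on the scope, while the Logic App Audience field takes only the base scope. All values below are placeholders:

```http
POST https://login.microsoftonline.com/your-tenant-id/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=your-client-id
&client_secret=your-client-secret
&scope=api://your-application-id/.default
```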

Authority

This is hard to find in the documentation, but the setting makes sense. If you are using standard Azure (not US Government, China, or Germany) it is always set to https://login.microsoftonline.com. You can find the other settings here.