Save Azure Activity logs for up to 12 years!

Yes, it’s true. You can save your Activity logs for as long as Azure Log Analytics allows.

Azure Activity log

As you know, everything you add, update or remove from an Azure Subscription is logged in the Activity log. It is a very useful feature as it can help you find out who added a resource or even who deleted something important.

You can find the Activity log both at the subscription level and in any resource group, by clicking it in the left-hand menu.

Expiring logs

The downside of this log is that it is only stored for 90 days. This means that if you need to look further back into the past, you will not find anything.

You can, however, mitigate this problem going forward using the Export Activity Logs feature. Note that it is not a time travel device: data will not be restored just because you use this feature.

Export Activity Logs

Finding it is not hard. Open the Activity log you want to export and look for the Export Activity Logs button at the top of the main blade.

Exporting logs to Log Analytics

You can export logs to several different destinations, such as a storage account or Event Hubs. To me, Log Analytics is the best one, as you can use a powerful query language when you need to find information later.

Create the Log Analytics workspace

Start by creating a Log Analytics workspace. The region does not really matter, but you need to create the workspace in order to:

  1. Connect your Activity log to it
  2. Set the retention policy, i.e. how long you want to keep the logs.

You can export the Activity logs to an existing workspace, but I suggest a centralized log workspace for each subscription, with access granted only to the right people, such as Security Admins or Azure Resource Admins.
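If you prefer the CLI over the portal, creating the workspace could look something like this. The resource group name, workspace name and region are placeholders I made up:

az monitor log-analytics workspace create \
  --resource-group rg-logging \
  --workspace-name law-activity-logs \
  --location westeurope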

Set the retention policy (easy mode)

After you have created the workspace, find Usage and estimated costs in the left menu and click it.

At the top of the main blade find the Data Retention button.

Clicking it will show a flyout to the right. Here you can set the default retention for the entire workspace. I suggest you do not use this feature, as the same retention will be applied to every table in the workspace, even future ones. We will set the retention later.

Configure the log for export

Go back to the Activity log. It is worth knowing that if you configure export on a Subscription level, all resource groups will be included in the export.

Find and click the Export Activity Logs button at the top of the main blade.

On the new page, find and click + Add diagnostic setting

You have reached the configuration page, and you need to configure it according to your logging needs. Tick the options for what you want to export. I suggest exporting at least Administrative, as it logs CRUD operations for resources.

In the destination details, choose Send to Log Analytics workspace and find the Log Analytics workspace you created earlier.

Don’t forget to give the diagnostic setting a name and save your new configuration.
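As an alternative to clicking through the portal, the same subscription-level export can be created with the Azure CLI. A minimal sketch, where the setting name, region and workspace resource ID are placeholders of mine:

az monitor diagnostic-settings subscription create \
  --name export-activity-log \
  --location westeurope \
  --workspace "/subscriptions/[subscription GUID]/resourceGroups/rg-logging/providers/Microsoft.OperationalInsights/workspaces/law-activity-logs" \
  --logs '[{"category": "Administrative", "enabled": true}]'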

Test the logging functionality

You need something to trigger a row in the Activity log. Any resource CRUD operation will do. I suggest that you create something new, such as a resource group, add a storage account and then delete it.

After that, make sure that your activity has been logged in the regular Activity log. When you are sure your activities have been logged, open the Log Analytics workspace you connected earlier.

To the left, find Logs and click it.

Click Select a table at the top and make sure you have a table called AzureActivity. Click it to see the contents of that table. You should have, at least, your recent activities there. If it’s empty, give it some time and retry the query. Sometimes it takes a while before the data is available.
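If you prefer to check from the command line instead, the Azure CLI can run the same kind of query. This is a sketch that assumes the log-analytics extension (az offers to install it on first use), and the workspace GUID is a placeholder:

az monitor log-analytics query \
  --workspace "[workspace GUID]" \
  --analytics-query "AzureActivity | take 10"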

Retention policy basics

Now it’s time to set the retention policy. Before you do that you need to understand two concepts:

  1. Interactive retention
  2. Archive period

When you configure retention, you have to decide two things: how long you want to keep the data (the Total retention period) and how long you want to be able to easily query that data (the Interactive retention). The Archive period is the Total retention period minus the Interactive retention. For example, a total retention of two years with one year of interactive retention leaves a one-year archive period.

You can still access data after it is archived. It is just harder.

The longer the Interactive retention, the higher the cost, but it is still not a lot. Details here.

Set the retention policy (better mode)

Time to set the retention. You can do that per table instead of for the whole workspace. Simply follow these steps.

  1. In the workspace find Tables to the left.
  2. Find the table named AzureActivity and click the three dot menu to the right.
  3. Choose Manage table
  4. Set the retention according to your needs. Here is one configured to keep data for two years, with one year of interactive retention.

Note that you can set your retention period to 12 years!
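If you would rather script this, the same per-table retention can be set with the Azure CLI. A sketch using the two-year total and one-year interactive values from above (the resource group and workspace names are placeholders):

az monitor log-analytics workspace table update \
  --resource-group rg-logging \
  --workspace-name law-activity-logs \
  --name AzureActivity \
  --retention-time 365 \
  --total-retention-time 730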

KQL

If you do not want to see all the data in the log table, you can use the query language called KQL. I will not go into anything advanced here, but here is how you use it in a Log Analytics workspace. Note that I am far from a KQL guru.

  • Go back to the Logs.
  • On the right side, change from Simple mode to KQL in the dropdown.
  • Start writing your query. Here are some examples:

List the activity for a given user for the last 24 hours.

AzureActivity
| where Caller == "user@domain.onmicrosoft.com"

Show all the delete operations for the last 24 hours

AzureActivity
| where OperationNameValue contains "DELETE"

Show all the operations from a given IP address for the last two days

AzureActivity
| where CallerIpAddress == "81.170.238.13"
| where TimeGenerated >= ago(2d)

Conclusion

Being able to extend the log retention of the Azure Activity log is useful for many reasons. Storing the data in an Azure Log Analytics workspace makes it easy to query and helps you find answers to questions relating to resource management in Azure, even after 90 days.

Find what Logic App is using an On Prem Gateway

Some time ago I wrote a post called Find application registrations used by Logic Apps. This post is really similar. If you know how to access and use the Azure Resource Graph Explorer, just skip to the end to get the KQL.

Using on-prem gateways to access data behind a firewall is great. You get a firewall-friendly way of accessing services and databases simply by using HTTPS. The downside is that you need to keep track of all those gateways and, most importantly, where they are used. This is a great candidate for the Azure Resource Graph Explorer.

Enter Azure Resource Graph Explorer

This is a tool that uses KQL to query Azure resources: list all the VMs, show all the IP addresses in use, and so on. It is very useful, particularly in this case, when looking for gateway usage.

Access

First off, you need access to the resources you want to query. That might go without saying, but I thought I would just point that out.

Finding it

Use the search box in Azure (at the top of the page) and type resource graph. The service will show up in the result.

Using it

There are a number of predefined queries, and there is also an explorer to the left, showing you all types of Azure resources grouped by type. You can click any of these and they will show up in the query window.

Using it for Logic Apps

Sadly, there is very little in the way of built-in help for Logic Apps and connectors, but the resource type is very easy to find. Just pull up a resource of the type you want your query to be about and look under Properties. There is always a property called Resource ID, and it contains the resource type.

Find the gateway usage

Enter this KQL to list all the connections that use the given gateway.

resources
| where properties.parameterValues.gateway.name == "Gateway name here"

Hopefully you can find the information you need in the result. Use the resource group name or look under properties by scrolling to the right and clicking See details.
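If you would rather run it from a terminal, the Azure CLI can execute the same query through the resource-graph extension. A sketch, with the gateway name as a placeholder:

az graph query -q "resources | where properties.parameterValues.gateway.name == 'Gateway name here'"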

Azure DevOps: Pipeline fails with ‘Job is pending’

This seems like a really strange error and I have to admit, it stumped me for months. I am running an automated job once a week and the result is sent to me via e-mail. A great way to start the week. This started failing and I tried to solve it but failed again and again. I ran the job manually instead.

Until this morning

The pipeline

The pipeline is really simple.

  1. Get values from a KeyVault.
  2. Execute a PowerShell script using those values.
  3. Send an output file as an email attachment.

The problem

My pipeline just failed with the message: “Job is pending”.

Not much information to go on.

A quick Google search did not really turn up anything useful for my case.

The solution

Turns out that there was an authentication error in a Library.
The pipeline uses a Library to get a key for authenticating to the e-mail sending service. The Library uses a service connection to authenticate, and that service connection was invalid, so the Library could not be populated at runtime.

I updated the service connection and everything just worked.

Allowing access to Logic App operators to read tracking data for APIm

Yes I know, the title is not exactly catchy.

The issue

My issue was this: a business user needs access to tracking info in a Logic App in order to help find solutions to issues. There is a very nice built-in role for that called Logic App Operator. Adding the user as an operator is easy using the portal.

However, the user received an error when trying to look at tracking data for the APIm connector. The Logic App was using the standard Azure API Management connector to connect to our instance of APIm. Normally this works fine, as other users tend to have at least read access to the connected APIm instance.

In this case, the user was a business user, and as such, had no other access at all.

The answer to the question is really easy because it is right there in the error message: the user does not have read access to the API being called.

So how do you assign access at this level? You could make the business user a Reader for the whole APIm instance, but that is too much.

The solution

You can use Azure CLI in order to assign that specific access level. My assumption is that you know how to install and run Azure CLI. If not, just follow the link.

I usually run Azure CLI from the terminal in VS Code.

First off, you need permission to assign roles. If you don't have that, this will not work.

If you have access, you need to login. Run this command:

az login

Then simply run this command from the terminal:

az role assignment create --assignee user@company.com --role "Reader" --scope "/subscriptions/[subscription GUID]/resourceGroups/[RG name]/providers/Microsoft.ApiManagement/service/[Apim instance name]/apis/[api name]"

For Frank at the Contoso company, who needs read access to the orders API, it might look like this:

az role assignment create --assignee frank@contoso.com --role "Reader" --scope "/subscriptions/e8b5e5a6-4b7d-4f8e-9b2d-8c6d7e5a4b7d/resourceGroups/contoso-apim-prod-RG/providers/Microsoft.ApiManagement/service/contoso-apim-prod/apis/orders"
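To double check that the assignment took, you can list Frank's role assignments (same fictitious names as above):

az role assignment list --assignee frank@contoso.com --all --output table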

Hope this helps.

Find and use Diagnostic Settings for a resource

The basics

I will assume you know what diagnostics are within Azure and that you know how to create and deploy Bicep. This post aims to show you how to connect Azure diagnostics to your resources using Bicep.

The problem

When deploying a diagnostic setting, you might not always know which metrics are available to you, and in some cases the metric names differ between the portal and the APIs used by Azure for deployment. So using the names from the portal might trigger strange errors complaining that certain metrics are not available.

Another problem is that diagnostic settings are not exported as a part of the resource, so finding the settings can be really tricky.

The solution

It is not that hard actually. You can access the JSON/ARM before you create the setting.
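As a complement, the Azure CLI can tell you exactly which log and metric categories the deployment API accepts for a resource, which helps when the portal names do not match. A sketch, with the resource ID as a placeholder:

az monitor diagnostic-settings categories list --resource "/subscriptions/[subscription GUID]/resourceGroups/[RG name]/providers/Microsoft.Web/sites/[function app name]"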

Getting the ARM from the portal

  1. Start by navigating to the resource you want to create diagnostics for. I am using an Azure Function. Find Diagnostic settings in the menu to the left.

  2. On the new page, click Add diagnostic setting.

  3. Fill in the settings you need.

  4. Then look way up to the right, where you can find a link that says JSON View. Click it.
    Boom! The ARM template for the diagnostic setting.

{
    "id": "/subscriptions/GUIDHERE/resourceGroups/RG-NAME/providers/Microsoft.Web/sites/FUNCTION_NAME/providers/microsoft.insights/diagnosticSettings/myDiagnosticSetting",
    "name": "myDiagnosticSetting",
    "properties": {
        "logs": [
            {
                "category": "FunctionAppLogs",
                "categoryGroup": null,
                "enabled": true,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                }
            },
            {
                "category": "AppServiceAuthenticationLogs",
                "categoryGroup": null,
                "enabled": false,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                }
            }
        ],
        "metrics": [
            {
                "enabled": true,
                "retentionPolicy": {
                    "days": 0,
                    "enabled": false
                },
                "category": "AllMetrics"
            }
        ],
        "workspaceId": "/subscriptions/GUIDHERE/resourceGroups/RG-NAME/providers/Microsoft.OperationalInsights/workspaces/LogAnalyticsName-here",
        "logAnalyticsDestinationType": null
    }
}

Converting it into Bicep

  1. Open a new Bicep file in VS Code.
  2. Copy the ARM from the Azure portal.
  3. Press Ctrl+Shift+P.
  4. Find Paste JSON as Bicep.
  5. Boom! Your ARM has been converted into Bicep, roughly like the sketch below.
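For reference, the converted result looks roughly like this. This is a trimmed sketch of mine, not the exact tool output; the parameter names, API versions and the single log category are illustrative:

param functionAppName string
param logAnalyticsWorkspaceId string

// Reference the existing Function App the diagnostic setting is scoped to
resource functionApp 'Microsoft.Web/sites@2022-03-01' existing = {
  name: functionAppName
}

// The diagnostic setting, sending logs and metrics to Log Analytics
resource diagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'myDiagnosticSetting'
  scope: functionApp
  properties: {
    workspaceId: logAnalyticsWorkspaceId
    logs: [
      {
        category: 'FunctionAppLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}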

Finishing up

Bicep is really useful when deploying infrastructure in Azure, but sometimes you need a little help to find all the settings you need to make things work. JSON View is available in many places when creating resources; use it.