
The easiest way to connect to the Salesforce API

This post is a simple reminder of how to get going with connecting to Salesforce using the API. The security model is very rich, and the documentation is sadly both lacking and, in some cases, wrong. I therefore took it upon myself to create a “How to get to the first successful message” post.
Any additional information can be found in the Salesforce documentation.

Note that this security setup might not be optimal for your needs further down the line in your use of Salesforce. Personally, I used the API to upload Account and Contact data and then let the salespeople loose on the application.

The steps are these:

  1. Make sure you have access to Salesforce.
  2. Make sure you are a System Administrator.
  3. Set up a Connected App; this is the connection used for the API calls.
  4. Get some user info.
  5. Get your security token.
  6. Log in and get an access token.
  7. Test the access token using a standard call.

I will use the simple Username and Password flow. There are others, but this seems to fit my needs the best.

Here we go.

Make sure you have access to Salesforce

You have been assigned a user and a path for login. Usually this is login.salesforce.com or test.salesforce.com if you are using the sandbox.

Make sure you are a System Administrator

Access the Setup part of Salesforce. This is usually done by clicking the cog in the upper right corner of the screen.

A new tab will open with all the settings.

Access the “Users” setting via the menu to the left, under Administration. Click Users and then Users again.

In the list to the right, find your identity and make sure it is System Administrator.

Set up a Connected App

In the menu to the left, find the Platform Tools heading. Click Apps and then App Manager.

The list to the right contains all the currently connected apps. Ignore that and look for the New Connected App button in the upper right. Click it.

Time to fill in the fields.

Connected App name: A name you choose. Can be anything.

API Name: Auto fills. Do not touch it.

Contact e-mail: Fill in a valid e-mail that you have access to.

Scroll down and choose Enable OAuth Settings.

Now comes the tricky part. The documentation does not really say what to enter here, but the path to use is https://login.salesforce.com/services/oauth2/callback. If you are using the sandbox (or test version) the address is https://test.salesforce.com/services/oauth2/callback.

Lastly, set the OAuth Scope to the level you need. To be sure it gets all the access it needs, simply choose Full Access and click Add to the right.

Now you are done. Click Save and then follow the instruction to wait a few minutes before the app can be used.

Getting some user info

In order to access the API you need the application’s Consumer Key and Consumer Secret. You can get them by looking at the app you just created.

Go back to the App Manager page and find your app in the list to the right. Look to the far right of that row, click the down arrow and choose View.

There are two values here that you need to copy: the Consumer Key (usually a very long string of gibberish) and the Consumer Secret (usually a string of numbers).

Getting your security token

This is a token that is used to verify your password when you call the API login. There might be a way of getting it without resetting it, but the instructions below will at least work.

Open your own personal page (up to the right) and click Settings.

In the menu to the left, find the item “Reset My Security Token”.

Click it and then click the Reset Security Token button.

A new token will be e-mailed to you within a minute or so. Continue with the instructions here while you wait for it.

Login and get an access token

Time to put all this to good use. I personally use Postman to test the API. Here is how you should configure the POST to make sure you get the access token back.

Method: POST

Headers

Content-Type: application/x-www-form-urlencoded

URL: https://login.salesforce.com/services/oauth2/token or https://test.salesforce.com/services/oauth2/token if you are using the Sandbox.

Then you need to add the following parameters to your URL query string.

grant_type: password

client_id: The Consumer Key you copied above

client_secret: The Consumer Secret you copied above

username: Your username that you used to log into Salesforce. Note! If you are using an e-mail address you should escape the @-sign as %40. So, if your username is mikael_sand@salesforce.com it should be formatted as mikael_sand%40salesforce.com

password: The password you used to log into Salesforce, immediately followed by the security token that was e-mailed to you.

Now you are ready to log in. Click Send in Postman and, if it works, you will get back some nice JSON with an access token.
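If you would rather script the login than use Postman, a minimal sketch in PowerShell could look like the following (the consumer key, secret, username and password are placeholders; Invoke-RestMethod should form-encode the body for you, so the @-sign in the username does not need to be escaped here):

# Sketch of the username-password flow. Replace the placeholder values with your own,
# and use test.salesforce.com instead of login.salesforce.com for the sandbox.
$body = @{
    grant_type    = "password"
    client_id     = "<Consumer Key>"
    client_secret = "<Consumer Secret>"
    username      = "mikael_sand@salesforce.com"
    password      = "<password><security token>"   # password immediately followed by the token
}

$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.salesforce.com/services/oauth2/token" `
    -ContentType "application/x-www-form-urlencoded" `
    -Body $body

# The two values you need for the next step
$response.access_token
$response.instance_url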

Test the access token using a standard call

Now to test that the access token works.

Simply configure Postman like this:

Method: GET

Headers

Authorization: Bearer [the access token from above] (note that there is a space between “Bearer” and the token).

URL: Here you need to know which instance of Salesforce you are running on. This is supplied in the authorization call above, in a JSON property called “instance_url”.

The path for getting information on the Account object is this: https://instance_url/services/data/v39.0/sobjects/Account/describe (replace instance_url with the value from the login response). The v39.0 part may change; it is the latest version at the time of writing.

Click send and you should get back some nice JSON describing the fields of the Account object.
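The same call scripted in PowerShell, continuing from the $response object in the login sketch above (the API version is an assumption; use whatever version your org supports):

# Describe the Account object using the access token and instance URL from the login call.
$headers = @{ Authorization = "Bearer $($response.access_token)" }
$describeUrl = "$($response.instance_url)/services/data/v39.0/sobjects/Account/describe"

Invoke-RestMethod -Method Get -Uri $describeUrl -Headers $headers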

Errors?

If you get back an error like “Session Expired or Invalid” make sure that:

  1. You send the call to the correct instance url (test vs prod got me here).
  2. You send the correct access token in the Authorization header (got me once).

An easier way to install Logic App Prereqs

Recently I have been doing some teaching work on Logic Apps. The sessions have been focused on the basics, but also on how to use Visual Studio for development. The ALM functionality in VS is preferred at this point in time.

There were some questions on the installation though, so I thought I would post a more to-the-point guide here.

The official instructions can be found here.

If you find anything wrong with this guide, please provide feedback using my e-mail or by commenting below.

Install Visual Studio 2015

The software can be found here, or using your MSDN subscription.

Install Azure SDK and so on

The easier way is to simply get the Web Platform Installer (WebPI).

Using that you can simply check the things you need, start installation and go have a coffee.

Finding Azure PowerShell

Search for “azure powershell”, and add it. Use the latest version.

Finding Azure SDK

Then do a search for “azure sdk”. Find the one highlighted in the picture, and add it. If the version number is higher than the one in the picture, use that version.

The downside of screen grabs is that they do not update by themselves.

Installing

Now simply click Install and have yourself a well-deserved break.

Installing the Logic Apps Extension

Open Visual Studio 2015 and choose Tools/Extensions and Updates…

Select “Online” in the menu to the left.

Search for logic apps

Select “Azure Logic Apps Tools for Visual Studio” and choose Install. If you are missing any prereqs, the installer will point that out and you will not be able to install.

Further reading and testing it out

To make sure you have everything you need and to start flexing your developer skills, you can follow this handy guide: “Build and Deploy Logic Apps in Visual Studio”.


InvalidTemplateDeployment in Azure RM

Using scripting when deploying Logic Apps and the surrounding bits is very useful. If you have set something up, it is very easy to script it and save it locally or under your templates.

I stored mine locally and got the error above when deploying. My thoughts were: “An error in the template? The template that was generated for me? This is not good.”

I tried opening the template file and found some minor upper and lower case errors but that did not do it.

The solution was to get more information! You need to access your subscription’s Activity Log. You can find it in the menu on the left side or by searching for “Activity Log” in the expanded menu.

The starting query should return your failed validation of the template.

Click on the row of the failed validation (strangely, it is not a link) and choose to show the info as JSON.

Scroll down to the end of the message. Under the tag “properties/statusMessage” you will find the full story. In my case (I am ashamed to say) the name of the storage account was invalid.
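If you prefer PowerShell over the portal, a rough sketch of the same lookup (assuming the AzureRM module, which was current at the time, that you are already logged in with Login-AzureRmAccount, and a placeholder resource group name):

# Pull the failed entries for the resource group; the full error text lives in the
# Properties of each entry, under statusMessage.
$failed = Get-AzureRmLog -ResourceGroupName "MyResourceGroup" -Status "Failed" -DetailedOutput
$failed | Select-Object -ExpandProperty Properties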


CaseSensitiveDeploymentParameterNamesFound

I got this error when deploying a Logic App. Since I could not find anything on it I just thought I would do a quick post about it.

If you Google it, you get zero useful hits; instead you get pointed to a page on keeping parameters secret. Not a bad idea, but it did not solve anything for me.

The real error was easy to fix. I had simply entered two parameters with the same name but with different casing. This was interpreted as me trying to use case-sensitive parameter names in my deployment, and that is not how it is done. Keep parameter names case-insensitive.


SQL Server Edition Upgrade might fail

What happened?

A while back I tasked myself with automating a SQL Server edition upgrade using PowerShell.

I ran into some problems. I made sure the upgrade was as /s (silent) as possible, and so I only got a very rudimentary progress bar. The upgrade seemed to take a long time, and after two hours of waiting I decided that it had “hung”. I repeated the upgrade but kept an eye on the log file.

What was wrong?

Looking into the log file I found that the thing that seemed to hang was this row:

Waiting for nt event ‘Global\sqlserverRecComplete’ to be created

How to solve it?

Searching for it online I found several reasons for this, and one (unsupported) option stood out: simply skip the rules check.

If the upgrade fails in this way, simply add the following to your PowerShell string:

/SkipRules=Engine_SqlEngineHealthCheck
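For reference, a sketch of what the full command line might look like (the setup path, instance name and product key are placeholders; adjust them to your environment):

# Silent edition upgrade with the troublesome rule check skipped.
$setup = "D:\SQLServerSetup\setup.exe"

& $setup /q /ACTION=EditionUpgrade `
    /INSTANCENAME="MSSQLSERVER" `
    /PID="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" `
    /IACCEPTSQLSERVERLICENSETERMS `
    /SkipRules=Engine_SqlEngineHealthCheck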

The implications

Some images on Azure have SQL Server Evaluation edition installed by default. You usually want to upgrade these to Developer edition, using the built-in Edition Upgrade functionality.

If you run into the “hang” issue, you have to upgrade SQL Server without checking the Engine_SqlEngineHealthCheck rule.


How to only poll data on specific weekdays using the WCF-SQL adapter

There are a lot of solutions to this particular question. The need is that we only poll data from a database on Sundays. This might be solved using a stored procedure that only returns data on Sundays. It might also be solved by using the famous schedule task adapter to schedule the poll for Sundays. You can also do some cool coding thing using a custom pipeline component that rejects data on all other days but Sundays. Your scenario might be very well suited for one of these solutions, the scenario presented by my colleague Henrik Wallenberg did not fit any of those.

The scenario

A database is continuously updated throughout the week, but we need to export data from a specific table every Sunday at 6 pm. We cannot use the Scheduled Task adapter nor stored procedures. We decided to try to trick BizTalk using the PolledDataAvailableStatement in the WCF-SQL adapter on a receive port. Turns out it works! Here is how.

Please note that this does not work if you cannot use ambient transactions.

According to this post, you must set Use Ambient Transaction = true if you need to use a PolledDataAvailableStatement. This seems really odd to me, but after receiving feedback about this article I know that it is true.

 

The solution

  1. Create the receive location and the polling statement.
  2. Find the setting PolledDataAvailableStatement.
  3. Set it to: SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END
  4. Set the polling interval to 3600 (once an hour).
  5. Apply your settings.
  6. Set the Service Window to only enable the receive location between 6 pm and 6:30 pm.
  7. Now the receive location will only poll once a day and will only execute the polling statement on Sundays.

More information

How does this work? It is very simple really. The property PolledDataAvailableStatement (more info here) needs to return a resultset (i.e. a SELECT). The top leftmost cell of this resultset, the first one if you will, must be a number. If a positive number is returned, the polling statement will be executed; otherwise it will not. The SQL statement uses a built-in SQL function called DATEPART with the parameter value “dw”, which returns the day of the week. More information here. In SQL Server, day 1 is by default a Sunday, because Americans treat days and dates in a very awkward way. If your server is configured with Sunday as the 7th day of the week, you might need to tweak the statement. So the statement SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END returns a 1 if it is day 1 (Sunday). This means that the polling statement will only be executed on Sundays. We then set the polling interval to only execute once an hour. This, together with the service window, will make sure the statement only executes once a day (at 6 pm), as the receive location is not enabled the next hour (7 pm). You could update the SQL statement to take the hour of the day into consideration as well, but I think it is better to not even execute the statement.
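If you want to sanity-check the PolledDataAvailableStatement outside BizTalk first, a quick sketch in PowerShell (assuming Invoke-Sqlcmd from the SQL Server PowerShell tools is available; the server name is a placeholder):

# Returns 1 on Sundays (with default DATEFIRST settings) and 0 otherwise.
Invoke-Sqlcmd -ServerInstance "MyDbServer" `
    -Query "SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END AS DataAvailable"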

The downside

This is not a very reliable solution though. What if the database was unavailable that one time during the week when data is transported? Then you have to either wait for next week or manually update the PolledDataAvailableStatement to return a 1, make sure the data is transported and then reset the PolledDataAvailableStatement again.

In conclusion

It is a very particular scenario in which this solution is viable and even then it needs to be checked every week. Perhaps you should consider another solution. Thanks to Henrik for making my idea a reality and testing it out. If you want to test it out for yourself, some resources to help you can be found here: InstallApp Script


Connecting to Wi-Fi without verifying certificate

I love Windows 10! One reason is that it simplifies a lot of things that I do not want to care about. Usually you just click an icon and things just work. There is a downside to this though: sometimes you want to access the “advanced properties”, and that can be tricky. This is a reminder article for me (and perhaps someone else) on how you can alter settings for a wireless network connection in Windows 10.

The problem

The thing was this: I had just returned to the office after spending about 18 months at a client. I wanted to connect my computer to the “BYOD network” at the office. So I just clicked the icon, but got an error message, the not very informative “cannot connect to network”. This time I needed to access the advanced options, but in Windows 10 you cannot access those simply by right-clicking.

If you try to right-click any of the icons nothing happens, so you cannot get or change any information about the Wi-Fi access point. So I did not even know what was wrong. I did remember being able to connect my phone to the network, so I tried that, and I got it to work by not verifying the server certificate. So all I needed to do was to make Windows do the same. This, as it turns out, is not easy.

The solution

Basically I needed to manually create a connection to the Wi-Fi network. To do this, there are some steps. Looking at the overview you need to:

  1. Delete the existing connection from the Windows “known connections”.
  2. Create a connection to the wireless network manually, adding settings as you go.
  3. Update credentials.

Delete connection

I tried to connect to the wireless network the usual way. When I did, the network got added to the “Known Networks” even though I failed to connect. In order to add it manually you need to remove it first. This might not apply to you, but make sure it is not in the known networks list.

  1. Open the Wi-Fi settings by clicking the icon in the lower right corner and then clicking network settings.
  2. Scroll down to “Manage Wi-Fi Settings” and click it.
  3. Scroll down to manage known networks, find the network in question, click it and then click the Forget button.
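If you prefer the command line, the same “forget” step can be done with netsh from a PowerShell (or cmd) prompt; the profile name below is a placeholder:

# List the wireless profiles Windows already knows about
netsh wlan show profiles

# Remove the one you want to recreate manually
netsh wlan delete profile name="BYOD-Network"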

Manually create the connection

In order to change anything other than the standard settings, you have to set everything up yourself.

  1. Open the old Control Panel.
  2. Choose “Network and Internet”.
  3. Choose “Network and Sharing Center”.
  4. Click “Set up a new connection or network”.
  5. Choose according to the picture and click Next:
  6. In the “Network name” box you have to enter the full name of the network as it was displayed in the network list earlier. This is usually the SSID as well.
  7. The security type can be different from what is shown in the picture. Choose what is most likely for you and click Next.
  8. On the next page, choose Change connection settings. If you get a message that the network already exists, you must remove it first (see above). It cannot be changed.
  9. The following page appears.
  10. Click the Security tab.
  11. Click the Settings button indicated by the picture.
  12. Untick the box indicated in the picture if you need to remove the certificate verification.

    This is the setting that removes the certificate check that I needed. Click OK to close.
  13. Now click Advanced settings.
  14. Select “Specify authentication mode” and pick the one that applies to you.

    In my case it was “User authentication” as I do not use domain logon credentials. If you do, select “User or computer authentication”.
  15. If you selected “User authentication” you can opt to save your credentials now by clicking that button and entering your username and password. Click OK to close.

Update credentials

You have now created a new connection. Simply select it by clicking the wireless icon. If you have configured everything correctly, you can now connect. You may need to enter your credentials.


The BTSWCFServicePublishing tool

The basis for this post is mostly to serve as a reminder of the steps you need to go through to use this tool.

Usage and why

Why do I need this tool? It is, simply put, the best way to deploy WCF services to the local IIS, and even if BtsWcfServicePublishingWizard.exe can do it on the dev machine, you still have to deploy the service on the QA and prod machines. It also automates the deployment process, as all that is needed is the WcfServiceDescription file generated by the wizard.

So in short: use the BtsWcfServicePublishingWizard.exe on the dev box and the BTSWCFServicePublishing tool on all the others.

Getting the tool

The tool is strangely not a part of the BizTalk installation. You have to download and install it separately. The tool can be downloaded here. The package is a self-extracting zip; just unpack it at a location of your choice. I usually have a “Tools” folder somewhere.

Configuring the tool

I don’t know why, but Microsoft left some configuration out when publishing this download. In order to make the tool work on BizTalk 2013 and 2013 R2, you have to update the configuration to use version 4.0 of the .NET Framework, otherwise it will not be able to use the BizTalk schema DLLs as intended. The fix is simple though. Just open BtsWcfServicePublishing.exe.config from the newly extracted package and add the following settings at the top, just under the configuration tag.

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" />
</startup>

Now the tool will work properly. If you don’t do this, you will get an error along the lines of: Error publishing WCF service. Could not load file or assembly ‘file:///C:\Windows\Microsoft.Net\assembly\GAC_MSIL\ProjectName\v4.0_1.0.0.0__keyhere\ProjectName.dll’ or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

(Side note: the quotation marks in the XML above can be jumbled due to the blog’s theme. Just make sure they are the straight kind.)

Running the tool

Simple as can be. Open a command prompt, run the program and give it the WcfServiceDescription.xml file as input. The program will deploy the website/service as configured in the file.

This file is located under the App_Data/Temp folder when you use the BtsWcfServicePublishingWizard to publish the site locally.
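A minimal example of the call (both paths are placeholders for wherever you unpacked the tool and wherever you copied the description file):

# Deploys the website/service exactly as configured in the description file.
& "C:\Tools\BtsWcfServicePublishing\BtsWcfServicePublishing.exe" "C:\Deploy\WcfServiceDescription.xml"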

More information

A command-line reference can be found here.


Moving the BizTalk databases

This used to be a big issue on my old blog but, thanks to that and to the wonderful documentation and content providers at Microsoft, the old article that was criticized has since been updated. They got the number of steps down and there is clarity about the whole 32 vs 64 bit issue. More on that later.

Preparations

Before you begin you have to get a couple of things done:

  1. Get a person who knows SQL Server and has access rights to everything you need on both machines. On an enterprise level this is usually neither the BizTalk admins nor the BizTalk developers. The person needs to be a SQL Server sysadmin.
  2. Plan the outage! In our case we were lucky enough to get a full week between two testing stints. Set aside a day in which the platform is completely out.
  3. Plan the backups! Let’s say you get what I got: the backups run once a day at 3 am. Therefore nothing may enter or leave the platform after 3 am. You need that backup to be 100% compatible with a fallback (retreat?) scenario. More info on backing up your databases can be found here.
  4. If you are using BAM there might be activities that started before the database move, and they need to be completed manually. There is a script for that.
  5. Get a text file and paste in the names of the source and destination servers and everything else you might find useful.
  6. Read through the article by Microsoft just to see what you are expected to do, and what you might need to ignore.

Custom stuff

Are there any custom databases or custom components that might be affected by the database move? If you have custom databases, you might want to move them as well and if you have custom components for posting to BAM or some added functionality, make sure that they do not use hard coded connection strings or simply update anything pointing to the old database server.

32 vs 64 bit how?

Imagine you are on a 64-bit machine. That should not be hard to do if you have any contact with BizTalk. If you run the cmd tool by using Run + “cmd”, you get the 64-bit version, BUT its path points to the “System32” folder. To make things even more confusing, the 32-bit version of the cmd tool is in the SysWOW64 folder. Many people like me just assumed that the message “make sure you use the 64-bit version” meant running the one in the SysWOW64 folder. Which was wrong, which caused all sorts of issues, which prompted me to write the original post. That is now resolved. So make sure you are using the correct version. If you by any chance are running BizTalk on a 32-bit machine, you do not need to move the databases. You need to upgrade, my friend! The article by Microsoft is now 99% there and you really should follow that one to the letter, except for two things.

Wrong config if you use EDI

If you plan to move the EDI-functionality you need to add a node to the SampleUpdateInfo.xml file. You are supposed to add the following to the “other databases” node.

<Database Name="MsEDIAS2" oldDBName="old dta db name" oldDBServer="old dta server" newDBName="new dta db name" newDBServer="new dta server" />

The thing is that this will not work with the script for updating the registry. Open the file UpdateRegistry.vbs and go to line 131. It says to look for an attribute value of EDI, and only if it finds that will the EDI settings be updated. However, as the guide says, your SampleUpdateInfo.xml uses the value MsEDIAS2 instead. Update the line in the script to:

set node = configObj.selectSingleNode("/UpdateConfiguration/OtherDatabases/Database[@Name='MsEDIAS2']")

Update SQL server scripts by hand

It is simple to move the BizTalk SQL Server jobs but there are some addendums to the article.

  1. BizTalk Backup has a database server name in it.
  2. Operations_OperateOnInstances_OnMaster_BizTalkMsgBoxDb also has a server name in it.
  3. Rules_Database_Cleanup_BizTalkRuleEngineDb.sql: run this when/if the Rule Engine DB has been created on the new server.
  4. TrackedMessages_Copy_BizTalkMsgBoxDb.sql also has a server name in it.

BAM databases and other BAM components

If you are using BAM in your environment, good for you. Be sure to follow this article to the letter after moving the databases and running the scripts. Everything is there, but I would just like to point out some things you need to keep in mind.

  1. Do not forget to move the SSIS packages for BAM DM and BAM AN. It is simple and can be done using SQL Server Management Studio.
  2. Do not forget that you might need to update all your BAM components, for example if you are running the BAM Portal.

Closing words

Remember: This is not really all that hard and if you practice you can move an entire production environment in under an hour.


Why we fail: An architect’s journey to the private cloud

This is a re-post from my old blog, and the reason I moved it is that I often come back to this talk that was given at TechEd 2012 in Amsterdam. Alex Jauch has since moved on from NetApp, but he still pursues perfection as ever. This talk was a life and game changer for me and I wanted to update it.

Here it is

The session was presented by Alex Jauch, currently at NetApp, but he used to work for Microsoft. Actually, he was behind the precursor that became the MCA. I had never even heard of this guy before, and I would say that is a shame. I have now, though. The heading for the session seems ominous and deterministic, but given my personal experience I would say that it is not far from the truth to simply assume that “cloudification” will fail. Incidentally it is also the title of Alex’s book 🙂 Alex (or should I say Mr. Jauch?) started the session by clearly stating that he was about to say things that not all of us would agree upon. He would also try to upset some people! Bold and funny in my opinion.

Definition

The, or even a, definition of what cloud computing really is can be hard to come by, and one definition might differ a lot from the next. (Addendum: this is still the case.) Alex presented the definition made by NIST. He pointed to the fact that NIST is a governmental agency, and these are notorious for not agreeing on anything. The fact that they have agreed on a definition of cloud computing gives it some credibility. According to them there are five essential characteristics that together form a cloud. If any of these are left out, you are not a true cloud provider. They are:

 

On-demand self-service. A consumer should be able to change provisioning in the cloud by him/herself.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms.

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.

Measured service. Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

So if your cloud does not have a portal for adding resources in a way that the consumer can do it, you do not have a cloud service.

The full definition (2 pages) can be found here.

So why do we fail?


I say that it comes down to this comparative table, which sums up the two different mind-sets that form the demarcation between traditional IT and cloud IT. In some ways it is also the same difference between operations and developers.

Traditional IT | Customer Centric IT (Cloud)
Sets IT standards | Supports business requirements
Focus on operations excellence | Focus on customer satisfaction
Engineering is key skillset | Consulting is key skillset
Sets policy | Seeks input
Focus on large projects | Focus on smaller projects
Organized by technology | Organized by customer
Technology focus | Business value focus
Delivers most projects in house | Brokers external vendors as needed

It is not around technology that we fail. It is in how we use it, and in the attitudes of those who implement the technology. When we try to run a cloud service “as we always have”, in a traditional manner, that is when we fail. In order to run a successful cloud shop, we must change focus and really (and he means really) focus on the customer. A very telling quote from the session was around the focus on operations vs. focus on customer.

“‘We had 100% uptime last month.’ What does that mean if the customer still has not managed to sell anything?”

So if someone is telling you “We sell cloud”, at least ask them about the five points from the NIST definition. If you (or your organization) are thinking about delivering cloud capacity: good luck.