SQL Server Edition Upgrade might fail

What happened?

A while back I tasked myself with automating a SQL Server edition upgrade using PowerShell.

I ran into some problems. I made the upgrade as silent (/s) as possible, so all I got was a very rudimentary progress bar. The upgrade seemed to take a very long time, and after two hours of waiting I decided that it had "hung". I repeated the upgrade, but this time I kept an eye on the log file.

What was wrong?

Looking into the log file I found that the thing that seemed to hang was this row:

Waiting for nt event ‘Global\sqlserverRecComplete’ to be created

How to solve it?

Searching online, I found several possible causes, and one (unsupported) option stood out: simply skip the rules check.

If the upgrade fails in this way, simply add the following to your PowerShell string:

/SkipRules=Engine_SqlEngineHealthCheck
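
For context, this is roughly what the full call could look like from PowerShell. This is only a sketch: the setup path, instance name and product key are placeholders, and the exact set of switches depends on your environment and SQL Server version.

# Hypothetical example of a silent edition upgrade that skips the health-check rule.
# The setup path, instance name and product key (/PID) below are placeholders.
$setupArgs = @(
    "/Q",                                        # fully silent, no UI
    "/ACTION=EditionUpgrade",
    "/INSTANCENAME=MSSQLSERVER",
    "/PID=XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",        # key for the target edition (placeholder)
    "/IACCEPTSQLSERVERLICENSETERMS",
    "/SKIPRULES=Engine_SqlEngineHealthCheck"     # the unsupported workaround from this post
)
& "D:\SQLServerSetup\setup.exe" @setupArgs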

The implications

Some images on Azure have SQL Server Evaluation edition installed by default. You usually want to upgrade these to Developer edition, using the built-in edition upgrade functionality.

If you run into this "hang" issue, you have to upgrade SQL Server without running the SqlEngineHealthCheck rule.
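
If you want to verify which edition an image is actually running before and after the upgrade, a quick check is something like the following (this assumes the SqlServer PowerShell module with Invoke-Sqlcmd; the instance name is a placeholder):

# Returns the edition string, e.g. an Evaluation or Developer edition.
Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT SERVERPROPERTY('Edition') AS Edition"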

How to only poll data on specific weekdays using the WCF-SQL adapter

There are a lot of solutions to this particular problem. The requirement is to poll data from a database only on Sundays. This might be solved using a stored procedure that only returns data on Sundays. It might also be solved by scheduling the poll for Sundays using the famous Scheduled Task adapter. You could also write a custom pipeline component that rejects data on all days but Sundays. Your scenario might be very well suited to one of these solutions; the scenario presented by my colleague Henrik Wallenberg did not fit any of them.

The scenario

A database is continuously updated throughout the week, but we need to export data from a specific table every Sunday at 6 pm. We could not use the Scheduled Task adapter or stored procedures. We decided to try to trick BizTalk using the PolledDataAvailableStatement in the WCF-SQL adapter on a receive port. It turns out it works! Here is how.

Please note that this does not work if you cannot use ambient transactions.

According to this post, you must set Use Ambient Transaction = true if you need to use a PolledDataAvailableStatement. This seems really odd to me, but after receiving feedback about this article I know that it is true.

 

The solution

  1. Create the receive location and polling statement.
  2. Find the setting PolledDataAvailableStatement.
  3. Set it to: SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END
  4. Set the polling interval to 3600 seconds (once an hour).
  5. Apply your settings.
  6. Set the Service Window to only enable the receive location between 6 pm and 6:30 pm.
  7. Now the receive location will only poll once a day and only execute the polling statement on Sundays.

More information

How does this work? It is very simple really. The property PolledDataAvailableStatement (more info here) needs to return a result set (i.e. a SELECT). The top leftmost cell of this result set, the first one if you will, must be a number. If a positive number is returned, the polling statement will be executed; otherwise it will not. The SQL statement uses a built-in function called DATEPART with the parameter value "dw", which returns the day of the week (more information here). Day 1 is by default a Sunday in SQL Server, because Americans treat days and dates in a very awkward way. If your server's settings make Sunday the 7th day of the week, you might have to tweak the statement accordingly. So the statement SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END returns a 1 if it is day 1 (Sunday). This means that the polling statement will only be executed on Sundays. We then set the polling interval to only execute once an hour. This, together with the service window, makes sure the statement only executes once a day (at 6 pm), since the receive location is no longer enabled the next hour (7 pm). You could update the SQL statement to take the hour of the day into consideration as well, but I think it is better to not even execute the statement.
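
If you want to sanity-check the statement outside of BizTalk before wiring it into the receive location, something like this works from PowerShell. The server and database names are placeholders, and it assumes the SqlServer module with Invoke-Sqlcmd is available:

# Run the availability statement manually and see what the adapter would see today.
$query = "SELECT CASE WHEN DATEPART(DW, GETDATE()) = 1 THEN 1 ELSE 0 END AS DataAvailable"
$result = Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "SourceDb" -Query $query
if ($result.DataAvailable -eq 1) {
    "Day 1 (Sunday) - the polling statement would be executed."
} else {
    "Not Sunday - the adapter would skip the polling statement."
}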

The downside

This is not a very reliable solution, though. What if the database is unavailable at the one time during the week when data is transported? Then you have to either wait for next week, or manually update the PolledDataAvailableStatement to return a 1, make sure the data is transported, and then reset the PolledDataAvailableStatement again.

In conclusion

It is a very particular scenario in which this solution is viable and even then it needs to be checked every week. Perhaps you should consider another solution. Thanks to Henrik for making my idea a reality and testing it out. If you want to test it out for yourself, some resources to help you can be found here: InstallApp Script

Connecting to Wi-Fi without verifying certificate

I love Windows 10! One reason is that it simplifies a lot of things that I do not want to care about. Usually you just click an icon and things just work. There is a downside to this though: sometimes you want to access the "advanced properties", and that can be tricky. This is a reminder article for me (and perhaps someone else) on how you can alter the settings for a wireless network connection in Windows 10.

The problem

The thing was this: I had just returned to the office after spending about 18 months at a client. I wanted to connect my computer to the "BYOD network" at the office. So I just clicked the icon, but got an error message, the not very informative "cannot connect to network". This time I needed to access the advanced options, but in Windows 10 you cannot get to those simply by right-clicking.

If you try to right-click any of the icons nothing happens, so you cannot get or change any information about the Wi-Fi access point. So I did not even know what was wrong. I did remember being able to connect my phone to the network, so I tried that, and got it to work by not verifying the server certificate. So all I needed to do was make Windows do the same. This, as it turns out, is not easy.

The solution

Basically, I needed to manually create a connection to the Wi-Fi network. There are a few steps to this. At an overview level you need to:

  1. Delete the existing connection from the Windows “known connections”.
  2. Create a connection to the wireless network manually, adding settings as you go.
  3. Update the credentials.

Delete connection

I had tried to connect to the wireless network the usual way, and when I did, the network got added to "Known Networks" even though I failed to connect. In order to add it manually you need to remove it first. This might not apply to you, but make sure it is not in the known networks list (a command-line alternative is shown after the steps below).

  1. Open the Wi-Fi settings using the icon in the lower right corner and clicking network settings.
  2. Scroll down to “Manage Wi-Fi Settings” and click it.
  3. Scroll down to manage known networks and find the network in question, click it and then click the Forget button.
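
If you prefer the command line, the same cleanup can be done with netsh from a PowerShell prompt; the profile name below is a placeholder for your network's name:

# List the wireless profiles Windows already knows about.
netsh wlan show profiles
# Forget the one you want to recreate manually (the name is a placeholder).
netsh wlan delete profile name="YourOfficeByodSsid"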

Manually create the connection

In order to change anything other than the standard settings, you have to set them yourself.

  1. Open the old Control Panel.
  2. Choose "Network and Internet".
  3. Choose "Network and Sharing Center".
  4. Click "Set up a new connection or network".
  5. Choose according to the picture and click Next:
  6. In the “Network name”-box you have to enter the full name of the network as it was displayed in the network list earlier. This is usually the SSID as well.
  7. The security type can be different from what is shown in the picture. Choose what is most likely for you and click Next.
  8. On the next page, choose Change connection settings. If you get a message that the network already exists you must remove it first (see above). It cannot be changed.
  9. The following page appears
  10. Click the Security tab
  11. Click the Settings button indicated by the picture.
  12. Untick the box indicated in the picture, if you need to remove the certificate verification.

    This is the setting that removes the certificate check that I needed. Click OK to close.
  13. Now click Advanced settings.
  14. Select "Specify authentication mode" and pick the one that applies to you.

    In my case it was “User authentication” as I do not use domain logon credentials. If you do, select the “User or computer authentication”.
  15. If you selected “User Authentication” you can opt to save your credentials now by clicking that button and entering your username and password. Click OK to close.

Update credentials

You have now created a new connection. Simply select it by clicking the wireless icon. If you have configured everything correctly, you can now connect; you may be prompted for your credentials.

The BTSWCFServicePublishing tool

The basis for this post is mostly to serve as a reminder of the steps you need to go through to use this tool.

Usage and why

Why do I need this tool? It is, simply put, the best way to deploy WCF services to the local IIS, and even though BtsWcfServicePublishingWizard.exe can do it on the dev machine, you still have to deploy the service on the QA and prod machines. It also automates the deployment process, as all that is needed is the WcfServiceDescription file generated by the wizard.

So in short: use the BtsWcfServicePublishingWizard.exe on the dev box and the BTSWCFServicePublishing tool on all the others.

Getting the tool

The tool is, strangely, not part of the BizTalk installation. You have to download and install it separately. The tool can be downloaded here. The package is a self-extracting zip; just unpack it at a location of your choice. I usually have a "Tools" folder somewhere.

Configuring the tool

I don't know why, but Microsoft left some configuration out when publishing this download. In order to make the tool work on BizTalk 2013 and 2013 R2, you have to update the configuration to use version 4.0 of the .NET Framework, otherwise it will not be able to load the BizTalk schema DLLs as intended. The fix is simple though. Just open BtsWcfServicePublishing.exe.config from the newly extracted package and add the following settings at the top, just under the configuration tag.

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" />
</startup>

Now the tool will work properly. If you don't do this, you will get this error: Error publishing WCF service. Could not load file or assembly 'file:///C:\Windows\Microsoft.Net\assembly\GAC_MSIL\ProjectName\v4.0_1.0.0.0__keyhere\ProjectName.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

(Side note: the "-signs in the XML above can be jumbled due to the blog's theme. Just make sure they are the straight kind.)

Running the tool

It is as simple as can be. Open a cmd prompt, run the program and give it the WcfServiceDescription.xml file as input. The program will deploy the website/service as configured in the file.

This file is located under the App_Data/Temp folder when you use the BtsWcfServicePublishingWizard to publish the site locally.
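
A minimal example of such a call, run from PowerShell; both paths are placeholders for wherever you unpacked the tool and wherever you keep the description file:

# Hypothetical invocation - both paths are placeholders for your environment.
& "C:\Tools\BtsWcfServicePublishing\BtsWcfServicePublishing.exe" "C:\Deploy\WcfServiceDescription.xml"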

More information

A command-line reference can be found here.

Moving the BizTalk databases

This used to be a big issue at my old blog, but thanks to that, and also to the wonderful documentation and content providers at Microsoft, the old article that was criticized has been updated. They got the number of steps down, and there is now clarity about the whole 32 vs 64 bit issue. More on that later.

Preparations

Before you begin, you have to get a couple of things done.

  1. Get a person who knows SQL Server and has access rights to everything you need on both machines. On an enterprise level this is usually not the BizTalk admins, nor the BizTalk developers. The person needs to be a SQL Server sysadmin.
  2. Plan the outage! In our case we were lucky enough to get a full week between two testing stints. Set aside a day in which the platform is completely out.
  3. Plan the backups! Let's say you get what I got: the backups run once a day at 3 am. Therefore nothing may enter or leave the platform after 3 am. You need that backup to be 100% compatible with a fallback (retreat?) scenario. More info on backing up your databases can be found here.
  4. If you are using BAM, there might be activities that start before the database move, and they need to be completed manually. There is a script for that.
  5. Get a text file and paste in the names of the source and destination servers and anything else you might find useful.
  6. Read through the article by Microsoft just to see what you are expected to do, and what you might need to ignore.

Custom stuff

Are there any custom databases or custom components that might be affected by the database move? If you have custom databases, you might want to move them as well, and if you have custom components for posting to BAM or some added functionality, make sure they do not use hard-coded connection strings, or simply update anything pointing to the old database server.

32 vs 64 bit how?

Imagine you are on a 64-bit machine. That should not be hard to do if you have any contact with BizTalk. If you start the cmd tool by using Run + "cmd", you get the 64-bit version, BUT its path points to the "System32" folder. To make things even more confusing, the 32-bit version of the cmd tool is in the SysWOW64 folder. Many people, like me, just assumed that the message "make sure you use the 64-bit version" meant running the one in the SysWOW64 folder. That was wrong, which caused all sorts of issues, which prompted me to write the original post. That is now resolved. So make sure you are using the correct version. If you by any chance are running BizTalk on a 32-bit machine, you do not need to move the databases. You need to upgrade, my friend! The article by Microsoft is now 99% there and you really should follow it to the letter, except for two things.
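
If you are ever unsure which flavor of shell you have ended up in, echoing %PROCESSOR_ARCHITECTURE% in cmd shows x86 when you are in the 32-bit one, and a quick check from a PowerShell prompt started in that same window is:

# True in a 64-bit process, False in the 32-bit (SysWOW64) one.
[Environment]::Is64BitProcess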

Wrong config if you use EDI

If you plan to move the EDI functionality, you need to add a node to the SampleUpdateInfo.xml file. You are supposed to add the following to the "other databases" node.

<Database Name="MsEDIAS2" oldDBName="old dta db name" oldDBServer="old dta server" newDBName="new dta db name" newDBServer="new dta server" />

The thing is that this will not work with the script for updating the registry. Open the file UpdateRegistry.vbs and go to line 131. It looks for a Database node with the name EDI, and if it finds one, the EDI settings will be updated. However, as shown above, the guide tells you to name the node MsEDIAS2 in your SampleUpdateInfo.xml. Update the line in the script to:

set node = configObj.selectSingleNode("/UpdateConfiguration/OtherDatabases/Database[@Name='MsEDIAS2']")

Update SQL server scripts by hand

It is simple to move the BizTalk SQL Server jobs, but there are a few additions to the article.

  1. The BizTalk Backup job has a database server name in it.
  2. Operations_OperateOnInstances_OnMaster_BizTalkMsgBoxDb also has a server name in it.
  3. Rules_Database_Cleanup_BizTalkRuleEngineDb.sql: run this when/if the rule engine DB has been created on the new server.
  4. TrackedMessages_Copy_BizTalkMsgBoxDb.sql also has a server name in it.

BAM databases and other BAM components

If you are using BAM in your environment, good for you. Be sure to follow this article to the letter after moving the databases and running the scripts. Everything is there, but I would just like to point out some things you need to keep in mind.

  1. Do not forget to move the SSIS packages for BAM DM and BAM AN. It is simple and can be done using SQL Server Management Studio.
  2. Do not forget that you might need to update all your BAM components, for example if you are running the BAM portal.

Closing words

Remember: This is not really all that hard and if you practice you can move an entire production environment in under an hour.

Why we fail: An architect’s journey to the private cloud

This is a re-post from my old blog, and the reason I moved it is that I often come back to this talk, which was given at TechEd 2012 in Amsterdam. Alex Jauch has since moved on from NetApp, but he still pursues perfection as ever. This talk was a life- and game-changer for me and I wanted to update it.

Here it is

The session was presented by Alex Jauch, at the time at NetApp, though he used to work for Microsoft. Actually, he was behind the precursor that became the MCA. I had never even heard of this guy before, and I would say that is a shame. I have now, though. The heading for the session seems ominous and deterministic, but given my personal experience I would say it is not far from the truth to simply assume that "cloudification" will fail. Incidentally, it is also the title of Alex's book 🙂 Alex (or should I say Mr. Jauch?) started the session by clearly stating that he was about to say things that not all of us would agree with. He would also try to upset some people! Bold and funny, in my opinion.

Definition

The, or even a, definition of what cloud computing really is can be hard to come by, and one definition might differ a lot from the next. (Addendum: this is still the case.) Alex presented the definition made by NIST. He pointed to the fact that NIST is a governmental agency, and these are notorious for not agreeing on anything; the fact that they have agreed on a definition of cloud computing gives it some credibility. According to them there are five essential characteristics that together form a cloud. If any of these are left out, you are not a true cloud provider. They are:

 

On-demand self-service. A consumer should be able to change provisioning in the cloud by him/herself.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms.

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model.

Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.

Measured service. Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

So if your cloud does not have a portal for adding resources in a way that the consumer can do it themselves, you do not have a cloud service.

The full definition (2 pages) can be found here.

So why do we fail?


I say that it comes down to this comparative table, which sums up the two different mind-sets that form the demarcation between traditional IT and cloud IT. In some ways it is also the same difference as between operations and developers.

Traditional IT | Customer Centric IT (Cloud)
Sets IT standards | Supports business requirements
Focus on operations excellence | Focus on customer satisfaction
Engineering is key skillset | Consulting is key skillset
Sets policy | Seeks input
Focus on large projects | Focus on smaller projects
Organized by technology | Organized by customer
Technology focus | Business value focus
Delivers most projects in house | Brokers external vendors as needed

It is not around technology that we fail. It is in how we use it, and in the attitudes of those who implement the technology. When we try to run a cloud service as "we always have", in a traditional manner, that is when we fail. In order to run a successful cloud shop, we must change focus and really (and he means really) focus on the customer. A very telling quote from the session was around the focus on operations vs. the focus on the customer.

"'We had 100% uptime last month.' What does that mean if the customer still has not managed to sell anything?"

So if someone tells you "We sell cloud", at least ask them about the five points from the NIST definition. If you (or your organization) are thinking about delivering cloud capacity: good luck.

Leaving CGI

As some of you might know, I am leaving CGI Sweden for Enfo Zystems (Sweden). Some of you have asked me why, and there are a million little reasons.

CGI (back when it was WM-data) took me in and let me grow for 8 years. When I started I was a junior developer with an interest in BizTalk (then 2004) and I worked my way up to Chief BizTalk Architect. The company gave me a lot of opportunities and support in ways I think few other consulting companies can. So thank you CGI for the time we had together.

However, much like in a relationship, we have started to drift apart over the last year or so, and after a lot of little annoyances I decided it was time to leave. People that have worked with me know what these annoyances were and I will not list them here, but I would say that the office and its location were at the top of the list. The BizTalk integration team at CGI is top notch and I can recommend every one of them for any BizTalk job.

I will now move to Enfo Zystems, a company known for its passionate and competent personnel and for focusing heavily on just integration. I will work as a Senior Integration Specialist and focus more on Azure than before, but I will also continue to support the BizTalk community in every way possible.

The next step in my career starts next week (March 30).

My upcoming talk at SWEBUG

Tomorrow (and on Tuesday) there is another Swebug event, and I will make my second appearance at it. Last time (August 2012) I spoke about Tips and Tricks for BizTalk.

This time I am trying to be a bit more provocative: I will talk about "The fall of the BizTalk Architect" and how he or she might not be as important as we used to think. I am hoping for a productive debate, but also that in the end everyone will agree with me 😉

The thing is that I feel we have been taking ourselves too seriously for way too long. The fact that "integration is hard" is becoming less and less of an excuse and more of a barrier. Other developers around us have trouble seeing the light, and we compete in a faster-changing market. BizTalk has not only a steep learning curve, but also a steep implementation curve. In order to actually get some bang for the buck you need about six servers, preferably physical, a lot of users and access rights, and about three weeks. That is a commitment for a company, so we need to make the most of that commitment.

I will talk about putting things in perspective, how others see us, and what kind of architecture might keep us riding the integration wave up into the cloud.

I would love for you to attend: At least to tell me I am wrong.

2015-02-23 Göteborg, Acando's office, 18:00. Tickets.

2015-02-24 Stockholm, Akalla, 18:00. Tickets.

Flat files and repeating tag identifiers

This particular post might not be very "edgy", but it is rather something to get me going, as well as a personal "how the h*ll did I do that" reference for the future.

The problem

A client presented us with a rather standard flat file. It was about 20 rows long, the rows were separated by the ever-usual CR LF, and they contained tag identifiers.

The identifiers were numbered 10, 20, 30 and 40; they could appear in any order, might or might not be present, and the records were positional.

Seems easy, right? Well it took me some time and there are a couple of gotchas along the way so I thought I would share them.

The file

Here is a subsection of the file. Notice the repeating identifiers and the difference in length between identical tags.

10000000000004083006019800324E50000John Doe
2500000000000000433       Q00086Jane doe
3000000000000008448H00001Joe Doe
4000000000000008448H00001Hanna Montana
10000000000004083006019800324E50000

 

The solution

There were three issues that needed to be solved, and again, I am not presenting anything new here:

  1. Tag identifiers in positional records.
  2. Repeating tag identifiers.
  3. The last column could be empty or not padded.

I started by adding a flat file schema, adding a root, and setting the "Child Delimiter Type" to "Hexadecimal" and the "Child Delimiter" to "0xD 0xA". Now for the interesting part.

Tag identifiers in positional records

Tag identifiers are, simply put, just that: some kind of text at the start of a record that shows what kind of record it is. It is a very useful feature and is often used in mainframe scenarios.

The tag identifiers in the file above are the first two positions. So I added a new child record and set the property “Tag identifier” to “10”, and the “Structure” to “Positional”.


I then proceeded to add all the different columns based on their relative length and position.

Moving on, I did the same for the other tag types as well, and ended up with a schema that looks quite simple and straightforward.

 


This is when I ran into my first error: the tag is not removed from the row of data when it is used. So all the other positions were offset by two characters, and the parser could not find the columns; or rather, the rows were too long.

In order to make it work you have to offset the first field by the length of your tag identifier, or add a column at the front to handle the identifier. I opted for the first one and updated the property “Positional Offset” to “2” for the first field in every record.

Repeating identifiers

You may or may not know this, but an XSD actually implies a tag order when XML data defined by it is written. That is why you get an error when you skip a tag that is not "MinOccurs: 0".

So how do you handle the fact that some records might repeat later in a file, like the one in question? The answer is to put all the "root children" in a Choice node. So I right-clicked the "root" node and selected to add a "Choice Group". I set the property "Max Occurs" to "Unbounded" for the choice node, as well as for all the child records. For those I also set "Min Occurs" to "0".

Lastly I added all the child nodes to the Choice Node and now the row types (tags) may be in any order and may occur zero or unbounded times.

The last column may be empty

One very popular way to export data from a mainframe is to make all the rows the same length. One very popular length is 80 characters. However, many exporters choose to end the rows when the data ends. So rather than putting in enough white space to reach a total length of 80, a CR LF is inserted and the row is ended.

You might think that simply setting the tag to "Min Occurs: 0" would solve this, but you, like me, would be wrong. The concept is called "Early Termination", and once you know that, it is easy.

In order to allow early termination, which is set to false by default, I had to enable it. The property was located at “Schema” level and is called “Allow Early Termination”. Once I set that to “True”, everything worked fine.

Note that if you do not set “Min Occurs: 0”, the tag will be created but set to null if the row terminates early.

Data: "…Mikael" = <NAME>Mikael</NAME>
 
Data: "…" = <NAME />

In conclusion

The fairly standard way of exporting data from a mainframe to a text file should, from now on, not really be a problem.

By the way, the files can be found here: RepeatingTagsFF