Thursday, 29 March 2018

Selective branch indexing with TFS and the Search Server


Team Foundation Server’s Search Server can be tough: it works really well, but it takes a certain degree of planning, otherwise it can easily sink your instance’s performance.

I’ve mentioned in the past that there are scripts from the Product Team that help with the daily administration of the server; IMHO they are still the number one choice from an admin’s point of view.

But it’s not all command-line. For example, if you look into the Version Control settings of your Team Project, you will discover that each Git repository has a nice setting for selective indexing.

This makes a lot of sense: you can index only the common branches and make rational use of your Elasticsearch instance.

There is an excellent reason for that: you don’t want *all of your branches* to be searchable. They would contain a ridiculous number of duplicates, hence indexing them would just waste resources.
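If you want a quick look at how much the code indices actually weigh, Elasticsearch’s own _cat API is enough. A minimal sketch in PowerShell, assuming the Search service listens on the default local port 9200 (adjust the URI, and add credentials if your deployment requires them):

# List every index with its document count and on-disk size, so an
# oversized code-search index stands out at a glance.
Invoke-RestMethod -Uri 'http://localhost:9200/_cat/indices?v'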



Wednesday, 28 March 2018

Something strange with SQL Server AlwaysOn Automatic Seeding and TFS

I ran into this strange issue the other day in my homelab, and it is worth sharing: I was trying to set up a highly available Team Foundation Server data tier with AlwaysOn Automatic Seeding instead of the usual backup and restore process, but the Tfs_Configuration database (for some reason) was not collaborating.

Automatic seeding of availability database 'Tfs_Configuration' in availability group 'TFSAG' failed with an unrecoverable error. Correct the problem, then issue an ALTER AVAILABILITY GROUP command to set SEEDING_MODE = AUTOMATIC on the replica to restart seeding.

We are talking about a plain, empty instance, so... it was a bit of a needle in a haystack!

Let's take a step back: SQL Server AlwaysOn Automatic Seeding is a feature introduced in SQL Server 2016 that synchronises a database across the replicas of an Availability Group without leveraging backup and restore. This is a life saver in certain situations, as it spares you the computational load of a backup and of a restore that might take a long time.

There are some constraints - above all, the instances making up the Availability Group must be *identical*. Yes, identical in everything, including the paths used by SQL Server. It is a very cloud-first approach at the end of the day, where you have identical, commodity resources at your disposal and your actual target is to provide a frictionless experience to whoever is going to consume the service you offer.
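For reference, this is roughly how seeding is switched on - a sketch run from PowerShell with the SqlServer module; SQLNODE1 and SQLNODE2 are hypothetical instance names, TFSAG is the group from this post:

# On the primary: flip the secondary replica to automatic seeding.
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query "ALTER AVAILABILITY GROUP [TFSAG] MODIFY REPLICA ON 'SQLNODE2' WITH (SEEDING_MODE = AUTOMATIC);"

# On the secondary: allow the Availability Group to create the seeded database.
Invoke-Sqlcmd -ServerInstance 'SQLNODE2' -Query "ALTER AVAILABILITY GROUP [TFSAG] GRANT CREATE ANY DATABASE;"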

So cool, right? Still, for some reason, my Configuration database didn't stream from the Primary to the Secondary replica. I checked the DMVs, and I got an obscure failure_state 1200 error - Internal Error.
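The DMV in question is sys.dm_hadr_automatic_seeding; something along these lines surfaces the failure state of each seeding attempt (the instance name is a placeholder):

# failure_state_desc is where the unhelpful 'Internal Error' shows up.
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query @"
SELECT ag.name AS availability_group,
       db.database_name,
       s.current_state,
       s.failure_state,
       s.failure_state_desc,
       s.number_of_attempts
FROM sys.dm_hadr_automatic_seeding s
JOIN sys.availability_groups ag ON ag.group_id = s.ag_id
JOIN sys.availability_databases_cluster db ON db.group_database_id = s.ag_db_id;
"@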

The first thing I did (as the instances really are identical - they were provisioned the day before) was to check that I was on the latest CU, as there are fixes available for Automatic Seeding. Check.
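Checking the patch level is a one-liner, for what it is worth:

# ProductVersion and ProductLevel tell you the build and servicing level.
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query "SELECT SERVERPROPERTY('ProductVersion') AS build, SERVERPROPERTY('ProductLevel') AS level;"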

I had a look at the script used by the wizard to add the databases to the Availability Group - nothing too fancy, to be fair. Reading around, it seems that there is still a chance things might suddenly break, so I took another path.

Yes, a Full Backup (taken with the TFS Administration Console, no less) was supposed to be enough to enable Automatic Seeding, as it starts the recovery chain. Would another Transaction Log backup hurt? I don't think so.

After taking the faulty database off the Availability Group, I ran the speedy Transaction Log backup and added the database back into the Availability Group with the script. Guess what: it worked! And my new TFS instance is up and running.
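Reconstructed as a sketch (the instance name and backup path are placeholders), the sequence boiled down to:

# 1. Take the faulty database out of the Availability Group (on the primary).
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query "ALTER AVAILABILITY GROUP [TFSAG] REMOVE DATABASE [Tfs_Configuration];"

# 2. Take a quick transaction log backup.
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query "BACKUP LOG [Tfs_Configuration] TO DISK = N'D:\Backup\Tfs_Configuration.trn';"

# 3. Add the database back - automatic seeding takes over from here.
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query "ALTER AVAILABILITY GROUP [TFSAG] ADD DATABASE [Tfs_Configuration];"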

Of course this is totally transparent for TFS, as usual: the configuration wizard is smart enough to set the right connection string from the beginning. But you still need to make sure the Availability Group is set up correctly, otherwise at the first failover you will be left with nothing.

Wednesday, 14 March 2018

How Team Foundation Server saves you from a potential mess with IIS and SSL


This is an example of how TFS is robust enough to prevent you from making silly and potentially costly (in terms of time) mistakes.
Let’s say you are configuring a new instance, and you just got your SSL certificate installed on the machine. So you select the HTTPS and HTTP option in the configuration settings, and you pick your certificate. And you get an error:

Clicking on that link creates the correct bindings for that certificate. Fair enough.
But the Public URL is not what you like, so you change it to something else. And you go ahead. The result?

The readiness checks prevent you from doing this. They also check for SNI validity, among other things - something that comes in handy when you deal with Chrome.
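If you are curious about what creating those bindings boils down to, here is a rough PowerShell equivalent - the site name, host name and certificate lookup are hypothetical, so adapt them to your environment:

Import-Module WebAdministration

$site = 'Team Foundation Server'   # check the actual site name in IIS Manager
$hostName = 'tfs.contoso.local'

# Create an https binding with SNI enabled (SslFlags 1)...
New-WebBinding -Name $site -Protocol https -Port 443 -HostHeader $hostName -SslFlags 1

# ...and attach the certificate to it.
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object Subject -like "*$hostName*" | Select-Object -First 1
(Get-WebBinding -Name $site -Protocol https -HostHeader $hostName).AddSslCertificate($cert.Thumbprint, 'My')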



Thursday, 1 March 2018

Not all is lost if your cube gets corrupted…

I ran into this very odd situation yesterday with the Reporting capability of my production TFS instance – I realised the Incremental Analysis Database Sync job and the Optimize Databases job were running for hours!

I stopped the Incremental Analysis, and the Optimize Databases job completed successfully. Fine.
But – for whatever reason – my SSAS cube got corrupted! I couldn’t even connect to the Analysis Engine with SSMS. I also found errors in the Event Viewer pointing at a corrupted cube:

Errors in the metadata manager. An error occurred when loading the 'Team System' cube, from the file, '\\?\<path>\Tfs_Analysis.0.db\Team System.3330.cub.xml'.
Errors in the metadata manager. An error occurred when loading the 'Test Configuration' dimension, from the file, '\\?\<path>\Tfs_Analysis.0.db\Configuration.254.dim.xml'.

Now, what to do? It looked like a full-blown rebuild was in order, and that is a costly operation: the rebuild drops both the data warehouse and the SSAS cube, rebuilds the warehouse with data from the TFS databases, and then rebuilds the cube.
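For reference, that full rebuild is normally kicked off with TFSConfig on the application tier; a sketch (the Tools path varies with your TFS version):

# Run from an elevated prompt on the application tier.
# /all drops and rebuilds both the warehouse and the cube;
# /analysisServices would rebuild only the Analysis Services database.
cd 'C:\Program Files\Microsoft Team Foundation Server 2018\Tools'
.\TFSConfig.exe RebuildWarehouse /all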

It is not like being without source code or Work Items, but still… it is an outage, and a painful one to swallow.

Now, in this case the data warehouse was perfectly healthy – the report showed an update age of just a few minutes. So all the raw data was fine, and all I needed to do was rebuild how you look at that data.

The SSAS cube is just a way of looking at the data warehouse. If your warehouse is fine, just wait for the next scheduled Incremental Analysis Database Sync job to run: it will recreate the cube (turning that run into a Full Analysis Database Sync rather than an Incremental one) without going through the full rebuild.

Why didn’t I process this myself by using the WarehouseControlService? Simply because the less you mess with the scheduled jobs, the better. Hiccups happen, but the system is robust enough to withstand such problems and pretty much self-heal once the stumbling block is removed.
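For completeness, this is roughly how you would poke that service manually, as a sketch - the server name is a placeholder, and ProcessAnalysisDatabase is the documented way of requesting a cube processing run:

# Connect to the warehouse control web service on the application tier.
$uri = 'http://tfsserver:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx'
$svc = New-WebServiceProxy -Uri $uri -UseDefaultCredential

# Request a Full processing of the Analysis Services database; the same
# service also exposes GetProcessingStatus to poll the outcome.
$svc.ProcessAnalysisDatabase('Full')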