Wednesday, 28 March 2018
We are talking about a plain, empty instance, so... it was a bit of a needle in a haystack!
Let's take a step back: SQL Server AlwaysOn Automatic Seeding is a feature introduced in SQL Server 2016 that syncs up a database in an Availability Group without leveraging backup and restore. This is a life saver in certain situations, as it spares you the computational load of a backup and a restore that might take a long time.
There are some constraints - above all, the instances making up the Availability Group must be *identical*. Yes, identical in everything, including the paths used by SQL Server. It is a very cloud-first approach at the end of the day, where you have identical, commodity resources at your disposal and your actual target is to provide a frictionless experience to whoever is going to consume the service you offer.
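For reference, enabling Automatic Seeding on an existing Availability Group boils down to something like this - a minimal sketch, where the AG and instance names (TfsAG, SQL01, SQL02) are placeholders and Invoke-Sqlcmd comes from the SqlServer module:

```powershell
# Sketch only: TfsAG, SQL01 and SQL02 are made-up names.
# On the primary replica, switch the secondary to automatic seeding.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
ALTER AVAILABILITY GROUP [TfsAG]
MODIFY REPLICA ON 'SQL02' WITH (SEEDING_MODE = AUTOMATIC);
"@

# On the secondary replica, allow the AG to create the seeded databases.
Invoke-Sqlcmd -ServerInstance 'SQL02' -Query @"
ALTER AVAILABILITY GROUP [TfsAG] GRANT CREATE ANY DATABASE;
"@
```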
So cool, right? Still, for some reason, my Configuration database didn't stream from the Primary to the Secondary replica. I checked the DMV, and all I got was an obscure failed_state 1200 error - Internal Error.
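To see what the seeding is actually doing, you can query the automatic seeding DMV on the Primary replica; a sketch along these lines (the instance name is again a placeholder):

```powershell
# Check the outcome of the latest seeding attempts on the primary replica.
# failure_state_desc is where a generic 'Internal Error' shows up.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
SELECT start_time,
       completion_time,
       current_state,
       failure_state,
       failure_state_desc
FROM sys.dm_hadr_automatic_seeding
ORDER BY start_time DESC;
"@
```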
The first thing I did (the instances really are identical - they were provisioned the day before) was to check that I was on the latest CU, as there are fixes available for Automatic Seeding. Check.
I had a look at the script used by the wizard to add the databases to the Availability Group - nothing too fancy, to be fair. Reading around, it seems there is still a chance that things might suddenly break, so I took another path.
Yes, a Full Backup (taken with the TFS Administration Console, no less) was supposed to be enough to enable Automatic Seeding, as it starts the recovery chain. Would another Transaction Log backup hurt? I don't think so.
After taking the faulty database off the Availability Group, I ran the speedy Transaction Log backup and added the database back into the Availability Group with the script. Guess what, it worked! And my new TFS instance is up and running.
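Put together, the workaround is three steps; here is a hedged sketch of it, with TfsAG, SQL01, Tfs_Configuration and the backup path standing in for the real names:

```powershell
# 1. Take the faulty database out of the Availability Group (run on the primary).
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query `
  "ALTER AVAILABILITY GROUP [TfsAG] REMOVE DATABASE [Tfs_Configuration];"

# 2. Take the quick Transaction Log backup that unblocked seeding for me.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query `
  "BACKUP LOG [Tfs_Configuration] TO DISK = N'\\backupshare\Tfs_Configuration.trn';"

# 3. Add the database back; with SEEDING_MODE = AUTOMATIC it streams to the secondary.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query `
  "ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_Configuration];"
```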
Of course this is totally transparent for TFS, as usual: the configuration wizard is smart enough to set the right connection string from the beginning. But you still need to make sure the Availability Group is correctly set up, otherwise at the first failover you will be left with nothing.
Thursday, 1 March 2018
I stopped the Incremental Analysis, and the Optimize Databases job completed successfully. Fine.
But – for whatever reason – my SSAS cube got corrupted! I couldn’t even connect to the Analysis Engine with SSMS. I also found errors in the Event Viewer pointing at a corrupted cube:
Errors in the metadata manager. An error occurred when loading the 'Team System' cube, from the file, '\\?\<path>\Tfs_Analysis.0.db\Team System.3330.cub.xml'.
Errors in the metadata manager. An error occurred when loading the 'Test Configuration' dimension, from the file, '\\?\<path>\Tfs_Analysis.0.db\Configuration.254.dim.xml'.
Now, what to do? It looked like a full-blown rebuild was in order, and that is a costly operation: a rebuild drops both the data warehouse and the SSAS cube, rebuilds the warehouse with data from the TFS databases, and then rebuilds the cube.
It is not like being without source code or Work Items, but still… it is an outage, and it is painful to swallow.
Now, in this case the data warehouse was perfectly healthy - the report showed an update age of just a few minutes. So all the raw data was fine; the only thing to rebuild was the way you look at that data.
The SSAS cube is just a way of looking at the data warehouse. If your warehouse is fine, just wait for the next scheduled Incremental Analysis Database Sync job to run: it will recreate the cube (promoting that run from an Incremental to a Full Analysis Database Sync) without going through the full rebuild.
Why didn't I process this myself by using the WarehouseControlService? Simply because the less you mess with the scheduled jobs, the better. Hiccups happen, but the system is robust enough to withstand such problems and pretty much self-heal once the stumbling block is removed.
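For completeness, a manual kick would look roughly like this - a sketch only, assuming the default /tfs virtual directory and the ProcessAnalysisDatabase method as I remember it from the WarehouseControlService documentation:

```powershell
# Sketch: the endpoint path is the default one, adjust to your instance.
$uri = 'http://tfs:8080/tfs/TeamFoundation/Administration/v3.0/WarehouseControlService.asmx'
$proxy = New-WebServiceProxy -Uri $uri -UseDefaultCredential

# Ask for a full processing of the SSAS cube (what I deliberately avoided here).
$proxy.ProcessAnalysisDatabase('Full')
```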
Friday, 16 February 2018
Last week SonarSource released a new version of their tasks for TFS and VSTS, with a couple of very welcome additions.
Up to v3, we basically had to do everything manually – especially passing parameters with the /d:… switch.
v4 introduces a context-aware switch where you can specify what you are using for your build:
The Use standalone scanner option is quite interesting, as it guides you towards providing a .properties file:
Also, gone are the days of using /d:… inline. There is a very handy Additional Properties textbox with line-by-line parsing, which makes overriding properties very easy to do:
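For instance, overriding a couple of analysis parameters is just a matter of typing them one per line in that textbox - the values below are purely illustrative:

```
sonar.exclusions=**/bin/**,**/obj/**
sonar.cs.vstest.reportsPaths=**/*.trx
```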
Tasks are also split now into Prepare Analysis, Run Code Analysis and Publish Analysis Result, to allow a more streamlined design of your Build Definition.
Thursday, 8 February 2018
A quick one I am dealing with these days – if you switch the Public URL of your Team Foundation Server to HTTPS you might see your Build Agents losing connection with the server.
This usually happens because of a known bug in TFS: an OAuth token isn't registered, so all the authentication tokens on the agents expire.
Of course YMMV, so always double check with Support before running a Stored Procedure on your production instance.
If you happen to get into this problem, you can mitigate it by reverting your HTTPS switch-on and changing the Public URL back to the HTTP version. Doing that will re-establish the connection between the server and the agents.
Monday, 22 January 2018
Despite the push we’ve seen in the last few years, the Hosted Build Service might not be the right product for you for whatever reason.
Then, if your agents aren't running in the same domain as Team Foundation Server and you want to use the Test Agent, you really risk opening Pandora's box, courtesy of WinRM and PowerShell remoting.
And to be completely clear - I have nothing against them; the only downside is that they need to be approached in the right way, otherwise the can-of-worms effect is just around the corner.
First and foremost, remember that whenever you target a machine for Test Agent deployment you only need to consider the Build Agent-Test Agent relationship. All the errors you will get are going to be from the Test box, not the build box.
So when you need to configure WinRM, the Test box is the machine that is going to be accepting the connections. While it sounds straightforward, sometimes things happen and one is tempted to look at the Build box first: don’t.
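On that machine, getting WinRM ready is usually just a matter of enabling PowerShell remoting - a minimal sketch, to be run elevated on the Test box:

```powershell
# Run on the Test box (elevated): starts the WinRM service,
# creates the HTTP listener and adds the firewall exceptions.
Enable-PSRemoting -Force

# Quick sanity check that the local listener is actually up.
Test-WSMan
```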
Also, if you really want to use HTTP and WinRM, remember that this is the trickiest combination – so think twice before going down that route!
Then in terms of errors – you will likely face WinRM errors of all sorts. The most common is this:
If you are outside a domain then REMEMBER about Shadow Accounts - it is the only way to keep identity issues to a minimum. You'll also need to set the TrustedHosts value to include the machines pushing the agent.
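Something along these lines, following the advice above (BUILD01 is a placeholder for the Build box):

```powershell
# Run on the Test box: trust the Build box that pushes the agent.
# A comma-separated list works too if several machines push agents.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'BUILD01' -Force

# Verify the setting took.
Get-Item WSMan:\localhost\Client\TrustedHosts
```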
Remember that passwords need to match, and that mixing users at setup time isn’t really a good idea if you are going down the workgroup/non-trusted domain route.
Always triple check passwords, and I recommend using the same account for both provisioning and execution, at least as a baseline. This will make sure you have a safety net in case things don't pan out as expected.
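A quick way to validate the account before kicking off a run is to probe the Test box over WinRM with those very credentials; a sketch, with TEST01 standing in for the Test box:

```powershell
# Prompt for the shadow account and check WinRM answers with those credentials.
$cred = Get-Credential
Test-WSMan -ComputerName 'TEST01' -Credential $cred -Authentication Negotiate
```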
Finally, there is this error, which really puzzles me:
This is actually an aggregated exception:
Look at UAC and the execution context for this - it always happens when something is supposed to run elevated and you are not running it as Administrator. It drives me mad every time.