Wednesday, 20 June 2018

Easily handle internal settings while orchestrating components' deployments and parameters

After ten years of attending, and then speaking at, conferences, it always strikes me that what demos often miss are the real-world details that really make the difference.

Like...deploying an application with a pipeline. Everybody talks about it, right? And everybody (including myself!) has some demo-ready stuff to show around in case it might be required.

I am working on a sample application right now, and I realised how blind I was - even though I am deploying stuff to different slots and environments and whatnot, I am still treating everything as a single monolith. Not really what you want these days, right?

Well, let's sort it out. Say that you have an API component and a Frontend component: the best thing to do is to decouple the two of them so they can be independently deployed *and* mixed and matched depending on the requirement.

It is .NET Core in my case, so in my Frontend component's appsettings.json I created this section:

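Something along these lines - a minimal sketch, with illustrative section and property names; the Slot property is the part the rest of this post relies on:

{
  "ApiSettings": {
    "BaseUrl": "https://my-sample-api.azurewebsites.net",
    "Slot": "dev"
  }
}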

Of course I modified the application so I could add the configuration in my ConfigureServices method and consume it in my Controller. The variable part in this case is the Slot property.

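For completeness, the wiring could look roughly like this - a sketch assuming the hypothetical ApiSettings section above and the standard .NET Core options pattern, not the original code from the post:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

// POCO mirroring the hypothetical ApiSettings section
public class ApiSettings
{
    public string BaseUrl { get; set; }
    public string Slot { get; set; }
}

// In Startup.cs - bind the configuration section to the POCO
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<ApiSettings>(Configuration.GetSection("ApiSettings"));
    services.AddMvc();
}

// In the Controller - consume the bound settings via IOptions<T>
public class HomeController : Controller
{
    private readonly ApiSettings _settings;

    public HomeController(IOptions<ApiSettings> options)
    {
        _settings = options.Value; // _settings.Slot is the variable part
    }
}
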
Now comes the fun side of the story - of course I have a pipeline in place. How do I handle these settings?

The best approach here, given the relative complexity of this exercise, is to scope the relevant value by environment: the Dev Frontend will always point at the Dev API, Staging at Staging, and the last two environments are effectively production so I do not need to worry about adding a slot. It's not like I have cross-environment settings here.

The variables are named that way because I am using the JSON variable substitution option in the Azure App Service Deploy task, and as my property is not at the top level of the JSON file, it needs to be written explicitly as a dot-separated path to the nested property.

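With the hypothetical ApiSettings section above, the release variables would look something like this, each one scoped to its own environment (values are illustrative):

ApiSettings.Slot = dev        (scope: Dev)
ApiSettings.Slot = staging    (scope: Staging)

At deployment time the task rewrites the matching properties in appsettings.json with the values from the scope of the environment being deployed.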

Doing this ensures that each environment has its own setting, and it also makes sure you remain sane while handling internal app settings across your applications and environments 😉 It is really easy to do as well, so there is no reason to skimp on it.

Saturday, 16 June 2018

Quickly deploy a baseline SQL database with VSTS

"Sometimes we go full steam ahead with a complex solution for a very simple problem..."

That was the answer I gave to a friend of mine who asked me how to seed a baseline database for testing purposes with VSTS in Azure.

The obvious one would be to have your versioned SQL scripts in a dedicated repository, so you can rebuild the whole thing from code (by all accounts the most correct solution to this problem). But in this case there are other avenues.

Databases have been treated like second-class citizens for years - by tools and practices alike. For example, why not use BACPAC files for this exercise? At the end of the day, a BACPAC file contains the packaged version of a database at a certain point in time, including its data.

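If you still need to produce that BACPAC in the first place, the same tool can export one from a live database - a sketch, with placeholder server, database and credential values:

& 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130\sqlpackage.exe' `
    /Action:Export `
    /SourceServerName:myserver.database.windows.net `
    /SourceDatabaseName:MyBaselineDB `
    /SourceUser:dbadmin `
    /SourcePassword:<password> `
    /TargetFile:"sample.bacpac"
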
So if you have your BACPAC somewhere - an Azure storage account, for example - run this SqlPackage command inside a VSTS PowerShell Script task (of course you need to replace the variables and provide the actual path):

& 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130\sqlpackage.exe' `
    /Action:Import `
    /TargetServerName:$(DBUrl) `
    /TargetDatabaseName:$(DBName) `
    /TargetUser:$(DBAdmin) `
    /TargetPassword:$(DBPassword) `
    /SourceFile:"<your location>/sample.bacpac"

Don't get me wrong, I love seeing a database fully integrated with the pipeline and that's how it should be. But in this specific case, I feel the tradeoff is worth it.

Also - this is a baseline database, so nothing prevents us from running delta scripts against it as needed. But given it is for testing purposes, I highly doubt there is going to be much development on it in the future!

Thursday, 7 June 2018

How to run UI tests in a Deployment Group with TFS and VSTS

Especially if you are testing client applications, you might want to run UI tests on a Deployment Group instead of a Build Agent. While the technology is the same, there are a couple of things to keep in mind.

In order to enable a machine to run UI tests you need to make sure its InteractiveSession capability is set to true.

In order to do so, you need to re-configure the agent, or manually change the script used to add a machine to the Deployment Group. Given a standard script, the first step is removing the --runasservice switch from it.

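The generated registration script ends up invoking config.cmd; the relevant call looks roughly like this (account, project and group names are placeholders, and the --runasservice switch is already gone):

.\config.cmd --deploymentgroup --deploymentgroupname "UI Tests" --agent $env:COMPUTERNAME --url https://myaccount.visualstudio.com --projectname 'MyProject' --auth PAT --token <PAT>
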
Once you run the configuration script, the process will guide you through setting up the agent to run interactively. Set it to auto-start, so you still get an unattended experience when the machine reboots, while keeping the ability to run interactive sessions on it.

Finally, I always recommend using the Visual Studio Test Platform Installer task, so that you have a consistent set of testing tools to run your tests from.

Then point the Visual Studio Test task at the tools it installed, by selecting "Installed by Tools Installer" as the test platform version.

Wednesday, 30 May 2018

A story of high availability with SQL Server AlwaysOn and TFS

A few weeks ago something happened on our TFS instance - we discovered that DBCC CHECKDB under certain conditions can mark a database as corrupted.

Long story short, this was due to a peculiar condition related to a high volume of transactions during that operation - not something you see every day. Microsoft Support was really good at helping us get back to normality.

In retrospect, what really hit me was how resilient TFS was thanks to SQL Server AlwaysOn. As you know, I am a huge fan of AlwaysOn because of how transparent it makes High Availability.

For us, maintaining availability meant a simple failover to the other node. Given that we run the Availability Group in Synchronous-Commit Mode (my default choice when it comes to TFS), the replica we failed over to was already up to date with the latest transaction, so there was no data loss.

Team Foundation Server did not lose a single heartbeat. When things go south like this, if you happen to be doing something during the issue or the failover itself you will get a JobInitializationError, which is self-explanatory. And as this is a transactional system by design, nothing is left hanging in the balance like in good ol' SourceSafe :)

Of course we were in limited availability while we were troubleshooting and fixing this problem (always change the Failover Mode to Manual when you are doing so), but there was no downtime.

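For reference, switching a replica to manual failover is a one-liner; a sketch using Invoke-Sqlcmd from the SqlServer PowerShell module, with placeholder Availability Group and node names:

# Run against the instance hosting the replica; names are placeholders
Invoke-Sqlcmd -ServerInstance 'SQLNODE1' -Query @"
ALTER AVAILABILITY GROUP [TfsData]
MODIFY REPLICA ON N'SQLNODE1' WITH (FAILOVER_MODE = MANUAL);
"@
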
Also, talking about recovery: at the end of the day we had to restore backups on the Secondary Replica to get back to proper synchronisation. Again, a bit tedious and time-consuming given the sizes involved, but it was flawless.

Tuesday, 22 May 2018

Small details carrying a huge value

I was reading this post on the Microsoft Premier Developer blog, and it was a nice throwback to past times when I had to deal with this type of request because of the existing process in place.

I also thought about how easy customising a process has become with VSTS compared to TFS, and the first thing that sprang to mind was to pair this up with the Board Styling options:

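A styling rule along these lines does the trick (the rule name, field values and tag names here are illustrative; the real rule may use different criteria):

Rule name:   Owned by the team, not picked up yet
Criteria:    Assigned To = (blank)  AND  Tags Contains Team-Triage
Card color:  Yellow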

This will cause cards that are assigned to a group but not to a single individual to be highlighted on the board.

There can be so many reasons why a team might choose to do this – and it does not just apply to product development. Think about situations where telemetry operators escalate events, or where tickets are integrated into the backlog.

Why am I focusing on such small details? Well, this is the kind of personalisation (I cannot really call them customisations 😊) that enables cross-role consumption of the stack.
It does not have to be anything extremely complicated, but whenever you can bring an existing process inside the tool in a frictionless manner, you are already paving the way for a better reception and adoption of the tool itself.

Friday, 11 May 2018

Elevate your telemetry from silo to valuable data source

I am going to speak at DevOpsDays Kiel next week about telemetry, and I was thinking about how much Application Insights has evolved in the last few years.

Leaving aside the awesome Application Insights Analytics, I was really pleased with how easy it is to bring valuable data to the forefront.

For example, this kind of view has been there pretty much since the inception.

It’s great, but it is kind of buried in the detailed information provided. What I really enjoyed, on the other hand, was being able to promote a single exception to a work item.

This is an organic and straightforward way to escalate a single piece of information. Why, you ask?
Well, because the previous screen is a summary, with a single button named Operations in a pane called Take Action.

So, from a UX point of view, it feels natural to dig into the details of a single request raising an exception and promote that information to an actionable backlog item.

A development team does not (usually) need quantity, it needs quality in order to fix problems raised by telemetry. It is the natural evolution of telemetry systems to be able to integrate with DevOps stacks in an effortless way – the real challenge is doing so without being excessively verbose, but still providing the much needed value to close the loop.

Monday, 30 April 2018

Review – Professional Visual Studio 2017

I recently gave this book a go, because I feel it is important as a stepping stone for whoever is approaching the IDE – remember, there is always somebody who is just starting out 😊

To be fair, it does its job well: it covers all the features thoroughly, and it is pretty up-to-date with the RTM release of the IDE. The only problem I find with it is that the Visual Studio release cadence is moving pretty fast, so a book will always be playing catch-up with the team. There is so much added and updated on a regular basis that it is almost inevitable for a book like this to fall behind.

Regardless of that, there is also a nice introduction to the Continuous Delivery Tools for Visual Studio, which happens to be a nice starting point for the DevOps and CD pipeline tools as well – including Code Analysis.

Visual Studio Team Services is mentioned at the end, instead of Team Foundation Server. It is a change that makes sense, as it is extremely quick and easy to get started there compared to installing TFS.