Thursday 21 December 2017

Help! My TFS automatically redirects HTTPS to HTTP!

If you are deploying HTTPS to TFS and you want to keep HTTP for a while, you might experience a strange behaviour: despite following the documentation, every time you browse to your TFS Web Access with HTTPS you are redirected to HTTP.

The answer is fairly simple, and it is not because of any extra tool or technology: it comes down to the Public URL (formerly called the Notification URL) you set on the TFS instance itself.

If you leave the HTTP URL there, you will always get the HTTP version – even when browsing with HTTPS. Switch it to an HTTPS URL instead and you will get the HTTPS version by default, while explicitly browsing to the HTTP address keeps you on the non-secured protocol, so you can keep the two side by side.
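If you prefer the command line to the Administration Console for that change, TfsConfig should do the trick too – a minimal sketch, assuming the default TFS 2018 installation path and an example URL:

    # Point the Public URL at the HTTPS binding (run from an elevated prompt);
    # the path and the URL are examples, adjust them to your environment.
    & 'C:\Program Files\Microsoft Team Foundation Server 2018\Tools\TfsConfig.exe' `
        settings /publicUrl:https://tfs.contoso.com/tfs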

Tuesday 19 December 2017

The impact of an upgrade to Team Foundation Server 2018

Usually upgrading Team Foundation Server is smooth and easy, but there is an exception this time around. It is not strictly related to TFS per se; the issue lies with the deprecations introduced in version 2018.

If you have a local server with just a few users it might not be a huge deal; if you have hundreds of users spread across time zones all over the world, things can get hot pretty quickly. I am not going to cover prerequisites like supported versions of Windows, SQL Server and so on, but I want to focus on the deprecated features of the product.

Let’s take a look at what these deprecations are, and how to approach them to minimise service disruption.

XAML Team Build

The elephant in the room is clearly the farewell to the XAML Build. After 12 years it is time to say goodbye to the old system and move to the new Team Build.

While this is good news for some, it can be troublesome for others, especially if you have a complex Build Definition which hasn’t been updated for years.

The way to go here is to have a clear idea of what the build process does and try to replace it with as many out-of-the-box (OOB) tasks as possible. The second choice is to cherry-pick tasks from the Marketplace and, if needed, write PowerShell scripts.

You might be tempted to reuse existing scripts or automations – which is fine – but my suggestion is driven by maintenance: fewer legacy scripts mean fewer problems when it comes to maintaining the process, and using OOB tasks makes it easy to take advantage of the constant flow of updates these tasks receive.
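For the gaps that OOB or Marketplace tasks do not cover, a small PowerShell task is usually enough, and it can talk back to the build through the agent logging commands. A minimal sketch – the version-stamping logic, file path and variable name are purely illustrative, not taken from any real definition:

    param(
        # Hypothetical input: where the AssemblyInfo file lives in the sources
        [string]$AssemblyInfoPath = ".\Properties\AssemblyInfo.cs"
    )

    # The agent exposes the build number as an environment variable
    $buildNumber = $env:BUILD_BUILDNUMBER

    # Do whatever the old custom activity used to do (illustrative logic)
    (Get-Content $AssemblyInfoPath) -replace '1\.0\.0\.0', $buildNumber |
        Set-Content $AssemblyInfoPath

    # Hand a value back to the following tasks via a logging command
    Write-Host "##vso[task.setvariable variable=StampedVersion]$buildNumber"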

SharePoint integration

TFS 2018 is no longer tightly coupled with SharePoint. This does not mean you will lose any of your Document Libraries, only that some features won't work anymore.

The impacted features are:

· SharePoint site creation when requesting a new Team Project

· Web Parts integration

· The Documents pane within Team Explorer

While this is a major change in the history of the product, it is not unexpected. In an era of APIs and extensibility, relying on some opaque integration which isn’t really fit for purpose anymore – it does not work with Office 365… – isn’t the way to go. But remember: the hyperlink field in Work Item Types is not changing, so you can still link your (now external) documents to Work Items.

The existing Excel reports and the Reporting Services reports are not affected by this change: they will keep working normally, as they rely on different features of the product. What is going away is the tailor-made integration which, really, nobody else can use.

Lab Management deprecation

Lab Management in its current form will be deprecated and removed. This does not mean you cannot use virtual testing environments anymore – there are a couple of choices: you can use Deployment Groups, or the SCVMM task if you want to keep using what you are used to.

The MTM integration for Lab is going away too – not a huge surprise given how good the web interface is, but something to keep in mind if you are a heavy user of Lab Management.

Team Rooms

This is pretty much the only feature being removed with no replacement – again, for good reason: Teams, Slack and the like are excellent collaboration tools focused entirely on providing a good communication experience, so they are the prime candidates for adoption.

Monday 4 December 2017

An unusual scenario for Release Management, part 2: production SonarQube upgrades

In the previous post we saw how you can automate SonarQube test upgrades, but now it is time for production.

As mentioned, my artifact here will be the TFVC repository, and I am going to have just one target environment: Production.

image

There are three phases here, two Deployment Group phases and an Agentless phase. This is also where Tags come into play:

image

The Release Definition itself is going to be fairly straightforward – after all, I am assuming testing already happened so this automation is just aimed at saving time:

image

The first phase prepares the environment for the upgrade itself: it copies the ZIP file to the location we'd like to use, unblocks and extracts it, copies the plugins in use to the new instance, and so on.
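As a rough idea of what that phase boils down to, here is a hedged PowerShell sketch – every variable and folder name is illustrative (only the C:\sq\ root matches the layout used below):

    $version    = $env:SONARQUBE_VERSION                # hypothetical Release variable
    $dropFolder = $env:SYSTEM_DEFAULTWORKINGDIRECTORY   # where the TFVC artifact lands on the agent
    $targetRoot = 'C:\sq'

    # Copy the package to the target location, unblock it and extract it
    Copy-Item -Path "$dropFolder\SonarQube\sonarqube-$version.zip" -Destination $targetRoot
    Unblock-File -Path "$targetRoot\sonarqube-$version.zip"
    Expand-Archive -Path "$targetRoot\sonarqube-$version.zip" -DestinationPath $targetRoot

    # Bring the plugins currently in use across to the new instance
    Copy-Item -Path "$targetRoot\sonarqube-current\extensions\plugins\*" `
              -Destination "$targetRoot\sonarqube-$version\extensions\plugins\" -Recurse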

I am also using these variables to keep the process reusable, and I am taking the ALM Rangers template as a starting point, so everything will happen in C:\sq\:

image

image

image

The Start SonarQube Interactive step launches StartSonar.bat from PowerShell in a way that does not leave the task hanging and running indefinitely.

image
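A possible way of doing that, with illustrative paths, is to fire the batch file in its own process so the task returns straight away:

    # Launch StartSonar.bat in a separate process instead of waiting for it
    $binPath = "C:\sq\sonarqube-$version\bin\windows-x86-64"
    Start-Process -FilePath "$binPath\StartSonar.bat" -WorkingDirectory $binPath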

The Agentless phase is required because once you launch StartSonar.bat you will have to browse to the /setup URL and start the upgrade process.

I am going to get a notification (it could go to anyone else here, even a group) and I am going to start the process. You could automate that as well, but IMHO it is surely better to do it manually.

image

Once the upgrade is completed (and you have stopped the StartSonar batch job – again, I could automate this too, but I am happy to keep it within the manual intervention for now) you can resume the pipeline with the second Deployment Group phase, which removes the old service, then installs and starts the new one using the OOB batch scripts shipped with your SonarQube instance.
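A sketch of what that service swap can look like, relying on the batch wrappers SonarQube ships in bin\windows-x86-64 (folder names are illustrative):

    $oldBin = "C:\sq\sonarqube-current\bin\windows-x86-64"
    $newBin = "C:\sq\sonarqube-$version\bin\windows-x86-64"

    & "$oldBin\StopNTService.bat"        # stop the old Windows service
    & "$oldBin\UninstallNTService.bat"   # and remove it

    & "$newBin\InstallNTService.bat"     # register the new version as a service
    & "$newBin\StartNTService.bat"       # and start it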

This saves time (remember you can schedule this Release Definition too), but the most important takeaway here is the value you get from automating the process and versioning your configurations.

It might be unorthodox, but it works quite well IMHO.

Friday 1 December 2017

An unusual scenario for Release Management, part 1: testing SonarQube upgrades

Among the many things I do, I manage a SonarQube instance. Not a big deal to be fair, but it is a valuable tool, it has its quirks, and you need to spend time on it.

So I thought about automating this process a little bit. It is a bit unusual, but it brings some value, so why not!

The result is a TFS or VSTS Team Project with a TFVC repository (TFVC is perfect for handling binary files!) and two Release Definitions, one for Test and one for Production.

The reason there are two Definitions is that – oddly enough – the Test one came after the Production one (which is easier; you will discover why later on). I might revamp the whole thing in the future to use sequential environments, but this is it as of now.

In the TFVC repository you will find a folder for each SonarQube version I deployed on my server, each with the relevant sonar.properties file filled with the required values, plus a scripts folder with some utility scripts.

image

The reason I am not automating the configuration file creation (via a find-and-replace operation, for example) is that SonarSource explicitly tells you not to just replace this file with an existing version but to start from scratch.

While testing your configuration you will need to work on it anyway, so it is a good idea to put it in a repository, and you will get versioning for free as well. Bonus.

Both my Release Definitions feature a Deployment Group: guess what, it contains my SonarQube server. I also leveraged Tags, in case I ever want completely separate environments, as the Deployment Group phases are marked to run only on machines sporting the right tag for the Release Definition. That is not the case for now, though.

image

Now comes the fun part; let's start with the test upgrades. My process for testing a SonarQube upgrade is as follows:

  • Restore a backup of the production database
  • Get the new SonarQube version on the VM that hosts its services
  • Extract the new version, set the right values in the sonar.properties file (like different ports and java switches)
  • Check that the upgrade runs successfully
  • Verify all the involved plugins

Like I said, I am not going to automatically find and replace values in the sonar.properties file, and the later steps are not really worth scripting, but the first two can benefit from an automated process.

This is what my testing pipeline looks like:

image

Nothing too fancy, but it saves time.

The cool bit here IMHO is the Azure PowerShell script I am running to restore the database: given the Resource Group, Server, Database and SonarQube version (which is used to form the name), it checks whether I already have a testing database – if not, it restores a backup copy from ten minutes before.

image
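For reference, a hedged sketch of what such a script could look like with the AzureRM cmdlets of that era – the parameter and database names are illustrative, not the ones from the actual Release Definition:

    param(
        [string]$ResourceGroupName,
        [string]$ServerName,
        [string]$DatabaseName,       # the production SonarQube database
        [string]$SonarQubeVersion
    )

    # The test database name is formed from the SonarQube version
    $targetDatabase = "$DatabaseName-test-$SonarQubeVersion"

    # Is there already a test database for this version?
    $existing = Get-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
                                       -DatabaseName $targetDatabase -ErrorAction SilentlyContinue

    if (-not $existing) {
        # Restore a copy of production as it was ten minutes ago
        $source = Get-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
                                         -DatabaseName $DatabaseName

        Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
                                   -PointInTime ((Get-Date).ToUniversalTime().AddMinutes(-10)) `
                                   -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
                                   -ResourceId $source.ResourceId `
                                   -TargetDatabaseName $targetDatabase `
                                   -Edition Standard -ServiceObjectiveName S0
    }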

If this prerequisite check fails, the error handling I added stops the task immediately and marks the release as failed:

image

image

How? Like this:

image

The Write-Error statement stops the task execution and raises the error message, the Write-Host statement with the specific ##vso logging command marks the task result as failed, and the exit 1 line terminates the session so that whatever comes next (the database restore!) is not executed.
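As a minimal sketch of the pattern (assuming $prerequisiteMet holds the outcome of whatever check the script performs):

    if (-not $prerequisiteMet) {
        Write-Host "##vso[task.complete result=Failed;]Prerequisite check failed"   # mark the task as failed
        Write-Error "Prerequisite check failed - stopping before the restore"       # raise the error message
        exit 1                                                                      # end the session, nothing else runs
    }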

Finally, at the end there is an Agentless phase, which is just a manual intervention listing the required things to do:

image

I will go through the production pipeline in the next post, as it is quite different.

Monday 20 November 2017

Old but still very good, SlowCheetah with VSTS!

I was asked if there is a way of transforming configuration files at build time. Nothing beats the good, old, reliable SlowCheetah if you ask me.

Just install the Extension, and you will be able to add your transformations by right-clicking the .config file and selecting Add Transform:

image

These are going to be based on the Build Configurations you defined in your Solution. Once this is done, you can define your own settings, like this:

image

where a transformation can be something like this:

image

There are so many examples on how to do this, so please do not shoot the pianist. All of that goes into a Version Control System of sorts, and it can be built by VSTS, TFS or any other build engine.

Your Build Definition needs to specify a Configuration, either at Queue time or embedded into the Build Definition itself.

image image
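Under the hood the compile step simply runs MSBuild against the chosen configuration, which is what triggers the matching transform – something along these lines, with an illustrative solution name:

    # What the build boils down to for a Release build
    msbuild .\MySolution.sln /p:Configuration=Release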

The result? As expected:

image

This is not really about VSTS or TFS per se, but it is always a valuable approach to configuration management, and it was worth a refresher.

YAML Build Definitions in the Team Build, now what?

Among the news and announcements from Connect() you surely saw YAML Build Definitions mentioned, and you might wonder – what’s coming? How does this fit into the overall TFS/VSTS product?

Let’s start from the past, from 2011 – this UserVoice request asks for something that enables versioning for Build Definitions.

Back in 2011 many things were not as available or as robust as they are today, and the current Team Build was not remotely on the horizon. Fast-forward six years, and we have YAML Build Definitions.

Didn’t we have a way of tracing Build Definitions without YAML? Really? Six years to implement some kind of traceability? Well…:

image

image

There is a built-in compare feature for Build Definitions in both TFS and VSTS, and you can export your Build Definitions to a JSON file. So no, it is not just about traceability.
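If you want that JSON under version control without waiting for YAML, a few lines of PowerShell against the REST API will do – a sketch where the account, project, definition id and PAT variable are all placeholders:

    # Export a Build Definition to JSON via the REST API
    $pat   = $env:VSTS_PAT
    $token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
    $uri   = "https://fabrikam.visualstudio.com/DefaultCollection/MyProject/_apis/build/definitions/42?api-version=2.0"

    Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Basic $token" } |
        ConvertTo-Json -Depth 100 |
        Out-File .\build-definition.json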

In the age of Continuous Delivery, Infrastructure as Code is critical. It saves an enormous amount of time and resources, and it is an extremely reliable way of automating the build and release process.

That is where YAML Build Definitions fit into the equation: they push the concept of Infrastructure as Code to the limit. You are not just treating the deployed infrastructure as code, you are also describing the deployment infrastructure in the same definition; as long as you have the required resources at your disposal (agent queues, build tasks, etc.) you are good to go.

This does not mean the current Build Definitions are going away either – YAML is just another way of authoring Build Definitions, and the underlying technology is still the same. Also, with YAML you do not get a visual breakdown of the tasks in the Build Definition:

image image

It is a different way of doing the same thing – it might not cater for everybody (I am a big fan of the current Definition UI because it makes the process understandable for everybody regardless of expertise and role), but it adds options for those who need a different experience and have different requirements.

At the end of the day, the more options the better.

Monday 6 November 2017

Unorthodox reporting with TFS/VSTS and PowerQuery

I am a huge Excel fan, because it allows easy data transformation and its flexibility is second to none, despite its complexity. But I also use other tools, depending on the requirements.

Many users, instead, only use Excel, with no possible alternative. Good or bad, this is the norm in many organisations, and trying to change this habit too early while pushing new concepts or ideas only puts strain on these users, raising barriers and potentially preventing the very change we are pursuing.

That is where PowerQuery comes to the rescue. Ironically enough, I discovered it by accident while helping my partner with some work of hers (she is a heavy Excel user). PowerQuery is a very powerful data analysis engine (it goes hand in hand with PowerPivot, another Excel data modelling tool I am really fond of!) that, in a nutshell and from a developer's point of view, enables database-like querying and reporting scenarios.

So what can you do with it? Well, let's take a very easy example: you have a TFS/VSTS query which returns all the non-done PBIs in a backlog, and you want to report on it so you know how many Work Items are in a certain state, without using TFS or VSTS at all.

image

That is where Excel comes easily to the rescue: you can connect it to TFS/VSTS with the Team Add-in, downloading the raw data from the query you saved there:

image

image

Select the raw data you want to use and, from the Data ribbon, select From Table. Excel will automatically recognise the data source you want to use; if you do not select the data beforehand you will have to input the range manually.

image

PowerQuery now kicks in:

image

What we want to do is pretty easy and straightforward, so we are going to use Group By:

image

A basic group by works well here:

image image

If you use an advanced one you can group by multiple columns. If you have Bugs as well as PBIs as requirements, that is what you want:

image

image

Now, if you Close & Load, you are done. How is this useful in any way?

image

Easy: this query is going to show up on the side of your spreadsheet:

image

Click on the Query you created, and you will be immediately shown the result:

image or image

Moreover, it is quite dynamic: refresh the query and the result updates with the underlying data.

Digging a bit deeper, the default name is not great – VSTS_<GUID> does not say much. You can change it in the Query Editor:

image

Underneath you can see the Applied Steps – that is where things get interesting:

image

It is the visual representation of all the data transformations you applied. If you want to access these steps and change them, click on the Advanced Editor:

image

You will get the actual PowerQuery language (a functional language called M, by the way):

image

This is where you can start creating your custom transformations, leading to dynamic custom reports based on TFS/VSTS data.

image

image

I find it really cool and fascinating, to be fair, bringing together such different user requirements and scenarios – not to mention that you can bump up your Excel knowledge by a notch, which is always a great skill to master.