Thursday, 6 December 2018

Lift and shift migration of Team Foundation Server to Azure with Azure DevOps Server 2019

This is a direct consequence of the new support for Azure SQL Database: now that you can use it as the data tier, you can also upgrade your existing Team Foundation Server instance to Azure DevOps Server in a lift and shift fashion.

First of all: *this works on my machine* (well, in my lab) and *I bear no responsibility for it* 😁 it is highly experimental and tested only with Azure DevOps Server 2019 RC1, although I am sure it will be polished in the next releases. Let's review the pre-requisites:

  • You need to run domain-joined VM(s)
  • The(se) VM(s) must have a Managed Identity in order to access Azure SQL Database

In order to lift and shift your databases, you need to import them into Azure SQL Database. You can use several methods, such as the Microsoft Data Migration Assistant, SSMS or a manual import.

It is likely that you will need to remove all the Windows users, together with the table and the stored procedures used for the scheduled backups. Remember that as of now this is still an experimental process - there is no support whatsoever for it, especially because you are modifying the databases manually!
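
If you are wondering what that clean-up looks like, here is a minimal T-SQL sketch of the kind of statements involved - the Windows login and the backup object names below are purely hypothetical, so check sys.database_principals and your own maintenance objects before dropping anything:

-- List the Windows users and groups still present in the database
SELECT name, type_desc FROM sys.database_principals WHERE type IN ('U', 'G');

-- Drop a Windows user, as it cannot authenticate against Azure SQL Database
DROP USER [CONTOSO\tfsservice];

-- Drop the leftovers from the scheduled backups (names will vary on your instance)
DROP TABLE [dbo].[tbl_BackupSettings];
DROP PROCEDURE [dbo].[prc_ScheduledBackup];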

Once the databases are imported (S3 tier or above!) you need to run these queries on the Azure DevOps databases:

CREATE USER <vmname> FROM EXTERNAL PROVIDER

ALTER ROLE db_owner ADD MEMBER <vmname>
ALTER USER <vmname> WITH DEFAULT_SCHEMA=dbo

Followed by this query in the master database:

CREATE USER <vmname> FROM EXTERNAL PROVIDER

ALTER ROLE dbmanager ADD MEMBER <vmname>
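
As a quick sanity check (nothing official, just a query I find handy), you can verify that the managed identity has been created in each database with something like:

SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE name = '<vmname>';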

This is it, really - then you can launch the Azure DevOps Server Configuration Wizard and proceed with an upgrade. Yes, even if you already installed Azure DevOps Server! Of course there are schema changes to apply to the imported databases, so it makes sense to treat the process as an upgrade:

Friday, 30 November 2018

Time for a change

Today is my last day at Quest Software and One Identity. Moving on has been a tough decision after more than five and a half years. I met some great people there, and I managed to grow a lot thanks to the different perspective I found myself in from day one.
Working with Vladimir Gusarov was thoroughly enjoyable; we went through a lot of scenarios and situations, but we always managed to take positives out of them.

It is never easy to move on - there are so many memories and experiences to carry with you. The company deserves a huge 'thank you!' for this ride, and I can only speak well of them. Thanks!

Now, where am I moving to? Well... I could tease you, but it would be kinda pointless 😊 I am moving back to Avanade, this time in the UK branch.

This means I am going to work with a somewhat familiar group of people - Tarun Arora, Vlatko Ivanovski and Utkarsh Shigihalli for starters. But I will also get to catch up with people I worked with six years ago... maybe I should start rehearsing my best version of Terminator's "I am back" 😂

Monday, 26 November 2018

Big changes in Azure DevOps Server 2019 - Inherited Processes

Another huge feature brought by Azure DevOps Server 2019 is Process Inheritance - meaning you are going to get the same customisation experience you get today on Azure DevOps Services on your own Azure DevOps Server instance.

It is a collection-wide setting and it cannot be changed once the collection is created, so you won't get it for free when you upgrade to the new version of the product. But there are ways of moving stuff across, and if you do you will get all the benefits from the new model.

Why am I so excited about it? Because until now, on-premise customisation was very powerful but also quite complex to master: the Process Template Editor in Visual Studio, witadmin.exe, storing the versioned changes somewhere else - all things that require time and effort, especially when you need to manage an instance that provides a service to your users.

With the new process model customisations can be implemented straight from the Web UI, and finally the concept of process inheritance comes on-premise, making your life much easier.

Once you start using this different approach you will be able to easily apply derived processes to your projects, without all the usual fiddling with witadmin or the Process Editor. It is a massive improvement.

Wednesday, 21 November 2018

Big changes in Azure DevOps Server 2019 - SQL Azure Database support

This is the first of a (hopefully!) series of posts looking at the substantial new features of Azure DevOps Server, which was released yesterday in RC1.

If you follow my blog you know that despite everything going on around Azure DevOps Services, I have a soft spot for Azure DevOps Server (formerly Team Foundation Server) - the on-premise product.

Why? Well, since it is a quarterly snapshot of the code from the service, it offers excellent value in terms of features once brought on-premise!

This is quite an important feature I reckon: Azure DevOps Server now supports Azure SQL Database as the Data Tier.

I can already see you are scratching your head, a little puzzled. Let's put some red lines here, shall we? This configuration works (not "is supported" - works!) only when you are running an Azure DevOps Server instance in an Azure VM. So this is already a huge restriction, but it makes sense - you cannot have better connectivity than something already within the Azure datacentre. After all, on-premise does not necessarily need to be inside a wholly-owned datacentre.

Also, when you create the Application Tier VM(s) you need to assign a system-assigned Managed Identity to each of them - this is how the VM will authenticate with your database, and this is what enables the Azure option in the deployment screen you saw before.

Also, you need to provision at least two empty databases in advance: the Configuration database (name it Tfs_Configuration for now) and the main Collection database (again, something like Tfs_DefaultCollection). Once you have these two up and running, you need to set an Azure AD administrator on the server and assign these roles to your databases:

AT here is the name of the Application Tier VM, as you are leveraging the system-assigned Managed Identity. Azure AD is required to actually manipulate the databases. Also, the first SQL script needs to run only against the master database, while the second one should run against both the Configuration database and the Collection database.
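
In case the screenshots do not come through, the gist of the two scripts is the following - treat it as a sketch, with <vmname> standing in for the name of your Application Tier VM:

-- Script 1: run against the master database only
CREATE USER [<vmname>] FROM EXTERNAL PROVIDER;
ALTER ROLE dbmanager ADD MEMBER [<vmname>];

-- Script 2: run against both the Configuration and the Collection database
CREATE USER [<vmname>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_owner ADD MEMBER [<vmname>];
ALTER USER [<vmname>] WITH DEFAULT_SCHEMA = dbo;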

What if you don't run these? The wizard puts a series of checks in place to prevent a botched configuration. Hence, if you don't run the first script the VM cannot authenticate against the Azure SQL Database server, causing this error:

Without the second script you will get an explicit error during the Readiness Checks. Finally, all databases should run on the S3 tier or above, otherwise you will either be prevented from configuring the instance (in the case of the Configuration database) or you will get various errors and your collection will not be provisioned.
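
If you need to check or bump the service tier, you can of course use the portal, or a couple of queries like these (a sketch - run the first one while connected to the database itself, the second one against the master database of your logical server):

-- Check the current service objective of the database you are connected to
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective');

-- Scale a database up to S3
ALTER DATABASE [Tfs_DefaultCollection] MODIFY (SERVICE_OBJECTIVE = 'S3');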

Why all of this? Put yourself in the shoes of someone who deals with a 3TB Collection on a daily basis. Backups, storage, DBCC, hardware performance and high availability. Can you see the reasons why? 😀

Tuesday, 13 November 2018

Tips on granular migrations with the Migration Tools for Azure DevOps

As you know, I am a huge fan of Martin Hinshelwood's Migration Tools for Azure DevOps. I've been using them for the past few months, and I put together a list of common occurrences that you are likely to face.

Throttling - it is going to happen!

You cannot do anything about it - you are hitting a cloud service, so it is inevitable that you are going to get throttled because of the irregular shape and the amount of data you are moving. What I can tell you is that if you are migrating an average amount of Work Items (in the low thousands I reckon) you are very likely to hit throttling using the LinkMigrationContext processor because of the load generated on the service.

Correct use of the ReflectedWorkItemID field

I experimented a fair bit with it, and the solution is to have it on both ends for the best outcome. Also remember that custom fields in Azure DevOps Services are unique across the organisation, so don't be tempted to create the ReflectedWorkItemID field in a process used by a single project when you will need to re-use it across the board.
Always create a custom process first - one that already contains that field, to be used as a starting point for migrated projects - and then apply it to the target project with whatever further customisation you need.

Split your migrations into core and non-core processors

When do you need your users to be away from the source system? When can they start using the target system? These are all questions that are going to pop up, sooner rather than later.
In my opinion, if you are performing a Work Item migration they can start working after Areas/Iterations, Work Items and relationship links have been migrated. Why? Because unless you have someone who is really into his/her attachments, that is the main staple of the Work Item Tracking pillar of Azure DevOps.
Every project is different, of course. But these are my notes, so they are skewed by my personal experience 😊

Identify what needs to be migrated right now and what can wait

If you have too much data to move and you cannot afford that downtime, you need to change your scope. A feasible approach is to move what is currently active, meaning people can start working right away. Once that is done, you can start batching all the closed items - remember that the WorkItemMigrationContext uses WIQL behind the scenes to identify what is going to be moved, so it is very straightforward.
Doing this makes sure that everything will eventually be migrated, but without the time pressure of the usage downtime. It is just down to coordination.
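
As an illustration (this is just a WIQL sketch, not the tool's default query), the scope of a WorkItemMigrationContext run can be narrowed with a query along these lines for the 'active first' batch, and the inverse condition for the follow-up batches:

SELECT [System.Id]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.State] <> 'Closed'
  AND [System.State] <> 'Removed'
ORDER BY [System.ChangedDate] DESC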

Sunday, 28 October 2018

Why you should scan your code within your pipelines

Like many others, I received this email from GitHub a couple of weeks ago, about an old repository:

This made me think about how important security scanning is in this day and age. Your code might have been top notch a couple of years ago, yet be dangerous today.
So, to have a bit of a laugh, I hooked up WhiteSource Bolt to a build of that code to see the actual outcome on the open source libraries used there.
WhiteSource Bolt is also free for Azure DevOps, so there is really little stopping you from scanning your code 😊 this is the (kind of expected) result:

This is code from a couple of years ago – do you think your code from two years ago is still as good as it was back then? 😊

Monday, 22 October 2018

Unblock the SonarQube upgrade process when using Azure AD plugin for authentication

There is a well-known issue with SonarQube's Azure AD plugin, where an upgrade from v6.x to v7.x fails. Fixing this issue involves modifying the Users table manually, outside of the upgrade process, and at the moment it is something you cannot avoid.

The reason why this happens is that the external_identity column does not contain a unique value; instead, it is filled with 'Azure AD' for each user. It is not a critical column, so you should be able to modify it without issues.

Then I thought about a handier way of fixing this than just writing random data into it. Whenever you sign in with the new plug-in, 'Azure AD' is going to be replaced with your email. So, I put together this very simple script.

Before you run this, remember that I bear no responsibility for it - it worked on my machine, it might not work on yours 😊 always test it on backups first!
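
The script itself is nothing more than an UPDATE against the users table - this is the gist of it, with the column names (email, external_identity) being my assumption, so double-check them against your own SonarQube schema first:

-- Replace the non-unique 'Azure AD' marker with the user's email,
-- which is what the new plug-in stores there after the first sign-in anyway
UPDATE users
SET external_identity = email
WHERE external_identity = 'Azure AD';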

This takes care of the uniqueness of the value and enables the upgrade to go ahead. Needless to say, this script can easily be added to my proof-of-concept automated pipeline!