Friday 30 November 2018

Time for a change

Today is my last day at Quest Software and One Identity. Moving on has been a hard decision to make, after more than five and a half years. I met some great people there, and I managed to grow a lot thanks to the different perspective I found myself in from day one.
Working with Vladimir Gusarov was thoroughly great: we went through plenty of scenarios and situations, but we always managed to get something positive out of them.

It is never easy to move on - so many memories and experiences to carry with you. The company deserves a huge 'thank you!' for this ride, and I can only speak well of them. Thanks!

Now, where am I moving to? Well... I could tease you, but it would be kinda pointless 😊 I am moving back to Avanade, this time in the UK branch.

This means I am going to work with a group of people I already know well - Tarun Arora, Vlatko Ivanovski and Utkarsh Shigihalli for starters. I will also get to catch up with people I worked with six years ago... maybe I should start rehearsing my best version of Terminator's "I am back" 😂

Monday 26 November 2018

Big changes in Azure DevOps Server 2019 - Inherited Processes

Another huge feature brought by Azure DevOps Server 2019 is Process Inheritance - meaning you are going to get the same customisation experience you have today on Azure DevOps Services, on your own Azure DevOps Server instance.

It is a collection-wide setting and it cannot be changed once the collection is created, so you won't get it for free when you upgrade to the new version of the product. But there are ways of moving stuff across, and if you do you will get all the benefits from the new model.

Why am I so excited about it? Because until now, on-premise customisation was very powerful but also quite complex to master. The Process Template Editor in Visual Studio, witadmin.exe, storing the versioned changes somewhere else - all things that require time and effort, especially when you need to manage an instance that provides a service to your users.

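To give you an idea of the old workflow, a typical change meant exporting a work item type definition, editing the XML and importing it back - something like this (server and project names here are made up, of course):

    # Export the work item type definition for editing
    witadmin exportwitd /collection:http://myserver:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:Bug.xml
    # ...edit Bug.xml, keep it under version control somewhere...
    witadmin importwitd /collection:http://myserver:8080/tfs/DefaultCollection /p:MyProject /f:Bug.xml
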
With the new process model, customisations can be implemented straight from the Web UI, and the concept of process inheritance finally comes on-premise, making your life much easier.

Once you start using this new approach you will be able to easily apply derived processes to your projects, without any of the current fiddling with witadmin or the Process Template Editor. It is a massive improvement.
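
For the REST-minded, inherited processes can also be managed programmatically. Here is a minimal sketch - the exact api-version and payload may differ on your Server build, so check the reference documentation before relying on it:

    # List the existing processes to find the typeId of the process to inherit from
    $procs = Invoke-RestMethod -Uri "http://myserver/DefaultCollection/_apis/work/processes?api-version=5.0-preview.2" -UseDefaultCredentials
    $procs.value | Select-Object name, typeId

    # Create a new inherited process from that parent
    $body = '{ "name": "MyAgile", "parentProcessTypeId": "<typeId from the call above>" }'
    Invoke-RestMethod -Uri "http://myserver/DefaultCollection/_apis/work/processes?api-version=5.0-preview.2" -Method Post -Body $body -ContentType "application/json" -UseDefaultCredentials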

Wednesday 21 November 2018

Big changes in Azure DevOps Server 2019 - Azure SQL Database support

This is the first of a (hopefully!) series of posts looking at the substantial new features of Azure DevOps Server 2019, whose RC1 was released yesterday.

If you follow my blog you know that despite everything going on around Azure DevOps Services, I have a soft spot for Azure DevOps Server (formerly Team Foundation Server) - the on-premise product.

Why? Well, it is a quarterly snapshot of the code running in the service, so it is excellent value in terms of what it offers once brought on-premise!

This is quite an important feature I reckon: Azure DevOps Server now supports Azure SQL Database as the Data Tier.

I can already see you scratching your head, a little puzzled. Let's put some red lines here, shall we? This configuration works (not "is supported" - works!) only when you are running an Azure DevOps Server instance in an Azure VM. This is already a huge restriction, but it makes sense - you cannot have better connectivity than something already inside the Azure datacentre. After all, on-premise does not necessarily mean inside a wholly-owned datacentre.

Also, when you create the Application Tier VM(s) you need to assign a system-assigned managed identity to each of them - this is how the VM authenticates with your database, and it is what enables the Azure option in the configuration wizard's deployment screen.

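Assigning the identity is a one-liner if you use the Azure CLI - resource group and VM names below are hypothetical:

    # Enable the system-assigned managed identity on the Application Tier VM
    az vm identity assign --resource-group MyResourceGroup --name MyAzDoServerVM
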
Also, you need to provision at least two empty databases in advance: the Configuration database (name it Tfs_Configuration for now) and the main Collection database (Tfs_DefaultCollection, say). Once you have these two up and running, you need to set an AAD administrator user on the server and assign the right roles to your databases.
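
The role assignment boils down to two tiny T-SQL scripts, something along these lines (a sketch - double-check the exact statements against the official documentation):

    -- Script 1
    CREATE USER [AT] FROM EXTERNAL PROVIDER;
    ALTER ROLE [dbmanager] ADD MEMBER [AT];

    -- Script 2
    CREATE USER [AT] FROM EXTERNAL PROVIDER;
    ALTER ROLE [db_owner] ADD MEMBER [AT];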

AT here is the name of the VM, as you are leveraging the system-assigned managed identity, and the AAD administrator is required to actually manipulate the databases. Also, the first SQL script needs to run only against the master database, while the second one should run against both the Configuration database and the Collection database.

What if you don't run these? The wizard puts a series of checks in place to prevent a botched configuration. If you don't run the first script, the VM cannot authenticate against the Azure SQL Database server, and the wizard will stop with an authentication error.

Without the second script you will get an explicit error during the Readiness Checks. Finally, all databases must run on the S3 tier or above, otherwise you will be prevented from configuring the instance (in the case of the Configuration database), or you will get various errors and your collection will not be provisioned.

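If you script the provisioning, you can create both databases at the right tier straight away - again via the Azure CLI, with hypothetical names:

    # Provision the two empty databases at the S3 service objective
    az sql db create --resource-group MyResourceGroup --server myazdosqlserver --name Tfs_Configuration --service-objective S3
    az sql db create --resource-group MyResourceGroup --server myazdosqlserver --name Tfs_DefaultCollection --service-objective S3
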
Why all of this? Put yourself in the shoes of someone who deals with a 3TB collection on a daily basis: backups, storage, DBCC, hardware performance, high availability. Can you see the reasons why? 😀

Tuesday 13 November 2018

Tips on granular migrations with the Migration Tools for Azure DevOps

As you know, I am a huge fan of Martin Hinshelwood's Migration Tools for Azure DevOps. I have been using them for the past few months, and I have put together a list of common situations you are likely to face.

Throttling - it is going to happen!

You cannot do anything about it - you are hitting a cloud service, so getting throttled is inevitable given the irregular shape and the amount of data you are moving. What I can tell you is that even with an average amount of Work Items (in the low thousands, I reckon) you are very likely to hit throttling with the LinkMigrationContext processor, because of the load it generates on the service.

Correct use of the ReflectedWorkItemID field

I experimented a fair bit with it, and the best outcome comes from having the field on both ends. Also remember that custom fields in Azure DevOps Services are unique across the organisation, so don't be tempted to create the ReflectedWorkItemID field in a process used by a single project when you will need to re-use it across the board.
Always create a custom process first - a starting point for migrated projects which already contains that field - and then apply it to the target project with whatever further customisation you need, as sketched below.
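
The field then needs to be referenced in the tools' configuration file. The exact schema varies between versions of the tools, so treat this as an illustrative fragment (organisation, project and field names are made up):

    {
      "Source": {
        "Collection": "https://dev.azure.com/SourceOrg/",
        "Project": "SourceProject",
        "ReflectedWorkItemIDFieldName": "Custom.ReflectedWorkItemId"
      },
      "Target": {
        "Collection": "https://dev.azure.com/TargetOrg/",
        "Project": "TargetProject",
        "ReflectedWorkItemIDFieldName": "Custom.ReflectedWorkItemId"
      }
    }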

Split your migrations into core and non-core processors

When do you need your users to be away from the source system? When can they start using the target system? These questions are going to pop up sooner rather than later.
In my opinion, if you are performing a Work Item migration they can start working as soon as Areas/Iterations, Work Items and relationship links have been migrated. Why? Because unless you have someone who is really into his/her attachments, those are the main staples of the Work Item Tracking pillar of Azure DevOps.
Every project is different, of course. But these are my notes, so they are skewed by my personal experience 😊

Identify what needs to be migrated right now and what can wait

If you have too much data to move and you cannot afford that downtime, you need to change your scope. A feasible approach is to move what is currently active first, so people can start working right away. Once that is done, you can batch up all the closed items - remember that the WorkItemMigrationContext uses WIQL behind the scenes to identify what is going to be moved, so scoping is very straightforward.
Doing this makes sure that everything is eventually migrated, but without the time pressure of the usage downtime. It is just down to coordination.
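
As an illustration, an "active items first" scope can be expressed with a WIQL query along these lines (state names depend on your process, so adapt accordingly):

    SELECT [System.Id]
    FROM WorkItems
    WHERE [System.TeamProject] = @project
    AND [System.State] <> 'Closed'
    AND [System.State] <> 'Removed'
    ORDER BY [System.ChangedDate] DESC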