Thursday, 9 August 2018

How VSTS Sync Migrator is going to change the way you migrate to VSTS

Like I said in my last post, I really enjoy using VSTS Sync Migrator for Work Item migrations.
There are a few reasons why I believe this tool stands out from the rest, and it is not just because of how much it can do - in a nutshell, you can use it not only for tip migrations, but to actually filter and sanitise what you are importing into your target TFS or VSTS.

Firstly, you can run each processor (call them steps if you want) independently. That is very important when it comes to understanding what each one of them does. You don't really want a tool that starts, churns away for a while and then fails, leaving you with an enormous log file to dig through.

Each processor is extremely specialised and usually driven by a WIQL query - sometimes quite complex, but extremely powerful and flexible.
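
For instance, a processor can be scoped with a WIQL query like the one below - a generic example of mine, not taken from an actual configuration - so only the Work Items you care about get picked up:

SELECT [System.Id] FROM WorkItems
WHERE [System.TeamProject] = @project
AND [System.WorkItemType] IN ('Product Backlog Item', 'Bug')
AND [System.State] <> 'Removed'
ORDER BY [System.ChangedDate] DESC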

You can also run multiple instances of the Migrator, targeting different Team Projects in VSTS - having them side by side isn't usually a problem.

Then, you have a really powerful capability for shaping your data in the best possible way. By "shaping" I mean "mapping": field to field, value to value, value replacement via Regular Expressions, even mapping one Work Item Type to another.

This enables all sorts of scenarios: changing Process Template, or making a very verbose customised form much more readable by merging fields or moving their data around.
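
To give an idea, a field-to-field map in the tool's configuration.json looks roughly like the sketch below - treat the property names as indicative only and check the official documentation for the exact schema of your version:

"FieldMaps": [
  {
    "ObjectType": "FieldtoFieldMapConfig",
    "WorkItemTypeName": "*",
    "sourceField": "Microsoft.VSTS.Common.BacklogPriority",
    "targetField": "Microsoft.VSTS.Common.StackRank"
  }
]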

In the end, no ancillary item is left behind, including Work Item Queries (which carry huge business value IMHO) and commit information. You can link commits to Work Items even if you migrated the repository under a different name.

It takes a while to get all the bits right - there are lots of options, but the documentation is quite good and will guide you through easily. Fellow MVP Mohamed Radwan also recorded a quick demo of how to use it.

Now, onto more VSTS migrations 😀

Tuesday, 31 July 2018

A set of tools to deal with granular VSTS migrations

I am in the middle of a TFS to VSTS migration, and unfortunately I cannot use the TFS Database Import Service this time around. So I put together this list of tools to use for a granular migration, together with the scenarios where each one fits.

It is going to be mostly about the Work Item side, to be fair - if you want to move code quickly, look at my last post.

TFS Integration Platform

Yes, I am starting with the oldest of the bunch. While unsupported and fairly old, the Integration Platform still works decently when given the chance.

There are lots of limitations though: you are limited to the Client OM, and you need some tricks to make it work, like creating a fake registry entry to make it believe you actually have Team Explorer 2012 (unless you install it, of course).

I reckon the Integration Platform works well these days for a limited-scope migration. The pain here is that everything needs to be set up manually, and for some reason it gets sluggish after a while.

TfsCmdlets

Say you want to quickly work with Areas and Iterations, or script their creation. This is where the TfsCmdlets are extremely powerful.

In my case, I am using them extensively to prepare empty target Team Projects. It is plain PowerShell, so you can manipulate your objects however you like - they make your life extremely easy.
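
A minimal sketch of that preparation, assuming the Connect-TfsTeamProjectCollection, New-TfsArea and New-TfsIteration cmdlets exposed by the module (check the module's help for the exact names and parameters of your version):

# connect to the target collection or VSTS account
Connect-TfsTeamProjectCollection -Collection 'https://myaccount.visualstudio.com'

# carve out the Area and Iteration structure in the empty target Team Project
New-TfsArea -Path 'MyProject\Frontend'
New-TfsArea -Path 'MyProject\API'
New-TfsIteration -Path 'MyProject\Sprint 1'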

You don't migrate anything with the TfsCmdlets, but they are an invaluable tool for all the ancillary items around the migration itself.

VSTS Work Item Migrator

The Work Item Migrator is an open source project from Microsoft that leverages the REST API layer of TFS and VSTS.

It is more of a sample of how to deal with the APIs IMHO, but it is an excellent starting point. It uses a Work Item Query as its source, which means you can easily scope what you want. Areas and Iterations need to be created beforehand.

One note here: a successful validation does not guarantee that the tool will migrate everything - the actual outcome depends on many factors.

VSTS Sync Migrator

Martin Hinshelwood's VSTS Sync Migrator is a real powerhouse. It is quite complex and has lots of features (including reconnecting commits to Work Items); it can take a little while to refine the result, but it is great.

You can also do remaps with this tool, so you can easily migrate from one Process Template to another - everything is driven by a configuration file you write yourself. What I really like is that I can have a very complex configuration but keep some of the steps in a disabled state, so I get a nice incremental experience.
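
Something along these lines - a loose sketch of the idea rather than the exact schema, as each processor in the real configuration.json has many more options:

"Processors": [
  {
    "ObjectType": "WorkItemMigrationConfig",
    "Enabled": true
  },
  {
    "ObjectType": "WorkItemQueryMigrationConfig",
    "Enabled": false
  }
]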

Tuesday, 17 July 2018

I want to move my project from TFVC on TFS to Git on VSTS, without command-line tools. Can I do it?

Many people do not realise how easy it is to combine existing technology to accomplish a certain scenario. This happened to me just last week.

For example: you have a project on a Team Foundation Server which uses TFVC. TFS is only reachable via the corporate LAN, you want to move the project to the new company’s VSTS account, and you also want to move to Git. To throw an extra spanner in the works, you want something easy to use that does not require any command-line work.

Does it sound too complicated? It is actually a matter of a couple of clicks.

The first step is to use the Import Repository feature on your local TFS, which converts a branch from TFVC ($/MyProject/main, for example) into a new Git repository.

You can retain up to 180 days of history, which is more than enough IMHO. If you need more, keep the old system around and look the history up there. Why? Because of how TFVC and Git differ, carrying everything across would not really make sense - you would just be adding weight to a repository that should be as nimble as possible. Also, you are limited to 1GB per imported branch.

Once you are happy with it you can add your VSTS target repository as a remote, and push it there. Job done.
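
For reference, the equivalent command-line steps boil down to something like this (the URLs are placeholders for your own server and account):

git clone https://myserver/DefaultCollection/MyProject/_git/MyProject
cd MyProject
git remote add vsts https://myaccount.visualstudio.com/MyProject/_git/MyProject
git push vsts --all
git push vsts --tags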

Tuesday, 10 July 2018

Review – Accelerate

As you know, I am not only a technology enthusiast but also very much into the business side of DevOps. And as a fan of The Phoenix Project, I really could not refrain from purchasing Accelerate 😊
Also, the focus is on High Performing Technology Organisations (HPTO from now on), which is a very broad subject intertwining technology, management and strategy. Enough to keep me interested.

I read it twice before writing this review. Yes, twice. And the conclusion is very simple: it carries huge horizontal value. This is not the typical technical or business book; its approach is more scientific, almost academic.

A real HPTO is a well-oiled machine that requires lots of work all across the board. And that is where the book shines for business value: despite the academic approach, each chapter can be picked up by any company as a standalone improvement project, moving it towards the maturity required to ‘be’ an HPTO.

Technical best practices? Chapter four. Infosec and the shift left on security? Chapter six. Employee empowerment through management? Chapter nine. Each chapter has enough material to keep you, your teams and your company busy for months, if you actually start a project on it. And given that I do not think every reader of this book works in an HPTO, you definitely should start some projects 😊

Summed up in a single sentence, the heart of the matter is that software is the actual business engine. That is what the book underlines as well - without a good software factory you simply cannot deliver value to your users, and if you don’t deliver value…

Wednesday, 27 June 2018

A set of tricky situations with HTTPS and TFS

HTTPS is more and more commonplace, not just for public websites but also for internal ones. This is extremely good for a number of reasons, but from an administration standpoint there are a few bits to keep in mind.

When it comes to Team Foundation Server in particular, here is a list of errors and problems that all go away with a common denominator: the right certificate.

The number one offender is of course the out-of-domain machine. If you have a domain-joined machine these problems simply do not happen, because the internal certificate is deployed by the domain GPO - hence you don't have to fiddle with it. When your machine is not domain-joined, things can easily go south.

Bear in mind - these are not security tips; this is just a collection of situations you will face if you deploy HTTPS with TFS.

Non domain-joined machines

If you are running a non-domain-joined machine, you need to procure the root certificate for your domain and install it in the Trusted Root Certification Authorities store on that machine. This needs to be done on any machine that is not part of your domain, otherwise you won't be able to do much of anything.
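
On a recent version of Windows you can do it from an elevated PowerShell prompt, assuming you exported the root certificate to a .cer file (the path is a placeholder):

# import the corporate root CA into the machine-wide Trusted Root store
Import-Certificate -FilePath 'C:\temp\corp-root-ca.cer' -CertStoreLocation Cert:\LocalMachine\Root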


Build agents

Build agents need to be reconfigured. You can't run away from this: if you don't do it, the agents will keep working until their authentication token expires and then go offline, at which point you will start seeing this error in the Event Log:

Agent connect error: The audience of the token is invalid.. Retrying every 30 seconds until reconnected

You need to de-register (config.cmd remove) and re-register your build agents in every pool. Not too bad, but it needs to be planned for.
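
From an elevated prompt in the agent folder it looks roughly like this - the values are placeholders and the exact switches depend on your agent version and authentication method:

.\config.cmd remove
.\config.cmd --unattended --url https://myserver/tfs --auth integrated --pool Default --agent BUILD01 --runAsService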


The Deploy Test Agent task in Build and Release

If you don't have your certificate installed on both the agent (if it is outside the domain) and the target machine (again, if outside the domain), you will get this cryptic error:

The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Exception calling ".ctor" with "2" argument(s): "One or more errors occurred."

It's a communication issue between the target machine and TFS. Once the certificate is installed, the error goes away and the task works normally. This GitHub issue also recommends enabling TLS 1.2, which is not a bad idea.
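
If you go down that route, a common way of making the .NET Framework on the machines involved prefer strong cryptography (and hence TLS 1.2) is the registry - take this as a sketch and test it outside production first:

New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force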


Git

Git holds a special spot in this collection because of how it handles SSL. Newer versions of Git for Windows made this really straightforward (hint: they support the Windows Credential Manager), but if you aren't running the latest and greatest, this is what can happen with Git on your local machine, even if it is joined to the domain:

C:\>git clone https://myserver/Collection/_git/Project 
Cloning into 'Project'... 
fatal: unable to access 'https://myserver/Collection/_git/Project/': SSL certificate problem: unable to get local issuer certificate

You can sort this out in many ways, but the best one is Philip Kelley's approach. It just works, even if it is a bit of a walkthrough. This applies not only to the client, but also to the build agent if you are not running a recent version of the agent itself: there it can be easily corrected by replacing the ca-bundle.crt file the agent ships with, which is not going to be touched until you update the agent to a newer version.
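
The gist of it is pointing Git at a certificate bundle that also contains your corporate root CA - the path below is just an example:

git config --global http.sslCAInfo "C:\git-certs\ca-bundle.crt"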

Also, a false friend:

error: RPC failed; curl 56 OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 10054
fatal: read error: Invalid argument, 255.05 MiB | 1.35 MiB/s
fatal: early EOF
fatal: index-pack failed

It can be all sorts of things, especially as the error points at OpenSSL - but check your connection's stability before messing with Git's postBuffer and compression settings 😃 if the git clone operation actually starts, the problem is not SSL authentication.
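
For completeness, these are the knobs people usually reach for - but only after having ruled out the network:

git config --global http.postBuffer 524288000
git config --global core.compression 0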

Wednesday, 20 June 2018

Easily handle internal settings while orchestrating components' deployments and parameters

After ten years of attending, and then speaking at, conferences, it always strikes me that what demos often miss are the real-world details that really make the difference.

Like...deploying an application with a pipeline. Everybody talks about it, right? And everybody (including myself!) has some demo-ready stuff to show around in case it might be required.

I am working on a sample application right now, and I realised how blind I was - even though I am deploying to different slots and environments and whatnot, I am still treating everything as a single monolith. Not really what you want these days, right?

Well, let's sort it out. Say you have an API component and a Frontend component: the best thing to do is to decouple the two so they can be independently deployed *and* mixed and matched depending on the requirement.

It is .NET Core in my case, so in my Frontend component's appsettings.json I created a dedicated section pointing at the API.

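Something along these lines - a minimal sketch where ApiSettings and BaseUrl are placeholder names of mine, and Slot is the property the pipeline will vary:

"ApiSettings": {
  "BaseUrl": "https://mysample-api.azurewebsites.net",
  "Slot": "dev"
}
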
Of course I modified the application so the configuration is registered in my ConfigureServices method and consumed in my Controller. The variable part in this case is the Slot property.
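
The wiring in code follows the usual ASP.NET Core options pattern - this is only a sketch, reusing the placeholder names from above (ApiSettings, HomeController):

// Startup.cs - ConfigureServices: bind the section to a settings class
services.Configure<ApiSettings>(Configuration.GetSection("ApiSettings"));

// HomeController.cs - the settings arrive via constructor injection
private readonly ApiSettings _apiSettings;
public HomeController(IOptions<ApiSettings> options) => _apiSettings = options.Value;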

Now comes the fun side of the story - of course I have a pipeline in place. How do I handle these settings?

The best approach here, given the relative complexity of this exercise, is to scope the relevant value by environment. The Dev environment will always point at the Dev instance of the API, Staging at Staging, and the last two environments are effectively production so I do not need to worry about adding a slot. It's not like I have cross-environment settings here.

The reason the variables are named that way is that I am using the JSON variable substitution option in the Azure App Service Deploy task, and as my property is not at the top level of the file, its name needs to spell out the full path.

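In practice it means a release variable whose name mirrors the JSON path, scoped per environment, with appsettings.json listed in the task's JSON variable substitution field. With the placeholder names above it would look roughly like this (the production-facing environments simply do not override the value):

ApiSettings.Slot = dev        (scope: Dev)
ApiSettings.Slot = staging    (scope: Staging)
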
Doing this ensures that each environment has its own setting, and it also helps you stay sane while handling internal app settings across your applications and environments 😉 It is really easy to do as well, so there is no reason to skimp on it.

Saturday, 16 June 2018

Quickly deploy a baseline SQL database with VSTS

"Sometimes we go full steam ahead with a complex solution for a very simple problem..."

That was the answer I gave to a friend of mine who asked me how to seed a baseline database for testing purposes in Azure with VSTS.

The obvious answer would be to keep your versioned SQL scripts in a dedicated repository and use them to rebuild the whole thing from code (which is by all accounts the most correct solution to this problem). But in this case there are other avenues.

Databases have been treated like second-class citizens for years - by tools and practices alike. For example, why not use BACPAC files for this exercise? At the end of the day, a BACPAC file contains the packaged version of a database at a certain point in time, including its data.

So if you have your BACPAC somewhere, get it into an Azure storage account and run this SqlPackage command inside a VSTS PowerShell Script task (of course you need to replace the variables and provide the actual path):

& 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\130\sqlpackage.exe' /Action:Import /TargetServerName:$(DBUrl) /TargetDatabaseName:$(DBName) /TargetUser:$(DBAdmin) /TargetPassword:$(DBPassword) /SourceFile:"<your location>/sample.bacpac"
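
If the .bacpac lives in a storage account, you can pull it onto the agent in the same script first - a rough sketch, where the URL and its SAS token are placeholders - and then point /SourceFile at the downloaded file:

# download the baseline .bacpac to the agent's temp folder
$bacpac = Join-Path $env:AGENT_TEMPDIRECTORY 'sample.bacpac'
Invoke-WebRequest -Uri 'https://mystorage.blob.core.windows.net/bacpacs/sample.bacpac?<sas-token>' -OutFile $bacpac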

Don't get me wrong, I love seeing a database fully integrated with the pipeline and that's how it should be. But in this specific case, I feel the tradeoff is worth it.

Also - this is a baseline database; nothing prevents us from running delta scripts against it as needed. But given it was for testing purposes, I highly doubt there is going to be much development on it in the future!