Friday, 12 October 2018

A small detail to keep in mind while exporting Build Definitions

As part of a migration you might want to move the Build Definitions for your pipelines across projects – you can easily do this by using the Export Definition option in your Pipelines:

This will create a .json file you can import into your destination project with all the properties of your build pipeline. Bear in mind that there is no magic going on here: if you import it into a different Team Project, it is not going to automatically re-target your definition, so it will still point at the old repository and the old branch.
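If you want to re-target the exported definition before importing it, you can patch the JSON by hand or with a small script. Here is a minimal sketch, assuming the exported file has the usual top-level repository object (id, name, url, defaultBranch) – all file and repository names below are hypothetical, and you should check your own export, as field names may differ between versions:

```python
import json

# Hypothetical file names and repository details - adjust to your environment.
SOURCE_FILE = "MyBuild.json"              # the file produced by "Export Definition"
TARGET_FILE = "MyBuild.retargeted.json"   # the file you will actually import

with open(SOURCE_FILE, encoding="utf-8") as f:
    definition = json.load(f)

# Point the definition at the repository and branch of the destination Team Project.
repo = definition.get("repository", {})
repo["id"] = "<target-repository-guid>"
repo["name"] = "TargetRepository"
repo["url"] = "https://dev.azure.com/my-org/TargetProject/_git/TargetRepository"
repo["defaultBranch"] = "refs/heads/master"
definition["repository"] = repo

# Other environment-specific bits (project, agent queue, variable groups) are
# exported verbatim as well - review them before importing.

with open(TARGET_FILE, "w", encoding="utf-8") as f:
    json.dump(definition, f, indent=2)
```

Import the patched file as usual and double-check the repository picker in the definition editor before saving.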

It can be quite worrisome if you are moving things across Team Projects while keeping the originals available – there will be no warning, just odd errors when the build runs against the old repository. You’ve been warned 😊

Tuesday, 9 October 2018

Why Universal Packages?

You might have read about the new Universal Packages, something I am quite a fan of. You don’t need a huge software system to take advantage of them: I have read about many situations where they come in handy, but I believe I have a great one-size-fits-all example.

We know that Git, a file-system based Version Control System, is not suited for binary storage. The solution I always recommended was to use a TFVC (yes! TFVC!) repository, so that you get not only transactional consistency when consuming these files but also versioning.

At the end of the day these files end up stored in a database, so TFVC fits the bill quite well. But it is a fairly basic solution for this scenario, as it does not offer what Universal Packages do. The whole idea is to create packages that other users can easily consume, not to make them fiddle with yet another Version Control System.

Universal Packages not only do this, but they also offer a great deal of compression – something that is really welcome when it comes to binary files.

Here is my example: let’s say you store media files for your products – images, videos, anything that is not textual – and you need to consume these files during your pipeline’s execution, in whatever scenario requires them.

Compression (in terms of package size) means better performance when consuming them, which is extremely welcome IMHO. And as packages are versioned, you get versioning as well – all by using something that is optimised for this scenario, instead of bending another technology to fit it.
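To make this concrete, here is a minimal sketch of how such a media package could be published and then pulled down during a pipeline run. It assumes the Azure CLI with the Azure DevOps extension is installed and signed in, and the organisation URL, feed name and package name are all hypothetical – the exact commands and flags may differ in your setup:

```python
import subprocess

ORGANIZATION = "https://dev.azure.com/my-org"   # hypothetical organisation URL
FEED = "media"                                  # hypothetical feed name
PACKAGE = "product-images"                      # hypothetical package name

def publish(version: str, path: str) -> None:
    """Publish a folder of binary assets as a Universal Package."""
    subprocess.run(
        ["az", "artifacts", "universal", "publish",
         "--organization", ORGANIZATION,
         "--feed", FEED,
         "--name", PACKAGE,
         "--version", version,
         "--description", "Product media assets",
         "--path", path],
        check=True)

def download(version: str, path: str) -> None:
    """Download a specific version of the package, e.g. during a pipeline run."""
    subprocess.run(
        ["az", "artifacts", "universal", "download",
         "--organization", ORGANIZATION,
         "--feed", FEED,
         "--name", PACKAGE,
         "--version", version,
         "--path", path],
        check=True)

if __name__ == "__main__":
    publish("1.0.0", "./media")
    download("1.0.0", "./downloaded-media")
```

Because you always download a specific version, the pipeline consumes exactly the set of binaries it was built against.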

Tuesday, 25 September 2018

Should I use GitHub to use the ten free Azure Pipelines?

At yesterday's meetup we got this question: why should I use GitHub to get the ten free parallel Azure Pipelines if I already have a project in the service?

It is an excellent question, and the answer is that you should use GitHub only if you want to. As long as a project is marked as Public in Azure DevOps it will get the ten free pipelines!

You can verify it yourself: mark a project as Public:

Now browse to the Retention and parallel jobs section of the Build and Release settings menu, and check it yourself under the Parallel Jobs tab - 10 jobs!
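If you prefer scripting over clicking around, you can also read a project’s visibility through the REST API. This is a minimal sketch, assuming a Personal Access Token with project read scope – the organisation and project names are hypothetical, and you may need to adjust the api-version to what your organisation supports:

```python
import requests

ORGANIZATION = "my-org"          # hypothetical organisation name
PROJECT = "MyPublicProject"      # hypothetical project name
PAT = "<personal-access-token>"  # needs at least project read scope

# The Projects endpoint returns a "visibility" field (private/public).
url = (f"https://dev.azure.com/{ORGANIZATION}/_apis/projects/{PROJECT}"
       "?api-version=5.0")

response = requests.get(url, auth=("", PAT))  # basic auth: empty user, PAT as password
response.raise_for_status()

project = response.json()
print(project["name"], "visibility:", project.get("visibility"))
```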

Having free Pipelines is not about being forced to use GitHub: you get them as long as your project is public, regardless of where it is hosted.

Sunday, 23 September 2018

Use the free Azure Pipeline plan with your GitHub project!

It's been a couple of weeks since the Azure DevOps announcement, and I am contemplating an amazing London sunset while I prepare for tomorrow's event.

Before getting distracted by the landscape, I was setting up the free Azure Pipelines offer with a GitHub repository of mine... and I realised how frictionless it is!

Start from here and select the Free plan:

Then choose whether you want to apply the plan to all your repositories or only to selected ones:

Now either select an existing organisation or create a new one, and use a project (in my case a new one called GitHub, but you can use an existing one as well) to refer to the GitHub project. I say refer because the level of interaction with Azure DevOps is kept to a minimum - you are consuming it, but you are not doing anything else with it for now:

Once you are done, select the template that is closest to your project. In my case I selected the .NET Desktop template, as it is the most appropriate for the legacy code I am building:


This will create a YAML definition in your repository. Save it and trigger it - job done!

This was for something that had been sitting there since I barely remember when... so it should not be too difficult to set up Pipelines for your project! 😊

The build is already set up to perform CI and PR validation, so there is little effort involved beyond creating it and potentially customising it.

And the ten free parallel pipelines are not a joke - they are already there, provisioned for your account!

Monday, 17 September 2018

So... what happened to VSTS?


Yes, I know – it is a bit of old news by now, but I was on holiday, and I realised there are so many crumbs of information around that a summary post would help.

On 10th September, Visual Studio Team Services became Azure DevOps. First things first: does this mean that you can no longer target on-premises, AWS or GCP? You couldn’t be more wrong – there is no change on that front. You are free to use any technology and to target any environment with it; the product just happens to fall under the Azure umbrella.

I personally feel that the new name, despite being a huge change, underlines the fact that the stack is a business driver, not just a development tool. If you are an existing VSTS user, what changes for you is how the product is packaged – where you previously got all of the services together, now you can choose which ones to actually use: Boards, Pipelines, Repos, etc.
So you will get a nice per-project selector:

This means that if you want to use an Azure DevOps project just for the Work Item Tracking features and completely hide the Repos, you can totally do that.

Also, the whole UX changed. For the better, I reckon – I find it much improved in pretty much all areas; it just feels better to use. The URL format changed (from <org>.visualstudio.com to dev.azure.com/<org>), but it won’t break anything – Microsoft is well aware of this, and it is not going to touch the existing URLs for the foreseeable future.

Then, the elephant in the room – the open-source offering for Azure Pipelines. When I first heard about it, I had to double-check I had not misread. Ten free parallel jobs (effectively like having ten build machines) with unlimited minutes for OSS projects, regardless of what technologies you use. The agents run on Windows, Linux and macOS, making it truly cross-platform and open to everyone.

Put technology aside for a moment and think about it: ten parallel builds with unlimited minutes, for free. That is a significant cost completely slashed away, making end-to-end OSS delivery as easy as drinking a cup of coffee. I believe it is quite unprecedented – kudos to Microsoft for offering this.

Finally, Team Foundation Server is going to be renamed Azure DevOps Server from the next major release. No other changes on that front: it is still a regular snapshot of Azure DevOps brought on-premises. And no, I don't think it is going to be discontinued anytime soon!

That’s it in a nutshell. It’s a large revamp, but the underlying pillars are still there. Enjoy it!


Monday, 20 August 2018

A collection of SQL Server-related tips for the TFS Administrator


If you run Team Foundation Server on-premises, understanding how SQL Server works on the Data Tier is extremely important. Despite the push for the cloud, there can be many reasons why you need to stick with your on-premises installation of TFS – and the bigger the instance, the more SQL Server knowledge you will eventually need.

I am not a SQL Server expert myself, but my past as a consultant and my current role administering a huge TFS instance meant that sooner or later I had to deal with SQL Server one-to-one. Not always the happiest of encounters, to be fair 😊, but still, I learned a lot. So I thought that if you are in my position – where you might be the TFS Administrator – a collection of notes I took over time might come in handy.

SQL Server always wins – be prepared for it – and never, ever touch the databases

It’s not really a tip, but something to keep in mind: Team Foundation Server is essentially a set of web services in front of a set of databases, so if the databases have problems the whole product is down. Remember that – when something goes (very) wrong, look at the Data Tier; sometimes it is silly stuff like the drive hosting the master database being full…
Also, never, ever touch the databases manually unless instructed to by the Microsoft Support Team. Don’t be tempted to optimise them: the number of things that can go wrong is simply too high to risk ending up in a corrupted or unsupported state. Don’t do that.

Use the TFS Administration Console built-in backup tool if you can

The temptation to let someone else (the IT department, a DBA, etc.) deal with the menial task of backing up your Data Tier can be very high, but if you can, just use the built-in backup tool. It keeps the process transparent to you and takes away the hassle of creating maintenance plans.
If you have databases in the Full Recovery Model, it will take care of backing up your Transaction Logs in an atomic manner – this is very important. If you take backups at different times, for example, you might end up with identities in a Collection database that do not correlate with the Configuration database. To avoid this, you should mark your transactions as part of your backup plan (there is a small sketch of what that looks like below).
Also, don’t forget to back up your SSRS Encryption Key, otherwise you might find yourself restoring a useless set of databases in a Disaster Recovery scenario!
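To give an idea of what marking transactions means, here is a minimal sketch of the underlying T-SQL, driven from Python via pyodbc. The server name, database and marker table are hypothetical, and the TFS backup tool does all of this for you – this is only to illustrate the mechanism, not a replacement for it:

```python
import pyodbc

# Hypothetical connection string - point it at your own Data Tier.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MyDataTier;DATABASE=Tfs_Configuration;Trusted_Connection=yes")

# A named, marked transaction writes a mark into the transaction log.
# Using the same mark name across all TFS databases at the same time is what
# lets you later restore them all to a single consistent point.
MARK_SQL = """
BEGIN TRANSACTION TfsBackupMark WITH MARK 'TFS synchronized backup';
    -- the mark is only recorded if the transaction updates the database,
    -- hence the write against a (hypothetical) marker table
    UPDATE dbo.tbl_BackupMark SET LastMark = GETUTCDATE();
COMMIT TRANSACTION TfsBackupMark;
"""

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    conn.execute(MARK_SQL)

# At restore time every log restore stops at the same mark, e.g.:
#   RESTORE LOG Tfs_Configuration FROM DISK = N'X:\Backups\Tfs_Configuration.trn'
#       WITH STOPATMARK = 'TfsBackupMark', NORECOVERY;
```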

High Availability means AlwaysOn!

To be fair, AlwaysOn is not strictly required for High Availability, but it makes your life so much simpler. It makes Highly Available deployments a breeze, and even if you have to factor in some adjustments to your habits, it is worth it every single time.
Beware though: even if you implement AlwaysOn for your Database Engine, you will not get Analysis Services for free on the same setup – that is a different deployment altogether.

Keep an eye on your drives

This is something I experienced fairly recently – aside from the usual recommendation on where to put your databases (system or otherwise), if you have a very large database you could run into file system limitations that prevent DBCC CHECKDB from running and make you lose sleep. If you happen to experience these, it is worth knowing that not everything is lost and you might not even need to restore from a backup.
NTFS has a format switch (/L) designed around large files; it is an excellent starting point, although it means reformatting your drives. Another solution revolves around using ReFS instead of NTFS – it is somewhat of an unknown, but after running it for a while in my homelab and using it to solve part of this problem, I can say that ReFS is a powerful “tool” (I can’t really call a file system a tool, but for lack of a better word…) to resort to if you find the dreaded error 665 in your logs.

Remember to check what is going on

I have been using a couple of queries for… I don’t know, ages. They help because they show transparently what is going on within a SQL Server instance (especially if you need to understand what AlwaysOn is doing), and they provide information that helps diagnose certain errors.
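The original queries are not reproduced in this post, but a minimal sketch of the kind of DMV queries that serve this purpose could look like the following – the server name is hypothetical, while the DMVs shown (sys.dm_exec_requests, sys.dm_hadr_database_replica_states) are standard SQL Server views:

```python
import pyodbc

# Hypothetical connection string - adjust server and driver to your environment.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MyDataTier;DATABASE=master;Trusted_Connection=yes")

# What is the instance busy with right now?
ACTIVE_REQUESTS = """
SELECT r.session_id, r.status, r.command, r.wait_type,
       r.percent_complete, t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;            -- skip system sessions
"""

# How are the local AlwaysOn database replicas doing?
ALWAYSON_STATE = """
SELECT DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,
       drs.redo_queue_size
FROM sys.dm_hadr_database_replica_states AS drs
WHERE drs.is_local = 1;
"""

with pyodbc.connect(CONN_STR) as conn:
    for label, query in (("Active requests", ACTIVE_REQUESTS),
                         ("AlwaysOn state", ALWAYSON_STATE)):
        print(f"--- {label} ---")
        for row in conn.execute(query):
            print(tuple(row))
```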


Thursday, 9 August 2018

How VSTS Sync Migrator is going to change the way you migrate to VSTS

Like I said in my last post, I really enjoy using VSTS Sync Migrator for Work Item migrations.
There are a few reasons why I believe this tool stands out from the rest, and it is not just because of its complexity - in a nutshell, you can use it not just for tip migrations, but to actually filter and sanitise what you are importing into your target TFS or VSTS.

Firstly, you can run each processor (call them steps if you want) independently. That is very important when it comes to understanding what each one of them does. You don't really want to use something that starts, does stuff and then fails with an enormous log file.

Each processor is extremely specialised and usually backed by a WIQL query - again, sometimes quite complex, but extremely powerful and flexible.
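If you are not familiar with WIQL, it helps to run your query against the source project first and see which Work Items it would pick up. Here is a minimal sketch using the Work Item Query Language REST endpoint – the organisation, project, PAT and the query itself are all hypothetical, and this is just a way to preview a query, not how the migrator itself runs it:

```python
import requests

ORGANIZATION = "my-org"          # hypothetical organisation
PROJECT = "SourceProject"        # hypothetical source Team Project
PAT = "<personal-access-token>"  # needs work item read scope

# A sample WIQL query: only Bugs and User Stories that are not removed.
WIQL = {
    "query": (
        "SELECT [System.Id], [System.Title] "
        "FROM WorkItems "
        "WHERE [System.TeamProject] = @project "
        "AND [System.WorkItemType] IN ('Bug', 'User Story') "
        "AND [System.State] <> 'Removed'"
    )
}

url = (f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
       "/_apis/wit/wiql?api-version=5.0")

response = requests.post(url, json=WIQL, auth=("", PAT))
response.raise_for_status()

ids = [item["id"] for item in response.json()["workItems"]]
print(f"{len(ids)} work items would be in scope, e.g. {ids[:10]}")
```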

You can also run multiple instances of the Migrator, targeting different Team Projects in VSTS - having them side by side isn't usually a problem.

Then, you have a really powerful capability for shaping your data in the best possible way. By "shaping" I mean "mapping": field to field, replacing values, even replacing values via Regular Expressions or mapping different Work Item Types.

This can enable all sorts of scenarios where you can change a Process Template, or make your very verbose customised form much more readable by merging or moving around fields' data.

Finally, no ancillary item is left behind, including Work Item Queries (which carry huge business value IMHO) and commit information. You can link commits to Work Items even if you migrated the repository under a different name.

It takes a while to get all the bits right - there are lots of options, but the documentation is quite good and it will easily guide you through. Fellow MVP Mohamed Radwan also recorded a quick demo of how to use it.

Now, onto more VSTS migrations 😀