Sunday, 28 October 2018

Why you should scan your code within your pipelines

Like many others, I received an email from GitHub a couple of weeks ago about an old repository of mine.

This made me think about how important security scanning is in this day and age. Your code might have been top notch a couple of years ago, yet be dangerous today.
So, to have a bit of a laugh, I hooked up WhiteSource Bolt to a build of that code to see the actual state of the open source libraries used there.
WhiteSource Bolt is also free for Azure DevOps, so there is really little stopping you from scanning your code 😊 The result was kind of what I expected.

This is code from a couple of years ago – do you think your code from two years ago is still as good as it was back then? 😊


Monday, 22 October 2018

Unblock the SonarQube upgrade process when using Azure AD plugin for authentication

There is a well-known issue with SonarQube's Azure AD plugin, where an upgrade from v6.x to v7.x fails. Fixing it involves manually modifying the Users table outside of the upgrade process, and at the moment this is something you cannot avoid.

This happens because the external_identity column does not contain a unique value; instead, it is filled with 'Azure AD' for each user. It is not a critical column, so you should be able to modify it without issues.

Then I thought about a handier way of fixing this than just writing random data into the column. Whenever you sign in with the new plug-in, 'Azure AD' is replaced with your email address. So, I put together this very simple script.

Before you run this, remember that I bear no responsibility for it – it worked on my machine, it might not on yours 😊 always test it on a backup first!

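The gist of it is a single statement along the lines of the sketch below – this assumes a standard SonarQube schema where the users table exposes both the external_identity and email columns, so double-check the column names against your own database before running anything:

    -- Hypothetical sketch: mimic what the new plug-in does at sign-in,
    -- replacing the non-unique 'Azure AD' marker with each user's email.
    UPDATE users
    SET external_identity = email
    WHERE external_identity = 'Azure AD'
      AND email IS NOT NULL;
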
This takes care of the uniqueness of the value and enables the upgrade to go ahead. Needless to say, this script can be easily added to my proof-of-concept automated pipeline!

Friday, 12 October 2018

A small detail to keep in mind while exporting Build Definitions

As part of a migration you might want to move the Build Definitions for your pipelines – you can easily do this by using the Export Definition option in your Pipelines.

This creates a .json file, containing all the properties of your build pipeline, which you can then import into the destination project. Bear in mind, though, that there is no magic going on here: if you import it into a different Team Project, it will not automatically re-target your definition, so it will still point at the old repository and the old branch.
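To give you an idea, the exported file contains a repository section along the lines of the sketch below (the values here are made up for illustration) – these are the properties you need to fix up, by hand or with a script, before importing it into the new Team Project:

    "repository": {
      "id": "00000000-0000-0000-0000-000000000000",
      "type": "TfsGit",
      "name": "OldRepository",
      "url": "https://dev.azure.com/myorg/OldTeamProject/_git/OldRepository",
      "defaultBranch": "refs/heads/master",
      "clean": "false",
      "checkoutSubmodules": false
    }
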

It can be quite worrisome if you are moving things across Team Projects while keeping them available – there will be no warning, just odd errors. You’ve been warned 😊

Tuesday, 9 October 2018

Why Universal Packages?

You might have read about the new Universal Packages, something I am quite a fan of. You do not need a huge software system in order to use them: I have read about the many situations where they come in handy, but I believe I have a great one-size-fits-all example.

We know that Git, a file-system-based Version Control System, is not suited for binary storage. The solution I always recommended was to use a TFVC (yes, TFVC!) repository, so that you would get not only transactional consistency when consuming these files, but also versioning.

At the end of the day, these files would be stored in a database, hence TFVC fits the bill quite well. But it was a fairly basic solution for this scenario, as it does not offer what Universal Packages do. The whole idea is to create packages that can be easily consumed by other users, not to fiddle with yet another Version Control System.

Universal Packages not only do this, they also offer a great deal of compression – something that is really welcome when it comes to binary files.

Take my example: let’s say you store media files for your products – images, videos, anything that is not textual – and you need to consume these files during your pipeline’s execution, in whatever scenario requires them.

Compression (in terms of package size) means performance when consuming them, something that is extremely welcome IMHO. And since packages are versioned, you get versioning of your binaries as well – all by using something that is optimised for this scenario, instead of bending some other technology to fit.
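
To consume such a package during a build, the Universal Packages task is what I would reach for. A minimal YAML sketch could look like the step below – the feed and package names are made up, and you should double-check the exact task inputs against the task documentation for your version:

    # Download version 1.0.0 of a hypothetical 'product-media' Universal Package
    # from a feed called 'media-assets' into the artifact staging directory.
    steps:
    - task: UniversalPackages@0
      displayName: 'Download product media'
      inputs:
        command: download
        vstsFeed: 'media-assets'
        vstsFeedPackage: 'product-media'
        vstsPackageVersion: '1.0.0'
        downloadDirectory: '$(Build.ArtifactStagingDirectory)'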