Sunday, 15 December 2013

Keys for a painless TFS upgrade experience

This weekend I had to work on the upgrade of the main Team Foundation Server we have in production. Everything went fine, even though not everything went according to the slick plan we had. The whole effort took longer than just a day, though, and it comes with several takeaways.

Here is what I learned, of course involving our shiny Team Foundation Server 2013. Keep in mind the scenario: thousands of users all around the world, terabytes of databases. Here we go:

  1. Plan, but do not overplan.
    We had a plan, but you cannot cover everything. You cannot plan every single step, nor have a plan B, C and D for each of them. I am not saying you should go without a plan; I am suggesting you stay confident and flexible.
    Start planning early enough so that the right people are involved, and keep in mind the basics of TFS and its known issues, as well as the history of the server (in our case it was essential). Use the TFS Upgrade Guide as a guideline to follow.
  2. Test the core, test the basics.
    This TFS had survived upgrades all the way from 2005, and we discovered things we thought impossible. Your team’s skills will let you fix the problems you might find.
    So test the core, verifying that all the essential pillars are working. If you find issues, try to fix them, or plan a deferral (maybe with CSS?) if they are not critical.
    Do not start by trying the latest, greatest tech combo. For example, do not upgrade the test server to 2013 and the build servers at the same time. Test the existing servers (or clones of them) against the 2013 server, and only when you are sure of the result, test their upgrade.
    File as much documentation as you can, but – again – you cannot cover 100% of the cases.
  3. Problems are problems only if they show up in production. Otherwise they are issues to fix.
    You might find something to fix at a later time, something which does not impact all the users but maybe a small percentage. You can work that out later.
    We, for instance, hit a very nasty problem during the live production update. We had no plan for it, as it did not happen in the test runs. What did we do? We tried several solutions (one worked, luckily :)) while someone else worked on a mitigation plan to use in case no solution panned out.
    We did not have to use it, but if you are running a service you have to make trade-offs, and stopping the whole service because of just one part of it is unacceptable.

Remember that Team Foundation Server is modular, so you can exclude a part of it in case of problems, and that the upgrade for those coming from the previous version (2012 in our case) is usually almost effortless.

Tuesday, 10 December 2013

A pragmatic approach to better manual testing

I recently had to deal with a small Functional Testing Team (two people in total) which was used to a non-structured approach to manual testing: several Word and Excel files to track down the required information, absolutely no automation in place, and a very personal way of dealing with issue reproduction and environments.

I thought that giving them all the Visual Studio ALM features at once was surely going to be overkill. It was quite a greenfield situation, but combined with the requirement of retaining their habits and their know-how as much as possible.

The first step is converting the sparse information they keep as test cases into Test Cases in Team Foundation Server. It seems obvious, but it is not. You have to set up a descriptive standard, detailing steps and expected results, so that everything is coherent with the actual target and they feel comfortable with the new tool (MTM) and process.

It is also extremely important to standardize the environments they work with a bit. If they use their own local machines, set up a standard covering not just the tooling, but even how to reach the machines. If you do this, a newbie tester can be thrown into the mix and start working in near-zero time. Once they realize that they can replicate their recordings, and can even get videos of them, they will go nuts. But it is not enough…

The next logical step is either converting their action recordings into Coded UI tests or using Lab Management. There are valid reasons behind both:

Coded UI Tests                    Lab Management
Validation                        Standardized environments
Extended behaviours               Complex configurations
Integration with Visual Studio    Convenience for the team members

You could adopt both, of course, but it depends on the resources you have. I went the Lab Management route – as they are pure functional testers, with very little coding confidence – and they are happier and more productive than ever.

Wednesday, 4 December 2013

New TFS Upgrade Guide out!

The ALM Rangers just released v3 of the TFS Upgrade Guide. Apart from the official announcement, etc., I wanted to give you a valid reason to download and use it if you are still sceptical.

Scenarios. This guide covers a whole lot of real-world scenarios, like upgrading from 2008 or from different configurations. So it is not just a basic walkthrough: it provides robust guidance from people who have already been through the aforementioned scenarios.

Give it a try, you won’t regret it ;)

Wednesday, 13 November 2013

Recap of the Visual Studio launch!

So, very packed day!

Visual Studio Online is among us. Up to yesterday it was nothing more than Team Foundation Service, but now it comes with a lot more.

First of all: the plans. We now have three plans: Basic, Professional and Advanced. Basic and Professional expose the same set of features, apart from the fact that Professional includes a monthly subscription to Visual Studio Professional. Yes, the IDE! And of course, as of today, you get five users for free with the Visual Studio Online Basic plan.
The Advanced one is another story – it is the biggest one, but without the IDE subscription.

Codename Monaco is something incredible - finally they managed to deliver Visual Studio (well, part of it I would say) through the browser. Yes. Visual Studio in the browser. Just for Azure Web Sites at the moment, but still awesome!

InRelease changed its name, becoming Release Management for Visual Studio 2013. It’s a very powerful solution for delivering artifacts, as we know, and it is perfectly integrated with Team Foundation Server and the other tools.

Visual Studio 2012 got some love as well - the Update 4 is out!

And finally, Application Insights is the shiny new feature of Visual Studio Online – it fills the gap for a monitoring solution which is not SCOM. But keep in mind – you can use both! Here is my introductory post about it.

Application Insights in Visual Studio Online

One of the coolest things about Visual Studio ALM is that it is a continuously evolving platform. Application Insights is the latest addition to the family, empowering the monitoring story on the Visual Studio Online side.
First of all, you have to download the latest Microsoft Monitoring Agent. This version allows you to configure both Visual Studio Online Application Insights and the SCOM integration – a huge leap forward. You can monitor .NET and Java applications, running on-premises or on Windows Azure, as well as Windows Phone 8 apps.
Then you must enter the Account ID and the Instrumentation Key. Both are unique to the Visual Studio Online account.
It is possible to choose Microsoft Update as an update source. It is the recommended option, and unless you already have another update source (WSUS, SCCM) you don’t have a lot of choice :)
That’s all. After the installation finishes, the configuration command prompt will automatically pop up, scanning your IIS. I am running it on a Microsoft demo VM, so never mind the TFS applications and the others. The one we care about is FabrikamFiber.Web.
Now you only need to give the service a few minutes of data gathering so it can start composing your data. Meanwhile, one of the many possible choices is to create a summary dashboard – here’s how:
Correct: you only need to set a name and select the Application you want to aggregate data for. It is pretty cool, by the way :)
The opportunities are broad – you can monitor the whole Application or deep dive into a specific performance or reliability value. Keep in mind that you can define a metric very quickly and easily, so you can even use your own metrics as a baseline for evaluating the Application’s lifecycle and its values.

Friday, 8 November 2013

Feature Flags, the cornerstone of Continuous Delivery – a jumpstart

I had to give a talk about this, so it is worth sharing it here as well!
One of the pillars of Continuous Delivery is the broad usage of Feature Flags in the code.
What’s that? It is a concept introduced by Flickr in their pioneering use of DevOps and Continuous Delivery, and endorsed by Martin Fowler as well. To keep it simple, let’s use an example: everybody has an electrical control panel at home. The control panel’s complexity may vary – it could be a simple on-off switch, or be shaped so that every single room of the house gets one switch of its own. These switches are the Feature Flags.
I guess it is pretty obvious why they should be used: with them, you improve testability and troubleshooting, and you facilitate incremental shipping.
Of course technology is not everything. You should be backed by a very solid process, otherwise the attempt to be Flickr-like (no branches, just feature toggles) would be a bloodbath.
An easy way of starting could be as follows:
It is a beginning: we can use Feature Branches to develop features (separation of concerns and modularization design patterns apply here) and merge them into the main development branch. Fairly easy and fairly classic, I’d say.
Now let’s go a bit deeper into the code. As a sample I am using a WinRT app, leveraging FeatureToggle as a helper. FeatureToggle is a nice OSS library which enables Feature Flags in a very easy way; it was the easiest to set up (a NuGet package) and use. Moreover, it works on Windows Desktop, WinRT and Windows Phone. Definitely worth a try, IMHO. But keep in mind it is just a sample, with no pretence of being production-ready :)
I created two classes with my features inside – I wanted to keep the example as simple as possible, so don’t mind the code’s silliness – which implement one of the out-of-the-box toggles, simply an interface.
After this, in the MainPage() I set the available features (hardcoded for the example, because in WinRT there are no .config files; in a web app I would read the web.config file instead) – and of course in the FF.Dev branch I get both the Yes and No features because of the merge from the feature branches – and then the bound controls are instantiated.
If I do not want to enable the Yes feature, I just have to set its configuration value to false, and even though the code is there, it won’t be available to the user. Which is exactly the point of Feature Flags.

Just run the application with the settings you want, and you will get the expected result. In the case of an old-fashioned web application, or anything else, you can rely on XML files (and their transformations, if needed) for configuration.
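The post’s actual sample is a C# WinRT app using the FeatureToggle library; as a language-neutral sketch of the same pattern (the class, the config dictionary and the control names here are my illustrative stand-ins, not FeatureToggle’s API), the core idea boils down to this:

```python
# Minimal feature-flag sketch: read toggle values from configuration
# and guard each feature behind them. All names are illustrative only.

FEATURE_CONFIG = {        # stand-in for web.config / app settings
    "YesFeature": True,
    "NoFeature": False,   # the code ships, but the feature stays dark
}

class FeatureToggle:
    """A named on/off switch read from configuration."""
    def __init__(self, name: str):
        self.name = name

    @property
    def enabled(self) -> bool:
        # Unknown flags default to off: the safe choice for dark launches.
        return FEATURE_CONFIG.get(self.name, False)

def render_page() -> list:
    """Instantiate only the controls whose feature is toggled on."""
    controls = ["Header"]
    if FeatureToggle("YesFeature").enabled:
        controls.append("YesButton")
    if FeatureToggle("NoFeature").enabled:
        controls.append("NoButton")
    return controls

print(render_page())   # ['Header', 'YesButton']
```

Flipping "NoFeature" to True in the configuration – with no code change and no redeploy of the feature itself – is all it takes to light the dark feature up.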

Friday, 1 November 2013

How to change a TFS GUID, and why it is extremely important

If you are working on a test upgrade of an existing Team Foundation Server by restoring it onto another machine, keep in mind that restoring alone is not enough to tell it apart from the existing instance.

Yes – you are changing machine names and IPs – but you need to change the GUIDs as well, otherwise the Visual Studio client cache is going to go crazy. There is one GUID per Team Project Collection, and they are stored inside the Tfs_Configuration database.

You need to quiesce the server first, then use the TFSConfig ChangeServerID tool:

tfsconfig changeserverid /sqlinstance:<your-sql-instance> /DatabaseName:Tfs_Configuration /projectcollectionsonly

You can specify if you are running on SQL AlwaysOn as well.

Monday, 21 October 2013

Converting TFS 2008 servers to 2012 and 2013

As you know, Team Foundation Server 2008 has several limitations which prevent a direct migration to 2012 or 2013. It is a legacy release, good in its time but definitely not adequate anymore.
Migrating it through an in-place upgrade to 2010 followed by a move to the 2012/2013 version (from now on, modern) is feasible, but often the server where the legacy version runs is old as well – typically with 32-bit-only CPUs – making the migration path harder because of the move.
There is a very handy tool for dealing with this scenario: TFSConfig Import.
It is meant to convert the legacy server into a new Team Project Collection in the modern instance. It won’t migrate reports or SharePoint, but it is worth using, as it saves a ton of time in the process.
You need to move the databases onto the current Team Foundation Server – or upgrade the legacy one to a version supported by the modern release you are using – and then launch the command as an administrator:
TFSConfig Import /SQLInstance:<servername> /CollectionName:<MigratedCollectionName> /confirmed
The /confirmed switch confirms that you have backups of the databases you are converting, as after the process they will no longer be compatible with the old TFS.

Thursday, 17 October 2013

Visual Studio ALM 2013 is RTM: is it just Visual Studio and TFS?

As we know, Visual Studio ALM 2013 has been RTM’d and released. But is it just a new Visual Studio and a new Team Foundation Server?

Obviously not. Apart from these, we are talking about:

And – as usual – there is more on the Team Foundation Service side every three weeks, thanks to its agile release cadence.

Thursday, 10 October 2013

Fix the “Cannot perform 'SetProperty of Text with value “” on the hidden control.” error

If you are trying to use Test Manager to record a manual test with a web application and then play it back, you might find a nasty issue while inputting stuff: the error “Cannot perform 'SetProperty of Text with value "value"' on the hidden control.”.

It is caused by the Microsoft Security Update MS13-069 (KB2870699), which introduces several security fixes as well as this strange behaviour.

To fix it and correctly play back the test, you need to install the latest Update of Visual Studio 2012, or use Visual Studio 2013, which is not affected.

Friday, 4 October 2013

Microsoft Monitoring Agent 2013 overview

A couple of weeks ago Microsoft released the Microsoft Monitoring Agent 2013, the next generation of the very useful Standalone IntelliTrace Collector.

It is a very interesting tool – as I mentioned at DevReach – because it can be used as a standalone collector or integrated with System Center Operations Manager, being then the foundation of the bridge connecting IT Operations teams and Development teams.

If you want to run it, it is absolutely easy:

What’s next?

A wonderful IntelliTrace file :)

Monday, 23 September 2013

Review – Professional Scrum with Team Foundation Server 2010

I can already hear the voices… “why are you reviewing a three-year-old book?”, “It is about 2010, it is outdated!” and so on.

Trust me instead – I would strongly suggest you buy it, for a simple reason: all the features used and explained in this book are the same or very similar from 2010 onwards, as we are talking about essential, cornerstone features.
Moreover, the book covers the concepts needed for Scrum, so it is very helpful when you have to introduce a team to it.
The authors (Steve Resnick – Architect at Courion Corporation, Michael de la Maza – Agile Coach and Investor, Aaron Bjork – Senior Program Manager at Microsoft) explain everything at a very good pace and with the required level of detail: they go step by step through every required part of the Scrum methodology, and at the end you will find a dedicated chapter on Spikes and an appendix dedicated to the Scrum Assessment.
It is very handy to use as a quick reference: just grab the required chapter and you will get your answer.
I am going to use it with a new team to be introduced to Scrum, by keeping it on the table during the Daily Scrum and every meeting. I bet it is going to be helpful :)

Tuesday, 17 September 2013

What an automated release looks like – the InRelease side

Once you have an automated build leveraging InRelease for its automatic deployment, the workflow is pretty easy and straightforward.
Let’s say we make a trivial edit to a file in our project:
We check it in, and we trigger a Continuous Integration build.
Nothing too fancy so far. But of course we have to use the InRelease Build Definition Template to get those features, so we are going to see some specific activities in the build log, targeted at this.
The next steps are extremely easy and straightforward: depending on the Release Settings, someone designated as Approver should approve the deployment:
Once this person/team has accomplished the task, that’s basically all :) The only missing piece would be approving the Release (again, based on its settings – so it can be skipped, if needed).
A very useful aid is the possibility of scheduling the transition (e.g. QA to Prod) to a future time, in order to be compliant with whatever company policy you might have.
And this is really all :)

Monday, 9 September 2013

Work Item Queries Charts

In the latest sprint, Microsoft released a very nice feature for reporting purposes: Charts for Work Item Queries.
It is incredibly easy to use. You do not need anything but Team Foundation Service. You can click on the link while opening a WIQL query:
and create a new Chart.
Unfortunately, at the moment it just supports flat lists, but in the future it is going to support the other two WIQL result sets. It is very handy – you can embed it into an email or wherever – and it works with no configuration. Well done, team!
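For reference, a flat-list query – the only kind currently chartable – is nothing more than a plain WIQL statement. Here is a minimal, hypothetical example (the field names are the standard TFS ones; the filter is mine):

```sql
SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.WorkItemType] = 'Product Backlog Item'
ORDER BY [System.State]
```

Tree and direct-links queries (the other two result sets) are the ones not yet supported by charting.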

What an automated release looks like – the Team Foundation Build side

Well, when it comes to the Automated Release, our beloved Team Foundation Build does not have that much to do… apart from its usual stuff, with a slightly different Build Process Template.
To enable an Automated Release, you have to add a custom template and create a Build Definition using it. You will find the template in the InRelease installation folder (…\InCycle Software\InRelease\Bin).
You will notice that there is a dedicated InRelease section for its parameters. If Release Build is set to True, the build will be deployed.
You can set a Release Target Stage if needed – leaving it blank will make the release go through all the possible stages unless stopped – and a specific Configuration to Release, leveraging the same configurations you might use in the Configuration Manager.
Every component must be configured to Build with Application or Build Externally. A Build Independently component won’t work. This is a known limitation of InRelease.
And obviously, both Acceptance Step and Deployment Step of the very first stage of the associated release path must be Automated.
The main task it performs is Configuration File Tokenization. This feature, together with a specific syntax in the configuration files themselves (usually __VALUE__), performs a build-time transformation based on the target stage.
To achieve this, the Client component of InRelease must be installed on the build server.
It is nothing hard or tricky – you can even integrate these specific tasks into a pre-existing customized Build Template, so you end up with a single template if you need to.
After that, the InRelease Release Service is invoked, and everything passes through that, with the related configurations and settings.
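To make the tokenization idea concrete, here is a minimal sketch of the pattern – not InRelease’s actual implementation, and the stage names and values are invented for the illustration: markers in a config template get replaced with per-stage values at build time.

```python
import re

# Hypothetical per-stage values: InRelease keeps these in the Release
# Template's configuration variables; this dictionary is just an illustration.
STAGE_VALUES = {
    "QA":   {"DB_SERVER": "qa-sql01",   "SITE_URL": "http://qa.example.test"},
    "Prod": {"DB_SERVER": "prod-sql01", "SITE_URL": "http://www.example.test"},
}

def tokenize(template: str, stage: str) -> str:
    """Replace __TOKEN__ markers with the values for the target stage."""
    values = STAGE_VALUES[stage]
    return re.sub(r"__([A-Z_]+)__", lambda m: values[m.group(1)], template)

template = '<add key="DbServer" value="__DB_SERVER__" />'
print(tokenize(template, "QA"))   # <add key="DbServer" value="qa-sql01" />
```

The same template, tokenized for Prod instead of QA, yields the production value – which is exactly why a single build output can be promoted through the stages.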

Sunday, 1 September 2013

Your first InRelease Release

In the previous posts we saw the main pieces of InRelease, so let’s see how they fit together!
You are going to create a new Release: it is based on a Template, and it has two essential pieces of information to fill in: the Target Stage, which is basically “where to stop”, and the build to use, which could be the latest, a selected one, or even a build run on the fly.
You should define the stages, if you haven’t already. It is extremely easy, as you just specify who approves each stage and who the owner is. You can add automation (starting a build) if you need to.
Releases are fully trackable, and you get a step-by-step progress sequence to check, with its logs.

Friday, 30 August 2013

Team Foundation Server 2013 in production? With some help! TFS Upgrade Weekend

Microsoft scheduled a very interesting initiative for the 13-15 September: the Team Foundation Server 2013 Upgrade Weekend.

What is that? It is a weekend during which organizations can upgrade their Team Foundation Servers to the 2013 release with the latest go-live prerelease, and get free support from Microsoft CSS in case of issues.

I strongly suggest joining us (I am going to upgrade several instances as well, both personal and corporate); to do this you are simply required to register here, so they can staff the appropriate people.

If you need to download the previews, here you can find what you might need!

Thursday, 22 August 2013

InRelease Components and Release Templates, what are they?

The most important parts of InRelease are Components and Release Templates, which enable deployments to be smooth and automatic.
You can package almost anything as a Component, but everything starts from a build:
You can get the Component in several ways: it can be built with the application, built independently, or just picked up from a file share.
The next step is defining how to deploy this Component. You must use an InRelease Tool, which could be just your own basic script, as you can see.
Besides that, you can define some variables to be replaced and, most importantly, the associated Release Templates.
The Release Template is the core of the Release Pipeline. It is built on Windows Workflow Foundation, and it defines the step-by-step deployment activities to be carried out.
Right there you are going to shape every single step with the related configuration variables (as in the Create Web Site activity, for example). You can add as many Components (like Call Center Site) or Activities (like Create Web Site) as you want.

Monday, 12 August 2013

Some basic concepts of InRelease

Despite being tightly integrated with Team Foundation Server, InRelease is – as of today – a standalone product.
It consists of three components: a Server, a Client, and as many Deployers as you might need, installed on the servers where you are going to deploy your application.
Everything is so granular that it reminds me of a Russian nesting doll.
You are going to create a Release. The Release is based on a Template which defines the stages and the actions of a deployment.


It is based on Windows Workflow Foundation, and it is very intuitive. You can find a huge number of out-of-the-box actions, plus the InRelease Components; we are going to see in a separate post what they are and how to customize them.

In order to be able to deploy, the Template must feature Environments, containers for the target Servers running the Deployer component.
In my case, I created two Environments (Development and Production), each containing one server (dev.domain.tld and prod.domain.tld).
As you can see in the picture, for this sample the deployment pipeline is extremely easy: create a folder in the target server and copy some files into it.
Then the simplest release workflow is based on approvals: you must approve the deployment and its successful completion. Approvals and rejections can be made from both the Console and the handy web interface.

These are the basic concepts you might need to know about InRelease :)

Monday, 5 August 2013

Letting System Center Operations Manager 2013 and Team Foundation Server 2013 Preview talk together

Despite TFS 2013 still being a Preview, it is fully compatible with the other Microsoft products.
To enable the DevOps story, you need to follow the TechNet procedure as usual, but in addition you need:
  • to install the TFS 2012 Object Model
  • to restart the SCOM HealthService via PowerShell (Restart-Service HealthService)
  • after you successfully connect to the server with the wizard, you will get a TF223006 error regarding the command-line tools. Don’t worry: save the configuration, and manually add the OperationalIssue_11 Work Item Type from the Operations Manager installation media.
and it works!

Thursday, 1 August 2013

A little, unnoticed feature in Visual Studio – Notifications Center

It is not something you go around shouting “This is a life changer!” about, but IMHO it has its own dignity, and it is going to gain more and more importance in future releases.
I am talking about the Visual Studio Notifications Center. It is based on the Connected IDE – maybe the first example of its integration into Visual Studio, together with the Roaming Settings.
It is a small icon on top of the IDE…
…which opens a Notification pane:
IMHO it is important because it could become the hub for informative notifications (help installation, license expiration) as well as updates.

Saturday, 20 July 2013

InRelease and Team Foundation Server, tips for the setup

Recently Microsoft released a preview (non go-live, unfortunately) of InRelease for Team Foundation Server 2013. I am starting to have a look at it, so expect some content in the near future :)

First of all, the setup. You can be the most privileged user in your network, but you still need to use the following command for the installation:

msiexec /i InCycle_InReleasePreview.msi


Otherwise you will get an error.

After successfully completing the installation you might still run into some errors. Nothing to worry about too much; I suggest you keep an eye on the Windows Event Viewer (for IIS, InRelease uses a 32-bit Application Pool, which can be a source of trouble in certain environments) and on the logs in C:\ProgramData\

They are pretty useful, as they are separated by component, so you get an error log for the console, one for the server and one for the deployer.

Finally, here you can find a support page with the most common errors and solutions.

Thursday, 11 July 2013

Enhancing Agile Portfolio Management experience with Backlog Mapping

Yesterday the Team Foundation Service team released the Sprint 50 build, and one of the features shipped is the Backlog Mapping.

Nothing too big or hard to explain: if you are creating a Portfolio within Team Foundation Service, you will surely end up with a flat list of Features and a flat list of PBIs/User Stories.

With Backlog Mapping (which is just a pane on the right side), you can drag & drop a PBI you want to be a child of a certain Feature, and it will be automatically linked.

It is a very nice and effective feature, which combines a strong User Experience with a tangible result.

Wednesday, 3 July 2013

How the best practice made it into the product: Workspace Mapping in Visual Studio 2013

It is well known that the best practice says “one workspace per project”. Unfortunately it is not always possible to make it the default, for various reasons (typical audience, etc.).

In Visual Studio 2013 the best practice made it into the product: the default Workspace Mapping is exactly one per project! In fact, if you configure the Workspace for a new Team Project, you will be prompted to configure it:


After clicking Configure, you will map the Team Project root to a specific folder inside your User’s folder:



It is definitely a good improvement, as it prevents the creation of one big, monolithic workspace which can take ages to download when needed, with all the related performance problems.