Monday 22 December 2014

Troubleshoot a Team Project deletion

A colleague of mine once said “it’s never a stupid question if you don’t know the answer.” So this post might sound stupid, but I had people asking for it, hence… here it goes!

You might need to delete a Team Project, and it is a matter of seconds, isn’t it?

image

Unfortunately, it is not always the case. But you can do a lot to understand what went south, just by using the TFS Admin Console.

Firstly, when you have a DeleteProject job running, you can actually check what it is doing. It is not very intuitive, but if you double-click it, you can access this:

image

Ok, the job fails. You know what? If you double-click the failed job you can get a very detailed log:

image

and digging down there you will surely find the reason why the job fails:

image

In that specific case, well… just size your testing environment accordingly, ok? :)

Tuesday 9 December 2014

Reducing Technical Debt with Smart Unit Tests

One of the reasons behind Technical Debt is the lack of appropriate test suites around a certain feature. Especially when implementing something new, tests are critical in shaping a robust, quality solution. Often, if you have something in the works and you are not strictly practising TDD, tests lag behind where they should be.

Visual Studio 2015 introduced Smart Unit Tests, which are nothing but the former MSR Pex project, rebranded and productised. What Pex – now a Smart Unit Test – does is analyse your code and create a basic suite of unit tests covering the basic and boundary scenarios. Here is an example:

image
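
Since the screenshot does not travel well in text, picture a method along these lines – an illustrative sketch only, not the exact code in the image:

// A hypothetical, minimal target for Smart Unit Tests: a plain method with an obvious
// boundary case (a null array) that the engine can discover on its own.
public static class Calculator
{
    public static int Sum(int[] values)
    {
        int total = 0;
        foreach (var value in values) // throws NullReferenceException when values is null
        {
            total += value;
        }
        return total;
    }
}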

Right-click on the method and select Smart Unit Tests:

image

and here is the result:

image

Of course – this is a really, really basic scenario. What is interesting IMHO is how it is doing it behind the scenes:

image

As mentioned, it is a full-fledged unit test. Very basic, but still a good starting point, and a time saver while the feature is in the works. If you save it, the Smart Unit Tests engine automatically creates a new Test Project containing the aforementioned tests. Again, it is not meant to remain as-is (“Sum748” is not a great test name, for instance…) but it is still better, IMHO, than writing everything on my own.
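
To give an idea of the shape of that output, a saved test ends up looking roughly like an ordinary MSTest method – this is a simplified illustration, not the literal generated code, which includes more generated plumbing and the concrete values the engine discovered:

// A simplified illustration of a saved Smart Unit Test for the hypothetical Calculator.Sum above.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Sum748() // auto-generated names like this one are worth renaming
    {
        // input chosen by the engine to exercise a specific path
        int result = Calculator.Sum(new int[] { 0 });
        Assert.AreEqual(0, result);
    }
}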

Let’s make things a bit harder now:

image
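
For reference, picture a method along these lines – again an illustrative sketch, not the exact code in the screenshot:

// An illustrative sketch of this kind of method: plain integer division with no guards,
// so Smart Unit Tests quickly finds a DivideByZeroException (divisor == 0) and an
// OverflowException (int.MinValue / -1).
public static class Calculator
{
    public static int Divide(int dividend, int divisor)
    {
        return dividend / divisor; // no exception management at all
    }
}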

That is very crappy code on a small scale: no exception management at all, just the plain, down-to-the-bone feature, potentially still in development. I can hear people screaming, but it happens extremely often in every organisation. This is the output of Smart Unit Tests in this scenario:

image

It seems I need to spend some time handling DivideByZeroException and OverflowException, to begin with…

Monday 1 December 2014

Lab Management and Environments – what to remember

Lab Management’s SCVMM environments are nothing more than a bunch of Virtual Machines running somewhere in a datacentre. Really. I do not understand the reluctance (almost fear!) when I mention it.

Let’s start with Network Isolation. Network Isolation is an extremely handy feature, allowing a side-by-side deployment of multiple instances of an environment with the same properties (machine name, IP addresses, basically everything which should not be duplicated in a network). It is very cool.

And guess what, there is a clear, step-by-step guide on how to create a Domain Controller VM to be used as a template for a Network Isolated Environment. Basically, once you have installed AD DS you need to clean up the DNS.

Once you have the VMs ready, I would suggest composing some environments to be reused, so you are not searching for the VMs every time. To then enable Network Isolation, you need to tick this checkbox in the Advanced tab of the wizard:

image

That is all you need to do. SCVMM will then add a secondary Network Card to the VM to enable this feature, but it is nothing you should worry about.

Also remember that unless you enable auto-provisioning, your VMs won’t be automatically shared among the Team Projects in a Collection. You can still import them from the library you used to store the template, though.

image

One last thing to remember on the VM templates – always enable the File and Printer Sharing firewall exception, otherwise the deployment will fail and you won’t be able to connect to the VMs via the MTM Environment Viewer, for instance.

If you want an all-in-one reference, have a look at this appendix from Testing for Continuous Delivery with Visual Studio 2012 – even though it covers the older version, everything is still relevant. The whole book is actually on the matter, so I suggest having a look at it.

Another misunderstood topic seems to be Test Settings. We have all seen the fantastic demos with screen and audio recording, but then all of a sudden you cannot set it up in your lab.

To enable that feature, you need to install the Desktop Experience Feature on your Windows Server VMs:

image

and then select the Screen and Voice Recorder diagnostic data adapter from the Test Setting you want to use:

image

Each DDA can be configured to better suit your usage; in this case, just bear in mind that you are storing big binary files inside the Team Project Collection database, so its size might increase very quickly if you use it a lot. Moreover, there are a number of useful settings you might use:

image

You can copy specific files (not tied to Version Control or the build output) to the VMs, run pre- and post-test execution scripts, or even force 32- or 64-bit execution in case you need it:

image 

Unfortunately the number of resources here is not immense – MSDN is extremely useful as usual, together with the aforementioned eBook, the Visual Studio ALM Rangers Lab Management Guide and the Pro Team Foundation Server 2013 book.

But again, this is not rocket science so you should be good with them.

Monday 17 November 2014

How to configure Visual Studio Team Lab Management 2013, once and for all

Every time I go to a conference or user group and Lab Management is mentioned, I hear someone saying “Lab Management? I never understood how it sticks together…”, “Wow, it must be an adventure to set it up!” and so on…

Well, after all Visual Studio Team Lab Management (yes, fancy name) is not rocket science at all! It is just a clever mix of many different components, each doing a different thing, to enable the “Virtual Test Fabric” scenario. Nothing more, nothing less.

To begin with, you would need System Center Virtual Machine Manager (2012 R2), at least one Hyper-V host, Team Foundation Server (2013.4 in this case), a Build Controller and a Test Controller.

Assuming SCVMM is installed and configured (how: install the SQL Server Database Engine, install SCVMM pointing at it, add a Hyper-V host), you need to install the SCVMM Console on the Team Foundation Server Application Tier. Now you can configure Lab Management!

image

You just need to enter your SCVMM FQDN:

image

and – if you wish to use it – an IP Block and a DNS Suffix for your Network Isolated machines:

image

This is the core infrastructure configuration. You are going to see that something is missing, though…

image

You just configured the infrastructure for the whole Lab Management deployment; what’s missing is the configuration for each Team Project Collection you want to enable.

The two settings you need are:

  • A Library Share (a normal SMB share) containing the SCVMM templates used by VSTLM to create your VMs
    image
  • A Host Group (it’s actually optional, as SCVMM creates a default “All Hosts” Host Group, which in your case is enough as we are assuming you are starting with one Hyper-V host server)
    image

As mentioned, the Auto Provision flag enables Lab Management for all the Team Projects contained in your Collection.

Now the only missing piece is a Test Controller to bind to Lab Management. In fact, if you launch Test Manager and try to create a new Environment, it would complain:

image

So, let’s install the Test Controller and configure it:

image

If you need it, configure a Lab Service Account as well. This is helpful in cases where you need to resort to Shadow Accounts (or you can’t add the Service Account to the Local Administrators group), but let’s keep it simple and skip it for now. Just keep that in mind:

image

That’s all! This is the whole Lab Management configuration! Is it still rocket science? In another post we are going to look at the environments’ configurations and at some useful tips from the real world.

Sunday 16 November 2014

Why can’t I delete a Test Plan with MTM and TFS 2013 Update 3?

Do you want to delete a Test Plan from MTM? Fair enough.

Unfortunately the documentation is a bit outdated here – a quick Google search finds this, but it is about Visual Studio 2010. It would still work – but only if you are connected to a Team Foundation Server without Update 3.

If instead you are running 2013.3+, you will be greeted with a message saying: “Deleting a test plan is not supported for current version of Team Foundation Server. Use witadmin tool 'destroywi' command to destroy test plan work item.”

It is not a bug – it is by design, and it is the only downside of converting Test Suites and Test Plans to Work Item Types.

Basically, prior to Team Foundation Server 2013.3 they were ‘special artifacts’, meaning you could not treat them like Work Items – no advanced querying, charting, etc.

Update 3 converted the whole thing to plain Work Item Types, but this means you no longer get the special ability to delete them via MTM; instead you must run witadmin destroywi from the Developer Command Prompt – which is the only way of doing so. That is because deleting a Work Item is not really something that happens every day, and if done the wrong way (for example, truncating relationships to linked Work Items) it could lead to issues with the Work Item Store.

Wednesday 5 November 2014

Visual Studio Lab Management and Auto Provisioning

Despite being very handy, the Auto Provisioning feature of Lab Management can become trouble pretty quickly. If enabled, every Team Project will be authorised to deploy VMs to the VSTLM hosts, a situation which – 99% of the time – becomes unmanageable.

image

It’s not a TFS problem, and it depends on how the users are used to working. But if your deployment is used (as it should be, to be fair) and considered ‘as a service’, then IMHO you need to limit the scope a little bit, otherwise your Hyper-V servers are going to be clogged like Beijing at rush hour… or the M25.

There is a pretty quick fix for this though – after you grant the specific Team Project permission to use Lab Management, you need to run two TFSLabConfig (and not TFSConfig Lab) commands: tfslabconfig TPHostGroup and tfslabconfig TPLibraryShare.

After that, you are ready to go!

Tuesday 28 October 2014

Impact of the new Visual Studio Online European Region

With the latest update Microsoft addressed one of the most repeated requests about Visual Studio Online. It isn’t a specific feature or capability, but an EU-hosted region!

Up to yesterday, you did not have any choice on where your VSO data was hosted – the VSO tenants were only in Chicago, San Antonio and West Virginia.

It wasn’t a matter of performance or latency – I personally never had heavy problems unless a service-wide issue arose – but it was all about governance. If you are an EU-based company, or you have operations in the EU anyway, you know data protection is a pretty important matter.

We are not talking about Microsoft snooping into your source code and looking at your intellectual property, not at all, but depending on what you work on you might have strict regulatory policies to apply. In detail, if you have EU operations (which means even a single server running in the EU), the EU Data Protection Directive applies, and it is stricter than its US counterpart, especially regarding when data leaves the EU.

Visual Studio Online has been covered by Safe Harbor for as long as I can recall – a bilateral agreement between the US Federal Trade Commission and the European Commission providing reciprocal protection for personal and sensitive data – but for certain businesses or countries it was just not enough. Germany is a good example: its privacy laws are way stricter than the general EU umbrella.

On top of that, if you store your data – whatever it is – on a US-hosted service, it could be inspected by US law enforcement agencies under the PATRIOT Act.

So, introducing an EU-hosted region (in Amsterdam, for completeness) means a lot in terms of governance, as all of your intellectual property hosted there is subject to the EU DPD, and that’s all.

What you will be lacking today is Application Insights – but it should reach the EU region in time for its General Availability.

Thursday 23 October 2014

Can I host multiple Git repositories in Team Foundation Server?

Of course you can!

I am in the middle of a migration project, and the team I am helping with has several Git repositories (converted from other version control systems) to upload to their Team Project.

It isn’t extremely intuitive – you need to open the Control Panel for your Team Project (https://yourserver.domain.tld/yourcollection/yourteamproject/_admin/_versioncontrol)

image

and from there you can create a new repository.

image

That’s all!

image

When you navigate to the newly created Git repository you will also get the Getting Started page, which is very helpful for first-time users.

Thursday 16 October 2014

Again on the logs: are errors in the logs going to stop an upgrade?

A quick but interesting question came out this week: “if I see an error in the Event Viewer of the Application Tier, is it going to break a TFS upgrade?”

Generally speaking – no. The errors you see in the Event Viewer are client-side errors reported to the server. You might see an error from a client unable to connect, another because of a non-existent user or value in a Work Item query, or – more critically – an error because the data tier cannot be contacted.

It is the data tier which contains all the meat. All the data is stored over there and, if you look at the Configuration logs after an upgrade, the big show is there.

Of course there is a correlation between the data tier’s schema and the application tier version – everything is handled by the same binaries.

Thursday 9 October 2014

Logs, logs, logs…

Team Foundation Server is by no means an easy product – especially with large deployments. One of the most important aids in the daily maintenance is taking care of the logs, which are very descriptive.

Apart from the usual suspects (IIS, SharePoint, SQL Server, SCVMM), inside the Event Viewer you are going to find all the logs related to the TFS services – these logs are especially invaluable when it’s time to troubleshoot a client issue, because they contain exactly the error the user experienced plus a lot of information about their environment (in particular, which client triggered the error).

One example is an error like this:

logs2

“TF10158: The user or group name <group> contains unsupported characters, is empty, or too long.” Once it is logged in the Event Viewer I get the error itself, plus who (the user account) and from which client (Visual Studio, a browser, MSTest, MTM, etc.) experienced it – guess what happens if you receive a question about such a group :)

But what about setup or update logs? They are elsewhere – you will find the link in the TFS Administration Console.

logs1

What is amazing is their verbosity – have you ever tried opening one of them?

logs0

They are pretty self-explanatory, and when it comes to a version update you will find detailed information about every step the setup performs. This explains many things… for instance, did you know that the 2013.3 update which enabled Test Suites and Test Plans customisation has much of its foundations in the Update 2 bits?

Tuesday 23 September 2014

Hitting the limit on a Local Workspace

Everybody remembers the introduction of local workspaces in 2012, enabling offline scenarios with Team Foundation Server. But did you know they have a limit on the number of files before performance degrades?

This limit is 100000 items:

TF401190: The local workspace temp_WS;User has 248536 items in it,
which exceeds the recommended limit of 100000 items. To improve
performance, either reduce the number of items in the workspace,
or convert the workspace to a server workspace.

There it is. It’s not a bug, but it is a design choice by the Team Foundation Server team.

Local workspaces work by leveraging the content of the hidden $tf folder, which tracks all the changes to a file (deltas) from check-out to check-in. That’s how you get features like Candidate Changes. The side effect is that, even though the local copy is compressed, it is still a copy, hence you have a physically bigger workspace.

The workarounds in this case are to use a server workspace (easy) or to split the huge, monolithic workspace into several smaller workspaces so you won’t hit the issue. The latter can be harder than just using a server workspace, but with a bit of planning it is absolutely feasible.

This post by Philip Kelley is extremely enlightening, as it is a deep comparison between local and server workspaces. Right there he explains the differences, and how they are implemented (the PendChange permission, the +R bit, etc.).

Saturday 20 September 2014

Application Insights: what’s going on?

I guess it has been a little overlooked, but there is a lot going on around Application Insights…

The biggest thing is that with Visual Studio 2013 Update 3, Application Insights is moving towards version 2.0. It’s not a mere version change…

Application Insights is being moved to Microsoft Azure, and 2.0 is the first version of that move. The move is not complete yet, so the 1.3.2 version – running on Visual Studio Online – still works and contains the full current feature set, but bear in mind that they are “rebuilding it from the ground up as part of Microsoft Azure”.

If you want to understand which version you are running, just check the ApplicationInsights.config file: if it contains a schemaVersion, then you are using the 2.0 release.
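
If you want to script that check, here is a quick sketch – a hypothetical helper of mine, not part of any SDK – which simply looks for anything named schemaVersion in the file:

// Hypothetical helper: report which Application Insights configuration a project uses by
// searching ApplicationInsights.config for an element or attribute named schemaVersion.
using System;
using System.Linq;
using System.Xml.Linq;

class AiConfigVersionCheck
{
    static void Main(string[] args)
    {
        string path = args.Length > 0 ? args[0] : "ApplicationInsights.config";
        XDocument config = XDocument.Load(path);

        bool hasSchemaVersion = config.Descendants().Any(e =>
            e.Name.LocalName.Equals("schemaVersion", StringComparison.OrdinalIgnoreCase) ||
            e.Attributes().Any(a => a.Name.LocalName.Equals("schemaVersion", StringComparison.OrdinalIgnoreCase)));

        Console.WriteLine(hasSchemaVersion
            ? "schemaVersion found: 2.0 (Azure) configuration."
            : "No schemaVersion: 1.3.2 (VSO) configuration.");
    }
}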

The Azure version lacks several features at the moment (Windows Store and Windows Phone apps monitoring, different APIs) and there are a couple of architectural changes, most notably the agent-free performance monitoring.

But that does not mean you are losing anything: it was in preview on VSO, it is in preview on Azure, and you can use both. If your application or service is configured to send data to the 1.3.2 version, this is not changing, as there is no automatic upgrade.

There is only one thing to consider: if you remove the 2.0 package and restore the 1.3.2 one, you cannot return to 2.0 without repairing the Visual Studio installation.

Thursday 11 September 2014

Again, again and again on the backups

This is a topic which I find coming back every now and then: backups of the Team Foundation Server.

Team Foundation Server is a SQL Server-based product – hence most of the backup work happens there. Full, Copy Only, Differential, Transaction Log: choose your flavour, as long as you are confident it’s good.

IMHO it is good practice to keep things simple: a daily Full Backup with hourly Transaction Log Backups provides a good level of protection without involving the (IMHO) more complicated Differential Backups.

If you can, use the OOB tool: it is mature enough to do its job without too many worries. But if you happen to need a manual backup, there are a couple of things to keep in mind…

In order to be supported by the Microsoft CSS your backups must be synchronised – no exceptions. The safest way of doing that, as it requires manual interaction with the TFS databases, is to follow this MSDN walkthrough. I introduced a slight modification – just a check of the preferred backup instance – because I manage a big deployment which uses SQL Server AlwaysOn, but the core steps are the same.

The reason behind that is pretty simple: the Team Project Collection databases refer to objects (like IDs, or identities) stored in the TFS_Configuration database. If you restore a Team Project Collection database which contains something not aligned with the Configuration DB, it is going to end badly…

And remember to test the restore – otherwise you do not have a backup :)

Thursday 28 August 2014

TFS Transaction Marking on SQL Server AlwaysOn Data Tier

If you need to manually back up your Team Foundation Server – you might have several reasons for not using the OOB tool – you need to follow this walkthrough on MSDN.

What I’d like to share is a small script you might use when you have to back up a Team Foundation Server running on an AlwaysOn-backed Data Tier.

I created an hourly job on both nodes, running one minute before the Transaction Log Backup job, as follows:

image

In our case we back up the Primary Replica, so before initiating the transaction I check the replica: if the value returned is 1 it’s the primary; otherwise it is a secondary (2) or it is resolving (0), both cases where my job cannot run.
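
For reference, here is a rough stand-alone sketch of that check – written as a small C# console program using System.Data.SqlClient rather than as the T-SQL step in the screenshot, so treat it as an illustration only (it assumes integrated security against the local instance):

// A rough sketch of the pre-backup check: only proceed when the local replica is the primary.
using System;
using System.Data.SqlClient;

class ReplicaRoleCheck
{
    static void Main()
    {
        // role in sys.dm_hadr_availability_replica_states: 1 = primary, 2 = secondary, 0 = resolving
        const string query =
            "SELECT TOP 1 role FROM sys.dm_hadr_availability_replica_states WHERE is_local = 1;";

        using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            object result = command.ExecuteScalar();
            int role = result == null ? -1 : Convert.ToInt32(result);

            if (role == 1)
                Console.WriteLine("Primary replica: safe to run the marked transaction and the backup.");
            else
                Console.WriteLine("Not the primary replica (role {0}): skipping this run.", role);
        }
    }
}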

It might be a little bit overzealous, because if you run the very same job on a non-preferred Replica (the secondary in our case) you are going to get an execution error stating the databases are read-only, but better safe than sorry!

Wednesday 20 August 2014

Why is my Incremental Analysis Database Sync going on forever?


Sometimes it happens...
 
And that’s just because I stopped it. Why does it happen?
 
The reason is pretty simple: if the job is running but you have a network problem – an outage, like the one that happened to me – the TFS Job Agent might not report the state, and the job may go on for hours even if it has released all the resource locks.
 
You can safely stop the job by invoking the Web Service on the TFS Application Tier – you’ll need to call SetAnalysisJobEnabledState with FullyDisabled first and then with Enabled, in order to restart it with the next scheduled job.
 
And remember – do NOT process the TFS_Analysis OLAP cube with SSMS, as it is not supported by the Microsoft CSS.

Wednesday 13 August 2014

How did I learn to get on well with Git

Those who know me certainly know I am not the biggest… err… fan of Git :)

Thanks to Gian Maria and his continuous support I managed to understand how Git works and why it is so powerful. I am not saying it is “better” than something else – it is different, with its own pros and cons.

So, it’s distributed. Distributed does not mean anarchic – it means distributed. If you want some sort of centralisation, go for a Remote. You can use it as a shared repository – like a central depot – without losing any advantage of the DVCS concept.

Committing something is different from pushing something: a commit is local, a push goes to a Remote.

A git fetch gets all the objects from the Remote which are not in your local repository. A git pull does more: it also merges those changes into your local repository, much like a Get Latest Version.

Eventually – install SourceTree. It’s an amazing GUI tool with a fantastic branch visualisation tool.

Monday 4 August 2014

Can’t refresh the TfsOlapReport connection? Have a look at the Trusted Data Providers…

You open the SharePoint Dashboard and you suddenly see this error:

image

An error occurred during an attempt to establish a connection to the external data source. The following connections failed to refresh. TfsOlapReport

Fair enough, something happened to Excel Services. Did it? Actually no – if you try opening that specific report you will see your local Excel refreshing the data and working as usual.

What happened?

In our case that specific error was a generic refresh error. I went back and forth on all the usual suspects – SSAS permissions, the SSS token in the file, SharePoint settings, even firewall ports – but nothing changed.

Then I noticed some reports were working (the Burndown, for example) while this one (Active Bugs by Priority) wasn’t. So what?

Looking at the connection strings, I saw the Burndown report had MSOLAP.3 as its provider, while the broken report was using MSOLAP.5.

A quick double-check on the SharePoint server (Manage Excel Services Application –> Trusted Data Providers) led to the solution: MSOLAP.5 was not listed as a Trusted Data Provider.

Once I added MSOLAP.5 to the list, everything worked as expected again and the reports displayed correctly.

Tuesday 15 July 2014

Demystifying the Scrum of Scrums

The Scrum of Scrums is often seen as something ‘which grew out of control’, ‘just for Scrum Masters’ or something suited only to very large organizations.

It isn’t, actually…and it’s not rocket science, either.

A Scrum of Scrums is the best possible way of clearing doubts and questions raised among teams. It must not be merged or confused with a bigger standup meeting (as I’ve heard it described…) because it is something run by the teams’ representatives – the Scrum Masters.

Its purpose is to get a clear understanding of the problem domain and provide a solution – as the Scrum Master is there to remove impediments.

And yes: a Scrum of Scrums can have its own backlog. Jeff Sutherland defines the Scrum of Scrums as “…an operational delivery mechanism”, so having a backlog is perfectly reasonable.

Thursday 10 July 2014

Why is the new VSO Stakeholder Plan a game changer?

Yesterday Brian Harry announced the new Visual Studio Online Stakeholder plan – basically, full access to Work Items (and Work Items only) in Visual Studio Online and in the on-premises Team Foundation Server, for everybody, free of charge.

I believe this is a true game changer: for at least four years we have been talking about ‘involving stakeholders in the process’, ‘synergy among the parts of the organization’, Product Owners, etc. We could do that, for a fee (a Visual Studio Online plan or an on-premises TFS CAL), but it was perceived as a bit unfair compared to those who could use that CAL/plan at full power (a developer would use all the features provided by the platform; a stakeholder certainly wouldn’t, 99% of the time).

Right now there are no excuses anymore :) as you can involve as many stakeholders as you wish without paying a penny.

As there is evidence that involving stakeholders in the development process is a staggering improvement compared to other approaches, this is a great opportunity to push hard on the quality pedal and start achieving great results!

Tuesday 8 July 2014

Test Suite and Test Plan customizations in TFS 2013 Update 3 – synergic work between development and testing

IMHO the most exciting feature of Team Foundation Server 2013 Update 3, among all its goodness, is the migration of Test Suites and Test Plans to plain Work Item Types – and the reason is pretty simple.

Despite all the effort spent, developers and testers still had a tiny line which kept splitting them and their worlds:

image

This eventually led to limited shared information between them (pinned items on the Web Access), which was a cause of pain and frustration, especially among testers. Using Tags was one way of sorting it out but, to be fair, not really the best…

But right now all the testing artefacts are Work Item Types – Test Plans, Suites and Cases – so you can query them and, most importantly, you can easily add custom fields, rules and workflows just like with the existing WITs.

For example, I might have Test Suites with a specific Feature Area (potentially reused across other Work Item Types) as well as a Planned Release field. I can easily add them to the Test Suite Work Item Type:

image

The cool stuff is that all of these are now queryable!

image

…and obviously, pinnable to the team’s home page:

image

That’s why it is way easier for testers now to access and share relevant information.

Monday 7 July 2014

TFS Audits – how to create reports on your TFS usage in five minutes or so…

Some months ago I wrote about the TFS Audit Log, because a big part of my daily job is about governance, regulation and access management.

This log is very, very, very verbose, as it is a flat list of every single user and group in the Team Foundation Server’s ACLs. How do you get some more meaningful information out of it?

The basic Sort functions, together with Excel’s Text to Columns, are a must in order to format it in the best possible way. You can then create a PivotTable and mix and match your data and the criteria you want to show:

image

This fairly basic PivotTable is going to give you this result:

image

It is basic, but it is something you can do in no time. Add a chart to the mix and you have a nice, reusable report for understanding the ratio between users and groups, without the need for complex SSRS reports or even PowerPivot. It is reusable because you just need to replace the data source, which is the whole original Audit Log – so if you need to do this monthly, you just replace the appropriate sheet and you’re done.

Then, for instance… if I want to get a text document with all the users contained in each TFS group, the only command I need to launch is tfssecurity /imx. As I have hundreds of groups, it is quite… long to do manually. So I create a text file with all the groups – copy and paste from the original audit log, one per line – and then launch the following command from the Visual Studio Command Prompt:

for /f "tokens=* delims=," %l in (<path to groups.txt>) do tfssecurity /imx "%l" /server:http://<tfs>:8080 >> <path to mygroupaudit.txt>

This simple command executes tfssecurity /imx for each group and, thanks to the >> operator, appends tfssecurity’s output to the mygroupaudit.txt file. I usually launch it on an unattended machine and keep it running in the background until it finishes.

Then, in order to get a basic but polished report, open the resulting file in Word and set the font to Bold for every occurrence of these four lines, except for the very first ones at the top, which are IMHO useful for a nice presentation:

Done.
Microsoft (R) TFSSecurity - Team Foundation Server Security Tool
Copyright (c) Microsoft Corporation. All rights reserved.
The target Team Foundation Server is http://<tfs>:8080/.

Word has a very useful feature for this need: Select Text with Similar Formatting. Use it and wipe those lines away! You’ll get something neat and polished – change the fonts, add whatever else you need, and you’ll have an Audit Report in five minutes :)

Of course it’s possible to use that file in a better way, but these basic tips will give you something in next to no time!

Going slightly deeper into the reporting technologies – PowerPivot is perfect for this. You just need to load the .csv file and you have a dynamic model, perfect for drill-down queries. With this:

image

you’re going to get this chart:

image

which is pretty basic. But what if you want to know how many users with Full access (which requires a CAL, as we know) accessed the server on a specific date… Well, you can create a chart with Last Accessed UTC on the Axis as well as on a Slicer, so you can filter your timeframe, and Sum of Full in the Values:

image

But as it is PowerPivot, you can drill down as you wish… So if, from my date selection, I pick just today at 13:09 (the only access for today on this test instance, made by me) and drill down by Display Name, what I get is…

image

…a big pie with just me as a value:

image

These are just simple examples, but as soon as you realise that this CSV data is a data source… the whole reporting world will welcome you!