Should I Write Tests?

I often see this question asked in various forums online, and everyone rushes in to say “yes, of course you should”. Although I do agree, I don’t think automated testing should just be blindly included in every piece of work, so I wanted to describe the scenarios where it really is beneficial.

  1. Complex distributed logic that is impossible to get your head around quickly, particularly after a long time away from the code
  2. Complex isolated logic that has so many permutations that it is hard to cover all of them with a manual test
  3. Logic that is dependent on scenarios that are difficult to reproduce with manual testing
  4. Logic on which we depend but don’t control (third-party packages or APIs)

Complex Distributed Logic

This is the kind of logic that has multiple moving parts, distributed as separate services within a whole solution, where a change to one part can inadvertently bring a large part of the application down.

The testing here will take the form of high-level integration tests, either because unit test coverage isn’t good enough, or because we haven’t had enough time to isolate and mock individual scenarios but have had time to generate fake data for them (which amounts to much the same thing as not having enough unit test coverage).

This kind of automated integration testing stops development (and likewise refactoring) from grinding to a halt when it’s impossible to get a run-through of the entire application into your head at one time.

Complex Isolated Logic

Sometimes a change, particularly a bug fix, appears on the surface to be simple but in reality has so many permutations that it is difficult to pin down all the scenarios it has to support. Automated testing is invaluable here, and can be the difference between a successful deployment and an immediate rollback.
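
As a rough sketch of what that looks like, a table-driven unit test keeps every permutation in one place, so a newly discovered edge case is a one-line addition. The shipping-cost rules below are entirely hypothetical, and I’m only using Node’s built-in assert module rather than any particular test framework.

```javascript
var assert = require('assert');

// Hypothetical pricing rule with several interacting permutations.
function shippingCost(weightKg, isPriority, destination) {
  if (destination === 'domestic') {
    return (isPriority ? 10 : 5) + weightKg * 0.5;
  }
  return (isPriority ? 25 : 15) + weightKg * 1.5;
}

// Each row is one permutation of the inputs and the result we expect.
var cases = [
  { weight: 1, priority: false, dest: 'domestic',      expected: 5.5 },
  { weight: 1, priority: true,  dest: 'domestic',      expected: 10.5 },
  { weight: 2, priority: false, dest: 'international', expected: 18 },
  { weight: 2, priority: true,  dest: 'international', expected: 28 }
];

cases.forEach(function (c) {
  assert.strictEqual(shippingCost(c.weight, c.priority, c.dest), c.expected);
});
console.log('All permutations passed');
```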

I’ve been in scenarios where QA was waiting for bug fixes to be deployed to their test environment, and their tests couldn’t be allowed to fail because of an upcoming release window. I had time constraints of my own (often needing to complete fixes within a matter of hours), and without unit tests it would have been impossible to develop quickly and be confident that the fixes would work.

This is the unit-testing equivalent of the integration testing of distributed logic described above.

Hard to Reproduce Test Scenarios

If you’ve ever done work across time zones you’ll know that it’s unfeasible to manually test an application by changing the timezone of the local machine’s clock and running through a test script. The only way to really test this kind of thing is by injecting a system clock into your code, and faking an instance of it for your tests.
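
As a minimal sketch of that idea (the function and object names here are my own, not from any particular library), the production code asks an injected clock for the time rather than calling Date directly, and the test supplies a fixed clock:

```javascript
var assert = require('assert');

// Production code depends on a clock abstraction instead of new Date().
function greetingFor(clock) {
  var hour = clock.now().getUTCHours();
  return hour < 12 ? 'Good morning' : 'Good afternoon';
}

// The real clock the application would inject.
var systemClock = { now: function () { return new Date(); } };

// A fake clock for tests - we can pin the time to any hour or zone we like.
function fixedClock(isoString) {
  return { now: function () { return new Date(isoString); } };
}

assert.strictEqual(greetingFor(fixedClock('2015-06-01T09:00:00Z')), 'Good morning');
assert.strictEqual(greetingFor(fixedClock('2015-06-01T15:00:00Z')), 'Good afternoon');
```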

This also applies to testing the various permutations of asynchronous result handling. It’s impossible to manually reproduce results being returned in particular orders and after particular delays without faking them in a test.
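
Here is a sketch of forcing a particular completion order with hand-rolled fakes (again, the names are illustrative rather than from a real codebase):

```javascript
var assert = require('assert');

// Code under test: records results in whatever order the callbacks fire.
function collectResults(fetchA, fetchB, done) {
  var results = [];
  function record(value) {
    results.push(value);
    if (results.length === 2) { done(results); }
  }
  fetchA(record);
  fetchB(record);
}

// Fake asynchronous sources whose completion times we control exactly.
function delayed(value, ms) {
  return function (callback) {
    setTimeout(function () { callback(value); }, ms);
  };
}

// Force B to complete before A - hard to arrange against real services.
collectResults(delayed('A', 50), delayed('B', 10), function (results) {
  assert.deepEqual(results, ['B', 'A']);
  console.log('Out-of-order completion handled correctly');
});
```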

There is some overlap here with 2. Complex Isolated Logic, as the hard-to-reproduce scenarios often pin down complex logic.

Uncontrolled Logic

Being able to fake a third-party is incredibly useful. We can make our assumptions explicit in our mocking code, start to build before new third-party functionality is available, fake exceptional behaviour, and make expensive API calls without incurring a cost. All of this often makes a good third-party mock well worth the development effort.

Building a mock of a third-party is often a no-brainer as our automated tests can’t be run against a live API. We can sometimes take an approach that it isn’t our concern whether a third-party API works as expected, as we can always raise a ticket if it doesn’t, but an accurate mock saves us from any last-minute surprises.
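
A sketch of what such a fake might look like for a hypothetical payment API (the interface, card numbers and error behaviour are all assumptions for illustration, not any real provider’s API):

```javascript
// The real client would call the third-party API over HTTP; this fake
// exposes the same interface so tests can swap it in.
function fakePaymentGateway(options) {
  options = options || {};
  return {
    charge: function (amountInPence, card, callback) {
      // Simulate exceptional behaviour we could never trigger on demand.
      if (options.simulateTimeout) {
        return callback(new Error('Gateway timed out'));
      }
      if (card.number === '4000000000000002') {
        return callback(null, { status: 'declined' });
      }
      // Our documented assumption about the happy-path response shape.
      callback(null, { status: 'succeeded', amount: amountInPence });
    }
  };
}

// Exercising a failure path without making an expensive live API call.
var gateway = fakePaymentGateway({ simulateTimeout: true });
gateway.charge(500, { number: '4242424242424242' }, function (err) {
  console.log('Handled error:', err.message);
});
```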

This ties in with 3. Hard to Reproduce Test Scenarios, as we can use our fake third-party to reproduce error conditions which we could never trigger against a real API.

Conclusion

Of course, after writing this all out I’ve come to the conclusion that at least one of the points above is likely to be satisfied in most non-trivial applications sooner or later, meaning that tests will eventually become essential.

Client Side Package Management in Visual Studio 2015

If like me you’ve always had one foot in the open source development camp, then you’ll be really pleased by the recent changes in ASP.NET 5. Microsoft have stopped reinventing the wheel and accepted that the existing open source tools for client-side package management should be integrated into Visual Studio.

Gulp, Grunt, Bower, NPM – what’s the difference exactly?

I’ll start with a summary:

  • NPM is the package manager that installs the other tools discussed in this post, as they all run on node.js locally.
  • Gulp and Grunt are both task runners running on the node.js runtime, and their main functions are to pre-process and/or bundle our client side JavaScript and CSS.
  • Bower is a package manager for all the HTML, JavaScript, CSS, fonts, and images that are bundled with a modern UI package or framework.

The NPM and Bower package managers are smart enough to resolve all the dependencies required by a package, and make sure that we only download a single instance of a given dependency.

NPM

NPM is a JavaScript package manager, and became the standard package manager for node.js a number of years ago. Every NPM package comes with a package.json file which has details of the package’s current version, dependencies, contact info and documentation, and scripts that should be run at specific points in its life-cycle.
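
As a rough illustration (the name, versions and script are made up), a minimal package.json looks something like this:

```json
{
  "name": "my-web-app",
  "version": "0.1.0",
  "devDependencies": {
    "gulp": "^3.9.0",
    "rimraf": "^2.4.0"
  },
  "scripts": {
    "postinstall": "gulp build"
  }
}
```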

Gulp

ASP.NET Gulp Docs

Gulp is so named as it is based on piping streams through multiple commands until all the commands are complete. This piping means that Gulp takes greater advantage of the asynchronous, streaming nature of node.js, and can give better performance than task runners that write intermediate files to disk.
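
A typical Gulp task is just a chain of pipes from a source glob to a destination directory. The gulp-concat and gulp-uglify plugins below are common choices, but the specific plugins and paths are my own illustration rather than anything from the default templates:

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');   // both plugins installed via NPM
var uglify = require('gulp-uglify');

// Each .pipe() hands a stream of files to the next command, so nothing
// is written to disk until gulp.dest at the end of the chain.
gulp.task('scripts', function () {
  return gulp.src('app/js/**/*.js')
    .pipe(concat('site.js'))
    .pipe(uglify())
    .pipe(gulp.dest('wwwroot/js'));
});
```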

The standard ASP.NET 5 project templates use Gulp as the default task runner, so if you create an ASP.NET 5 project with Visual Studio, Gulp will be available straight away. If you right-click on the gulpfile.js in the Solution Explorer and then click Task Runner Explorer, you will be able to see all the individual tasks that are defined.

Gulp Tasks

Tasks are defined using a JavaScript function, and can have dependent tasks specified as an array of existing task names. If a task takes a long time to run, it can be worth splitting it up so that we can run just the part we need.
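
For example, with the Gulp 3.x syntax that was current at the time of writing, the dependencies are listed by name and run before the task’s own function (the task names here are illustrative):

```javascript
var gulp = require('gulp');

gulp.task('min:js', function () { /* minify JavaScript */ });
gulp.task('min:css', function () { /* minify CSS */ });

// 'min:js' and 'min:css' both run before 'min' itself; because they are
// separate tasks, we can also run just one of them on its own.
gulp.task('min', ['min:js', 'min:css']);
```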

Gulp Modules

Tasks run code from modules that are required by the gulpfile to do things such as cleaning out your build directory. You would do this by requiring the rimraf module and then calling it within a “clean” task, passing your build directory in as a parameter.

Gulp modules are installed using NPM.
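
A sketch of that clean task, assuming rimraf has been added as a development dependency:

```javascript
var gulp = require('gulp');
var rimraf = require('rimraf');   // installed with: npm install rimraf --save-dev

var buildDir = './wwwroot/';      // illustrative build output directory

// Deletes the build output; other tasks can list 'clean' as a dependency.
gulp.task('clean', function (callback) {
  rimraf(buildDir, callback);
});
```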

Grunt

ASP.NET Grunt Docs

Grunt is a task runner similar to Gulp, and also has integration in Visual Studio 2015. It takes a more declarative approach to defining tasks: you load already available Grunt tasks and specify parameters for them using JSON-style configuration. I won’t go into as much detail on Grunt here as I’m planning to stick with Gulp for task running in future.
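
Just to show the shape of it, a minimal Gruntfile might look like this (the uglify plugin and the paths are illustrative):

```javascript
module.exports = function (grunt) {
  // Configuration is data rather than code: each plugin reads its own section.
  grunt.initConfig({
    uglify: {
      site: {
        src: ['app/js/**/*.js'],
        dest: 'wwwroot/js/site.min.js'
      }
    }
  });

  // Load a ready-made task from an NPM package, then alias it as the default.
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['uglify']);
};
```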

Bower

ASP.NET Bower Docs

Bower is a package manager for client-side code; it was created by the team behind Bootstrap to give people a standard way of obtaining updates to it. We require so much client-side code from so many sources, each with its own dependencies, that it has become too much of a handful to commit random snippets into version control and expect ourselves to manually keep everything up to date.

You can think of Bower as NuGet for the static third-party code that your web application requires. Rather than us downloading packages from the web, and possibly resolving their dependencies manually, it takes care of downloading everything we need for a particular package.
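
For illustration, a bower.json file declares the client-side packages a project needs, and Bower pulls in their own dependencies (jQuery for Bootstrap, for instance) when it installs them; the versions here are made up:

```json
{
  "name": "my-web-app",
  "private": true,
  "dependencies": {
    "bootstrap": "~3.3.5",
    "jquery-validation-unobtrusive": "~3.2.2"
  }
}
```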

If you have experience of working on an application with a decent amount of JavaScript, then you will know that formally managing your third-party JavaScript, and the dependencies that it brings with it, really pays off in the long term.

Yeoman

ASP.NET Yeoman Docs

Yeoman comes from the same tooling ecosystem as Bower, but instead of managing packages it generates (scaffolds) projects from templates. It does the same job as the project templates that already exist within Visual Studio, so I’m not going to go into too much detail on it in this post.