Wednesday, September 16, 2009

Why Test Scripts Suck

I’d trade five IT testing professionals for one motivated business user when it comes to making sure an application works as designed. Why? Because that business user has all of her day-to-day requirements, pain points and frustrations invested in the new application. The testing professional, on the other hand, just has to make sure the test script is executed flawlessly; that’s it, on to the next project.

Below is the typical best-practice software development mantra that project managers promote. I’ve added some notes under each item and a few new mantras of my own.

The software implements the required functions.

Have the requirements been allowed to change during the project? Flexibility is key to the perceived success of any project. If concessions cannot be made without huge pushback, and changing requirements is a pain from the business’s perspective, the project should be stopped and re-evaluated for its purpose (and its management). This is especially pronounced on large projects.

Added to normal Project Manager’s software development lifecycle list:

Prototyping functionality so business users can experience (see) what’s been talked about and promised. If a third party or internal team cannot commit to showing their application during development for fear of “giving a bad impression” or “scaring” the business, then there are trust and communication issues that need to be dealt with now rather than during full User Acceptance Testing.

The software has passed unit testing.

Make sure the developers know the architecture of the application as a whole, its requirements, and the importance of unit testing and integration into the larger application or service. If one developer is slacking, the project is at risk of failure. By this point in the timeline, project managers should have a good idea of who can perform and who needs help.
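As a minimal sketch of what “passed unit testing” ought to mean for even the smallest component — `invoice_total` here is a hypothetical stand-in for one piece of the larger application, not anything from a real project:

```python
# A unit test is small, fast, and unambiguous. invoice_total is a
# hypothetical stand-in for one component of the larger application.
def invoice_total(line_items):
    """Sum quantity * unit_price across an invoice's line items."""
    return sum(qty * price for qty, price in line_items)

# pytest-style tests: one behavior per test, obvious expected values.
def test_typical_invoice():
    assert invoice_total([(2, 10.0), (1, 5.0)]) == 25.0

def test_empty_invoice():
    assert invoice_total([]) == 0
```

A developer who can’t produce something this small for each component probably doesn’t understand how their piece fits into the whole.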

Added to normal Project Manager’s software development lifecycle list:

Code Reviews

A junior developer cannot possibly know all of the ins and outs of the application while focused on coding specific components or services. All code, at least initially, should be reviewed by senior developers and architects to ensure efficiency and scalability.

The software source code has been checked into the repository.

This can be a pain in the ass if the project is small, but it’s necessary if you are developing with others and integrating into a larger repository. Check-ins are also a good opportunity for senior developers to do quick code reviews.

The software has been compiled into the current build (for compiled systems) and deployed into the appropriate test environment.

Without proper safeguards, one developer’s code can break a whole series of other test scripts. Smoke testing is highly advised before fully committing the code.
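A smoke test doesn’t have to be elaborate. A sketch along these lines — the check names and trivial lambdas are hypothetical placeholders for real health checks against the test environment — is enough to catch a build that would break everyone else’s scripts:

```python
# Minimal smoke-test sketch. The checks themselves are hypothetical
# stand-ins; substitute real probes against your own build.
def smoke_test(checks):
    """Run (name, check) pairs; return the names of any that fail."""
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

# Example: trivial stand-in checks instead of real HTTP calls.
checks = [
    ("login page loads", lambda: True),
    ("search returns results", lambda: True),
]
print(smoke_test(checks))  # an empty list means the build looks safe
```

If the failure list isn’t empty, the code doesn’t get committed — no other developer’s test run gets poisoned.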

The team has developed appropriate test scripts that exercise the software's capabilities.

These scripts are usually end-to-end tests run by a few clueless testers, not by irrational business users, who change their minds, cancel, go to lunch, upload their whole hard drive, etc.
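To make the contrast concrete, here is a toy sketch — `AppRecorder` and all of the action names are invented for illustration — of a scripted happy path next to a simulated business user who wanders off-script:

```python
import random

class AppRecorder:
    """Stand-in for the application under test; just records actions."""
    def __init__(self):
        self.log = []
    def __getattr__(self, name):
        # Any action call (login, cancel, ...) is simply logged.
        return lambda: self.log.append(name)

def scripted_test(app):
    # The happy path a test script exercises, start to finish.
    for step in ("login", "open_invoice", "approve", "logout"):
        getattr(app, step)()

def messy_user_test(app, seed=0):
    # A "business user": random order, cancels, walks away mid-flow.
    rng = random.Random(seed)
    app.login()
    for _ in range(10):
        getattr(app, rng.choice(
            ["open_invoice", "approve", "cancel", "go_to_lunch"]))()
    app.logout()
```

The scripted path always produces the same four steps; the messy user produces a different mess every seed. An application that only ever sees the first kind of traffic before go-live is in for a surprise.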

The software has passed integration and system testing, and been deployed into the user acceptance test environment.

Many large projects are desperate for true testing environments but usually skimp on resources for them. This poses an issue when a new build is supposed to be deployed and fully functional in a test environment held together with kludges.

Added to normal Project Manager’s software development lifecycle list:

Performance and Scalability Testing

Third-party developers often comment during the development phase that they wonder whether this will scale or how it will perform under load. They mention it to the project manager, and the discussion usually ends there. If it’s brought up this late in the project with no time allotted to it, forget about it. Besides, why would the third party be motivated to do this testing when performance problems are a typical reason they get called back in for more business?
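Even a crude harness like this would surface whether anyone has actually measured the thing under load — the `operation` argument is a placeholder; in practice you would point it at a real request against the test environment:

```python
import time
import statistics

# Rough performance sketch: time n calls to an operation and report
# median and 95th-percentile latency. `operation` is a hypothetical
# stand-in for a real request against the application.
def measure(operation, n=1000):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "median_ms": statistics.median(samples) * 1000,
        "p95_ms": samples[int(n * 0.95)] * 1000,
    }

print(measure(lambda: sum(range(1000))))
```

Ten minutes of this during development beats a heated meeting about scalability the week before go-live.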

The users have had an opportunity to use and respond to the software, and their change requests have been acknowledged and implemented where appropriate.

Again, this is important, but there should be no surprises at this point. The users’ requirement changes should have been prototyped and tested well before now.

The software has been documented in accordance with whatever standards your project follows.

Have you ever seen a test script written for documentation accuracy?

If documentation is not maintained throughout the whole project, this document will be worthless. I have never worked on a project where the design document was perfect after being signed off on. During development and bug fixing, the design doc needs to be corrected, changed, or expanded on.

Also, the deployment and knowledge transfer documentation should be complete and tested.

Sunday, September 13, 2009

What the F’ Happened to the Customer’s Vision?

It seems every new technology or architecture or new way of looking at the complexities of content is like building a new platform on quicksand. It eventually sinks below the surface and then a new “genius” comes along with a solution that gets sold to our “shock and awe” addicted users.

The customer used to always be right, now they are sold what’s “right”. What is sold to the customer is pretty and “easy-to-use” technology which is over their heads. They become reliant on experts to build the solution and to come up with language that makes the Manager/Director look good to his superiors.

Once ECM is in place, the users look at it and inevitably want their old system back. After a while they become more comfortable with the new ways of doing things. Then they want continuous improvement. By the time this happens, a new version is released, and its new bugs bring the experts back in to fix them. The continuous-improvement requirements get scaled back by performance issues and content growth. The IT department thinks it owns the system. The business units get frustrated with IT. Yada, yada, yada.

At this point “shadow IT” starts its cycle again. In the late 90s it was websites popping up everywhere as intranets, built with easy-to-use, inexpensive website publishing tools. Now it’s SharePoint portals. These portals are what the customers want. They want messy rooms (unstructured content spaces) where they can play with content and ideas, not technology. Metadata, security, taxonomy, workflow, lifecycles, retention, etc. need to be worked into these “messy rooms,” periodically cleaning them up, organizing the useful content and throwing away the building blocks. EMC’s CenterStage, like SharePoint, is trying to fulfill the need for users to produce and edit (collaborate on) content while the system handles structuring and storing behind the scenes.

This introduces the big gray area of ECM: the void between structured and unstructured content. Let’s say an invoice is structured content because it originated from a database and has a number. The problem is that this invoice was printed to paper, signed, scanned, and placed back into a content repository. The number is still there, but the systems are different. Even though the systems are interoperable, there is no source of record anymore. Which is more important: the financial record of the content or the actual scanned proof of purchase? It depends who you ask.
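The invoice example fits in a few lines — all of the field names and the “who trusts what” answer below are hypothetical, just to show that the two systems join cleanly on the number while the question of authority stays open:

```python
# Sketch of the gray area: the same invoice number lives in two
# systems, and neither is clearly authoritative. All fields are
# hypothetical stand-ins.
erp_record = {"invoice_no": "INV-1042", "amount": 1250.00,
              "status": "paid"}       # structured: born in a database
scanned_doc = {"invoice_no": "INV-1042", "format": "tiff",
               "signed": True}        # unstructured: paper, scanned back in

def reconcile(erp, scan):
    """Join the two records on the number; authority is unresolved."""
    if erp["invoice_no"] != scan["invoice_no"]:
        return "mismatch"
    return "finance trusts ERP; legal trusts the signed scan"

print(reconcile(erp_record, scanned_doc))
```

Interoperability gets you the join; it doesn’t get you a source of record.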