Business Office

TILT (Test Data Generator)       PMEP_TIL

Timed Input & Load Testing

This tool is focused on managing quality assurance testing to confirm that the implemented products meet or exceed client and stakeholder expectations. Its core functionality is generating transaction and driver files, but that same functionality extends to manual and automated test execution. TILT supports smaller loads for manual testing, generating test scripts for people to execute, and it supports different options for test automation (directly, or as a front-end tool that generates input for specialized test automation drivers) and for batch-application test execution.

The purpose of this utility process is to facilitate the load testing of applications by creating volume data sources with or without timed input structures: the time dimension is handled internally by the TILT process to establish a representative mix of transactions with which to properly exercise an application under test. The basic functionality is implemented in an Excel™-VBA workbook process that serves both as a repository and as a generator, and it incorporates basic testing and auditing capabilities. To perform specific test sessions we can apply batch files or use purpose-built drivers to feed the batch file contents into target applications. The framework for such a driver is explained in this guide, using basic Excel™-VBA tools as examples of how that works in concept. The complexity of the data can range from simple log files to carefully crafted and formatted transactions files that drive a batch application or interact with an entry screen. The software generates data only: it does not measure application performance directly, but works with device-specific tools to drive that data through a system.

Timed Input Load Testing (TILT) is about generating input that can be applied at a prescribed rate to test that the solution is capable of keeping up with that workload. We can drip-feed timed input at the specified rate to manage a user-machine dialog, to fill a log file in a monitoring environment, or to execute a number of other timed operations. We can also present a transactions file as input to a process and run a throughput time test to validate that the solution can process that file (or those files) within the time allowed for execution. Not only can we present correct data in such transactions files, we can also submit deliberate exception conditions to trigger the appropriate responses in the application software, thereby validating its error handling capabilities.
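As a concrete illustration, the following is a minimal drip-feed driver sketch in Excel™-VBA. It assumes a plain-text transactions file in which each line carries a relative arrival offset in milliseconds, a semicolon, and the transaction image; the file name and record layout are invented for illustration, not TILT's actual formats.

    ' Minimal drip-feed sketch: pace records according to their arrival offsets.
    Sub DripFeed()
        Dim f As Integer: f = FreeFile
        Dim startTime As Double, offsetMs As Double
        Dim rec As String, p As Long
        Open "C:\TILT\transactions.txt" For Input As #f
        startTime = Timer                        ' seconds since midnight
        Do While Not EOF(f)
            Line Input #f, rec
            p = InStr(rec, ";")                  ' assumes well-formed lines
            offsetMs = Val(Left$(rec, p - 1))    ' when this record is due
            ' wait until clock time catches up with the simulated offset
            Do While (Timer - startTime) * 1000 < offsetMs
                DoEvents
            Loop
            Debug.Print Mid$(rec, p + 1)         ' stand-in for feeding the target
        Loop
        Close #f
    End Sub

Debug.Print stands in for whatever mechanism actually delivers the record to the system under test; a real driver would replace it with screen entry, a file append, or a network call.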

Every development project needs testing that begins with data to test with and/or test against. It is easy to create data that are correct: most systems already deliver that. We also need data containing conditions we hardly ever see in real life, because the edit rules along the way try to prevent those conditions from persisting. If we want to confirm that we get an alert at the operator's console when we have a serious hardware failure, it is a lot less destructive to put the appropriate log entry into the system via a TILT transactions file than to trigger a destructive event, especially if we need to repeat that testing a few times until the kinks are ironed out and the message reliably reaches its destination. Whether an actual destructive event is capable of logging such that we can detect the condition and push the alert to the console in production is a different concern from verifying that the software does what it is supposed to do when the event triggers.

We cannot simply repeat the same transactions, because the account eventually drains and rejects the inputs, thus invalidating the testing process. Still, many load testing cases hammer a single account with a limited variety of options because they lack the facility to spread the transactions across different accounts. The failures that result are test failures: the application does what it is supposed to do to prevent overdrawing the account. There are many other test situations in which we need to carefully consider how to generate a sufficient load on the system without an unrealistic focus on a limited set of data that increases the propensity for test failure.
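A hedged sketch of one way to avoid that trap: rotate the generated transactions across a pool of accounts so that no single account drains. The account numbers, amount, and record layout are invented examples.

    ' Spread identical operations across different accounts.
    Sub SpreadAccounts()
        Dim accounts As Variant, i As Long
        accounts = Array("10001", "10002", "10003", "10004")
        For i = 1 To 100
            ' same operation every record, but a different account each time
            Debug.Print "WITHDRAW;" & accounts((i - 1) Mod (UBound(accounts) + 1)) & ";25.00"
        Next i
    End Sub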

As a result we use small sets of test cases that do not represent thorough testing, so there is a real risk of failure in mission-critical applications. Risk discourages innovation and development, which is not in the best interest of an IT development organization. We can reduce risk by making testing part of the development work effort so that the software is validated as it is being developed. The goal of a QA team is not to find as many errors as possible: it is to prevent as many errors as possible by creating an environment for development to stress-test the work in progress at every opportunity. To do that, we need a solid test environment.

There is a reason we refer to a server failure as “tanking”: the process can reach a point of saturation beyond which the deterioration becomes quite dramatic in a short time. Finding that threshold can itself be an objective, with the intent of establishing upper bounds to throttle the input within limits that ensure uninterrupted service. With our tools it is not too difficult to produce huge volumes of data: a single TILT session can produce in excess of 13,000,000 records at a load representing a transaction arrival interval of 0.001 seconds.
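To put that volume in perspective: 13,000,000 records at 0.001-second intervals amounts to 13,000,000 × 0.001 = 13,000 seconds of simulated arrival time, or roughly 3.6 hours of sustained load.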

We can produce multiple output files, and therefore we can feed multiple instances of an input source concurrently (depending on the nature of the system under test), or feed a single driver process that in turn interacts with multiple target applications. It is useful to keep in mind that we stress the driver as well as the target system in this process. The point of the GEN command that drives the generator logic is that TILT is never the limiting factor within a validation process. It is very easy to adjust the transaction mix, to vary the transaction arrival rate, or to change the proportion of different transaction types. It is equally easy to design similar transactions for use in a more traditional batch process. Most components of the system ensure predictable, repeatable results unless the user explicitly asks for a random selection.
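As an illustration of such predictable, repeatable mixing, the sketch below emits a 6:3:1 blend of three transaction types using a smooth weighted round-robin, so the same interleaved order is produced on every run. The type names and weights are invented examples; this is not TILT's actual GEN control syntax.

    ' Repeatable transaction blend via smooth weighted round-robin.
    Sub EmitBlend(ByVal total As Long)
        Dim types As Variant, weights As Variant
        types = Array("DEPOSIT", "WITHDRAW", "INQUIRY")
        weights = Array(6, 3, 1)                 ' a 60/30/10 blend
        Dim sumW As Double, credit() As Double
        Dim i As Long, k As Long, best As Long
        ReDim credit(UBound(types))
        For i = 0 To UBound(weights): sumW = sumW + weights(i): Next i
        For k = 1 To total
            best = 0
            For i = 0 To UBound(types)
                credit(i) = credit(i) + weights(i) / sumW   ' each type earns its share
                If credit(i) > credit(best) Then best = i
            Next i
            credit(best) = credit(best) - 1      ' spend one whole record
            Debug.Print types(best)              ' identical sequence on every run
        Next k
    End Sub

Replacing the deterministic credit scheme with Rnd-based selection would give the explicitly requested random variant.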

It is easy for tools to create havoc if we are not careful. Although the generator can produce copious amounts of input data, there is a risk that the driver tools will not be able to handle that volume, start to back up, and eventually fall over and crash the application they were meant to test. There is no risk of that happening to the TILT generator itself, because the production of data is based on simulated time rather than on clock time. We can produce many sources of test data on a PC and then present them to a system under test. When that data is presented to a driver, however, the driver has to keep up with the simulated time, which must be reconciled to clock time; in the worst-case scenario the actual rate will simply be the top speed of the driver that controls when the next transaction is presented.
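In the drip-feed sketch shown earlier, that worst-case behavior falls out naturally: when the clock is already past a record's due offset, the wait loop exits immediately, so the driver simply runs at its own top speed until it catches up with the simulated schedule.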

What can be a problem in the generator process itself is that data constantly change to reflect new conditions. As the transaction files become more complex, the maintenance of test data becomes a big challenge. Master files change, and/or the conditions change simply through the passage of time, which causes the same master file contents to be interpreted differently. A loan in good standing will become delinquent if we don’t run payment transactions for a couple of months. This challenge affects regular testing, but especially regression testing, where we expect to reuse what we have already used in the past. If you generate transactions for an application that assumes currency of the data, you need to plan for a current status that allows the application to process that data.

One of the keys is that we should not employ hard-coded values for data elements that are subject to change through the passage of time (or other similar causes). Instead of embedding the date in the body of a transaction image, we embed a reference, and we define the date as a variable that is easy to change globally in the transaction generation source. This does not change anything in the details of a transaction: it changes the effort required to keep the transaction generation source current. We don’t want our test system to fall over because of changes that are easy to predict and cater for: instead, we make our test system resilient by adapting the data as the application's needs change.
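A minimal sketch of that idea, assuming a {VALUEDATE} token (the token name and record layout are invented for illustration): the transaction source holds the token instead of a literal date, and one global substitution pass resolves it at generation time.

    ' Resolve date tokens at generation time instead of hard-coding dates.
    Sub ResolveTokens()
        Dim template As String
        template = "PAYMENT;10001;250.00;{VALUEDATE}"
        ' one global change point: swap Date for any value date the test needs
        Debug.Print Replace(template, "{VALUEDATE}", Format(Date, "yyyymmdd"))
    End Sub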

With control commands we can serve up exactly the right mix of transaction data to achieve specific test objectives. As with concrete, we can vary the strength of the mix by changing the proportions of its ingredients. Extending the analogy, we can prepare a blend as separate measured sets of inputs that are delivered in the proper composition to suit the part of the job at hand. We can establish a blend and then throttle the volume in a separate setting that applies proportionately to all inputs, so that a lesser quantity is produced with the same proportions.
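For example (the numbers are illustrative): a 6:3:1 blend of three transaction types at full volume might produce 60,000, 30,000, and 10,000 records; throttled to 50%, the same blend produces 30,000, 15,000, and 5,000 records, and the 6:3:1 proportions are preserved.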

We can also blend by using the appropriate drivers, each with a timed input file carrying a unique mix of transaction data. It is possible to generate automation macros that simulate physical keyboard data entry, and to use parameters in a generated data file to trigger the appropriate action by agents that drive specific processes. In this guide we provide examples, and you can implement custom agents that make use of the trigger information in other creative ways to achieve unique application objectives.
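As a sketch of such an agent, the VBA fragment below replays prepared records into an entry screen via the SendKeys statement. The file name and window title are assumptions, and a real agent would need to escape SendKeys special characters (such as +, ^, %, and braces) and handle focus and errors properly.

    ' Replay prepared transactions as simulated keyboard input.
    Sub ReplayKeystrokes()
        Dim f As Integer: f = FreeFile
        Dim rec As String
        Open "C:\TILT\scripted_input.txt" For Input As #f
        AppActivate "Application Under Test"     ' assumed window title
        Do While Not EOF(f)
            Line Input #f, rec
            SendKeys rec & "{ENTER}", True       ' type the record, then Enter
        Loop
        Close #f
    End Sub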

Testing is tedious and error prone: it should be done throughout the development life cycle, but is usually delayed until the end of the project (almost in the hope that we run out of time and/or money, so that it is a slam-dunk to cut back on testing scope). TILT changes that by taking the tedium out of the process. Besides, how does one perform load testing unless there is a way to drive the effort through some form of automation? In lieu of hundreds of users pounding away at keyboards, we have hundreds of robots running concurrently on a limited number of workstations, getting the same results. With fewer typing errors. Non-stop. As often as we want, so we can afford to test during development and catch errors before they get buried inside the software, while it is still easier (and cheaper) to detect, analyze, and fix the problems. We should be able to run the testing overnight and analyze the results the next morning, rather than running and analyzing each case one at a time during prime business hours.

With TILT the role of manual testing is reduced to validating the “look and feel” of the application for the user. In many cases there is no “look and feel”, because the application has no interface designed for users: it follows the batch processing paradigm that still does the bulk of the IT work yet gets a comparatively limited amount of attention from the testing community. When major players in the test management domain emphasize that over 70% of all testing is still done manually, and we know that in most cases this manual testing is too limited in scope for a thorough assessment of the software, it is the equivalent of mixing a batch of cement in a wheelbarrow compared to a continuous pour from a large cement mixer truck: not very effective if you have anything serious to construct.

People contend that automated testing is expensive. TILT is simply an Excel™-VBA based application, and many test situations can be handled with simple “driver” tools that read the prepared transactions and hammer them into the application input screen. The major complexity is the input, and with TILT this is taken care of: the timers ensure that all contributing PC devices feed input into the system at a consistent rate. Even many expensive testing tools have one major Achilles' heel: the data to be entered. By making this data available in a “pre-digested” form, so that even the most simplistic driver programs can enter it where it needs to go, that problem is easily solved.

Learning Formats       PMAP_TIL

This course is currently available in a classroom setting (public or company private) with approximately 30 contact hours (5 days).

PDF – Certificate Of Completion

Each course offers a certificate of completion that identifies the course and the student and includes a brief description of the course. To receive a certificate the student must have attended at least 80% of the course sessions. This personalized certificate is forwarded to the student by Email.

PDF – Course Notebook

Each course includes a notebook in PDF format that provides the minimum knowledge the student must master in order to obtain the certificate. In the notebook you will find references to other study materials. Students receive the notebook by Email when their registration is confirmed.

PDF – Program Overview

An overview of this study program can be downloaded from the website by right-clicking on the program link on the enquiry page.

PDF – Current Training Schedule

A list of upcoming training sessions can be downloaded from the website by right-clicking on the schedule link on the enquiry page.

Registration – Service Providers

To register for any training course, please visit the enquiry page of your service provider (from which you accessed this website). On that page you will find a registration request form where you can order the course you are interested in. The availability dates will be provided to you, along with payment instructions if you decide to go ahead.