Much of the upgrade series so far has been about “knowing your environment”. As I get closer to the end of the overall series, there are a couple more topics to discuss. Today's topic is testing and what to consider/plan for.

Posts in the series

  1. Series intro
  2. Process overview
  3. Know your environment (part 1)
  4. Know your environment (part 2)
  5. Know your environment (part 3)
  6. Know your environment (part 4)
  7. Know your environment (part 5)
  8. Know your environment (wrap-up)

Types of Testing

When I'm planning an upgrade, I think about testing in a few different ways depending on what is being tested, because each type of testing comes with different considerations to plan for.

Data Validation

This is the first and usually the most important type of testing, as it validates that the core data upgrade went well. It involves comparing reports from a representative sample of the modules before and after the GP upgrade. For a live upgrade, the typical process is to run various reports right before the environment is "passed off" to the technical folks who do the upgrade itself, and then the first thing done once the environment is ready again is to re-check the same reports.

I rarely see issues here, and when I do, it's usually a case where a user kept using GP after the "cutoff" but before the data was copied, OR started using GP again before validation was done. I have never seen an entire upgrade fail, but there are always smaller issues that need to be addressed, which is why this is a step I never recommend skipping. The key is to check a sample of reports across all or most of the modules in use; don't just check that the GL data upgraded!

For test upgrades, the larger challenge is timing: when the backups are taken and when the "pre" reports are run. I typically take a set of backups to use for the test upgrade and restore the same backups to test companies. Users can then print "pre" reports from the test companies and "post" reports from the upgraded test companies to compare. If that's not an option, then the same kind of cutoff planning may be needed for the test upgrade: stopping access to the system, running "before" reports, taking the backups for the upgrade, and then resuming normal use of production.
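For teams comfortable with a bit of scripting, a quick sanity check can supplement the report comparison: diff row counts (or summary totals) between the restored "pre" test company and the upgraded test company. Below is a minimal sketch, not part of any standard GP tooling; the server name, database names, and table list are placeholders to replace with whatever matches the reports your users actually compare.

```python
# Minimal sketch: compare row counts between a restored "pre" test company
# database and the upgraded test company database. The server, database
# names, and table list are placeholder assumptions; substitute your own.
import pyodbc

SERVER = "GPSQL01"          # hypothetical SQL Server instance
PRE_DB = "TESTPRE"          # restored pre-upgrade test company database
POST_DB = "TESTUPG"         # upgraded test company database
TABLES = ["GL20000", "RM20101", "PM20000", "SOP30200"]  # sample GP tables; adjust per module

def connect(database):
    """Open a trusted connection to the given database."""
    return pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={SERVER};DATABASE={database};Trusted_Connection=yes;"
    )

def row_count(conn, table):
    """Return the row count for a single table."""
    return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

pre, post = connect(PRE_DB), connect(POST_DB)
for table in TABLES:
    before, after = row_count(pre, table), row_count(post, table)
    status = "OK" if before == after else "MISMATCH, investigate"
    print(f"{table:<10} pre={before:>10} post={after:>10}  {status}")
pre.close()
post.close()
```

Row counts are only a coarse check; if you know the queries behind the reports being compared, summary totals that line up with those reports are a better fit. Either way, this supplements the report comparison rather than replacing it.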

In general, my recommendation is to use out-of-the-box reports for this validation, and to do it first, before other testing starts. If there are issues with a modified report, that should not prevent users from comparing the "original" report for data validation purposes. The content is what is being validated, not the "look" of the report itself.

Process testing

This is typically where the most testing time is spent and what most people think of as "testing" the upgrade. It is where users go through many of the same functions and processes they perform day to day, as well as "routine-based" processes like month-end or year-end routines. The closer users can get to doing the same things as normal, the more likely it is that any issues needing review before go-live will be caught.

I encourage users, initially, to post and print the posting journals (to PDF or paper) and check them. The reason I suggest this is that some of the modified reports in an environment are often posting journals or edit lists, and their content needs to be reviewed for errors in the reports themselves. Once the reports are deemed to be OK, they may not need to be printed for every transaction or batch posted after that.

Other types of testing

The rest are offshoots of some of the things I've mentioned in my Know Your Environment parts of this series: modified reports and forms testing, reporting tools testing, integration testing, customization testing, and external application testing.

Each of those areas has its own things to consider, and while that testing can be folded into the process testing, there are often tests specific to those items outside of it: "Does the tool work?", "Can I get into the tool?" and so on.

An example I use to explain this is reporting tool testing. "Can I connect to the data source?" is something that should be tested before giving a user instructions to test a process that includes that report as part of its output.
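For the data source question specifically, even that check can be scripted ahead of handing instructions to users. The sketch below simply attempts an ODBC connection and reports success or failure; the DSN name is a made-up placeholder.

```python
# Minimal sketch: verify that a reporting data source connects before users
# start testing reports that depend on it. The DSN name is a placeholder.
import pyodbc

try:
    conn = pyodbc.connect("DSN=GP_Reporting;Trusted_Connection=yes;", timeout=5)
    conn.close()
    print("Data source connection: OK")
except pyodbc.Error as exc:
    print(f"Data source connection failed: {exc}")
```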

The other part of this section can be a simple "inventory" test: make sure that all of the ISV products that are licensed and used are actually installed. It wouldn't be the first time a product was missed during installation, and depending on what it is, that might not be noticed immediately.
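One way to take some of the manual effort out of that inventory check is to read the product list from the launch file and compare it to the list of products the organization licenses. This is only a sketch: it assumes the typical Dynamics.set layout (a product count on the first line, followed by product ID and product name pairs), and the file path and expected-product list are placeholders.

```python
# Minimal sketch: list products registered in a Dynamics.set launch file and
# flag expected ISV products that are missing. The path and EXPECTED set are
# placeholders; the parsing assumes the usual layout of a product count
# followed by alternating product ID / product name lines.
from pathlib import Path

SET_FILE = Path(r"C:\Program Files (x86)\Microsoft Dynamics\GP\Dynamics.set")  # adjust per install
EXPECTED = {"Microsoft Dynamics GP", "SmartList Builder"}  # example product names to verify

lines = [ln.strip() for ln in SET_FILE.read_text().splitlines() if ln.strip()]
count = int(lines[0])                 # first line: number of products
pairs = lines[1 : 1 + count * 2]      # then alternating product ID / product name
installed = {pairs[i + 1] for i in range(0, len(pairs), 2)}

print("Installed products:")
for name in sorted(installed):
    print(f"  {name}")

missing = EXPECTED - installed
if missing:
    print("Missing expected products: " + ", ".join(sorted(missing)))
else:
    print("All expected products are present.")
```

A manual comparison against the launch file accomplishes the same thing; the point is to confirm the install is complete before users get partway through testing.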

Test Plans

Make sure part of the planning includes creating test plans or at least identifying what should be tested. Testing is useless if users don't know what the expected outcome is or if they are just "winging it" without a plan.

Testing is a chance to ensure the odd situations work as well as the "vanilla" ones; the simple scenarios should not be the only thing tested. For example, if multicurrency is enabled, include M/C transactions in the testing.

The point of testing

The goal of a test upgrade is to ensure the process works, end to end. The entire process is a test, right from how the new version is installed through to making sure that the test transactions post correctly. I tell clients that we are not doing "QA" (quality assurance) to prove that Dynamics GP works; we're testing that users can do the things they usually do day to day and noting anything new, different, or broken so it can be addressed before going live, or, in some cases, so they know what to expect at go-live.

I'm not a big fan of "post 3 of every type of transaction" test plans. I would rather see a focus on process-based testing ("order to cash", "procure to pay", etc.) as it mimics how people use GP day to day. Take transactions through their lifecycle: create a PO, receive it, invoice match it, return the items, and so on, and in between, check the reporting that might be done daily to see how those transactions are flowing.

One thing to note here: I am not suggesting "running in parallel" with production during a test upgrade. While some things done in production can also be done in the test environment, trying to keep the two systems in sync is a mistake. Any time organizations have tried this, they spend more time double-entering data and reconciling why the systems are not identical than it's worth.

Summary

This is a very brief overview of testing considerations when planning the next upgrade. Finding issues during testing is OK; think of them as yet another reason why it is important to do a test upgrade. Which is worse: finding out something isn't working after going live, or during the test phase when there is more time to sort things out?