Thursday, May 18, 2017

SW Testing: Testing activities and maybe some tools?

Ah well, testing. So good (or bad, depending on your point of view)!

Testing Activities

Testing activities (belonging to a test process) include things such as planning, specifying, executing and reporting (status). Many of you may have only ever executed tests (exploratory testing). This is sometimes called dynamic testing. Some methodologies also consider document and code reviews to be testing (static testing).

When we talk about the "execute tests" activity (or activities), we are talking about some common test levels you should be using to verify (and validate, depending on the phase / goals you are trying to achieve) your software:

#1 - Unit testing;
#2 - System testing / integration testing;
#3 - Acceptance testing.

Test levels attempt to achieve different goals by doing almost the same thing at different phases: executing tests. All test levels include executing tests (which consists of verifying that some part of the system is "OK", whatever OK means), but the focus, the goals to reach, the (sub)tasks and the tooling to use can be completely different "beasts".

#1 - These are sometimes called developer tests and, ideally (if possible), are written (and executed) by the code author (the developer). They can be reused in continuous integration (or continuous delivery) environments.
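As a minimal sketch, assuming JUnit 5 is on the classpath (the PriceCalculator class and its discount rule are hypothetical, inlined only to keep the example self-contained):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceCalculatorTest {

    // Minimal production code inlined here so the sketch is self-contained;
    // in a real project this class would live in src/main/java.
    static class PriceCalculator {
        double totalWithDiscount(double amount) {
            // Assumed business rule: orders above 100.00 get a 10% discount.
            return amount > 100.00 ? amount * 0.9 : amount;
        }
    }

    @Test
    void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.00, calculator.totalWithDiscount(200.00), 0.001);
    }
}

Tests like this run in seconds, which is exactly what makes them reusable on every build in a CI pipeline.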

#2 - These are used to test components after integration (HW + SW) and deployment to a pre-production environment. The focus is to test end-to-end functionality against (written, please) specifications (SRS, SAS, ICD/ICS, DDS/SDD, ...).

#3 - These are used to make sure the customer's needs were addressed; they involve the customer, the contractor or an (independent) team approved by the customer.

Testing Tools? 

As for tooling, you can do almost all of this without tools, except maybe #1 (you'll need at least an office suite for the reporting part, Excel, Word, ... unless you are willing to use hand-written paper sheets). But some tools will boost your productivity (and add years to your life) and ease "regression testing" (i.e. making sure that the system did not regress in terms of behaviour).

Good examples of tools and frameworks that you'll need to master are:
#1 - Unit testing frameworks: think JUnit, NUnit, ... Also think about mocking behaviour with complementary frameworks (how do you test your class without the "surrounding" classes?).
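As a sketch of that idea, using Mockito as one common mocking framework for Java (the CustomerRepository and OrderService types are hypothetical, inlined to keep it self-contained):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // Hypothetical collaborator we do NOT want to exercise in a unit test.
    interface CustomerRepository {
        boolean exists(String customerId);
    }

    // Hypothetical class under test.
    static class OrderService {
        private final CustomerRepository repository;
        OrderService(CustomerRepository repository) { this.repository = repository; }
        boolean placeOrder(String customerId) { return repository.exists(customerId); }
    }

    @Test
    void rejectsOrderForUnknownCustomer() {
        // The mock stands in for the "surrounding" class (e.g. a database-backed repository).
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.exists("C-42")).thenReturn(false);

        OrderService service = new OrderService(repository);

        assertFalse(service.placeOrder("C-42"));
    }
}

The point is that only OrderService is exercised; the repository's real implementation (and its database) never gets involved.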

#2 - You'll have a (test) specification (with the test case catalog). Then you'll need a test management tool to choose subsets of it, assign them to testers (also called QA Engineers, SPA Engineers) and record results (pass, fail, on-hold, etc.). Note: if you have no test management tool available, use at least a spreadsheet shared with the team (configured for concurrent editing - even MS Office supports that, and it works well for small teams on a local network share). Just make sure that no passed tests get edited to failed by the concurrent edits (make frequent backups of the shared spreadsheet).

X-Ray: If you're using JIRA for issue tracking, X-Ray could be of interest for test management. More on this below.

Still #2 - If you want to automate tests (think: "test automation"), you are writing (i.e. recording/programming) test scripts, which must be rigorously aligned with the test specifications (it takes a lot of effort to keep them updated, i.e. alive, so plan ahead for it).
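One lightweight way to keep that alignment visible (a sketch; the test case IDs are hypothetical) is to tag each automated test with the identifier of the test case it implements, e.g. with JUnit 5 annotations:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class LoginTests {

    @Test
    @Tag("TC-LOGIN-001")  // ID of the test case in the test specification (hypothetical)
    @DisplayName("TC-LOGIN-001: valid credentials open the dashboard")
    void validCredentialsOpenDashboard() {
        // Test steps would go here, mirroring the specified test case.
    }
}

Tags also let you run spec-driven subsets from the build tool or the CI server.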

SoapUI: You can automatically test the business layer (data persistence issues such as CRUD operations, business rules), and there are tools for SOA architectures: think SoapUI (free and paid editions).
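SoapUI itself is configured through its own projects and GUI, but the underlying idea (exercising the service layer directly and asserting on the response) can be sketched in plain Java with the JDK's HTTP client; the endpoint and JSON field below are assumptions:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class CustomerServiceIT {

    @Test
    void getCustomerReturnsOkAndJsonBody() throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical pre-production endpoint of the service under test.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://preprod.example.com/api/customers/42"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"customerId\""));
    }
}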

Robot FW: Sometimes you'll want to automate tests via the presentation layer, including browser-based applications. Think about tools that support several browsers, that let you record the interaction interactively (without X-Y coordinate dependencies), and that let you edit the resulting recorded script (to make values come from files, test databases, etc.). Think Robot Framework and its RIDE editor; browser automation in Robot Framework builds on Selenium.
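Robot Framework test cases are written in its own keyword-driven syntax, but the browser automation underneath can be sketched in Java with the Selenium WebDriver API; the URL and element locators below are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class LoginPageIT {

    @Test
    void loginShowsWelcomeMessage() {
        // Requires Chrome and a matching ChromeDriver on the test machine.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://preprod.example.com/login");  // hypothetical URL

            // Elements are located by ID, not by X-Y screen coordinates.
            driver.findElement(By.id("username")).sendKeys("tester");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();

            assertTrue(driver.getPageSource().contains("Welcome"));
        } finally {
            driver.quit();
        }
    }
}

Test data such as the username could come from files or a test database, as mentioned above, instead of being hard-coded.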

Testing mobile apps (remotely) is yet another challenge (out of scope of this article, for now).

#3 - Acceptance testing is typically done manually (customers will want to "click" through), but it could be different. The customer's DevOps team could do it - manually or in any other way - on their side (on their pre-production and production environments) before formally "accepting" deliveries.
And we, as suppliers of [deployed, working] solutions, could be asked by managers to execute (full) "dry runs" to make sure all specified acceptance test cases are OK as far as our teams can foresee (in our acceptance test beds, or even the customer's, if access is given to us).

If you need test automation at this test level, the same tools could be used (Robot FW, for instance).

(INTERNAL) Training

https://goo.gl/lWJ3IN - Info about Robot FW (INTERNAL, delivery)
https://goo.gl/Q4jOfc - XRay test management tool (integrated with JIRA)
https://goo.gl/TLwyqH - Training on SoapUI, testing via the service layer (INTERNAL, HR)
https://goo.gl/69nfuh - Training on Robot FW, testing via the presentation layer(s) (INTERNAL, HR)

Also, remember to... READ THE QMS PROCESS (ENG04 - Software Testing).