One of the biggest financial institutions in Poland was developing a private banking application that needed to be tested extensively in both its desktop and mobile versions. Manual testing, used up to that point, was too slow. This is where we came in: we decided to automate the whole process to bring the new product to market as soon as possible.
Before starting any work on test automation, we had to lay the groundwork typical for this kind of project.
We defined:
Having defined the tasks, we set out milestones and moved on to the execution stage.
The project consisted of 22 sprints, each taking three weeks and including strict monitoring of gathered metrics, such as progress against milestone priorities, test coverage for every scenario, the number of test scenarios created, and the number of automatic tests created.
For each sprint, we prepared a demo of a given feature, which made it easier for us to control the direction of the project and allowed the client to watch our progress on the fly.
This approach is typical for safety-critical software development, where product quality is often a matter of life and death. Yet we believe good practices developed for such applications can benefit any project and are highly advised.
We discussed this in one of our webinars, “What we have learned from building safety-critical systems”.
Apart from automating the tests previously performed manually by the client, we prepared a methodology for creating test scenarios, automatic test creation processes, and a library of automated test scenarios, all with the Behaviour-Driven Development (BDD) approach in mind.
This method assumes that as the software is developed, it is also described from a potential end user's point of view. For this purpose the Gherkin language is used, with keyword phrases such as Given – When – Then. As a result, communication between the development team and the client is smooth and efficient, since it is not a problem for technically-inclined engineers to interpret business needs expressed in this form.
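As an illustration of the Given – When – Then structure (this is an invented example, not an actual scenario from the project), a Gherkin scenario for a hypothetical login feature might read:

```gherkin
Feature: Private banking login
  As a bank customer
  I want to log in to the private banking application
  So that I can manage my accounts

  Scenario: Successful login with valid credentials
    Given the customer is on the login page
    When the customer enters a valid username and password
    Then the account dashboard is displayed
```

Because each step is written in plain business language, the client can review scenarios directly, while the automation team maps each step to executable test code.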
When the necessary documentation was finished it was time for the main part of the project – creating automatic tests for both versions of the application.
Firstly, we wrote the code for each automatic test in accordance with “clean code” rules. This way we eliminated a significant number of changes that would otherwise have surfaced in the refactoring and review stages.
Secondly, each test went through an initial review, corrections, and a secondary review performed by a senior developer.
We assumed that the automation of a test should end only when the test had been executed and its results generated. Those results could then be verified by an automated testing maintenance specialist to ultimately confirm the reliability of the test. Only then could the test be added to the test cycle.
Due to a large number of automatic tests and test configurations, their execution time became one of the most important factors in the project.
To achieve maximum efficiency, we organized the environment so that tests for each configuration would run independently. We used both virtual machines, running Docker images provided by Katalon and adjusted to our specific needs, and physical machines running the Katalon software itself.
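The payoff of running configurations independently can be sketched in a few lines of Python. This is only an illustration of the principle, not the project's actual orchestration: in the real setup each configuration was a Katalon run in its own Docker container or on a physical machine, and `run_configuration` below is a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor

def run_configuration(config: str) -> str:
    # Hypothetical stand-in for launching the test runner (e.g. a
    # Docker container) for one browser/device configuration and
    # collecting its result.
    return f"{config}: passed"

configs = ["Chrome", "Firefox", "Internet Explorer", "Android", "iOS"]

# Because each configuration runs independently, total wall-clock time
# approaches the slowest single run instead of the sum of all runs.
with ThreadPoolExecutor(max_workers=len(configs)) as pool:
    results = list(pool.map(run_configuration, configs))

for line in results:
    print(line)
```

The same independence property is what lets the suite scale: adding a new configuration adds one more parallel worker rather than extending a serial queue.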
Why physical devices? We wanted to test the application in Internet Explorer and on mobile devices.
This key metric, especially valuable from the business point of view, makes it possible to assess how much of the application’s functionality is covered by the existing test scenarios and shows which of its parts lack tests. To put it simply, it shows how complete the testing process is at any given stage.
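In its simplest form, the metric is the share of identified functionalities covered by at least one test scenario. The function and the sample numbers below are illustrative, not the project's actual formula or data:

```python
def scenario_coverage(total_functions: int, covered_functions: int) -> float:
    """Percentage of the application's functionalities that are
    covered by at least one test scenario."""
    if total_functions <= 0:
        raise ValueError("total_functions must be positive")
    return 100.0 * covered_functions / total_functions

# Example: 180 of 240 identified functionalities have scenarios.
print(f"{scenario_coverage(240, 180):.1f}%")  # 75.0%
```

The remaining 25% points directly at the parts of the application that still lack tests, which is exactly the gap a Project Manager needs to see.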
Thanks to this metric it is possible to control work progress and assess the amount of work required to finalize the project. Clearly, this information is most useful from the perspective of a Project Manager.
Apart from typical metrics, we also monitored trends in those metrics. This allowed us to assess the project over a longer perspective, check its progress against the assumed schedule, and detect possible obstacles before they had any impact on the project.
After each sprint, every three weeks, we met with our client to present a summary of the completed stage and of ongoing tasks. Such a practice is highly beneficial to all projects, not just testing ones. It helps the team control progress and gives the client a clear picture of the project’s state and results.
Test automation effectively accelerates the introduction of new software to the market. The shorter the Time-to-Market, the higher the profit, and ultimately, that is what it is all about.
Within this project, we finished:
What is more, the cooperation provided the client’s team with additional material for use in future projects. The project documentation included detailed information on topics such as:
For many years Solwit has been one of the platinum-level partners of the International Software Testing Qualifications Board – our specialists hold certificates of the Foundation and Advanced levels. This means our testing services are always aligned with the strict ISTQB standards.
In recent years, our work has also earned us a place among the three biggest providers of testing services in Poland in the Computerworld Top 200 reports.
Striving for the best possible quality of service, we always supervise the implementation of our solutions and involve the client’s team in our planning. At the stage of designing scenarios and setting priorities, we always choose to communicate directly with the Business Owner on the client’s side.
Technologies used in the project: