Why do you need to learn SQL as a tester?

For the last five years or so, we have been training and mentoring software testers and business analysts, and as part of the mentoring sessions I ask about their experience of interviews; generally I am looking for the sort of questions the mentees are being asked and, more importantly, the answers they gave. My observation in recent years is that there is an increasing demand for manual testers with technical skills, and knowledge of databases sits high on that list.

I had one of these sessions yesterday, and the person I was talking to had been asked in an interview, “What do you do with the database?” and “When do you query the database?”. These are valid questions, and even though I query the database several times a day whenever I am testing a system, I had to take a step back to think about it. Below I share some of the reasons I would want to look into the database.

Most applications have a data store, the part of the application that persistently holds data, and a lot of people who are new to testing would normally do their checks on the user interface alone. So why check in the database?

  • Learn more about the system: As part of exploratory testing and getting to know the system, looking at the database might help you understand how the data is structured. When you create, update or delete a record, looking at the related tables might help you explore the UI better.
  • Debugging / investigating a potential bug: Oftentimes, I don’t stop at raising a bug; I like to poke around the system to understand its root cause. (Disclaimer: be careful not to spend too much time doing this.) A good example is a feature which is expected to sort data on a page by last updated time; if I find that the functionality is not working as expected, I might go to the database and run a query based on my understanding of the feature being tested, hoping that I can reproduce the issue (see the sketch after this list).
  • Test data: It’s good practice to create your own test data to reduce the chances of unexpected side effects, but sometimes I find it useful to run a very quick query in the database to find data that I could potentially use to execute my intended test scenarios.
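For illustration, here is a minimal Groovy sketch of the kind of quick query I have in mind for the last two bullets. Every detail is assumed for the example: the JDBC URL, the credentials, and the articles table with its last_updated column; substitute whatever your system actually uses.

import groovy.sql.Sql

// Connection details and schema below are hypothetical.
def sql = Sql.newInstance(
        'jdbc:postgresql://localhost:5432/appdb',   // assumed database URL
        'tester', 'secret', 'org.postgresql.Driver')

// Bug investigation: how should the page be ordered? Newest update first.
sql.eachRow('SELECT id, title, last_updated FROM articles ORDER BY last_updated DESC') { row ->
    println "${row.id}  ${row.title}  ${row.last_updated}"
}

// Test data: find an existing record I could reuse in a scenario,
// e.g. an article that has never been updated.
def candidate = sql.firstRow('SELECT id FROM articles WHERE last_updated IS NULL')
println "Candidate test record: ${candidate?.id}"

sql.close()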

There are a few more reasons I could think of, but these three broadly cover why I would query the database as part of a testing session. To a lot of seasoned testers these might already be obvious, but for newbies I thought this post might just help you on your journey to being a better tester.


Should API test and UI Acceptance Tests be written in the same programming language?

In my previous post I wrote about why we need to distribute our tests across the testing pyramid. This post focuses on how we ought to design our tests at the higher levels of abstraction, namely the UI tests and the API tests.

For the purpose of this post, I will use a case study of a team that writes its UI acceptance tests in Protractor (JavaScript) and its API tests in Java using the RestAssured library. The content of this post is an expression of my experience and observations, and I am happy for other people to share theirs.

1. Maintenance cost is even higher: I have already established in my previous post how expensive these high-level tests can be. For an application that requires a lot of data to be set up before the tests are run, the effort of setting up data is duplicated, as these are effectively two different sets of tests written in two different languages. These tests would also require two different build runners, e.g. Gradle/Maven for Java and Gulp for JavaScript, which in my opinion adds to the complexity.

2. Silos are encouraged: One of the practices that I encourage on an agile team is ensuring that everyone works together, continuously collaborating on the code and the tests. In my experience, when a team decides to split the higher-level tests as described above, i.e. Java for the API tests and JavaScript for the UI acceptance tests, I have observed that one of these tends to happen:

  • Testers and back end developers contribute to the API tests while front end developers contribute to the UI acceptance tests.
  • Testers and front end developers contribute to the UI acceptance tests while back end developers write and maintain the Java API tests.

This is bad, as we have just reintroduced silos into the team. Another side effect is that the definition of “Done” becomes blurred. We expect the definition of “Done” to be when all code has been written, tested, deployed and signed off by the product owner. However, when we have two separate test suites, back end developers can write the Java API tests as soon as the back end code is completed, and provided those tests are passing, the team is tempted over time to split stories into back end stories and front end stories.

3. Higher chance of duplicating test scenarios in both suites: For the reasons described above, I have observed that the chances of similar test scenarios being implemented in both the UI acceptance tests and the API test suite are greatly increased. When the team doesn’t work together on the tests and assumptions are made, more often than not a team that cares about test coverage will try to write as many tests as possible, which ends up with the same scenarios implemented twice. Similarly, it becomes a lot easier to introduce gaps into the testing; this occurs, for instance, if a back end developer assumes a particular test will be written as part of the front end tests and this is not communicated, while the person supposedly writing the front end tests omits it for similar reasons.

The advice

In order to reduce the chances of some of these problems occurring, I would always advise a single code base and a single language for the API tests and the UI acceptance tests.
I worked with a client a couple of years ago, and before I joined the team, their API tests and UI acceptance tests were set up as described above. The problem was even worse at the time, as the UI acceptance tests were considered an optional set of tests.
On joining the team, I got everyone together so that we could discuss how we were doing tests as a team and where we would like to be. It wasn’t an easy task, but the team decided to use a combination of Spock (for defining test scenarios), Geb (a browser automation tool based on WebDriver) and RestAssured.

There was always a tussle between the front end developers and the back end developers, in that the front end developers weren’t keen on writing full-blown Java code whilst the back end developers weren’t keen on writing JavaScript tests. As a result, we decided on the tool set described above for the reasons highlighted below:

  • Groovy: The base language used in the test suite was Groovy, which is a JVM-based language; this felt like a middle ground for both the Java developers and the JavaScript developers. Groovy is not a difficult language for a front end developer to pick up, as the syntax bears some resemblance to JavaScript and other scripting languages.
  • Geb: This library was the main reason we tilted towards Groovy. Geb has a nice syntax that is very jQuery-like, and this is why we decided to use it.
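As a rough sketch of what this looked like in practice, here is a single Spock feature method that drives both the API (RestAssured) and the browser (Geb). The base URL, endpoint, payload and selector are invented for the example, and depending on your RestAssured version the import may be com.jayway.restassured rather than io.restassured.

import geb.spock.GebSpec
import static io.restassured.RestAssured.given

class CustomerSpec extends GebSpec {

    // Base URL is illustrative, not from a real project.
    static final String BASE = 'https://app.example.test'

    def "a customer created via the API shows up in the UI"() {
        given: 'a customer is created through the API with RestAssured'
        def id = given()
                .contentType('application/json')
                .body('{"name": "Jane Doe"}')
                .post("${BASE}/api/customers")
                .then()
                .statusCode(201)
                .extract().path('id')

        when: 'the customer page is opened in the browser with Geb'
        go "${BASE}/customers/${id}"

        then: 'the customer name is displayed'
        waitFor { $('h1').text() == 'Jane Doe' }
    }
}

Because the whole journey lives in one feature method, a front end developer and a back end developer reviewing this test are looking at the same file, in the same language, in the same project.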

After we adopted this set of tools, these are some of the advantages we experienced:

1. Creation of test automation scripts became an atomic task, in that “the automated tests have been written” meant that both the front end and back end tests had been thought through and implemented accordingly.

2. Reusability of set-up data came naturally, as we had a single suite of tests and hence a single test set-up.

3. Reusability of code, including helper classes, page objects and API wrappers.

4. Feedback was much faster, as we had a single test suite and all we had to do was run one set of tests.

5. A single source of truth when using the tests as system documentation.

I really hope that when your team is about to start its next set of tests, maybe on a brand new project or a rewrite of an existing test suite, you will think about some of the points discussed above and make sure your API tests and UI tests are in the same language and within the same project.


Uploading CSS files using the “Shrine” gem

I recently switched from CarrierWave to Shrine in my Rails app, and I thought I would follow the advice of Alan Richardson: when you learn something new, regardless of how trivial it might seem to you, share it and save someone else the headache.

In the Yangah application, we upload a number of files to Amazon S3, including images, font files and the CSS files for those fonts. However, I found that, out of the box, Shrine was not able to correctly set the MIME type for the uploaded files on S3.

I did try to fix this myself, and eventually reached out to the Shrine Google group, where Janko, the author of the Shrine gem, provided a suggestion which worked first time; I am grateful for that.

Here is a link to the actual thread, but just in case it gets deleted, this is an extract of the message sent by Janko.

The `determine_mime_type` plugin uses the “file” utility by default for recognizing the MIME type, and it seems to recognize .css files as “text/plain” (and it’s the same for .js files). As noted in the `determine_mime_type` plugin documentation, one analyzer won’t be able to correctly detect all types of files.

For text-based (non-binary) file formats it’s probably better to use the “mime_types” analyzer, which uses the mime-types gem to determine the MIME type from the file extension directly (rather than the file content). So you could build a custom analyzer that mixes and matches the “file” and “mime_types” analyzers: trying the “file” analyzer first, and if the best MIME type it could come up with was “text/plain” (which is technically correct), then calling the “mime_types” analyzer to determine which text-based format it is exactly.
This way you still get the benefit of the `file` utility preventing someone from uploading a binary file with a .css extension, plus the precision of determining the type from the file extension when needed.

I installed the “mime-types” gem and updated the FileUploader class with the code snippet below.

plugin :determine_mime_type, analyzer: ->(io, analyzers) do
  # try the "file" utility first
  mime_type = analyzers[:file].call(io)
  # fall back to extension-based detection for text formats (.css, .js, ...)
  mime_type = analyzers[:mime_types].call(io) if mime_type == "text/plain"
  mime_type
end


And that’s it! The CSS files are now uploaded with the correct MIME type of “text/css”.


Testing an Asynchronous System – Part 2

In this post, I would like to consider how to write automated tests for negative scenarios in a system built using an asynchronous / event-driven architecture. In my previous post, I mentioned that these systems are peculiar because the effects of “write” actions are not immediately stored in the persistence layer of the application under test.

As an example, in the system I have currently been testing, on the write side of the application a write command traverses the Command API layer, the Command Controller layer and the Command Handler layer before it ends up in the Event Store. Once data reaches the Event Store, it is almost certain that persistence will eventually happen, i.e. there isn’t much that can go wrong at that stage except things like the infrastructure breaking down.

For end-to-end tests, what we would usually do is:

SendAPostRequestToCreateOrUpdateAnEntity()
QueryAndWaitForEntityToBePersisted()

This usually works for happy path test scenarios, but not in a case where the test condition is that the POST request should fail. In the system being referred to:

Business rules / complex logic checks are usually implemented at the Command Controller layer, so if I want to verify that I cannot persist a set of attributes because they do NOT comply with some set of business rules, a naive way would be to write:

SendAnInvalidPostRequestToCreateOrUpdateAnEntity()
QueryAndWaitForEntityToNeverBePersisted()

This second line would never fail, because at the moment the test code executes it, the nature of the system means that, even if there was a bug, we do NOT expect the value to have been persisted on the first pass through the wait loop. In fact, in an environment where test-driven development is practised, we expect our tests (unit, integration and end-to-end) to fail before the code implementing the business logic checks is written, but in this case the test would never fail. I hope these are enough reasons to show that this is definitely the wrong way to test such functionality in an asynchronous system.
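To make the point concrete, here is a sketch of that naive check in Groovy; QueryApi and the polling interval are stand-ins I have made up, not real code from the project.

// QueryApi is a hypothetical read-side client.
interface QueryApi {
    boolean entityExists(String entityId)
}

void queryAndWaitForEntityToNeverBePersisted(QueryApi queryApi, String entityId) {
    // Polls until the entity is absent from the read side. The very first
    // poll already finds it absent -- the asynchronous write cannot have
    // propagated yet -- so this returns immediately, whether or not the
    // business rule validation has actually been implemented.
    while (queryApi.entityExists(entityId)) {
        sleep 500
    }
}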

An alternative, which I feel is too heavy and an unnecessary expense for this sort of test, is to modify your tests as such:

SendAPostRequestToCreateAnEntityComplyingWithBusinessRules()
QueryAndWaitForEntityToBePersisted()
SendAPostRequestToUpdateAnEntityNOTComplyingWithBusinessRules()
QueryAndWaitForUpdatedEntityToNeverBePersisted()

So I guess the question in your mind would be: what am I proposing?

As I said in my previous post, your test coverage is a combination of tests written at the different layers of the testing triangle. I would push this test down to a lower level, such as a unit test for the appropriate layer in the architecture. This is an activity that needs to be carried out in conjunction with the developers, and being able to make a compelling case should get you the necessary support to write such unit and/or integration tests.
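Here is a hedged sketch of what that lower-level test could look like in Spock. The controller, command and exception are minimal stand-ins defined inline so the example is self-contained; in a real project they would be your production classes.

import spock.lang.Specification

// Minimal stand-ins for the production classes.
class BusinessRuleViolationException extends RuntimeException { }

class CommandController {
    void handle(Map command) {
        // stand-in for the real business rule check
        if (command.attribute == 'invalid-value') {
            throw new BusinessRuleViolationException()
        }
    }
}

class CommandControllerSpec extends Specification {

    def "a command violating a business rule is rejected synchronously"() {
        given: 'a controller with the business rules wired in'
        def controller = new CommandController()

        when: 'an invalid update command is handled'
        controller.handle(attribute: 'invalid-value')

        then: 'the rejection is immediate -- no polling, no eventual consistency'
        thrown(BusinessRuleViolationException)
    }
}

This test fails before the validation code exists and passes once it does, which restores the red-green feedback loop that the end-to-end version could never give us.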


Testing an Asynchronous System

In recent months I have been testing a system built using the CQRS pattern. CQRS stands for Command Query Responsibility Segregation, and you can read more about it here.

As a tester, one of the key takeaways for me is that there is a Read side and a Write side to the application. This is very different from the other applications I have tested in the past, where a write operation would NOT be successful until the data sent to the application had been successfully persisted.

In the case of this system, built using an event-driven architecture, the write side of the application always comes back with a successful response, which implies that the command succeeded (provided that the API contracts and JSON schema validations are met), but this doesn’t mean that the data has been persisted. This is due to a concept referred to as eventual consistency.

Without going into the technical details, the internals of the system are event based, which means that every action triggers one or more events that are picked up by other parts of the system, and eventually the data is persisted, provided all the business rules and validation conditions are met.

This poses a challenge as to how best to test the system, and in my opinion it forces technical testers to look beyond writing API tests and to seek cheaper ways of testing, i.e. traversing the entire testing triangle and pushing tests down to integration tests and unit tests. See my previous post, where I spent a lot of time discussing how testers should consider the true cost of testing.

For an asynchronous system, an incredible amount of polling has to take place in end-to-end tests that involve the full stack. Testers also have to understand the dependencies between different actions that could be happening around the same time, so our tests end up looking like this:

doAction1()
waitForAction1ToBePersisted()
doAction2()
waitForAction2ToBePersisted()
...
doActionN()
waitForActionNToBePersisted()
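Behind each of those waitFor... steps sits a polling helper along these lines; the timeout, the interval and the queryApi client in the usage comment are illustrative, not from a real project.

def waitForPersisted(long timeoutMillis = 30000,
                     long intervalMillis = 500,
                     Closure<Boolean> condition) {
    long deadline = System.currentTimeMillis() + timeoutMillis
    while (System.currentTimeMillis() < deadline) {
        if (condition()) {
            return                      // the write has propagated to the read side
        }
        sleep intervalMillis            // back off before polling again
    }
    throw new AssertionError("Condition not met within ${timeoutMillis} ms".toString())
}

// usage, mirroring the pattern above:
// doAction1()
// waitForPersisted { queryApi.entityExists(action1EntityId) }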

I have always considered end-to-end tests to be very expensive, but the nature of this system adds more complexity to our tests, and they take an awfully long time to complete; hence it is all the more important that we reconsider how and where we test our system.


Geb and Spock: My favourites for test automation

I have used quite a number of tools in my time building test automation frameworks over the years; in the last two years I have evaluated several of them, and I am becoming more and more opinionated in my choice of tools.

As someone who has used Selenium/WebDriver and Cucumber a lot in the past, I find myself using Geb and Spock these days, and it hasn’t been much of a struggle switching to these tools.

Geb has won my heart with its syntax. As those who have used it will know, it uses WebDriver under the covers, and you can drop down to the layer below and use WebDriver directly if you choose to. In the time I have used Geb there have been occasions when I’ve had to do this, but I love the DSL that Geb provides and I hate it when I have to use WebDriver directly.

I love Cucumber, and I know that the regex matching between feature files and step definition files is quite powerful; however, I find the manner in which Spock eliminates the need for a separate text file cool. Also, with a very big test suite, I have found that the regexes can introduce a maintenance overhead, which is why I prefer to use Spock.
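By way of illustration, here is roughly what I mean; the URL and selectors are invented. The given/when/then labels carry the plain-English description that would otherwise live in a Cucumber feature file, and the jQuery-like $ calls are Geb.

import geb.spock.GebSpec

class SearchSpec extends GebSpec {

    def "searching for a term shows matching results"() {
        given: 'I am on the home page'          // the English lives in the spec itself
        go 'https://example.test'

        when: 'I search for "geb"'
        $('input', name: 'q').value('geb')      // jQuery-like element lookup
        $('button', type: 'submit').click()

        then: 'at least one result is shown'
        waitFor { $('.result').size() > 0 }
    }
}

No regex glue is needed between the description and the code, which is exactly the maintenance overhead I was referring to.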

However, I didn’t like the default report that comes with Spock, so I decided to include a library called ‘spock-reports‘, which provides better reporting than the one bundled with Spock.

I have created an example project, and it might be a good starting point for anyone wanting to look at this.


Never too early to start thinking about your tests

This post is spurred by reflecting on my journey as a tester, and it’s amazing to see how far I have moved from designing large UI test suites to very small ones. For emphasis, the largest test suite took over 8 hours to execute and the smallest just 3 minutes.
There are a lot of posts on the internet about how fragile a UI test framework can be, and I would expect the next question to be: how have you been able to achieve this? I will try to address that in this post.

1. Thinking about tests while designing the system: Some decisions made in the early stages of a project have a direct impact on how easy or difficult it will be to test the system. E.g. a web application built with a RESTful API and with minimal business logic embedded within the UI lends itself to proper layering of test automation, compared to a system with loads of logic embedded in the UI.

2. Testing is done at the cheapest level:
There is always a cost associated with test automation: costs in terms of test creation, test maintenance and test execution. As testers and developers we ought to think about the costs we incur and determine at which level the tests should be written.

Test creation: The project test suite should be split between unit tests, integration tests and end-to-end tests. The end-to-end tests can be further split into two categories: API tests and user interface tests. The cost associated with test creation generally increases as we move up from the lower levels of test automation to more abstract levels such as user interface tests. However, in some instances, e.g. a rewrite of a legacy application without any form of test automation, it might be easier to create a quick UI test suite using record-and-play tools. There isn’t a rule of thumb on this, but I would advise the team to have this conversation at all times, discussing how best to automate tests for new features and updates to existing features.

Test maintenance: This is the cost associated with updating tests as the system continues to evolve; in my experience it’s generally cheaper to update the lower levels of test automation, and this can be done in parallel with the development of features.

Test execution: This is best expressed as the amount of time it takes for a test suite to run from start to finish, and I would include the time elapsed while the tester and/or developer is waiting for the test result. It also makes sense to include the effort involved in interpreting any test results, including test failures. Bearing in mind that the automated test suite is expected to be executed several times a day over the lifetime of the project, this is the one aspect that could potentially be the most expensive. The team’s test strategy should ensure that the lower level tests, which are generally faster, contain the majority of the tests, i.e. unit, integration and API tests.

3. Collaboration with developers: No matter how solid a test automation strategy is, it will never be complete without some thought about how to collaborate with developers. It is very common for the unit tests and integration tests to be designed and maintained by developers while testers design and maintain the end-to-end test suite. The consequences of such a practice include test duplication and test omission. At the end of the day we should all be working as a team: testers should take an interest in the unit tests and integration tests, ensuring at the least that these tests are relevant, and we should be able to suggest scenarios to be added to the test suite even if we do not have the skills to write them on our own.
On the other hand, developers need to be involved in the design and maintenance of the end-to-end tests; developers should be able to advise testers if a proposed end-to-end test (usually expensive) can be written more cheaply as a unit test and/or integration test.

These are some of the ideas that have helped me achieve a lean UI test automation suite, and I do hope you find them useful as well.
