Assuming that unit tests are handled by development, is there any reason for QA to have knowledge of the details of how a product works? By which I mean, do they need to know what's going on in the background and should they test segments of a product without using the normal UI? For example, would it make sense for a tester to go into a database and manually change values to see what will happen?
EDIT:
Let's assume that we're working with an application to be used by non-developers; we're not working on something with an API attached.
It absolutely makes sense for the testers to know as much about the implementation of the software as they can. That'll help them test better.
Black-box testing is a useful and necessary technique, but knowing a little bit about what's happening under the hood makes it easier to define the really interesting test cases.
The problem with relying on developers' unit tests for all your white-box testing needs is that developers, by and large, are not very thorough testers, especially when it comes to code that they've written.
Surely it depends on the architecture. I worked on a project where the database tier was developed, managed, and tested by a completely separate team in a different building. Their QA definitely poked around in the data directly to see whether the procedures, queries, and the like all behaved correctly under a range of test conditions.
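As a sketch of that kind of state-based testing, a tester might plant an edge-case row directly in the database and then check how the queries the application depends on cope with it. The sqlite3 in-memory database, the orders table, and the zero-quantity case below are all illustrative assumptions, not details from that project.

```python
# A minimal sketch of state-based testing against the data tier.
import sqlite3

# An in-memory database stands in for the real data tier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, quantity INTEGER, status TEXT)")

# The tester bypasses the UI and plants an edge-case row directly:
# a zero-quantity order that the UI would normally never allow.
conn.execute("INSERT INTO orders (id, quantity, status) VALUES (1, 0, 'open')")
conn.commit()

# Then exercises the query the application relies on and checks that it
# still behaves sensibly with the unexpected data in place.
row = conn.execute("SELECT status FROM orders WHERE quantity <= 0").fetchone()
assert row is not None and row[0] == "open", "zero-quantity order was not surfaced"
print("edge-case row handled:", row)
```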
If you are at the UI end then there are two levels. One is simple functional testing, for which QA need no working knowledge of the application (and all of which should probably be automated), and the other is the QA that says whether the app does what it is supposed to do. For the second kind it really helps if the QA team knows how the app works. It saves a lot of time rejecting silly bugs for a start, but more importantly they need to behave like users and have end-to-end use cases that exercise some of the more complex, overlapping scenarios. To design such tests they have to have a good knowledge of the application.
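As a rough sketch of that first, automatable level, a UI-driving functional check might look like the following. Selenium is just one option, and the URL and element IDs are hypothetical placeholders, not anything from the question.

```python
# A minimal sketch of a simple, automatable functional check driven
# purely through the UI, with no knowledge of the internals.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Log in the way a user would.
    driver.get("https://app.example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Assert only on what a user can actually see.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```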
I think a hybrid approach works well. If you use a combination of white-box testing (unit tests) and black-box testing, you end up with better coverage. Each has its pros and cons, but they do partially cover weaknesses in the other.
Understanding the inner workings of the code will push you to test in a particular way, which is not always the best way to uncover certain kinds of problems.
It depends on the approach and the kind of software you are writing. There are different kinds of QA. If the software should be fault-tolerant, QA should simulate faults. Also, knowing how a product works can help QA think of potentially problematic cases and test them more thoroughly.
On the other hand, knowing how a product works might prevent QA from testing purely from the user's point of view. So perhaps the basic tests should be designed first, without knowledge of the internals, and then more in-depth tests added based on the potential problem areas.
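As a sketch of what simulating a fault could look like, here is a test that forces a dependency to fail and checks that the software degrades gracefully. The fetch_report function and its cached fallback are illustrative assumptions, not part of the original answer.

```python
# A minimal sketch of fault simulation using a mocked failing dependency.
from unittest import mock
import unittest

def fetch_report(client):
    """Return a report from the service, falling back to a cached stub on failure."""
    try:
        return client.get("/report")
    except ConnectionError:
        return {"status": "cached", "rows": []}

class FaultToleranceTests(unittest.TestCase):
    def test_service_outage_falls_back_to_cache(self):
        # Simulate the external service being down.
        failing_client = mock.Mock()
        failing_client.get.side_effect = ConnectionError("service down")

        result = fetch_report(failing_client)
        self.assertEqual(result["status"], "cached")

if __name__ == "__main__":
    unittest.main()
```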
I think it depends on the role your QA team plays on a given project. You can make an argument that situations arising from specific values being present in the database should be represented by test cases, and if they can be represented that way, then developers should write (or should have written) unit tests for those situations.
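A sketch of what such a developer-written unit test might look like, assuming a hypothetical describe_balance helper and the specific database values it has to cope with (all of which are illustrative):

```python
# A minimal sketch of unit tests for a "specific value in the database" situation.
import unittest

def describe_balance(balance):
    """Render an account balance pulled from the database for display."""
    if balance is None:          # legacy rows may have no balance recorded
        return "unknown"
    if balance < 0:
        return f"overdrawn by {-balance}"
    return str(balance)

class DescribeBalanceTests(unittest.TestCase):
    def test_missing_value_from_legacy_row(self):
        self.assertEqual(describe_balance(None), "unknown")

    def test_negative_value(self):
        self.assertEqual(describe_balance(-25), "overdrawn by 25")

    def test_normal_value(self):
        self.assertEqual(describe_balance(100), "100")

if __name__ == "__main__":
    unittest.main()
```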
If you've also used code inspections to identify and fix defects, it may not be necessary to expose QA to anything behind the scenes. I suppose there are projects where it might be helpful for them to test code outside the user experience, but I would probably use a QA team for black-box testing rather than white- or clear-box testing.
On projects that I've been involved with, QA tested from the user's perspective, and their tests focused on verifying that requirements were met. Their testing was black-box testing; white-box testing was done by the devs. We never expected a QA person to open up a DB query tool and manually change values. That was the responsibility of the devs' unit tests.