I have set up a Web Performance Test and Load Test project in Visual Studio 2013 that uses data-driven web tests based on a local SQL Server data source. Eventually I would like to set up a test controller and test agents in order to distribute the load across multiple clients.
My question is about the architecture of the controller and agents. Does each test agent also need access to the data source that generates content for the web tests or is the web test (and its dynamic parameters) generated at the controller and then distributed to the test agents?
Below is a diagram I found of the architecture:
Agents do not need access to the data source. The load test controller ensures that the required data is deployed to the appropriate agents so they can run the tests.
Vastly simplified: the controller is instructed to run a load test. It collects the test suite and the data source values, splits the virtual users across the available agents, and deploys the test suite and the data source values to those agents. The agents then run the individual tests; the dynamic data aspects of each test case are handled within the agent. As individual web tests finish, their results are passed back to the controller, which writes the data to the SQL results database and also provides data to Visual Studio for the graphs etc. displayed as the load test runs. See this Microsoft page for more details.
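To make the user-splitting step concrete, here is a minimal sketch of how a controller might divide a virtual user count as evenly as possible across agents. This is an illustration only, not the product's actual implementation (the real controller also honours per-agent weightings configured in the test settings), and every name in it is hypothetical:

```csharp
using System;

class UserSplitExample
{
    // Hypothetical sketch: divide 'totalUsers' as evenly as possible across
    // 'agentCount' agents. Not the real controller logic, which Microsoft
    // does not publish and which also applies per-agent weightings.
    static int[] SplitUsers(int totalUsers, int agentCount)
    {
        var perAgent = new int[agentCount];
        int baseShare = totalUsers / agentCount;   // share every agent receives
        int remainder = totalUsers % agentCount;   // first 'remainder' agents get one extra user
        for (int i = 0; i < agentCount; i++)
            perAgent[i] = baseShare + (i < remainder ? 1 : 0);
        return perAgent;
    }

    static void Main()
    {
        // e.g. 100 virtual users across 3 agents -> 34, 33, 33
        Console.WriteLine(string.Join(", ", SplitUsers(100, 3)));
    }
}
```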
One complication is how the data source is handled, depending on the access method selected. For Sequential and Random, a full copy of all of the data is sent to each agent. For Unique, the data is split into pieces and each agent gets one piece, thus maintaining the desired "use each data value only once" behaviour. See this Microsoft page for more details.
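The Unique case can be illustrated with a similar sketch, again purely hypothetical (the method and variable names are mine, not from the product): Unique hands each agent a disjoint, contiguous slice of the rows, while Sequential and Random ship the full list to every agent:

```csharp
using System;
using System.Collections.Generic;

class DataDistributionExample
{
    // Hypothetical sketch: for Unique each agent gets a disjoint slice of the
    // rows, so each value is used only once across the whole run; for
    // Sequential and Random every agent receives the complete data set.
    static List<List<string>> DistributeRows(List<string> rows, int agentCount, string accessMethod)
    {
        var shares = new List<List<string>>();
        if (accessMethod == "Unique")
        {
            int baseShare = rows.Count / agentCount;
            int remainder = rows.Count % agentCount;
            int start = 0;
            for (int i = 0; i < agentCount; i++)
            {
                int size = baseShare + (i < remainder ? 1 : 0);
                shares.Add(rows.GetRange(start, size)); // contiguous, non-overlapping piece
                start += size;
            }
        }
        else // "Sequential" or "Random"
        {
            for (int i = 0; i < agentCount; i++)
                shares.Add(new List<string>(rows));     // full copy per agent
        }
        return shares;
    }

    static void Main()
    {
        var rows = new List<string> { "user1", "user2", "user3", "user4", "user5" };
        var unique = DistributeRows(rows, 2, "Unique");
        // Prints: user1,user2,user3 | user4,user5
        Console.WriteLine(string.Join(" | ", unique.ConvertAll(s => string.Join(",", s))));
    }
}
```

The practical consequence of the Unique behaviour is worth noting: if your data source has fewer rows than the total number of test iterations, agents can exhaust their slice, so size the data accordingly.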