TestProject Forum

Any way to increase the timeout delay for queued virtual agent tests?

We have multiple very long jobs that we want to run on the BrowserStack virtual agent as part of our build pipeline. We have attempted to queue them all up at the same time and leave them to run. With our current plan, BrowserStack only allows one test to run at a time, so the jobs other than the one currently running are queued for an extended period. Every 820 seconds that a job is queued, a test in that job is discarded with the following error message:

“Failed to execute test ‘[test]’ on BrowserStack: All parallel tests are currently in use, including the queued tests. Please wait to finish or upgrade your plan to add more sessions. (WARNING: The server did not provide any stacktrace information)”

This is quite problematic.

Is there any way to increase the duration until a test times out here, or is this an issue with BrowserStack that is outside of TestProject's control?

When you start the virtual agent multiple times, multiple agents initialize and the tests start executing at the same time.
To handle this scenario from the TestProject side, please use the scheduler to schedule the different jobs at different times so that they are not executed in parallel. You can do it from here:
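The idea behind the scheduler suggestion is simply to space the start times far enough apart that each job finishes before the next one begins. A minimal sketch of computing such a staggered schedule; the 90-minute gap and the start time are assumptions you would tune to your longest job:

```python
import datetime

def staggered_start_times(first_start, job_count, gap_minutes):
    """Compute one start time per job, spaced so no two runs overlap in the queue."""
    return [
        first_start + datetime.timedelta(minutes=gap_minutes * i)
        for i in range(job_count)
    ]

# Example: three jobs, 90 minutes apart, starting at 22:00 UTC.
times = staggered_start_times(
    datetime.datetime(2021, 6, 1, 22, 0, tzinfo=datetime.timezone.utc), 3, 90
)
for t in times:
    print(t.isoformat())
```

You would then enter these times into the scheduler, one per job, so that at most one job is ever running against your single BrowserStack session.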


Our tests include user sign up forms and similar parameters that must be unique for each time the tests are run. In order to accomplish that, we use the API to run tests with programmatically generated unique parameters. Is there a way to use the API to schedule jobs in the future with custom parameters? I see a way to schedule jobs under PUT /v2/projects/{projectId}/jobs/{jobId}/schedule, but there don’t seem to be API parameters in place to set custom test parameters using that API path. There’s also POST /v2/projects/{projectId}/jobs/{jobId}/run which can run a test with custom parameters, but I don’t see an API parameter to schedule the job in the future.

This can be solved either by setting up a custom script that executes the API calls with unique data,
or by running a test between the scheduled jobs that dynamically changes the project parameters; you can then use those dynamically generated values, assigned to the project parameters, in your jobs.
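A minimal sketch of the first approach, using the POST /v2/projects/{projectId}/jobs/{jobId}/run endpoint mentioned in the question. The payload field names, header name, base URL, and job IDs here are assumptions for illustration, not confirmed API details; check the API reference for the exact request shape:

```python
import json
import time
import uuid
import urllib.request

API_BASE = "https://api.testproject.io"  # assumed base URL
API_KEY = "YOUR_API_KEY"                 # placeholder credential

def build_run_payload(run_index):
    """Generate unique test parameters for one run (e.g. a sign-up email)."""
    unique = uuid.uuid4().hex[:8]
    return {
        "testParameters": [  # field name is an assumption about the run payload
            {"name": "email", "value": f"user-{unique}-{run_index}@example.com"},
            {"name": "username", "value": f"user-{unique}"},
        ]
    }

def run_job(project_id, job_id, payload):
    """Trigger one job via POST /v2/projects/{projectId}/jobs/{jobId}/run."""
    url = f"{API_BASE}/v2/projects/{project_id}/jobs/{job_id}/run"
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build a unique payload per job; the real calls are commented out because
# they need a valid key, and the sleep staggers runs onto the single session.
for i, job_id in enumerate(["job-id-1", "job-id-2"]):  # placeholder job IDs
    payload = build_run_payload(i)
    print(job_id, payload["testParameters"][0]["value"])
    # run_job("my-project-id", job_id, payload)
    # time.sleep(30 * 60)
```

The sleep interval is a stand-in for whatever gap keeps only one job occupying the session; polling the execution state before starting the next job would be more robust if the API exposes one.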