Hello, I'm aggregating data from 5 different collections and occasionally running into the 2-second BL timeout error. I've moved each collection query into an async.js parallel task, with no improvement in performance. For testing purposes, each collection has only 2 or 3 small documents (no more than 10). I'm concerned about 2 things:
* The execution time limit error is triggered intermittently, and execution times vary wildly, from 200 ms to over 2000 ms, even for exactly the same query (or queries). It would be easier to isolate the performance issue if execution times were more consistent.
* Are we using BL in a way that was not intended? I can imagine that as we add data to these collections, the 2-second timeout error would make them unusable for us.
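For reference, the parallel setup looks roughly like the following. This is a minimal self-contained sketch: `parallel` is an inline stand-in for `async.parallel` (so the snippet runs without the dependency), and `queryCollection` with its collection names is a hypothetical placeholder for the real Business Logic collection calls, which aren't shown here.

```javascript
// Minimal stand-in for async.parallel: run every task, collect results
// in task order, and call done once all tasks finish (or on first error).
function parallel(tasks, done) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  let failed = false;
  tasks.forEach((task, i) => {
    task((err, result) => {
      if (failed) return;
      if (err) {
        failed = true;
        return done(err);
      }
      results[i] = result;
      if (--pending === 0) done(null, results);
    });
  });
}

// Hypothetical stand-in for a single collection query; setTimeout
// simulates the round trip to the datastore.
function queryCollection(name) {
  return (callback) => {
    setTimeout(() => callback(null, { collection: name, docs: [] }), 10);
  };
}

// Query several collections concurrently instead of one after another.
parallel(
  ['books', 'authors', 'reviews'].map(queryCollection),
  (err, results) => {
    if (err) throw err;
    console.log(results.map((r) => r.collection).join(','));
  }
);
```

With this shape, total wall time is bounded by the slowest single query rather than the sum of all of them, which is the point of moving each collection into its own parallel task.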
Any help or direction on the above would be greatly appreciated.
Is there any correlation between when you receive the timeouts and how many requests you are making at the time, or the frequency of those requests? Also, what types of queries are you running?
over 7 years ago
Micheal, I don't see a correlation there. My test consists of calling the Business Logic via the REST API as quickly as I can from a single client. The query in the BL simply returns all records (documents, if you prefer) in each of the four collections.
I've noticed that the frequency of timeouts is greatly reduced in my testing today. Have there been any changes on the backend that would affect this?
Also, do you include the time to fully transmit the response from the server to the client in your time limit calculations?
about 7 years ago
We're continually making changes to the way the logic is processed to make it more efficient. How are things currently?
The time limit covers only the time it takes to compile and execute the script. So if it takes 500 ms to get from the client to the server and 500 ms to get back, you still have the full 2000 ms to compile and execute the script. Estimate about 10-50 ms for compiling the script.
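Since only compile and execute time count against the limit, a quick way to see how much of the budget a handler actually consumes is to time it inside the script itself. A minimal sketch (the loop merely simulates work; a real handler would wrap its collection queries instead):

```javascript
// Rough instrumentation of the ~2000 ms compile-and-execute budget.
const start = Date.now();

// ...simulated work standing in for the real collection queries...
let total = 0;
for (let i = 0; i < 1e6; i += 1) total += i;

const elapsedMs = Date.now() - start;
console.log('used ' + elapsedMs + ' ms of the ~2000 ms budget');
```

Logging the elapsed time on every request also makes it easy to check whether the wild 200 ms to 2000 ms variance comes from the script itself or from something outside the timed window, such as network transfer.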