Answered

Performance Questions

Hello,

1) Since you are based on MongoDB: I have heard that querying with a regex is significantly faster and more efficient than using ExactMatchForValue, which has something to do with 'indices'. Is that true?

2) Are grouping and aggregation so inefficient that I should look for other ways of extracting the data? I am talking about grouping 100 items, assuming the app is used by 100k+ active users.


Best Answer

Grisha, 

1. The symbols 0-9 and a-f are used in the automatic creation. You can use whatever you want outside of that naming scheme (I think there may be some prohibited characters, like ., +, _, -, etc.).
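Since auto-generated ids draw only on 0-9 and a-f, a compound "1UserId-2UserId" key can never collide with them. A minimal sketch of building such a key (the function name and sorting choice are illustrative, not a Kinvey API):

```python
import re

# Auto-generated MongoDB ObjectIds are 24 hex characters: only 0-9 and a-f.
OBJECT_ID_RE = re.compile(r"^[0-9a-f]{24}$")

def pair_id(user_a, user_b):
    """Build a deterministic compound _id for a pair of users.

    Sorting makes the id order-independent, so (A, B) and (B, A)
    map to the same document. The '-' separator never appears in
    auto-generated ids, so the two schemes cannot collide.
    """
    first, second = sorted([user_a, user_b])
    return f"{first}-{second}"

print(pair_id("507f191e810c19729de860ea", "507f1f77bcf86cd799439011"))
print(OBJECT_ID_RE.match(pair_id("a", "b")))  # compound ids don't match the auto scheme
```

Sorting the two ids before joining is optional, but it means both users resolve the pair to the same document.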

2. There are typically other hurdles that are more prohibitive (such as the inability to regex-search full strings or to search case-insensitively) that cause issues. At this time we do not support query.cancel, as most queries would already complete and return before you would be able to send the request to cancel them.

3. The system is pretty load resilient (we have customers that make multiple millions of queries a day), but, as above, we do not have a query.cancel feature; most queries complete too quickly for that to be a common ask.
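Without server-side cancellation, live-search clients usually solve this on their own side by discarding stale responses. A minimal sketch of that pattern, assuming a client that can tag each request with a sequence number (the class and method names are hypothetical, not part of any Kinvey SDK):

```python
class LiveSearch:
    """Ignore out-of-date responses instead of cancelling queries.

    Each keystroke bumps a sequence number; a response is applied only
    if it belongs to the most recent request, so late results from
    earlier keystrokes are simply dropped.
    """

    def __init__(self):
        self._latest = 0
        self.results = None

    def on_keystroke(self, term):
        self._latest += 1
        return self._latest, term  # ticket to send along with the query

    def on_response(self, ticket, results):
        if ticket == self._latest:  # stale responses are discarded
            self.results = results

search = LiveSearch()
t1, _ = search.on_keystroke("jo")
t2, _ = search.on_keystroke("john")
search.on_response(t1, ["old partial match"])  # arrives late: ignored
search.on_response(t2, ["John Doe"])
print(search.results)  # ['John Doe']
```

Combining this with a short debounce delay before sending each query also cuts the number of requests the backend has to serve.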

4. This is currently a console bug that we are in the process of addressing.

5. Nope. Mongo uses a queue and locking, so these queries won't be handled concurrently and therefore won't cause any data to be lost.
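On the application side, the safest way to express the counter bump is as a single $inc update rather than a read-modify-write, so each hook's increment is applied atomically by MongoDB. A sketch of the filter and update documents (the collection and field names are taken from this thread; the helper name is illustrative):

```python
# One server-side $inc per like: even if many hooks fire at once, MongoDB
# applies each increment atomically, so no updates are lost.
def like_event_update(event_id):
    filter_doc = {"_id": event_id}
    update_doc = {"$inc": {"numberOfLikes": 1}}  # field name from this thread
    return filter_doc, update_doc

# With a driver such as pymongo this would be roughly:
#   events.update_one(*like_event_update(event_id))
f, u = like_event_update("event42")
print(f, u)
```

The read-modify-write alternative (fetch the document, add 1 in code, save it back) is exactly the shape that loses updates under concurrency, which $inc avoids.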


Thanks,


1. Yes. Assuming we build an index, a query has to scan significantly less data than an entire collection scan would, and searching with a regex touches only a fraction of that data, which is even more efficient.
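One caveat worth knowing from MongoDB itself: a regex can only walk an index efficiently when it is anchored at the start of the string. A sketch of the filter documents you might pass to a driver, with a local illustration of the two pattern shapes (the field name and sample data are invented):

```python
import re

# Filter documents as you might pass them to a MongoDB driver.
# A regex anchored at the start (^) can seek within an index on 'name';
# an unanchored one must examine every indexed value.
exact_match    = {"name": "John"}
prefix_regex   = {"name": {"$regex": "^Joh"}}  # index-friendly
contains_regex = {"name": {"$regex": "ohn"}}   # cannot seek in the index

# Locally, the same two patterns behave like this:
names = ["John", "Johanna", "Jon"]
print([n for n in names if re.match("^Joh", n)])  # prefix matches
print([n for n in names if re.search("ohn", n)])  # substring matches
```

So for a live-search box, a prefix-anchored regex over an indexed field is usually the efficient choice.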

2. Grouping and aggregation at an abstract level don't necessarily need to be highly inefficient, but it really depends on your exact use case.  I can't give advice without a more specific use case, and I can't speak to how efficient or inefficient it would be compared to other solutions without quantifiable data.
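For the concrete case in this thread (10 events, "have I liked each one?"), one group query does the work of ten count queries. A sketch of what a MongoDB $group stage of the form {"_id": "$eventId", "count": {"$sum": 1}} computes, simulated in plain Python on invented data:

```python
from collections import Counter

# Illustrative like documents from an 'Activity'-style collection.
likes = [
    {"eventId": "e1", "userId": "u1"},
    {"eventId": "e1", "userId": "u2"},
    {"eventId": "e2", "userId": "u1"},
]

# One pass over the data answers the count for every event at once,
# instead of issuing a separate count query per event.
counts = Counter(like["eventId"] for like in likes)
print(counts["e1"], counts["e2"], counts["e3"])  # 2 1 0
```

Grouping 100 items this way is a small aggregation; the per-query overhead of ten separate counts is typically the bigger cost.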


Thanks,

Thank you, that clears up a lot. I have two more questions:
1) Which symbols are used when the Kinvey database automatically creates the "_id" field? I am asking because I want to create my own "_id", which will be equal to "1UserId-2UserId". Is the "-" sign used in automatically created ids, or are only the 24 alphabet letters + 0123456789 involved?


2) I have a live search feature, so I want to increase performance by cancelling all active queries whenever a new character is typed into the search bar. Is there any query.cancel or similar to use in this case?

Thanks,

Grisha

2) I am talking about one query to get data about 10 events, and then another group query (instead of 10 count queries) to find out whether you have liked each event (a like is an entity in the Activity table).

3) I have a live search, which means that every time a new character is detected, a new query is executed. Is there any way to cancel previously executed queries so they don't load the system? Or any other approach?

4) I have noticed that when there is only one entity in any table, it just keeps appearing and disappearing, both in the console and in the app.

5) Last question: I have a field on the 'Events' collection called 'numberOfLikes' which auto-increments (via business logic hooks) when a like appears in the 'Activity' collection. If, say, tons of users create likes at the same time, will there be any concurrency problems when many hooks try to access a single document?


Thank you very much,  I have no more questions.
