Make API rate limits more sophisticated, especially for large enterprises
When working with a Box Enterprise that has tens of thousands of users, scripts designed to do things like user information syncing, group syncing, or user provisioning/deprovisioning often run into real issues with the Box API rate limits.
Some calls (e.g. downloading or uploading a file) are clearly far more resource-intensive than others (e.g. getting a user's aliases). The model Google has implemented for the Gmail API (see https://developers.google.com/gmail/api/v1/reference/quota), where each method is charged a number of quota units proportional to its cost, seems like a great way to balance protecting the back-end systems against increased throughput for less-intensive calls. It would be nice if Box implemented a similar model.
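To make the suggestion concrete, here is a minimal sketch of what a quota-unit scheme could look like on the client or server side: a token bucket where each method is charged a different number of units. The method names and unit costs below are made up for illustration; they are not Box's or Google's actual values.

```python
import time

# Hypothetical per-method costs, modeled on the Gmail API's quota-unit
# scheme where expensive calls consume more units than cheap ones.
# These numbers are illustrative only.
METHOD_COSTS = {
    "files.upload": 25,       # resource-intensive
    "files.download": 10,
    "users.get_aliases": 1,   # cheap metadata read
}

class QuotaUnitLimiter:
    """Token-bucket limiter that charges a per-method unit cost instead of
    counting every request equally (a sketch, not Box's actual behavior)."""

    def __init__(self, units_per_second: float, burst: float):
        self.rate = units_per_second   # units replenished per second
        self.capacity = burst          # maximum units the bucket can hold
        self.tokens = burst
        self.last = time.monotonic()

    def try_acquire(self, method: str) -> bool:
        """Return True and deduct the method's cost if enough units remain."""
        cost = METHOD_COSTS.get(method, 1)
        now = time.monotonic()
        # Refill the bucket in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Under a scheme like this, a sync script making thousands of cheap alias lookups would go much further on the same quota than one doing bulk uploads, which is exactly the balance the Gmail model strikes.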
Thanks for your patience and feedback, Ian.
I am curious to hear if we were to implement something like what Google has, which endpoints would you expect to see higher rate limits around?
Also, would you mind sharing some more details around the frequency of the API calls your processes are making right now, and any approximations around what rate limits would suffice for you?
Our organization runs into the same problem. We are trying to get a handle on our tenant first, in line with our trust-but-verify policy. That scrutiny requires capturing events, but with a limit of 500 events per call and 100,000 calls per month, an in-house solution is too constrained, so we are effectively forced to use a specific Box-approved third-party solution (which is exempt from these limits), such as Splunk. The cost and inflexibility of third-party solutions is a heavy constraint on using Box.
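For context on the ceiling involved: 500 events per call times 100,000 calls per month caps an in-house collector at 50 million events per month, shared with every other API call the account makes. The sketch below shows the pagination pattern such a collector needs against the admin event stream. The `fetch_page` callable is a stand-in for a real HTTP call to Box's `GET /events?stream_type=admin_logs` endpoint; the response keys (`entries`, `chunk_size`, `next_stream_position`) follow the documented shape of that endpoint, but this is an illustration, not production code.

```python
def iter_admin_events(fetch_page, start_position="0", page_limit=500):
    """Drain the admin event stream page by page.

    `fetch_page(stream_position, limit)` stands in for a call to Box's
    GET /events?stream_type=admin_logs endpoint and must return a dict
    with "entries", "chunk_size", and "next_stream_position" keys.
    Each invocation of fetch_page consumes one call from the monthly quota,
    so at most 500 events can be retrieved per quota unit spent.
    """
    position = start_position
    while True:
        page = fetch_page(position, page_limit)
        yield from page["entries"]
        if page["chunk_size"] == 0:
            return  # caught up with the stream; nothing more to read
        position = page["next_stream_position"]
```

Every poll of an empty stream still burns a call, so a monitoring loop that polls frequently for low-latency capture eats into the monthly limit even when nothing is happening, which is part of why the 100,000-call cap is so restrictive for this use case.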
If possible, we would like the general rate limits to be relaxed on the events/monitoring API when it is used for administration.