BatchData API Rate Limits

Written by Charles Parra
Updated this week

To ensure high availability and efficient performance, we apply limits to the BatchData APIs, such as the number of API requests allowed per minute and the number of request items allowed per API request.

This guide explains API rate limits and API request item limits and how they are applied to the BatchData APIs. Understanding these limits is crucial for developers who use the BatchData APIs to create applications.

What are API Rate Limits?

API rate limits are used to restrict the number of API requests you can make within a specific timeframe. These limits are in place to prevent abuse and ensure that all of our customers have access to the BatchData APIs.

BatchData API Rate Limits:

  • We allow up to 3,000 API requests per minute per API access token. For more details, feel free to contact us at [email protected]

Consequences of Exceeding API Rate Limits:

  • If you exceed the API rate limit, you will receive an HTTP 429 response code indicating an error. Depending on the severity of the violation, you may be temporarily blocked from making further requests.

How to Avoid Exceeding Rate Limits:

  • Monitor your API usage. The BatchData usage reports provide you with the tools to track your API usage and identify potential rate limit violations.

  • Implement exponential backoff. If you receive an error indicating that you have exceeded a rate limit, wait a short period of time before retrying your request, and double the wait time for each subsequent retry.
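The backoff strategy above can be sketched as a small retry helper. This is an illustrative example, not an official BatchData client: `send_request` stands in for whatever function issues your API call, and the delay values are assumptions you should tune to your own traffic.

```python
import random
import time


def retry_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Call `send_request` and retry on HTTP 429, doubling the wait each time.

    `send_request` is any callable that returns an object with a
    `status_code` attribute (for example, a `requests.Response`).
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Double the delay on each retry; a little random jitter helps
        # avoid many clients retrying at exactly the same moment.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("Rate limit still exceeded after all retries")
```

For example, with `base_delay=1.0` the helper waits roughly 1s, 2s, 4s, 8s between successive retries before giving up.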

What are API Request Item Limits?

API request item limits restrict the number of items you can include in a single API request. For example, the Skip Trace API endpoint accepts one or more property addresses as input, and each property counts as a request item. These limits are in place to prevent overloading the API and ensure efficient performance.

BatchData Request Item Limits:

  • Maximum number of items per request: This limit varies depending on the specific API endpoint. Refer to the table below for endpoint-specific maximums and the recommended limits to adhere to.

| API Endpoint             | Maximum Request Items | Recommended Request Items |
| ------------------------ | --------------------- | ------------------------- |
| Address Verification     | 5,000                 | 1,000                     |
| Phone Verification       | 10,000                | 1,000                     |
| Phone DNC                | 10,000                | 1,000                     |
| Phone TCPA               | 250                   | 250                       |
| Property Lookup          | 1,000                 | 500                       |
| Skip Trace (Synchronous) | 100                   | 100                       |
| Skip Trace (Asynchronous)| 1,000                 | 1,000                     |
| Geocode                  | 90                    | 75                        |
| Property Search          | 1,000                 | 1,000                     |

Consequences of Exceeding Request Item Limits:

  • If you exceed an API request item limit, you will receive an error message. More specifically, the API will return an HTTP 400 response code.

How to Avoid Exceeding Request Item Limits:

  • Split your data into multiple requests. If your data is too large to fit in a single request, you can split it into multiple requests, which are still subject to the API rate limits.
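Splitting data into requests is a simple batching step. The sketch below is illustrative; the batch size of 1,000 matches the recommended request item limit for several endpoints in the table above, but you should pick the limit for the endpoint you are calling.

```python
def chunk_items(items, max_items):
    """Split a list of request items into batches no larger than `max_items`."""
    return [items[i:i + max_items] for i in range(0, len(items), max_items)]


# Example: 2,500 addresses against a 1,000-item recommended limit
addresses = [f"address-{n}" for n in range(2500)]
batches = chunk_items(addresses, 1000)
# Each batch can now be sent as its own API request (still subject to
# the per-minute rate limit described earlier).
```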

  • Use the asynchronous API endpoints. If you need to process large amounts of data, it is recommended that you use the asynchronous API endpoints instead of the synchronous versions.

Synchronous versus Asynchronous API Endpoints

Synchronous APIs are designed for quick request–response cycles where the client waits for the result before moving on. When an API call takes more than a few seconds, several problems can arise:

  1. Risk of client and server timeouts

    • HTTP clients (browsers, mobile apps, SDKs) often have default timeouts (e.g., 30–60 seconds).

    • API gateways, proxies, and load balancers (like AWS API Gateway, NGINX) also impose timeouts.

    • If the backend takes too long, the client may drop the connection or the proxy may terminate the request, causing wasted compute work on the server.

  2. Ties up server and infrastructure resources

    • Long-lived synchronous requests hold open web server threads or connections, reducing the number of concurrent requests your system can handle.

    • This leads to poor scalability and higher risk of bottlenecks under load.

  3. Poor user experience

    • End users and client apps dislike waiting without feedback.

    • If an API call stalls for >5–10 seconds, end users may assume the app is broken or unresponsive.

    • This also increases the risk of client-side retries, making the problem worse.

  4. Higher risk of failures under variable load

    • Backend systems (e.g., databases, third-party APIs) can slow down under load.

    • If your synchronous API assumes consistent backend speed, it becomes fragile and prone to cascading failures during peak traffic.

The solution is to use an asynchronous API endpoint when a synchronous request would lead to a long request–response cycle. The request to the asynchronous endpoint is treated as a job, and the webhook URL passed in the asynchronous API request receives a callback when the job has completed. This approach frees the client, improves scalability, and gives the user predictable feedback.
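The job-plus-webhook flow above can be sketched as two small helpers. The field names (`requests`, `webhookUrl`, `status`, `results`) are illustrative assumptions, not the exact BatchData request schema; consult the API reference for the real payload shapes.

```python
def build_async_job(items, webhook_url):
    """Assemble an asynchronous job payload (field names are illustrative).

    The client submits this payload and returns immediately instead of
    waiting for the result.
    """
    return {"requests": items, "webhookUrl": webhook_url}


def handle_webhook(callback_body):
    """Process the completion callback delivered to the webhook URL.

    Returns the job's results when the job finished, or None if the
    callback reports any other status.
    """
    if callback_body.get("status") == "completed":
        return callback_body.get("results", [])
    return None
```

The key design point is that the slow work happens server-side: the client's connection is released as soon as the job is accepted, and the results arrive later at the webhook URL, avoiding the timeout and resource problems listed above.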
