Retry Strategy Guideline

Retry Strategies

These strategies are appropriate in cases such as receiving a 5xx status code from the server, or when the system is under maintenance and the maintenance end date is not specified.

When implementing retry logic in your application, it is important to have a clear retry strategy in place.

A retry strategy defines how and when to retry a request after an error. Here are some common strategies for retrying requests:

  • Fixed delay retry: In this strategy, the same amount of time is waited between each retry attempt. For example, you could wait 1 second between each retry attempt. This is the simplest retry strategy, but it can be less effective if the error is not transient and the delay is not long enough to allow the issue to be resolved.

  • Incremental delay retry: In this strategy, the amount of time waited between retry attempts is increased with each retry. For example, you could start by waiting 1 second, then 2 seconds, then 3 seconds, and so on. By giving the server progressively more time to recover, this strategy can be more effective when the issue takes longer than a single fixed delay to resolve.

  • Exponential backoff retry: In this strategy, the amount of time waited between retry attempts is increased exponentially with each retry. For example, you could start by waiting 1 second, then 2 seconds, then 4 seconds, then 8 seconds, and so on. This strategy is similar to incremental delay, but the delay grows exponentially rather than linearly, so it backs off quickly when the issue persists and puts less load on a struggling server.
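The three strategies above can be sketched as a single delay calculator. This is a minimal illustration; the small random jitter added at the end is a common extra to avoid many clients retrying in lockstep, not part of the strategies described above:

```python
import random

def compute_delay(strategy: str, attempt: int, base_delay: float = 1.0) -> float:
    """Return the wait time (in seconds) before the given retry attempt.

    `attempt` is 1-based: 1 for the first retry, 2 for the second, and so on.
    """
    if strategy == "fixed":
        delay = base_delay                       # 1, 1, 1, ...
    elif strategy == "incremental":
        delay = base_delay * attempt             # 1, 2, 3, ...
    elif strategy == "exponential":
        delay = base_delay * 2 ** (attempt - 1)  # 1, 2, 4, 8, ...
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    # Jitter: a small random offset so clients do not retry in sync.
    return delay + random.uniform(0, 0.1 * delay)
```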

There are pros and cons to both the incremental delay and exponential backoff strategies.

Incremental delay retry is simpler to implement and may be more predictable, but it may not be as effective at handling temporary failures.

Exponential backoff retry is more complex to implement, but it may be more effective at handling temporary failures because the delays between retries increase over time, giving the system more time to recover.

Overall, the choice of which retry strategy to use depends on the specific needs of the system and the type of failures that are being encountered. In some cases, a combination of both strategies may be used to achieve the desired level of reliability.

You can choose the retry strategy that best fits your needs, or you can implement a custom retry strategy that combines different strategies.

Also, while implementing a retry strategy, keep in mind that the POST method is not idempotent, so retrying a POST request can cause multiple insertions.

The DELETE method is idempotent, so you can safely implement a retry mechanism for DELETE endpoints.

To prevent multiple insertions when inserting a new resource, we recommend checking that the resource does not already exist on the server before making a POST request. You can use GET endpoints to verify the existence of the resource before attempting to insert it.
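The check-then-insert pattern above can be sketched as follows. The `get` and `post` callables are hypothetical stand-ins for real HTTP calls (e.g. wrappers around a GET existence check and the actual POST), so the logic can be shown without a live server:

```python
def create_if_absent(resource_id, payload, get, post):
    """Insert a resource only if a GET probe says it does not already exist.

    `get` returns the status code of an existence check for `resource_id`;
    `post` performs the actual insertion. Both are injected callables so
    this sketch stays independent of any particular HTTP client.
    """
    status = get(resource_id)
    if status == 200:
        return "skipped"  # resource already exists; avoid a duplicate POST
    if status == 404:
        return post(payload)  # safe to insert: the resource was not found
    raise RuntimeError(f"unexpected status {status} while checking resource")
```

Note that this pattern is not atomic: a concurrent client could still insert the resource between the GET and the POST, so it reduces, but does not fully eliminate, the risk of duplicates.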

Rate Limits

If you receive a response with a status code of 429 (Too Many Requests), it means that you have exceeded the rate limit for the endpoint you are trying to access. In this case, you will need to wait for a certain amount of time before making a new request.

This time is indicated in the Retry-After header of the response. For example, a Retry-After value of 52 means that you need to wait 52 seconds before making a new request.

It is important to implement proper retry logic in your application to handle these rate-limit errors. Here are some best practices to follow when implementing retry logic:

  • Always check for a Retry-After header in the response before retrying a request that received a 429 status code.

    • If the Retry-After header is present, wait for the amount of time indicated in the header before retrying the request. Do not retry the request immediately, because you will keep receiving 429 responses until the time specified in the header has passed.

    • If the header is not present, you can implement one of the retry strategies described above.
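The practices above can be sketched as follows. `send` is a hypothetical stand-in for a real HTTP call (e.g. a `requests.get` wrapper returning the status code, headers, and body), so the retry logic can be shown and exercised without network access:

```python
import time

def retry_after_seconds(headers: dict, attempt: int) -> float:
    """Seconds to wait after a 429 response.

    Prefers the server's Retry-After header; falls back to exponential
    backoff (1, 2, 4, ... seconds) when the header is absent.
    """
    value = headers.get("Retry-After")
    if value is not None:
        return float(value)
    return 2.0 ** (attempt - 1)

def get_with_rate_limit_retry(send, max_retries: int = 5):
    """Call `send()` (returning (status, headers, body)), retrying on 429."""
    for attempt in range(1, max_retries + 1):
        status, headers, body = send()
        if status != 429:
            return status, body
        # Rate limited: honour the server's hint before trying again.
        time.sleep(retry_after_seconds(headers, attempt))
    return status, body  # give up after max_retries attempts
```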

You can find information about our rate limits at https://docs.btcturk.com/rate-limits

WebSocket Connections

WebSocket connections can be prone to errors and disconnections due to network issues or other factors.

Here are some common problems that can occur with WebSocket connections and how to handle them:

  • Connection lost: If the WebSocket connection is lost, the client application will need to re-establish the connection.

  • Connection error: If an error occurs while establishing or maintaining the WebSocket connection, the client application will need to handle the error and decide whether to retry the connection.

    For example, if the error is due to a network issue, it may be worth retrying the connection after a short delay. However, if the error is due to an authentication issue, retrying the connection will not help; instead, see the documentation on fixing authentication errors: https://docs.btcturk.com/websocket-feed/authentication

  • Connection closed: If the WebSocket connection is closed by the server, the client application will need to handle the close message and attempt to establish a new connection. A connection close can be caused by a temporary issue, such as a server restart, scale-up, or scale-down.

If you do not implement the ping-pong mechanism, the WebSocket server will terminate the connection. However, some WebSocket frameworks handle this mechanism for you, so check whether your framework handles ping-pong by default before implementing it yourself.
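A reconnect loop for the cases above can be sketched as follows. `connect` and `handle` are hypothetical injected callables (in a real client they would wrap your WebSocket framework's connect and receive calls), and `ConnectionError` stands in for whatever transient failure your framework raises; an authentication error should be allowed to propagate rather than retried:

```python
import asyncio

async def run_with_reconnect(connect, handle,
                             initial_backoff: float = 1.0,
                             max_backoff: float = 60.0) -> None:
    """Keep a WebSocket session alive, reconnecting with exponential backoff.

    `connect` opens a connection; `handle` consumes messages until the
    connection drops (raising ConnectionError) or a clean shutdown is
    requested (returning normally).
    """
    backoff = initial_backoff
    while True:
        try:
            conn = await connect()
            backoff = initial_backoff  # reset after a successful connection
            await handle(conn)
            return  # handler returned normally: clean shutdown
        except ConnectionError:
            # Transient network problem: wait, then retry with a longer delay.
            await asyncio.sleep(backoff)
            backoff = min(backoff * 2.0, max_backoff)
```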
