The service is currently unavailable
Indicates the service is temporarily unable to handle the request. This is a classic transient condition and clients should retry, ideally with backoff.
- The server is down for maintenance or a deployment.
- A downstream dependency of the service is unavailable.
- The server is overloaded and cannot accept new connections, or a proxy cannot reach any healthy backends.
A client attempts to call a gRPC service that is currently offline or restarting.
// gRPC client code example (using @grpc/grpc-js)
const grpc = require('@grpc/grpc-js');

try {
  const response = await client.myMethod(request);
} catch (e) {
  // Status code 14 (UNAVAILABLE) signals a transient failure
  if (e.code === grpc.status.UNAVAILABLE) {
    console.error(e.message);
  }
}
Expected output
StatusCode.UNAVAILABLE: The service is currently unavailable
Fix 1
Implement Retry with Exponential Backoff
WHEN: The service is temporarily overloaded or restarting.
// Retry with exponential backoff
const MAX_ATTEMPTS = 5;
for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
  try {
    return await client.myMethod(request);
  } catch (e) {
    // Only retry transient UNAVAILABLE errors, and stop after the last attempt
    if (e.code === grpc.status.UNAVAILABLE && attempt < MAX_ATTEMPTS - 1) {
      const delay = 100 * Math.pow(2, attempt); // 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    } else {
      throw e;
    }
  }
}
Why this works
Exponential backoff avoids overwhelming the service by increasing the delay between retries, giving it time to recover.
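Adding random jitter to the backoff spreads retries from many clients over time instead of synchronizing them into waves. A minimal sketch of "full jitter" backoff; the base delay of 100 ms and cap of 5 s are illustrative values, not prescribed by gRPC:

```javascript
// Full-jitter backoff: pick a random delay in [0, min(cap, base * 2^attempt)].
// baseMs and capMs defaults are illustrative assumptions.
function backoffDelay(attempt, baseMs = 100, capMs = 5000) {
  const exp = Math.min(capMs, baseMs * Math.pow(2, attempt));
  return Math.random() * exp;
}

// Usage inside a retry loop:
// const delay = backoffDelay(attempt);
// await new Promise(resolve => setTimeout(resolve, delay));
```

With full jitter, two clients that fail at the same instant almost never retry at the same instant, which helps a recovering service absorb the retry load.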
Fix 2
Use a Service Mesh with Automatic Retries
WHEN: You want a platform-level solution in a microservices environment.
// This is a configuration change in a service mesh like Istio or Linkerd.
// Example Istio VirtualService config:
http:
- route:
  - destination:
      host: my-service
  retries:
    attempts: 3
    perTryTimeout: 2s
    retryOn: unavailable  # retry only on the gRPC UNAVAILABLE status
Why this works
A service mesh can automatically handle retries for UNAVAILABLE errors, making individual clients more resilient without code changes.
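If a service mesh is not available, gRPC itself supports transparent retries through a per-channel "service config". A sketch for @grpc/grpc-js, where the service name `my.package.MyService`, the client class name, and the retry values are illustrative assumptions:

```javascript
// gRPC's built-in retry support is configured via a service config JSON
// attached as a channel option. Names and values below are illustrative.
const serviceConfig = {
  methodConfig: [{
    name: [{ service: 'my.package.MyService' }],
    retryPolicy: {
      maxAttempts: 4,
      initialBackoff: '0.1s',
      maxBackoff: '1s',
      backoffMultiplier: 2,
      retryableStatusCodes: ['UNAVAILABLE'],
    },
  }],
};

const channelOptions = {
  'grpc.service_config': JSON.stringify(serviceConfig),
};

// Passed when constructing the client, e.g.:
// const client = new MyServiceClient(address, credentials, channelOptions);
```

Like the mesh approach, this keeps retry logic out of application code, but it travels with the client rather than the platform.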
✕ Retry immediately in a tight loop without backoff
Causes a 'thundering herd' problem and can prevent a recovering service from coming back online.
Content generated with AI assistance and reviewed for accuracy. Found an error? hello@errcodes.dev