SDK Performance Optimization Guide
This guide provides best practices for optimizing AnkaSecure SDK performance in production applications.
Connection Management
Reuse Client Instances
✅ DO: Create client once, reuse for all operations
// ✅ GOOD: Single client instance (application startup)
@Configuration
public class AnkaSecureConfig {

    private final Environment env;

    public AnkaSecureConfig(Environment env) {
        this.env = env; // Spring injects the environment for property lookup
    }

    @Bean
    public AnkaSecureClient ankaSecureClient() {
        ClientConfig config = ClientConfig.builder()
            .baseUrl(env.getProperty("ankasecure.api.url"))
            .apiKey(env.getProperty("ankasecure.api.key"))
            .tenant(env.getProperty("ankasecure.tenant.id"))
            .connectionPoolSize(20) // Connection pooling
            .build();
        return new AnkaSecureClient(config);
    }
}
// Inject and reuse
@Service
public class EncryptionService {

    private final AnkaSecureClient client;

    public EncryptionService(AnkaSecureClient client) {
        this.client = client; // Reused for all requests
    }

    public String encrypt(String data) {
        return client.encrypt(
            EncryptRequest.builder().keyId("key-1").plaintext(data).build()
        ).getCiphertext();
    }
}
❌ DON'T: Create new client per request
// ❌ BAD: New client every time (connection overhead)
public String encrypt(String data) {
    AnkaSecureClient client = new AnkaSecureClient(config); // DON'T DO THIS
    return client.encrypt(request).getCiphertext();
}
Performance Impact: Reusing the client saves ~50-100ms per request (TLS handshake and connection setup are skipped).
Connection Pooling
Configure Connection Pool
Default: 5 concurrent connections
Recommended: 10-20 connections (based on concurrency)
ClientConfig config = ClientConfig.builder()
    .baseUrl("https://api.ankasecure.com")
    .apiKey(apiKey)
    .tenant(tenantId)
    .connectionPoolSize(20)   // Max concurrent requests
    .connectionTimeout(10000) // 10 seconds
    .readTimeout(30000)       // 30 seconds
    .build();
Tuning Guide:
- Low concurrency (<10 concurrent requests): 5 connections
- Medium concurrency (10-50 concurrent requests): 10-20 connections
- High concurrency (>50 concurrent requests): 30-50 connections
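The tuning guide above can be captured in a small sizing helper. This is an illustrative sketch, not part of the SDK; `PoolSizing` and `recommendedPoolSize` are hypothetical names:

```java
// Hypothetical helper mirroring the tuning guide above (not an SDK API).
public class PoolSizing {

    // Maps expected concurrent request count to a connection pool size,
    // following the low / medium / high bands from the tuning guide.
    public static int recommendedPoolSize(int expectedConcurrentRequests) {
        if (expectedConcurrentRequests < 10) {
            return 5;   // Low concurrency
        }
        if (expectedConcurrentRequests <= 50) {
            return 20;  // Medium concurrency
        }
        return 50;      // High concurrency (upper bound of the 30-50 band)
    }
}
```

Pass the result to `connectionPoolSize(...)` when building the `ClientConfig`.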
Monitoring:
ClientStats stats = client.getConnectionStats();
System.out.println("Active connections: " + stats.getActiveConnections());
System.out.println("Idle connections: " + stats.getIdleConnections());
System.out.println("Requests waiting: " + stats.getPendingRequests());
Batch Operations
Batch Encrypt Multiple Payloads
Avoid: N separate API calls for N payloads
Optimize: Batch requests using parallel streams
import java.util.List;
import java.util.stream.Collectors;

public List<String> batchEncrypt(List<String> plaintexts, String keyId) {
    // Parallel encryption (uses the connection pool).
    // Note: parallelStream() runs on the JVM-wide common ForkJoinPool,
    // so its parallelism is not tied to the connection pool size.
    return plaintexts.parallelStream()
        .map(plaintext -> {
            EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(plaintext)
                .build();
            return client.encrypt(request).getCiphertext();
        })
        .collect(Collectors.toList());
}
Performance Improvement: 5-10x faster than sequential processing (for >10 items).
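Because `parallelStream()` draws worker threads from the shared common pool, its concurrency can exceed (or starve) the configured connection pool. A sketch of the same batch pattern with an explicitly bounded executor — generic here, with the encrypt call represented by the `op` function (`BoundedBatch` and `mapBounded` are illustrative names, not SDK API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class BoundedBatch {

    // Applies op to every item with at most poolSize concurrent calls,
    // preserving input order in the result list.
    public static <T, R> List<R> mapBounded(List<T> items, Function<T, R> op, int poolSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<R>> futures = new ArrayList<>();
            for (T item : items) {
                futures.add(pool.submit(() -> op.apply(item)));
            }
            List<R> results = new ArrayList<>();
            for (Future<R> future : futures) {
                results.add(future.get()); // Propagates the first failure
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

In the encryption case, `op` would wrap `client.encrypt(...)` and `poolSize` would match the client's `connectionPoolSize`.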
Async Operations with CompletableFuture
public CompletableFuture<EncryptResponse> encryptAsync(String data, String keyId) {
    return CompletableFuture.supplyAsync(() -> {
        EncryptRequest request = EncryptRequest.builder()
            .keyId(keyId)
            .plaintext(data)
            .build();
        return client.encrypt(request);
    });
}

// Usage: Fire-and-forget or await results
List<CompletableFuture<EncryptResponse>> futures = plaintexts.stream()
    .map(data -> encryptAsync(data, keyId))
    .collect(Collectors.toList());

// Wait for all to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();

// Collect results
List<EncryptResponse> results = futures.stream()
    .map(CompletableFuture::join)
    .collect(Collectors.toList());
Caching Strategies
Cache Key Metadata
Avoid: Fetching key metadata on every encryption
// ❌ BAD: Fetch key metadata every time
public String encrypt(String data) {
    KeyResponse key = client.getKey("my-key"); // Network call on every request!
    if (!key.getStatus().equals("ACTIVE")) {
        throw new IllegalStateException("Key is not active");
    }
    return client.encrypt(
        EncryptRequest.builder().keyId("my-key").plaintext(data).build()
    ).getCiphertext();
}
Optimize: Cache key metadata with TTL
// ✅ GOOD: Cache key metadata
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CachedKeyService {

    private final AnkaSecureClient client;
    private final Cache<String, KeyResponse> keyCache;

    public CachedKeyService(AnkaSecureClient client) {
        this.client = client;
        this.keyCache = Caffeine.newBuilder()
            .expireAfterWrite(5, TimeUnit.MINUTES) // 5-minute TTL
            .maximumSize(1000)
            .build();
    }

    public KeyResponse getKey(String keyId) {
        return keyCache.get(keyId, id -> client.getKey(id)); // Cache miss → fetch
    }
}
Performance Impact: Saves ~20-50ms per encryption by avoiding the key lookup.
Cache Public Keys
For encryption-only operations (no decryption):
// Cache public keys locally (no security risk)
private final Map<String, PublicKey> publicKeyCache = new ConcurrentHashMap<>();

public String encryptWithCachedPublicKey(String data, String keyId) {
    PublicKey publicKey = publicKeyCache.computeIfAbsent(keyId, id -> {
        KeyResponse key = client.getKey(id);
        return parsePublicKey(key.getPublicKey()); // Cache parsed public key
    });
    // Use cached public key for encryption (no API call)
    return localEncrypt(data, publicKey);
}
Security Note: Only cache public keys (never private keys).
Payload Optimization
Use Streaming for Large Files
Threshold: Use streaming API for payloads >5 MB
public void encryptLargeFile(File inputFile, File outputFile, String keyId) throws IOException {
    if (inputFile.length() > 5 * 1024 * 1024) {
        // ✅ Use streaming API (>5 MB); try-with-resources closes the streams
        try (InputStream in = new FileInputStream(inputFile);
             OutputStream out = new FileOutputStream(outputFile)) {
            client.streamEncrypt(
                StreamEncryptRequest.builder()
                    .keyId(keyId)
                    .inputStream(in)
                    .outputStream(out)
                    .build()
            );
        }
    } else {
        // ✅ Use compact API (≤5 MB)
        byte[] plaintext = Files.readAllBytes(inputFile.toPath());
        String ciphertext = client.encrypt(
            EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(Base64.getEncoder().encodeToString(plaintext))
                .build()
        ).getCiphertext();
        Files.writeString(outputFile.toPath(), ciphertext);
    }
}
Compress Before Encryption
For compressible data (text, JSON, XML):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public String encryptCompressed(String data, String keyId) throws IOException {
    // Compress first (reduces payload size)
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (GZIPOutputStream gzipOut = new GZIPOutputStream(baos)) {
        gzipOut.write(data.getBytes(StandardCharsets.UTF_8));
    }
    byte[] compressed = baos.toByteArray();
    String base64Compressed = Base64.getEncoder().encodeToString(compressed);

    // Encrypt compressed data
    EncryptResponse response = client.encrypt(
        EncryptRequest.builder()
            .keyId(keyId)
            .plaintext(base64Compressed)
            .build()
    );
    return response.getCiphertext();
}
Performance Impact: 50-80% size reduction for text (faster transmission).
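The compression step needs a matching decompression step after decryption on the receiving side. A self-contained round-trip sketch using only the JDK (`GzipCodec` is an illustrative name):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCodec {

    // Compresses raw bytes with gzip (run before encryption).
    public static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzipOut = new GZIPOutputStream(baos)) {
            gzipOut.write(data);
        }
        return baos.toByteArray();
    }

    // Decompresses gzip bytes (run after decryption).
    public static byte[] decompress(byte[] compressed) throws IOException {
        try (GZIPInputStream gzipIn = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gzipIn.readAllBytes();
        }
    }
}
```

Skip compression for already-compressed formats (JPEG, ZIP, video): gzip adds CPU cost with little or no size reduction there.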
Algorithm Selection for Performance
Choose Fast Algorithms
Fastest Encryption (for high-throughput APIs):
- ChaCha20-Poly1305: 87 MB/s
- AES-256-GCM: 74 MB/s
- ML-KEM-768: 82 MB/s (quantum-resistant)
// High-throughput scenario: Use ChaCha20 or AES
KeyGenerationRequest fastKey = KeyGenerationRequest.builder()
.algorithm("ChaCha20-Poly1305") // Fastest symmetric
.keyId("high-throughput-key")
.build();
Fastest Signatures:
- HMAC-SHA256: 218 MB/s (symmetric MAC)
- ML-DSA-65: 59 MB/s (quantum-resistant)
Error Handling Optimization
Implement Retry Logic
With exponential backoff:
public <T> T executeWithRetry(Supplier<T> operation, int maxRetries) {
    int attempt = 0;
    int backoffMs = 1000; // Start with 1 second
    while (attempt < maxRetries) {
        try {
            return operation.get();
        } catch (RateLimitException e) {
            attempt++;
            if (attempt >= maxRetries) {
                throw e;
            }
            try {
                Thread.sleep(backoffMs);
                backoffMs *= 2; // Exponential: 1s, 2s, 4s, 8s
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(ie);
            }
        }
    }
    throw new RuntimeException("Max retries exceeded");
}

// Usage
EncryptResponse response = executeWithRetry(
    () -> client.encrypt(request), 5
);
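Pure exponential backoff can synchronize many clients into retry bursts against a recovering service. A common refinement is full jitter: pick a uniformly random delay up to the exponential ceiling. This is an illustrative sketch, not an SDK feature; `JitterBackoff` is a hypothetical name:

```java
import java.util.concurrent.ThreadLocalRandom;

public class JitterBackoff {

    // Full-jitter delay: uniform random in [0, min(capMs, baseMs * 2^attempt)].
    public static long delayMs(int attempt, long baseMs, long capMs) {
        long exponential = baseMs * (1L << Math.min(attempt, 20)); // Clamp shift to avoid overflow
        long ceiling = Math.min(capMs, exponential);
        return ThreadLocalRandom.current().nextLong(ceiling + 1);
    }
}
```

To use it, replace the fixed `Thread.sleep(backoffMs)` in the retry loop with `Thread.sleep(JitterBackoff.delayMs(attempt, 1000, 30000))`.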
Circuit Breaker Pattern
Prevent cascading failures:
import java.time.Duration;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

public class ResilientAnkaSecureClient {

    private final AnkaSecureClient client;
    private final CircuitBreaker circuitBreaker;

    public ResilientAnkaSecureClient(AnkaSecureClient client) {
        this.client = client;
        CircuitBreakerConfig config = CircuitBreakerConfig.custom()
            .failureRateThreshold(50)                        // Open if >50% failures
            .waitDurationInOpenState(Duration.ofSeconds(30)) // Wait 30s before retry
            .slidingWindowSize(10)                           // Last 10 requests
            .build();
        this.circuitBreaker = CircuitBreaker.of("ankasecure", config);
    }

    public EncryptResponse encrypt(EncryptRequest request) {
        return circuitBreaker.executeSupplier(() -> client.encrypt(request));
    }
}
Benefits:
- ⚡ Fast failure (avoid waiting for timeouts when the service is down)
- 🛡️ Prevent overload (stop sending requests to a failing service)
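Resilience4j is the practical choice, but the mechanics are worth seeing in miniature. The following is a deliberately simplified, illustrative state machine (not production code and not an SDK class): open once the failure rate in a sliding window crosses a threshold, fail fast while open, then allow a trial request after the wait duration.

```java
import java.util.ArrayDeque;

public class SimpleCircuitBreaker {

    private final int windowSize;
    private final double failureRateThreshold; // e.g. 0.5 for 50%
    private final long openDurationMs;
    private final ArrayDeque<Boolean> window = new ArrayDeque<>(); // true = failure
    private long openedAt = -1; // -1 means CLOSED

    public SimpleCircuitBreaker(int windowSize, double failureRateThreshold, long openDurationMs) {
        this.windowSize = windowSize;
        this.failureRateThreshold = failureRateThreshold;
        this.openDurationMs = openDurationMs;
    }

    // Call before each request; false means fail fast without calling the service.
    public synchronized boolean allowRequest(long nowMs) {
        if (openedAt >= 0) {
            if (nowMs - openedAt < openDurationMs) {
                return false; // OPEN: fail fast
            }
            openedAt = -1;    // HALF-OPEN: allow a trial request
            window.clear();
        }
        return true;
    }

    // Call after each request with its outcome.
    public synchronized void record(boolean failure, long nowMs) {
        window.addLast(failure);
        if (window.size() > windowSize) {
            window.removeFirst(); // Keep only the last windowSize outcomes
        }
        long failures = window.stream().filter(f -> f).count();
        if (window.size() == windowSize
                && (double) failures / windowSize >= failureRateThreshold) {
            openedAt = nowMs; // Trip to OPEN
        }
    }
}
```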
Monitoring & Metrics
Track Performance Metrics
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Service
public class MonitoredEncryptionService {

    private final AnkaSecureClient client;
    private final Timer encryptionTimer;

    public MonitoredEncryptionService(AnkaSecureClient client, MeterRegistry registry) {
        this.client = client;
        this.encryptionTimer = registry.timer("ankasecure.encryption");
    }

    public String encrypt(String data, String keyId) {
        return encryptionTimer.record(() -> {
            EncryptRequest request = EncryptRequest.builder()
                .keyId(keyId)
                .plaintext(data)
                .build();
            return client.encrypt(request).getCiphertext();
        });
    }
}
Metrics to Track:
- Latency: p50, p95, p99 percentiles
- Throughput: Requests/second, MB/second
- Error Rate: Errors/second, % of requests
- Rate Limiting: 429 responses/minute
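Micrometer computes percentiles for you, but if you need a quick p50/p95/p99 over an ad-hoc sample buffer (e.g. in a load-test script), nearest-rank is the simplest method. A sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LatencyPercentiles {

    // Nearest-rank percentile: p in (0, 100], samples must be non-empty.
    public static long percentile(List<Long> samples, double p) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.size()); // 1-based rank
        return sorted.get(Math.max(0, rank - 1));
    }
}
```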
Performance Checklist
✅ DO
- ✅ Reuse AnkaSecureClient instance (singleton or bean)
- ✅ Configure connection pooling (10-20 connections)
- ✅ Use streaming API for files >5 MB
- ✅ Implement retry logic with exponential backoff
- ✅ Cache key metadata (5-minute TTL)
- ✅ Choose fast algorithms (ML-KEM-768, ChaCha20)
- ✅ Compress data before encryption (if compressible)
- ✅ Monitor performance metrics (latency, throughput, errors)
❌ DON'T
- ❌ Create new client per request (connection overhead)
- ❌ Use compact API for large files (memory exhaustion)
- ❌ Retry indefinitely on errors (respect rate limits)
- ❌ Fetch key metadata on every operation (network latency)
- ❌ Use slow algorithms when performance critical (Classic McEliece, SLH-DSA-S)
- ❌ Ignore rate limit headers (leads to throttling)
Performance Targets
Expected Latency (5 MB Payload)
Encryption:
- ChaCha20-Poly1305: 60-80ms (encryption) + 20-50ms (network) = 80-130ms total
- ML-KEM-768: 64-90ms (encryption) + 20-50ms (network) = 84-140ms total

Signing:
- ML-DSA-65: 89-100ms (signing) + 20-50ms (network) = 109-150ms total

If exceeding targets:
- Check network latency (ping api.ankasecure.com)
- Verify connection pooling is configured
- Consider a faster algorithm (if security requirements allow)
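To compare your own numbers against these targets, a minimal wall-clock probe is enough (`LatencyProbe` is an illustrative name, not an SDK class):

```java
public class LatencyProbe {

    // Measures wall-clock duration of an operation in milliseconds.
    public static long timeMs(Runnable operation) {
        long start = System.nanoTime();
        operation.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Wrap `client.encrypt(...)` in the `Runnable`, run it repeatedly after a warm-up, and compare the p95 of the measurements against the table above (single runs are noisy).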
Related Resources
- Performance Benchmarks - Algorithm throughput data
- Algorithm Selection - Choose optimal algorithm
- Testing Guide - Performance testing patterns
- Troubleshooting - Common performance issues
Documentation Version: 3.0.0 Last Updated: 2025-12-26