Make Your Life Easier
In modern iOS apps, multiple parts of your UI often need the same data simultaneously. With the right architecture, you can turn this into an advantage - serving all callers from a single network request.
The best part? You can wrap your existing code without introducing new services or major refactors. Just add a coalescer layer, and suddenly your app makes fewer requests, your server breathes easier, and your users get faster responses.
Try it yourself: Check out the TaskCoalescingExample demo app to see both coalescers in action with visual event logs.
Give Your Server a Break: How to Aggregate Repeated Requests
Consider this common scenario when a user opens your app:
// User opens the app...
TabBarController.viewDidLoad() // fetchUserProfile()
ProfileHeader.onAppear() // fetchUserProfile()
NotificationBadge.onAppear() // fetchUserProfile()
// Without coalescing: 3 identical network requests
// With coalescing: 1 request, 3 satisfied callers
This pattern appears everywhere:
- Multiple UI components requesting the same data on appear - your tab bar, header, and badges all wake up at once and ask for user data
- Prefetching data early, then handing it over when the screen loads - you start fetching during the splash screen, and when the main screen appears, it picks up the already-running (or completed) request instead of starting a new one
- Pull-to-refresh while a request is already in-flight - the user pulls to refresh, but the data is already being fetched from a previous action
- Pagination requesting the same page multiple times - rapid scrolling triggers multiple requests for page 2 before the first one completes
Swift Concurrency gives us the perfect building blocks for this - actors and reentrancy.
Let’s build something elegant.
The InFlightTaskCoalescer
Think of it like a reception desk at a hotel.
Imagine multiple guests arrive and ask: “What’s the WiFi password?” The receptionist doesn’t call IT once per guest. Instead:
- First guest asks → receptionist calls IT and asks them to wait
- Second guest asks → “Someone’s already getting that answer, please wait with them”
- Third guest asks → same thing, joins the waiting group
- IT responds → receptionist tells all waiting guests at once
That’s exactly what our coalescer does with network requests.
Why Actors?
Actors give us thread-safe state - we need to track which requests are “in-flight” (being fetched), and multiple callers might check this state simultaneously. Actors guarantee that only one piece of code accesses this state at a time, eliminating race conditions.
Why Reentrancy?
Actor reentrancy lets new callers join while others wait - when an actor hits an await point, it doesn’t block. Other callers can enter, see there’s already a request in progress, and join the wait. Without reentrancy, callers would queue up and each start their own request.
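To see reentrancy concretely, here is a minimal, self-contained sketch (the `Sleeper` actor and the sleep duration are made up for illustration; they are not part of the coalescer):

```swift
actor Sleeper {
    var entries = 0

    func work() async {
        entries += 1  // actor state is still mutated safely, one caller at a time
        // Suspension point: the actor is free while we sleep,
        // so other calls to work() can enter before this one resumes.
        try? await Task.sleep(for: .milliseconds(100))
    }
}

let sleeper = Sleeper()
// Both calls enter the actor before either finishes sleeping.
// With a non-reentrant lock, the second would have to wait ~100ms first.
async let a: Void = sleeper.work()
async let b: Void = sleeper.work()
_ = await (a, b)
```

This is exactly the property the coalescer relies on: while the first caller is suspended on its network request, later callers can still enter the actor and find the in-flight task.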
The Implementation
// InFlightTaskCoalescer.swift
public actor InFlightTaskCoalescer<Key: Hashable & Sendable, Value: Sendable> {
    // The "guest list" - tracks who's already fetching what
    private var inFlightTasks: [Key: Task<Value, Error>] = [:]

    public init() {}

    public func deduplicated(
        key: Key,
        priority: TaskPriority? = nil,
        name: String? = nil,
        operation: @escaping @Sendable () async throws -> Value
    ) async throws -> Value {
        // "Is someone already getting this answer?"
        if let existingTask = inFlightTasks[key] {
            // Yes! Wait for their result instead of asking again
            return try await existingTask.value
        }

        // No one's asking yet - we'll be the one to fetch it
        // IMPORTANT: Register BEFORE awaiting, so others can find us
        let task = Task(name: name, priority: priority) {
            try await operation()
        }
        inFlightTasks[key] = task

        // When done, remove from the list (we have the answer now)
        defer {
            inFlightTasks[key] = nil
        }

        // Wait for our own request and return the result
        return try await task.value
    }

    /// Cancel a specific in-flight request
    @discardableResult
    public func cancel(key: Key) -> Bool {
        guard let task = inFlightTasks[key] else { return false }
        task.cancel()
        inFlightTasks[key] = nil
        return true
    }

    /// Cancel everything (logout, major state change)
    public func cancelAll() {
        inFlightTasks.values.forEach { $0.cancel() }
        inFlightTasks.removeAll()
    }
}
How It Flows
Guest #1: [═══════ calling IT ═══════]→ gets password
Guest #2: [waiting with #1────]→ same password
Guest #3: [waiting──]→ same password
↓
request done, removed
Guest #4: [═══ new call ═══]
The critical detail: we register the task before awaiting. This ensures any concurrent callers will find it and join, rather than starting their own.
Where to Use It
class ProfileService {
    private let coalescer = InFlightTaskCoalescer<String, UserProfile>()

    func getProfile(userId: String) async throws -> UserProfile {
        // Wrap your existing fetch - that's it!
        try await coalescer.deduplicated(
            key: userId,
            name: "ProfileService.getProfile(\(userId))"
        ) {
            // Your existing code, unchanged
            try await api.fetchProfile(userId: userId)
        }
    }
}

// Now multiple simultaneous calls = single network request
async let profile1 = profileService.getProfile(userId: "123")
async let profile2 = profileService.getProfile(userId: "123")
async let profile3 = profileService.getProfile(userId: "123")
let results = try await (profile1, profile2, profile3)
// 1 API call, 3 happy callers!
Notice how you just wrap your existing api.fetchProfile() call. No need to refactor your API layer or change your architecture.
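The cancellation APIs slot into lifecycle events the same way. A minimal sketch, assuming the extension lives in the same file as ProfileService (so the private coalescer is visible); the `logOut` call site and `profileService` instance are made up for illustration:

```swift
// Same file as ProfileService, so the private coalescer is accessible
extension ProfileService {
    /// Drop every in-flight profile request so a stale response
    /// can't arrive after the user's session has ended.
    func cancelAllRequests() async {
        await coalescer.cancelAll()
    }
}

// Hypothetical logout handler:
func logOut() {
    Task {
        await profileService.cancelAllRequests()
    }
}
```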
Make Your App Feel Instant: Adding Prefetch Support
Want your screens to load instantly? Start fetching during the splash screen, and when the user arrives at the main screen, the data is already there waiting.
Here’s the flow:
// Splash screen: "Let me start fetching while the user watches the logo"
func prepareForUser() {
    prefetchFeedData() // Fire-and-forget: starts early
}

// Main screen appears: "Is that data ready yet?"
func loadFeed() async {
    let data = await getFeedData() // Picks up the in-flight or completed request
}
The basic InFlightTaskCoalescer handles this when the request is still running. But what if the prefetch completes before the screen appears? We need to hold onto that result briefly.
The PrefetchableTaskCoalescer
Think of it as the reception desk with a “recently answered” board. If someone just asked a question, the answer stays posted for a bit so the next person doesn’t have to wait.
Key behavior:
- Only prefetch() stores results
- deduplicated() consumes a prefetched result (one-time use) but never stores its own result
// PrefetchableTaskCoalescer.swift
public actor PrefetchableTaskCoalescer<Key: Hashable & Sendable, Value: Sendable> {
    private let coalescer = InFlightTaskCoalescer<Key, Value>()

    // The "recently answered" board
    private var prefetchedResults: [Key: PrefetchedResult<Value>] = [:]
    private let prefetchTimeout: Duration

    public init(prefetchTimeout: Duration) {
        self.prefetchTimeout = prefetchTimeout
    }

    /// Check the board first, then ask if needed
    public func deduplicated(
        key: Key,
        name: String? = nil,
        operation: @escaping @Sendable () async throws -> Value
    ) async throws -> Value {
        // "Is this answer on the board?"
        if let prefetched = prefetchedResults[key] {
            if prefetched.isExpired(timeout: prefetchTimeout) {
                // Answer's too old, erase it
                prefetchedResults[key] = nil
            } else {
                // Fresh answer! Take it and erase (one-time use)
                prefetchedResults[key] = nil
                return prefetched.value
            }
        }

        // Not on the board - go through normal coalescing
        return try await coalescer.deduplicated(key: key, name: name, operation: operation)
    }

    /// Fetch and post the answer on the board for later
    public func prefetch(
        key: Key,
        name: String? = nil,
        operation: @escaping @Sendable () async throws -> Value
    ) async throws {
        // Already on the board and fresh? Nothing to do
        if let prefetched = prefetchedResults[key], !prefetched.isExpired(timeout: prefetchTimeout) {
            return
        }

        // Fetch (with coalescing) and post the result
        let result = try await coalescer.deduplicated(key: key, name: name, operation: operation)
        prefetchedResults[key] = PrefetchedResult(value: result)
    }

    /// Erase the board (for refresh scenarios)
    public func invalidate(key: Key) async {
        prefetchedResults[key] = nil
        await coalescer.cancel(key: key)
    }

    /// Clear everything (logout, major state change)
    public func invalidateAll() async {
        prefetchedResults.removeAll()
        await coalescer.cancelAll()
    }
}

private struct PrefetchedResult<Value>: Sendable where Value: Sendable {
    let value: Value
    let createdAt: ContinuousClock.Instant

    init(value: Value) {
        self.value = value
        self.createdAt = ContinuousClock.now
    }

    func isExpired(timeout: Duration) -> Bool {
        ContinuousClock.now >= createdAt.advanced(by: timeout)
    }
}
The Five Scenarios
SCENARIO A: Main screen loads WHILE prefetch in-flight (coalescing)
─────────────────────────────────────────────────────────────────────
prefetch(): [═══════ task running ═══════]
↓ result stored
deduplicated(): [await same task──────]→ gets result → prefetch cleared
↑ joins in-flight
SCENARIO B: Main screen loads AFTER prefetch completes (prefetch hit)
─────────────────────────────────────────────────────────────────────
prefetch(): [═══ task ═══]→ result stored (waiting for consumer)
↓
deduplicated(): [instant return]→ prefetch cleared
↑ no network request!
SCENARIO C: Multiple consumers after prefetch (first one wins)
─────────────────────────────────────────────────────────────────────
prefetch(): [═══ task ═══]→ result stored
↓
deduplicated() #1: [instant]→ prefetch cleared
deduplicated() #2: [new task - nothing stored]
SCENARIO D: No prefetch - just coalescing (nothing stored)
─────────────────────────────────────────────────────────────────────
deduplicated() #1: [═══════ task ═══════]→ result returned (not stored)
deduplicated() #2: [await same───]→ gets same result
deduplicated() #3: [new task - nothing stored]
SCENARIO E: Prefetch expires before consumption (timeout safety net)
─────────────────────────────────────────────────────────────────────
prefetch(): [═══ task ═══]→ result stored
↓
... timeout expires ... prefetch cleared
deduplicated(): [new task]
Scenario C is particularly important: first consumer wins. If two screens both try to consume a prefetch, only the first gets the instant result. The second makes a fresh request (but doesn’t store it).
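A small, self-contained sketch of that rule, using the coalescer above with Int values standing in for real feed data:

```swift
let coalescer = PrefetchableTaskCoalescer<String, Int>(
    prefetchTimeout: .seconds(30)
)

// Post an answer on the board
try await coalescer.prefetch(key: "feed") { 42 }

// Consumer #1: instant hit, and the stored result is erased
let first = try await coalescer.deduplicated(key: "feed") { 42 }

// Consumer #2: the board is empty now, so this closure actually runs
let second = try await coalescer.deduplicated(key: "feed") {
    print("fresh request")  // executes - the prefetch was already consumed
    return 42
}
```

Both consumers get a value either way; the difference is that only the first avoids the network round-trip.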
Real-World Example
class FeedService {
    private let coalescer = PrefetchableTaskCoalescer<FeedKey, FeedData>(
        prefetchTimeout: .seconds(30)
    )

    // Called during splash screen - start early!
    func prefetchFeed() {
        Task {
            try await coalescer.prefetch(
                key: .explore,
                name: "FeedService.prefetch(explore)"
            ) {
                try await api.fetchExploreFeed()
            }
        }
    }

    // Called when feed screen appears - instant if prefetched!
    func loadFeed() async throws -> FeedData {
        try await coalescer.deduplicated(
            key: .explore,
            name: "FeedService.loadFeed(explore)"
        ) {
            try await api.fetchExploreFeed()
        }
    }

    // Pull-to-refresh - clear the board, fetch fresh
    func refreshFeed() async throws -> FeedData {
        await coalescer.invalidate(key: .explore)
        return try await loadFeed()
    }
}
Easier Debugging with Task Names
The name parameter (Swift 6.2+) helps you identify tasks in debugging tools:
- Instruments - Swift Concurrency template shows task names in the timeline
- LLDB - the swift task info command lists running tasks with their names
try await coalescer.deduplicated(
    key: .explore,
    name: "ExploreFeed.fetch(page: \(page))"
) {
    try await api.fetchExploreFeed(page: page)
}
When debugging, pause execution and run swift task info in LLDB:
(lldb) swift task info
Task 0x600000c70000
  Name: "ExploreFeed.fetch(page: 2)"
  State: running
  Priority: medium
Task 0x600000c71000
  Name: "ProfileService.getProfile(user_123)"
  State: waiting on Task 0x600000c70000
  Priority: high
Instead of deciphering anonymous task IDs, you immediately see what’s running and what’s waiting.
Best Practices
Use Meaningful Keys
Your key should capture everything that makes a request unique:
// Good: Different filters = different requests
enum FeedKey: Hashable {
    case explore(filter: String, page: Int)
    case following
    case trending(category: String)
}

// Risky: "feed" with filter=A and filter=B would collide
let key = "feed"
Choose Appropriate Timeouts
Match the timeout to your user flow:
// Fast transition (splash → home): short timeout
let splashCoalescer = PrefetchableTaskCoalescer<FeedKey, FeedData>(
    prefetchTimeout: .seconds(15)
)

// Slow flow (onboarding): longer timeout
let onboardingCoalescer = PrefetchableTaskCoalescer<UserData, UserProfile>(
    prefetchTimeout: .seconds(60)
)
Remember: Coalescing ≠ Caching
Coalescers merge concurrent requests. Sequential requests (after completion) start fresh:
// These happen one after another = TWO requests
let result1 = try await coalescer.deduplicated(key: "data") { ... }
// ... first completes ...
let result2 = try await coalescer.deduplicated(key: "data") { ... }
// These happen simultaneously = ONE request
async let result1 = coalescer.deduplicated(key: "data") { ... }
async let result2 = coalescer.deduplicated(key: "data") { ... }
Quick Reference
| Feature | InFlightTaskCoalescer | PrefetchableTaskCoalescer |
|---|---|---|
| Merges concurrent requests | Yes | Yes |
| Stores completed results | No | Yes (via prefetch()) |
| Timeout for stored results | N/A | Yes |
| Cancellation support | cancel(key:), cancelAll() | invalidate(key:), invalidateAll() |
| Best for | Deduplicating simultaneous calls | Instant screen loads |
Key Takeaways
- Wrap, don’t rewrite - add coalescing around existing code without major refactors
- Actors + reentrancy - the building blocks for safe, elegant request merging
- Register before awaiting - the critical detail that makes coalescing work
- Prefetch for instant screens - start early, hand off when the screen appears
- Coalescing ≠ caching - only concurrent requests get merged
- Name your tasks - future-you debugging at 2am will be grateful