Synchronization patterns coordinate timing between goroutines. They ensure goroutines wait for each other or for results to be ready before proceeding.

Verdicts Link to heading

  • Idiomatic - Considered a best practice in Go.
  • Code Smell - Not wrong, but if you are using it, make sure you understand what you are doing.
  • Anti-Pattern - Avoid unless you are an expert and know it is okay to break this rule.

Future Design Pattern Link to heading

Problem Space Link to heading

The Future pattern represents a value that will be available at some point. You start an async operation and get back a handle to retrieve the result later. This lets you kick off work early and block only when you actually need the result.

Practical Example 1 Link to heading

cmd/future/main.go

package main

import (
    "fmt"
    "time"
)

func fetchUser(id int) <-chan string {
    ch := make(chan string, 1)
    go func() {
        time.Sleep(100 * time.Millisecond)
        ch <- fmt.Sprintf("user-%d", id)
    }()
    return ch
}

func fetchOrder(id int) <-chan string {
    ch := make(chan string, 1)
    go func() {
        time.Sleep(150 * time.Millisecond)
        ch <- fmt.Sprintf("order-%d", id)
    }()
    return ch
}

func main() {
    // Start both fetches concurrently
    userFuture := fetchUser(1)
    orderFuture := fetchOrder(100)
    
    // Do other work here if needed
    fmt.Println("fetching...")
    
    // Block only when we need results
    user := <-userFuture
    order := <-orderFuture
    
    fmt.Println("user:", user)
    fmt.Println("order:", order)
}

Practical Example 2 Link to heading

internal/future/future.go

package future

type Future[T any] struct {
    result T
    err    error
    done   chan struct{}
}

func New[T any](fn func() (T, error)) *Future[T] {
    f := &Future[T]{
        done: make(chan struct{}),
    }
    go func() {
        f.result, f.err = fn()
        close(f.done)
    }()
    return f
}

func (f *Future[T]) Get() (T, error) {
    <-f.done
    return f.result, f.err
}

func (f *Future[T]) Done() <-chan struct{} {
    return f.done
}

cmd/future/main.go

package main

import (
    "fmt"
    "io"
    "myapp/internal/future"
    "net/http"
)

func main() {
    // Start async fetches
    googleFuture := future.New(func() (string, error) {
        resp, err := http.Get("https://www.google.com")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        return string(body[:min(100, len(body))]), nil
    })
    
    githubFuture := future.New(func() (string, error) {
        resp, err := http.Get("https://api.github.com")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        return string(body[:min(100, len(body))]), nil
    })
    
    fmt.Println("requests started...")
    
    // Get results
    google, err := googleFuture.Get()
    if err != nil {
        fmt.Println("google error:", err)
    } else {
        fmt.Println("google:", google)
    }
    
    github, err := githubFuture.Get()
    if err != nil {
        fmt.Println("github error:", err)
    } else {
        fmt.Println("github:", github)
    }
}

Considerations Link to heading

Example 1: A buffered channel of size 1 is the simplest future. The function returns immediately with a channel, and the caller reads when ready. The buffer matters: it lets the goroutine send its result and exit even if the caller never reads, avoiding a goroutine leak. No wrapper type needed.

Example 2: A generic Future type. It provides a cleaner API when you have many async operations and want consistent error handling, and its Done() channel allows select-based waiting.

For simple cases, return a channel directly. For complex applications with many async operations, a Future type adds clarity and error handling.

Verdict Link to heading

  • Example 1: Idiomatic
  • Example 2: Idiomatic (when error handling and reuse matter)

Barrier Concurrency Pattern Link to heading

Problem Space Link to heading

The Barrier pattern synchronizes multiple goroutines at a specific point. All goroutines must reach the barrier before any can proceed. This is useful when you need to ensure all parallel tasks complete a phase before moving to the next.

Practical Example 1 Link to heading

cmd/barrier/main.go

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var wg sync.WaitGroup
    
    workers := 5
    wg.Add(workers)
    
    for i := range workers {
        go func(id int) {
            defer wg.Done()
            
            // Simulate work
            time.Sleep(time.Duration(id*100) * time.Millisecond)
            fmt.Printf("worker %d: phase 1 complete\n", id)
        }(i)
    }
    
    // Barrier: wait for all workers to complete phase 1
    wg.Wait()
    fmt.Println("--- all workers finished phase 1 ---")
    
    // Phase 2
    wg.Add(workers)
    
    for i := range workers {
        go func(id int) {
            defer wg.Done()
            
            time.Sleep(time.Duration(id*50) * time.Millisecond)
            fmt.Printf("worker %d: phase 2 complete\n", id)
        }(i)
    }
    
    wg.Wait()
    fmt.Println("--- all workers finished phase 2 ---")
}

Practical Example 2 Link to heading

cmd/errgroup/main.go

package main

import (
    "context"
    "fmt"
    "time"
    
    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background())
    
    results := make([]string, 3)
    
    // Fetch from multiple sources concurrently
    g.Go(func() error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(100 * time.Millisecond):
            results[0] = "data from API"
            fmt.Println("API fetch complete")
            return nil
        }
    })
    
    g.Go(func() error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(150 * time.Millisecond):
            results[1] = "data from database"
            fmt.Println("database fetch complete")
            return nil
        }
    })
    
    g.Go(func() error {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(80 * time.Millisecond):
            results[2] = "data from cache"
            fmt.Println("cache fetch complete")
            return nil
        }
    })
    
    // Barrier: wait for all fetches
    if err := g.Wait(); err != nil {
        fmt.Println("error:", err)
        return
    }
    
    fmt.Println("--- all fetches complete ---")
    for i, r := range results {
        fmt.Printf("result %d: %s\n", i, r)
    }
}

Considerations Link to heading

Example 1: sync.WaitGroup is the simplest barrier. Add the count, spawn goroutines that call Done, and Wait blocks until all complete. Good for one-shot synchronization.

Example 2: errgroup from golang.org/x/sync combines a barrier with error propagation and cancellation. If any goroutine returns an error, the context is cancelled. Other goroutines should check ctx.Done() to exit early, avoiding wasted work. This is the preferred approach when tasks can fail.

For most cases, use sync.WaitGroup or errgroup. The reusable barrier with sync.Cond is rarely needed and adds complexity.

Verdict Link to heading

  • Example 1: Idiomatic
  • Example 2: Idiomatic