# Storage

Unified object storage with S3, GCS, Azure, and local filesystem backends.

## Overview
`github.com/xraph/forge/extensions/storage` provides a multi-backend object storage layer.
It registers a `StorageManager` in the DI container that manages named backends and exposes
upload, download, delete, list, copy, move, and presigned URL operations through a unified
`Storage` interface.

The extension ships production-ready local filesystem and AWS S3 backends. GCS and Azure are declared in the config types but not yet implemented.
## What It Registers
| Service | DI Key | Type |
|---|---|---|
| Storage manager | `storage.manager` | `*StorageManager` |
| Default backend | `storage.default` | `Storage` |
The manager is registered as an eager Vessel singleton. Backends are initialized during `Register`
with a 30-second timeout.
## Quick Start
```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/xraph/forge"
	"github.com/xraph/forge/extensions/storage"
)

func main() {
	app := forge.NewApp(forge.AppConfig{Name: "my-app", Version: "1.0.0"})

	// Local filesystem backend
	app.RegisterExtension(storage.NewExtension(
		storage.WithDefault("uploads"),
		storage.WithLocalBackend("uploads", "/var/data/uploads"),
	))

	// Or S3 backend
	app.RegisterExtension(storage.NewExtension(
		storage.WithDefault("files"),
		storage.WithS3Backend("files", "my-bucket", "us-east-1", "uploads/"),
	))

	ctx := context.Background()
	app.Start(ctx)
	defer app.Stop(ctx)

	mgr := storage.MustGetManager(app.Container())

	// Upload a file
	data := strings.NewReader("Hello, World!")
	mgr.Upload(ctx, "greeting.txt", data,
		storage.WithContentType("text/plain"),
		storage.WithMetadata(map[string]string{"author": "alice"}),
	)

	// Download
	reader, _ := mgr.Download(ctx, "greeting.txt")
	defer reader.Close()

	// Check existence
	exists, _ := mgr.Exists(ctx, "greeting.txt")
	fmt.Println("exists:", exists)

	// List objects
	objects, _ := mgr.List(ctx, "", storage.WithRecursive(true))
	for _, obj := range objects {
		fmt.Printf("  %s (%d bytes)\n", obj.Key, obj.Size)
	}

	// Get metadata
	meta, _ := mgr.Metadata(ctx, "greeting.txt")
	fmt.Println("content-type:", meta.ContentType)

	// Generate a presigned download URL (valid for 1 hour)
	url, _ := mgr.PresignDownload(ctx, "greeting.txt", 1*time.Hour)
	fmt.Println("download URL:", url)

	// Copy and move
	mgr.Copy(ctx, "greeting.txt", "backup/greeting.txt")
	mgr.Move(ctx, "greeting.txt", "archive/greeting.txt")

	// Delete
	mgr.Delete(ctx, "archive/greeting.txt")
}
```

## Using Storage in Your Services
Inject `*storage.StorageManager` for automatic DI resolution:

```go
import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/xraph/forge"
	"github.com/xraph/forge/extensions/storage"
)

type FileService struct {
	storage *storage.StorageManager
	logger  forge.Logger
}

func NewFileService(s *storage.StorageManager, logger forge.Logger) *FileService {
	return &FileService{storage: s, logger: logger}
}

func (fs *FileService) SaveAvatar(ctx context.Context, userID string, data io.Reader) error {
	key := fmt.Sprintf("avatars/%s.jpg", userID)
	return fs.storage.Upload(ctx, key, data,
		storage.WithContentType("image/jpeg"),
		storage.WithACL("public-read"),
	)
}

func (fs *FileService) GetAvatarURL(ctx context.Context, userID string) (string, error) {
	key := fmt.Sprintf("avatars/%s.jpg", userID)
	return fs.storage.PresignDownload(ctx, key, 24*time.Hour)
}
```

Register with Vessel:

```go
forge.ProvideConstructor(app.Container(), NewFileService)
```

## Working with Named Backends
Access specific backends by name for multi-storage architectures:

```go
mgr := storage.MustGetManager(app.Container())

// Use a specific backend
s3 := mgr.Backend("s3-production")
s3.Upload(ctx, "reports/q4.pdf", reportData)

// Default backend (configured via WithDefault)
def, _ := mgr.DefaultBackend()
def.Upload(ctx, "temp/scratch.txt", tmpData)
```

## Key Concepts
- **Named backends** -- configure multiple storage backends (e.g. `"uploads"` on S3, `"temp"` on local) and access them by name through the manager.
- **Default backend** -- one backend is designated as default. Manager methods like `Upload`, `Download`, and `Delete` delegate to it.
- **Resilience** -- every backend is automatically wrapped in `ResilientStorage`, which provides exponential backoff retries, a circuit breaker, and token-bucket rate limiting.
- **Enhanced local backend** -- the local filesystem driver supports file-level locking, atomic writes (temp + rename), buffer pooling, ETag caching, path validation, and metadata sidecar files.
- **Presigned URLs** -- generate time-limited upload and download URLs. The S3 backend uses native AWS presigning; the local backend uses HMAC-SHA256 tokens.
- **CDN support** -- when `EnableCDN` is configured, `GetURL()` returns CDN URLs instead of presigned URLs.
- **Health checks** -- the manager's health checker performs write-read-delete probes or list probes per backend.
## Important Runtime Notes
- Upload options support `ContentType`, `Metadata`, and `ACL` per object.
- List operations support pagination via `Limit`, `Marker`, and `Recursive` options.
- Path validation rejects keys with leading dots (except `.health`), trailing slashes, `..` traversal, and keys exceeding 1024 characters.
- Metadata is limited to 100 entries with restricted key character sets.
## Detailed Pages