Compare commits

..

6 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| pandeymangg | 77506a7a3f | updates cursor rules | 2025-08-26 17:23:21 +05:30 |
| pandeymangg | 2ba079da68 | feedback | 2025-08-26 16:08:30 +05:30 |
| pandeymangg | e1607def05 | updates cursor rules | 2025-08-26 15:26:05 +05:30 |
| pandeymangg | 9d7dac33be | fix: batch size | 2025-08-26 14:58:32 +05:30 |
| pandeymangg | b9d544f36f | fix: adds error handling | 2025-08-26 12:39:13 +05:30 |
| pandeymangg | 7abd0e9aed | adds pagination | 2025-08-25 15:30:53 +05:30 |
5 changed files with 3553 additions and 629 deletions
File diff suppressed because it is too large
+253 -556
@@ -1,587 +1,284 @@
# Storage Package Rules for Formbricks
## Package Purpose & Design Philosophy
The `@formbricks/storage` package provides a **type-safe, environment-agnostic S3 storage abstraction** for Formbricks: a standalone TypeScript library that handles file uploads, downloads, and deletions with comprehensive error handling, and works with any S3-compatible storage provider (AWS S3, MinIO, LocalStack, etc.).
### Key Design Decisions
1. **Result Type Pattern**: All operations return `Result<T, StorageError>` instead of throwing exceptions, enabling explicit error handling
2. **Environment-based Configuration**: Zero hardcoded values - all configuration comes from environment variables
3. **Graceful Degradation**: When S3 is unavailable, the package fails gracefully without crashing the application
4. **Minimal Dependencies**: Only includes the necessary AWS SDK packages, avoiding the bloated umbrella package
5. **Internal Implementation Hiding**: Only exports the public API, keeping client creation and constants internal
## Key Files
### Core Storage Infrastructure
- [packages/storage/src/service.ts](mdc:packages/storage/src/service.ts) - Main storage service with S3 operations
- [packages/storage/src/client.ts](mdc:packages/storage/src/client.ts) - S3 client creation and configuration
- [packages/storage/src/constants.ts](mdc:packages/storage/src/constants.ts) - Environment variable exports
- [packages/storage/src/types/error.ts](mdc:packages/storage/src/types/error.ts) - Result type system and error definitions
- [packages/storage/src/index.ts](mdc:packages/storage/src/index.ts) - Package exports
### Configuration Files
- [packages/storage/package.json](mdc:packages/storage/package.json) - Package configuration with AWS SDK dependencies
- [packages/storage/vite.config.ts](mdc:packages/storage/vite.config.ts) - Build configuration for library bundling
- [packages/storage/tsconfig.json](mdc:packages/storage/tsconfig.json) - TypeScript configuration
## Core Use Cases
### File Upload Flow
```typescript
// Generate presigned URL for secure client-side uploads
const uploadResult = await getSignedUploadUrl(
  "user-avatar.jpg",
  "image/jpeg",
  "users/123/avatars",
  5 * 1024 * 1024 // 5MB limit
);

if (uploadResult.ok) {
  // Client uploads directly to S3 using the signed URL
  const { signedUrl, presignedFields } = uploadResult.data;
}
```
### File Download Flow
```typescript
// Generate temporary download links for private files
const downloadResult = await getSignedDownloadUrl("users/123/avatars/user-avatar.jpg");

if (downloadResult.ok) {
  // Redirect the user to the temporary download URL (expires in 30 minutes)
  return redirect(downloadResult.data);
}
```
### Cleanup Operations
```typescript
// Single file deletion
await deleteFile("users/123/temp/upload.pdf");
// Bulk cleanup (handles pagination automatically)
await deleteFilesByPrefix("surveys/456/responses/"); // Deletes all response files
```
## Package Architecture
### Module Responsibilities
- **`service.ts`**: Core business logic - the four main operations
- **`client.ts`**: S3 client factory with environment validation
- **`constants.ts`**: Environment variable exports (internal use only)
- **`types/error.ts`**: Result type system and error definitions
- **`index.ts`**: Public API exports (consumers only see this)
### Error Handling Strategy
```typescript
// All functions use consistent error types
type StorageError = {
  code: "unknown" | "s3_client_error" | "s3_credentials_error" | "file_not_found_error";
};

// Consumers handle errors explicitly
const result = await deleteFilesByPrefix("path/");
if (!result.ok) {
  switch (result.error.code) {
    case "s3_credentials_error":
      // Handle missing/invalid credentials
      break;
    case "file_not_found_error":
      // Handle missing files
      break;
    default:
    // Handle unexpected errors
  }
}
```
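Because every error code is a string-literal union, consumers can also make the switch exhaustive at compile time. A minimal sketch (the HTTP status mapping here is illustrative, not part of the package):

```typescript
// Sketch: exhaustive handling of StorageError codes.
// The status-code mapping below is illustrative only.
type StorageErrorCode = "unknown" | "s3_client_error" | "s3_credentials_error" | "file_not_found_error";

const errorToStatus = (code: StorageErrorCode): number => {
  switch (code) {
    case "file_not_found_error":
      return 404;
    case "s3_credentials_error":
    case "s3_client_error":
    case "unknown":
      return 500;
    default: {
      // If a new code is added to the union without a case here,
      // this assignment becomes a compile-time error.
      const unhandled: never = code;
      return unhandled;
    }
  }
};
```

The `never` assignment turns a forgotten case into a type error instead of a silent fallthrough.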
## Environment Configuration
### Required Variables
```bash
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=formbricks-storage
```
### Optional Variables (for non-AWS providers)
```bash
S3_ENDPOINT_URL=http://localhost:9000 # MinIO/LocalStack
S3_FORCE_PATH_STYLE=1 # Required for MinIO
```
### Configuration Validation
- Validation happens at **client creation time**, not at startup
- Missing credentials result in `s3_credentials_error`
- Invalid credentials are detected during first operation
## Bulk Operations Design
### Why Pagination + Batching?
S3 has two key limitations:
1. **ListObjects** returns max 1000 objects per request → Use pagination
2. **DeleteObjects** accepts max 1000 objects per request → Use batching
### Implementation Pattern
```typescript
// 1. Paginate through all objects with the prefix
// (note: the first argument is a paginator config object, not the bare client)
const paginator = paginateListObjectsV2({ client: s3Client }, { Bucket, Prefix });
for await (const page of paginator) {
  // Collect keys from page.Contents
}

// 2. Batch deletions in groups of 1000
for (let i = 0; i < keys.length; i += 1000) {
  const batch = keys.slice(i, i + 1000);
  await s3Client.send(new DeleteObjectsCommand({ Bucket, Delete: { Objects: batch } }));
}

// 3. Handle partial failures gracefully:
// log errors but don't fail the entire operation
```
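The batching step lends itself to a small pure helper that is easy to unit-test in isolation (a sketch; `chunkKeys` is our name for illustration, not part of the package API):

```typescript
// Split a key list into batches of at most `size` entries,
// matching the DeleteObjects limit of 1000 objects per request.
const chunkKeys = <T>(items: T[], size = 1000): T[][] => {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
};
```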
## Integration Patterns
### In Formbricks Web App
```typescript
// Survey file cleanup when survey is deleted
await deleteFilesByPrefix(`surveys/${surveyId}/`);
// Response file cleanup when response is deleted
await deleteFilesByPrefix(`surveys/${surveyId}/responses/${responseId}/`);
// User avatar upload
const uploadUrl = await getSignedUploadUrl(file.name, file.type, `users/${userId}/avatars`, maxAvatarSize);
```
### Testing Strategy
- **Mock the entire `@aws-sdk/client-s3` module** - don't try to mock individual operations
- **Use `paginateListObjectsV2` mocks** with async generators for bulk operations
- **Test error scenarios** - missing credentials, network failures, partial deletions
- **Mock environment variables** consistently across tests
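The async-generator mock pattern from the bullets above can be sketched without any test framework; the page shape mirrors `ListObjectsV2CommandOutput`, and the names are illustrative:

```typescript
// A fake paginator that yields pages the way paginateListObjectsV2 does,
// including a page with no objects, which consumers must tolerate.
type Page = { Contents?: { Key?: string }[] };

async function* fakePaginator(): AsyncGenerator<Page> {
  yield { Contents: [{ Key: "a.txt" }, { Key: "b.txt" }] };
  yield { Contents: [{ Key: "c.txt" }] };
  yield { Contents: undefined }; // empty page
}

// Consume pages the same way the service does
const collectKeys = async (): Promise<string[]> => {
  const keys: string[] = [];
  for await (const page of fakePaginator()) {
    for (const obj of page.Contents ?? []) {
      if (obj.Key) keys.push(obj.Key);
    }
  }
  return keys;
};
```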
## Performance Considerations
### Presigned URL Expiration
- **Upload URLs**: 2 minutes (short for security)
- **Download URLs**: 30 minutes (balance between security and UX)
### Bulk Operation Optimization
- **Concurrent batch processing**: Delete batches in parallel using `Promise.all()`
- **Memory efficient pagination**: Process one page at a time, don't load all keys into memory
- **Partial failure handling**: Continue processing even if some batches fail
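One way to combine concurrent batch processing with partial-failure tolerance is `Promise.allSettled`; a sketch under the assumption that `deleteBatch` wraps the `DeleteObjectsCommand` call:

```typescript
// Delete all batches concurrently; count failures instead of aborting.
const deleteBatchesConcurrently = async (
  batches: string[][],
  deleteBatch: (batch: string[]) => Promise<void>
): Promise<{ failed: number }> => {
  const results = await Promise.allSettled(batches.map(async (batch) => deleteBatch(batch)));
  // Rejected batches are counted (and would be logged), but the
  // operation as a whole still completes.
  return { failed: results.filter((r) => r.status === "rejected").length };
};
```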
### Client Reuse
- **Single client instance** created at module level
- **Avoid recreating clients** for each operation
- **Fail fast** if client creation fails due to missing credentials
## Common Pitfalls & Solutions
### ❌ Don't expose internal details
```typescript
// Wrong - exposes implementation
export { S3_BUCKET_NAME, createS3Client } from "./internal";
```
### ✅ Keep implementation internal
```typescript
// Correct - only expose business operations
export { deleteFile, getSignedUploadUrl } from "./service";
```
### ❌ Don't use generic error handling
```typescript
// Wrong - loses error context
catch (error) {
  throw new Error("Something went wrong");
}
```
### ✅ Use specific error types
```typescript
// Correct - categorize errors appropriately
catch (error) {
  logger.error({ error }, "S3 operation failed");
  return err({ code: ErrorCode.S3ClientError });
}
```
### ❌ Don't hardcode configuration
```typescript
// Wrong - not environment-agnostic
const s3Client = new S3Client({
  region: "us-east-1",
  endpoint: "https://s3.amazonaws.com",
});
```
### ✅ Use environment variables
```typescript
// Correct - works with any S3-compatible provider
const s3Client = new S3Client({
  region: S3_REGION,
  endpoint: S3_ENDPOINT_URL,
  forcePathStyle: S3_FORCE_PATH_STYLE,
});
```
## Dependencies & Versioning
### AWS SDK Strategy
- **Use specific packages** (`@aws-sdk/client-s3`) not umbrella package (`aws-sdk`)
- **Pin exact versions** to avoid breaking changes
- **External dependencies**: All AWS SDK packages are externalized in build
### Package Structure
```
packages/storage/
├── src/
│   ├── client.ts      # S3 client creation and configuration
│   ├── service.ts     # Core storage operations (upload, download, delete)
│   ├── constants.ts   # Environment variable exports
│   ├── index.ts       # Package exports
│   ├── types/
│   │   └── error.ts   # Result type system and error definitions
│   └── *.test.ts      # Unit tests for each module
└── dist/              # Built library output
```
### Result Type System
All storage operations use a Result type pattern for comprehensive error handling:
```typescript
// ✅ Use Result<T, E> for all async operations
export const storageOperation = async (): Promise<
  Result<SuccessData, UnknownError | S3CredentialsError | S3ClientError>
> => {
  try {
    // Implementation
    return ok(data);
  } catch (error) {
    logger.error({ error }, "Operation failed");
    return err({
      code: "unknown",
      message: "Operation failed",
    });
  }
};

// ✅ Handle Results properly in calling code
const result = await storageOperation();
if (!result.ok) {
  // Handle error
  return result; // Propagate error
}
// Use result.data
```
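For reference, the `Result` type and the `ok`/`err` helpers used above can be sketched as follows (an assumed shape; the real definitions live in `types/error.ts`):

```typescript
// Discriminated union: `ok: true` narrows to data, `ok: false` to error.
type Result<T, E> = { ok: true; data: T } | { ok: false; error: E };

const ok = <T>(data: T): { ok: true; data: T } => ({ ok: true, data });
const err = <E>(error: E): { ok: false; error: E } => ({ ok: false, error });

// Narrowing through the `ok` discriminant:
const parsed: Result<number, { code: string }> = ok(42);
if (parsed.ok) {
  // parsed.data is typed as number here
  console.log(parsed.data);
}
```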
### Error Type Definitions
Always use the predefined error types:
```typescript
// ✅ Standard error types
interface UnknownError {
  code: "unknown";
  message: string;
}

interface S3CredentialsError {
  code: "s3_credentials_error";
  message: string;
}

interface S3ClientError {
  code: "s3_client_error";
  message: string;
}

// ✅ Use ok() and err() utility functions
return ok(successData);
return err({ code: "s3_client_error", message: "Failed to connect" });
```
## S3 Client Patterns
### Environment Configuration
All S3 configuration comes from environment variables:
```typescript
// ✅ Export environment variables from constants.ts
export const S3_ACCESS_KEY = process.env.S3_ACCESS_KEY;
export const S3_SECRET_KEY = process.env.S3_SECRET_KEY;
export const S3_REGION = process.env.S3_REGION;
export const S3_ENDPOINT_URL = process.env.S3_ENDPOINT_URL;
export const S3_FORCE_PATH_STYLE = process.env.S3_FORCE_PATH_STYLE === "1";
export const S3_BUCKET_NAME = process.env.S3_BUCKET_NAME;

// ✅ Validate in a function (e.g., inside createS3ClientFromEnv)
if (!S3_ACCESS_KEY || !S3_SECRET_KEY || !S3_BUCKET_NAME || !S3_REGION) {
  return err({
    code: "s3_credentials_error",
    message: "S3 credentials are not set",
  });
}
```
### Client Creation Pattern
Use the factory pattern for S3 client creation:
```typescript
// ✅ Factory function with Result type
export const createS3ClientFromEnv = (): Result<S3Client, S3CredentialsError | UnknownError> => {
  try {
    // Validation and client creation
    const s3ClientInstance = new S3Client({
      credentials: { accessKeyId: S3_ACCESS_KEY, secretAccessKey: S3_SECRET_KEY },
      region: S3_REGION,
      endpoint: S3_ENDPOINT_URL,
      forcePathStyle: S3_FORCE_PATH_STYLE,
    });
    return ok(s3ClientInstance);
  } catch (error) {
    logger.error({ error }, "Error creating S3 client");
    return err({ code: "unknown", message: "Error creating S3 client" });
  }
};

// ✅ Wrapper function for fallback handling
export const createS3Client = (): S3Client | undefined => {
  const result = createS3ClientFromEnv();
  return result.ok ? result.data : undefined;
};
```
## Service Function Patterns
### Function Signature Standards
All service functions follow consistent patterns:
```typescript
// ✅ Comprehensive TSDoc comments
/**
 * Get a signed URL for uploading a file to S3
 * @param fileName - The name of the file to upload
 * @param contentType - The content type of the file
 * @param filePath - The path to the file in S3
 * @param maxSize - Maximum file size allowed (optional)
 * @returns A Result containing the signed URL and presigned fields, or an error
 */
export const getSignedUploadUrl = async (
  fileName: string,
  contentType: string,
  filePath: string,
  maxSize?: number
): Promise<
  Result<
    {
      signedUrl: string;
      presignedFields: PresignedPostOptions["Fields"];
    },
    UnknownError | S3CredentialsError | S3ClientError
  >
> => {
  // Implementation
};
```
### Error Handling Patterns
Always validate inputs and handle S3 client errors:
```typescript
// ✅ Standard validation and error handling
export const storageFunction = async (param: string): Promise<Result<Data, Errors>> => {
  try {
    // Client validation
    if (!s3Client) {
      logger.error("S3 client is not available");
      return err({
        code: "s3_credentials_error",
        message: "S3 credentials are not set",
      });
    }

    // AWS SDK operations with error handling
    const command = new SomeS3Command({
      /* params */
    });
    const response = await s3Client.send(command);
    return ok(response);
  } catch (error) {
    logger.error({ error, param }, "S3 operation failed");

    // Categorize errors appropriately
    if (error instanceof Error && error.name === "CredentialsError") {
      return err({
        code: "s3_credentials_error",
        message: "Invalid S3 credentials",
      });
    }
    return err({
      code: "s3_client_error",
      message: "S3 operation failed", // Keep messages generic - see "Error Message Safety"
    });
  }
};
```
## Testing Standards
### Test File Organization
Each source file should have a corresponding test file:
```typescript
// ✅ Test file naming: [module].test.ts
// packages/storage/src/client.test.ts
// packages/storage/src/service.test.ts
// packages/storage/src/constants.test.ts

// ✅ Test structure
describe("Storage Client", () => {
  describe("createS3ClientFromEnv", () => {
    it("should create S3 client with valid credentials", () => {
      // Test implementation
    });

    it("should return error with missing credentials", () => {
      // Test implementation
    });
  });
});
```
### Mock Environment Variables
Always mock environment variables in tests:
```typescript
// ✅ Mock environment setup
beforeEach(() => {
vi.stubEnv("S3_ACCESS_KEY", "test-access-key");
vi.stubEnv("S3_SECRET_KEY", "test-secret-key");
vi.stubEnv("S3_REGION", "us-east-1");
vi.stubEnv("S3_BUCKET_NAME", "test-bucket");
});
afterEach(() => {
vi.unstubAllEnvs();
});
```
## Build Configuration
### Vite Library Setup
Configure vite for library bundling with external dependencies:
```typescript
// ✅ vite.config.ts pattern
export default defineConfig({
  build: {
    lib: {
      entry: resolve(__dirname, "src/index.ts"),
      name: "formbricksStorage",
      fileName: "index",
      formats: ["es", "cjs"], // Both ESM and CommonJS
    },
    rollupOptions: {
      // Externalize AWS SDK and Formbricks dependencies
      external: [
        "@aws-sdk/client-s3",
        "@aws-sdk/s3-presigned-post",
        "@aws-sdk/s3-request-presigner",
        "@formbricks/logger",
      ],
    },
  },
  test: {
    environment: "node",
    globals: true,
    coverage: {
      reporter: ["text", "json", "html", "lcov"],
      exclude: ["src/types/**"], // Exclude type definitions
      include: ["src/**/*.ts"],
    },
  },
  plugins: [dts({ rollupTypes: true })], // Generate type declarations
});
```
### Package.json Configuration
Essential package.json fields for the storage library:
```json
{
"exports": {
"import": "./dist/index.js",
"require": "./dist/index.cjs",
"types": "./dist/index.d.ts"
"require": "./dist/index.cjs"
},
"files": ["dist"],
"main": "./dist/index.js",
"name": "@formbricks/storage",
"private": true,
"scripts": {
"build": "tsc && vite build",
"test": "vitest run",
"test:coverage": "vitest run --coverage"
},
"type": "module",
"types": "./dist/index.d.ts"
}
```
## Function Reference
### `getSignedUploadUrl(fileName, contentType, filePath, maxSize?)`
**Purpose**: Generate presigned POST URL for secure client-side uploads
**Returns**: `{ signedUrl: string, presignedFields: Record<string, string> }`
**Use Case**: File uploads from browser without exposing S3 credentials
### `getSignedDownloadUrl(fileKey)`
**Purpose**: Generate temporary download URL for private files
**Returns**: `string` (temporary URL valid for 30 minutes)
**Use Case**: Serving private files without making S3 bucket public
### `deleteFile(fileKey)`
**Purpose**: Delete a single file from S3
**Returns**: `void` on success
**Use Case**: Remove uploaded files when user deletes content
### `deleteFilesByPrefix(prefix)`
**Purpose**: Bulk delete all files matching a prefix pattern
**Returns**: `void` on success (partial failures are logged but don't fail the operation)
**Use Case**: Clean up entire folders when surveys/users are deleted
## AWS SDK Integration
### Dependency Management
Use specific AWS SDK packages, not the umbrella package:
```json
// ✅ Specific AWS SDK dependencies
"dependencies": {
  "@aws-sdk/client-s3": "3.864.0",
  "@aws-sdk/s3-presigned-post": "3.864.0",
  "@aws-sdk/s3-request-presigner": "3.864.0"
}

// ❌ Don't use umbrella package
"dependencies": {
  "aws-sdk": "..." // Too large and unnecessary
}
```
### Command Patterns
Use the AWS SDK v3 command pattern:
```typescript
// ✅ AWS SDK v3 command pattern
import { DeleteObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Delete operation
const deleteCommand = new DeleteObjectCommand({
  Bucket: S3_BUCKET_NAME,
  Key: filePath,
});
await s3Client.send(deleteCommand);

// Presigned URL for download
const getCommand = new GetObjectCommand({
  Bucket: S3_BUCKET_NAME,
  Key: filePath,
});
const signedUrl = await getSignedUrl(s3Client, getCommand, { expiresIn: 3600 });

// Presigned POST for upload
const { url, fields } = await createPresignedPost(s3Client, {
  Bucket: S3_BUCKET_NAME,
  Key: filePath,
  Conditions: [
    ["content-length-range", 0, maxSize || DEFAULT_MAX_SIZE],
    ["eq", "$Content-Type", contentType],
  ],
  Expires: 3600,
});
```
## Export Patterns
### Selective Exports
Only export the main service functions:
```typescript
// ✅ packages/storage/src/index.ts
export { deleteFile, getSignedDownloadUrl, getSignedUploadUrl } from "./service";
// ❌ Don't export internal utilities
// export { createS3Client } from "./client"; // Internal only
// export { S3_BUCKET_NAME } from "./constants"; // Internal only
```
### Type Exports
Export types that consumers might need:
```typescript
// ✅ Export relevant types if needed by consumers
export type { Result, UnknownError, S3CredentialsError, S3ClientError } from "./types/error";
```
## Logging Standards
### Use Formbricks Logger
Always use the Formbricks logger for consistency:
```typescript
// ✅ Import and use Formbricks logger
import { logger } from "@formbricks/logger";

// Error logging with context
logger.error({ operation: "upload", fileName, error: error.message }, "S3 operation failed");

// Warning for recoverable issues
logger.warn({ reason: "credentials_error" }, "S3 client fallback used");
```
### Logging Levels
Use appropriate logging levels:
```typescript
// ✅ Error for failures that need attention
logger.error({ error }, "Critical S3 operation failed");

// ✅ Warn for recoverable issues
logger.warn("S3 credentials not set, client unavailable");

// ✅ Debug for development (avoid in production)
logger.debug({ operation, duration }, "S3 operation successful");

// ❌ Avoid info logging for routine operations
// logger.info("File uploaded successfully"); // Too verbose
```
## Common Pitfalls to Avoid
1. **Don't expose internal implementation details** - Keep client creation and constants internal
2. **Always validate S3 client availability** - Check for undefined client before operations
3. **Use specific error types** - Don't use generic Error objects
4. **Handle AWS SDK errors appropriately** - Categorize errors by type
5. **Don't hardcode S3 configuration** - Always use environment variables
6. **Include comprehensive TSDoc** - Document all parameters and return types
7. **Test error scenarios** - Test both success and failure cases
8. **Use Result types consistently** - Never throw exceptions in service functions
9. **Version pin AWS SDK dependencies** - Avoid breaking changes from updates
10. **Keep package.json focused** - Only include necessary dependencies and scripts
## Environment Variables
### Required Variables
The storage package requires these environment variables:
```bash
# ✅ Required S3 configuration
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket-name
# ✅ Optional S3 configuration
S3_ENDPOINT_URL=https://s3.amazonaws.com # For custom endpoints
S3_FORCE_PATH_STYLE=1 # For minio/localstack compatibility
```
### Validation Strategy
Always validate required environment variables before creating the client:
```typescript
// ✅ Fail fast on missing required variables
const requiredVars = [S3_ACCESS_KEY, S3_SECRET_KEY, S3_BUCKET_NAME, S3_REGION];
if (requiredVars.some((value) => !value)) {
  return err({
    code: "s3_credentials_error",
    message: "Required S3 environment variables are not set",
  });
}
```
## Performance Considerations
### S3 Client Reuse
Create S3 client once and reuse:
```typescript
// ✅ Single client instance
const s3Client = createS3Client(); // Created once at module level

// ✅ Reuse it in all operations
export const uploadFile = async () => {
  if (!s3Client) return err(/* credentials error */);
  // Use s3Client
};

// ❌ Don't create a new client for each operation
export const uploadFileWasteful = async () => {
  const client = createS3Client(); // Inefficient
};
```
### Presigned URL Expiration
Use appropriate expiration times:
```typescript
// ✅ Reasonable expiration times (short upload window, moderate download window)
const UPLOAD_URL_EXPIRY = 60 * 2; // 2 minutes for uploads
const DOWNLOAD_URL_EXPIRY = 60 * 30; // 30 minutes for downloads

// ❌ Don't use excessively long expiration
const LONG_EXPIRY = 86400 * 7; // 7 days - security risk
```
### Error Message Safety
Don't expose sensitive information in error messages:
```typescript
// ✅ Safe error messages
return err({
  code: "s3_client_error",
  message: "File operation failed", // Generic message
});

// ❌ Don't expose internal details
return err({
  code: "s3_client_error",
  message: `AWS Error: ${awsError.message}`, // May contain sensitive info
});
```
## Integration Guidelines
### Usage in Other Packages
When using the storage package in other Formbricks packages:
```typescript
// ✅ Import specific functions
import { deleteFile, getSignedUploadUrl } from "@formbricks/storage";
// ✅ Handle Result types properly
const uploadResult = await getSignedUploadUrl(fileName, contentType, filePath);
if (!uploadResult.ok) {
  // Handle error appropriately
  throw new Error(uploadResult.error.message);
}
// Use uploadResult.data
const { signedUrl, presignedFields } = uploadResult.data;
```
### Dependency Declaration
Add storage package as workspace dependency:
```json
// ✅ In dependent package's package.json
"dependencies": {
"@formbricks/storage": "workspace:*"
}
```
Remember: The storage package is designed to be a self-contained, reusable, **infrastructure-agnostic** library that provides type-safe S3 operations with comprehensive error handling. Follow these patterns to maintain consistency and reliability, whether you're using AWS S3, MinIO for local development, or any other S3-compatible storage provider.
+368 -54
@@ -1,21 +1,25 @@
/* eslint-disable @typescript-eslint/require-await -- used for mocking*/
import {
DeleteObjectCommand,
DeleteObjectsCommand,
GetObjectCommand,
HeadObjectCommand,
ListObjectsCommand,
type ListObjectsV2CommandOutput,
paginateListObjectsV2,
} from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { beforeEach, describe, expect, test, vi } from "vitest";
type Paginator<T> = AsyncGenerator<T, undefined, unknown>;
// Mock AWS SDK modules
vi.mock("@aws-sdk/client-s3", () => ({
DeleteObjectCommand: vi.fn(),
DeleteObjectsCommand: vi.fn(),
GetObjectCommand: vi.fn(),
HeadObjectCommand: vi.fn(),
ListObjectsCommand: vi.fn(),
paginateListObjectsV2: vi.fn(),
}));
vi.mock("@aws-sdk/s3-presigned-post", () => ({
@@ -37,7 +41,7 @@ const mockDeleteObjectCommand = vi.mocked(DeleteObjectCommand);
const mockDeleteObjectsCommand = vi.mocked(DeleteObjectsCommand);
const mockGetObjectCommand = vi.mocked(GetObjectCommand);
const mockHeadObjectCommand = vi.mocked(HeadObjectCommand);
const mockListObjectsCommand = vi.mocked(ListObjectsCommand);
const mockPaginateListObjectsV2 = vi.mocked(paginateListObjectsV2);
const mockCreatePresignedPost = vi.mocked(createPresignedPost);
const mockGetSignedUrl = vi.mocked(getSignedUrl);
@@ -585,30 +589,39 @@ describe("service.ts", () => {
}));
const mockS3Client = {
send: vi
.fn()
.mockResolvedValueOnce({
Contents: [
{ Key: "uploads/images/file1.jpg" },
{ Key: "uploads/images/file2.png" },
{ Key: "uploads/images/subfolder/file3.gif" },
],
})
.mockResolvedValueOnce({}), // DeleteObjectsCommand response
send: vi.fn().mockResolvedValueOnce({}), // DeleteObjectsCommand response
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return pages with files
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: [
{ Key: "uploads/images/file1.jpg" },
{ Key: "uploads/images/file2.png" },
{ Key: "uploads/images/subfolder/file3.gif" },
],
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("uploads/images/");
expect(mockListObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/images/",
});
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/images/",
}
);
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
@@ -621,7 +634,7 @@ describe("service.ts", () => {
},
});
expect(mockS3Client.send).toHaveBeenCalledTimes(2);
expect(mockS3Client.send).toHaveBeenCalledTimes(1);
expect(result.ok).toBe(true);
@@ -636,27 +649,39 @@ describe("service.ts", () => {
}));
const mockS3Client = {
send: vi.fn().mockResolvedValueOnce({
Contents: undefined, // No files found
}),
send: vi.fn(),
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return empty pages
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: undefined, // No files found
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("uploads/non-existent/");
expect(mockListObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/non-existent/",
});
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/non-existent/",
}
);
// Should not call DeleteObjectsCommand when no files found
expect(mockDeleteObjectsCommand).not.toHaveBeenCalled();
expect(mockS3Client.send).toHaveBeenCalledTimes(1);
expect(mockS3Client.send).not.toHaveBeenCalled();
expect(result.ok).toBe(true);
@@ -671,27 +696,39 @@ describe("service.ts", () => {
}));
const mockS3Client = {
send: vi.fn().mockResolvedValueOnce({
Contents: [], // Empty array
}),
send: vi.fn(),
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return empty array
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: [], // Empty array
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("uploads/empty/");
expect(mockListObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/empty/",
});
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/empty/",
}
);
// Should not call DeleteObjectsCommand when Contents is empty
expect(mockDeleteObjectsCommand).not.toHaveBeenCalled();
expect(mockS3Client.send).toHaveBeenCalledTimes(1);
expect(mockS3Client.send).not.toHaveBeenCalled();
expect(result.ok).toBe(true);
@@ -706,26 +743,35 @@ describe("service.ts", () => {
}));
const mockS3Client = {
send: vi
.fn()
.mockResolvedValueOnce({
Contents: [{ Key: "surveys/123/responses/response1.json" }],
})
.mockResolvedValueOnce({}),
send: vi.fn().mockResolvedValueOnce({}), // DeleteObjectsCommand response
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return a single file
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: [{ Key: "surveys/123/responses/response1.json" }],
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("surveys/123/responses/");
expect(mockListObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "surveys/123/responses/",
});
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "surveys/123/responses/",
}
);
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
@@ -763,26 +809,35 @@ describe("service.ts", () => {
vi.doMock("./constants", () => mockConstants);
const mockS3Client = {
send: vi
.fn()
.mockResolvedValueOnce({
Contents: [{ Key: "test-file.txt" }],
})
.mockRejectedValueOnce(new Error("AWS Delete Error")), // DeleteObjectsCommand fails
send: vi.fn().mockRejectedValueOnce(new Error("AWS Delete Error")), // DeleteObjectsCommand fails
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return files
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: [{ Key: "test-file.txt" }],
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("uploads/test/");
expect(mockListObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/test/",
});
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/test/",
}
);
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
@@ -797,5 +852,264 @@ describe("service.ts", () => {
expect(result.error.code).toBe("unknown");
}
});
test("should handle pagination with multiple pages", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
const mockS3Client = {
send: vi.fn().mockResolvedValueOnce({}), // DeleteObjectsCommand response
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return multiple pages
const mockPaginator = {
async *[Symbol.asyncIterator]() {
// First page
yield {
Contents: [{ Key: "page1/file1.jpg" }, { Key: "page1/file2.png" }],
};
// Second page
yield {
Contents: [{ Key: "page2/file3.gif" }, { Key: "page2/file4.pdf" }],
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("uploads/paginated/");
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "uploads/paginated/",
}
);
// Should delete all objects from both pages
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Delete: {
Objects: [
{ Key: "page1/file1.jpg" },
{ Key: "page1/file2.png" },
{ Key: "page2/file3.gif" },
{ Key: "page2/file4.pdf" },
],
},
});
expect(result.ok).toBe(true);
});
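The multi-page test above drives the page-accumulation loop in `deleteFilesByPrefix`: keys from every page are gathered into a single list before any delete is issued. A self-contained sketch of that loop (the `listPages` generator below is a hypothetical stand-in for the AWS SDK paginator, not part of the package):

```typescript
// Minimal model of the page-accumulation loop: each page's Contents are
// flattened into one key list, skipping entries without a Key.
async function* listPages(): AsyncGenerator<{ Contents?: { Key?: string }[] }> {
  yield { Contents: [{ Key: "page1/file1.jpg" }, { Key: "page1/file2.png" }] };
  yield { Contents: [{ Key: "page2/file3.gif" }, { Key: "page2/file4.pdf" }] };
}

const collectKeys = async (): Promise<{ Key: string }[]> => {
  const keys: { Key: string }[] = [];
  for await (const page of listPages()) {
    keys.push(...(page.Contents?.flatMap((o) => (o.Key ? [{ Key: o.Key }] : [])) ?? []));
  }
  return keys;
};

const keys = await collectKeys(); // top-level await (ESM)
console.log(keys.length); // 4
```

The real implementation swaps `listPages` for `paginateListObjectsV2`, which hides the `ContinuationToken` handling behind the same async-iterator protocol.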
test("should handle batching when more than 1000 objects", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
// Create 1500 files to test batching
const files = Array.from({ length: 1500 }, (_, i) => ({
Key: `batch/file${(i + 1).toString()}.txt`,
}));
const mockS3Client = {
send: vi
.fn()
.mockResolvedValueOnce({}) // First batch delete
.mockResolvedValueOnce({}), // Second batch delete
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return large file set
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: files,
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("batch/");
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "batch/",
}
);
// Should call DeleteObjectsCommand twice for batching
expect(mockDeleteObjectsCommand).toHaveBeenCalledTimes(2);
// First batch: 1000 objects
expect(mockDeleteObjectsCommand).toHaveBeenNthCalledWith(1, {
Bucket: mockConstants.S3_BUCKET_NAME,
Delete: {
Objects: files.slice(0, 1000),
},
});
// Second batch: remaining 500 objects
expect(mockDeleteObjectsCommand).toHaveBeenNthCalledWith(2, {
Bucket: mockConstants.S3_BUCKET_NAME,
Delete: {
Objects: files.slice(1000, 1500),
},
});
expect(result.ok).toBe(true);
});
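The two-batch expectation above reflects S3's DeleteObjects limit of 1000 keys per request. The chunking rule can be sketched on its own (the `chunk` helper is hypothetical, not exported by the package):

```typescript
// Hypothetical helper illustrating the batching rule the test exercises:
// S3's DeleteObjects API accepts at most 1000 keys per request, so a key
// list is split into fixed-size slices before issuing deletes.
const chunk = <T>(items: T[], size: number): T[][] => {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
};

const keys = Array.from({ length: 1500 }, (_, i) => ({
  Key: `batch/file${(i + 1).toString()}.txt`,
}));
const batches = chunk(keys, 1000);
console.log(batches.map((b) => b.length)); // [ 1000, 500 ]
```

Issuing the per-batch deletes with `Promise.all` (as the service does) keeps the batches concurrent while still surfacing every response for partial-failure accounting.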
test("should handle empty prefix", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("");
expect(result.ok).toBe(false);
if (!result.ok) {
expect(result.error.code).toBe("invalid_input");
}
});
test("should handle root prefix", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("/");
expect(result.ok).toBe(false);
if (!result.ok) {
expect(result.error.code).toBe("invalid_input");
}
});
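Both guard tests above hinge on the same normalization rule: an empty or root ("/") prefix would match every object in the bucket, so it is rejected before any listing happens. A standalone sketch of that guard, with simplified `Result`/`ok`/`err` shapes assumed to mirror the package's types:

```typescript
// Simplified Result shapes (assumed to match @formbricks/storage's types).
type Result<T, E> = { ok: true; data: T } | { ok: false; error: E };
const ok = <T>(data: T): Result<T, never> => ({ ok: true, data });
const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Reject prefixes that would effectively delete the whole bucket.
const validatePrefix = (prefix: string): Result<string, { code: string }> => {
  const normalized = prefix.trim();
  if (!normalized || normalized === "/") {
    return err({ code: "invalid_input" });
  }
  return ok(normalized);
};

console.log(validatePrefix("/").ok); // false
console.log(validatePrefix("uploads/").ok); // true
```

Returning a typed error instead of throwing keeps the caller's handling explicit, matching the `result.ok` / `result.error.code` checks in the tests.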
test("should handle pagination with empty pages", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
const mockS3Client = {
send: vi.fn().mockResolvedValueOnce({}), // DeleteObjectsCommand response
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return mixed pages (one with files, one empty)
const mockPaginator = {
async *[Symbol.asyncIterator]() {
// First page with files
yield {
Contents: [{ Key: "file1.txt" }],
};
// Second page empty
yield {
Contents: [], // Empty page
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("mixed/");
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "mixed/",
}
);
// Should only delete the file from first page
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Delete: {
Objects: [{ Key: "file1.txt" }],
},
});
expect(result.ok).toBe(true);
});
test("should handle files with undefined Key gracefully", async () => {
vi.doMock("./constants", () => ({
...mockConstants,
}));
const mockS3Client = {
send: vi.fn().mockResolvedValueOnce({}), // DeleteObjectsCommand response
};
vi.doMock("./client", () => ({
createS3Client: vi.fn(() => mockS3Client),
}));
// Mock paginator to return mixed valid and invalid keys
const mockPaginator = {
async *[Symbol.asyncIterator]() {
yield {
Contents: [
{ Key: "valid-file.txt" },
{ Key: undefined }, // Invalid key
{ Key: "another-valid-file.pdf" },
{}, // Object without Key property
],
};
},
} as unknown as Paginator<ListObjectsV2CommandOutput>;
mockPaginateListObjectsV2.mockReturnValueOnce(mockPaginator);
const { deleteFilesByPrefix } = await import("./service");
const result = await deleteFilesByPrefix("mixed-keys/");
expect(mockPaginateListObjectsV2).toHaveBeenCalledWith(
{ client: mockS3Client },
{
Bucket: mockConstants.S3_BUCKET_NAME,
Prefix: "mixed-keys/",
}
);
// Should only delete objects with valid keys
expect(mockDeleteObjectsCommand).toHaveBeenCalledWith({
Bucket: mockConstants.S3_BUCKET_NAME,
Delete: {
Objects: [{ Key: "valid-file.txt" }, { Key: "another-valid-file.pdf" }],
},
});
expect(result.ok).toBe(true);
});
});
});
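The undefined-key test above exercises a small `flatMap`-based filter: listing responses can contain entries whose `Key` is missing, and those must never reach the DeleteObjects payload. Shown in isolation (the `Contents` shape is assumed to match the S3 ListObjectsV2 output):

```typescript
// Objects without a usable Key are dropped before building the
// DeleteObjects payload, mirroring the test's expectation.
const contents: { Key?: string }[] = [
  { Key: "valid-file.txt" },
  { Key: undefined },
  { Key: "another-valid-file.pdf" },
  {}, // object without a Key property
];

const objects = contents.flatMap((obj) => (obj.Key ? [{ Key: obj.Key }] : []));
console.log(objects.length); // 2
```

Using `flatMap` with a singleton-or-empty array filters and maps in one pass, and keeps the result typed as `{ Key: string }[]` without a separate type guard.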
@@ -1,9 +1,10 @@
import {
DeleteObjectCommand,
DeleteObjectsCommand,
type DeleteObjectsCommandOutput,
GetObjectCommand,
HeadObjectCommand,
paginateListObjectsV2,
} from "@aws-sdk/client-s3";
import {
type PresignedPost,
@@ -188,33 +189,79 @@ export const deleteFilesByPrefix = async (prefix: string): Promise<Result<void,
});
}
const normalizedPrefix = prefix.trim();
if (!normalizedPrefix || normalizedPrefix === "/") {
logger.error({ prefix }, "Refusing to delete files with an empty or root prefix");
return err({
code: ErrorCode.InvalidInput,
});
}
const keys: { Key: string }[] = [];
const paginator = paginateListObjectsV2(
{ client: s3Client },
{
Bucket: S3_BUCKET_NAME,
Prefix: normalizedPrefix,
}
);
// Collect keys from every page, skipping objects without a Key
for await (const page of paginator) {
const pageKeys = page.Contents?.flatMap((obj) => (obj.Key ? [{ Key: obj.Key }] : [])) ?? [];
keys.push(...pageKeys);
}
if (keys.length === 0) {
return ok(undefined);
}
// S3's DeleteObjects API accepts at most 1000 keys per request, so delete in batches
const deletionPromises: Promise<DeleteObjectsCommandOutput>[] = [];
for (let i = 0; i < keys.length; i += 1000) {
const batch = keys.slice(i, i + 1000);
const deleteObjectsCommand = new DeleteObjectsCommand({
Bucket: S3_BUCKET_NAME,
Delete: {
Objects: batch,
},
});
deletionPromises.push(s3Client.send(deleteObjectsCommand));
}
const results = await Promise.all(deletionPromises);
// Check for partial failures and log them
let totalErrors = 0;
let totalDeleted = 0;
for (const result of results) {
if (result.Deleted) {
totalDeleted += result.Deleted.length;
logger.debug({ count: result.Deleted.length }, "Successfully deleted objects in batch");
}
if (result.Errors && result.Errors.length > 0) {
totalErrors += result.Errors.length;
logger.error(
{
errors: result.Errors.map((e) => ({
key: e.Key,
code: e.Code,
message: e.Message,
})),
},
"Some objects failed to delete"
);
}
}
if (totalErrors > 0) {
logger.warn({ totalErrors, totalDeleted }, "Bulk delete completed with some failures");
}
return ok(undefined);
} catch (error) {
@@ -24,6 +24,7 @@ export enum ErrorCode {
S3CredentialsError = "s3_credentials_error",
S3ClientError = "s3_client_error",
FileNotFoundError = "file_not_found_error",
InvalidInput = "invalid_input",
}
export interface StorageError {