Compare commits


1 Commit

Author SHA1 Message Date
Matti Nannt
623e82ff4d fix: Add rate limit to forgot password route
Add rate limiting to the `/auth/forgot-password` route; a sketch of the wiring follows the list below.

* Import `loginLimiter` in `apps/web/app/api/v1/users/forgot-password/route.ts` and apply it to the `POST` function.
* Add `forgotPasswordLimiter` in `apps/web/app/middleware/bucket.ts` with the same limits as `loginLimiter`.
* Add `forgotPasswordRoute` function in `apps/web/app/middleware/endpointValidator.ts` to identify the `/auth/forgot-password` route.
* Update `apps/web/middleware.ts` to include `forgotPasswordLimiter` and `forgotPasswordRoute` in the rate limiting logic.
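A hypothetical, self-contained sketch of that wiring (the limiter window, attempt count, and helper shapes are assumptions; the real implementation lives in the files listed above):

```typescript
import { NextRequest, NextResponse } from "next/server";

// bucket.ts (sketch): assumed to mirror loginLimiter's limits; the actual
// window and attempt count are not shown in this PR description.
const hits = new Map<string, { count: number; resetAt: number }>();
export function forgotPasswordLimiter(key: string): boolean {
  const now = Date.now();
  const entry = hits.get(key);
  if (!entry || now > entry.resetAt) {
    hits.set(key, { count: 1, resetAt: now + 15 * 60 * 1000 }); // assumed 15 min window
    return true;
  }
  entry.count += 1;
  return entry.count <= 5; // assumed 5 attempts per window
}

// endpointValidator.ts (sketch): identify the forgot-password route
export function forgotPasswordRoute(pathname: string): boolean {
  return pathname.endsWith("/auth/forgot-password");
}

// middleware.ts (sketch): reject over-limit requests before they reach the handler
export function middleware(request: NextRequest) {
  const ip = request.headers.get("x-forwarded-for") ?? "unknown";
  if (forgotPasswordRoute(request.nextUrl.pathname) && !forgotPasswordLimiter(ip)) {
    return NextResponse.json({ error: "Too many requests" }, { status: 429 });
  }
  return NextResponse.next();
}
```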

---

For more details, open the [Copilot Workspace session](https://copilot-workspace.githubnext.com/formbricks/formbricks?shareId=XXXX-XXXX-XXXX-XXXX).
2024-10-28 21:36:30 +01:00
4309 changed files with 149168 additions and 370261 deletions

View File

@@ -1,71 +0,0 @@
# Skill Filter
Automatically filters Vercel React best practices to reduce AI token costs while keeping high-impact performance patterns.
## Quick Start
```bash
# Fetch and filter (first time)
pnpm filter-skills --fetch
# Re-filter after config changes
pnpm filter-skills
```
**Result:** ~50% reduction in skill files (keeps CRITICAL/HIGH/MEDIUM priorities, removes LOW priority rules)
## Configuration
Edit `.agent/skill-filter-config.json` (the path the filter script reads from):
```jsonc
{
"featureFlags": {
"keepCriticalPriority": true, // async-*, bundle-*
"keepHighPriority": true, // server-*
"keepMediumPriority": true, // rerender-*
"keepLowPriority": false, // js-*, rendering-*, advanced-*
"removeJsOptimizations": true,
"removeRenderingOptimizations": true,
"removeAdvancedPatterns": true
}
}
```
**Toggle LOW priority rules:** Set `keepLowPriority: true`
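The `technologyDetection` map ships empty by default; a hypothetical entry, following the shape of the `FilterConfig` interface in the filter script, would look like:

```jsonc
{
  "technologyDetection": {
    "swr": {
      "packageNames": ["swr"],                 // detected via package.json dependencies
      "codePatterns": ["useSWR\\("],           // or via a ripgrep match in the codebase
      "relatedRules": ["client-swr-dedup.md"]  // rules kept only when swr is detected
    }
  }
}
```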
## What It Does
1. Downloads latest skills from GitHub (with `--fetch`)
2. Filters based on priority and used technologies
3. Archives unused rules to `.archived/` (not tracked in git)
4. Formats markdown with Prettier to match project style
## Why This Works
- **AI Skills = Proactive:** Guide developers to write correct code from the start
- **Linting = Reactive:** Catch mistakes after code is written
- **Together:** AI prevents issues, linting catches what slips through
Token costs are an investment in preventing technical debt rather than paying to fix it later.
## Restore Archived Rules
```bash
mv .agent/skills/react-best-practices/.archived/rule-name.md \
.agent/skills/react-best-practices/rules/
```
Then re-run: `pnpm filter-skills`
## Commands
```bash
pnpm filter-skills # Filter with current config
pnpm filter-skills:dry-run # Preview changes
pnpm filter-skills --fetch # Fetch latest + filter
```
---
**Source:** [vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)

View File

@@ -1,439 +0,0 @@
#!/usr/bin/env tsx
import * as fs from 'node:fs';
import * as path from 'node:path';
import * as os from 'node:os';
import { execSync } from 'node:child_process';
interface FilterConfig {
featureFlags: {
keepCriticalPriority: boolean;
keepHighPriority: boolean;
keepMediumPriority: boolean;
keepLowPriority: boolean;
removeJsOptimizations: boolean;
removeRenderingOptimizations: boolean;
removeAdvancedPatterns: boolean;
};
priorities: {
keep: string[];
conditionalKeep: string[];
remove: string[];
};
technologyDetection: Record<string, {
packageNames: string[];
codePatterns: string[];
relatedRules: string[];
}>;
alwaysKeep: string[];
alwaysRemove: string[];
}
interface FilterReport {
kept: { file: string; reason: string }[];
archived: { file: string; reason: string }[];
technologiesDetected: string[];
summary: {
totalRules: number;
keptRules: number;
archivedRules: number;
reductionPercent: number;
};
}
const PROJECT_ROOT = path.resolve(__dirname, '../..');
const SKILLS_DIR = path.join(PROJECT_ROOT, '.agent/skills/react-best-practices');
const RULES_DIR = path.join(SKILLS_DIR, 'rules');
const ARCHIVE_DIR = path.join(SKILLS_DIR, '.archived');
const CONFIG_PATH = path.join(PROJECT_ROOT, '.agent/skill-filter-config.json');
const PACKAGE_JSON_PATH = path.join(PROJECT_ROOT, 'package.json');
// Parse command line arguments
const args = new Set(process.argv.slice(2));
const isDryRun = args.has('--dry-run');
function loadConfig(): FilterConfig {
const configContent = fs.readFileSync(CONFIG_PATH, 'utf-8');
return JSON.parse(configContent);
}
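// Lightweight summary of the loaded config; malformed JSON would already have
// thrown inside loadConfig(), so no deep shape validation happens here.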
function validateConfig(config: FilterConfig): void {
console.log('✓ Configuration is valid');
console.log(` - ${config.alwaysKeep.length} rules marked as always keep`);
console.log(` - ${config.alwaysRemove.length} rules marked as always remove`);
console.log(` - ${Object.keys(config.technologyDetection).length} technologies configured for detection`);
}
function hasRipgrep(): boolean {
try {
execSync('rg --version', { stdio: 'ignore' });
return true;
} catch {
return false;
}
}
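// A technology counts as detected when any of its packages appears in
// package.json, or (when ripgrep is available) any of its code patterns
// matches somewhere in the repository's TS/JS sources.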
function detectTechnologies(config: FilterConfig): Set<string> {
const detected = new Set<string>();
// Check package.json dependencies
const packageJson = JSON.parse(fs.readFileSync(PACKAGE_JSON_PATH, 'utf-8'));
const allDeps = {
...packageJson.dependencies,
...packageJson.devDependencies,
};
const hasRg = hasRipgrep();
if (!hasRg && Object.keys(config.technologyDetection).some(t => config.technologyDetection[t].codePatterns.length > 0)) {
console.warn('⚠️ Ripgrep (rg) not found. Code pattern detection will be skipped.');
}
for (const [techName, techConfig] of Object.entries(config.technologyDetection)) {
// Check for package dependencies
const hasPackage = techConfig.packageNames.some(pkg => allDeps[pkg]);
if (hasPackage) {
detected.add(techName);
continue;
}
// Check for code patterns using ripgrep
if (hasRg && techConfig.codePatterns.length > 0) {
for (const pattern of techConfig.codePatterns) {
try {
          // Use ripgrep to search for patterns in TypeScript/JavaScript files.
          // To embed a single quote inside a single-quoted shell argument, close
          // the quote, add an escaped quote, and reopen it: the '\'' idiom.
          execSync(
            `rg -q '${pattern.replace(/'/g, `'\\''`)}' -g '*.ts' -g '*.tsx' -g '*.js' -g '*.jsx' "${PROJECT_ROOT}"`,
{ stdio: 'ignore' }
);
detected.add(techName);
break;
} catch {
// Pattern not found or error, continue checking
}
}
}
}
return detected;
}
function getRulePriority(filename: string): string | null {
const content = fs.readFileSync(path.join(RULES_DIR, filename), 'utf-8');
const match = content.match(/impact:\s*([A-Z-]+)/);
return match ? match[1] : null;
}
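// Decision precedence: filename-prefix feature flags → always keep/remove
// lists → technology detection → frontmatter priority → keep by default.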
function shouldKeepRule(
filename: string,
config: FilterConfig,
detectedTechnologies: Set<string>
): { keep: boolean; reason: string } {
const flags = config.featureFlags;
// 1. Check feature flag naming conventions (hardcoded optimization flags)
if (flags.removeJsOptimizations && filename.startsWith('js-')) {
return { keep: false, reason: 'Feature flag: removeJsOptimizations' };
}
if (flags.removeRenderingOptimizations && filename.startsWith('rendering-')) {
return { keep: false, reason: 'Feature flag: removeRenderingOptimizations' };
}
if (flags.removeAdvancedPatterns && filename.startsWith('advanced-')) {
return { keep: false, reason: 'Feature flag: removeAdvancedPatterns' };
}
// 2. Check always keep/remove lists
if (config.alwaysKeep.includes(filename)) {
return { keep: true, reason: 'Always keep (critical pattern)' };
}
if (config.alwaysRemove.includes(filename)) {
return { keep: false, reason: 'Always remove (low priority optimization)' };
}
// 3. Check technology detection
for (const [techName, techConfig] of Object.entries(config.technologyDetection)) {
if (techConfig.relatedRules.includes(filename)) {
if (detectedTechnologies.has(techName)) {
return { keep: true, reason: `Technology detected: ${techName}` };
} else {
return { keep: false, reason: `Technology not used: ${techName}` };
}
}
}
// 4. Check priority
const priority = getRulePriority(filename);
if (priority) {
// Feature flag overrides for priorities
if (priority === 'CRITICAL' && !flags.keepCriticalPriority) return { keep: false, reason: 'Feature flag: keepCriticalPriority disabled' };
if (priority === 'HIGH' && !flags.keepHighPriority) return { keep: false, reason: 'Feature flag: keepHighPriority disabled' };
if ((priority === 'MEDIUM' || priority === 'MEDIUM-HIGH') && !flags.keepMediumPriority) return { keep: false, reason: 'Feature flag: keepMediumPriority disabled' };
if ((priority === 'LOW' || priority === 'LOW-MEDIUM') && flags.keepLowPriority) return { keep: true, reason: 'Feature flag: keepLowPriority enabled' };
// Standard priority lists
if (config.priorities.keep.includes(priority)) return { keep: true, reason: `Priority: ${priority}` };
if (config.priorities.conditionalKeep.includes(priority)) return { keep: true, reason: `Priority: ${priority} (conditional keep)` };
if (config.priorities.remove.includes(priority)) return { keep: false, reason: `Priority: ${priority}` };
}
// Default
return { keep: true, reason: 'Default (no matching rule)' };
}
function filterRules(config: FilterConfig, detectedTechnologies: Set<string>): FilterReport {
const report: FilterReport = {
kept: [],
archived: [],
technologiesDetected: Array.from(detectedTechnologies),
summary: {
totalRules: 0,
keptRules: 0,
archivedRules: 0,
reductionPercent: 0,
},
};
const ruleFiles = fs.readdirSync(RULES_DIR).filter(f => f.endsWith('.md'));
report.summary.totalRules = ruleFiles.length;
for (const filename of ruleFiles) {
const decision = shouldKeepRule(filename, config, detectedTechnologies);
if (decision.keep) {
report.kept.push({ file: filename, reason: decision.reason });
report.summary.keptRules++;
} else {
report.archived.push({ file: filename, reason: decision.reason });
report.summary.archivedRules++;
if (!isDryRun) {
// Move to archive
const sourcePath = path.join(RULES_DIR, filename);
const archivePath = path.join(ARCHIVE_DIR, filename);
fs.mkdirSync(ARCHIVE_DIR, { recursive: true });
fs.renameSync(sourcePath, archivePath);
}
}
}
report.summary.reductionPercent = Math.round(
report.summary.totalRules > 0
? (report.summary.archivedRules / report.summary.totalRules) * 100
: 0
);
return report;
}
function printReport(report: FilterReport): void {
console.log('\n' + '='.repeat(80));
console.log('SKILL FILTER REPORT');
console.log('='.repeat(80) + '\n');
console.log('📊 SUMMARY');
console.log(` Total rules: ${report.summary.totalRules}`);
console.log(` Kept: ${report.summary.keptRules} (${100 - report.summary.reductionPercent}%)`);
console.log(` Archived: ${report.summary.archivedRules} (${report.summary.reductionPercent}%)`);
console.log('');
console.log('🔍 TECHNOLOGIES DETECTED');
if (report.technologiesDetected.length > 0) {
report.technologiesDetected.forEach(tech => console.log(`${tech}`));
} else {
console.log(' (none detected)');
}
console.log('');
console.log('✅ KEPT RULES (' + report.kept.length + ')');
report.kept.forEach(({ file, reason }) => {
console.log(`${file.padEnd(45)}${reason}`);
});
console.log('');
console.log('📦 ARCHIVED RULES (' + report.archived.length + ')');
report.archived.forEach(({ file, reason }) => {
console.log(`${file.padEnd(45)}${reason}`);
});
console.log('');
if (isDryRun) {
console.log('🔍 DRY RUN MODE - No files were modified');
} else {
console.log('✨ Filtering complete! Archived rules moved to .archived/');
}
console.log('');
}
function saveReport(report: FilterReport): void {
const reportPath = path.join(SKILLS_DIR, 'filter-report.json');
fs.writeFileSync(reportPath, JSON.stringify(report, null, 2));
console.log(`📝 Report saved to: ${reportPath}`);
}
function fetchSkills(): void {
console.log('📥 Fetching Vercel React best practices from GitHub...\n');
// Use os.tmpdir() for safer temp directory
const tempBase = fs.mkdtempSync(path.join(os.tmpdir(), 'agent-skills-'));
const tarballUrl = 'https://github.com/vercel-labs/agent-skills/archive/refs/heads/main.tar.gz';
try {
console.log(' → Downloading tarball from GitHub...');
// Download and extract the entire repo first
try {
execSync(
`curl -sL ${tarballUrl} | tar -xz -C "${tempBase}"`,
{ stdio: 'pipe' }
);
} catch (e) {
throw new Error('Failed to download skills. Check your internet connection or curl availability.');
}
// Find the extracted directory and move the skills subdirectory
const extractedDir = path.join(tempBase, 'agent-skills-main/skills/react-best-practices');
if (!fs.existsSync(extractedDir)) {
throw new Error(`Skills directory not found in downloaded content: ${extractedDir}`);
}
// Move to final location
if (fs.existsSync(SKILLS_DIR)) {
console.log(' → Removing old skills...');
fs.rmSync(SKILLS_DIR, { recursive: true, force: true });
}
console.log(' → Installing to .agent/skills/...');
fs.mkdirSync(path.dirname(SKILLS_DIR), { recursive: true });
fs.renameSync(extractedDir, SKILLS_DIR);
    // The config lives outside SKILLS_DIR (at CONFIG_PATH), so replacing the
    // skills directory never touches it; recreate a default only if it is missing.
if (!fs.existsSync(CONFIG_PATH)) {
console.log('⚠️ Config file missing at new location. Creating default...');
const defaultConfig = {
featureFlags: {
keepCriticalPriority: true,
keepHighPriority: true,
keepMediumPriority: true,
keepLowPriority: false,
removeJsOptimizations: true,
removeRenderingOptimizations: true,
removeAdvancedPatterns: true
},
priorities: {
keep: ["CRITICAL", "HIGH"],
conditionalKeep: ["MEDIUM", "MEDIUM-HIGH"],
remove: ["LOW", "LOW-MEDIUM"]
},
technologyDetection: {},
alwaysKeep: [
"async-defer-await.md",
"async-parallel.md",
"async-dependencies.md",
"async-api-routes.md",
"bundle-barrel-imports.md",
"bundle-dynamic-imports.md",
"bundle-defer-third-party.md",
"bundle-conditional.md",
"bundle-preload.md",
"rerender-functional-setstate.md",
"rerender-memo.md",
"rerender-dependencies.md",
"rerender-defer-reads.md"
],
alwaysRemove: [
"js-batch-dom-css.md",
"js-cache-property-access.md",
"js-combine-iterations.md",
"js-early-exit.md",
"js-hoist-regexp.md",
"js-index-maps.md",
"js-length-check-first.md",
"js-min-max-loop.md",
"js-set-map-lookups.md",
"js-tosorted-immutable.md",
"js-cache-function-results.md",
"rendering-activity.md",
"rendering-animate-svg-wrapper.md",
"rendering-conditional-render.md",
"rendering-content-visibility.md",
"rendering-hoist-jsx.md",
"rendering-hydration-no-flicker.md",
"rendering-svg-precision.md",
"advanced-event-handler-refs.md",
"advanced-use-latest.md"
]
};
fs.writeFileSync(CONFIG_PATH, JSON.stringify(defaultConfig, null, 2));
}
console.log('✓ Skills fetched successfully\n');
} finally {
// Always clean up temp directory
try {
fs.rmSync(tempBase, { recursive: true, force: true });
} catch (e) {
// Ignore cleanup errors
}
}
}
function formatSkills(): void {
console.log('🎨 Formatting skill files to match project code style...\n');
try {
execSync(
`prettier --write "${SKILLS_DIR}/**/*.md"`,
{ stdio: 'inherit', cwd: PROJECT_ROOT }
);
console.log('✓ Formatting complete\n');
} catch (error) {
console.log('⚠️ Formatting failed (non-critical):', error);
}
}
// Main execution
try {
const shouldFetch = args.has('--fetch') || !fs.existsSync(SKILLS_DIR);
// Auto-fetch if skills don't exist
if (shouldFetch) {
fetchSkills();
}
// Check if skills exist after potential fetch
if (!fs.existsSync(SKILLS_DIR)) {
console.error('❌ Skills directory not found!\n');
console.error('Please run with --fetch flag:');
console.error(' pnpm filter-skills --fetch\n');
process.exit(1);
}
const validateOnly = args.has('--validate-config');
const config = loadConfig();
if (validateOnly) {
validateConfig(config);
process.exit(0);
}
console.log('🔍 Detecting technologies used in codebase...\n');
const detectedTechnologies = detectTechnologies(config);
console.log('🎯 Filtering skills...\n');
const report = filterRules(config, detectedTechnologies);
printReport(report);
if (!isDryRun) {
saveReport(report);
formatSkills();
}
process.exit(0);
} catch (error) {
console.error('❌ Error:', error);
process.exit(1);
}

View File

@@ -1,63 +0,0 @@
{
"featureFlags": {
"keepCriticalPriority": true,
"keepHighPriority": true,
"keepMediumPriority": true,
"keepLowPriority": false,
"removeJsOptimizations": true,
"removeRenderingOptimizations": true,
"removeAdvancedPatterns": true
},
"priorities": {
"keep": [
"CRITICAL",
"HIGH"
],
"conditionalKeep": [
"MEDIUM",
"MEDIUM-HIGH"
],
"remove": [
"LOW",
"LOW-MEDIUM"
]
},
"technologyDetection": {},
"alwaysKeep": [
"async-defer-await.md",
"async-parallel.md",
"async-dependencies.md",
"async-api-routes.md",
"bundle-barrel-imports.md",
"bundle-dynamic-imports.md",
"bundle-defer-third-party.md",
"bundle-conditional.md",
"bundle-preload.md",
"rerender-functional-setstate.md",
"rerender-memo.md",
"rerender-dependencies.md",
"rerender-defer-reads.md"
],
"alwaysRemove": [
"js-batch-dom-css.md",
"js-cache-property-access.md",
"js-combine-iterations.md",
"js-early-exit.md",
"js-hoist-regexp.md",
"js-index-maps.md",
"js-length-check-first.md",
"js-min-max-loop.md",
"js-set-map-lookups.md",
"js-tosorted-immutable.md",
"js-cache-function-results.md",
"rendering-activity.md",
"rendering-animate-svg-wrapper.md",
"rendering-conditional-render.md",
"rendering-content-visibility.md",
"rendering-hoist-jsx.md",
"rendering-hydration-no-flicker.md",
"rendering-svg-precision.md",
"advanced-event-handler-refs.md",
"advanced-use-latest.md"
]
}

File diff suppressed because it is too large

View File

@@ -1,127 +0,0 @@
# React Best Practices
A structured repository for creating and maintaining React Best Practices optimized for agents and LLMs.
## Structure
- `rules/` - Individual rule files (one per rule)
- `_sections.md` - Section metadata (titles, impacts, descriptions)
- `_template.md` - Template for creating new rules
- `area-description.md` - Individual rule files
- `src/` - Build scripts and utilities
- `metadata.json` - Document metadata (version, organization, abstract)
- **`AGENTS.md`** - Compiled output (generated)
- **`test-cases.json`** - Test cases for LLM evaluation (generated)
## Getting Started
1. Install dependencies:
```bash
pnpm install
```
2. Build AGENTS.md from rules:
```bash
pnpm build
```
3. Validate rule files:
```bash
pnpm validate
```
4. Extract test cases:
```bash
pnpm extract-tests
```
## Creating a New Rule
1. Copy `rules/_template.md` to `rules/area-description.md`
2. Choose the appropriate area prefix:
- `async-` for Eliminating Waterfalls (Section 1)
- `bundle-` for Bundle Size Optimization (Section 2)
- `server-` for Server-Side Performance (Section 3)
- `client-` for Client-Side Data Fetching (Section 4)
- `rerender-` for Re-render Optimization (Section 5)
- `rendering-` for Rendering Performance (Section 6)
- `js-` for JavaScript Performance (Section 7)
- `advanced-` for Advanced Patterns (Section 8)
3. Fill in the frontmatter and content
4. Ensure you have clear examples with explanations
5. Run `pnpm build` to regenerate AGENTS.md and test-cases.json
## Rule File Structure
Each rule file should follow this structure:
````markdown
---
title: Rule Title Here
impact: MEDIUM
impactDescription: Optional description
tags: tag1, tag2, tag3
---
## Rule Title Here
Brief explanation of the rule and why it matters.
**Incorrect (description of what's wrong):**
```typescript
// Bad code example
```
**Correct (description of what's right):**
```typescript
// Good code example
```
Optional explanatory text after examples.
Reference: [Link](https://example.com)
````
## File Naming Convention
- Files starting with `_` are special (excluded from build)
- Rule files: `area-description.md` (e.g., `async-parallel.md`)
- Section is automatically inferred from filename prefix
- Rules are sorted alphabetically by title within each section
- IDs (e.g., 1.1, 1.2) are auto-generated during build
## Impact Levels
- `CRITICAL` - Highest priority, major performance gains
- `HIGH` - Significant performance improvements
- `MEDIUM-HIGH` - Moderate-high gains
- `MEDIUM` - Moderate performance improvements
- `LOW-MEDIUM` - Low-medium gains
- `LOW` - Incremental improvements
## Scripts
- `pnpm build` - Compile rules into AGENTS.md
- `pnpm validate` - Validate all rule files
- `pnpm extract-tests` - Extract test cases for LLM evaluation
- `pnpm dev` - Build and validate
## Contributing
When adding or modifying rules:
1. Use the correct filename prefix for your section
2. Follow the `_template.md` structure
3. Include clear bad/good examples with explanations
4. Add appropriate tags
5. Run `pnpm build` to regenerate AGENTS.md and test-cases.json
6. Rules are automatically sorted by title - no need to manage numbers!
## Acknowledgments
Originally created by [@shuding](https://x.com/shuding) at [Vercel](https://vercel.com).

View File

@@ -1,138 +0,0 @@
---
name: vercel-react-best-practices
description: React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.js code to ensure optimal performance patterns. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements.
license: MIT
metadata:
author: vercel
version: "1.0.0"
---
# Vercel React Best Practices
Comprehensive performance optimization guide for React and Next.js applications, maintained by Vercel. Contains 57 rules across 8 categories, prioritized by impact to guide automated refactoring and code generation.
## When to Apply
Reference these guidelines when:
- Writing new React components or Next.js pages
- Implementing data fetching (client or server-side)
- Reviewing code for performance issues
- Refactoring existing React/Next.js code
- Optimizing bundle size or load times
## Rule Categories by Priority
| Priority | Category | Impact | Prefix |
| -------- | ------------------------- | ----------- | ------------ |
| 1 | Eliminating Waterfalls | CRITICAL | `async-` |
| 2 | Bundle Size Optimization | CRITICAL | `bundle-` |
| 3 | Server-Side Performance | HIGH | `server-` |
| 4 | Client-Side Data Fetching | MEDIUM-HIGH | `client-` |
| 5 | Re-render Optimization | MEDIUM | `rerender-` |
| 6 | Rendering Performance | MEDIUM | `rendering-` |
| 7 | JavaScript Performance | LOW-MEDIUM | `js-` |
| 8 | Advanced Patterns | LOW | `advanced-` |
## Quick Reference
### 1. Eliminating Waterfalls (CRITICAL)
- `async-defer-await` - Move await into branches where actually used
- `async-parallel` - Use Promise.all() for independent operations
- `async-dependencies` - Use better-all for partial dependencies
- `async-api-routes` - Start promises early, await late in API routes
- `async-suspense-boundaries` - Use Suspense to stream content
### 2. Bundle Size Optimization (CRITICAL)
- `bundle-barrel-imports` - Import directly, avoid barrel files
- `bundle-dynamic-imports` - Use next/dynamic for heavy components
- `bundle-defer-third-party` - Load analytics/logging after hydration
- `bundle-conditional` - Load modules only when feature is activated
- `bundle-preload` - Preload on hover/focus for perceived speed
### 3. Server-Side Performance (HIGH)
- `server-auth-actions` - Authenticate server actions like API routes
- `server-cache-react` - Use React.cache() for per-request deduplication
- `server-cache-lru` - Use LRU cache for cross-request caching
- `server-dedup-props` - Avoid duplicate serialization in RSC props
- `server-serialization` - Minimize data passed to client components
- `server-parallel-fetching` - Restructure components to parallelize fetches
- `server-after-nonblocking` - Use after() for non-blocking operations
### 4. Client-Side Data Fetching (MEDIUM-HIGH)
- `client-swr-dedup` - Use SWR for automatic request deduplication
- `client-event-listeners` - Deduplicate global event listeners
- `client-passive-event-listeners` - Use passive listeners for scroll
- `client-localstorage-schema` - Version and minimize localStorage data
### 5. Re-render Optimization (MEDIUM)
- `rerender-defer-reads` - Don't subscribe to state only used in callbacks
- `rerender-memo` - Extract expensive work into memoized components
- `rerender-memo-with-default-value` - Hoist default non-primitive props
- `rerender-dependencies` - Use primitive dependencies in effects
- `rerender-derived-state` - Subscribe to derived booleans, not raw values
- `rerender-derived-state-no-effect` - Derive state during render, not effects
- `rerender-functional-setstate` - Use functional setState for stable callbacks
- `rerender-lazy-state-init` - Pass function to useState for expensive values
- `rerender-simple-expression-in-memo` - Avoid memo for simple primitives
- `rerender-move-effect-to-event` - Put interaction logic in event handlers
- `rerender-transitions` - Use startTransition for non-urgent updates
- `rerender-use-ref-transient-values` - Use refs for transient frequent values
### 6. Rendering Performance (MEDIUM)
- `rendering-animate-svg-wrapper` - Animate div wrapper, not SVG element
- `rendering-content-visibility` - Use content-visibility for long lists
- `rendering-hoist-jsx` - Extract static JSX outside components
- `rendering-svg-precision` - Reduce SVG coordinate precision
- `rendering-hydration-no-flicker` - Use inline script for client-only data
- `rendering-hydration-suppress-warning` - Suppress expected mismatches
- `rendering-activity` - Use Activity component for show/hide
- `rendering-conditional-render` - Use ternary, not && for conditionals
- `rendering-usetransition-loading` - Prefer useTransition for loading state
### 7. JavaScript Performance (LOW-MEDIUM)
- `js-batch-dom-css` - Group CSS changes via classes or cssText
- `js-index-maps` - Build Map for repeated lookups
- `js-cache-property-access` - Cache object properties in loops
- `js-cache-function-results` - Cache function results in module-level Map
- `js-cache-storage` - Cache localStorage/sessionStorage reads
- `js-combine-iterations` - Combine multiple filter/map into one loop
- `js-length-check-first` - Check array length before expensive comparison
- `js-early-exit` - Return early from functions
- `js-hoist-regexp` - Hoist RegExp creation outside loops
- `js-min-max-loop` - Use loop for min/max instead of sort
- `js-set-map-lookups` - Use Set/Map for O(1) lookups
- `js-tosorted-immutable` - Use toSorted() for immutability
### 8. Advanced Patterns (LOW)
- `advanced-event-handler-refs` - Store event handlers in refs
- `advanced-init-once` - Initialize app once per app load
- `advanced-use-latest` - useLatest for stable callback refs
## How to Use
Read individual rule files for detailed explanations and code examples:
```
rules/async-parallel.md
rules/bundle-barrel-imports.md
```
Each rule file contains:
- Brief explanation of why it matters
- Incorrect code example with explanation
- Correct code example with explanation
- Additional context and references
## Full Compiled Document
For the complete guide with all rules expanded: `AGENTS.md`

View File

@@ -1,249 +0,0 @@
{
"kept": [
{
"file": "_sections.md",
"reason": "Default (no matching rule)"
},
{
"file": "_template.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "async-api-routes.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "async-defer-await.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "async-dependencies.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "async-parallel.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "async-suspense-boundaries.md",
"reason": "Priority: HIGH"
},
{
"file": "bundle-barrel-imports.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "bundle-conditional.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "bundle-defer-third-party.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "bundle-dynamic-imports.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "bundle-preload.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "client-localstorage-schema.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "client-passive-event-listeners.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "client-swr-dedup.md",
"reason": "Priority: MEDIUM-HIGH (conditional keep)"
},
{
"file": "rerender-defer-reads.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "rerender-dependencies.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "rerender-derived-state-no-effect.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-derived-state.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-functional-setstate.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "rerender-lazy-state-init.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-memo-with-default-value.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-memo.md",
"reason": "Always keep (critical pattern)"
},
{
"file": "rerender-move-effect-to-event.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-transitions.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "rerender-use-ref-transient-values.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "server-after-nonblocking.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "server-auth-actions.md",
"reason": "Priority: CRITICAL"
},
{
"file": "server-cache-lru.md",
"reason": "Priority: HIGH"
},
{
"file": "server-cache-react.md",
"reason": "Priority: MEDIUM (conditional keep)"
},
{
"file": "server-parallel-fetching.md",
"reason": "Priority: CRITICAL"
},
{
"file": "server-serialization.md",
"reason": "Priority: HIGH"
}
],
"archived": [
{
"file": "advanced-event-handler-refs.md",
"reason": "Feature flag: removeAdvancedPatterns"
},
{
"file": "advanced-init-once.md",
"reason": "Feature flag: removeAdvancedPatterns"
},
{
"file": "advanced-use-latest.md",
"reason": "Feature flag: removeAdvancedPatterns"
},
{
"file": "client-event-listeners.md",
"reason": "Priority: LOW"
},
{
"file": "js-batch-dom-css.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-cache-function-results.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-cache-property-access.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-cache-storage.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-combine-iterations.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-early-exit.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-hoist-regexp.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-index-maps.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-length-check-first.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-min-max-loop.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-set-map-lookups.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "js-tosorted-immutable.md",
"reason": "Feature flag: removeJsOptimizations"
},
{
"file": "rendering-activity.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-animate-svg-wrapper.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-conditional-render.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-content-visibility.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-hoist-jsx.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-hydration-no-flicker.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-hydration-suppress-warning.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-svg-precision.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rendering-usetransition-loading.md",
"reason": "Feature flag: removeRenderingOptimizations"
},
{
"file": "rerender-simple-expression-in-memo.md",
"reason": "Priority: LOW-MEDIUM"
},
{
"file": "server-dedup-props.md",
"reason": "Priority: LOW"
}
],
"technologiesDetected": [],
"summary": {
"totalRules": 59,
"keptRules": 32,
"archivedRules": 27,
"reductionPercent": 46
}
}

View File

@@ -1,15 +0,0 @@
{
"version": "1.0.0",
"organization": "Vercel Engineering",
"date": "January 2026",
"abstract": "Comprehensive performance optimization guide for React and Next.js applications, designed for AI agents and LLMs. Contains 40+ rules across 8 categories, prioritized by impact from critical (eliminating waterfalls, reducing bundle size) to incremental (advanced patterns). Each rule includes detailed explanations, real-world examples comparing incorrect vs. correct implementations, and specific impact metrics to guide automated refactoring and code generation.",
"references": [
"https://react.dev",
"https://nextjs.org",
"https://swr.vercel.app",
"https://github.com/shuding/better-all",
"https://github.com/isaacs/node-lru-cache",
"https://vercel.com/blog/how-we-optimized-package-imports-in-next-js",
"https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast"
]
}

View File

@@ -1,46 +0,0 @@
# Sections
This file defines all sections, their ordering, impact levels, and descriptions.
The section ID (in parentheses) is the filename prefix used to group rules.
---
## 1. Eliminating Waterfalls (async)
**Impact:** CRITICAL
**Description:** Waterfalls are the #1 performance killer. Each sequential await adds full network latency. Eliminating them yields the largest gains.
## 2. Bundle Size Optimization (bundle)
**Impact:** CRITICAL
**Description:** Reducing initial bundle size improves Time to Interactive and Largest Contentful Paint.
## 3. Server-Side Performance (server)
**Impact:** HIGH
**Description:** Optimizing server-side rendering and data fetching eliminates server-side waterfalls and reduces response times.
## 4. Client-Side Data Fetching (client)
**Impact:** MEDIUM-HIGH
**Description:** Automatic deduplication and efficient data fetching patterns reduce redundant network requests.
## 5. Re-render Optimization (rerender)
**Impact:** MEDIUM
**Description:** Reducing unnecessary re-renders minimizes wasted computation and improves UI responsiveness.
## 6. Rendering Performance (rendering)
**Impact:** MEDIUM
**Description:** Optimizing the rendering process reduces the work the browser needs to do.
## 7. JavaScript Performance (js)
**Impact:** LOW-MEDIUM
**Description:** Micro-optimizations for hot paths can add up to meaningful improvements.
## 8. Advanced Patterns (advanced)
**Impact:** LOW
**Description:** Advanced patterns for specific cases that require careful implementation.

View File

@@ -1,28 +0,0 @@
---
title: Rule Title Here
impact: MEDIUM
impactDescription: Optional description of impact (e.g., "20-50% improvement")
tags: tag1, tag2
---
## Rule Title Here
**Impact: MEDIUM (optional impact description)**
Brief explanation of the rule and why it matters. This should be clear and concise, explaining the performance implications.
**Incorrect (description of what's wrong):**
```typescript
// Bad code example here
const bad = example();
```
**Correct (description of what's right):**
```typescript
// Good code example here
const good = example();
```
Reference: [Link to documentation or resource](https://example.com)

View File

@@ -1,35 +0,0 @@
---
title: Prevent Waterfall Chains in API Routes
impact: CRITICAL
impactDescription: 2-10× improvement
tags: api-routes, server-actions, waterfalls, parallelization
---
## Prevent Waterfall Chains in API Routes
In API routes and Server Actions, start independent operations immediately, even if you don't await them yet.
**Incorrect (config waits for auth, data waits for both):**
```typescript
export async function GET(request: Request) {
const session = await auth();
const config = await fetchConfig();
const data = await fetchData(session.user.id);
return Response.json({ data, config });
}
```
**Correct (auth and config start immediately):**
```typescript
export async function GET(request: Request) {
const sessionPromise = auth();
const configPromise = fetchConfig();
const session = await sessionPromise;
const [config, data] = await Promise.all([configPromise, fetchData(session.user.id)]);
return Response.json({ data, config });
}
```
For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization).

View File

@@ -1,80 +0,0 @@
---
title: Defer Await Until Needed
impact: HIGH
impactDescription: avoids blocking unused code paths
tags: async, await, conditional, optimization
---
## Defer Await Until Needed
Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them.
**Incorrect (blocks both branches):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
const userData = await fetchUserData(userId);
if (skipProcessing) {
// Returns immediately but still waited for userData
return { skipped: true };
}
// Only this branch uses userData
return processUserData(userData);
}
```
**Correct (only blocks when needed):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
if (skipProcessing) {
// Returns immediately without waiting
return { skipped: true };
}
// Fetch only when needed
const userData = await fetchUserData(userId);
return processUserData(userData);
}
```
**Another example (early return optimization):**
```typescript
// Incorrect: always fetches permissions
async function updateResource(resourceId: string, userId: string) {
const permissions = await fetchPermissions(userId)
const resource = await getResource(resourceId)
if (!resource) {
return { error: 'Not found' }
}
if (!permissions.canEdit) {
return { error: 'Forbidden' }
}
return await updateResourceData(resource, permissions)
}
// Correct: fetches only when needed
async function updateResource(resourceId: string, userId: string) {
const resource = await getResource(resourceId)
if (!resource) {
return { error: 'Not found' }
}
const permissions = await fetchPermissions(userId)
if (!permissions.canEdit) {
return { error: 'Forbidden' }
}
return await updateResourceData(resource, permissions)
}
```
This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive.

View File

@@ -1,48 +0,0 @@
---
title: Dependency-Based Parallelization
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, dependencies, better-all
---
## Dependency-Based Parallelization
For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment.
**Incorrect (profile waits for config unnecessarily):**
```typescript
const [user, config] = await Promise.all([fetchUser(), fetchConfig()]);
const profile = await fetchProfile(user.id);
```
**Correct (config and profile run in parallel):**
```typescript
import { all } from "better-all";
const { user, config, profile } = await all({
async user() {
return fetchUser();
},
async config() {
return fetchConfig();
},
async profile() {
return fetchProfile((await this.$.user).id);
},
});
```
**Alternative without extra dependencies:**
We can also create all the promises first, and do `Promise.all()` at the end.
```typescript
const userPromise = fetchUser();
const profilePromise = userPromise.then((user) => fetchProfile(user.id));
const [user, config, profile] = await Promise.all([userPromise, fetchConfig(), profilePromise]);
```
Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all)

View File

@@ -1,24 +0,0 @@
---
title: Promise.all() for Independent Operations
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, promises, waterfalls
---
## Promise.all() for Independent Operations
When async operations have no interdependencies, execute them concurrently using `Promise.all()`.
**Incorrect (sequential execution, 3 round trips):**
```typescript
const user = await fetchUser();
const posts = await fetchPosts();
const comments = await fetchComments();
```
**Correct (parallel execution, no waterfall):**
```typescript
const [user, posts, comments] = await Promise.all([fetchUser(), fetchPosts(), fetchComments()]);
```

View File

@@ -1,99 +0,0 @@
---
title: Strategic Suspense Boundaries
impact: HIGH
impactDescription: faster initial paint
tags: async, suspense, streaming, layout-shift
---
## Strategic Suspense Boundaries
Instead of awaiting data in async components before returning JSX, use Suspense boundaries to show the wrapper UI faster while data loads.
**Incorrect (wrapper blocked by data fetching):**
```tsx
async function Page() {
const data = await fetchData(); // Blocks entire page
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<div>
<DataDisplay data={data} />
</div>
<div>Footer</div>
</div>
);
}
```
The entire layout waits for data even though only the middle section needs it.
**Correct (wrapper shows immediately, data streams in):**
```tsx
function Page() {
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<div>
<Suspense fallback={<Skeleton />}>
<DataDisplay />
</Suspense>
</div>
<div>Footer</div>
</div>
);
}
async function DataDisplay() {
const data = await fetchData(); // Only blocks this component
return <div>{data.content}</div>;
}
```
Sidebar, Header, and Footer render immediately. Only DataDisplay waits for data.
**Alternative (share promise across components):**
```tsx
function Page() {
// Start fetch immediately, but don't await
const dataPromise = fetchData();
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<Suspense fallback={<Skeleton />}>
<DataDisplay dataPromise={dataPromise} />
<DataSummary dataPromise={dataPromise} />
</Suspense>
<div>Footer</div>
</div>
);
}
function DataDisplay({ dataPromise }: { dataPromise: Promise<Data> }) {
const data = use(dataPromise); // Unwraps the promise
return <div>{data.content}</div>;
}
function DataSummary({ dataPromise }: { dataPromise: Promise<Data> }) {
const data = use(dataPromise); // Reuses the same promise
return <div>{data.summary}</div>;
}
```
Both components share the same promise, so only one fetch occurs. Layout renders immediately while both components wait together.
**When NOT to use this pattern:**
- Critical data needed for layout decisions (affects positioning)
- SEO-critical content above the fold
- Small, fast queries where suspense overhead isn't worth it
- When you want to avoid layout shift (loading → content jump)
**Trade-off:** Faster initial paint vs potential layout shift. Choose based on your UX priorities.

View File

@@ -1,62 +0,0 @@
---
title: Avoid Barrel File Imports
impact: CRITICAL
impactDescription: 200-800ms import cost, slow builds
tags: bundle, imports, tree-shaking, barrel-files, performance
---
## Avoid Barrel File Imports
Import directly from source files instead of barrel files to avoid loading thousands of unused modules. **Barrel files** are entry points that re-export multiple modules (e.g., `index.js` that does `export * from './module'`).
Popular icon and component libraries can have **up to 10,000 re-exports** in their entry file. For many React packages, **it takes 200-800ms just to import them**, affecting both development speed and production cold starts.
**Why tree-shaking doesn't help:** When a library is marked as external (not bundled), the bundler can't optimize it. If you bundle it to enable tree-shaking, builds become substantially slower analyzing the entire module graph.
**Incorrect (imports entire library):**
```tsx
// Loads 1,583 modules, takes ~2.8s extra in dev
// Runtime cost: 200-800ms on every cold start
import { Button, TextField } from "@mui/material";
import { Check, Menu, X } from "lucide-react";
// Loads 2,225 modules, takes ~4.2s extra in dev
```
**Correct (imports only what you need):**
```tsx
// Loads only 3 modules (~2KB vs ~1MB)
import Button from "@mui/material/Button";
import TextField from "@mui/material/TextField";
import Check from "lucide-react/dist/esm/icons/check";
import Menu from "lucide-react/dist/esm/icons/menu";
import X from "lucide-react/dist/esm/icons/x";
// Loads only what you use
```
**Alternative (Next.js 13.5+):**
```js
// next.config.js - use optimizePackageImports
module.exports = {
  experimental: {
    optimizePackageImports: ["lucide-react", "@mui/material"],
  },
};
// Imports are automatically transformed to direct imports at build time,
// so you can keep the ergonomic barrel imports:
import { Check, Menu, X } from "lucide-react";
```
Direct imports provide 15-70% faster dev boot, 28% faster builds, 40% faster cold starts, and significantly faster HMR.
Libraries commonly affected: `lucide-react`, `@mui/material`, `@mui/icons-material`, `@tabler/icons-react`, `react-icons`, `@headlessui/react`, `@radix-ui/react-*`, `lodash`, `ramda`, `date-fns`, `rxjs`, `react-use`.
Reference: [How we optimized package imports in Next.js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js)

View File

@@ -1,35 +0,0 @@
---
title: Conditional Module Loading
impact: HIGH
impactDescription: loads large data only when needed
tags: bundle, conditional-loading, lazy-loading
---
## Conditional Module Loading
Load large data or modules only when a feature is activated.
**Example (lazy-load animation frames):**
```tsx
function AnimationPlayer({
enabled,
setEnabled,
}: {
enabled: boolean;
setEnabled: React.Dispatch<React.SetStateAction<boolean>>;
}) {
const [frames, setFrames] = useState<Frame[] | null>(null);
useEffect(() => {
if (enabled && !frames && typeof window !== "undefined") {
import("./animation-frames.js").then((mod) => setFrames(mod.frames)).catch(() => setEnabled(false));
}
}, [enabled, frames, setEnabled]);
  if (!enabled) return null;
  if (!frames) return <Skeleton />;
return <Canvas frames={frames} />;
}
```
The `typeof window !== 'undefined'` check prevents bundling this module for SSR, optimizing server bundle size and build speed.

View File

@@ -1,46 +0,0 @@
---
title: Defer Non-Critical Third-Party Libraries
impact: MEDIUM
impactDescription: loads after hydration
tags: bundle, third-party, analytics, defer
---
## Defer Non-Critical Third-Party Libraries
Analytics, logging, and error tracking don't block user interaction. Load them after hydration.
**Incorrect (blocks initial bundle):**
```tsx
import { Analytics } from "@vercel/analytics/react";
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<Analytics />
</body>
</html>
);
}
```
**Correct (loads after hydration):**
```tsx
import dynamic from "next/dynamic";
const Analytics = dynamic(() => import("@vercel/analytics/react").then((m) => m.Analytics), { ssr: false });
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<Analytics />
</body>
</html>
);
}
```

View File

@@ -1,32 +0,0 @@
---
title: Dynamic Imports for Heavy Components
impact: CRITICAL
impactDescription: directly affects TTI and LCP
tags: bundle, dynamic-import, code-splitting, next-dynamic
---
## Dynamic Imports for Heavy Components
Use `next/dynamic` to lazy-load large components not needed on initial render.
**Incorrect (Monaco bundles with main chunk ~300KB):**
```tsx
import { MonacoEditor } from "./monaco-editor";
function CodePanel({ code }: { code: string }) {
return <MonacoEditor value={code} />;
}
```
**Correct (Monaco loads on demand):**
```tsx
import dynamic from "next/dynamic";
const MonacoEditor = dynamic(() => import("./monaco-editor").then((m) => m.MonacoEditor), { ssr: false });
function CodePanel({ code }: { code: string }) {
return <MonacoEditor value={code} />;
}
```

View File

@@ -1,44 +0,0 @@
---
title: Preload Based on User Intent
impact: MEDIUM
impactDescription: reduces perceived latency
tags: bundle, preload, user-intent, hover
---
## Preload Based on User Intent
Preload heavy bundles before they're needed to reduce perceived latency.
**Example (preload on hover/focus):**
```tsx
function EditorButton({ onClick }: { onClick: () => void }) {
const preload = () => {
if (typeof window !== "undefined") {
void import("./monaco-editor");
}
};
return (
<button onMouseEnter={preload} onFocus={preload} onClick={onClick}>
Open Editor
</button>
);
}
```
**Example (preload when feature flag is enabled):**
```tsx
function FlagsProvider({ children, flags }: Props) {
useEffect(() => {
if (flags.editorEnabled && typeof window !== "undefined") {
void import("./monaco-editor").then((mod) => mod.init());
}
}, [flags.editorEnabled]);
return <FlagsContext.Provider value={flags}>{children}</FlagsContext.Provider>;
}
```
The `typeof window !== 'undefined'` check prevents bundling preloaded modules for SSR, optimizing server bundle size and build speed.

View File

@@ -1,74 +0,0 @@
---
title: Version and Minimize localStorage Data
impact: MEDIUM
impactDescription: prevents schema conflicts, reduces storage size
tags: client, localStorage, storage, versioning, data-minimization
---
## Version and Minimize localStorage Data
Add version prefix to keys and store only needed fields. Prevents schema conflicts and accidental storage of sensitive data.
**Incorrect:**
```typescript
// No version, stores everything, no error handling
localStorage.setItem("userConfig", JSON.stringify(fullUserObject));
const data = localStorage.getItem("userConfig");
```
**Correct:**
```typescript
const VERSION = "v2";
function saveConfig(config: { theme: string; language: string }) {
try {
localStorage.setItem(`userConfig:${VERSION}`, JSON.stringify(config));
} catch {
// Throws in incognito/private browsing, quota exceeded, or disabled
}
}
function loadConfig() {
try {
const data = localStorage.getItem(`userConfig:${VERSION}`);
return data ? JSON.parse(data) : null;
} catch {
return null;
}
}
// Migration from v1 to v2
function migrate() {
try {
const v1 = localStorage.getItem("userConfig:v1");
if (v1) {
const old = JSON.parse(v1);
saveConfig({ theme: old.darkMode ? "dark" : "light", language: old.lang });
localStorage.removeItem("userConfig:v1");
}
} catch {}
}
```
**Store minimal fields from server responses:**
```typescript
// User object has 20+ fields, only store what UI needs
function cachePrefs(user: FullUser) {
try {
localStorage.setItem(
"prefs:v1",
JSON.stringify({
theme: user.preferences.theme,
notifications: user.preferences.notifications,
})
);
} catch {}
}
```
**Always wrap in try-catch:** `setItem()` throws when the storage quota is exceeded (including some incognito/private-browsing modes in Safari and Firefox), and any `localStorage` access can throw when storage is disabled.
**Benefits:** Schema evolution via versioning, reduced storage size, prevents storing tokens/PII/internal flags.

View File

@@ -1,48 +0,0 @@
---
title: Use Passive Event Listeners for Scrolling Performance
impact: MEDIUM
impactDescription: eliminates scroll delay caused by event listeners
tags: client, event-listeners, scrolling, performance, touch, wheel
---
## Use Passive Event Listeners for Scrolling Performance
Add `{ passive: true }` to touch and wheel event listeners to enable immediate scrolling. Browsers normally wait for listeners to finish to check if `preventDefault()` is called, causing scroll delay.
**Incorrect:**
```typescript
useEffect(() => {
const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX);
const handleWheel = (e: WheelEvent) => console.log(e.deltaY);
document.addEventListener("touchstart", handleTouch);
document.addEventListener("wheel", handleWheel);
return () => {
document.removeEventListener("touchstart", handleTouch);
document.removeEventListener("wheel", handleWheel);
};
}, []);
```
**Correct:**
```typescript
useEffect(() => {
const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX);
const handleWheel = (e: WheelEvent) => console.log(e.deltaY);
document.addEventListener("touchstart", handleTouch, { passive: true });
document.addEventListener("wheel", handleWheel, { passive: true });
return () => {
document.removeEventListener("touchstart", handleTouch);
document.removeEventListener("wheel", handleWheel);
};
}, []);
```
**Use passive when:** tracking/analytics, logging, any listener that doesn't call `preventDefault()`.
**Don't use passive when:** implementing custom swipe gestures, custom zoom controls, or any listener that needs `preventDefault()`.
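For contrast, a sketch of a listener that must stay non-passive (`trackSwipe` is a hypothetical gesture handler):
```typescript
useEffect(() => {
  const handleSwipe = (e: TouchEvent) => {
    e.preventDefault(); // block native scrolling while the custom gesture runs
    trackSwipe(e.touches[0].clientX); // hypothetical gesture handler
  };
  // passive: false must be explicit here: browsers treat document-level
  // touch listeners as passive by default, which would ignore preventDefault()
  document.addEventListener("touchmove", handleSwipe, { passive: false });
  return () => document.removeEventListener("touchmove", handleSwipe);
}, []);
```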

View File

@@ -1,56 +0,0 @@
---
title: Use SWR for Automatic Deduplication
impact: MEDIUM-HIGH
impactDescription: automatic deduplication
tags: client, swr, deduplication, data-fetching
---
## Use SWR for Automatic Deduplication
SWR enables request deduplication, caching, and revalidation across component instances.
**Incorrect (no deduplication, each instance fetches):**
```tsx
function UserList() {
const [users, setUsers] = useState([]);
useEffect(() => {
fetch("/api/users")
.then((r) => r.json())
.then(setUsers);
}, []);
}
```
**Correct (multiple instances share one request):**
```tsx
import useSWR from "swr";
function UserList() {
const { data: users } = useSWR("/api/users", fetcher);
}
```
**For immutable data:**
```tsx
import { useImmutableSWR } from "@/lib/swr";
function StaticContent() {
const { data } = useImmutableSWR("/api/config", fetcher);
}
```
**For mutations:**
```tsx
import useSWRMutation from "swr/mutation";
function UpdateButton() {
const { trigger } = useSWRMutation("/api/user", updateUser);
return <button onClick={() => trigger()}>Update</button>;
}
```
Reference: [https://swr.vercel.app](https://swr.vercel.app)

View File

@@ -1,39 +0,0 @@
---
title: Defer State Reads to Usage Point
impact: MEDIUM
impactDescription: avoids unnecessary subscriptions
tags: rerender, searchParams, localStorage, optimization
---
## Defer State Reads to Usage Point
Don't subscribe to dynamic state (searchParams, localStorage) if you only read it inside callbacks.
**Incorrect (subscribes to all searchParams changes):**
```tsx
function ShareButton({ chatId }: { chatId: string }) {
const searchParams = useSearchParams();
const handleShare = () => {
const ref = searchParams.get("ref");
shareChat(chatId, { ref });
};
return <button onClick={handleShare}>Share</button>;
}
```
**Correct (reads on demand, no subscription):**
```tsx
function ShareButton({ chatId }: { chatId: string }) {
const handleShare = () => {
const params = new URLSearchParams(window.location.search);
const ref = params.get("ref");
shareChat(chatId, { ref });
};
return <button onClick={handleShare}>Share</button>;
}
```

View File

@@ -1,45 +0,0 @@
---
title: Narrow Effect Dependencies
impact: LOW
impactDescription: minimizes effect re-runs
tags: rerender, useEffect, dependencies, optimization
---
## Narrow Effect Dependencies
Specify primitive dependencies instead of objects to minimize effect re-runs.
**Incorrect (re-runs on any user field change):**
```tsx
useEffect(() => {
console.log(user.id);
}, [user]);
```
**Correct (re-runs only when id changes):**
```tsx
useEffect(() => {
console.log(user.id);
}, [user.id]);
```
**For derived state, compute outside effect:**
```tsx
// Incorrect: runs on width=767, 766, 765...
useEffect(() => {
if (width < 768) {
enableMobileMode();
}
}, [width]);
// Correct: runs only on boolean transition
const isMobile = width < 768;
useEffect(() => {
if (isMobile) {
enableMobileMode();
}
}, [isMobile]);
```

View File

@@ -1,40 +0,0 @@
---
title: Calculate Derived State During Rendering
impact: MEDIUM
impactDescription: avoids redundant renders and state drift
tags: rerender, derived-state, useEffect, state
---
## Calculate Derived State During Rendering
If a value can be computed from current props/state, do not store it in state or update it in an effect. Derive it during render to avoid extra renders and state drift. Do not set state in effects solely in response to prop changes; prefer derived values or keyed resets instead.
**Incorrect (redundant state and effect):**
```tsx
function Form() {
const [firstName, setFirstName] = useState("First");
const [lastName, setLastName] = useState("Last");
const [fullName, setFullName] = useState("");
useEffect(() => {
setFullName(firstName + " " + lastName);
}, [firstName, lastName]);
return <p>{fullName}</p>;
}
```
**Correct (derive during render):**
```tsx
function Form() {
const [firstName, setFirstName] = useState("First");
const [lastName, setLastName] = useState("Last");
const fullName = firstName + " " + lastName;
return <p>{fullName}</p>;
}
```
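For the keyed resets mentioned above, a minimal sketch: changing `key` remounts the child, resetting its state with no effect involved.
```tsx
function ProfilePage({ userId }: { userId: string }) {
  // A new key unmounts the old Form and mounts a fresh one,
  // so all of Form's internal state resets when userId changes
  return <Form key={userId} />;
}
```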
References: [You Might Not Need an Effect](https://react.dev/learn/you-might-not-need-an-effect)

View File

@@ -1,29 +0,0 @@
---
title: Subscribe to Derived State
impact: MEDIUM
impactDescription: reduces re-render frequency
tags: rerender, derived-state, media-query, optimization
---
## Subscribe to Derived State
Subscribe to derived boolean state instead of continuous values to reduce re-render frequency.
**Incorrect (re-renders on every pixel change):**
```tsx
function Sidebar() {
const width = useWindowWidth(); // updates continuously
const isMobile = width < 768;
return <nav className={isMobile ? "mobile" : "desktop"} />;
}
```
**Correct (re-renders only when boolean changes):**
```tsx
function Sidebar() {
const isMobile = useMediaQuery("(max-width: 767px)");
return <nav className={isMobile ? "mobile" : "desktop"} />;
}
```
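`useMediaQuery` is not built into React. A minimal sketch using `useSyncExternalStore`, which re-renders only when the boolean match flips:
```tsx
import { useCallback, useSyncExternalStore } from "react";

function useMediaQuery(query: string): boolean {
  // Memoize subscribe so the store isn't resubscribed on every render
  const subscribe = useCallback(
    (onChange: () => void) => {
      const mql = window.matchMedia(query);
      mql.addEventListener("change", onChange);
      return () => mql.removeEventListener("change", onChange);
    },
    [query]
  );
  return useSyncExternalStore(
    subscribe,
    () => window.matchMedia(query).matches, // client snapshot
    () => false // server snapshot: assume no match during SSR
  );
}
```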

View File

@@ -1,77 +0,0 @@
---
title: Use Functional setState Updates
impact: MEDIUM
impactDescription: prevents stale closures and unnecessary callback recreations
tags: react, hooks, useState, useCallback, callbacks, closures
---
## Use Functional setState Updates
When updating state based on the current state value, use the functional update form of setState instead of directly referencing the state variable. This prevents stale closures, eliminates unnecessary dependencies, and creates stable callback references.
**Incorrect (requires state as dependency):**
```tsx
function TodoList() {
const [items, setItems] = useState(initialItems);
// Callback must depend on items, recreated on every items change
const addItems = useCallback(
(newItems: Item[]) => {
setItems([...items, ...newItems]);
},
[items]
); // ❌ items dependency causes recreations
// Risk of stale closure if dependency is forgotten
const removeItem = useCallback((id: string) => {
setItems(items.filter((item) => item.id !== id));
}, []); // ❌ Missing items dependency - will use stale items!
return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />;
}
```
The first callback is recreated every time `items` changes, which can cause child components to re-render unnecessarily. The second callback has a stale closure bug—it will always reference the initial `items` value.
**Correct (stable callbacks, no stale closures):**
```tsx
function TodoList() {
const [items, setItems] = useState(initialItems);
// Stable callback, never recreated
const addItems = useCallback((newItems: Item[]) => {
setItems((curr) => [...curr, ...newItems]);
}, []); // ✅ No dependencies needed
// Always uses latest state, no stale closure risk
const removeItem = useCallback((id: string) => {
setItems((curr) => curr.filter((item) => item.id !== id));
}, []); // ✅ Safe and stable
return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />;
}
```
**Benefits:**
1. **Stable callback references** - Callbacks don't need to be recreated when state changes
2. **No stale closures** - Always operates on the latest state value
3. **Fewer dependencies** - Simplifies dependency arrays and reduces memory leaks
4. **Prevents bugs** - Eliminates the most common source of React closure bugs
**When to use functional updates:**
- Any setState that depends on the current state value
- Inside useCallback/useMemo when state is needed
- Event handlers that reference state
- Async operations that update state
**When direct updates are fine:**
- Setting state to a static value: `setCount(0)`
- Setting state from props/arguments only: `setName(newName)`
- State doesn't depend on previous value
**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler can automatically optimize some cases, but functional updates are still recommended for correctness and to prevent stale closure bugs.

View File

@@ -1,56 +0,0 @@
---
title: Use Lazy State Initialization
impact: MEDIUM
impactDescription: wasted computation on every render
tags: react, hooks, useState, performance, initialization
---
## Use Lazy State Initialization
Pass a function to `useState` for expensive initial values. Without the function form, the initializer runs on every render even though the value is only used once.
**Incorrect (runs on every render):**
```tsx
function FilteredList({ items }: { items: Item[] }) {
// buildSearchIndex() runs on EVERY render, even after initialization
const [searchIndex, setSearchIndex] = useState(buildSearchIndex(items));
const [query, setQuery] = useState("");
// When query changes, buildSearchIndex runs again unnecessarily
return <SearchResults index={searchIndex} query={query} />;
}
function UserProfile() {
// JSON.parse runs on every render
const [settings, setSettings] = useState(JSON.parse(localStorage.getItem("settings") || "{}"));
return <SettingsForm settings={settings} onChange={setSettings} />;
}
```
**Correct (runs only once):**
```tsx
function FilteredList({ items }: { items: Item[] }) {
// buildSearchIndex() runs ONLY on initial render
const [searchIndex, setSearchIndex] = useState(() => buildSearchIndex(items));
const [query, setQuery] = useState("");
return <SearchResults index={searchIndex} query={query} />;
}
function UserProfile() {
// JSON.parse runs only on initial render
const [settings, setSettings] = useState(() => {
const stored = localStorage.getItem("settings");
return stored ? JSON.parse(stored) : {};
});
return <SettingsForm settings={settings} onChange={setSettings} />;
}
```
Use lazy initialization when computing initial values from localStorage/sessionStorage, building data structures (indexes, maps), reading from the DOM, or performing heavy transformations.
For simple primitives (`useState(0)`), direct references (`useState(props.value)`), or cheap literals (`useState({})`), the function form is unnecessary.

View File

@@ -1,36 +0,0 @@
---
title: Extract Default Non-primitive Parameter Value from Memoized Component to Constant
impact: MEDIUM
impactDescription: restores memoization by using a constant for default value
tags: rerender, memo, optimization
---
## Extract Default Non-primitive Parameter Value from Memoized Component to Constant
When a memoized component has a default value for a non-primitive optional parameter, such as an array, function, or object, rendering the component without that parameter breaks memoization. This is because a new default instance is created on every rerender, and it does not pass the strict equality comparison in `memo()`.
To address this issue, extract the default value into a constant.
**Incorrect (`onClick` has different values on every rerender):**
```tsx
const UserAvatar = memo(function UserAvatar({ onClick = () => {} }: { onClick?: () => void }) {
// ...
})
// Used without optional onClick
<UserAvatar />
```
**Correct (stable default value):**
```tsx
const NOOP = () => {};
const UserAvatar = memo(function UserAvatar({ onClick = NOOP }: { onClick?: () => void }) {
// ...
})
// Used without optional onClick
<UserAvatar />
```

View File

@@ -1,44 +0,0 @@
---
title: Extract to Memoized Components
impact: MEDIUM
impactDescription: enables early returns
tags: rerender, memo, useMemo, optimization
---
## Extract to Memoized Components
Extract expensive work into memoized components to enable early returns before computation.
**Incorrect (computes avatar even when loading):**
```tsx
function Profile({ user, loading }: Props) {
const avatar = useMemo(() => {
const id = computeAvatarId(user);
return <Avatar id={id} />;
}, [user]);
if (loading) return <Skeleton />;
return <div>{avatar}</div>;
}
```
**Correct (skips computation when loading):**
```tsx
const UserAvatar = memo(function UserAvatar({ user }: { user: User }) {
const id = useMemo(() => computeAvatarId(user), [user]);
return <Avatar id={id} />;
});
function Profile({ user, loading }: Props) {
if (loading) return <Skeleton />;
return (
<div>
<UserAvatar user={user} />
</div>
);
}
```
**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, manual memoization with `memo()` and `useMemo()` is not necessary. The compiler automatically optimizes re-renders.

View File

@@ -1,45 +0,0 @@
---
title: Put Interaction Logic in Event Handlers
impact: MEDIUM
impactDescription: avoids effect re-runs and duplicate side effects
tags: rerender, useEffect, events, side-effects, dependencies
---
## Put Interaction Logic in Event Handlers
If a side effect is triggered by a specific user action (submit, click, drag), run it in that event handler. Do not model the action as state + effect; it makes effects re-run on unrelated changes and can duplicate the action.
**Incorrect (event modeled as state + effect):**
```tsx
function Form() {
const [submitted, setSubmitted] = useState(false);
const theme = useContext(ThemeContext);
useEffect(() => {
if (submitted) {
post("/api/register");
showToast("Registered", theme);
}
}, [submitted, theme]);
return <button onClick={() => setSubmitted(true)}>Submit</button>;
}
```
**Correct (do it in the handler):**
```tsx
function Form() {
const theme = useContext(ThemeContext);
function handleSubmit() {
post("/api/register");
showToast("Registered", theme);
}
return <button onClick={handleSubmit}>Submit</button>;
}
```
Reference: [Should this code move to an event handler?](https://react.dev/learn/removing-effect-dependencies#should-this-code-move-to-an-event-handler)

View File

@@ -1,40 +0,0 @@
---
title: Use Transitions for Non-Urgent Updates
impact: MEDIUM
impactDescription: maintains UI responsiveness
tags: rerender, transitions, startTransition, performance
---
## Use Transitions for Non-Urgent Updates
Mark frequent, non-urgent state updates as transitions to maintain UI responsiveness.
**Incorrect (triggers high-frequency re-renders):**
```tsx
function ScrollTracker() {
const [scrollY, setScrollY] = useState(0);
useEffect(() => {
const handler = () => setScrollY(window.scrollY);
window.addEventListener("scroll", handler, { passive: true });
return () => window.removeEventListener("scroll", handler);
}, []);
}
```
**Correct (non-blocking updates):**
```tsx
import { startTransition } from "react";
function ScrollTracker() {
const [scrollY, setScrollY] = useState(0);
useEffect(() => {
const handler = () => {
startTransition(() => setScrollY(window.scrollY));
};
window.addEventListener("scroll", handler, { passive: true });
return () => window.removeEventListener("scroll", handler);
}, []);
}
```

View File

@@ -1,73 +0,0 @@
---
title: Use useRef for Transient Values
impact: MEDIUM
impactDescription: avoids unnecessary re-renders on frequent updates
tags: rerender, useref, state, performance
---
## Use useRef for Transient Values
When a value changes frequently and you don't want a re-render on every update (e.g., mouse trackers, intervals, transient flags), store it in `useRef` instead of `useState`. Keep component state for UI; use refs for temporary DOM-adjacent values. Updating a ref does not trigger a re-render.
**Incorrect (renders every update):**
```tsx
function Tracker() {
const [lastX, setLastX] = useState(0);
useEffect(() => {
const onMove = (e: MouseEvent) => setLastX(e.clientX);
window.addEventListener("mousemove", onMove);
return () => window.removeEventListener("mousemove", onMove);
}, []);
return (
<div
style={{
position: "fixed",
top: 0,
left: lastX,
width: 8,
height: 8,
background: "black",
}}
/>
);
}
```
**Correct (no re-render for tracking):**
```tsx
function Tracker() {
const lastXRef = useRef(0);
const dotRef = useRef<HTMLDivElement>(null);
useEffect(() => {
const onMove = (e: MouseEvent) => {
lastXRef.current = e.clientX;
const node = dotRef.current;
if (node) {
node.style.transform = `translateX(${e.clientX}px)`;
}
};
window.addEventListener("mousemove", onMove);
return () => window.removeEventListener("mousemove", onMove);
}, []);
return (
<div
ref={dotRef}
style={{
position: "fixed",
top: 0,
left: 0,
width: 8,
height: 8,
background: "black",
transform: "translateX(0px)",
}}
/>
);
}
```

View File

@@ -1,73 +0,0 @@
---
title: Use after() for Non-Blocking Operations
impact: MEDIUM
impactDescription: faster response times
tags: server, async, logging, analytics, side-effects
---
## Use after() for Non-Blocking Operations
Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response.
**Incorrect (blocks response):**
```tsx
import { logUserAction } from "@/app/utils";
export async function POST(request: Request) {
// Perform mutation
await updateDatabase(request);
// Logging blocks the response
const userAgent = request.headers.get("user-agent") || "unknown";
await logUserAction({ userAgent });
return new Response(JSON.stringify({ status: "success" }), {
status: 200,
headers: { "Content-Type": "application/json" },
});
}
```
**Correct (non-blocking):**
```tsx
import { cookies, headers } from "next/headers";
import { after } from "next/server";
import { logUserAction } from "@/app/utils";
export async function POST(request: Request) {
// Perform mutation
await updateDatabase(request);
// Log after response is sent
after(async () => {
const userAgent = (await headers()).get("user-agent") || "unknown";
const sessionCookie = (await cookies()).get("session-id")?.value || "anonymous";
logUserAction({ sessionCookie, userAgent });
});
return new Response(JSON.stringify({ status: "success" }), {
status: 200,
headers: { "Content-Type": "application/json" },
});
}
```
The response is sent immediately while logging happens in the background.
**Common use cases:**
- Analytics tracking
- Audit logging
- Sending notifications
- Cache invalidation
- Cleanup tasks
**Important notes:**
- `after()` runs even if the response fails or redirects
- Works in Server Actions, Route Handlers, and Server Components (see the Server Action sketch below)
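For example, the same pattern inside a Server Action. A minimal sketch (`db` is the same client used elsewhere in these rules; `trackEvent` is a hypothetical analytics helper):
```tsx
"use server";

import { after } from "next/server";

export async function createPost(formData: FormData) {
  const post = await db.post.create({
    data: { title: formData.get("title") as string },
  });
  // Runs after the action's response has been sent to the client
  after(() => {
    trackEvent("post_created", { postId: post.id }); // hypothetical helper
  });
  return { id: post.id };
}
```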
Reference: [https://nextjs.org/docs/app/api-reference/functions/after](https://nextjs.org/docs/app/api-reference/functions/after)

View File

@@ -1,96 +0,0 @@
---
title: Authenticate Server Actions Like API Routes
impact: CRITICAL
impactDescription: prevents unauthorized access to server mutations
tags: server, server-actions, authentication, security, authorization
---
## Authenticate Server Actions Like API Routes
**Prevent unauthorized access to server mutations**
Server Actions (functions with `"use server"`) are exposed as public endpoints, just like API routes. Always verify authentication and authorization **inside** each Server Action—do not rely solely on middleware, layout guards, or page-level checks, as Server Actions can be invoked directly.
Next.js documentation explicitly states: "Treat Server Actions with the same security considerations as public-facing API endpoints, and verify if the user is allowed to perform a mutation."
**Incorrect (no authentication check):**
```typescript
"use server";
export async function deleteUser(userId: string) {
// Anyone can call this! No auth check
await db.user.delete({ where: { id: userId } });
return { success: true };
}
```
**Correct (authentication inside the action):**
```typescript
"use server";
import { verifySession } from "@/lib/auth";
import { unauthorized } from "@/lib/errors";
export async function deleteUser(userId: string) {
// Always check auth inside the action
const session = await verifySession();
if (!session) {
throw unauthorized("Must be logged in");
}
// Check authorization too
if (session.user.role !== "admin" && session.user.id !== userId) {
throw unauthorized("Cannot delete other users");
}
await db.user.delete({ where: { id: userId } });
return { success: true };
}
```
**With input validation:**
```typescript
"use server";
import { z } from "zod";
import { verifySession } from "@/lib/auth";
const updateProfileSchema = z.object({
userId: z.string().uuid(),
name: z.string().min(1).max(100),
email: z.string().email(),
});
export async function updateProfile(data: unknown) {
// Validate input first
const validated = updateProfileSchema.parse(data);
// Then authenticate
const session = await verifySession();
if (!session) {
throw new Error("Unauthorized");
}
// Then authorize
if (session.user.id !== validated.userId) {
throw new Error("Can only update own profile");
}
// Finally perform the mutation
await db.user.update({
where: { id: validated.userId },
data: {
name: validated.name,
email: validated.email,
},
});
return { success: true };
}
```
Reference: [https://nextjs.org/docs/app/guides/authentication](https://nextjs.org/docs/app/guides/authentication)

View File

@@ -1,41 +0,0 @@
---
title: Cross-Request LRU Caching
impact: HIGH
impactDescription: caches across requests
tags: server, cache, lru, cross-request
---
## Cross-Request LRU Caching
`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.
**Implementation:**
```typescript
import { LRUCache } from "lru-cache";
const cache = new LRUCache<string, any>({
max: 1000,
ttl: 5 * 60 * 1000, // 5 minutes
});
export async function getUser(id: string) {
const cached = cache.get(id);
if (cached) return cached;
const user = await db.user.findUnique({ where: { id } });
cache.set(id, user);
return user;
}
// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```
Use when sequential user actions hit multiple endpoints needing the same data within seconds.
**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.
**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.
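A minimal sketch of that variant (assuming the `ioredis` client and the same `db` query as above):
```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);

export async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);

  const user = await db.user.findUnique({ where: { id } });
  // Same 5-minute TTL as the LRU example above ("EX" = expiry in seconds)
  await redis.set(`user:${id}`, JSON.stringify(user), "EX", 300);
  return user;
}
```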
Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)

View File

@@ -1,76 +0,0 @@
---
title: Per-Request Deduplication with React.cache()
impact: MEDIUM
impactDescription: deduplicates within request
tags: server, cache, react-cache, deduplication
---
## Per-Request Deduplication with React.cache()
Use `React.cache()` for server-side request deduplication. Authentication and database queries benefit most.
**Usage:**
```typescript
import { cache } from "react";
export const getCurrentUser = cache(async () => {
const session = await auth();
if (!session?.user?.id) return null;
return await db.user.findUnique({
where: { id: session.user.id },
});
});
```
Within a single request, multiple calls to `getCurrentUser()` execute the query only once.
**Avoid inline objects as arguments:**
`React.cache()` compares each argument with `Object.is` (reference equality for objects) to determine cache hits. Inline objects create a new reference on every call, so they never hit the cache.
**Incorrect (always cache miss):**
```typescript
const getUser = cache(async (params: { uid: number }) => {
return await db.user.findUnique({ where: { id: params.uid } });
});
// Each call creates new object, never hits cache
getUser({ uid: 1 });
getUser({ uid: 1 }); // Cache miss, runs query again
```
**Correct (cache hit):**
```typescript
const getUser = cache(async (uid: number) => {
return await db.user.findUnique({ where: { id: uid } });
});
// Primitive args use value equality
getUser(1);
getUser(1); // Cache hit, returns cached result
```
If you must pass objects, pass the same reference:
```typescript
const params = { uid: 1 };
getUser(params); // Query runs
getUser(params); // Cache hit (same reference)
```
**Next.js-Specific Note:**
In Next.js, the `fetch` API is automatically extended with request memoization. Requests with the same URL and options are automatically deduplicated within a single request, so you don't need `React.cache()` for `fetch` calls. However, `React.cache()` is still essential for other async tasks:
- Database queries (Prisma, Drizzle, etc.)
- Heavy computations
- Authentication checks
- File system operations
- Any non-fetch async work
Use `React.cache()` to deduplicate these operations across your component tree.
Reference: [React.cache documentation](https://react.dev/reference/react/cache)

View File

@@ -1,83 +0,0 @@
---
title: Parallel Data Fetching with Component Composition
impact: CRITICAL
impactDescription: eliminates server-side waterfalls
tags: server, rsc, parallel-fetching, composition
---
## Parallel Data Fetching with Component Composition
React Server Components execute sequentially within a tree. Restructure with composition to parallelize data fetching.
**Incorrect (Sidebar waits for Page's fetch to complete):**
```tsx
export default async function Page() {
const header = await fetchHeader();
return (
<div>
<div>{header}</div>
<Sidebar />
</div>
);
}
async function Sidebar() {
const items = await fetchSidebarItems();
return <nav>{items.map(renderItem)}</nav>;
}
```
**Correct (both fetch simultaneously):**
```tsx
async function Header() {
const data = await fetchHeader();
return <div>{data}</div>;
}
async function Sidebar() {
const items = await fetchSidebarItems();
return <nav>{items.map(renderItem)}</nav>;
}
export default function Page() {
return (
<div>
<Header />
<Sidebar />
</div>
);
}
```
**Alternative with children prop:**
```tsx
async function Header() {
const data = await fetchHeader();
return <div>{data}</div>;
}
async function Sidebar() {
const items = await fetchSidebarItems();
return <nav>{items.map(renderItem)}</nav>;
}
function Layout({ children }: { children: ReactNode }) {
return (
<div>
<Header />
{children}
</div>
);
}
export default function Page() {
return (
<Layout>
<Sidebar />
</Layout>
);
}
```

View File

@@ -1,38 +0,0 @@
---
title: Minimize Serialization at RSC Boundaries
impact: HIGH
impactDescription: reduces data transfer size
tags: server, rsc, serialization, props
---
## Minimize Serialization at RSC Boundaries
The React Server/Client boundary serializes all object properties into strings and embeds them in the HTML response and subsequent RSC requests. This serialized data directly impacts page weight and load time, so **size matters a lot**. Only pass fields that the client actually uses.
**Incorrect (serializes all 50 fields):**
```tsx
async function Page() {
const user = await fetchUser(); // 50 fields
return <Profile user={user} />;
}
("use client");
function Profile({ user }: { user: User }) {
return <div>{user.name}</div>; // uses 1 field
}
```
**Correct (serializes only 1 field):**
```tsx
async function Page() {
const user = await fetchUser();
return <Profile name={user.name} />;
}
("use client");
function Profile({ name }: { name: string }) {
return <div>{name}</div>;
}
```

View File

@@ -1,11 +1,11 @@
{
"$schema": "https://unpkg.com/@changesets/config@2.2.0/schema.json",
"access": "public",
"baseBranch": "main",
"changelog": "@changesets/cli/changelog",
"commit": false,
"fixed": [],
"ignore": ["@formbricks/demo", "@formbricks/web"],
"linked": [],
"updateInternalDependencies": "patch"
"access": "public",
"baseBranch": "main",
"updateInternalDependencies": "patch",
"ignore": ["@formbricks/demo", "@formbricks/web"]
}

View File

@@ -1,352 +0,0 @@
# Create New Question Element
Use this command to scaffold a new question element component in `packages/survey-ui/src/elements/`.
## Usage
When creating a new question type (e.g., `single-select`, `rating`, `nps`), follow these steps:
1. **Create the component file** `{question-type}.tsx` with this structure:
```typescript
import * as React from "react";
import { ElementHeader } from "../components/element-header";
import { useTextDirection } from "../hooks/use-text-direction";
import { cn } from "../lib/utils";
interface {QuestionType}Props {
/** Unique identifier for the element container */
elementId: string;
/** The main question or prompt text displayed as the headline */
headline: string;
/** Optional descriptive text displayed below the headline */
description?: string;
/** Unique identifier for the input/control group */
inputId: string;
/** Current value */
value?: {ValueType};
/** Callback function called when the value changes */
onChange: (value: {ValueType}) => void;
/** Whether the field is required (shows asterisk indicator) */
required?: boolean;
/** Error message to display */
errorMessage?: string;
/** Text direction: 'ltr' (left-to-right), 'rtl' (right-to-left), or 'auto' (auto-detect from content) */
dir?: "ltr" | "rtl" | "auto";
/** Whether the controls are disabled */
disabled?: boolean;
// Add question-specific props here
}
function {QuestionType}({
elementId,
headline,
description,
inputId,
value,
onChange,
required = false,
errorMessage,
dir = "auto",
disabled = false,
// ... question-specific props
}: {QuestionType}Props): React.JSX.Element {
// Ensure value is always the correct type (handle undefined/null)
const currentValue = value ?? {defaultValue};
// Detect text direction from content
const detectedDir = useTextDirection({
dir,
textContent: [headline, description ?? "", /* add other text content from question */],
});
return (
<div className="w-full space-y-4" id={elementId} dir={detectedDir}>
{/* Headline */}
<ElementHeader
headline={headline}
description={description}
required={required}
htmlFor={inputId}
/>
{/* Question-specific controls */}
{/* TODO: Add your question-specific UI here */}
{/* Error message */}
{errorMessage && (
<div className="text-destructive flex items-center gap-1 text-sm" dir={detectedDir}>
<span>{errorMessage}</span>
</div>
)}
</div>
);
}
export { {QuestionType} };
export type { {QuestionType}Props };
```
2. **Create the Storybook file** `{question-type}.stories.tsx`:
```typescript
import type { Decorator, Meta, StoryObj } from "@storybook/react";
import React from "react";
import { {QuestionType}, type {QuestionType}Props } from "./{question-type}";
// Styling options for the StylingPlayground story
interface StylingOptions {
// Question styling
questionHeadlineFontFamily: string;
questionHeadlineFontSize: string;
questionHeadlineFontWeight: string;
questionHeadlineColor: string;
questionDescriptionFontFamily: string;
questionDescriptionFontWeight: string;
questionDescriptionFontSize: string;
questionDescriptionColor: string;
// Add component-specific styling options here
}
type StoryProps = {QuestionType}Props & Partial<StylingOptions>;
const meta: Meta<StoryProps> = {
title: "UI-package/Elements/{QuestionType}",
component: {QuestionType},
parameters: {
layout: "centered",
docs: {
description: {
component: "A complete {question type} question element...",
},
},
},
tags: ["autodocs"],
argTypes: {
headline: {
control: "text",
description: "The main question text",
table: { category: "Content" },
},
description: {
control: "text",
description: "Optional description or subheader text",
table: { category: "Content" },
},
value: {
control: "object",
description: "Current value",
table: { category: "State" },
},
required: {
control: "boolean",
description: "Whether the field is required",
table: { category: "Validation" },
},
errorMessage: {
control: "text",
description: "Error message to display",
table: { category: "Validation" },
},
dir: {
control: { type: "select" },
options: ["ltr", "rtl", "auto"],
description: "Text direction for RTL support",
table: { category: "Layout" },
},
disabled: {
control: "boolean",
description: "Whether the controls are disabled",
table: { category: "State" },
},
onChange: {
action: "changed",
table: { category: "Events" },
},
// Add question-specific argTypes here
},
};
export default meta;
type Story = StoryObj<StoryProps>;
// Decorator to apply CSS variables from story args
const withCSSVariables: Decorator<StoryProps> = (Story, context) => {
const args = context.args as StoryProps;
const {
questionHeadlineFontFamily,
questionHeadlineFontSize,
questionHeadlineFontWeight,
questionHeadlineColor,
questionDescriptionFontFamily,
questionDescriptionFontSize,
questionDescriptionFontWeight,
questionDescriptionColor,
// Extract component-specific styling options
} = args;
const cssVarStyle: React.CSSProperties & Record<string, string | undefined> = {
"--fb-question-headline-font-family": questionHeadlineFontFamily,
"--fb-question-headline-font-size": questionHeadlineFontSize,
"--fb-question-headline-font-weight": questionHeadlineFontWeight,
"--fb-question-headline-color": questionHeadlineColor,
"--fb-question-description-font-family": questionDescriptionFontFamily,
"--fb-question-description-font-size": questionDescriptionFontSize,
"--fb-question-description-font-weight": questionDescriptionFontWeight,
"--fb-question-description-color": questionDescriptionColor,
// Add component-specific CSS variables
};
return (
<div style={cssVarStyle} className="w-[600px]">
<Story />
</div>
);
};
export const StylingPlayground: Story = {
args: {
headline: "Example question?",
description: "Example description",
// Default styling values
questionHeadlineFontFamily: "system-ui, sans-serif",
questionHeadlineFontSize: "1.125rem",
questionHeadlineFontWeight: "600",
questionHeadlineColor: "#1e293b",
questionDescriptionFontFamily: "system-ui, sans-serif",
questionDescriptionFontSize: "0.875rem",
questionDescriptionFontWeight: "400",
questionDescriptionColor: "#64748b",
// Add component-specific default values
},
argTypes: {
// Question styling argTypes
questionHeadlineFontFamily: {
control: "text",
table: { category: "Question Styling" },
},
questionHeadlineFontSize: {
control: "text",
table: { category: "Question Styling" },
},
questionHeadlineFontWeight: {
control: "text",
table: { category: "Question Styling" },
},
questionHeadlineColor: {
control: "color",
table: { category: "Question Styling" },
},
questionDescriptionFontFamily: {
control: "text",
table: { category: "Question Styling" },
},
questionDescriptionFontSize: {
control: "text",
table: { category: "Question Styling" },
},
questionDescriptionFontWeight: {
control: "text",
table: { category: "Question Styling" },
},
questionDescriptionColor: {
control: "color",
table: { category: "Question Styling" },
},
// Add component-specific argTypes
},
decorators: [withCSSVariables],
};
export const Default: Story = {
args: {
headline: "Example question?",
// Add default props
},
};
export const WithDescription: Story = {
args: {
headline: "Example question?",
description: "Example description text",
},
};
export const Required: Story = {
args: {
headline: "Example question?",
required: true,
},
};
export const WithError: Story = {
args: {
headline: "Example question?",
errorMessage: "This field is required",
required: true,
},
};
export const Disabled: Story = {
args: {
headline: "Example question?",
disabled: true,
},
};
export const RTL: Story = {
args: {
headline: "مثال على السؤال؟",
description: "مثال على الوصف",
// Add RTL-specific props
},
};
```
3. **Add CSS variables** to `packages/survey-ui/src/styles/globals.css` if needed:
```css
/* Component-specific CSS variables */
--fb-{component}-{property}: {default-value};
```
4. **Export from** `packages/survey-ui/src/index.ts`:
```typescript
export { {QuestionType}, type {QuestionType}Props } from "./elements/{question-type}";
```
## Key Requirements
- ✅ Always use `ElementHeader` component for headline/description
- ✅ Always use `useTextDirection` hook for RTL support
- ✅ Always handle undefined/null values safely (e.g., `Array.isArray(value) ? value : []`)
- ✅ Always include error message display if applicable
- ✅ Always support disabled state if applicable
- ✅ Always add JSDoc comments to props interface
- ✅ Always create Storybook stories with styling playground
- ✅ Always export types from component file
- ✅ Always add to index.ts exports
## Examples
- `open-text.tsx` - Text input/textarea question (string value)
- `multi-select.tsx` - Multiple checkbox selection (string[] value)
## Checklist
When creating a new question element, verify:
- [ ] Component file created with proper structure
- [ ] Props interface with JSDoc comments for all props
- [ ] Uses `ElementHeader` component (don't duplicate header logic)
- [ ] Uses `useTextDirection` hook for RTL support
- [ ] Handles undefined/null values safely
- [ ] Storybook file created with styling playground
- [ ] Includes common stories: Default, WithDescription, Required, WithError, Disabled, RTL
- [ ] CSS variables added to `globals.css` if component needs custom styling
- [ ] Exported from `index.ts` with types
- [ ] TypeScript types properly exported
- [ ] Error message display included if applicable
- [ ] Disabled state supported if applicable

.devcontainer/Dockerfile Normal file
View File

@@ -0,0 +1,16 @@
# [Choice] Node.js version (use -bullseye variants on local arm64/Apple Silicon): 20, 18, 16, 18-bullseye, 16-bullseye, 14-bullseye, 18-buster, 16-buster, 14-buster
ARG VARIANT=20
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
# [Optional] Uncomment if you want to install an additional version of node using nvm
# ARG EXTRA_NODE_VERSION=10
# RUN su node -c "source /usr/local/share/nvm/nvm.sh && nvm install ${EXTRA_NODE_VERSION}"
# [Optional] Uncomment if you want to install more global node modules
# RUN su node -c "npm install -g <your-package-list-here>"
RUN su node -c "npm install -g pnpm"

View File

@@ -1,6 +1,29 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.2/containers/javascript-node-postgres
// Update the VARIANT arg in docker-compose.yml to pick a Node.js version
{
"features": {},
"image": "mcr.microsoft.com/devcontainers/universal:2",
"postAttachCommand": "pnpm go",
"postCreateCommand": "cp .env.example .env && sed -i '/^ENCRYPTION_KEY=/c\\ENCRYPTION_KEY='$(openssl rand -hex 32) .env && sed -i '/^NEXTAUTH_SECRET=/c\\NEXTAUTH_SECRET='$(openssl rand -hex 32) .env && sed -i '/^CRON_SECRET=/c\\CRON_SECRET='$(openssl rand -hex 32) .env && pnpm install && pnpm db:migrate:dev"
"name": "Node.js & PostgreSQL",
"dockerComposeFile": "docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspace",
// Configure tool-specific properties.
"customizations": {
// Configure properties specific to VS Code.
"vscode": {
// Add the IDs of extensions you want installed when the container is created.
"extensions": ["dbaeumer.vscode-eslint"]
}
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// This can be used to network with other containers or with the host.
"forwardPorts": [3000, 5432, 8025],
// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "cp .env.example .env && sed -i '/^ENCRYPTION_KEY=/c\\ENCRYPTION_KEY='$(openssl rand -hex 32) .env && sed -i '/^NEXTAUTH_SECRET=/c\\NEXTAUTH_SECRET='$(openssl rand -hex 32) .env && sed -i '/^CRON_SECRET=/c\\CRON_SECRET='$(openssl rand -hex 32) .env && pnpm install && pnpm db:migrate:dev",
"postAttachCommand": "pnpm dev --filter=@formbricks/web... --filter=@formbricks/demo...",
// Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "node"
}

View File

@@ -0,0 +1,51 @@
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
args:
# Update 'VARIANT' to pick an LTS version of Node.js: 20, 18, 16, 14.
# Append -bullseye or -buster to pin to an OS version.
# Use -bullseye variants on local arm64/Apple Silicon.
VARIANT: "20"
volumes:
- ..:/workspace:cached
# Overrides default command so things don't shut down after the process ends.
command: sleep infinity
# Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
network_mode: service:db
# Uncomment the next line to use a non-root user for all processes.
# user: node
# Use "forwardPorts" in **devcontainer.json** to forward an app port locally.
# (Adding the "ports" property to this file will not forward from a Codespace.)
db:
image: pgvector/pgvector:pg17
restart: unless-stopped
volumes:
- postgres-data:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: formbricks
# Add "forwardPorts": ["5432"] to **devcontainer.json** to forward PostgreSQL locally.
# (Adding the "ports" property to this file will not forward from a Codespace.)
mailhog:
image: mailhog/mailhog
network_mode: service:app
logging:
      driver: "none" # disable saving logs
# ports:
# - 8025:8025 # web ui
# 1025:1025 # smtp server
volumes:
postgres-data: null

View File

@@ -1,56 +1,39 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
**/node_modules
# **/node_modules
.pnp
.pnp.js
.pnpm-store/
# testing
**/coverage
coverage
# next.js
**/.next/
**/out/
**/.next
**/out
**/build
# node
**/dist/
**/dist
# misc
**/.DS_Store
.DS_Store
*.pem
Zone.Identifier
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# local env files
**/.env
**/.env.local
**/.env.development.local
**/.env.test.local
**/.env.production.local
!packages/database/.env
!apps/web/.env
# build tools
# turbo
.turbo
**/*vite.config.*.timestamp-*
# environment specific
# nixos stuff
.direnv
# Playwright
/test-results/
/playwright-report/
/blob-report/
/playwright/.cache/
.vscode
.github
**/.turbo
# project specific
packages/lib/uploads
apps/web/public/js
packages/database/migrations
branch.json
.env

View File

@@ -9,12 +9,8 @@
WEBAPP_URL=http://localhost:3000
# Required for next-auth. Should be the same as WEBAPP_URL
# If your application uses a custom base path, specify the route to the API endpoint in full, e.g. NEXTAUTH_URL=https://example.com/custom-route/api/auth
NEXTAUTH_URL=http://localhost:3000
# Can be used to deploy the application under a sub-path of a domain. This can only be set at build time
# BASE_PATH=
# Encryption keys
# Please set both for now, we will change this in the future
@@ -29,9 +25,6 @@ NEXTAUTH_SECRET=
# You can use: `openssl rand -hex 32` to generate a secure one
CRON_SECRET=
# Set the minimum log level(debug, info, warn, error, fatal)
LOG_LEVEL=info
##############
# DATABASE #
##############
@@ -46,7 +39,6 @@ DATABASE_URL='postgresql://postgres:postgres@localhost:5432/formbricks?schema=pu
# See optional configurations below if you want to disable these features.
MAIL_FROM=noreply@example.com
MAIL_FROM_NAME=Formbricks
SMTP_HOST=localhost
SMTP_PORT=1025
# Enable SMTP_SECURE_ENABLED for TLS (port 465)
@@ -54,9 +46,6 @@ SMTP_SECURE_ENABLED=0
SMTP_USER=smtpUser
SMTP_PASSWORD=smtpPassword
# If set to 0, the server will not require SMTP_USER and SMTP_PASSWORD (default is 1)
# SMTP_AUTHENTICATED=
# If set to 0, the server will accept connections without requiring authorization from the list of supplied CAs (default is 1).
# SMTP_REJECT_UNAUTHORIZED_TLS=0
@@ -66,6 +55,9 @@ SMTP_PASSWORD=smtpPassword
# Uncomment the variables you would like to use and customize the values.
# Custom local storage path for file uploads
#UPLOADS_DIR=
##############
# S3 STORAGE #
##############
@@ -81,9 +73,6 @@ S3_ENDPOINT_URL=
# Force path style for S3 compatible storage (0 for disabled, 1 for enabled)
S3_FORCE_PATH_STYLE=0
# Set this URL to add a public domain for all your client facing routes(default is WEBAPP_URL)
# PUBLIC_URL=https://survey.example.com
#####################
# Disable Features #
#####################
@@ -94,13 +83,16 @@ EMAIL_VERIFICATION_DISABLED=1
# Password Reset. If you enable Password Reset functionality you have to setup SMTP-Settings, too.
PASSWORD_RESET_DISABLED=1
# Signup. Disable the ability for new users to create an account.
# Note: This variable is only available to the SaaS setup of Formbricks Cloud. Signup is disabled by default for self-hosting.
# SIGNUP_DISABLED=1
# Email login. Disable the ability for users to login with email.
# EMAIL_AUTH_DISABLED=1
# Organization Invite. Disable the ability for invited users to create an account.
# INVITE_DISABLED=1
##########
# Other #
##########
@@ -109,15 +101,6 @@ PASSWORD_RESET_DISABLED=1
PRIVACY_URL=
TERMS_URL=
IMPRINT_URL=
IMPRINT_ADDRESS=
# Configure Turnstile in signup flow
# TURNSTILE_SITE_KEY=
# TURNSTILE_SECRET_KEY=
# Google reCAPTCHA v3 keys
RECAPTCHA_SITE_KEY=
RECAPTCHA_SECRET_KEY=
# Configure Github Login
GITHUB_ID=
@@ -139,9 +122,6 @@ AZUREAD_TENANT_ID=
# OIDC_DISPLAY_NAME=
# OIDC_SIGNING_ALGORITHM=
# Configure SAML SSO
# SAML_DATABASE_URL=postgresql://postgres:postgres@localhost:5432/formbricks-saml
# Configure this when you want to ship JS & CSS files from a complete URL instead of the current domain
# ASSET_PREFIX_URL=
@@ -153,6 +133,11 @@ NOTION_OAUTH_CLIENT_SECRET=
STRIPE_SECRET_KEY=
STRIPE_WEBHOOK_SECRET=
# Configure Formbricks usage within Formbricks
NEXT_PUBLIC_FORMBRICKS_API_HOST=
NEXT_PUBLIC_FORMBRICKS_ENVIRONMENT_ID=
NEXT_PUBLIC_FORMBRICKS_ONBOARDING_SURVEY_ID=
# Oauth credentials for Google sheet integration
GOOGLE_SHEETS_CLIENT_ID=
GOOGLE_SHEETS_CLIENT_SECRET=
@@ -168,18 +153,15 @@ SLACK_CLIENT_SECRET=
# Enterprise License Key
ENTERPRISE_LICENSE_KEY=
# Internal Environment (production, staging) - used for internal staging environment
# ENVIRONMENT=production
# Automatically assign new users to a specific organization and role within that organization
# Insert an existing organization id or generate a valid CUID for a new one at https://www.getuniqueid.com/cuid (e.g. cjld2cjxh0000qzrmn831i7rn)
# (Role Management is an Enterprise feature)
# AUTH_SSO_DEFAULT_TEAM_ID=
# AUTH_SKIP_INVITE_FOR_SSO=
# DEFAULT_ORGANIZATION_ID=
# DEFAULT_ORGANIZATION_ROLE=admin
# Send new users to Brevo
# BREVO_API_KEY=
# BREVO_LIST_ID=
# Send new users to customer.io
# CUSTOMER_IO_API_KEY=
# CUSTOMER_IO_SITE_ID=
# Ignore Rate Limiting across the Formbricks app
# RATE_LIMITING_DISABLED=1
@@ -191,38 +173,16 @@ ENTERPRISE_LICENSE_KEY=
UNSPLASH_ACCESS_KEY=
# The below is used for Next Caching (uses In-Memory from Next Cache if not provided)
REDIS_URL=redis://localhost:6379
# REDIS_URL=redis://localhost:6379
# The below is used for Rate Limiting (uses an in-memory LRU cache if not provided). You can use a service like Webdis for this.
# REDIS_HTTP_URL=
# Chatwoot
# CHATWOOT_BASE_URL=
# CHATWOOT_WEBSITE_TOKEN=
# Disable custom cache handler if necessary (e.g. if deployed on Vercel)
# CUSTOM_CACHE_DISABLED=1
# Enable Prometheus metrics
# PROMETHEUS_ENABLED=
# PROMETHEUS_EXPORTER_PORT=
# The SENTRY_DSN is used for error tracking and performance monitoring with Sentry.
# SENTRY_DSN=
# The SENTRY_AUTH_TOKEN variable is picked up by the Sentry Build Plugin.
# It's used automatically by Sentry during the build for authentication when uploading source maps.
# SENTRY_AUTH_TOKEN=
# The SENTRY_ENVIRONMENT is the environment which the error will belong to in the Sentry dashboard
# SENTRY_ENVIRONMENT=
# Configure the minimum role for user management from the UI (owner, manager, disabled)
# USER_MANAGEMENT_MINIMUM_ROLE="manager"
# Configure the maximum age for the session in seconds. Default is 86400 (24 hours)
# SESSION_MAX_AGE=86400
# Audit logs options. Default 0.
# AUDIT_LOG_ENABLED=0
# If the ip should be added in the log or not. Default 0
# AUDIT_LOG_GET_USER_IP=0
# Lingo.dev API key for translation generation
LINGODOTDEV_API_KEY=your_api_key_here
# Azure AI settings
# AI_AZURE_RESSOURCE_NAME=
# AI_AZURE_API_KEY=
# AI_AZURE_EMBEDDINGS_DEPLOYMENT_ID=
# AI_AZURE_LLM_DEPLOYMENT_ID=

View File

@@ -1,13 +0,0 @@
module.exports = {
root: true,
ignorePatterns: ["node_modules/", "dist/", "coverage/"],
overrides: [
{
files: ["packages/cache/**/*.{ts,js}"],
extends: ["@formbricks/eslint-config/library.js"],
parserOptions: {
project: "./packages/cache/tsconfig.json",
},
},
],
};

View File

@@ -1,33 +1,81 @@
name: Bug report
description: "Found a bug? Please fill out the sections below. \U0001F44D"
type: bug
projects: "formbricks/8"
labels: ["bug"]
title: "[BUG]"
labels: bug
assignees: []
body:
- type: textarea
id: issue-summary
attributes:
label: Issue Summary
      description: A summary of the issue. This needs to be a clear, detail-rich summary.
validations:
required: true
- type: textarea
id: issue-expected-behavior
attributes:
label: Expected Behavior
description: A clear and concise description of what you expected to happen.
validations:
required: false
- type: textarea
id: other-information
attributes:
label: Other information (incl. screenshots, Formbricks version, steps to reproduce,...)
validations:
required: false
- type: dropdown
id: environment
attributes:
label: Your Environment
options:
- Formbricks Cloud (app.formbricks.com)
- Self-hosted Formbricks
- type: textarea
id: issue-summary
attributes:
label: Issue Summary
      description: A summary of the issue. This needs to be a clear, detail-rich summary.
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to Reproduce
value: |
1. (for example) Went to ...
2. Clicked on...
3. ...
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: other-information
attributes:
label: Other information
validations:
required: false
- type: textarea
id: screenshots
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem.
validations:
required: false
- type: checkboxes
id: environment
attributes:
label: Environment
options:
- label: Formbricks Cloud (app.formbricks.com)
- label: Self-hosted Formbricks
- type: textarea
id: desktop-version
attributes:
label: Desktop (please complete the following information)
description: |
examples:
- **OS**: [e.g. iOS]
- **Browser**: [e.g. chrome, safari]
- **Version**: [e.g. 22]
value: |
- OS:
- Node:
- npm:
render: markdown
validations:
required: true
- type: markdown
id: nodejs-version
attributes:
value: |
        #### Node.js version
[e.g. v18.15.0]
- type: markdown
id: anything-else
attributes:
value: |
#### Anything else?
- Screen recording, console logs, network requests: You can make a recording with [Loom](https://www.loom.com).
- Anything else that you think could be an issue?

View File

@@ -1,5 +1,5 @@
blank_issues_enabled: true
blank_issues_enabled: false
contact_links:
- name: Questions
url: https://github.com/formbricks/formbricks/discussions
    about: Need help self-hosting or want to ask a general question about the project? Open a discussion
url: https://formbricks.com/discord
about: Ask a general question about the project on our Discord server

View File

@@ -1,7 +1,8 @@
name: Feature request
description: "Suggest an idea for this project \U0001F680"
type: feature
projects: "formbricks/21"
title: "[FEATURE]"
labels: enhancement
assignees: []
body:
- type: textarea
id: problem-description
@@ -17,6 +18,13 @@ body:
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternate-solution-description
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: additional-context
attributes:
@@ -25,9 +33,15 @@ body:
validations:
required: false
- type: markdown
id: formbricks-info
attributes:
value: |
### Additional resources 🤓
### How we code at Formbricks 🤓
- Check out our [Contributor Docs](https://formbricks.com/docs/developer-docs/contributing/get-started)
        - Anything unclear? [Ask in GitHub Discussions](https://github.com/formbricks/formbricks/discussions)
        - Follow the best practices laid out in our [Contributor Docs](https://formbricks.com/docs/contributing/how-we-code)
- First time: Please read our [introductory blog post](https://formbricks.com/blog/join-the-formtribe)
- All UI components are in the package `formbricks/ui`
- Run `pnpm go` to find a demo app to test in-app surveys at `localhost:3002`
- Everything is type-safe.
        - We use **ChatGPT** to help refactor code.
- Anything unclear? [Ask in Discord](https://formbricks.com/discord)

View File

@@ -0,0 +1,33 @@
name: oss.gg hack submission 🕹️
description: "Submit your contribution for the for the oss.gg hackathon"
title: "[🕹️]"
labels: 🕹️ oss.gg, player submission, hacktoberfest
assignees: []
body:
- type: textarea
id: contribution-name
attributes:
label: What side quest or challenge are you solving?
description: Add the name of the side quest or challenge.
validations:
required: true
- type: textarea
id: points
attributes:
label: Points
description: How many points are assigned to this contribution?
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
      description: What's the task you performed?
    validations:
      required: true
- type: textarea
id: proof
attributes:
label: Provide proof that you've completed the task
description: Screenshots, loom recordings, links to the content you shared or interacted with.
validations:
required: true

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View File

@@ -0,0 +1,40 @@
<!-- We require pull request titles to follow the Conventional Commits specification ( https://www.conventionalcommits.org/en/v1.0.0/#summary ). Please make sure your title follow these conventions -->
## What does this PR do?
<!-- Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. -->
Fixes # (issue)
<!-- Please provide screenshots or a Loom video for visual changes to speed up reviews
Loom Video: https://www.loom.com/
-->
## How should this be tested?
<!-- Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration -->
- Test A
- Test B
## Checklist
<!-- We're starting to get more and more contributions. Please help us make this efficient for all of us by going through this checklist. Please tick off what you did -->
### Required
- [ ] Filled out the "How to test" section in this PR
- [ ] Read [How we Code at Formbricks](https://formbricks.com/docs/contributing/how-we-code)
- [ ] Self-reviewed my own code
- [ ] Commented on my code in hard-to-understand bits
- [ ] Ran `pnpm build`
- [ ] Checked for warnings, there are none
- [ ] Removed all `console.logs`
- [ ] Merged the latest changes from main onto my branch with `git pull origin main`
- [ ] My changes don't cause any responsiveness issues
- [ ] First PR at Formbricks? [Please sign the CLA!](https://cla-assistant.io/formbricks/formbricks) Without it we won't be able to merge it 🙏
### Appreciated
- [ ] If a UI change was made: Added a screen recording or screenshots to this PR
- [ ] Updated the Formbricks Docs if changes were necessary

View File

@@ -1,319 +0,0 @@
name: Build and Push Docker Image
description: |
Unified Docker build and push action for both ECR and GHCR registries.
Supports:
- ECR builds for Formbricks Cloud deployment
- GHCR builds for community self-hosting
- Automatic version resolution and tagging
- Conditional signing and deployment tags
inputs:
registry_type:
description: "Registry type: 'ecr' or 'ghcr'"
required: true
# Version input
version:
description: "Explicit version (SemVer only, e.g., 1.2.3). If provided, this version is used directly. If empty, version is auto-generated from branch name."
required: false
experimental_mode:
description: "Enable experimental timestamped versions"
required: false
default: "false"
# ECR specific inputs
ecr_registry:
description: "ECR registry URL (required for ECR builds)"
required: false
ecr_repository:
description: "ECR repository name (required for ECR builds)"
required: false
ecr_region:
description: "ECR AWS region (required for ECR builds)"
required: false
aws_role_arn:
description: "AWS role ARN for ECR authentication (required for ECR builds)"
required: false
# GHCR specific inputs
ghcr_image_name:
description: "GHCR image name (required for GHCR builds)"
required: false
# Deployment options
deploy_production:
description: "Tag image for production deployment"
required: false
default: "false"
deploy_staging:
description: "Tag image for staging deployment"
required: false
default: "false"
is_prerelease:
description: "Whether this is a prerelease (auto-tags for staging/production)"
required: false
default: "false"
make_latest:
description: "Whether to tag as latest/production (from GitHub release 'Set as the latest release' option)"
required: false
default: "false"
# Build options
dockerfile:
description: "Path to Dockerfile"
required: false
default: "apps/web/Dockerfile"
context:
description: "Build context"
required: false
default: "."
outputs:
image_tag:
description: "Resolved image tag used for the build"
value: ${{ steps.version.outputs.version }}
registry_tags:
description: "Complete registry tags that were pushed"
value: ${{ steps.build.outputs.tags }}
image_digest:
description: "Image digest from the build"
value: ${{ steps.build.outputs.digest }}
runs:
using: "composite"
steps:
- name: Validate inputs
shell: bash
env:
REGISTRY_TYPE: ${{ inputs.registry_type }}
ECR_REGISTRY: ${{ inputs.ecr_registry }}
ECR_REPOSITORY: ${{ inputs.ecr_repository }}
ECR_REGION: ${{ inputs.ecr_region }}
AWS_ROLE_ARN: ${{ inputs.aws_role_arn }}
GHCR_IMAGE_NAME: ${{ inputs.ghcr_image_name }}
run: |
set -euo pipefail
if [[ "$REGISTRY_TYPE" != "ecr" && "$REGISTRY_TYPE" != "ghcr" ]]; then
echo "ERROR: registry_type must be 'ecr' or 'ghcr', got: $REGISTRY_TYPE"
exit 1
fi
if [[ "$REGISTRY_TYPE" == "ecr" ]]; then
if [[ -z "$ECR_REGISTRY" || -z "$ECR_REPOSITORY" || -z "$ECR_REGION" || -z "$AWS_ROLE_ARN" ]]; then
echo "ERROR: ECR builds require ecr_registry, ecr_repository, ecr_region, and aws_role_arn"
exit 1
fi
fi
if [[ "$REGISTRY_TYPE" == "ghcr" ]]; then
if [[ -z "$GHCR_IMAGE_NAME" ]]; then
echo "ERROR: GHCR builds require ghcr_image_name"
exit 1
fi
fi
echo "SUCCESS: Input validation passed for $REGISTRY_TYPE build"
- name: Resolve Docker version
id: version
uses: ./.github/actions/resolve-docker-version
with:
version: ${{ inputs.version }}
current_branch: ${{ github.ref_name }}
experimental_mode: ${{ inputs.experimental_mode }}
- name: Update package.json version
uses: ./.github/actions/update-package-version
with:
version: ${{ steps.version.outputs.version }}
- name: Configure AWS credentials (ECR only)
if: ${{ inputs.registry_type == 'ecr' }}
uses: aws-actions/configure-aws-credentials@7474bc4690e29a8392af63c5b98e7449536d5c3a # v4.2.0
with:
role-to-assume: ${{ inputs.aws_role_arn }}
aws-region: ${{ inputs.ecr_region }}
- name: Log in to Amazon ECR (ECR only)
if: ${{ inputs.registry_type == 'ecr' }}
uses: aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 # v2.0.1
- name: Set up Docker build tools
uses: ./.github/actions/docker-build-setup
with:
registry: ${{ inputs.registry_type == 'ghcr' && 'ghcr.io' || '' }}
setup_cosign: ${{ inputs.registry_type == 'ghcr' && 'true' || 'false' }}
skip_login_on_pr: ${{ inputs.registry_type == 'ghcr' && 'true' || 'false' }}
- name: Build ECR tag list
if: ${{ inputs.registry_type == 'ecr' }}
id: ecr-tags
shell: bash
env:
IMAGE_TAG: ${{ steps.version.outputs.version }}
ECR_REGISTRY: ${{ inputs.ecr_registry }}
ECR_REPOSITORY: ${{ inputs.ecr_repository }}
DEPLOY_PRODUCTION: ${{ inputs.deploy_production }}
DEPLOY_STAGING: ${{ inputs.deploy_staging }}
IS_PRERELEASE: ${{ inputs.is_prerelease }}
MAKE_LATEST: ${{ inputs.make_latest }}
run: |
set -euo pipefail
# Start with the base image tag
TAGS="${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}"
# Handle automatic tagging based on release type
if [[ "${IS_PRERELEASE}" == "true" ]]; then
TAGS="${TAGS}\n${ECR_REGISTRY}/${ECR_REPOSITORY}:staging"
echo "Adding staging tag for prerelease"
elif [[ "${IS_PRERELEASE}" == "false" && "${MAKE_LATEST}" == "true" ]]; then
TAGS="${TAGS}\n${ECR_REGISTRY}/${ECR_REPOSITORY}:production"
echo "Adding production tag for stable release marked as latest"
fi
# Handle manual deployment overrides
if [[ "${DEPLOY_PRODUCTION}" == "true" ]]; then
TAGS="${TAGS}\n${ECR_REGISTRY}/${ECR_REPOSITORY}:production"
echo "Adding production tag (manual override)"
fi
if [[ "${DEPLOY_STAGING}" == "true" ]]; then
TAGS="${TAGS}\n${ECR_REGISTRY}/${ECR_REPOSITORY}:staging"
echo "Adding staging tag (manual override)"
fi
echo "ECR tags generated:"
echo -e "${TAGS}"
{
echo "tags<<EOF"
echo -e "${TAGS}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: Generate additional GHCR tags for releases
if: ${{ inputs.registry_type == 'ghcr' && inputs.experimental_mode == 'false' && (github.event_name == 'workflow_call' || github.event_name == 'release' || github.event_name == 'workflow_dispatch') }}
id: ghcr-extra-tags
shell: bash
env:
VERSION: ${{ steps.version.outputs.version }}
IMAGE_NAME: ${{ inputs.ghcr_image_name }}
IS_PRERELEASE: ${{ inputs.is_prerelease }}
MAKE_LATEST: ${{ inputs.make_latest }}
run: |
set -euo pipefail
# Start with base version tag
TAGS="ghcr.io/${IMAGE_NAME}:${VERSION}"
# For proper SemVer releases, add major.minor and major tags
if [[ "${VERSION}" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
# Extract major and minor versions
MAJOR=$(echo "${VERSION}" | cut -d. -f1)
MINOR=$(echo "${VERSION}" | cut -d. -f2)
TAGS="${TAGS}\nghcr.io/${IMAGE_NAME}:${MAJOR}.${MINOR}"
TAGS="${TAGS}\nghcr.io/${IMAGE_NAME}:${MAJOR}"
echo "Added SemVer tags: ${MAJOR}.${MINOR}, ${MAJOR}"
fi
# Add latest tag for stable releases marked as latest
if [[ "${IS_PRERELEASE}" == "false" && "${MAKE_LATEST}" == "true" ]]; then
TAGS="${TAGS}\nghcr.io/${IMAGE_NAME}:latest"
echo "Added latest tag for stable release marked as latest"
fi
echo "Generated GHCR tags:"
echo -e "${TAGS}"
# Debug: Show what will be passed to Docker build
echo "DEBUG: Tags for Docker build step:"
echo -e "${TAGS}"
{
echo "tags<<EOF"
echo -e "${TAGS}"
echo "EOF"
} >> "${GITHUB_OUTPUT}"
- name: Build GHCR metadata (experimental)
if: ${{ inputs.registry_type == 'ghcr' && inputs.experimental_mode == 'true' }}
id: ghcr-meta-experimental
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ghcr.io/${{ inputs.ghcr_image_name }}
tags: |
type=ref,event=branch
type=raw,value=${{ steps.version.outputs.version }}
- name: Debug Docker build tags
shell: bash
run: |
echo "=== DEBUG: Docker Build Configuration ==="
echo "Registry Type: ${{ inputs.registry_type }}"
echo "Experimental Mode: ${{ inputs.experimental_mode }}"
echo "Event Name: ${{ github.event_name }}"
echo "Is Prerelease: ${{ inputs.is_prerelease }}"
echo "Make Latest: ${{ inputs.make_latest }}"
echo "Version: ${{ steps.version.outputs.version }}"
if [[ "${{ inputs.registry_type }}" == "ecr" ]]; then
echo "ECR Tags: ${{ steps.ecr-tags.outputs.tags }}"
elif [[ "${{ inputs.experimental_mode }}" == "true" ]]; then
echo "GHCR Experimental Tags: ${{ steps.ghcr-meta-experimental.outputs.tags }}"
else
echo "GHCR Extra Tags: ${{ steps.ghcr-extra-tags.outputs.tags }}"
fi
- name: Build and push Docker image
id: build
uses: depot/build-push-action@636daae76684e38c301daa0c5eca1c095b24e780 # v1.14.0
with:
project: tw0fqmsx3c
token: ${{ env.DEPOT_PROJECT_TOKEN }}
context: ${{ inputs.context }}
file: ${{ inputs.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ inputs.registry_type == 'ecr' && steps.ecr-tags.outputs.tags || (inputs.registry_type == 'ghcr' && inputs.experimental_mode == 'true' && steps.ghcr-meta-experimental.outputs.tags) || (inputs.registry_type == 'ghcr' && inputs.experimental_mode == 'false' && steps.ghcr-extra-tags.outputs.tags) || (inputs.registry_type == 'ghcr' && format('ghcr.io/{0}:{1}', inputs.ghcr_image_name, steps.version.outputs.version)) || (inputs.registry_type == 'ecr' && format('{0}/{1}:{2}', inputs.ecr_registry, inputs.ecr_repository, steps.version.outputs.version)) }}
labels: ${{ inputs.registry_type == 'ghcr' && inputs.experimental_mode == 'true' && steps.ghcr-meta-experimental.outputs.labels || '' }}
secrets: |
database_url=${{ env.DUMMY_DATABASE_URL }}
encryption_key=${{ env.DUMMY_ENCRYPTION_KEY }}
redis_url=${{ env.DUMMY_REDIS_URL }}
sentry_auth_token=${{ env.SENTRY_AUTH_TOKEN }}
env:
DEPOT_PROJECT_TOKEN: ${{ env.DEPOT_PROJECT_TOKEN }}
DUMMY_DATABASE_URL: ${{ env.DUMMY_DATABASE_URL }}
DUMMY_ENCRYPTION_KEY: ${{ env.DUMMY_ENCRYPTION_KEY }}
DUMMY_REDIS_URL: ${{ env.DUMMY_REDIS_URL }}
SENTRY_AUTH_TOKEN: ${{ env.SENTRY_AUTH_TOKEN }}
- name: Sign GHCR image (GHCR only)
if: ${{ inputs.registry_type == 'ghcr' && (github.event_name == 'workflow_call' || github.event_name == 'release' || github.event_name == 'workflow_dispatch') }}
shell: bash
env:
TAGS: ${{ inputs.experimental_mode == 'true' && steps.ghcr-meta-experimental.outputs.tags || steps.ghcr-extra-tags.outputs.tags }}
DIGEST: ${{ steps.build.outputs.digest }}
run: |
set -euo pipefail
echo "${TAGS}" | xargs -I {} cosign sign --yes "{}@${DIGEST}"
- name: Output build summary
shell: bash
env:
REGISTRY_TYPE: ${{ inputs.registry_type }}
IMAGE_TAG: ${{ steps.version.outputs.version }}
VERSION_SOURCE: ${{ steps.version.outputs.source }}
run: |
echo "SUCCESS: Built and pushed Docker image to $REGISTRY_TYPE"
echo "Image Tag: $IMAGE_TAG (source: $VERSION_SOURCE)"
if [[ "$REGISTRY_TYPE" == "ecr" ]]; then
echo "ECR Registry: ${{ inputs.ecr_registry }}"
echo "ECR Repository: ${{ inputs.ecr_repository }}"
else
echo "GHCR Image: ghcr.io/${{ inputs.ghcr_image_name }}"
fi

View File

@@ -8,14 +8,6 @@ on:
required: false
default: "0"
inputs:
turbo_token:
description: "Turborepo token"
required: false
turbo_team:
description: "Turborepo team"
required: false
runs:
using: "composite"
steps:
@@ -49,7 +41,7 @@ runs:
if: steps.cache-build.outputs.cache-hit != 'true'
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
uses: pnpm/action-setup@v4
if: steps.cache-build.outputs.cache-hit != 'true'
- name: Install dependencies
@@ -62,18 +54,17 @@ runs:
shell: bash
- name: Fill ENCRYPTION_KEY, ENTERPRISE_LICENSE_KEY and E2E_TESTING in .env
env:
E2E_TESTING_MODE: ${{ inputs.e2e_testing_mode }}
run: |
RANDOM_KEY=$(openssl rand -hex 32)
sed -i "s/ENCRYPTION_KEY=.*/ENCRYPTION_KEY=${RANDOM_KEY}/" .env
echo "E2E_TESTING=$E2E_TESTING_MODE" >> .env
sed -i "s/CRON_SECRET=.*/CRON_SECRET=${RANDOM_KEY}/" .env
sed -i "s/NEXTAUTH_SECRET=.*/NEXTAUTH_SECRET=${RANDOM_KEY}/" .env
sed -i "s/ENTERPRISE_LICENSE_KEY=.*/ENTERPRISE_LICENSE_KEY=${RANDOM_KEY}/" .env
echo "E2E_TESTING=${{ inputs.e2e_testing_mode }}" >> .env
shell: bash
- run: |
pnpm build --filter=@formbricks/web...
if: steps.cache-build.outputs.cache-hit != 'true'
shell: bash
env:
TURBO_TOKEN: ${{ inputs.turbo_token }}
TURBO_TEAM: ${{ inputs.turbo_team }}

View File

@@ -1,106 +0,0 @@
name: Docker Build Setup
description: |
Sets up common Docker build tools and authentication with security validation.
Security Features:
- Registry URL validation
- Input sanitization
- Conditional setup based on event type
- Post-setup verification
Supports Depot CLI, Cosign signing, and Docker registry authentication.
inputs:
registry:
description: "Docker registry hostname to login to (e.g., ghcr.io, registry.example.com:5000). No paths allowed."
required: false
default: "ghcr.io"
setup_cosign:
description: "Whether to install cosign for image signing"
required: false
default: "true"
skip_login_on_pr:
description: "Whether to skip registry login on pull requests"
required: false
default: "true"
runs:
using: "composite"
steps:
- name: Validate inputs
shell: bash
env:
REGISTRY: ${{ inputs.registry }}
SETUP_COSIGN: ${{ inputs.setup_cosign }}
SKIP_LOGIN_ON_PR: ${{ inputs.skip_login_on_pr }}
run: |
set -euo pipefail
# Security: Validate registry input - must be hostname[:port] only, no paths
# Allow empty registry for cases where login is handled externally (e.g., ECR)
if [[ -n "$REGISTRY" ]]; then
if [[ "$REGISTRY" =~ / ]]; then
echo "ERROR: Invalid registry format: $REGISTRY"
echo "Registry must be host[:port] with no path (e.g., 'ghcr.io' or 'registry.example.com:5000')"
echo "Path components like 'ghcr.io/org' are not allowed as they break docker login"
exit 1
fi
# Validate hostname with optional port format
if [[ ! "$REGISTRY" =~ ^[a-zA-Z0-9.-]+(\:[0-9]+)?$ ]]; then
echo "ERROR: Invalid registry hostname format: $REGISTRY"
echo "Registry must be a valid hostname optionally with port (e.g., 'ghcr.io' or 'registry.example.com:5000')"
exit 1
fi
fi
# Validate boolean inputs
if [[ "$SETUP_COSIGN" != "true" && "$SETUP_COSIGN" != "false" ]]; then
echo "ERROR: setup_cosign must be 'true' or 'false', got: $SETUP_COSIGN"
exit 1
fi
if [[ "$SKIP_LOGIN_ON_PR" != "true" && "$SKIP_LOGIN_ON_PR" != "false" ]]; then
echo "ERROR: skip_login_on_pr must be 'true' or 'false', got: $SKIP_LOGIN_ON_PR"
exit 1
fi
echo "SUCCESS: Input validation passed"
- name: Set up Depot CLI
uses: depot/setup-action@b0b1ea4f69e92ebf5dea3f8713a1b0c37b2126a5 # v1.6.0
- name: Install cosign
# Install cosign when requested AND when we might actually sign images
# (i.e., non-PR contexts or when we login on PRs)
if: ${{ inputs.setup_cosign == 'true' && (inputs.skip_login_on_pr == 'false' || github.event_name != 'pull_request') }}
uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
- name: Log into registry
if: ${{ inputs.registry != '' && (inputs.skip_login_on_pr == 'false' || github.event_name != 'pull_request') }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ${{ inputs.registry }}
username: ${{ github.actor }}
password: ${{ github.token }}
- name: Verify setup completion
shell: bash
run: |
set -euo pipefail
# Verify Depot CLI is available
if ! command -v depot >/dev/null 2>&1; then
echo "ERROR: Depot CLI not found in PATH"
exit 1
fi
# Verify cosign if it should be installed (same conditions as install step)
if [[ "${{ inputs.setup_cosign }}" == "true" ]] && [[ "${{ inputs.skip_login_on_pr }}" == "false" || "${{ github.event_name }}" != "pull_request" ]]; then
if ! command -v cosign >/dev/null 2>&1; then
echo "ERROR: Cosign not found in PATH despite being requested"
exit 1
fi
fi
echo "SUCCESS: Docker build setup completed successfully"

View File

@@ -1,192 +0,0 @@
name: Resolve Docker Version
description: |
Resolves and validates Docker-compatible SemVer versions for container builds with comprehensive security.
Security Features:
- Command injection protection
- Input sanitization and validation
- Docker tag character restrictions
- Length limits and boundary checks
- Safe branch name handling
Supports multiple modes: release, manual override, branch auto-detection, and experimental timestamped versions.
inputs:
version:
description: "Explicit version (SemVer only, e.g., 1.2.3-beta). If provided, this version is used directly. If empty, version is auto-generated from branch name."
required: false
current_branch:
description: "Current branch name for auto-detection"
required: true
experimental_mode:
description: "Enable experimental mode with timestamp-based versions"
required: false
default: "false"
outputs:
version:
description: "Resolved Docker-compatible SemVer version"
value: ${{ steps.resolve.outputs.version }}
source:
description: "Source of version (release|override|branch)"
value: ${{ steps.resolve.outputs.source }}
normalized:
description: "Whether the version was normalized (true/false)"
value: ${{ steps.resolve.outputs.normalized }}
runs:
using: "composite"
steps:
- name: Resolve and validate Docker version
id: resolve
shell: bash
env:
EXPLICIT_VERSION: ${{ inputs.version }}
CURRENT_BRANCH: ${{ inputs.current_branch }}
EXPERIMENTAL_MODE: ${{ inputs.experimental_mode }}
run: |
set -euo pipefail
# Function to validate SemVer format (Docker-compatible, no '+' build metadata)
validate_semver() {
local version="$1"
local context="$2"
if [[ ! "$version" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?$ ]]; then
echo "ERROR: Invalid $context format. Must be semver without build metadata (e.g., 1.2.3, 1.2.3-alpha)"
echo "Provided: $version"
echo "Note: Docker tags cannot contain '+' characters. Use prerelease identifiers instead."
exit 1
fi
}
# Function to generate branch-based version
generate_branch_version() {
local branch="$1"
local use_timestamp="${2:-true}"
local timestamp
if [[ "$use_timestamp" == "true" ]]; then
timestamp=$(date +%s)
else
timestamp=""
fi
# Sanitize branch name for Docker compatibility (lowercase first: the final validation below only accepts [a-z0-9._-])
local sanitized_branch=$(echo "$branch" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9.-]/-/g' | sed 's/--*/-/g' | sed 's/^-\|-$//g')
# Additional safety: truncate if too long (reserve space for prefix and timestamp)
if (( ${#sanitized_branch} > 80 )); then
sanitized_branch="${sanitized_branch:0:80}"
echo "INFO: Branch name truncated for Docker compatibility" >&2
fi
local version
# Generate version based on branch name (unified approach)
# All branches get alpha versions with sanitized branch name
if [[ -n "$timestamp" ]]; then
version="0.0.0-alpha-$sanitized_branch-$timestamp"
echo "INFO: Branch '$branch' detected - alpha version: $version" >&2
else
version="0.0.0-alpha-$sanitized_branch"
echo "INFO: Branch '$branch' detected - alpha version: $version" >&2
fi
echo "$version"
}
# Input validation and sanitization
if [[ -z "$CURRENT_BRANCH" ]]; then
echo "ERROR: current_branch input is required"
exit 1
fi
# Security: Validate inputs to prevent command injection
# Use grep to check for dangerous characters (more reliable than bash regex)
validate_input() {
local input="$1"
local name="$2"
# Check for dangerous characters using grep
if echo "$input" | grep -q '[;|&`$(){}\\[:space:]]'; then
echo "ERROR: $name contains potentially dangerous characters: $input"
echo "Input should only contain letters, numbers, hyphens, underscores, dots, and forward slashes"
return 1
fi
return 0
}
# Validate current branch
if ! validate_input "$CURRENT_BRANCH" "Branch name"; then
exit 1
fi
# Validate explicit version if provided
if [[ -n "$EXPLICIT_VERSION" ]] && ! validate_input "$EXPLICIT_VERSION" "Explicit version"; then
exit 1
fi
# Main resolution logic (ultra-simplified)
NORMALIZED="false"
if [[ -n "$EXPLICIT_VERSION" ]]; then
# Use provided explicit version (from either workflow_call or manual input)
validate_semver "$EXPLICIT_VERSION" "explicit version"
# Normalize to lowercase for Docker/ECR compatibility
RESOLVED_VERSION="${EXPLICIT_VERSION,,}"
if [[ "$EXPLICIT_VERSION" != "$RESOLVED_VERSION" ]]; then
NORMALIZED="true"
echo "INFO: Original version contained uppercase characters, normalized: $EXPLICIT_VERSION -> $RESOLVED_VERSION"
fi
SOURCE="explicit"
echo "INFO: Using explicit version: $RESOLVED_VERSION"
else
# Auto-generate version from branch name
if [[ "$EXPERIMENTAL_MODE" == "true" ]]; then
# Use timestamped version generation
echo "INFO: Experimental mode: generating timestamped version from branch: $CURRENT_BRANCH"
RESOLVED_VERSION=$(generate_branch_version "$CURRENT_BRANCH" "true")
SOURCE="experimental"
else
# Standard branch version (no timestamp)
echo "INFO: Auto-detecting version from branch: $CURRENT_BRANCH"
RESOLVED_VERSION=$(generate_branch_version "$CURRENT_BRANCH" "false")
SOURCE="branch"
fi
echo "Generated version: $RESOLVED_VERSION"
fi
# Final validation - ensure result is valid Docker tag
if [[ -z "$RESOLVED_VERSION" ]]; then
echo "ERROR: Failed to resolve version"
exit 1
fi
if (( ${#RESOLVED_VERSION} > 128 )); then
echo "ERROR: Version must be at most 128 characters (Docker limitation)"
echo "Generated version: $RESOLVED_VERSION (${#RESOLVED_VERSION} chars)"
exit 1
fi
if [[ ! "$RESOLVED_VERSION" =~ ^[a-z0-9._-]+$ ]]; then
echo "ERROR: Version contains invalid characters for Docker tags"
echo "Version: $RESOLVED_VERSION"
exit 1
fi
if [[ "$RESOLVED_VERSION" =~ ^[.-] || "$RESOLVED_VERSION" =~ [.-]$ ]]; then
echo "ERROR: Version must not start or end with '.' or '-'"
echo "Version: $RESOLVED_VERSION"
exit 1
fi
# Output results
echo "SUCCESS: Resolved Docker version: $RESOLVED_VERSION (source: $SOURCE)"
echo "version=$RESOLVED_VERSION" >> $GITHUB_OUTPUT
echo "source=$SOURCE" >> $GITHUB_OUTPUT
echo "normalized=$NORMALIZED" >> $GITHUB_OUTPUT

View File

@@ -1,160 +0,0 @@
name: Update Package Version
description: |
Safely updates package.json version with comprehensive validation and atomic operations.
Security Features:
- Path traversal protection
- SemVer validation with length limits
- Atomic file operations with backup/recovery
- JSON validation before applying changes
This action is designed to be secure by default and prevent common attack vectors.
inputs:
version:
description: "Version to set in package.json (must be valid SemVer)"
required: true
package_path:
description: "Path to package.json file"
required: false
default: "./apps/web/package.json"
outputs:
updated_version:
description: "The version that was actually set in package.json"
value: ${{ steps.update.outputs.updated_version }}
runs:
using: "composite"
steps:
- name: Update and verify package.json version
id: update
shell: bash
env:
VERSION: ${{ inputs.version }}
PACKAGE_PATH: ${{ inputs.package_path }}
run: |
set -euo pipefail
# Validate inputs
if [[ -z "$VERSION" ]]; then
echo "ERROR: version input is required"
exit 1
fi
# Security: Validate package_path to prevent path traversal attacks
# Only allow paths within the workspace and must end with package.json
if [[ "$PACKAGE_PATH" =~ \.\./|^/|^~ ]]; then
echo "ERROR: Invalid package path - path traversal detected: $PACKAGE_PATH"
echo "Package path must be relative to workspace root and cannot contain '../', start with '/', or '~'"
exit 1
fi
if [[ ! "$PACKAGE_PATH" =~ package\.json$ ]]; then
echo "ERROR: Package path must end with 'package.json': $PACKAGE_PATH"
exit 1
fi
# Resolve to absolute path within workspace for additional security
WORKSPACE_ROOT="${GITHUB_WORKSPACE:-$(pwd)}"
# Use realpath to resolve both paths and handle symlinks properly
WORKSPACE_ROOT=$(realpath "$WORKSPACE_ROOT")
RESOLVED_PATH=$(realpath "${WORKSPACE_ROOT}/${PACKAGE_PATH}")
# Ensure WORKSPACE_ROOT has a trailing slash for proper prefix matching
WORKSPACE_ROOT="${WORKSPACE_ROOT}/"
# Use shell string matching to ensure RESOLVED_PATH is within workspace
# This is more secure than regex and handles edge cases properly
if [[ "$RESOLVED_PATH" != "$WORKSPACE_ROOT"* ]]; then
echo "ERROR: Resolved path is outside workspace: $RESOLVED_PATH"
echo "Workspace root: $WORKSPACE_ROOT"
exit 1
fi
if [[ ! -f "$RESOLVED_PATH" ]]; then
echo "ERROR: package.json not found at: $RESOLVED_PATH"
exit 1
fi
# Use resolved path for operations
PACKAGE_PATH="$RESOLVED_PATH"
# Validate SemVer format with additional security checks
if [[ ${#VERSION} -gt 128 ]]; then
echo "ERROR: Version string too long (${#VERSION} chars, max 128): $VERSION"
exit 1
fi
if [[ ! "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?$ ]]; then
echo "ERROR: Invalid SemVer format: $VERSION"
echo "Expected format: MAJOR.MINOR.PATCH[-PRERELEASE]"
echo "Only alphanumeric characters, dots, and hyphens allowed in prerelease"
exit 1
fi
# Additional validation: Check for reasonable version component sizes
# Extract base version (MAJOR.MINOR.PATCH) without prerelease/build metadata
if [[ "$VERSION" =~ ^([0-9]+\.[0-9]+\.[0-9]+) ]]; then
BASE_VERSION="${BASH_REMATCH[1]}"
else
echo "ERROR: Could not extract base version from: $VERSION"
exit 1
fi
# Split version components safely
IFS='.' read -ra VERSION_PARTS <<< "$BASE_VERSION"
# Validate component sizes (should have exactly 3 parts due to regex above)
if (( ${VERSION_PARTS[0]} > 999 || ${VERSION_PARTS[1]} > 999 || ${VERSION_PARTS[2]} > 999 )); then
echo "ERROR: Version components too large (max 999 each): $VERSION"
echo "Components: ${VERSION_PARTS[0]}.${VERSION_PARTS[1]}.${VERSION_PARTS[2]}"
exit 1
fi
echo "Updating package.json version to: $VERSION"
# Create backup for atomic operations
BACKUP_PATH="${PACKAGE_PATH}.backup.$$"
cp "$PACKAGE_PATH" "$BACKUP_PATH"
# Use jq to safely update the version field with error handling
if ! jq --arg version "$VERSION" '.version = $version' "$PACKAGE_PATH" > "${PACKAGE_PATH}.tmp"; then
echo "ERROR: jq failed to process package.json"
rm -f "${PACKAGE_PATH}.tmp" "$BACKUP_PATH"
exit 1
fi
# Validate the generated JSON before applying changes
if ! jq empty "${PACKAGE_PATH}.tmp" 2>/dev/null; then
echo "ERROR: Generated invalid JSON"
rm -f "${PACKAGE_PATH}.tmp" "$BACKUP_PATH"
exit 1
fi
# Atomic move operation
if ! mv "${PACKAGE_PATH}.tmp" "$PACKAGE_PATH"; then
echo "ERROR: Failed to update package.json"
# Restore backup
mv "$BACKUP_PATH" "$PACKAGE_PATH"
exit 1
fi
# Verify the update was successful
UPDATED_VERSION=$(jq -r '.version' "$PACKAGE_PATH" 2>/dev/null)
if [[ "$UPDATED_VERSION" != "$VERSION" ]]; then
echo "ERROR: Version update failed!"
echo "Expected: $VERSION"
echo "Actual: $UPDATED_VERSION"
# Restore backup
mv "$BACKUP_PATH" "$PACKAGE_PATH"
exit 1
fi
# Clean up backup on success
rm -f "$BACKUP_PATH"
echo "SUCCESS: Updated package.json version to: $UPDATED_VERSION"
echo "updated_version=$UPDATED_VERSION" >> $GITHUB_OUTPUT

View File

@@ -1,40 +0,0 @@
<!-- We require pull request titles to follow the Conventional Commits specification ( https://www.conventionalcommits.org/en/v1.0.0/#summary ). Please make sure your title follows these conventions -->
## What does this PR do?
<!-- Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. -->
Fixes #(issue)
<!-- Please provide screenshots or a Loom video for visual changes to speed up reviews
Loom Video: https://www.loom.com/
-->
## How should this be tested?
<!-- Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration -->
- Test A
- Test B
## Checklist
<!-- We're starting to get more and more contributions. Please help us make this efficient for all of us and go through this checklist. Please tick off what you did -->
### Required
- [ ] Filled out the "How to test" section in this PR
- [ ] Read [How we Code at Formbricks](https://formbricks.com/docs/contributing/how-we-code)
- [ ] Self-reviewed my own code
- [ ] Commented on my code in hard-to-understand bits
- [ ] Ran `pnpm build`
- [ ] Checked for warnings, there are none
- [ ] Removed all `console.logs`
- [ ] Merged the latest changes from main onto my branch with `git pull origin main`
- [ ] My changes don't cause any responsiveness issues
- [ ] First PR at Formbricks? [Please sign the CLA!](https://cla-assistant.io/formbricks/formbricks) Without it we won't be able to merge it 🙏
### Appreciated
- [ ] If a UI change was made: Added a screen recording or screenshots to this PR
- [ ] Updated the Formbricks Docs if changes were necessary

View File

@@ -0,0 +1,74 @@
name: "Apply issue labels to PR"
on:
pull_request_target:
types:
- opened
jobs:
label_on_pr:
runs-on: ubuntu-latest
permissions:
contents: none
issues: read
pull-requests: write
steps:
- name: Apply labels from linked issue to PR
uses: actions/github-script@v5
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
async function getLinkedIssues(owner, repo, prNumber) {
const query = `query GetLinkedIssues($owner: String!, $repo: String!, $prNumber: Int!) {
repository(owner: $owner, name: $repo) {
pullRequest(number: $prNumber) {
closingIssuesReferences(first: 10) {
nodes {
number
labels(first: 10) {
nodes {
name
}
}
}
}
}
}
}`;
const variables = {
owner: owner,
repo: repo,
prNumber: prNumber,
};
const result = await github.graphql(query, variables);
return result.repository.pullRequest.closingIssuesReferences.nodes;
}
const pr = context.payload.pull_request;
const linkedIssues = await getLinkedIssues(
context.repo.owner,
context.repo.repo,
pr.number
);
const labelsToAdd = new Set();
for (const issue of linkedIssues) {
if (issue.labels && issue.labels.nodes) {
for (const label of issue.labels.nodes) {
labelsToAdd.add(label.name);
}
}
}
if (labelsToAdd.size) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: pr.number,
labels: Array.from(labelsToAdd),
});
}
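For orientation, the GraphQL query above resolves to a structure of this shape (field names come from the query; values are illustrative):
{
  "repository": {
    "pullRequest": {
      "closingIssuesReferences": {
        "nodes": [
          { "number": 42, "labels": { "nodes": [{ "name": "bug" }] } }
        ]
      }
    }
  }
}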

View File

@@ -1,94 +0,0 @@
name: Build Cloud Deployment Images
# This workflow builds Formbricks Docker images for ECR deployment:
# - workflow_call: Used by releases with explicit SemVer versions
# - workflow_dispatch: Auto-detects version from current branch or uses override
on:
workflow_dispatch:
inputs:
version_override:
description: "Override version (SemVer only, e.g., 1.2.3). Leave empty to auto-detect from branch."
required: false
type: string
deploy_production:
description: "Tag image for production deployment"
required: false
default: false
type: boolean
deploy_staging:
description: "Tag image for staging deployment"
required: false
default: false
type: boolean
workflow_call:
inputs:
image_tag:
description: "Image tag to push (required for workflow_call)"
required: true
type: string
IS_PRERELEASE:
description: "Whether this is a prerelease (auto-tags for staging/production)"
required: false
type: boolean
default: false
MAKE_LATEST:
description: "Whether to tag for production (from GitHub release 'Set as the latest release' option)"
required: false
type: boolean
default: false
outputs:
IMAGE_TAG:
description: "Normalized image tag used for the build"
value: ${{ jobs.build-and-push.outputs.IMAGE_TAG }}
TAGS:
description: "Newline-separated list of ECR tags pushed"
value: ${{ jobs.build-and-push.outputs.TAGS }}
permissions:
contents: read
id-token: write
env:
ECR_REGION: ${{ vars.ECR_REGION }}
# ECR settings are sourced from repository/environment variables for portability across envs/forks
ECR_REGISTRY: ${{ vars.ECR_REGISTRY }}
ECR_REPOSITORY: ${{ vars.ECR_REPOSITORY }}
jobs:
build-and-push:
name: Build and Push
runs-on: ubuntu-latest
timeout-minutes: 45
outputs:
IMAGE_TAG: ${{ steps.build.outputs.image_tag }}
TAGS: ${{ steps.build.outputs.registry_tags }}
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Build and push cloud deployment image
id: build
uses: ./.github/actions/build-and-push-docker
with:
registry_type: "ecr"
ecr_registry: ${{ env.ECR_REGISTRY }}
ecr_repository: ${{ env.ECR_REPOSITORY }}
ecr_region: ${{ env.ECR_REGION }}
aws_role_arn: ${{ secrets.AWS_ECR_PUSH_ROLE_ARN }}
version: ${{ inputs.version_override || inputs.image_tag }}
deploy_production: ${{ inputs.deploy_production }}
deploy_staging: ${{ inputs.deploy_staging }}
is_prerelease: ${{ inputs.IS_PRERELEASE }}
make_latest: ${{ inputs.MAKE_LATEST }}
env:
DEPOT_PROJECT_TOKEN: ${{ secrets.DEPOT_PROJECT_TOKEN }}
DUMMY_DATABASE_URL: ${{ secrets.DUMMY_DATABASE_URL }}
DUMMY_ENCRYPTION_KEY: ${{ secrets.DUMMY_ENCRYPTION_KEY }}
DUMMY_REDIS_URL: ${{ secrets.DUMMY_REDIS_URL }}
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}

28
.github/workflows/build-docs.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: Build Docs
on:
workflow_call:
jobs:
build:
name: Build Docs
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- name: Setup Node.js 20.x
uses: actions/setup-node@v3
with:
node-version: 20.x
- name: Install pnpm
uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
shell: bash
- run: |
pnpm build --filter=@formbricks/docs...
shell: bash

View File

@@ -1,10 +1,6 @@
name: Build Web
on:
workflow_call:
permissions:
contents: read
jobs:
build:
name: Build Formbricks-web
@@ -12,12 +8,7 @@ jobs:
timeout-minutes: 30
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- name: Build & Cache Web Binaries
@@ -25,5 +16,3 @@ jobs:
id: cache-build-web
with:
e2e_testing_mode: "0"
turbo_token: ${{ secrets.TURBO_TOKEN }}
turbo_team: ${{ vars.TURBO_TEAM }}

View File

@@ -6,54 +6,25 @@ on:
- main
workflow_dispatch:
permissions:
contents: read
jobs:
chromatic:
name: Run Chromatic
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Node.js
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
- name: Get pnpm store directory
shell: bash
run: |
echo "STORE_PATH=$(pnpm store path --silent)" >> $GITHUB_ENV
- name: Setup pnpm cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
path: ${{ env.STORE_PATH }}
key: ${{ runner.os }}-pnpm-store-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
${{ runner.os }}-pnpm-store-
uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
- name: Run Chromatic
uses: chromaui/action@4c20b95e9d3209ecfdf9cd6aace6bbde71ba1694 # v13.3.4
uses: chromaui/action@latest
with:
# ⚠️ Make sure to configure a `CHROMATIC_PROJECT_TOKEN` repository secret
projectToken: ${{ secrets.CHROMATIC_PROJECT_TOKEN }}
workingDir: apps/storybook
zip: true

View File

@@ -0,0 +1,24 @@
name: Cron - Survey status update
on:
workflow_dispatch:
# "Scheduled workflows run on the latest commit on the default or base branch."
# — https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#schedule
# schedule:
# Runs “At 00:00.” (see https://crontab.guru)
# - cron: "0 0 * * *"
jobs:
cron-weeklySummary:
env:
APP_URL: ${{ secrets.APP_URL }}
CRON_SECRET: ${{ secrets.CRON_SECRET }}
runs-on: ubuntu-latest
steps:
- name: cURL request
if: ${{ env.APP_URL && env.CRON_SECRET }}
run: |
curl ${{ env.APP_URL }}/api/cron/survey-status \
-X POST \
-H 'content-type: application/json' \
-H 'x-api-key: ${{ env.CRON_SECRET }}' \
--fail

View File

@@ -0,0 +1,24 @@
name: Cron - Weekly summary
on:
workflow_dispatch:
# "Scheduled workflows run on the latest commit on the default or base branch."
# — https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#schedule
schedule:
# Runs “At 08:00 on Monday.” (see https://crontab.guru)
- cron: "0 8 * * 1"
jobs:
cron-weeklySummary:
env:
APP_URL: ${{ secrets.APP_URL }}
CRON_SECRET: ${{ secrets.CRON_SECRET }}
runs-on: ubuntu-latest
steps:
- name: cURL request
if: ${{ env.APP_URL && env.CRON_SECRET }}
run: |
curl ${{ env.APP_URL }}/api/cron/weekly-summary \
-X POST \
-H 'content-type: application/json' \
-H 'x-api-key: ${{ env.CRON_SECRET }}' \
--fail

View File

@@ -1,149 +0,0 @@
name: Formbricks Cloud Deployment
on:
workflow_dispatch:
inputs:
VERSION:
description: "The version of the Docker image to release (clean SemVer, e.g., 1.2.3)"
required: true
type: string
REPOSITORY:
description: "The repository to use for the Docker image"
required: false
type: string
default: "ghcr.io/formbricks/formbricks"
ENVIRONMENT:
description: "The environment to deploy to"
required: true
type: choice
options:
- staging
- production
workflow_call:
inputs:
VERSION:
description: "The version of the Docker image to release"
required: true
type: string
REPOSITORY:
description: "The repository to use for the Docker image"
required: false
type: string
default: "ghcr.io/formbricks/formbricks"
ENVIRONMENT:
description: "The environment to deploy to"
required: true
type: string
permissions:
id-token: write
contents: read
jobs:
helmfile-deploy:
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Tailscale
uses: tailscale/github-action@84a3f23bb4d843bcf4da6cf824ec1be473daf4de # v3.2.3
with:
oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
tags: tag:github
args: --accept-routes
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@f24d7193d98baebaeacc7e2227925dd47cc267f5 # v4.2.0
with:
role-to-assume: ${{ secrets.AWS_ASSUME_ROLE_ARN }}
aws-region: "eu-central-1"
- name: Setup Cluster Access
run: |
aws eks update-kubeconfig --name formbricks-prod-eks --region eu-central-1
env:
AWS_REGION: eu-central-1
- uses: helmfile/helmfile-action@712000e3d4e28c72778ecc53857746082f555ef3 # v2.0.4
name: Deploy Formbricks Cloud Production
if: inputs.ENVIRONMENT == 'production'
env:
VERSION: ${{ inputs.VERSION }}
REPOSITORY: ${{ inputs.REPOSITORY }}
FORMBRICKS_S3_BUCKET: ${{ secrets.FORMBRICKS_S3_BUCKET }}
FORMBRICKS_INGRESS_CERT_ARN: ${{ secrets.FORMBRICKS_INGRESS_CERT_ARN }}
FORMBRICKS_ROLE_ARN: ${{ secrets.FORMBRICKS_ROLE_ARN }}
with:
helmfile-version: "v1.0.0"
helm-plugins: >
https://github.com/databus23/helm-diff,
https://github.com/jkroepke/helm-secrets
helmfile-args: apply -l environment=prod
helmfile-auto-init: "false"
helmfile-workdirectory: infra/formbricks-cloud-helm
- uses: helmfile/helmfile-action@712000e3d4e28c72778ecc53857746082f555ef3 # v2.0.4
name: Deploy Formbricks Cloud Staging
if: inputs.ENVIRONMENT == 'staging'
env:
VERSION: ${{ inputs.VERSION }}
REPOSITORY: ${{ inputs.REPOSITORY }}
FORMBRICKS_INGRESS_CERT_ARN: ${{ secrets.STAGE_FORMBRICKS_INGRESS_CERT_ARN }}
FORMBRICKS_ROLE_ARN: ${{ secrets.STAGE_FORMBRICKS_ROLE_ARN }}
with:
helmfile-version: "v1.0.0"
helm-plugins: >
https://github.com/databus23/helm-diff,
https://github.com/jkroepke/helm-secrets
helmfile-args: apply -l environment=stage
helmfile-auto-init: "false"
helmfile-workdirectory: infra/formbricks-cloud-helm
- name: Purge Cloudflare Cache
if: ${{ inputs.ENVIRONMENT == 'production' || inputs.ENVIRONMENT == 'staging' }}
env:
CF_ZONE_ID: ${{ secrets.CLOUDFLARE_ZONE_ID }}
CF_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
ENVIRONMENT: ${{ inputs.ENVIRONMENT }}
run: |
# Set hostname based on environment
if [[ "$ENVIRONMENT" == "production" ]]; then
PURGE_HOST="app.formbricks.com"
else
PURGE_HOST="stage.app.formbricks.com"
fi
echo "Purging Cloudflare cache for host: $PURGE_HOST (environment: $ENVIRONMENT, zone: $CF_ZONE_ID)"
# Prepare JSON payload for selective cache purge
json_payload=$(cat << EOF
{
"hosts": ["$PURGE_HOST"]
}
EOF
)
# Make API call to Cloudflare
response=$(curl -s -X POST \
"https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/purge_cache" \
-H "Authorization: Bearer $CF_API_TOKEN" \
-H "Content-Type: application/json" \
--data "$json_payload")
echo "Cloudflare API response: $response"
# Verify the operation was successful
if [[ "$(echo "$response" | jq -r .success)" == "true" ]]; then
echo "✅ Successfully purged cache for $PURGE_HOST"
else
echo "❌ Cloudflare cache purge failed"
echo "Error details: $(echo "$response" | jq -r .errors)"
exit 1
fi
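The success check reads Cloudflare's standard response envelope; a trimmed, illustrative reply from the purge endpoint:
{ "success": true, "errors": [], "messages": [], "result": { "id": "<zone-id>" } }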

View File

@@ -1,200 +0,0 @@
name: Docker Build Validation
on:
pull_request:
branches:
- main
merge_group:
branches:
- main
workflow_dispatch:
permissions:
contents: read
env:
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ vars.TURBO_TEAM }}
jobs:
validate-docker-build:
name: Validate Docker Build
runs-on: ubuntu-latest
# Add PostgreSQL and Redis service containers
services:
postgres:
image: pgvector/pgvector@sha256:9ae02a756ba16a2d69dd78058e25915e36e189bb36ddf01ceae86390d7ed786a
env:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_DB: formbricks
ports:
- 5432:5432
# Health check to ensure PostgreSQL is ready before using it
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: valkey/valkey@sha256:12ba4f45a7c3e1d0f076acd616cb230834e75a77e8516dde382720af32832d6d
ports:
- 6379:6379
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout Repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Build Docker Image
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
env:
GITHUB_SHA: ${{ github.sha }}
with:
context: .
file: ./apps/web/Dockerfile
push: false
load: true
tags: formbricks-test:${{ env.GITHUB_SHA }}
cache-from: type=gha
cache-to: type=gha,mode=max
secrets: |
database_url=${{ secrets.DUMMY_DATABASE_URL }}
encryption_key=${{ secrets.DUMMY_ENCRYPTION_KEY }}
redis_url=redis://localhost:6379
- name: Verify and Initialize PostgreSQL
run: |
echo "Verifying PostgreSQL connection..."
# Install PostgreSQL client to test connection
sudo apt-get update && sudo apt-get install -y postgresql-client
# Test connection using psql with timeout and proper error handling
echo "Testing PostgreSQL connection with 30 second timeout..."
if timeout 30 bash -c 'until PGPASSWORD=test psql -h localhost -U test -d formbricks -c "\dt" >/dev/null 2>&1; do
echo "Waiting for PostgreSQL to be ready..."
sleep 2
done'; then
echo "✅ PostgreSQL connection successful"
PGPASSWORD=test psql -h localhost -U test -d formbricks -c "SELECT version();"
# Enable necessary extensions that might be required by migrations
echo "Enabling required PostgreSQL extensions..."
PGPASSWORD=test psql -h localhost -U test -d formbricks -c "CREATE EXTENSION IF NOT EXISTS vector;" || echo "Vector extension already exists or not available"
else
echo "❌ PostgreSQL connection failed after 30 seconds"
exit 1
fi
# Show network configuration
echo "Network configuration:"
netstat -tulpn | grep 5432 || echo "No process listening on port 5432"
- name: Verify Redis/Valkey Connection
run: |
echo "Verifying Redis/Valkey connection..."
# Install Redis client to test connection
sudo apt-get update && sudo apt-get install -y redis-tools
# Test connection using redis-cli with timeout and proper error handling
echo "Testing Redis connection with 30 second timeout..."
if timeout 30 bash -c 'until redis-cli -h localhost -p 6379 ping >/dev/null 2>&1; do
echo "Waiting for Redis to be ready..."
sleep 2
done'; then
echo "✅ Redis connection successful"
redis-cli -h localhost -p 6379 info server | head -5
else
echo "❌ Redis connection failed after 30 seconds"
exit 1
fi
# Show network configuration for Redis
echo "Redis network configuration:"
netstat -tulpn | grep 6379 || echo "No process listening on port 6379"
- name: Test Docker Image with Health Check
shell: bash
env:
GITHUB_SHA: ${{ github.sha }}
DUMMY_ENCRYPTION_KEY: ${{ secrets.DUMMY_ENCRYPTION_KEY }}
run: |
echo "🧪 Testing if the Docker image starts correctly..."
# Add extra docker run args to support host.docker.internal on Linux
DOCKER_RUN_ARGS="--add-host=host.docker.internal:host-gateway"
# Start the container with host.docker.internal pointing to the host
docker run --name formbricks-test \
$DOCKER_RUN_ARGS \
-p 3000:3000 \
-e DATABASE_URL="postgresql://test:test@host.docker.internal:5432/formbricks" \
-e ENCRYPTION_KEY="$DUMMY_ENCRYPTION_KEY" \
-e REDIS_URL="redis://host.docker.internal:6379" \
-d "formbricks-test:$GITHUB_SHA"
# Start health check polling immediately (every 5 seconds for up to 5 minutes)
echo "🏥 Polling /health endpoint every 5 seconds for up to 5 minutes..."
MAX_RETRIES=60 # 60 attempts × 5 seconds = 5 minutes
RETRY_COUNT=0
HEALTH_CHECK_SUCCESS=false
set +e # Disable exit on error to allow for retries
while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
RETRY_COUNT=$((RETRY_COUNT + 1))
# Check if container is still running
if [ "$(docker inspect -f '{{.State.Running}}' formbricks-test 2>/dev/null)" != "true" ]; then
echo "❌ Container stopped running after $((RETRY_COUNT * 5)) seconds!"
echo "📋 Container logs:"
docker logs formbricks-test
exit 1
fi
# Show progress and diagnostic info every 12 attempts (1 minute intervals)
if [ $((RETRY_COUNT % 12)) -eq 0 ] || [ $RETRY_COUNT -eq 1 ]; then
echo "Health check attempt $RETRY_COUNT of $MAX_RETRIES ($(($RETRY_COUNT * 5)) seconds elapsed)..."
echo "📋 Recent container logs:"
docker logs --tail 10 formbricks-test
fi
# Try health endpoint with shorter timeout for faster polling
# Use -f flag to make curl fail on HTTP error status codes (4xx, 5xx)
if curl -f -s -m 10 http://localhost:3000/health >/dev/null 2>&1; then
echo "✅ Health check successful after $((RETRY_COUNT * 5)) seconds!"
HEALTH_CHECK_SUCCESS=true
break
fi
# Wait 5 seconds before next attempt
sleep 5
done
# Show full container logs for debugging
echo "📋 Full container logs:"
docker logs formbricks-test
# Clean up the container
echo "🧹 Cleaning up..."
docker rm -f formbricks-test
# Exit with failure if health check did not succeed
if [ "$HEALTH_CHECK_SUCCESS" != "true" ]; then
echo "❌ Health check failed after $((MAX_RETRIES * 5)) seconds (5 minutes)"
exit 1
fi
echo "✨ Docker validation complete - all checks passed!"

View File

@@ -1,70 +0,0 @@
name: Docker Security Scan
on:
schedule:
- cron: "0 2 * * *" # Daily at 2 AM UTC
workflow_dispatch:
workflow_run:
workflows: ["Docker Release to Github"]
types: [completed]
permissions:
contents: read
packages: read
security-events: write
jobs:
scan:
name: Vulnerability Scan
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Harden the runner
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout (for SARIF fingerprinting only)
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 1
- name: Determine ref and commit for upload
id: gitref
shell: bash
env:
EVENT_NAME: ${{ github.event_name }}
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
HEAD_SHA: ${{ github.event.workflow_run.head_sha }}
run: |
set -euo pipefail
if [[ "${EVENT_NAME}" == "workflow_run" ]]; then
echo "ref=refs/heads/${HEAD_BRANCH}" >> "$GITHUB_OUTPUT"
echo "sha=${HEAD_SHA}" >> "$GITHUB_OUTPUT"
else
echo "ref=${GITHUB_REF}" >> "$GITHUB_OUTPUT"
echo "sha=${GITHUB_SHA}" >> "$GITHUB_OUTPUT"
fi
- name: Log in to GitHub Container Registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@dc5a429b52fcf669ce959baa2c2dd26090d2a6c4 # v0.32.0
with:
image-ref: "ghcr.io/${{ github.repository }}:latest"
format: "sarif"
output: "trivy-results.sarif"
severity: "CRITICAL,HIGH,MEDIUM,LOW"
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@a4e1a019f5e24960714ff6296aee04b736cbc3cf # v3.29.6
if: ${{ always() }}
with:
sarif_file: "trivy-results.sarif"
ref: ${{ steps.gitref.outputs.ref }}
sha: ${{ steps.gitref.outputs.sha }}
category: "trivy-container-scan"

View File

@@ -1,25 +1,9 @@
name: E2E Tests
on:
workflow_call:
secrets:
PLAYWRIGHT_SERVICE_URL:
required: false
PLAYWRIGHT_SERVICE_ACCESS_TOKEN:
required: false
ENTERPRISE_LICENSE_KEY:
required: true
# Add other secrets if necessary
workflow_dispatch:
env:
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ vars.TURBO_TEAM }}
permissions:
contents: read
actions: read
TELEMETRY_DISABLED: 1
jobs:
build:
name: Run E2E Tests
@@ -27,7 +11,7 @@ jobs:
timeout-minutes: 60
services:
postgres:
image: pgvector/pgvector@sha256:9ae02a756ba16a2d69dd78058e25915e36e189bb36ddf01ceae86390d7ed786a
image: pgvector/pgvector:pg17
env:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
@@ -35,34 +19,21 @@ jobs:
ports:
- 5432:5432
options: >-
--health-cmd="pg_isready -U postgres"
--health-cmd="pg_isready -U testuser"
--health-interval=10s
--health-timeout=5s
--health-retries=5
valkey:
image: valkey/valkey@sha256:12ba4f45a7c3e1d0f076acd616cb230834e75a77e8516dde382720af32832d6d
ports:
- 6379:6379
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
allowed-endpoints: |
ee.formbricks.com:443
registry-1.docker.io:443
docker.io:443
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- name: Setup Node.js 22.x
uses: actions/setup-node@1a4442cacd436585916779262731d5b162bc6ec7 # v3.8.2
- name: Setup Node.js 20.x
uses: actions/setup-node@v3
with:
node-version: 22.x
node-version: 20.x
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
@@ -78,113 +49,22 @@ jobs:
sed -i "s/ENCRYPTION_KEY=.*/ENCRYPTION_KEY=${RANDOM_KEY}/" .env
sed -i "s/CRON_SECRET=.*/CRON_SECRET=${RANDOM_KEY}/" .env
sed -i "s/NEXTAUTH_SECRET=.*/NEXTAUTH_SECRET=${RANDOM_KEY}/" .env
sed -i "s/ENTERPRISE_LICENSE_KEY=.*/ENTERPRISE_LICENSE_KEY=${{ secrets.ENTERPRISE_LICENSE_KEY }}/" .env
sed -i "s|REDIS_URL=.*|REDIS_URL=redis://localhost:6379|" .env
sed -i "s/ENTERPRISE_LICENSE_KEY=.*/ENTERPRISE_LICENSE_KEY=${RANDOM_KEY}/" .env
echo "" >> .env
echo "E2E_TESTING=1" >> .env
echo "S3_REGION=us-east-1" >> .env
echo "S3_BUCKET_NAME=formbricks-e2e" >> .env
echo "S3_ENDPOINT_URL=http://localhost:9000" >> .env
echo "S3_ACCESS_KEY=devminio" >> .env
echo "S3_SECRET_KEY=devminio123" >> .env
echo "S3_FORCE_PATH_STYLE=1" >> .env
shell: bash
- name: Install MinIO client (mc)
run: |
set -euo pipefail
MC_VERSION="RELEASE.2025-08-13T08-35-41Z"
MC_BASE="https://dl.min.io/client/mc/release/linux-amd64/archive"
MC_BIN="mc.${MC_VERSION}"
MC_SUM="${MC_BIN}.sha256sum"
curl -fsSL "${MC_BASE}/${MC_BIN}" -o "${MC_BIN}"
curl -fsSL "${MC_BASE}/${MC_SUM}" -o "${MC_SUM}"
sha256sum -c "${MC_SUM}"
chmod +x "${MC_BIN}"
sudo mv "${MC_BIN}" /usr/local/bin/mc
- name: Start MinIO Server
run: |
set -euo pipefail
# Start MinIO server in background
docker run -d \
--name minio-server \
-p 9000:9000 \
-p 9001:9001 \
-e MINIO_ROOT_USER=devminio \
-e MINIO_ROOT_PASSWORD=devminio123 \
minio/minio:RELEASE.2025-09-07T16-13-09Z \
server /data --console-address :9001
echo "MinIO server started"
- name: Wait for MinIO and create S3 bucket
run: |
set -euo pipefail
echo "Waiting for MinIO to be ready..."
ready=0
for i in {1..60}; do
if curl -fsS http://localhost:9000/minio/health/live >/dev/null; then
echo "MinIO is up after ${i} seconds"
ready=1
break
fi
sleep 1
done
if [ "$ready" -ne 1 ]; then
echo "::error::MinIO did not become ready within 60 seconds"
exit 1
fi
mc alias set local http://localhost:9000 devminio devminio123
mc mb --ignore-existing local/formbricks-e2e
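With the alias in place, the same client can confirm the bucket before the tests run; a small optional check:
# List buckets on the alias, then the (empty) E2E bucket itself.
mc ls local
mc ls local/formbricks-e2e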
- name: Build App
run: |
pnpm build --filter=@formbricks/web...
- name: Apply Prisma Migrations
run: |
# pnpm prisma migrate deploy
pnpm db:migrate:dev
- name: Run Rate Limiter Load Tests
run: |
echo "Running rate limiter load tests with Redis/Valkey..."
cd apps/web && pnpm vitest run modules/core/rate-limit/rate-limit-load.test.ts
shell: bash
- name: Run Cache Integration Tests
run: |
echo "Running cache integration tests with Redis/Valkey..."
cd packages/cache && pnpm vitest run src/cache-integration.test.ts
shell: bash
- name: Check for Enterprise License
run: |
LICENSE_KEY=$(grep '^ENTERPRISE_LICENSE_KEY=' .env | cut -d'=' -f2-)
if [ -z "$LICENSE_KEY" ]; then
echo "::error::ENTERPRISE_LICENSE_KEY in .env is empty. Please check your secret configuration."
exit 1
fi
echo "License key length: ${#LICENSE_KEY}"
- name: Disable rate limiting for E2E tests
run: |
echo "RATE_LIMITING_DISABLED=1" >> .env
echo "Rate limiting disabled for E2E tests"
shell: bash
pnpm prisma migrate deploy
- name: Run App
run: |
echo "Starting app with enterprise license..."
NODE_ENV=test pnpm start --filter=@formbricks/web | tee app.log 2>&1 &
NODE_ENV=test pnpm start --filter=@formbricks/web &
sleep 10 # Optional: gives some buffer for the app to start
for attempt in {1..10}; do
if [ $(curl -o /dev/null -s -w "%{http_code}" http://localhost:3000/health) -eq 200 ]; then
@@ -202,48 +82,13 @@ jobs:
- name: Install Playwright
run: pnpm exec playwright install --with-deps
- name: Determine Playwright execution mode
shell: bash
env:
PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
run: |
set -euo pipefail
if [[ -n "${PLAYWRIGHT_SERVICE_URL}" && -n "${PLAYWRIGHT_SERVICE_ACCESS_TOKEN}" ]]; then
echo "PW_MODE=service" >> "$GITHUB_ENV"
else
echo "PW_MODE=local" >> "$GITHUB_ENV"
fi
- name: Run E2E Tests (Playwright Service)
if: env.PW_MODE == 'service'
env:
PLAYWRIGHT_SERVICE_URL: ${{ secrets.PLAYWRIGHT_SERVICE_URL }}
PLAYWRIGHT_SERVICE_ACCESS_TOKEN: ${{ secrets.PLAYWRIGHT_SERVICE_ACCESS_TOKEN }}
CI: true
run: pnpm test-e2e:azure
- name: Run E2E Tests (Local)
if: env.PW_MODE == 'local'
env:
CI: true
- name: Run E2E Tests
run: |
pnpm test:e2e
- uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1
- uses: actions/upload-artifact@v3
if: always()
with:
name: playwright-report
path: playwright-report/
retention-days: 30
- uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1
if: failure()
with:
name: app-logs
path: app.log
- name: Output App Logs
if: failure()
run: cat app.log

View File

@@ -1,157 +0,0 @@
name: Build, release & deploy Formbricks images
on:
release:
types: [published]
permissions:
contents: read
jobs:
check-latest-release:
name: Check if this is the latest release
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
contents: read
outputs:
is_latest: ${{ steps.compare_tags.outputs.is_latest }}
# This job determines if the current release was marked as "Set as the latest release"
# by comparing it with the latest release from GitHub API
steps:
- name: Harden the runner
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Get latest release tag from API
id: get_latest_release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
REPO: ${{ github.repository }}
run: |
set -euo pipefail
# Get the latest release tag from GitHub API with error handling
echo "Fetching latest release from GitHub API..."
# Use curl with error handling - API returns 404 if no releases exist
http_code=$(curl -s -w "%{http_code}" -H "Authorization: token ${GITHUB_TOKEN}" \
"https://api.github.com/repos/${REPO}/releases/latest" -o /tmp/latest_release.json)
if [[ "$http_code" == "404" ]]; then
echo "⚠️ No previous releases found (404). This appears to be the first release."
echo "latest_release=" >> $GITHUB_OUTPUT
elif [[ "$http_code" == "200" ]]; then
latest_release=$(jq -r .tag_name /tmp/latest_release.json)
if [[ "$latest_release" == "null" || -z "$latest_release" ]]; then
echo "⚠️ API returned null/empty tag_name. Treating as first release."
echo "latest_release=" >> $GITHUB_OUTPUT
else
echo "Latest release from API: ${latest_release}"
echo "latest_release=${latest_release}" >> $GITHUB_OUTPUT
fi
else
echo "❌ GitHub API error (HTTP ${http_code}). Treating as first release."
echo "latest_release=" >> $GITHUB_OUTPUT
fi
echo "Current release tag: ${{ github.event.release.tag_name }}"
- name: Compare release tags
id: compare_tags
env:
CURRENT_TAG: ${{ github.event.release.tag_name }}
LATEST_TAG: ${{ steps.get_latest_release.outputs.latest_release }}
run: |
set -euo pipefail
# Handle first release case (no previous releases)
if [[ -z "${LATEST_TAG}" ]]; then
echo "🎉 This is the first release (${CURRENT_TAG}) - treating as latest"
echo "is_latest=true" >> $GITHUB_OUTPUT
elif [[ "${CURRENT_TAG}" == "${LATEST_TAG}" ]]; then
echo "✅ This release (${CURRENT_TAG}) is marked as the latest release"
echo "is_latest=true" >> $GITHUB_OUTPUT
else
echo " This release (${CURRENT_TAG}) is not the latest release (latest: ${LATEST_TAG})"
echo "is_latest=false" >> $GITHUB_OUTPUT
fi
docker-build-community:
name: Build & release community docker image
permissions:
contents: read
packages: write
id-token: write
uses: ./.github/workflows/release-docker-github.yml
secrets: inherit
needs:
- check-latest-release
with:
IS_PRERELEASE: ${{ github.event.release.prerelease }}
MAKE_LATEST: ${{ needs.check-latest-release.outputs.is_latest == 'true' }}
docker-build-cloud:
name: Build & push Formbricks Cloud to ECR
permissions:
contents: read
id-token: write
uses: ./.github/workflows/build-and-push-ecr.yml
secrets: inherit
with:
image_tag: ${{ needs.docker-build-community.outputs.VERSION }}
IS_PRERELEASE: ${{ github.event.release.prerelease }}
MAKE_LATEST: ${{ needs.check-latest-release.outputs.is_latest == 'true' }}
needs:
- check-latest-release
- docker-build-community
helm-chart-release:
name: Release Helm Chart
permissions:
contents: read
packages: write
uses: ./.github/workflows/release-helm-chart.yml
secrets: inherit
needs:
- docker-build-community
with:
VERSION: ${{ needs.docker-build-community.outputs.VERSION }}
verify-cloud-build:
name: Verify Cloud Build Outputs
runs-on: ubuntu-latest
timeout-minutes: 5 # Simple verification should be quick
needs:
- docker-build-cloud
steps:
- name: Harden the runner
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Display ECR build outputs
env:
IMAGE_TAG: ${{ needs.docker-build-cloud.outputs.IMAGE_TAG }}
TAGS: ${{ needs.docker-build-cloud.outputs.TAGS }}
run: |
set -euo pipefail
echo "✅ ECR Build Completed Successfully"
echo "Image Tag: ${IMAGE_TAG}"
echo "ECR Tags:"
printf '%s\n' "${TAGS}"
move-stable-tag:
name: Move stable tag to release
permissions:
contents: write # Required for tag push operations in called workflow
uses: ./.github/workflows/move-stable-tag.yml
needs:
- check-latest-release
- docker-build-community # Ensure release is successful first
with:
release_tag: ${{ github.event.release.tag_name }}
commit_sha: ${{ github.sha }}
is_prerelease: ${{ github.event.release.prerelease }}
make_latest: ${{ needs.check-latest-release.outputs.is_latest == 'true' }}

19
.github/workflows/labeler.yml vendored Normal file
View File

@@ -0,0 +1,19 @@
name: "Pull Request Labeler"
on:
- pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
labeler:
name: Pull Request Labeler
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v4
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
# https://github.com/actions/labeler/issues/442#issuecomment-1297359481
sync-labels: ""

View File

@@ -1,10 +1,6 @@
name: Lint
on:
workflow_call:
permissions:
contents: read
jobs:
build:
name: Linters
@@ -12,21 +8,16 @@ jobs:
timeout-minutes: 15
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- name: Setup Node.js 20.x
uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af
uses: actions/setup-node@v3
with:
node-version: 20.x
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64

View File

@@ -1,101 +0,0 @@
name: Move Stable Tag
on:
workflow_call:
inputs:
release_tag:
description: "The release tag name (e.g., 1.2.3)"
required: true
type: string
commit_sha:
description: "The commit SHA to point the stable tag to"
required: true
type: string
is_prerelease:
description: "Whether this is a prerelease (stable tag won't be moved for prereleases)"
required: false
type: boolean
default: false
make_latest:
description: "Whether to move stable tag (from GitHub release 'Set as the latest release' option)"
required: false
type: boolean
default: false
permissions:
contents: read
# Prevent concurrent stable tag operations to avoid race conditions
concurrency:
group: move-stable-tag-${{ github.repository }}
cancel-in-progress: true
jobs:
move-stable-tag:
name: Move stable tag to release
runs-on: ubuntu-latest
timeout-minutes: 10 # Prevent hung git operations
permissions:
contents: write # Required to push tags
# Only move stable tag for non-prerelease versions AND when make_latest is true
if: ${{ !inputs.is_prerelease && inputs.make_latest }}
steps:
- name: Harden the runner
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0 # Full history needed for tag operations
- name: Validate inputs
env:
RELEASE_TAG: ${{ inputs.release_tag }}
COMMIT_SHA: ${{ inputs.commit_sha }}
run: |
set -euo pipefail
# Validate release tag format
if [[ ! "$RELEASE_TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?(\+[a-zA-Z0-9.-]+)?$ ]]; then
echo "❌ Error: Invalid release tag format. Expected format: 1.2.3, 1.2.3-alpha"
echo "Provided: $RELEASE_TAG"
exit 1
fi
# Validate commit SHA format (40 character hex)
if [[ ! "$COMMIT_SHA" =~ ^[a-f0-9]{40}$ ]]; then
echo "❌ Error: Invalid commit SHA format. Expected 40 character hex string"
echo "Provided: $COMMIT_SHA"
exit 1
fi
echo "✅ Input validation passed"
echo "Release tag: $RELEASE_TAG"
echo "Commit SHA: $COMMIT_SHA"
- name: Move stable tag
env:
RELEASE_TAG: ${{ inputs.release_tag }}
COMMIT_SHA: ${{ inputs.commit_sha }}
run: |
set -euo pipefail
# Configure git
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
# Verify the commit exists
if ! git cat-file -e "$COMMIT_SHA"; then
echo "❌ Error: Commit $COMMIT_SHA does not exist in this repository"
exit 1
fi
# Move stable tag to the release commit
echo "📌 Moving stable tag to commit: $COMMIT_SHA (release: $RELEASE_TAG)"
git tag -f stable "$COMMIT_SHA"
git push origin stable --force
echo "✅ Successfully moved stable tag to release $RELEASE_TAG"
echo "🔗 Stable tag now points to: https://github.com/${{ github.repository }}/commit/$COMMIT_SHA"


@@ -1,159 +0,0 @@
name: PR Size Check
on:
pull_request:
types: [opened, synchronize, reopened]
permissions:
contents: read
pull-requests: write
jobs:
check-pr-size:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Harden the runner
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Check PR size
id: check-size
run: |
set -euo pipefail
# Fetch the base branch
git fetch origin "${{ github.base_ref }}"
# Get diff stats
diff_output=$(git diff --numstat "origin/${{ github.base_ref }}"...HEAD)
# Count lines, excluding:
# - Test files (*.test.ts, *.spec.tsx, etc.)
# - Locale files (locales/*.json, i18n/*.json)
# - Lock files (pnpm-lock.yaml, package-lock.json, yarn.lock)
# - Generated files (dist/, coverage/, build/, .next/)
# - Storybook stories (*.stories.tsx)
total_additions=0
total_deletions=0
counted_files=0
excluded_files=0
while IFS=$'\t' read -r additions deletions file; do
# Skip if additions or deletions are "-" (binary files)
if [ "$additions" = "-" ] || [ "$deletions" = "-" ]; then
continue
fi
# Check if file should be excluded
case "$file" in
*.test.ts|*.test.tsx|*.spec.ts|*.spec.tsx|*.test.js|*.test.jsx|*.spec.js|*.spec.jsx)
excluded_files=$((excluded_files + 1))
continue
;;
*/locales/*.json|*/i18n/*.json)
excluded_files=$((excluded_files + 1))
continue
;;
pnpm-lock.yaml|package-lock.json|yarn.lock)
excluded_files=$((excluded_files + 1))
continue
;;
dist/*|coverage/*|build/*|node_modules/*|test-results/*|playwright-report/*|.next/*|*.tsbuildinfo)
excluded_files=$((excluded_files + 1))
continue
;;
*.stories.ts|*.stories.tsx|*.stories.js|*.stories.jsx)
excluded_files=$((excluded_files + 1))
continue
;;
esac
total_additions=$((total_additions + additions))
total_deletions=$((total_deletions + deletions))
counted_files=$((counted_files + 1))
done <<EOF
${diff_output}
EOF
total_changes=$((total_additions + total_deletions))
echo "counted_files=${counted_files}" >> "${GITHUB_OUTPUT}"
echo "excluded_files=${excluded_files}" >> "${GITHUB_OUTPUT}"
echo "total_additions=${total_additions}" >> "${GITHUB_OUTPUT}"
echo "total_deletions=${total_deletions}" >> "${GITHUB_OUTPUT}"
echo "total_changes=${total_changes}" >> "${GITHUB_OUTPUT}"
# Set flag if PR is too large (> 800 lines)
if [ ${total_changes} -gt 800 ]; then
echo "is_too_large=true" >> "${GITHUB_OUTPUT}"
else
echo "is_too_large=false" >> "${GITHUB_OUTPUT}"
fi
- name: Comment on PR if too large
if: steps.check-size.outputs.is_too_large == 'true'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const totalChanges = ${{ steps.check-size.outputs.total_changes }};
const countedFiles = ${{ steps.check-size.outputs.counted_files }};
const excludedFiles = ${{ steps.check-size.outputs.excluded_files }};
const additions = ${{ steps.check-size.outputs.total_additions }};
const deletions = ${{ steps.check-size.outputs.total_deletions }};
const body = '## 🚨 PR Size Warning\n\n' +
'This PR has approximately **' + totalChanges + ' lines** of changes (' + additions + ' additions, ' + deletions + ' deletions across ' + countedFiles + ' files).\n\n' +
'Large PRs (>800 lines) are significantly harder to review and increase the chance of merge conflicts. Consider splitting this into smaller, self-contained PRs.\n\n' +
'### 💡 Suggestions:\n' +
'- **Split by feature or module** - Break down into logical, independent pieces\n' +
'- **Create a sequence of PRs** - Each building on the previous one\n' +
'- **Branch off PR branches** - Don\'t wait for reviews to continue dependent work\n\n' +
'### 📊 What was counted:\n' +
'- ✅ Source files, stylesheets, configuration files\n' +
'- ❌ Excluded ' + excludedFiles + ' files (tests, locales, locks, generated files)\n\n' +
'### 📚 Guidelines:\n' +
'- **Ideal:** 300-500 lines per PR\n' +
'- **Warning:** 500-800 lines\n' +
'- **Critical:** 800+ lines ⚠️\n\n' +
'If this large PR is unavoidable (e.g., migration, dependency update, major refactor), please explain in the PR description why it couldn\'t be split.';
// Check if we already commented
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
});
const botComment = comments.find(comment =>
comment.user.type === 'Bot' &&
comment.body.includes('🚨 PR Size Warning')
);
if (botComment) {
// Update existing comment
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: botComment.id,
body: body
});
} else {
// Create new comment
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: body
});
}


@@ -1,15 +1,9 @@
name: PR Update
# Update permissions to include all necessary ones
permissions:
contents: read
pull-requests: read
actions: read
checks: write
id-token: write
on:
pull_request:
branches:
- main
merge_group:
workflow_dispatch:
@@ -18,40 +12,64 @@ concurrency:
cancel-in-progress: true
jobs:
changes:
name: Detect changes
runs-on: ubuntu-latest
permissions:
pull-requests: read
outputs:
has-files-requiring-all-checks: ${{ steps.filter.outputs.has-files-requiring-all-checks }}
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- uses: dorny/paths-filter@v2
id: filter
with:
filters: |
has-files-requiring-all-checks:
- "!(**.md|.github/CODEOWNERS)"
test:
name: Run Unit Tests
needs: [changes]
if: ${{ needs.changes.outputs.has-files-requiring-all-checks == 'true' }}
uses: ./.github/workflows/test.yml
secrets: inherit
lint:
name: Run Linters
needs: [changes]
if: ${{ needs.changes.outputs.has-files-requiring-all-checks == 'true' }}
uses: ./.github/workflows/lint.yml
secrets: inherit
build:
name: Build Formbricks-web
needs: [changes]
if: ${{ needs.changes.outputs.has-files-requiring-all-checks == 'true' }}
uses: ./.github/workflows/build-web.yml
secrets: inherit
docs:
name: Build Docs
needs: [changes]
if: ${{ needs.changes.outputs.has-files-requiring-all-checks == 'true' }}
uses: ./.github/workflows/build-docs.yml
secrets: inherit
e2e-test:
name: Run E2E Tests
needs: [changes]
if: ${{ needs.changes.outputs.has-files-requiring-all-checks == 'true' }}
uses: ./.github/workflows/e2e.yml
secrets: inherit
required:
name: PR Check Summary
needs: [lint, test, build, e2e-test]
needs: [lint, test, build, e2e-test, docs]
if: always()
runs-on: ubuntu-latest
permissions:
contents: read
checks: write
statuses: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0
with:
egress-policy: audit
- name: fail if conditional jobs failed
if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'skipped') || contains(needs.*.result, 'cancelled')
run: exit 1


@@ -0,0 +1,46 @@
name: Release Changesets
on:
workflow_dispatch:
#push:
# branches:
# - main
concurrency: ${{ github.workflow }}-${{ github.ref }}
env:
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
jobs:
release:
name: Release
runs-on: ubuntu-latest
timeout-minutes: 15
env:
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
steps:
- name: Checkout Repo
uses: actions/checkout@v2
- name: Setup Node.js 18.x
uses: actions/setup-node@v2
with:
node-version: 18.x
- name: Install pnpm
uses: pnpm/action-setup@v2.2.4
- name: Install Dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
- name: Create Release Pull Request or Publish to npm
id: changesets
uses: changesets/action@v1
with:
# This expects you to have a script called release which does a build for your packages and calls changeset publish
publish: pnpm release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}


@@ -0,0 +1,62 @@
name: Docker for Data Migrations
on:
workflow_dispatch:
push:
tags:
- "v*"
env:
REGISTRY: ghcr.io
IMAGE_NAME: formbricks/data-migrations
DATABASE_URL: "postgresql://postgres:postgres@postgres:5432/formbricks?schema=public"
jobs:
build-and-push:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install cosign
if: github.event_name != 'pull_request'
uses: sigstore/cosign-installer@v3.5.0
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=tag
type=raw,value=${{ github.ref_name }}
type=raw,value=latest
- name: Build and push Docker image
uses: docker/build-push-action@v3
with:
context: .
file: ./packages/database/Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: |
DATABASE_URL=${{ env.DATABASE_URL }}
- name: Sign the published Docker image
if: ${{ github.event_name != 'pull_request' }}
run: |
cosign sign --yes ghcr.io/${{ env.IMAGE_NAME }}:${{ github.ref_name }}
cosign sign --yes ghcr.io/${{ env.IMAGE_NAME }}:latest


@@ -1,50 +1,91 @@
name: Build Community Testing Images
name: Docker Release to Github Experimental
# This workflow builds experimental/testing versions of Formbricks for self-hosting customers
# to test fixes and features before official releases. Images are pushed to GHCR with
# timestamped experimental versions for easy identification and testing.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
on:
workflow_dispatch:
inputs:
version_override:
description: "Override version (SemVer only, e.g., 1.2.3-beta). Leave empty for auto-generated experimental version."
required: false
type: string
permissions:
contents: read
packages: write
id-token: write
env:
# Use docker.io for Docker Hub if empty
REGISTRY: ghcr.io
# github.repository as <account>/<repo>
IMAGE_NAME: ${{ github.repository }}-experimental
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/formbricks?schema=public"
jobs:
build-community-testing:
name: Build Community Testing Image
build:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
packages: write
# This is used to complete the identity challenge
# with sigstore/fulcio when running outside of PRs.
id-token: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
uses: actions/checkout@v3
- name: Build and push community testing image
uses: ./.github/actions/build-and-push-docker
- name: Set up Depot CLI
uses: depot/setup-action@v1
# Install the cosign tool except on PR
# https://github.com/sigstore/cosign-installer
- name: Install cosign
if: github.event_name != 'pull_request'
uses: sigstore/cosign-installer@v3.5.0
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-action@v3 # v3.0.0
with:
registry_type: "ghcr"
ghcr_image_name: "${{ github.repository }}-experimental"
experimental_mode: "true"
version: ${{ inputs.version_override }}
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5 # v5.0.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push Docker image
id: build-and-push
uses: depot/build-push-action@v1
with:
project: tw0fqmsx3c
token: ${{ secrets.DEPOT_PROJECT_TOKEN }}
context: .
file: ./apps/web/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Sign the resulting Docker image digest except on PRs.
# This will only write to the public Rekor transparency log when the Docker
# repository is public to avoid leaking data. If you would like to publish
# transparency data even for private images, pass --force to cosign below.
# https://github.com/sigstore/cosign
- name: Sign the published Docker image
if: ${{ github.event_name != 'pull_request' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEPOT_PROJECT_TOKEN: ${{ secrets.DEPOT_PROJECT_TOKEN }}
DUMMY_DATABASE_URL: ${{ secrets.DUMMY_DATABASE_URL }}
DUMMY_ENCRYPTION_KEY: ${{ secrets.DUMMY_ENCRYPTION_KEY }}
DUMMY_REDIS_URL: ${{ secrets.DUMMY_REDIS_URL }}
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
# https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-an-intermediate-environment-variable
TAGS: ${{ steps.meta.outputs.tags }}
DIGEST: ${{ steps.build-and-push.outputs.digest }}
# This step uses the identity token to provision an ephemeral certificate
# against the sigstore community Fulcio instance.
run: echo "${TAGS}" | xargs -I {} cosign sign --yes {}@${DIGEST}


@@ -1,4 +1,4 @@
name: Release Community Docker Images
name: Docker Release to Github
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
@@ -6,103 +6,89 @@ name: Release Community Docker Images
# documentation.
on:
workflow_call:
inputs:
IS_PRERELEASE:
description: "Whether this is a prerelease (affects latest tag)"
required: false
type: boolean
default: false
MAKE_LATEST:
description: "Whether to tag as latest (from GitHub release 'Set as the latest release' option)"
required: false
type: boolean
default: false
outputs:
VERSION:
description: release version
value: ${{ jobs.build.outputs.VERSION }}
workflow_dispatch:
push:
tags:
- "v*"
env:
# Use docker.io for Docker Hub if empty
REGISTRY: ghcr.io
# github.repository as <account>/<repo>
IMAGE_NAME: ${{ github.repository }}
permissions:
contents: read
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/formbricks?schema=public"
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
packages: write
id-token: write
# This is used to complete the identity challenge
# with sigstore/fulcio when running outside of PRs.
outputs:
VERSION: ${{ steps.extract_release_tag.outputs.VERSION }}
id-token: write
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@v3
- name: Extract release version from tag
id: extract_release_tag
run: |
set -euo pipefail
- name: Set up Depot CLI
uses: depot/setup-action@v1
# Extract tag name with fallback logic for different trigger contexts
if [[ -n "${RELEASE_TAG:-}" ]]; then
TAG="$RELEASE_TAG"
echo "Using RELEASE_TAG override: $TAG"
elif [[ "$GITHUB_REF_NAME" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?$ ]] || [[ "$GITHUB_REF_NAME" =~ ^v[0-9] ]]; then
TAG="$GITHUB_REF_NAME"
echo "Using GITHUB_REF_NAME (looks like tag): $TAG"
else
# Fallback: extract from GITHUB_REF for direct tag triggers
TAG="${GITHUB_REF#refs/tags/}"
if [[ -z "$TAG" || "$TAG" == "$GITHUB_REF" ]]; then
TAG="$GITHUB_REF_NAME"
echo "Using GITHUB_REF_NAME as final fallback: $TAG"
else
echo "Extracted from GITHUB_REF: $TAG"
fi
fi
# Install the cosign tool except on PR
# https://github.com/sigstore/cosign-installer
- name: Install cosign
if: github.event_name != 'pull_request'
uses: sigstore/cosign-installer@v3.5.0
# Strip v-prefix if present (normalize to clean SemVer)
TAG=${TAG#[vV]}
# Validate SemVer format (supports prereleases like 4.0.0-rc.1)
if [[ ! "$TAG" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?$ ]]; then
echo "ERROR: Invalid tag format '$TAG'. Expected SemVer (e.g., 1.2.3, 4.0.0-rc.1)"
exit 1
fi
echo "VERSION=$TAG" >> $GITHUB_OUTPUT
echo "Using version: $TAG"
- name: Build and push community release image
id: build
uses: ./.github/actions/build-and-push-docker
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-action@v3 # v3.0.0
with:
registry_type: "ghcr"
ghcr_image_name: ${{ env.IMAGE_NAME }}
version: ${{ steps.extract_release_tag.outputs.VERSION }}
is_prerelease: ${{ inputs.IS_PRERELEASE }}
make_latest: ${{ inputs.MAKE_LATEST }}
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@v5 # v5.0.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push Docker image
id: build-and-push
uses: depot/build-push-action@v1
with:
project: tw0fqmsx3c
token: ${{ secrets.DEPOT_PROJECT_TOKEN }}
context: .
file: ./apps/web/Dockerfile
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Sign the resulting Docker image digest except on PRs.
# This will only write to the public Rekor transparency log when the Docker
# repository is public to avoid leaking data. If you would like to publish
# transparency data even for private images, pass --force to cosign below.
# https://github.com/sigstore/cosign
- name: Sign the published Docker image
if: ${{ github.event_name != 'pull_request' }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DEPOT_PROJECT_TOKEN: ${{ secrets.DEPOT_PROJECT_TOKEN }}
DUMMY_DATABASE_URL: ${{ secrets.DUMMY_DATABASE_URL }}
DUMMY_ENCRYPTION_KEY: ${{ secrets.DUMMY_ENCRYPTION_KEY }}
DUMMY_REDIS_URL: ${{ secrets.DUMMY_REDIS_URL }}
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
# https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-an-intermediate-environment-variable
TAGS: ${{ steps.meta.outputs.tags }}
DIGEST: ${{ steps.build-and-push.outputs.digest }}
# This step uses the identity token to provision an ephemeral certificate
# against the sigstore community Fulcio instance.
run: echo "${TAGS}" | xargs -I {} cosign sign --yes {}@${DIGEST}

44 .github/workflows/release-docker.yml vendored Normal file

@@ -0,0 +1,44 @@
name: Release on Dockerhub
on:
push:
tags:
- "v*"
jobs:
release-image-on-dockerhub:
name: Release on Dockerhub
runs-on: ubuntu-latest
env:
TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
TURBO_TEAM: ${{ secrets.TURBO_TEAM }}
DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/formbricks?schema=public"
steps:
- name: Checkout Repo
uses: actions/checkout@v2
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Get Release Tag
id: extract_release_tag
run: |
TAG=${{ github.ref }}
TAG=${TAG#refs/tags/v}
echo "RELEASE_TAG=$TAG" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v4
with:
context: .
file: ./apps/web/Dockerfile
push: true
tags: |
${{ secrets.DOCKER_USERNAME }}/formbricks:${{ env.RELEASE_TAG }}
${{ secrets.DOCKER_USERNAME }}/formbricks:latest


@@ -1,93 +0,0 @@
name: Publish Helm Chart
on:
workflow_call:
inputs:
VERSION:
description: "The version of the Helm chart to release"
required: true
type: string
permissions:
contents: read
jobs:
publish:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@ec9f2d5744a09debf3a187a3f4f675c53b671911 # v2.13.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Validate input version
env:
INPUT_VERSION: ${{ inputs.VERSION }}
run: |
set -euo pipefail
# Validate input version format (expects clean semver without 'v' prefix)
if [[ ! "$INPUT_VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+(-[a-zA-Z0-9.-]+)?(\+[a-zA-Z0-9.-]+)?$ ]]; then
echo "❌ Error: Invalid version format. Must be clean semver (e.g., 1.2.3, 1.2.3-alpha)"
echo "Expected: clean version without 'v' prefix"
echo "Provided: $INPUT_VERSION"
exit 1
fi
# Store validated version in environment variable
echo "VERSION<<EOF" >> $GITHUB_ENV
echo "$INPUT_VERSION" >> $GITHUB_ENV
echo "EOF" >> $GITHUB_ENV
- name: Set up Helm
uses: azure/setup-helm@5119fcb9089d432beecbf79bb2c7915207344b78 # v3.5
with:
version: latest
- name: Log in to GitHub Container Registry
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_ACTOR: ${{ github.actor }}
run: printf '%s' "$GITHUB_TOKEN" | helm registry login ghcr.io --username "$GITHUB_ACTOR" --password-stdin
- name: Install YQ
uses: dcarbone/install-yq-action@4075b4dca348d74bd83f2bf82d30f25d7c54539b # v1.3.1
- name: Update Chart.yaml with new version
env:
VERSION: ${{ env.VERSION }}
run: |
set -euo pipefail
echo "Updating Chart.yaml with version: ${VERSION}"
yq -i ".version = \"${VERSION}\"" helm-chart/Chart.yaml
yq -i ".appVersion = \"${VERSION}\"" helm-chart/Chart.yaml
echo "✅ Successfully updated Chart.yaml"
- name: Package Helm chart
env:
VERSION: ${{ env.VERSION }}
run: |
set -euo pipefail
echo "Packaging Helm chart version: ${VERSION}"
helm package ./helm-chart
echo "✅ Successfully packaged formbricks-${VERSION}.tgz"
- name: Push Helm chart to GitHub Container Registry
env:
VERSION: ${{ env.VERSION }}
run: |
set -euo pipefail
echo "Pushing Helm chart to registry: formbricks-${VERSION}.tgz"
helm push "formbricks-${VERSION}.tgz" oci://ghcr.io/formbricks/helm-charts
echo "✅ Successfully pushed Helm chart to registry"


@@ -16,12 +16,7 @@ jobs:
name: PR title
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- uses: amannn/action-semantic-pull-request@0723387faaf9b38adef4775cd42cfd5155ed6017 # v5.5.3
- uses: amannn/action-semantic-pull-request@v5
id: lint_pr_title
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -40,7 +35,7 @@ jobs:
revert
ossgg
- uses: marocchino/sticky-pull-request-comment@67d0dec7b07ed060a405f9b2a64b8ab319fdd7db # v2.9.2
- uses: marocchino/sticky-pull-request-comment@v2
# When the previous steps fails, the workflow would stop. By adding this
# condition you can continue the execution with the populated error message.
if: always() && (steps.lint_pr_title.outputs.error_message != null)
@@ -56,3 +51,11 @@ jobs:
```
${{ steps.lint_pr_title.outputs.error_message }}
```
# Delete a previous comment when the issue has been resolved
- if: ${{ steps.lint_pr_title.outputs.error_message == null }}
uses: marocchino/sticky-pull-request-comment@v2
with:
header: pr-title-lint-error
message: |
Thank you for following the naming conventions for pull request titles! 🙏


@@ -1,55 +0,0 @@
name: SonarQube
on:
workflow_dispatch:
push:
branches:
- main
pull_request:
types: [opened, synchronize, reopened]
merge_group:
permissions:
contents: read
jobs:
sonarqube:
name: SonarQube
runs-on: ubuntu-latest
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
- name: Setup Node.js 22.x
uses: actions/setup-node@39370e3970a6d050c480ffad4ff0ed4d3fdee5af
with:
node-version: 22.x
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
- name: create .env
run: cp .env.example .env
- name: Generate Random ENCRYPTION_KEY, CRON_SECRET & NEXTAUTH_SECRET and fill in .env
run: |
RANDOM_KEY=$(openssl rand -hex 32)
sed -i "s/ENCRYPTION_KEY=.*/ENCRYPTION_KEY=${RANDOM_KEY}/" .env
sed -i "s/CRON_SECRET=.*/CRON_SECRET=${RANDOM_KEY}/" .env
sed -i "s/NEXTAUTH_SECRET=.*/NEXTAUTH_SECRET=${RANDOM_KEY}/" .env
sed -i "s|REDIS_URL=.*|REDIS_URL=|" .env
- name: Run tests with coverage
run: |
pnpm test:coverage
- name: SonarQube Scan
uses: SonarSource/sonarqube-scan-action@2500896589ef8f7247069a56136f8dc177c27ccf
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}


@@ -1,33 +1,23 @@
name: Tests
on:
workflow_call:
permissions:
contents: read
jobs:
build:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 15
permissions:
contents: read
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
- uses: ./.github/actions/dangerous-git-checkout
- name: Setup Node.js 20.x
uses: actions/setup-node@1a4442cacd436585916779262731d5b162bc6ec7 # v3.8.2
uses: actions/setup-node@v3
with:
node-version: 20.x
- name: Install pnpm
uses: pnpm/action-setup@a7487c7e89a18df4991f7f222e4898a00d66ddda # v4.1.0
uses: pnpm/action-setup@v4
- name: Install dependencies
run: pnpm install --config.platform=linux --config.architecture=x64
@@ -41,7 +31,6 @@ jobs:
sed -i "s/ENCRYPTION_KEY=.*/ENCRYPTION_KEY=${RANDOM_KEY}/" .env
sed -i "s/CRON_SECRET=.*/CRON_SECRET=${RANDOM_KEY}/" .env
sed -i "s/NEXTAUTH_SECRET=.*/NEXTAUTH_SECRET=${RANDOM_KEY}/" .env
sed -i "s|REDIS_URL=.*|REDIS_URL=|" .env
- name: Test
run: pnpm test


@@ -1,63 +0,0 @@
name: Translation Validation
permissions:
contents: read
on:
pull_request:
types: [opened, synchronize, reopened]
paths:
- "apps/web/**/*.ts"
- "apps/web/**/*.tsx"
- "apps/web/locales/**/*.json"
- "scan-translations.ts"
push:
branches:
- main
paths:
- "apps/web/**/*.ts"
- "apps/web/**/*.tsx"
- "apps/web/locales/**/*.json"
- "scan-translations.ts"
jobs:
validate-translations:
name: Validate Translation Keys
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Harden the runner (Audit all outbound calls)
uses: step-security/harden-runner@0634a2670c59f64b4a01f0f96f84700a4088b9f0 # v2.12.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Setup Node.js
uses: actions/setup-node@1d0ff469b7ec7b3cb9d8673fde0c81c44821de2a # v4.2.0
with:
node-version: 18
- name: Setup pnpm
uses: pnpm/action-setup@a3252b78c470c02df07e9d59298aecedc3ccdd6d # v3.0.0
with:
version: 9.15.9
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Validate translation keys
run: |
echo ""
echo "🔍 Validating translation keys..."
echo ""
pnpm run scan-translations
- name: Summary
if: success()
run: |
echo ""
echo "✅ Translation validation completed successfully!"
echo ""


@@ -0,0 +1,27 @@
name: "Welcome new contributors"
on:
issues:
types: opened
pull_request:
types: opened
permissions:
pull-requests: write
issues: write
jobs:
welcome-message:
name: Welcoming New Users
runs-on: ubuntu-latest
timeout-minutes: 10
if: github.event.action == 'opened'
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
pr-message: |-
Thank you so much for making your first Pull Request and taking the time to improve Formbricks! 🚀🙏❤️
Feel free to join the conversation at [Discord](https://formbricks.com/discord)
issue-message: |
          Thank you for opening your first issue! 🙏❤️ One of our team members will review it and get back to you as soon as possible. 😊

60 .gitignore vendored

@@ -1,26 +1,25 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
**/node_modules
node_modules
.pnp
.pnp.js
.pnpm-store/
# testing
**/coverage
coverage
# next.js
**/.next/
**/out/
**/build
.next/
out/
build
# node
**/dist/
dist/
# misc
**/.DS_Store
.DS_Store
*.pem
Zone.Identifier
# debug
npm-debug.log*
@@ -28,43 +27,36 @@ yarn-debug.log*
yarn-error.log*
# local env files
**/.env
**/.env.local
**/.env.development.local
**/.env.test.local
**/.env.production.local
.env
.env.local
.env.development.local
.env.test.local
.env.production.local
!packages/database/.env
!apps/web/.env
# build tools
.turbo
**/*vite.config.*.timestamp-*
# Prisma generated files
packages/database/zod
# environment specific
# turbo
.turbo
# nixos stuff
.direnv
Zone.Identifier
# Playwright
/test-results/
/playwright-report/
/blob-report/
/playwright/.cache/
# project specific
# uploads
packages/lib/uploads
# Vite Timestamps
*vite.config.*.timestamp-*
# js compiled assets
apps/web/public/js
packages/database/migrations
branch.json
.vercel
# IntelliJ IDEA
/.idea/
/*.iml
packages/ios/FormbricksSDK/FormbricksSDK.xcodeproj/project.xcworkspace/xcuserdata
.cursorrules
i18n.cache
stats.html
# Agent skill archives
.agent/skills/**/.archived/
.agent/.temp-skills/

6 .gitpod.Dockerfile vendored Normal file

@@ -0,0 +1,6 @@
FROM gitpod/workspace-full
# Install custom tools, runtime, etc.
RUN brew install yq
RUN pnpm install turbo --global

74 .gitpod.yml Normal file

@@ -0,0 +1,74 @@
tasks:
- name: demo
init: |
gp sync-await init-install &&
bash .gitpod/setup-demo.bash
command: |
cd apps/demo &&
cp .env.example .env &&
sed -i -r "s#^(NEXT_PUBLIC_FORMBRICKS_API_HOST=).*#\1 $(gp url 3000)#" .env &&
gp sync-await init &&
turbo --filter "@formbricks/demo" go
- name: Init Formbricks
init: |
cp .env.example .env &&
bash .gitpod/init.bash &&
turbo --filter "@formbricks/js" build &&
gp sync-done init-install
command: |
gp sync-done init &&
gp tasks list &&
gp ports await 3002 && gp ports await 3000 && gp open apps/demo/.env && gp preview $(gp url 3002) --external
- name: web
init: |
gp sync-await init-install &&
bash .gitpod/setup-web.bash &&
turbo --filter "@formbricks/database" db:down
command: |
gp sync-await init &&
cp .env.example .env &&
sed -i -r "s#^(WEBAPP_URL=).*#\1 $(gp url 3000)#" .env &&
RANDOM_ENCRYPTION_KEY=$(openssl rand -hex 32)
sed -i 's/^ENCRYPTION_KEY=.*/ENCRYPTION_KEY='"$RANDOM_ENCRYPTION_KEY"'/' .env
turbo --filter "@formbricks/web" go
image:
file: .gitpod.Dockerfile
ports:
- port: 3000
visibility: public
onOpen: open-browser
- port: 3001
visibility: public
onOpen: ignore
- port: 3002
visibility: public
onOpen: ignore
- port: 5432
visibility: public
onOpen: ignore
- port: 1025
visibility: public
onOpen: ignore
- port: 8025
visibility: public
onOpen: open-browser
github:
prebuilds:
master: true
pullRequests: true
addComment: true
vscode:
extensions:
- "ban.spellright"
- "bradlc.vscode-tailwindcss"
- "DavidAnson.vscode-markdownlint"
- "dbaeumer.vscode-eslint"
- "esbenp.prettier-vscode"
- "Prisma.prisma"
- "yzhang.markdown-all-in-one"


@@ -1,6 +1,6 @@
#!/bin/bash
images=($(yq eval '.services.*.image' docker-compose.dev.yml))
images=($(yq eval '.services.*.image' packages/database/docker-compose.yml))
pull_image() {
docker pull "$1"


@@ -1,2 +0,0 @@
echo "{\"branchName\": \"$(git rev-parse --abbrev-ref HEAD)\"}" > ./branch.json
prettier --write ./branch.json


@@ -1,40 +1 @@
# Load environment variables from .env files
if [ -f .env ]; then
set -a
. .env
set +a
fi
pnpm lint-staged
# Run Lingo.dev i18n workflow if LINGODOTDEV_API_KEY is set
if [ -n "$LINGODOTDEV_API_KEY" ]; then
echo ""
echo "🌍 Running Lingo.dev translation workflow..."
echo ""
# Run translation generation and validation
if pnpm run i18n; then
echo ""
echo "✅ Translation validation passed"
echo ""
# Add updated locale files to git
git add apps/web/locales/*.json
else
echo ""
echo "❌ Translation validation failed!"
echo ""
echo "Please fix the translation issues above before committing:"
echo " • Add missing translation keys to your locale files"
echo " • Remove unused translation keys"
echo ""
echo "Or run 'pnpm i18n' to see the detailed report"
echo ""
exit 1
fi
else
echo ""
echo "⚠️ Skipping translation validation: LINGODOTDEV_API_KEY is not set"
echo " (This is expected for community contributors)"
echo ""
fi
pnpm lint-staged

2 .nvmrc

@@ -1 +1 @@
22.1.0
20.18.0


@@ -2,10 +2,5 @@ const baseConfig = require("./packages/config-prettier/prettier-preset");
module.exports = {
...baseConfig,
plugins: [
"@trivago/prettier-plugin-sort-imports",
"prettier-plugin-tailwindcss",
"prettier-plugin-sort-json",
],
jsonRecursiveSort: true,
plugins: ["@trivago/prettier-plugin-sort-imports", "prettier-plugin-tailwindcss"],
};


@@ -6,8 +6,6 @@
"dbaeumer.vscode-eslint", // eslint plugin
"esbenp.prettier-vscode", // prettier plugin
"Prisma.prisma", // syntax|format|completion for prisma
"yzhang.markdown-all-in-one", // nicer markdown support
"vitest.explorer", // run tests directly from the code window
"sonarsource.sonarlint-vscode" // sonarqube linter for vscode
"yzhang.markdown-all-in-one" // nicer markdown support
]
}

18 .vscode/launch.json vendored

@@ -1,21 +1,21 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Launch localhost:3002",
"reAttach": true,
"request": "launch",
"type": "firefox",
"request": "launch",
"reAttach": true,
"url": "http://localhost:3002/",
"webRoot": "${workspaceFolder}"
},
{
"name": "Attach",
"request": "attach",
"type": "firefox"
"type": "firefox",
"request": "attach"
}
],
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0"
]
}

14 .vscode/settings.json vendored

@@ -1,16 +1,4 @@
{
"eslint.validate": ["javascript", "javascriptreact", "typescript", "typescriptreact"],
"eslint.workingDirectories": [
{
"mode": "auto"
}
],
"javascript.updateImportsOnFileMove.enabled": "always",
"sonarlint.connectedMode.project": {
"connectionId": "formbricks",
"projectKey": "formbricks_formbricks"
},
"typescript.preferences.importModuleSpecifier": "non-relative",
"typescript.tsdk": "node_modules/typescript/lib",
"typescript.updateImportsOnFileMove.enabled": "always"
"typescript.preferences.importModuleSpecifier": "non-relative"
}


@@ -1,82 +0,0 @@
# Repository Guidelines
## Project Structure & Module Organization
Formbricks runs as a pnpm/turbo monorepo. `apps/web` is the Next.js product surface, with feature modules under `app/` and `modules/`, assets in `public/` and `images/`, and Playwright specs in `apps/web/playwright/`. `apps/storybook` renders reusable UI pieces for review. Shared logic lives in `packages/*`: `database` (Prisma schemas/migrations), `surveys`, `js-core`, `types`, plus linting and TypeScript presets (`config-*`). Deployment collateral is kept in `docs/`, `docker/`, and `helm-chart/`. Unit tests sit next to their source as `*.test.ts` or inside `__tests__`.
## Build, Test & Development Commands
- `pnpm install` — install workspace dependencies pinned by `pnpm-lock.yaml`.
- `pnpm db:up` / `pnpm db:down` — start/stop the Docker services backing the app.
- `pnpm dev` — run all app and worker dev servers in parallel via Turborepo.
- `pnpm build` — generate production builds for every package and app.
- `pnpm lint` — apply the shared ESLint rules across the workspace.
- `pnpm test` / `pnpm test:coverage` — execute Vitest suites with optional coverage.
- `pnpm test:e2e` — launch the Playwright browser regression suite.
- `pnpm db:migrate:dev` — apply Prisma migrations against the dev database.
## Coding Style & Naming Conventions
TypeScript, React, and Prisma are the primary languages. Use the shared ESLint presets (`@formbricks/eslint-config`) and Prettier preset (110-char width, semicolons, double quotes, sorted import groups). Two-space indentation is standard; prefer `PascalCase` for React components and folders under `modules/`, `camelCase` for functions/variables, and `SCREAMING_SNAKE_CASE` only for constants. When adding mocks, place them inside `__mocks__` so import ordering stays stable.
We are using SonarQube to identify code smells and security hotspots.
## Architecture & Patterns
- Next.js app router lives in `apps/web/app` with route groups like `(app)` and `(auth)`. Services live in `apps/web/lib`, feature modules in `apps/web/modules`.
- Server actions wrap service calls and return `{ data }` or `{ error }` consistently (see the sketch after this list).
- Context providers should guard against missing provider usage and use cleanup patterns that snapshot refs inside `useEffect` to avoid React hooks warnings.
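A minimal sketch of the `{ data }` / `{ error }` convention, assuming a hypothetical `updateSurveyNameAction` and an inline stand-in for a service from `apps/web/lib` (neither is actual Formbricks code):

```ts
"use server";

// Result shape shared by server actions: exactly one of data or error is set.
type ActionResult<T> = { data: T } | { error: string };

// Hypothetical stand-in for a service from apps/web/lib.
async function updateSurveyNameInDb(surveyId: string, name: string): Promise<{ id: string; name: string }> {
  return { id: surveyId, name };
}

export async function updateSurveyNameAction(
  surveyId: string,
  name: string
): Promise<ActionResult<{ id: string; name: string }>> {
  try {
    return { data: await updateSurveyNameInDb(surveyId, name) };
  } catch (err) {
    return { error: err instanceof Error ? err.message : "Unknown error" };
  }
}
```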
## Caching
- Use React `cache()` for request-level dedupe and `cache.withCache()` or explicit Redis for expensive data (see the sketch after this list).
- Do not use Next.js `unstable_cache()`.
- Always use `createCacheKey.*` utilities for cache keys.
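A request-level dedupe sketch using React's `cache()`; the loader is a placeholder, and the exact signatures of the project's `cache.withCache()` and `createCacheKey.*` helpers are not shown here:

```ts
import { cache } from "react";

// Placeholder loader standing in for a real database query.
async function fetchSurveyFromDb(surveyId: string): Promise<{ id: string }> {
  return { id: surveyId };
}

// Calls with the same id within a single request are deduplicated,
// so the underlying query runs at most once per request.
export const getSurvey = cache(async (surveyId: string) => fetchSurveyFromDb(surveyId));
```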
## i18n (Internationalization)
- All user-facing text must use the `t()` function from `react-i18next` (see the sketch after this list).
- Key naming: use lowercase with dots for nesting (e.g., `common.welcome`).
- Translations are in `apps/web/locales/`. Default is `en-US.json`.
- Lingo.dev is automatically translating strings from en-US into other languages on commit. Run `pnpm i18n` to generate missing translations and validate keys.
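A minimal `t()` usage sketch; the `common.welcome` key follows the lowercase dotted naming and would live in `apps/web/locales/en-US.json`:

```tsx
import { useTranslation } from "react-i18next";

// All user-facing text goes through t(); no hardcoded strings in JSX.
export function WelcomeHeading() {
  const { t } = useTranslation();
  return <h1>{t("common.welcome")}</h1>;
}
```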
## Database & Prisma Performance
- Multi-tenancy: All data must be scoped by Organization or Environment.
- Soft Deletion: Check for `isActive` or `deletedAt` fields; use proper filtering.
- Never use `skip`/`offset` with `prisma.response.count()`; only use `where`.
- Separate count and data queries and run them in parallel (`Promise.all`); see the sketch after this list.
- Prefer cursor pagination for large datasets.
- When filtering by `createdAt`, include indexed fields (e.g., `surveyId` + `createdAt`).
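A sketch of parallel count/data queries with cursor pagination; the `response` model and its `surveyId`/`createdAt` fields mirror the guideline above, but the exact schema is assumed:

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function getResponsesPage(surveyId: string, cursor?: string) {
  // Filter by createdAt together with the indexed surveyId field.
  const where = { surveyId, createdAt: { gte: new Date("2024-01-01") } };

  // Count and data queries run in parallel; count uses only `where`, never skip/offset.
  const [total, responses] = await Promise.all([
    prisma.response.count({ where }),
    prisma.response.findMany({
      where,
      orderBy: { createdAt: "desc" },
      take: 50,
      ...(cursor ? { cursor: { id: cursor }, skip: 1 } : {}), // cursor pagination
    }),
  ]);

  return { total, responses, nextCursor: responses.at(-1)?.id };
}
```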
## Testing Guidelines
Prefer Vitest with Testing Library for logic in `.ts` files, keeping specs colocated with the code they exercise (`utility.test.ts`). Do not write tests for `.tsx` files—React components are covered by Playwright E2E tests instead. Mock network and storage boundaries through helpers from `@formbricks/*`. Run `pnpm test` before opening a PR and `pnpm test:coverage` when touching critical flows; keep coverage from regressing. End-to-end scenarios belong in `apps/web/playwright`, using descriptive filenames (`billing.spec.ts`) and tagging slow suites with `@slow` when necessary.
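As an illustration, a colocated spec might look like this (the `formatSurveyName` utility is hypothetical):

```ts
// utility.test.ts — colocated next to utility.ts
import { describe, expect, test } from "vitest";

// Hypothetical utility under test.
const formatSurveyName = (name: string) => name.trim().toLowerCase();

describe("formatSurveyName", () => {
  test("trims whitespace and lowercases", () => {
    expect(formatSurveyName("  My Survey ")).toBe("my survey");
  });
});
```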
## Documentation (apps/docs)
- Add frontmatter with `title`, `description`, and `icon` at the top of the MDX file.
- Do not start with an H1; use Camel Case headings (only capitalize the feature name).
- Use Mintlify components for steps and callouts.
- If Enterprise-only, add the Enterprise note block described in docs.
## Storybook
- Stories live in `stories.tsx` in the component folder and import from `"./index"` (see the sketch after this list).
- Use `@storybook/react-vite` and organize argTypes into `Behavior`, `Appearance`, `Content`.
- Include Default, Disabled (if supported), WithIcon (if supported), all variants, and edge cases.
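A hedged sketch of this layout, assuming a Storybook version where the `Meta`/`StoryObj` types are exported from the framework package; the `Button` component and its props are placeholders:

```tsx
// stories.tsx — lives next to the component and imports from "./index".
import type { Meta, StoryObj } from "@storybook/react-vite";
import { Button } from "./index";

const meta: Meta<typeof Button> = {
  component: Button,
  argTypes: {
    disabled: { table: { category: "Behavior" } },
    variant: { table: { category: "Appearance" } },
    children: { table: { category: "Content" } },
  },
};
export default meta;

type Story = StoryObj<typeof Button>;

export const Default: Story = { args: { children: "Save" } };
export const Disabled: Story = { args: { children: "Save", disabled: true } };
```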
## GitHub Actions
- Always set minimal `permissions` for `GITHUB_TOKEN`.
- On `ubuntu-latest`, add `step-security/harden-runner` as the first step.
## Quality Checklist
- Keep code DRY and small; remove dead code and unused imports.
- Follow React hooks rules, keep effects focused, and avoid unnecessary `useMemo`/`useCallback`.
- Prefer type inference, avoid `any`, and use shared types from `@formbricks/types`.
- Keep components focused, avoid deep nesting, and ensure basic accessibility.
## Commit & Pull Request Guidelines
Commits follow a lightweight Conventional Commit format (`fix:`, `chore:`, `feat:`) and usually append the PR number, e.g. `fix: update OpenAPI schema (#6617)`. Keep commits scoped and lint-clean. Pull requests should outline the problem, summarize the solution, and link to issues or product specs. Attach screenshots or gifs for UI-facing work, list any migrations or env changes, and paste the output of relevant commands (`pnpm test`, `pnpm lint`, `pnpm db:migrate:dev`) so reviewers can verify readiness.


@@ -14,7 +14,17 @@ Are you brimming with brilliant ideas? For new features that can elevate Formbri
## 🛠 Crafting Pull Requests
For the time being, we don't have the capacity to properly facilitate community contributions. Reviewing them takes a lot of engineering attention, often on issues that don't follow our prioritization, so for the coming months we've decided to facilitate community code contributions only in rare exceptions.
Ready to dive into the code and make a real impact? Here's your path:
1. **Read our Best Practices**: [It takes 5 minutes](https://formbricks.com/docs/developer-docs/contributing/get-started) but will help you save hours 🤓
1. **Fork the Repository:** Fork our repository or use [Gitpod](https://formbricks.com/docs/developer-docs/contributing/gitpod) or use [Codespaces](https://formbricks.com/docs/developer-docs/contributing/codespaces)
1. **Tweak and Transform:** Work your coding magic and apply your changes.
1. **Pull Request Act:** If you're ready to go, craft a new pull request closely following our PR template 🙏
Would you prefer a chat before you dive into a lot of work? Our [Discord server](https://formbricks.com/discord) is your harbor. Share your thoughts, and we'll meet you there with open arms. We're responsive and friendly, promise!
## 🚀 Aspiring Features

Some files were not shown because too many files have changed in this diff.