Compare commits

..

19 Commits

Author SHA1 Message Date
Matthias Nannt
208eb7ce2d chore: infrastructure improvements to lower elb errors 2025-06-01 20:27:24 +02:00
Anshuman Pandey
ec208960e8 fix: surveys package resize observer issue (#5907)
Co-authored-by: Matthias Nannt <mail@matthiasnannt.com>
2025-05-29 19:00:28 +00:00
Piyush Gupta
b9505158b4 fix: ciphers issue for fb staging (#5908) 2025-05-29 14:39:20 +00:00
abhishek
ad0c3421f0 fix: alignment issue in file upload (#5828) 2025-05-29 16:40:18 +02:00
Matti Nannt
916c00344b chore: clean up public directory and update cache headers (#5904)
Co-authored-by: Piyush Gupta <piyushguptaa2z123@gmail.com>
2025-05-29 10:46:41 +00:00
Jakob Schott
459cdee17e chore: tweak language select dropdown width (#5878) 2025-05-29 03:54:51 +00:00
Harsh Bhat
bb26a64dbb docs: follow up update (#5601)
Co-authored-by: Johannes <johannes@formbricks.com>
2025-05-29 03:24:58 +00:00
Harsh Bhat
29a3fa532a docs: RTL support in multi-lang docs (#5898)
Co-authored-by: Johannes <johannes@formbricks.com>
2025-05-29 03:02:52 +00:00
Harsh Bhat
738b8f9012 docs: android sdk (#5889) 2025-05-29 02:47:26 +00:00
Matti Nannt
c95272288e fix: caching issue in newest next version (#5902) 2025-05-28 21:44:39 +02:00
Piyush Gupta
919febd166 fix: resend verification email translation (#5881) 2025-05-28 09:51:55 +00:00
Dhruwang Jariwala
10ccc20b53 fix: recall not working for NPS question (#5895) 2025-05-28 09:44:55 +00:00
Dhruwang Jariwala
d9ca64da54 fix: favicon warning (#5874)
Co-authored-by: Piyush Gupta <piyushguptaa2z123@gmail.com>
2025-05-28 08:09:51 +00:00
Anshuman Pandey
ce00ec97d1 fix: js-core trackAction bugs (#5843)
Co-authored-by: Piyush Gupta <piyushguptaa2z123@gmail.com>
2025-05-27 17:14:21 +00:00
Matti Nannt
2b9cd37c6c chore: enable rate limiting by default in helm chart (#5879) 2025-05-27 14:36:39 +02:00
Piyush Gupta
f8f14eb6f3 fix: weak cipher suite usage (#5873) 2025-05-27 12:09:16 +00:00
Matti Nannt
645fc863aa fix: performance issues on survey summary (#5885)
Co-authored-by: Piyush Gupta <piyushguptaa2z123@gmail.com>
2025-05-27 12:07:31 +00:00
Anshuman Pandey
c53f030b24 fix: multiple close function calls because of timeouts (#5886) 2025-05-27 07:20:35 +00:00
devin-ai-integration[bot]
45d74f9ba0 fix: Update JS SDK log messages for clarity (#5819)
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Matti Nannt <mail@matti.sh>
2025-05-26 09:57:37 +00:00
113 changed files with 3521 additions and 1003 deletions

View File

@@ -0,0 +1,61 @@
---
description:
globs:
alwaysApply: false
---
# Build & Deployment Best Practices
## Build Process
### Running Builds
- Use `pnpm build` from project root for full build
- Monitor for React hooks warnings and fix them immediately
- Ensure all TypeScript errors are resolved before deployment
### Common Build Issues & Fixes
#### React Hooks Warnings
- Capture ref values in variables within useEffect cleanup
- Avoid accessing `.current` directly in cleanup functions
- Pattern for fixing ref cleanup warnings:
```typescript
useEffect(() => {
  const currentRef = myRef.current;
  return () => {
    if (currentRef) {
      currentRef.cleanup();
    }
  };
}, []);
```
#### Test Failures During Build
- Ensure all test mocks include required constants like `SESSION_MAX_AGE`
- Mock Next.js navigation hooks properly: `useParams`, `useRouter`, `useSearchParams`
- Remove unused imports and constants from test files
- Use literal values instead of imported constants when the constant isn't actually needed
### Test Execution
- Run `pnpm test` to execute all tests
- Use `pnpm test -- --run filename.test.tsx` for specific test files
- Fix test failures before merging code
- Ensure 100% test coverage for new components
### Performance Monitoring
- Monitor build times and optimize if necessary
- Watch for memory usage during builds
- Use proper caching strategies for faster rebuilds
### Deployment Checklist
1. All tests passing
2. Build completes without warnings
3. TypeScript compilation successful
4. No linter errors
5. Database migrations applied (if any)
6. Environment variables configured
### EKS Deployment Considerations
- Ensure latest code is deployed to all pods
- Monitor AWS RDS Performance Insights for database issues
- Verify environment-specific configurations
- Check pod health and resource usage

View File

@@ -0,0 +1,41 @@
---
description:
globs:
alwaysApply: false
---
# Database Performance & Prisma Best Practices
## Critical Performance Rules
### Response Count Queries
- **NEVER** use `skip`/`offset` with `prisma.response.count()` - this causes expensive subqueries with OFFSET
- Always use only `where` clauses for count operations: `prisma.response.count({ where: { ... } })`
- For pagination, separate count queries from data queries
- Reference: [apps/web/lib/response/service.ts](mdc:apps/web/lib/response/service.ts) line 654-686
### Prisma Query Optimization
- Use proper indexes defined in [packages/database/schema.prisma](mdc:packages/database/schema.prisma)
- Leverage existing indexes: `@@index([surveyId, createdAt])`, `@@index([createdAt])`
- Use cursor-based pagination for large datasets instead of offset-based
- Cache frequently accessed data using React Cache and custom cache tags
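
The cursor-based approach above can be sketched as a small query builder. This is an illustrative shape only — `buildResponseQuery` is a hypothetical helper, not an actual Formbricks function; the cursor is the `id` of the last row from the previous page:

```typescript
// Illustrative sketch of a cursor-based pagination query object for Prisma.
interface ResponseQuery {
  take: number;
  skip?: number;
  cursor?: { id: string };
  orderBy: { createdAt: "desc" };
}

function buildResponseQuery(limit: number, cursorId?: string): ResponseQuery {
  return {
    take: limit,
    // Skip the cursor row itself so consecutive pages don't overlap
    ...(cursorId ? { skip: 1, cursor: { id: cursorId } } : {}),
    orderBy: { createdAt: "desc" },
  };
}
```

Each page request passes the last `id` from the previous page, so the database seeks directly to the cursor row instead of scanning and discarding OFFSET rows.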
### Date Range Filtering
- When filtering by `createdAt`, always use indexed queries
- Combine with `surveyId` for optimal performance: `{ surveyId, createdAt: { gte: start, lt: end } }`
- Avoid complex WHERE clauses that can't utilize indexes
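
A minimal sketch of such a filter builder (the helper name is hypothetical; the field names follow the composite `[surveyId, createdAt]` index):

```typescript
// Hypothetical helper: builds a where clause the composite
// [surveyId, createdAt] index can serve entirely.
interface ResponseWhere {
  surveyId: string;
  createdAt?: { gte: Date; lt: Date };
}

function buildIndexedWhere(surveyId: string, range?: { from: Date; to: Date }): ResponseWhere {
  const where: ResponseWhere = { surveyId };
  if (range) {
    // Half-open interval [from, to) avoids double-counting boundary rows
    where.createdAt = { gte: range.from, lt: range.to };
  }
  return where;
}
```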
### Count vs Data Separation
- Always separate count queries from data fetching queries
- Use `Promise.all()` to run count and data queries in parallel
- Example pattern from [apps/web/modules/api/v2/management/responses/lib/response.ts](mdc:apps/web/modules/api/v2/management/responses/lib/response.ts):
```typescript
const [responses, totalCount] = await Promise.all([
  prisma.response.findMany(query),
  prisma.response.count({ where: whereClause }),
]);
```
### Monitoring & Debugging
- Monitor AWS RDS Performance Insights for problematic queries
- Look for queries with OFFSET in count operations - these indicate performance issues
- Use proper error handling with `DatabaseError` for Prisma exceptions
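
As a sketch of that last point — the real `DatabaseError` lives in the shared Formbricks error types; the class and helper below are stand-ins to illustrate normalizing unknown Prisma exceptions into a typed error:

```typescript
// Stand-in for the project's DatabaseError (illustrative only).
class DatabaseError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "DatabaseError";
  }
}

// Normalize any thrown value into a DatabaseError before rethrowing.
function toDatabaseError(error: unknown): DatabaseError {
  if (error instanceof DatabaseError) return error;
  const message = error instanceof Error ? error.message : String(error);
  return new DatabaseError(message);
}
```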

View File

@@ -0,0 +1,334 @@
---
description:
globs:
alwaysApply: false
---
# Formbricks Architecture & Patterns
## Monorepo Structure
### Apps Directory
- `apps/web/` - Main Next.js web application
- `packages/` - Shared packages and utilities
### Key Directories in Web App
```
apps/web/
├── app/              # Next.js 13+ app directory
│   ├── (app)/        # Main application routes
│   ├── (auth)/       # Authentication routes
│   ├── api/          # API routes
│   └── share/        # Public sharing routes
├── components/       # Shared components
├── lib/              # Utility functions and services
└── modules/          # Feature-specific modules
```
## Routing Patterns
### App Router Structure
The application uses Next.js 13+ app router with route groups:
```
(app)/environments/[environmentId]/
├── surveys/[surveyId]/
│   ├── (analysis)/       # Analysis views
│   │   ├── responses/    # Response management
│   │   ├── summary/      # Survey summary
│   │   └── hooks/        # Analysis-specific hooks
│   ├── edit/             # Survey editing
│   └── settings/         # Survey settings
```
### Dynamic Routes
- `[environmentId]` - Environment-specific routes
- `[surveyId]` - Survey-specific routes
- `[sharingKey]` - Public sharing routes
## Service Layer Pattern
### Service Organization
Services are organized by domain in `apps/web/lib/`:
```typescript
// Example: Response service
// apps/web/lib/response/service.ts
export const getResponseCount = async ({
  surveyId,
  filterCriteria,
}: {
  surveyId: string;
  filterCriteria: TResponseFilterCriteria;
}) => {
  // Service implementation
};
```
### Action Pattern
Server actions follow a consistent pattern:
```typescript
// Action wrapper for service calls
export const getResponseCountAction = async (params: { surveyId: string }) => {
  try {
    const result = await responseService.getCount(params);
    return { data: result };
  } catch (error) {
    return { error: error instanceof Error ? error.message : "Unknown error" };
  }
};
```
## Context Patterns
### Provider Structure
Context providers follow a consistent pattern:
```typescript
// Provider component
export const ResponseFilterProvider = ({ children }: { children: React.ReactNode }) => {
  const [selectedFilter, setSelectedFilter] = useState(defaultFilter);
  // Memoize the value so consumers only re-render when the state changes
  const value = useMemo(
    () => ({
      selectedFilter,
      setSelectedFilter,
      // ... other state and methods
    }),
    [selectedFilter]
  );
  return (
    <ResponseFilterContext.Provider value={value}>
      {children}
    </ResponseFilterContext.Provider>
  );
};

// Hook for consuming context
export const useResponseFilter = () => {
  const context = useContext(ResponseFilterContext);
  if (!context) {
    throw new Error('useResponseFilter must be used within ResponseFilterProvider');
  }
  return context;
};
```
### Context Composition
Multiple contexts are often composed together:
```typescript
// Layout component with multiple providers
// Layout component with multiple providers
export default function AnalysisLayout({ children }: { children: React.ReactNode }) {
  return (
    <ResponseFilterProvider>
      <ResponseCountProvider>{children}</ResponseCountProvider>
    </ResponseFilterProvider>
  );
}
```
## Component Patterns
### Page Components
Page components are located in the app directory and follow this pattern:
```typescript
// apps/web/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/responses/page.tsx
export default function ResponsesPage() {
  return (
    <div>
      <ResponsesTable />
      <ResponsesPagination />
    </div>
  );
}
```
### Component Organization
- **Pages** - Route components in app directory
- **Components** - Reusable UI components
- **Modules** - Feature-specific components and logic
### Shared Components
Common components are in `apps/web/components/`:
- UI components (buttons, inputs, modals)
- Layout components (headers, sidebars)
- Data display components (tables, charts)
## Hook Patterns
### Custom Hook Structure
Custom hooks follow consistent patterns:
```typescript
export const useResponseCount = ({
  survey,
  initialCount,
}: {
  survey: TSurvey;
  initialCount?: number;
}) => {
  const [responseCount, setResponseCount] = useState(initialCount ?? 0);
  const [isLoading, setIsLoading] = useState(false);
  // Hook logic...
  return {
    responseCount,
    isLoading,
    refetch,
  };
};
```
### Hook Dependencies
- Use context hooks for shared state
- Implement proper cleanup with AbortController
- Optimize dependency arrays to prevent unnecessary re-renders
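
The AbortController point can be sketched outside React as a "latest request wins" wrapper (illustrative, not the actual hook implementation):

```typescript
// Sketch: each new call aborts the previous in-flight request, so only
// the latest response can ever update state.
function createLatestOnlyFetcher<T>(fetcher: (signal: AbortSignal) => Promise<T>) {
  let controller: AbortController | null = null;
  return (): Promise<T> => {
    controller?.abort(); // cancel the previous request, if any
    controller = new AbortController();
    return fetcher(controller.signal);
  };
}
```

Inside a hook, the same controller is aborted in the `useEffect` cleanup so unmounting cancels the in-flight request too.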
## Data Fetching Patterns
### Server Actions
The app uses Next.js server actions for data fetching:
```typescript
// Server action
export async function getResponsesAction(params: GetResponsesParams) {
  const responses = await getResponses(params);
  return { data: responses };
}

// Client usage
const { data } = await getResponsesAction(params);
```
### Error Handling
Consistent error handling across the application:
```typescript
try {
  const result = await apiCall();
  return { data: result };
} catch (error) {
  console.error("Operation failed:", error);
  return { error: error instanceof Error ? error.message : "Unknown error" };
}
```
## Type Safety
### Type Organization
Types are organized in packages:
- `@formbricks/types` - Shared type definitions
- Local types in component/hook files
### Common Types
```typescript
import { TSurvey } from "@formbricks/types/surveys/types";
import { TResponse } from "@formbricks/types/responses";
import { TEnvironment } from "@formbricks/types/environment";
```
## State Management
### Local State
- Use `useState` for component-specific state
- Use `useReducer` for complex state logic
- Use refs for mutable values that don't trigger re-renders
### Global State
- React Context for feature-specific shared state
- URL state for filters and pagination
- Server state through server actions
## Performance Considerations
### Code Splitting
- Dynamic imports for heavy components
- Route-based code splitting with app router
- Lazy loading for non-critical features
### Caching Strategy
- Server-side caching for database queries
- Client-side caching with React Query (where applicable)
- Static generation for public pages
## Testing Strategy
### Test Organization
```
component/
├── Component.tsx
├── Component.test.tsx
└── hooks/
    ├── useHook.ts
    └── useHook.test.tsx
```
### Test Patterns
- Unit tests for utilities and services
- Integration tests for components with context
- Hook tests with proper mocking
## Build & Deployment
### Build Process
- TypeScript compilation
- Next.js build optimization
- Asset optimization and bundling
### Environment Configuration
- Environment-specific configurations
- Feature flags for gradual rollouts
- Database connection management
## Security Patterns
### Authentication
- Session-based authentication
- Environment-based access control
- API route protection
### Data Validation
- Input validation on both client and server
- Type-safe API contracts
- Sanitization of user inputs
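
The validation points above can be sketched with a plain type guard. The input shape here is illustrative, not an actual Formbricks type — real code would validate against the shared `@formbricks/types` schemas:

```typescript
// Illustrative input shape and runtime guard.
interface ResponseInput {
  surveyId: string;
  finished: boolean;
}

function isResponseInput(value: unknown): value is ResponseInput {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.surveyId === "string" && typeof v.finished === "boolean";
}
```

Running the same guard on the server, regardless of what the client already checked, is what makes the API contract type-safe at runtime.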
## Monitoring & Observability
### Error Tracking
- Client-side error boundaries
- Server-side error logging
- Performance monitoring
### Analytics
- User interaction tracking
- Performance metrics
- Database query monitoring
## Best Practices Summary
### Code Organization
- ✅ Follow the established directory structure
- ✅ Use consistent naming conventions
- ✅ Separate concerns (UI, logic, data)
- ✅ Keep components focused and small
### Performance
- ✅ Implement proper loading states
- ✅ Use AbortController for async operations
- ✅ Optimize database queries
- ✅ Implement proper caching strategies
### Type Safety
- ✅ Use TypeScript throughout
- ✅ Define proper interfaces for props
- ✅ Use type guards for runtime validation
- ✅ Leverage shared type packages
### Testing
- ✅ Write tests for critical functionality
- ✅ Mock external dependencies properly
- ✅ Test error scenarios and edge cases
- ✅ Maintain good test coverage

View File

@@ -0,0 +1,429 @@
---
description: Infrastructure, Terraform, Kubernetes Cluster related
globs:
alwaysApply: false
---
# Formbricks Infrastructure Comprehensive Guide
## Infrastructure Overview
Formbricks uses a modern, cloud-native infrastructure built on AWS EKS with a focus on scalability, security, and operational excellence. The infrastructure follows Infrastructure as Code (IaC) principles using Terraform and GitOps patterns with Helm. The system has been specifically optimized to minimize ELB 502/504 errors through careful configuration of connection handling, health checks, and pod lifecycle management.
## Repository Structure & Organization
### Terraform File Organization
```
infra/terraform/
├── main.tf # Core infrastructure (VPC, EKS, Karpenter)
├── cloudwatch.tf # Monitoring, alerting, and CloudWatch alarms
├── rds.tf # Aurora PostgreSQL database configuration
├── elasticache.tf # Redis/Valkey caching layer
├── observability.tf # Loki, Grafana, and monitoring stack
├── iam.tf # GitHub OIDC, security roles
├── secrets.tf # AWS Secrets Manager integration
├── provider.tf # AWS, Kubernetes, Helm providers
├── versions.tf # Provider version constraints
└── data.tf # Data sources and external references
```
### Helm Configuration
- **Helmfile**: [infra/formbricks-cloud-helm/helmfile.yaml.gotmpl](mdc:infra/formbricks-cloud-helm/helmfile.yaml.gotmpl) - Multi-environment orchestration
- **Production**: [infra/formbricks-cloud-helm/values.yaml.gotmpl](mdc:infra/formbricks-cloud-helm/values.yaml.gotmpl) - Optimized ALB and pod configurations
- **Staging**: [infra/formbricks-cloud-helm/values-staging.yaml.gotmpl](mdc:infra/formbricks-cloud-helm/values-staging.yaml.gotmpl) - Staging with spot instances
### Key Infrastructure Files
- **Main Infrastructure**: [infra/terraform/main.tf](mdc:infra/terraform/main.tf) - EKS cluster, VPC, Karpenter, and core AWS resources
- **Monitoring**: [infra/terraform/cloudwatch.tf](mdc:infra/terraform/cloudwatch.tf) - CloudWatch alarms for 502/504 error tracking and alerting
- **Database**: [infra/terraform/rds.tf](mdc:infra/terraform/rds.tf) - Aurora PostgreSQL configuration
## Core Architecture Principles
### 1. Multi-Environment Strategy
```hcl
# Environment-aware resource creation
locals {
  envs = {
    prod  = "${local.project}-prod"
    stage = "${local.project}-stage"
  }
}

# Resource duplication pattern
resource "aws_secretsmanager_secret" "formbricks_app_secrets" {
  for_each = local.envs
  name     = "${each.key}/formbricks/secrets"
}
```
**Key Patterns:**
- **Environment isolation** through separate namespaces and resources
- **Consistent naming** conventions across environments
- **Resource sharing** where appropriate (VPC, EKS cluster)
- **Environment-specific** configurations and scaling parameters
### 2. Network Architecture
```hcl
# Strategic subnet allocation for different workload types
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)] # /20 - Application workloads
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)] # /24 - Load balancers
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)] # /24 - EKS control plane
database_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 56)] # /24 - RDS/ElastiCache
```
**Design Principles:**
- **Private EKS cluster** with no public endpoint access
- **Multi-AZ deployment** across 3 availability zones
- **VPC endpoints** for AWS services to reduce NAT costs
- **Single NAT Gateway** for cost optimization
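
To make the subnet math above concrete, here is an illustrative TypeScript port of Terraform's `cidrsubnet()` for IPv4, assuming a /16 VPC CIDR such as 10.0.0.0/16 (the real implementation lives in Terraform itself):

```typescript
// Illustrative IPv4-only port of Terraform's cidrsubnet(prefix, newbits, netnum).
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, o) => ((acc << 8) | parseInt(o, 10)) >>> 0, 0);
}

function intToIp(n: number): string {
  return [24, 16, 8, 0].map((s) => (n >>> s) & 0xff).join(".");
}

function cidrsubnet(prefix: string, newbits: number, netnum: number): string {
  const [base, lenStr] = prefix.split("/");
  const newLen = parseInt(lenStr, 10) + newbits;
  // Place the subnet number in the newly added bits of the prefix
  const network = (ipToInt(base) + (netnum << (32 - newLen))) >>> 0;
  return `${intToIp(network)}/${newLen}`;
}

// cidrsubnet("10.0.0.0/16", 4, 0) → "10.0.0.0/20"  (first private subnet)
// cidrsubnet("10.0.0.0/16", 8, 48) → "10.0.48.0/24" (first public subnet)
```

Adding 4 bits to a /16 yields the /20 application subnets; adding 8 bits yields the /24 subnets, with `netnum` offsets 48/52/56 keeping the public, intra, and database ranges disjoint.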
### 3. Security Model
```hcl
# IRSA (IAM Roles for Service Accounts) pattern
module "formbricks_app_iam_role" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  oidc_providers = {
    eks = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["formbricks:*"]
    }
  }
}
```
**Security Best Practices:**
- **GitHub OIDC** for CI/CD authentication (no long-lived credentials)
- **Pod Identity** for workload AWS access
- **AWS Secrets Manager** integration via External Secrets Operator
- **Least privilege** IAM policies for all roles
- **KMS encryption** for sensitive data at rest
## ALB Optimization & Error Reduction
### Connection Handling Optimizations
```yaml
# Key ALB annotations for reducing 502/504 errors
# (attribute lists are single comma-separated strings)
alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=120,connection_logs.s3.enabled=false,access_logs.s3.enabled=false
alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30,stickiness.enabled=false,load_balancing.algorithm.type=least_outstanding_requests,target_group_health.dns_failover.minimum_healthy_targets.count=1
```
### Health Check Configuration
- **Interval**: 15 seconds for faster detection of unhealthy targets
- **Timeout**: 5 seconds to prevent false positives
- **Thresholds**: 2 healthy, 3 unhealthy for balanced responsiveness
- **Path**: `/health` endpoint optimized for < 100ms response time
### Expected Improvements
- **60-80% reduction** in ELB 502 errors
- **Faster recovery** during pod restarts
- **Better connection reuse** efficiency
- **Improved autoscaling** responsiveness
## Kubernetes Platform Configuration
### 1. EKS Cluster Setup
```hcl
# Modern EKS configuration
cluster_version                          = "1.32"
enable_cluster_creator_admin_permissions = false
cluster_endpoint_public_access           = false

cluster_addons = {
  coredns                = { most_recent = true }
  eks-pod-identity-agent = { most_recent = true }
  aws-ebs-csi-driver     = { most_recent = true }
  kube-proxy             = { most_recent = true }
  vpc-cni                = { most_recent = true }
}
```
### 2. Karpenter Autoscaling & Node Management
```hcl
# Intelligent node provisioning
requirements = [
  {
    key      = "karpenter.k8s.aws/instance-family"
    operator = "In"
    values   = ["c8g", "c7g", "m8g", "m7g", "r8g", "r7g"] # ARM64 Graviton
  },
  {
    key      = "karpenter.k8s.aws/instance-cpu"
    operator = "In"
    values   = ["2", "4", "8"] # Cost-optimized sizes
  }
]
```
**Node Lifecycle Optimization:**
- **Startup Taints**: Prevent traffic during node initialization
- **Graceful Shutdown**: 30s grace period for pod eviction
- **Consolidation Delay**: 60s to reduce unnecessary churn
- **Eviction Policies**: Configured for smooth pod migrations
**Instance Selection:**
- **Families**: c8g, c7g, m8g, m7g, r8g, r7g (ARM64 Graviton)
- **Sizes**: 2, 4, 8 vCPUs for cost optimization
- **Bottlerocket AMI**: Enhanced security and performance
## Pod Lifecycle Management
### Graceful Shutdown Pattern
```yaml
# PreStop hook to allow connection draining
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 15"]

# Termination grace period for complete cleanup
terminationGracePeriodSeconds: 45
```
### Health Probe Strategy
- **Startup Probe**: 5s initial delay, 5s interval, max 60s startup time
- **Readiness Probe**: 10s delay, 10s interval for traffic readiness
- **Liveness Probe**: 30s delay, 30s interval for container health
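
Expressed as a probe block for the deployment spec, those timings look like this — a hypothetical fragment, with the `/health` path and port 3000 assumed rather than taken from the actual chart:

```yaml
# Illustrative probes matching the timings above
startupProbe:
  httpGet: { path: /health, port: 3000 }
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 12 # 12 x 5s = 60s max startup time
readinessProbe:
  httpGet: { path: /health, port: 3000 }
  initialDelaySeconds: 10
  periodSeconds: 10
livenessProbe:
  httpGet: { path: /health, port: 3000 }
  initialDelaySeconds: 30
  periodSeconds: 30
```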
### Rolling Update Configuration
```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25% # Maintain capacity during updates
    maxSurge: 50% # Allow faster rollouts
```
## Application Deployment Patterns
### 1. External Helm Chart Pattern
```yaml
# Helmfile configuration for external charts
repositories:
  - name: helm-charts
    url: ghcr.io/formbricks/helm-charts
    oci: true

releases:
  - name: formbricks
    chart: helm-charts/formbricks
    version: ^3.0.0
    values: [values.yaml.gotmpl]
```
**Advantages:**
- **Separation of concerns** (infrastructure vs application)
- **Version control** of application deployment
- **Reusable charts** across environments
- **OCI registry** for secure chart distribution
### 2. Configuration Management
```yaml
# External Secrets pattern
externalSecret:
  enabled: true
  files:
    app-env:
      dataFrom:
        key: prod/formbricks/environment
  secretStore:
    kind: ClusterSecretStore
    name: aws-secrets-manager
```
### 3. Environment-Specific Configurations
- **Production**: On-demand instances, stricter resource limits
- **Staging**: Spot instances, rate limiting disabled, relaxed resources
## Monitoring & Observability Stack
### 1. Critical ALB Metrics & CloudWatch Alarms
```hcl
# Comprehensive ALB monitoring
alarms = {
  ALB_HTTPCode_ELB_502_Count = {
    alarm_description  = "ALB 502 errors indicating backend connection issues"
    threshold          = 20
    evaluation_periods = 3
    period             = 300
  }
  ALB_HTTPCode_ELB_504_Count = {
    alarm_description  = "ALB 504 timeout errors"
    threshold          = 15
    evaluation_periods = 3
    period             = 300
  }
}
```
**Monitoring Thresholds:**
1. **ELB 502 Errors**: Threshold 20 over 5 minutes
2. **ELB 504 Errors**: Threshold 15 over 5 minutes
3. **Target Connection Errors**: Threshold 50 over 5 minutes
4. **4XX Errors**: Threshold 100 over 10 minutes (client issues)
### 2. Log Aggregation & Analytics
```hcl
# Loki for centralized logging
module "loki_s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  # S3 backend for long-term log storage
}

module "observability_loki_iam_role" {
  # IRSA role for Loki to access S3
}
```
### 3. Grafana Dashboards
```hcl
# Grafana with AWS CloudWatch integration
policy = jsonencode({
  Statement = [
    {
      Sid    = "AllowReadingMetricsFromCloudWatch"
      Effect = "Allow"
      Action = [
        "cloudwatch:DescribeAlarms",
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricData"
      ]
    }
  ]
})
```
## Cost Optimization Strategies
### 1. Instance & Compute Optimization
- **ARM64 Graviton** processors (20% better price-performance)
- **Spot instances** for staging environments
- **Right-sizing** through Karpenter optimization
- **Reserved capacity** for predictable production workloads
### 2. Network & Storage Optimization
- **Single NAT Gateway** (vs. one per AZ)
- **VPC endpoints** to reduce NAT traffic
- **ELB cost optimization** through connection reuse
- **GP3 storage** for better IOPS/cost ratio
- **Lifecycle policies** for log retention
## Deployment Workflow & Best Practices
### 1. Infrastructure Updates
```bash
# Using the deployment script
./infra/deploy-improvements.sh
# Manual process:
cd infra/terraform
terraform plan -out=changes.tfplan
terraform apply changes.tfplan
```
### 2. Application Updates
```bash
# Helmfile deployment
cd infra/formbricks-cloud-helm
helmfile sync
# Environment-specific deployment
helmfile -e production sync
helmfile -e staging sync
```
### 3. Verification Steps
1. **Infrastructure health**: Check EKS cluster status
2. **Application readiness**: Verify pod status and health checks
3. **Network connectivity**: Test ALB target group health
4. **Monitoring**: Confirm CloudWatch metrics and alerts
### 4. Change Management Best Practices
**Testing Strategy:**
- **Staging first**: Test all changes in staging environment with same configurations
- **Gradual rollout**: Use blue-green or canary deployments
- **Monitoring window**: Observe metrics for 24-48 hours after changes
- **Rollback plan**: Always have a documented rollback strategy
**Performance Optimization:**
- **Health endpoint** should respond < 100ms consistently
- **Connection pooling** aligned with ALB idle timeouts
- **Resource requests/limits** tuned for consistent performance
- **Graceful shutdown** implemented in application code
- **Maintain ALB timeout alignment** across all layers
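
The "connection pooling aligned with ALB idle timeouts" point is the classic Node-behind-ALB 502 fix: the app must keep idle sockets open longer than the ALB does, so the ALB never forwards a request onto a socket the app just closed. A minimal sketch (port and exact margins illustrative):

```typescript
import { createServer } from "node:http";

// ALB idle timeout from the ingress annotation above (120s)
const ALB_IDLE_TIMEOUT_MS = 120_000;

const server = createServer((_req, res) => {
  res.end("ok");
});

// Keep idle connections open strictly longer than the ALB's idle timeout
server.keepAliveTimeout = ALB_IDLE_TIMEOUT_MS + 5_000;
// headersTimeout must exceed keepAliveTimeout or Node can still drop sockets early
server.headersTimeout = server.keepAliveTimeout + 1_000;
```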
**Security Considerations:**
- **Least privilege**: Review IAM permissions regularly
- **Secret rotation**: Implement regular credential rotation
- **Vulnerability scanning**: Keep base images updated
- **Network policies**: Implement pod-to-pod communication controls
## Troubleshooting Common Issues
### 1. ALB Error Investigation
**502 Error Analysis:**
1. Check pod readiness and health probe status
2. Verify ALB target group health
3. Review deregistration timing during deployments
4. Monitor connection pool utilization
**504 Error Analysis:**
1. Check application response times
2. Verify timeout configurations (ALB: 120s, App: aligned)
3. Review database query performance
4. Monitor resource utilization during traffic spikes
**Connection Error Patterns:**
1. Verify Karpenter node lifecycle timing
2. Check pod termination grace periods
3. Review ALB connection draining settings
4. Monitor cluster autoscaling events
### 2. Infrastructure Issues
**Pod Startup Issues:**
- Check **startup probes** and timing
- Verify **resource requests** vs. available capacity
- Review **image pull** policies and registry access
- Monitor **Karpenter** node provisioning logs
**Connectivity Problems:**
- Validate **security group** rules
- Check **DNS resolution** within cluster
- Verify **service mesh** configuration if applicable
- Review **network policies** for pod communication
**Performance Degradation:**
- Monitor **resource utilization** (CPU, memory, network)
- Check **database connection** pooling and query performance
- Review **cache hit ratios** for Redis/ElastiCache
- Analyze **ALB metrics** for traffic patterns
### 3. Monitoring Strategy
- **Real-time alerts** for error rate spikes
- **Trend analysis** for connection patterns
- **Capacity planning** based on LCU usage
- **4XX pattern analysis** for client behavior insights
## Critical Considerations When Making Infrastructure Changes
1. **Always test in staging first** with identical configurations
2. **Monitor ALB metrics** for 24-48 hours after changes
3. **Use gradual rollouts** with proper health checks and canary deployments
4. **Maintain timeout alignment** across ALB, application, and database layers
5. **Verify security configurations** don't introduce vulnerabilities
6. **Check cost impact** of infrastructure changes
7. **Update monitoring and alerting** to cover new components
8. **Document changes** and update runbooks accordingly
This comprehensive infrastructure provides a robust, scalable, and cost-effective platform for running Formbricks at scale while maintaining high availability, security standards, and minimal error rates.

View File

@@ -0,0 +1,5 @@
---
description:
globs:
alwaysApply: false
---

View File

@@ -0,0 +1,52 @@
---
description:
globs:
alwaysApply: false
---
# React Context & Provider Patterns
## Context Provider Best Practices
### Provider Implementation
- Use TypeScript interfaces for provider props with optional `initialCount` for testing
- Implement proper cleanup in `useEffect` to avoid React hooks warnings
- Reference: [apps/web/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/ResponseCountProvider.tsx](mdc:apps/web/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/ResponseCountProvider.tsx)
### Cleanup Pattern for Refs
```typescript
useEffect(() => {
  const currentPendingRequests = pendingRequests.current;
  const currentAbortController = abortController.current;
  return () => {
    if (currentAbortController) {
      currentAbortController.abort();
    }
    currentPendingRequests.clear();
  };
}, []);
```
### Testing Context Providers
- Always wrap components using context in the provider during tests
- Use `initialCount` prop for predictable test scenarios
- Mock context dependencies like `useParams`, `useResponseFilter`
- Example from [apps/web/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SurveyAnalysisCTA.test.tsx](mdc:apps/web/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SurveyAnalysisCTA.test.tsx):
```typescript
render(
  <ResponseCountProvider survey={dummySurvey} initialCount={5}>
    <ComponentUnderTest />
  </ResponseCountProvider>
);
```
### Required Mocks for Context Testing
- Mock `next/navigation` with `useParams` returning environment and survey IDs
- Mock response filter context and actions
- Mock API actions that the provider depends on
### Context Hook Usage
- Create custom hooks like `useResponseCountContext()` for consuming context
- Provide meaningful error messages when context is used outside provider
- Use context for shared state that multiple components need to access

View File

@@ -0,0 +1,5 @@
---
description:
globs:
alwaysApply: false
---

View File

@@ -0,0 +1,282 @@
---
description:
globs:
alwaysApply: false
---
# Testing Patterns & Best Practices
## Test File Naming & Environment
### File Extensions
- Use `.test.tsx` for React component/hook tests (runs in jsdom environment)
- Use `.test.ts` for utility/service tests (runs in Node environment)
- The vitest config uses `environmentMatchGlobs` to automatically set jsdom for `.tsx` files
### Test Structure
```typescript
// Import the mocked functions first
import { useHook } from "@/path/to/hook";
import { serviceFunction } from "@/path/to/service";
import { renderHook, waitFor } from "@testing-library/react";
import { beforeEach, describe, expect, test, vi } from "vitest";

// Mock dependencies
vi.mock("@/path/to/hook", () => ({
  useHook: vi.fn(),
}));

describe("ComponentName", () => {
  beforeEach(() => {
    vi.clearAllMocks();
    // Setup default mocks
  });

  test("descriptive test name", async () => {
    // Test implementation
  });
});
```
## React Hook Testing
### Context Mocking
When testing hooks that use React Context:
```typescript
vi.mocked(useResponseFilter).mockReturnValue({
  selectedFilter: {
    filter: [],
    onlyComplete: false,
  },
  setSelectedFilter: vi.fn(),
  selectedOptions: {
    questionOptions: [],
    questionFilterOptions: [],
  },
  setSelectedOptions: vi.fn(),
  dateRange: { from: new Date(), to: new Date() },
  setDateRange: vi.fn(),
  resetState: vi.fn(),
});
```
### Testing Async Hooks
- Always use `waitFor` for async operations
- Test both loading and completed states
- Verify API calls with correct parameters
```typescript
test("fetches data on mount", async () => {
  const { result } = renderHook(() => useHook());

  expect(result.current.isLoading).toBe(true);

  await waitFor(() => {
    expect(result.current.isLoading).toBe(false);
  });

  expect(result.current.data).toBe(expectedData);
  expect(vi.mocked(apiCall)).toHaveBeenCalledWith(expectedParams);
});
```
### Testing Hook Dependencies
To test useEffect dependencies, ensure mocks return different values:
```typescript
// First render
mockGetFormattedFilters.mockReturnValue(mockFilters);
// Change dependency and trigger re-render
const newMockFilters = { ...mockFilters, finished: true };
mockGetFormattedFilters.mockReturnValue(newMockFilters);
rerender();
```
## Performance Testing
### Race Condition Testing
Test AbortController implementation:
```typescript
test("cancels previous request when new request is made", async () => {
  let resolveFirst: (value: any) => void;
  let resolveSecond: (value: any) => void;
  const firstPromise = new Promise((resolve) => {
    resolveFirst = resolve;
  });
  const secondPromise = new Promise((resolve) => {
    resolveSecond = resolve;
  });
  vi.mocked(apiCall)
    .mockReturnValueOnce(firstPromise as any)
    .mockReturnValueOnce(secondPromise as any);

  const { result } = renderHook(() => useHook());

  // Trigger second request
  result.current.refetch();

  // Resolve in order - first should be cancelled
  resolveFirst!({ data: 100 });
  resolveSecond!({ data: 200 });

  await waitFor(() => {
    expect(result.current.isLoading).toBe(false);
  });

  // Should have result from second request
  expect(result.current.data).toBe(200);
});
```
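The hook side of this pattern can be sketched as follows. `makeFetcher` is a hypothetical stand-in for the fetch logic inside `useHook` (not code from the codebase): each new request aborts the previous one, and results from aborted requests never update state.

```typescript
// Hypothetical sketch of the fetch logic the race-condition test exercises:
// a new request aborts the previous one, and stale results are dropped.
const makeFetcher = () => {
  let controller: AbortController | null = null;
  let data: number | null = null;

  const fetchCount = async (load: (signal: AbortSignal) => Promise<number>) => {
    controller?.abort(); // cancel any in-flight request
    controller = new AbortController();
    const { signal } = controller; // captured per call
    const result = await load(signal);
    if (!signal.aborted) data = result; // ignore results from aborted requests
  };

  return { fetchCount, getData: () => data };
};
```

In the test above, resolving the first promise after the second request has started corresponds to `load` returning after its signal was aborted, so only the second result survives.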
### Cleanup Testing
```typescript
test("cleans up on unmount", () => {
const abortSpy = vi.spyOn(AbortController.prototype, "abort");
const { unmount } = renderHook(() => useHook());
unmount();
expect(abortSpy).toHaveBeenCalled();
abortSpy.mockRestore();
});
```
## Error Handling Testing
### API Error Testing
```typescript
test("handles API errors gracefully", async () => {
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
vi.mocked(apiCall).mockRejectedValue(new Error("API Error"));
const { result } = renderHook(() => useHook());
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
expect(consoleSpy).toHaveBeenCalledWith("Error message:", expect.any(Error));
expect(result.current.data).toBe(fallbackValue);
consoleSpy.mockRestore();
});
```
### Cancelled Request Testing
```typescript
test("does not update state for cancelled requests", async () => {
const consoleSpy = vi.spyOn(console, "error").mockImplementation(() => {});
let rejectFirst: (error: any) => void;
const firstPromise = new Promise((_, reject) => {
rejectFirst = reject;
});
vi.mocked(apiCall)
.mockReturnValueOnce(firstPromise as any)
.mockResolvedValueOnce({ data: 42 });
const { result } = renderHook(() => useHook());
result.current.refetch();
const abortError = new Error("Request cancelled");
rejectFirst!(abortError);
await waitFor(() => {
expect(result.current.isLoading).toBe(false);
});
// Should not log error for cancelled request
expect(consoleSpy).not.toHaveBeenCalled();
consoleSpy.mockRestore();
});
```
## Type Safety in Tests
### Mock Type Assertions
Use type assertions for edge cases:
```typescript
vi.mocked(apiCall).mockResolvedValue({
data: null as any, // For testing null handling
});
vi.mocked(apiCall).mockResolvedValue({
data: undefined as any, // For testing undefined handling
});
```
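These `as any` escapes typically exercise a nullish-coalescing fallback in the code under test. A minimal sketch, assuming a hypothetical `normalizeCount` helper (not from the codebase):

```typescript
// Hypothetical helper showing the branch a `data: null as any` mock exercises:
// the consumer falls back to a default when the API returns no usable data.
const normalizeCount = (response: { data?: number | null }): number => response.data ?? 0;

console.log(normalizeCount({ data: 42 })); // 42
console.log(normalizeCount({ data: null })); // 0
console.log(normalizeCount({})); // 0
```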
### Proper Mock Typing
Ensure mocks match the actual interface:
```typescript
const mockSurvey: TSurvey = {
id: "survey-123",
name: "Test Survey",
// ... other required properties
} as unknown as TSurvey; // Use when partial mocking is needed
```
## Common Test Patterns
### Testing State Changes
```typescript
test("updates state correctly", async () => {
const { result } = renderHook(() => useHook());
// Initial state
expect(result.current.value).toBe(initialValue);
// Trigger change
result.current.updateValue(newValue);
// Verify change
expect(result.current.value).toBe(newValue);
});
```
### Testing Multiple Scenarios
```typescript
test("handles different modes", async () => {
// Test regular mode
vi.mocked(useParams).mockReturnValue({ surveyId: "123" });
const { rerender } = renderHook(() => useHook());
await waitFor(() => {
expect(vi.mocked(regularApi)).toHaveBeenCalled();
});
// Test sharing mode
vi.mocked(useParams).mockReturnValue({
surveyId: "123",
sharingKey: "share-123"
});
rerender();
await waitFor(() => {
expect(vi.mocked(sharingApi)).toHaveBeenCalled();
});
});
```
## Test Organization
### Comprehensive Test Coverage
For hooks, ensure you test:
- ✅ Initialization (with/without initial values)
- ✅ Data fetching (success/error cases)
- ✅ State updates and refetching
- ✅ Dependency changes triggering effects
- ✅ Manual actions (refetch, reset)
- ✅ Race condition prevention
- ✅ Cleanup on unmount
- ✅ Mode switching (if applicable)
- ✅ Edge cases (null/undefined data)
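As a sketch, the checklist above can be laid out as a single describe block. The hook name and test bodies are placeholders, and `describe`/`test` below are minimal stand-ins so the snippet is self-contained; in a real suite they come from vitest.

```typescript
// Minimal stand-ins so this skeleton runs on its own;
// in a real test file, import describe/test from vitest instead.
const ran: string[] = [];
const describe = (_name: string, fn: () => void) => fn();
const test = (name: string, _fn: () => void) => {
  ran.push(name);
};

describe("useResponseCount (hypothetical hook)", () => {
  test("initializes with and without an initial count", () => {});
  test("fetches data on mount and handles API errors", () => {});
  test("refetches when filter dependencies change", () => {});
  test("supports manual refetch and reset", () => {});
  test("cancels the previous request when a new one starts", () => {});
  test("aborts the in-flight request on unmount", () => {});
  test("switches endpoints when a sharingKey is present", () => {});
  test("handles null and undefined response data", () => {});
});

console.log(ran.length); // 8
```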
### Test Naming
Use descriptive test names that explain the scenario:
- ✅ "initializes with initial count"
- ✅ "fetches response count on mount for regular survey"
- ✅ "cancels previous request when new request is made"
- ❌ "test hook"
- ❌ "it works"


@@ -7,7 +7,8 @@ import { Button } from "@/modules/ui/components/button";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuRadioGroup,
DropdownMenuRadioItem,
DropdownMenuTrigger,
} from "@/modules/ui/components/dropdown-menu";
import { FormControl, FormError, FormField, FormItem, FormLabel } from "@/modules/ui/components/form";
@@ -175,20 +176,24 @@ export const EditProfileDetailsForm = ({
variant="ghost"
className="h-10 w-full border border-slate-300 px-3 text-left">
<div className="flex w-full items-center justify-between">
{appLanguages.find((l) => l.code === field.value)?.label[field.value] ?? "NA"}
{appLanguages.find((l) => l.code === field.value)?.label["en-US"] ?? "NA"}
<ChevronDownIcon className="h-4 w-4 text-slate-500" />
</div>
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent className="w-40 bg-slate-50 text-slate-700" align="start">
{appLanguages.map((lang) => (
<DropdownMenuItem
key={lang.code}
onClick={() => field.onChange(lang.code)}
className="min-h-8 cursor-pointer">
{lang.label[field.value]}
</DropdownMenuItem>
))}
<DropdownMenuContent
className="min-w-[var(--radix-dropdown-menu-trigger-width)] bg-slate-50 text-slate-700"
align="start">
<DropdownMenuRadioGroup value={field.value} onValueChange={field.onChange}>
{appLanguages.map((lang) => (
<DropdownMenuRadioItem
key={lang.code}
value={lang.code}
className="min-h-8 cursor-pointer">
{lang.label["en-US"]}
</DropdownMenuRadioItem>
))}
</DropdownMenuRadioGroup>
</DropdownMenuContent>
</DropdownMenu>
</FormControl>


@@ -75,7 +75,6 @@ export const getSurveySummaryAction = authenticatedActionClient
},
],
});
return getSurveySummary(parsedInput.surveyId, parsedInput.filterCriteria);
});


@@ -5,7 +5,6 @@ import {
} from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { SurveyAnalysisNavigation } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/SurveyAnalysisNavigation";
import { getFormattedFilters } from "@/app/lib/surveys/surveys";
import { useIntervalWhenFocused } from "@/lib/utils/hooks/useIntervalWhenFocused";
import { SecondaryNavigation } from "@/modules/ui/components/secondary-navigation";
import { act, cleanup, render, waitFor } from "@testing-library/react";
import { useParams, usePathname, useSearchParams } from "next/navigation";
@@ -52,7 +51,6 @@ vi.mock("@/app/(app)/environments/[environmentId]/components/ResponseFilterConte
vi.mock("@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions");
vi.mock("@/app/lib/surveys/surveys");
vi.mock("@/app/share/[sharingKey]/actions");
vi.mock("@/lib/utils/hooks/useIntervalWhenFocused");
vi.mock("@/modules/ui/components/secondary-navigation", () => ({
SecondaryNavigation: vi.fn(() => <div data-testid="secondary-navigation" />),
}));
@@ -69,7 +67,6 @@ const mockUseResponseFilter = vi.mocked(useResponseFilter);
const mockGetResponseCountAction = vi.mocked(getResponseCountAction);
const mockRevalidateSurveyIdPath = vi.mocked(revalidateSurveyIdPath);
const mockGetFormattedFilters = vi.mocked(getFormattedFilters);
const mockUseIntervalWhenFocused = vi.mocked(useIntervalWhenFocused);
const MockSecondaryNavigation = vi.mocked(SecondaryNavigation);
const mockSurveyLanguages: TSurveyLanguage[] = [
@@ -120,7 +117,6 @@ const mockSurvey = {
const defaultProps = {
environmentId: "testEnvId",
survey: mockSurvey,
initialTotalResponseCount: 10,
activeId: "summary",
};
@@ -167,23 +163,20 @@ describe("SurveyAnalysisNavigation", () => {
);
});
test("passes correct runWhen flag to useIntervalWhenFocused based on share embed modal", () => {
test("renders navigation correctly for sharing page", () => {
mockUsePathname.mockReturnValue(
`/environments/${defaultProps.environmentId}/surveys/${mockSurvey.id}/summary`
);
mockUseParams.mockReturnValue({ environmentId: defaultProps.environmentId, surveyId: mockSurvey.id });
mockUseParams.mockReturnValue({ sharingKey: "test-sharing-key" });
mockUseResponseFilter.mockReturnValue({ selectedFilter: "all", dateRange: {} } as any);
mockGetFormattedFilters.mockReturnValue([] as any);
mockGetResponseCountAction.mockResolvedValue({ data: 5 });
mockUseSearchParams.mockReturnValue({ get: vi.fn().mockReturnValue("true") } as any);
render(<SurveyAnalysisNavigation {...defaultProps} />);
expect(mockUseIntervalWhenFocused).toHaveBeenCalledWith(expect.any(Function), 10000, false, false);
cleanup();
mockUseSearchParams.mockReturnValue({ get: vi.fn().mockReturnValue(null) } as any);
render(<SurveyAnalysisNavigation {...defaultProps} />);
expect(mockUseIntervalWhenFocused).toHaveBeenCalledWith(expect.any(Function), 10000, true, false);
expect(MockSecondaryNavigation).toHaveBeenCalled();
const lastCallArgs = MockSecondaryNavigation.mock.calls[MockSecondaryNavigation.mock.calls.length - 1][0];
expect(lastCallArgs.navigation[0].href).toContain("/share/test-sharing-key");
});
test("displays correct response count string in label for various scenarios", async () => {
@@ -196,8 +189,8 @@ describe("SurveyAnalysisNavigation", () => {
mockGetFormattedFilters.mockReturnValue([] as any);
// Scenario 1: total = 10, filtered = null (initial state)
render(<SurveyAnalysisNavigation {...defaultProps} initialTotalResponseCount={10} />);
expect(MockSecondaryNavigation.mock.calls[0][0].navigation[1].label).toBe("common.responses (10)");
render(<SurveyAnalysisNavigation {...defaultProps} />);
expect(MockSecondaryNavigation.mock.calls[0][0].navigation[1].label).toBe("common.responses");
cleanup();
vi.resetAllMocks(); // Reset mocks for next case
@@ -213,11 +206,11 @@ describe("SurveyAnalysisNavigation", () => {
if (args && "filterCriteria" in args) return { data: 15, error: null, success: true };
return { data: 15, error: null, success: true };
});
render(<SurveyAnalysisNavigation {...defaultProps} initialTotalResponseCount={15} />);
render(<SurveyAnalysisNavigation {...defaultProps} />);
await waitFor(() => {
const lastCallArgs =
MockSecondaryNavigation.mock.calls[MockSecondaryNavigation.mock.calls.length - 1][0];
expect(lastCallArgs.navigation[1].label).toBe("common.responses (15)");
expect(lastCallArgs.navigation[1].label).toBe("common.responses");
});
cleanup();
vi.resetAllMocks();
@@ -234,11 +227,11 @@ describe("SurveyAnalysisNavigation", () => {
if (args && "filterCriteria" in args) return { data: 15, error: null, success: true };
return { data: 10, error: null, success: true };
});
render(<SurveyAnalysisNavigation {...defaultProps} initialTotalResponseCount={10} />);
render(<SurveyAnalysisNavigation {...defaultProps} />);
await waitFor(() => {
const lastCallArgs =
MockSecondaryNavigation.mock.calls[MockSecondaryNavigation.mock.calls.length - 1][0];
expect(lastCallArgs.navigation[1].label).toBe("common.responses (15)");
expect(lastCallArgs.navigation[1].label).toBe("common.responses");
});
});
});


@@ -1,105 +1,30 @@
"use client";
import { useResponseFilter } from "@/app/(app)/environments/[environmentId]/components/ResponseFilterContext";
import {
getResponseCountAction,
revalidateSurveyIdPath,
} from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { getFormattedFilters } from "@/app/lib/surveys/surveys";
import { getResponseCountBySurveySharingKeyAction } from "@/app/share/[sharingKey]/actions";
import { useIntervalWhenFocused } from "@/lib/utils/hooks/useIntervalWhenFocused";
import { revalidateSurveyIdPath } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { SecondaryNavigation } from "@/modules/ui/components/secondary-navigation";
import { useTranslate } from "@tolgee/react";
import { InboxIcon, PresentationIcon } from "lucide-react";
import { useParams, usePathname, useSearchParams } from "next/navigation";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useParams, usePathname } from "next/navigation";
import { TSurvey } from "@formbricks/types/surveys/types";
interface SurveyAnalysisNavigationProps {
environmentId: string;
survey: TSurvey;
initialTotalResponseCount: number | null;
activeId: string;
}
export const SurveyAnalysisNavigation = ({
environmentId,
survey,
initialTotalResponseCount,
activeId,
}: SurveyAnalysisNavigationProps) => {
const pathname = usePathname();
const { t } = useTranslate();
const params = useParams();
const [filteredResponseCount, setFilteredResponseCount] = useState<number | null>(null);
const [totalResponseCount, setTotalResponseCount] = useState<number | null>(initialTotalResponseCount);
const sharingKey = params.sharingKey as string;
const isSharingPage = !!sharingKey;
const searchParams = useSearchParams();
const isShareEmbedModalOpen = searchParams.get("share") === "true";
const url = isSharingPage ? `/share/${sharingKey}` : `/environments/${environmentId}/surveys/${survey.id}`;
const { selectedFilter, dateRange } = useResponseFilter();
const filters = useMemo(
() => getFormattedFilters(survey, selectedFilter, dateRange),
[selectedFilter, dateRange, survey]
);
const latestFiltersRef = useRef(filters);
latestFiltersRef.current = filters;
const getResponseCount = () => {
if (isSharingPage) return getResponseCountBySurveySharingKeyAction({ sharingKey });
return getResponseCountAction({ surveyId: survey.id });
};
const fetchResponseCount = async () => {
const count = await getResponseCount();
const responseCount = count?.data ?? 0;
setTotalResponseCount(responseCount);
};
const getFilteredResponseCount = useCallback(() => {
if (isSharingPage)
return getResponseCountBySurveySharingKeyAction({
sharingKey,
filterCriteria: latestFiltersRef.current,
});
return getResponseCountAction({ surveyId: survey.id, filterCriteria: latestFiltersRef.current });
}, [isSharingPage, sharingKey, survey.id]);
const fetchFilteredResponseCount = useCallback(async () => {
const count = await getFilteredResponseCount();
const responseCount = count?.data ?? 0;
setFilteredResponseCount(responseCount);
}, [getFilteredResponseCount]);
useEffect(() => {
fetchFilteredResponseCount();
}, [filters, isSharingPage, sharingKey, survey.id, fetchFilteredResponseCount]);
useIntervalWhenFocused(
() => {
fetchResponseCount();
fetchFilteredResponseCount();
},
10000,
!isShareEmbedModalOpen,
false
);
const getResponseCountString = () => {
if (totalResponseCount === null) return "";
if (filteredResponseCount === null) return `(${totalResponseCount})`;
const totalCount = Math.max(totalResponseCount, filteredResponseCount);
if (totalCount === filteredResponseCount) return `(${totalCount})`;
return `(${filteredResponseCount} of ${totalCount})`;
};
const navigation = [
{
@@ -114,7 +39,7 @@ export const SurveyAnalysisNavigation = ({
},
{
id: "responses",
label: `${t("common.responses")} ${getResponseCountString()}`,
label: t("common.responses"),
icon: <InboxIcon className="h-5 w-5" />,
href: `${url}/responses?referer=true`,
current: pathname?.includes("/responses"),


@@ -162,7 +162,6 @@ describe("ResponsePage", () => {
expect(screen.getByTestId("results-share-button")).toBeInTheDocument();
expect(screen.getByTestId("response-data-view")).toBeInTheDocument();
});
expect(mockGetResponseCountAction).toHaveBeenCalled();
expect(mockGetResponsesAction).toHaveBeenCalled();
});
@@ -179,7 +178,6 @@ describe("ResponsePage", () => {
await waitFor(() => {
expect(screen.queryByTestId("results-share-button")).not.toBeInTheDocument();
});
expect(mockGetResponseCountBySurveySharingKeyAction).toHaveBeenCalled();
expect(mockGetResponsesBySurveySharingKeyAction).toHaveBeenCalled();
});
@@ -297,8 +295,7 @@ describe("ResponsePage", () => {
rerender(<ResponsePage {...defaultProps} />);
await waitFor(() => {
// Should fetch count and responses again due to filter change
expect(mockGetResponseCountAction).toHaveBeenCalledTimes(2);
// Should fetch responses again due to filter change
expect(mockGetResponsesAction).toHaveBeenCalledTimes(2);
// Check if it fetches with offset 0 (first page)
expect(mockGetResponsesAction).toHaveBeenLastCalledWith(


@@ -1,18 +1,12 @@
"use client";
import { useResponseFilter } from "@/app/(app)/environments/[environmentId]/components/ResponseFilterContext";
import {
getResponseCountAction,
getResponsesAction,
} from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { getResponsesAction } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { ResponseDataView } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/responses/components/ResponseDataView";
import { CustomFilter } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/components/CustomFilter";
import { ResultsShareButton } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/components/ResultsShareButton";
import { getFormattedFilters } from "@/app/lib/surveys/surveys";
import {
getResponseCountBySurveySharingKeyAction,
getResponsesBySurveySharingKeyAction,
} from "@/app/share/[sharingKey]/actions";
import { getResponsesBySurveySharingKeyAction } from "@/app/share/[sharingKey]/actions";
import { replaceHeadlineRecall } from "@/lib/utils/recall";
import { useParams, useSearchParams } from "next/navigation";
import { useCallback, useEffect, useMemo, useState } from "react";
@@ -49,7 +43,6 @@ export const ResponsePage = ({
const sharingKey = params.sharingKey as string;
const isSharingPage = !!sharingKey;
const [responseCount, setResponseCount] = useState<number | null>(null);
const [responses, setResponses] = useState<TResponse[]>([]);
const [page, setPage] = useState<number>(1);
const [hasMore, setHasMore] = useState<boolean>(true);
@@ -97,9 +90,6 @@ export const ResponsePage = ({
const deleteResponses = (responseIds: string[]) => {
setResponses(responses.filter((response) => !responseIds.includes(response.id)));
if (responseCount) {
setResponseCount(responseCount - responseIds.length);
}
};
const updateResponse = (responseId: string, updatedResponse: TResponse) => {
@@ -118,29 +108,6 @@ export const ResponsePage = ({
}
}, [searchParams, resetState]);
useEffect(() => {
const handleResponsesCount = async () => {
let responseCount = 0;
if (isSharingPage) {
const responseCountActionResponse = await getResponseCountBySurveySharingKeyAction({
sharingKey,
filterCriteria: filters,
});
responseCount = responseCountActionResponse?.data || 0;
} else {
const responseCountActionResponse = await getResponseCountAction({
surveyId,
filterCriteria: filters,
});
responseCount = responseCountActionResponse?.data || 0;
}
setResponseCount(responseCount);
};
handleResponsesCount();
}, [filters, isSharingPage, sharingKey, surveyId]);
useEffect(() => {
const fetchInitialResponses = async () => {
try {


@@ -1,3 +1,4 @@
import { ResponseFilterProvider } from "@/app/(app)/environments/[environmentId]/components/ResponseFilterContext";
import { SurveyAnalysisNavigation } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/SurveyAnalysisNavigation";
import { ResponsePage } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/responses/components/ResponsePage";
import Page from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/responses/page";
@@ -61,6 +62,7 @@ vi.mock("@/lib/constants", () => ({
SENTRY_DSN: "mock-sentry-dsn",
WEBAPP_URL: "http://localhost:3000",
RESPONSES_PER_PAGE: 10,
SESSION_MAX_AGE: 1000,
}));
vi.mock("@/lib/getSurveyUrl", () => ({
@@ -109,6 +111,14 @@ vi.mock("@/tolgee/server", () => ({
getTranslate: async () => (key: string) => key,
}));
vi.mock("next/navigation", () => ({
useParams: () => ({
environmentId: "test-env-id",
surveyId: "test-survey-id",
sharingKey: null,
}),
}));
const mockEnvironmentId = "test-env-id";
const mockSurveyId = "test-survey-id";
const mockUserId = "test-user-id";
@@ -180,7 +190,7 @@ describe("ResponsesPage", () => {
test("renders correctly with all data", async () => {
const props = { params: mockParams };
const jsx = await Page(props);
render(jsx);
render(<ResponseFilterProvider>{jsx}</ResponseFilterProvider>);
await screen.findByTestId("page-content-wrapper");
expect(screen.getByTestId("page-header")).toBeInTheDocument();
@@ -196,7 +206,6 @@ describe("ResponsesPage", () => {
isReadOnly: false,
user: mockUser,
surveyDomain: mockSurveyDomain,
responseCount: 10,
}),
undefined
);
@@ -206,7 +215,6 @@ describe("ResponsesPage", () => {
environmentId: mockEnvironmentId,
survey: mockSurvey,
activeId: "responses",
initialTotalResponseCount: 10,
}),
undefined
);


@@ -33,7 +33,8 @@ const Page = async (props) => {
const tags = await getTagsByEnvironmentId(params.environmentId);
const totalResponseCount = await getResponseCountBySurveyId(params.surveyId);
// Get response count for the CTA component
const responseCount = await getResponseCountBySurveyId(params.surveyId);
const locale = await findMatchingLocale();
const surveyDomain = getSurveyDomain();
@@ -49,15 +50,10 @@ const Page = async (props) => {
isReadOnly={isReadOnly}
user={user}
surveyDomain={surveyDomain}
responseCount={totalResponseCount}
responseCount={responseCount}
/>
}>
<SurveyAnalysisNavigation
environmentId={environment.id}
survey={survey}
activeId="responses"
initialTotalResponseCount={totalResponseCount}
/>
<SurveyAnalysisNavigation environmentId={environment.id} survey={survey} activeId="responses" />
</PageHeader>
<ResponsePage
environment={environment}


@@ -38,18 +38,10 @@ interface SummaryListProps {
responseCount: number | null;
environment: TEnvironment;
survey: TSurvey;
totalResponseCount: number;
locale: TUserLocale;
}
export const SummaryList = ({
summary,
environment,
responseCount,
survey,
totalResponseCount,
locale,
}: SummaryListProps) => {
export const SummaryList = ({ summary, environment, responseCount, survey, locale }: SummaryListProps) => {
const { setSelectedFilter, selectedFilter } = useResponseFilter();
const { t } = useTranslate();
const setFilter = (
@@ -115,11 +107,7 @@ export const SummaryList = ({
type="response"
environment={environment}
noWidgetRequired={survey.type === "link"}
emptyMessage={
totalResponseCount === 0
? undefined
: t("environments.surveys.summary.no_response_matches_filter")
}
emptyMessage={t("environments.surveys.summary.no_responses_found")}
/>
) : (
summary.map((questionSummary) => {


@@ -1,30 +1,23 @@
"use client";
import { useResponseFilter } from "@/app/(app)/environments/[environmentId]/components/ResponseFilterContext";
import {
getResponseCountAction,
getSurveySummaryAction,
} from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import { getSurveySummaryAction } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions";
import ScrollToTop from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/ScrollToTop";
import { SummaryDropOffs } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SummaryDropOffs";
import { CustomFilter } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/components/CustomFilter";
import { ResultsShareButton } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/components/ResultsShareButton";
import { getFormattedFilters } from "@/app/lib/surveys/surveys";
import {
getResponseCountBySurveySharingKeyAction,
getSummaryBySurveySharingKeyAction,
} from "@/app/share/[sharingKey]/actions";
import { useIntervalWhenFocused } from "@/lib/utils/hooks/useIntervalWhenFocused";
import { getSummaryBySurveySharingKeyAction } from "@/app/share/[sharingKey]/actions";
import { replaceHeadlineRecall } from "@/lib/utils/recall";
import { useParams, useSearchParams } from "next/navigation";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useEffect, useMemo, useState } from "react";
import { TEnvironment } from "@formbricks/types/environment";
import { TSurvey, TSurveySummary } from "@formbricks/types/surveys/types";
import { TUser, TUserLocale } from "@formbricks/types/user";
import { TUserLocale } from "@formbricks/types/user";
import { SummaryList } from "./SummaryList";
import { SummaryMetadata } from "./SummaryMetadata";
const initialSurveySummary: TSurveySummary = {
const defaultSurveySummary: TSurveySummary = {
meta: {
completedPercentage: 0,
completedResponses: 0,
@@ -44,11 +37,9 @@ interface SummaryPageProps {
survey: TSurvey;
surveyId: string;
webAppUrl: string;
user?: TUser;
totalResponseCount: number;
documentsPerPage?: number;
locale: TUserLocale;
isReadOnly: boolean;
initialSurveySummary?: TSurveySummary;
}
export const SummaryPage = ({
@@ -56,98 +47,69 @@ export const SummaryPage = ({
survey,
surveyId,
webAppUrl,
totalResponseCount,
locale,
isReadOnly,
initialSurveySummary,
}: SummaryPageProps) => {
const params = useParams();
const sharingKey = params.sharingKey as string;
const isSharingPage = !!sharingKey;
const searchParams = useSearchParams();
const isShareEmbedModalOpen = searchParams.get("share") === "true";
const [responseCount, setResponseCount] = useState<number | null>(null);
const [surveySummary, setSurveySummary] = useState<TSurveySummary>(initialSurveySummary);
const [surveySummary, setSurveySummary] = useState<TSurveySummary>(
initialSurveySummary || defaultSurveySummary
);
const [showDropOffs, setShowDropOffs] = useState<boolean>(false);
const [isLoading, setIsLoading] = useState(true);
const [isLoading, setIsLoading] = useState(!initialSurveySummary);
const { selectedFilter, dateRange, resetState } = useResponseFilter();
const filters = useMemo(
() => getFormattedFilters(survey, selectedFilter, dateRange),
[selectedFilter, dateRange, survey]
);
// Only fetch data when filters change or when there's no initial data
useEffect(() => {
// If we have initial data and no filters are applied, don't fetch
const hasNoFilters =
(!selectedFilter ||
Object.keys(selectedFilter).length === 0 ||
(selectedFilter.filter && selectedFilter.filter.length === 0)) &&
(!dateRange || (!dateRange.from && !dateRange.to));
// Use a ref to keep the latest state and props
const latestFiltersRef = useRef(filters);
latestFiltersRef.current = filters;
if (initialSurveySummary && hasNoFilters) {
setIsLoading(false);
return;
}
const getResponseCount = useCallback(() => {
if (isSharingPage)
return getResponseCountBySurveySharingKeyAction({
sharingKey,
filterCriteria: latestFiltersRef.current,
});
return getResponseCountAction({
surveyId,
filterCriteria: latestFiltersRef.current,
});
}, [isSharingPage, sharingKey, surveyId]);
const getSummary = useCallback(() => {
if (isSharingPage)
return getSummaryBySurveySharingKeyAction({
sharingKey,
filterCriteria: latestFiltersRef.current,
});
return getSurveySummaryAction({
surveyId,
filterCriteria: latestFiltersRef.current,
});
}, [isSharingPage, sharingKey, surveyId]);
const handleInitialData = useCallback(
async (isInitialLoad = false) => {
if (isInitialLoad) {
setIsLoading(true);
}
const fetchSummary = async () => {
setIsLoading(true);
try {
const [updatedResponseCountData, updatedSurveySummary] = await Promise.all([
getResponseCount(),
getSummary(),
]);
// Recalculate filters inside the effect to ensure we have the latest values
const currentFilters = getFormattedFilters(survey, selectedFilter, dateRange);
let updatedSurveySummary;
const responseCount = updatedResponseCountData?.data ?? 0;
const surveySummary = updatedSurveySummary?.data ?? initialSurveySummary;
if (isSharingPage) {
updatedSurveySummary = await getSummaryBySurveySharingKeyAction({
sharingKey,
filterCriteria: currentFilters,
});
} else {
updatedSurveySummary = await getSurveySummaryAction({
surveyId,
filterCriteria: currentFilters,
});
}
setResponseCount(responseCount);
const surveySummary = updatedSurveySummary?.data ?? defaultSurveySummary;
setSurveySummary(surveySummary);
} catch (error) {
console.error(error);
} finally {
if (isInitialLoad) {
setIsLoading(false);
}
setIsLoading(false);
}
},
[getResponseCount, getSummary]
);
};
useEffect(() => {
handleInitialData(true);
}, [filters, isSharingPage, sharingKey, surveyId, handleInitialData]);
useIntervalWhenFocused(
() => {
handleInitialData(false);
},
10000,
!isShareEmbedModalOpen,
false
);
fetchSummary();
}, [selectedFilter, dateRange, survey.id, isSharingPage, sharingKey, surveyId, initialSurveySummary]);
const surveyMemoized = useMemo(() => {
return replaceHeadlineRecall(survey, "default");
@@ -177,10 +139,9 @@ export const SummaryPage = ({
<ScrollToTop containerId="mainContent" />
<SummaryList
summary={surveySummary.summary}
responseCount={responseCount}
responseCount={surveySummary.meta.totalResponses}
survey={surveyMemoized}
environment={environment}
totalResponseCount={totalResponseCount}
locale={locale}
/>
</>


@@ -51,6 +51,7 @@ vi.mock("next/navigation", () => ({
useRouter: () => ({ push: mockPush }),
useSearchParams: () => mockSearchParams,
usePathname: () => "/current",
useParams: () => ({ environmentId: "env123", surveyId: "survey123" }),
}));
// Mock copySurveyLink to return a predictable string
@@ -69,6 +70,23 @@ vi.mock("@/lib/utils/helper", () => ({
getFormattedErrorMessage: vi.fn((response) => response?.error || "Unknown error"),
}));
// Mock ResponseCountProvider dependencies
vi.mock("@/app/(app)/environments/[environmentId]/components/ResponseFilterContext", () => ({
useResponseFilter: vi.fn(() => ({ selectedFilter: "all", dateRange: {} })),
}));
vi.mock("@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/actions", () => ({
getResponseCountAction: vi.fn(() => Promise.resolve({ data: 5 })),
}));
vi.mock("@/app/lib/surveys/surveys", () => ({
getFormattedFilters: vi.fn(() => []),
}));
vi.mock("@/app/share/[sharingKey]/actions", () => ({
getResponseCountBySurveySharingKeyAction: vi.fn(() => Promise.resolve({ data: 5 })),
}));
vi.spyOn(toast, "success");
vi.spyOn(toast, "error");


@@ -171,7 +171,7 @@ export const SurveyAnalysisCTA = ({
icon: SquarePenIcon,
tooltip: t("common.edit"),
onClick: () => {
responseCount && responseCount > 0
responseCount > 0
? setIsCautionDialogOpen(true)
: router.push(`/environments/${environment.id}/surveys/${survey.id}/edit`);
},


@@ -758,7 +758,6 @@ describe("getSurveySummary", () => {
expect(summary.dropOff).toBeDefined();
expect(summary.summary).toBeDefined();
expect(getSurvey).toHaveBeenCalledWith(mockSurveyId);
expect(getResponseCountBySurveyId).toHaveBeenCalledWith(mockSurveyId, undefined);
expect(prisma.response.findMany).toHaveBeenCalled(); // Check if getResponsesForSummary was effectively called
expect(getDisplayCountBySurveyId).toHaveBeenCalled();
});
@@ -770,7 +769,6 @@ describe("getSurveySummary", () => {
test("handles filterCriteria", async () => {
const filterCriteria: TResponseFilterCriteria = { finished: true };
vi.mocked(getResponseCountBySurveyId).mockResolvedValue(2); // Assume 2 finished responses
const finishedResponses = mockResponses
.filter((r) => r.finished)
.map((r) => ({ ...r, contactId: null, personAttributes: {} }));
@@ -778,7 +776,6 @@ describe("getSurveySummary", () => {
await getSurveySummary(mockSurveyId, filterCriteria);
expect(getResponseCountBySurveyId).toHaveBeenCalledWith(mockSurveyId, filterCriteria);
expect(prisma.response.findMany).toHaveBeenCalledWith(
expect.objectContaining({
where: expect.objectContaining({ surveyId: mockSurveyId }), // buildWhereClause is mocked


@@ -5,7 +5,6 @@ import { displayCache } from "@/lib/display/cache";
import { getDisplayCountBySurveyId } from "@/lib/display/service";
import { getLocalizedValue } from "@/lib/i18n/utils";
import { responseCache } from "@/lib/response/cache";
import { getResponseCountBySurveyId } from "@/lib/response/service";
import { buildWhereClause } from "@/lib/response/utils";
import { surveyCache } from "@/lib/survey/cache";
import { getSurvey } from "@/lib/survey/service";
@@ -13,6 +12,7 @@ import { evaluateLogic, performActions } from "@/lib/surveyLogic/utils";
import { validateInputs } from "@/lib/utils/validate";
import { Prisma } from "@prisma/client";
import { cache as reactCache } from "react";
+ import { z } from "zod";
import { prisma } from "@formbricks/database";
import { ZId, ZOptionalNumber } from "@formbricks/types/common";
import { DatabaseError, ResourceNotFoundError } from "@formbricks/types/errors";
@@ -917,22 +917,24 @@ export const getSurveySummary = reactCache(
}
const batchSize = 5000;
- const responseCount = await getResponseCountBySurveyId(surveyId, filterCriteria);
const hasFilter = Object.keys(filterCriteria ?? {}).length > 0;
- const pages = Math.ceil(responseCount / batchSize);
- // Create an array of batch fetch promises
- const batchPromises = Array.from({ length: pages }, (_, i) =>
-   getResponsesForSummary(surveyId, batchSize, i * batchSize, filterCriteria)
- );
- // Fetch all batches in parallel
- const batchResults = await Promise.all(batchPromises);
- // Combine all batch results
- const responses = batchResults.flat();
+ // Use cursor-based pagination instead of count + offset to avoid expensive queries
+ const responses: TSurveySummaryResponse[] = [];
+ let cursor: string | undefined = undefined;
+ let hasMore = true;
+ while (hasMore) {
+   const batch = await getResponsesForSummary(surveyId, batchSize, 0, filterCriteria, cursor);
+   responses.push(...batch);
+   if (batch.length < batchSize) {
+     hasMore = false;
+   } else {
+     // Use the last response's ID as cursor for next batch
+     cursor = batch[batch.length - 1].id;
+   }
+ }
const responseIds = hasFilter ? responses.map((response) => response.id) : [];
@@ -972,7 +974,8 @@ export const getResponsesForSummary = reactCache(
surveyId: string,
limit: number,
offset: number,
- filterCriteria?: TResponseFilterCriteria
+ filterCriteria?: TResponseFilterCriteria,
+ cursor?: string
): Promise<TSurveySummaryResponse[]> =>
cache(
async () => {
@@ -980,18 +983,28 @@ export const getResponsesForSummary = reactCache(
[surveyId, ZId],
[limit, ZOptionalNumber],
[offset, ZOptionalNumber],
- [filterCriteria, ZResponseFilterCriteria.optional()]
+ [filterCriteria, ZResponseFilterCriteria.optional()],
+ [cursor, z.string().cuid2().optional()]
);
const queryLimit = limit ?? RESPONSES_PER_PAGE;
const survey = await getSurvey(surveyId);
if (!survey) return [];
try {
+ const whereClause: Prisma.ResponseWhereInput = {
+   surveyId,
+   ...buildWhereClause(survey, filterCriteria),
+ };
+ // Add cursor condition for cursor-based pagination
+ if (cursor) {
+   whereClause.id = {
+     lt: cursor, // Get responses with ID less than cursor (for desc order)
+   };
+ }
const responses = await prisma.response.findMany({
- where: {
-   surveyId,
-   ...buildWhereClause(survey, filterCriteria),
- },
+ where: whereClause,
select: {
id: true,
data: true,
@@ -1013,6 +1026,9 @@ export const getResponsesForSummary = reactCache(
{
createdAt: "desc",
},
+ {
+   id: "desc", // Secondary sort by ID for consistent pagination
+ },
],
take: queryLimit,
skip: offset,
@@ -1043,7 +1059,9 @@ export const getResponsesForSummary = reactCache(
throw error;
}
},
- [`getResponsesForSummary-${surveyId}-${limit}-${offset}-${JSON.stringify(filterCriteria)}`],
+ [
+   `getResponsesForSummary-${surveyId}-${limit}-${offset}-${JSON.stringify(filterCriteria)}-${cursor || ""}`,
+ ],
{
tags: [responseCache.tag.bySurveyId(surveyId)],
}
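The while-loop above can be sketched in isolation. Below is a minimal model of the cursor pattern, with a hypothetical in-memory `fetchPage` standing in for `getResponsesForSummary` and plain sortable strings standing in for response ids (an illustration, not the actual service code):

```typescript
type Row = { id: string };

// Hypothetical data source: rows already sorted by id descending, the way
// Prisma returns them with `orderBy: [{ id: "desc" }]`.
const fetchPage = (all: Row[], limit: number, cursor?: string): Row[] => {
  const remaining = cursor ? all.filter((r) => r.id < cursor) : all;
  return remaining.slice(0, limit);
};

// Cursor-based pagination: each batch starts where the previous one ended,
// so there is no total-count query and no growing OFFSET scan.
const fetchAll = (all: Row[], batchSize: number): Row[] => {
  const out: Row[] = [];
  let cursor: string | undefined = undefined;
  let hasMore = true;
  while (hasMore) {
    const batch = fetchPage(all, batchSize, cursor);
    out.push(...batch);
    if (batch.length < batchSize) {
      hasMore = false; // last (possibly partial) page reached
    } else {
      cursor = batch[batch.length - 1].id; // last id seen becomes the next cursor
    }
  }
  return out;
};
```

Because each page is addressed by the last id seen rather than by an ever-growing offset, the database never has to skip over rows it has already returned.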


@@ -1,7 +1,9 @@
+ import { ResponseFilterProvider } from "@/app/(app)/environments/[environmentId]/components/ResponseFilterContext";
import { SurveyAnalysisNavigation } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/SurveyAnalysisNavigation";
import { SummaryPage } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SummaryPage";
+ import { getSurveySummary } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/lib/surveySummary";
import SurveyPage from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/page";
- import { DEFAULT_LOCALE, DOCUMENTS_PER_PAGE, WEBAPP_URL } from "@/lib/constants";
+ import { DEFAULT_LOCALE, WEBAPP_URL } from "@/lib/constants";
import { getSurveyDomain } from "@/lib/getSurveyUrl";
import { getResponseCountBySurveyId } from "@/lib/response/service";
import { getSurvey } from "@/lib/survey/service";
@@ -38,7 +40,7 @@ vi.mock("@/lib/constants", () => ({
SENTRY_DSN: "mock-sentry-dsn",
WEBAPP_URL: "http://localhost:3000",
RESPONSES_PER_PAGE: 10,
- DOCUMENTS_PER_PAGE: 10,
SESSION_MAX_AGE: 1000,
}));
vi.mock(
@@ -78,6 +80,13 @@ vi.mock("@/lib/user/service", () => ({
getUser: vi.fn(),
}));
+ vi.mock(
+   "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/lib/surveySummary",
+   () => ({
+     getSurveySummary: vi.fn(),
+   })
+ );
vi.mock("@/modules/environments/lib/utils", () => ({
getEnvironmentAuth: vi.fn(),
}));
@@ -100,6 +109,11 @@ vi.mock("@/tolgee/server", () => ({
vi.mock("next/navigation", () => ({
notFound: vi.fn(),
+ useParams: () => ({
+   environmentId: "test-environment-id",
+   surveyId: "test-survey-id",
+   sharingKey: null,
+ }),
}));
const mockEnvironmentId = "test-environment-id";
@@ -172,6 +186,21 @@ const mockSession = {
expires: new Date(Date.now() + 3600 * 1000).toISOString(), // 1 hour from now
} as any;
+ const mockSurveySummary = {
+   meta: {
+     completedPercentage: 75,
+     completedResponses: 15,
+     displayCount: 20,
+     dropOffPercentage: 25,
+     dropOffCount: 5,
+     startsPercentage: 80,
+     totalResponses: 20,
+     ttcAverage: 120,
+   },
+   dropOff: [],
+   summary: [],
+ };
describe("SurveyPage", () => {
beforeEach(() => {
vi.mocked(getEnvironmentAuth).mockResolvedValue({
@@ -183,6 +212,7 @@ describe("SurveyPage", () => {
vi.mocked(getUser).mockResolvedValue(mockUser);
vi.mocked(getResponseCountBySurveyId).mockResolvedValue(10);
vi.mocked(getSurveyDomain).mockReturnValue("test.domain.com");
+ vi.mocked(getSurveySummary).mockResolvedValue(mockSurveySummary);
vi.mocked(notFound).mockClear();
});
@@ -193,7 +223,8 @@ describe("SurveyPage", () => {
test("renders correctly with valid data", async () => {
const params = Promise.resolve({ environmentId: mockEnvironmentId, surveyId: mockSurveyId });
- render(await SurveyPage({ params }));
+ const jsx = await SurveyPage({ params });
+ render(<ResponseFilterProvider>{jsx}</ResponseFilterProvider>);
expect(screen.getByTestId("page-content-wrapper")).toBeInTheDocument();
expect(screen.getByTestId("page-header")).toBeInTheDocument();
@@ -204,7 +235,6 @@ describe("SurveyPage", () => {
expect(vi.mocked(getEnvironmentAuth)).toHaveBeenCalledWith(mockEnvironmentId);
expect(vi.mocked(getSurvey)).toHaveBeenCalledWith(mockSurveyId);
expect(vi.mocked(getUser)).toHaveBeenCalledWith(mockUserId);
- expect(vi.mocked(getResponseCountBySurveyId)).toHaveBeenCalledWith(mockSurveyId);
expect(vi.mocked(getSurveyDomain)).toHaveBeenCalled();
expect(vi.mocked(SurveyAnalysisNavigation).mock.calls[0][0]).toEqual(
@@ -212,7 +242,6 @@ describe("SurveyPage", () => {
environmentId: mockEnvironmentId,
survey: mockSurvey,
activeId: "summary",
- initialTotalResponseCount: 10,
})
);
@@ -222,18 +251,17 @@ describe("SurveyPage", () => {
survey: mockSurvey,
surveyId: mockSurveyId,
webAppUrl: WEBAPP_URL,
user: mockUser,
- totalResponseCount: 10,
- documentsPerPage: DOCUMENTS_PER_PAGE,
isReadOnly: false,
locale: mockUser.locale ?? DEFAULT_LOCALE,
+ initialSurveySummary: mockSurveySummary,
})
);
});
test("calls notFound if surveyId is not present in params", async () => {
const params = Promise.resolve({ environmentId: mockEnvironmentId, surveyId: undefined }) as any;
- render(await SurveyPage({ params }));
+ const jsx = await SurveyPage({ params });
+ render(<ResponseFilterProvider>{jsx}</ResponseFilterProvider>);
expect(vi.mocked(notFound)).toHaveBeenCalled();
});
@@ -243,7 +271,7 @@ describe("SurveyPage", () => {
try {
// We need to await the component itself because it's an async component
const SurveyPageComponent = await SurveyPage({ params });
- render(SurveyPageComponent);
+ render(<ResponseFilterProvider>{SurveyPageComponent}</ResponseFilterProvider>);
} catch (e: any) {
expect(e.message).toBe("common.survey_not_found");
}
@@ -256,7 +284,7 @@ describe("SurveyPage", () => {
const params = Promise.resolve({ environmentId: mockEnvironmentId, surveyId: mockSurveyId });
try {
const SurveyPageComponent = await SurveyPage({ params });
- render(SurveyPageComponent);
+ render(<ResponseFilterProvider>{SurveyPageComponent}</ResponseFilterProvider>);
} catch (e: any) {
expect(e.message).toBe("common.user_not_found");
}


@@ -1,9 +1,9 @@
import { SurveyAnalysisNavigation } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/SurveyAnalysisNavigation";
import { SummaryPage } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SummaryPage";
import { SurveyAnalysisCTA } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SurveyAnalysisCTA";
- import { DEFAULT_LOCALE, DOCUMENTS_PER_PAGE, WEBAPP_URL } from "@/lib/constants";
+ import { getSurveySummary } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/lib/surveySummary";
+ import { DEFAULT_LOCALE, WEBAPP_URL } from "@/lib/constants";
import { getSurveyDomain } from "@/lib/getSurveyUrl";
- import { getResponseCountBySurveyId } from "@/lib/response/service";
import { getSurvey } from "@/lib/survey/service";
import { getUser } from "@/lib/user/service";
import { getEnvironmentAuth } from "@/modules/environments/lib/utils";
@@ -37,10 +37,8 @@ const SurveyPage = async (props: { params: Promise<{ environmentId: string; surv
throw new Error(t("common.user_not_found"));
}
- const totalResponseCount = await getResponseCountBySurveyId(params.surveyId);
- // I took this out cause it's cloud only right?
- // const { active: isEnterpriseEdition } = await getEnterpriseLicense();
+ // Fetch initial survey summary data on the server to prevent duplicate API calls during hydration
+ const initialSurveySummary = await getSurveySummary(surveyId);
const surveyDomain = getSurveyDomain();
@@ -55,26 +53,19 @@ const SurveyPage = async (props: { params: Promise<{ environmentId: string; surv
isReadOnly={isReadOnly}
user={user}
surveyDomain={surveyDomain}
- responseCount={totalResponseCount}
+ responseCount={initialSurveySummary?.meta.totalResponses ?? 0}
/>
}>
- <SurveyAnalysisNavigation
-   environmentId={environment.id}
-   survey={survey}
-   activeId="summary"
-   initialTotalResponseCount={totalResponseCount}
- />
+ <SurveyAnalysisNavigation environmentId={environment.id} survey={survey} activeId="summary" />
</PageHeader>
<SummaryPage
environment={environment}
survey={survey}
surveyId={params.surveyId}
webAppUrl={WEBAPP_URL}
user={user}
- totalResponseCount={totalResponseCount}
- documentsPerPage={DOCUMENTS_PER_PAGE}
isReadOnly={isReadOnly}
locale={user.locale ?? DEFAULT_LOCALE}
+ initialSurveySummary={initialSurveySummary}
/>
<SettingsId title={t("common.survey_id")} id={surveyId}></SettingsId>

Binary file not shown (image removed, 15 KiB).


@@ -59,7 +59,6 @@ describe("endpoint-validator", () => {
describe("isClientSideApiRoute", () => {
test("should return true for client-side API routes", () => {
expect(isClientSideApiRoute("/api/packages/something")).toBe(true);
- expect(isClientSideApiRoute("/api/v1/js/actions")).toBe(true);
expect(isClientSideApiRoute("/api/v1/client/storage")).toBe(true);
expect(isClientSideApiRoute("/api/v1/client/other")).toBe(true);


@@ -8,7 +8,6 @@ export const isVerifyEmailRoute = (url: string) => url === "/auth/verify-email";
export const isForgotPasswordRoute = (url: string) => url === "/auth/forgot-password";
export const isClientSideApiRoute = (url: string): boolean => {
if (url.includes("/api/packages/")) return true;
- if (url.includes("/api/v1/js/actions")) return true;
if (url.includes("/api/v1/client/storage")) return true;
const regex = /^\/api\/v\d+\/client\//;
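After this change the validator reduces to roughly the following, a condensed sketch of the logic shown in the diff with the `/api/v1/js/actions` branch removed:

```typescript
// Package endpoints and the client storage endpoint are allowed explicitly;
// everything else must match a versioned client API prefix such as
// /api/v1/client/... or /api/v2/client/...
const isClientSideApiRoute = (url: string): boolean => {
  if (url.includes("/api/packages/")) return true;
  if (url.includes("/api/v1/client/storage")) return true;
  return /^\/api\/v\d+\/client\//.test(url);
};
```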


@@ -3,7 +3,6 @@ import { ResponsePage } from "@/app/(app)/environments/[environmentId]/surveys/[
import { RESPONSES_PER_PAGE, WEBAPP_URL } from "@/lib/constants";
import { getEnvironment } from "@/lib/environment/service";
import { getProjectByEnvironmentId } from "@/lib/project/service";
- import { getResponseCountBySurveyId } from "@/lib/response/service";
import { getSurvey, getSurveyIdByResultShareKey } from "@/lib/survey/service";
import { getTagsByEnvironmentId } from "@/lib/tag/service";
import { findMatchingLocale } from "@/lib/utils/locale";
@@ -46,19 +45,13 @@ const Page = async (props: ResponsesPageProps) => {
throw new Error(t("common.project_not_found"));
}
- const totalResponseCount = await getResponseCountBySurveyId(surveyId);
const locale = await findMatchingLocale();
return (
<div className="flex w-full justify-center">
<PageContentWrapper className="w-full">
<PageHeader pageTitle={survey.name}>
- <SurveyAnalysisNavigation
-   survey={survey}
-   environmentId={environment.id}
-   activeId="responses"
-   initialTotalResponseCount={totalResponseCount}
- />
+ <SurveyAnalysisNavigation survey={survey} environmentId={environment.id} activeId="responses" />
</PageHeader>
<ResponsePage
environment={environment}


@@ -1,9 +1,9 @@
import { SurveyAnalysisNavigation } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/components/SurveyAnalysisNavigation";
import { SummaryPage } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/components/SummaryPage";
+ import { getSurveySummary } from "@/app/(app)/environments/[environmentId]/surveys/[surveyId]/(analysis)/summary/lib/surveySummary";
import { DEFAULT_LOCALE, WEBAPP_URL } from "@/lib/constants";
import { getEnvironment } from "@/lib/environment/service";
import { getProjectByEnvironmentId } from "@/lib/project/service";
- import { getResponseCountBySurveyId } from "@/lib/response/service";
import { getSurvey, getSurveyIdByResultShareKey } from "@/lib/survey/service";
import { PageContentWrapper } from "@/modules/ui/components/page-content-wrapper";
import { PageHeader } from "@/modules/ui/components/page-header";
@@ -47,27 +47,23 @@ const Page = async (props: SummaryPageProps) => {
throw new Error(t("common.project_not_found"));
}
- const totalResponseCount = await getResponseCountBySurveyId(surveyId);
+ // Fetch initial survey summary data on the server to prevent duplicate API calls during hydration
+ const initialSurveySummary = await getSurveySummary(surveyId);
return (
<div className="flex w-full justify-center">
<PageContentWrapper className="w-full">
<PageHeader pageTitle={survey.name}>
- <SurveyAnalysisNavigation
-   survey={survey}
-   environmentId={environment.id}
-   activeId="summary"
-   initialTotalResponseCount={totalResponseCount}
- />
+ <SurveyAnalysisNavigation survey={survey} environmentId={environment.id} activeId="summary" />
</PageHeader>
<SummaryPage
environment={environment}
survey={survey}
surveyId={survey.id}
webAppUrl={WEBAPP_URL}
- totalResponseCount={totalResponseCount}
isReadOnly={true}
locale={DEFAULT_LOCALE}
+ initialSurveySummary={initialSurveySummary}
/>
</PageContentWrapper>
</div>


@@ -43,7 +43,7 @@ export const getSummaryBySurveySharingKeyAction = actionClient
const surveyId = await getSurveyIdByResultShareKey(parsedInput.sharingKey);
if (!surveyId) throw new AuthorizationError("Not authorized");
- return await getSurveySummary(surveyId, parsedInput.filterCriteria);
+ return getSurveySummary(surveyId, parsedInput.filterCriteria);
});
const ZGetResponseCountBySurveySharingKeyAction = z.object({
@@ -57,7 +57,7 @@ export const getResponseCountBySurveySharingKeyAction = actionClient
const surveyId = await getSurveyIdByResultShareKey(parsedInput.sharingKey);
if (!surveyId) throw new AuthorizationError("Not authorized");
- return await getResponseCountBySurveyId(surveyId, parsedInput.filterCriteria);
+ return getResponseCountBySurveyId(surveyId, parsedInput.filterCriteria);
});
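Dropping `await` on these returns is safe because neither handler catches locally: a returned promise and an awaited one resolve to the same value for the caller. The difference only shows up inside a try/catch, as this small illustration (hypothetical `fails` helper, not part of the codebase) demonstrates:

```typescript
const fails = async (): Promise<number> => {
  throw new Error("boom");
};

// Without `return await`, the rejection passes through to the caller
// untouched; this local catch never fires.
const passthrough = async (): Promise<number> => {
  try {
    return fails();
  } catch {
    return -1; // unreachable: the promise rejects after it has been returned
  }
};

// With `return await`, the rejection surfaces inside this function,
// so the local catch can handle it.
const handled = async (): Promise<number> => {
  try {
    return await fails();
  } catch {
    return -1;
  }
};
```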
const ZGetSurveyFilterDataBySurveySharingKeyAction = z.object({


@@ -11,6 +11,8 @@ const { PHASE_PRODUCTION_BUILD } = require("next/constants");
// @fortedigital/nextjs-cache-handler dependencies
const createRedisHandler = require("@fortedigital/nextjs-cache-handler/redis-strings").default;
+ const createBufferStringHandler =
+   require("@fortedigital/nextjs-cache-handler/buffer-string-decorator").default;
const { Next15CacheHandler } = require("@fortedigital/nextjs-cache-handler/next-15-cache-handler");
// Usual onCreation from @neshca/cache-handler
@@ -85,7 +87,7 @@ CacheHandler.onCreation(() => {
global.cacheHandlerConfigPromise = null;
global.cacheHandlerConfig = {
- handlers: [redisCacheHandler],
+ handlers: [createBufferStringHandler(redisCacheHandler)],
};
return global.cacheHandlerConfig;


@@ -95,8 +95,6 @@ export const ITEMS_PER_PAGE = 30;
export const SURVEYS_PER_PAGE = 12;
export const RESPONSES_PER_PAGE = 25;
export const TEXT_RESPONSES_PER_PAGE = 5;
- export const INSIGHTS_PER_PAGE = 10;
- export const DOCUMENTS_PER_PAGE = 10;
export const MAX_RESPONSES_FOR_INSIGHT_GENERATION = 500;
export const MAX_OTHER_OPTION_LENGTH = 250;


@@ -2,6 +2,7 @@ import "server-only";
import { cache } from "@/lib/cache";
import { Prisma } from "@prisma/client";
import { cache as reactCache } from "react";
+ import { z } from "zod";
import { prisma } from "@formbricks/database";
import { logger } from "@formbricks/logger";
import { ZId, ZOptionalNumber, ZString } from "@formbricks/types/common";
@@ -98,7 +99,7 @@ export const getResponseContact = (
if (!responsePrisma.contact) return null;
return {
- id: responsePrisma.contact.id as string,
+ id: responsePrisma.contact.id,
userId: responsePrisma.contact.attributes.find((attribute) => attribute.attributeKey.key === "userId")
?.value as string,
};
@@ -291,7 +292,8 @@ export const getResponses = reactCache(
surveyId: string,
limit?: number,
offset?: number,
- filterCriteria?: TResponseFilterCriteria
+ filterCriteria?: TResponseFilterCriteria,
+ cursor?: string
): Promise<TResponse[]> =>
cache(
async () => {
@@ -299,26 +301,39 @@ export const getResponses = reactCache(
[surveyId, ZId],
[limit, ZOptionalNumber],
[offset, ZOptionalNumber],
- [filterCriteria, ZResponseFilterCriteria.optional()]
+ [filterCriteria, ZResponseFilterCriteria.optional()],
+ [cursor, z.string().cuid2().optional()]
);
limit = limit ?? RESPONSES_PER_PAGE;
const survey = await getSurvey(surveyId);
if (!survey) return [];
try {
+ const whereClause: Prisma.ResponseWhereInput = {
+   surveyId,
+   ...buildWhereClause(survey, filterCriteria),
+ };
+ // Add cursor condition for cursor-based pagination
+ if (cursor) {
+   whereClause.id = {
+     lt: cursor, // Get responses with ID less than cursor (for desc order)
+   };
+ }
const responses = await prisma.response.findMany({
- where: {
-   surveyId,
-   ...buildWhereClause(survey, filterCriteria),
- },
+ where: whereClause,
select: responseSelection,
orderBy: [
{
createdAt: "desc",
},
+ {
+   id: "desc", // Secondary sort by ID for consistent pagination
+ },
],
- take: limit ? limit : undefined,
- skip: offset ? offset : undefined,
+ take: limit,
+ skip: offset,
});
const transformedResponses: TResponse[] = await Promise.all(
@@ -340,7 +355,7 @@ export const getResponses = reactCache(
throw error;
}
},
- [`getResponses-${surveyId}-${limit}-${offset}-${JSON.stringify(filterCriteria)}`],
+ [`getResponses-${surveyId}-${limit}-${offset}-${JSON.stringify(filterCriteria)}-${cursor}`],
{
tags: [responseCache.tag.bySurveyId(surveyId)],
}
@@ -360,19 +375,27 @@ export const getResponseDownloadUrl = async (
throw new ResourceNotFoundError("Survey", surveyId);
}
- const environmentId = survey.environmentId as string;
+ const environmentId = survey.environmentId;
const accessType = "private";
const batchSize = 3000;
- const responseCount = await getResponseCountBySurveyId(surveyId, filterCriteria);
- const pages = Math.ceil(responseCount / batchSize);
- const responsesArray = await Promise.all(
-   Array.from({ length: pages }, (_, i) => {
-     return getResponses(surveyId, batchSize, i * batchSize, filterCriteria);
-   })
- );
- const responses = responsesArray.flat();
+ // Use cursor-based pagination instead of count + offset to avoid expensive queries
+ const responses: TResponse[] = [];
+ let cursor: string | undefined = undefined;
+ let hasMore = true;
+ while (hasMore) {
+   const batch = await getResponses(surveyId, batchSize, 0, filterCriteria, cursor);
+   responses.push(...batch);
+   if (batch.length < batchSize) {
+     hasMore = false;
+   } else {
+     // Use the last response's ID as cursor for next batch
+     cursor = batch[batch.length - 1].id;
+   }
+ }
const { metaDataFields, questions, hiddenFields, variables, userAttributes } = extractSurveyDetails(
survey,
@@ -442,8 +465,8 @@ export const getResponsesByEnvironmentId = reactCache(
createdAt: "desc",
},
],
- take: limit ? limit : undefined,
- skip: offset ? offset : undefined,
+ take: limit,
+ skip: offset,
});
const transformedResponses: TResponse[] = await Promise.all(
@@ -478,8 +501,6 @@ export const updateResponse = async (
): Promise<TResponse> => {
validateInputs([responseId, ZId], [responseInput, ZResponseUpdateInput]);
try {
- // const currentResponse = await getResponse(responseId);
- // use direct prisma call to avoid cache issues
const currentResponse = await prisma.response.findUnique({
where: {


@@ -238,14 +238,14 @@ describe("Tests for getResponseDownloadUrl service", () => {
expect(fileExtension).not.toEqual("xlsx");
});
- test("Throws DatabaseError on PrismaClientKnownRequestError, when the getResponseCountBySurveyId fails", async () => {
+ test("Throws DatabaseError on PrismaClientKnownRequestError, when the getResponses fails", async () => {
const mockErrorMessage = "Mock error message";
const errToThrow = new Prisma.PrismaClientKnownRequestError(mockErrorMessage, {
code: PrismaErrorType.UniqueConstraintViolation,
clientVersion: "0.0.1",
});
prisma.survey.findUnique.mockResolvedValue(mockSurveyOutput);
- prisma.response.count.mockRejectedValue(errToThrow);
+ prisma.response.findMany.mockRejectedValue(errToThrow);
await expect(getResponseDownloadUrl(mockSurveyId, "csv")).rejects.toThrow(DatabaseError);
});


@@ -1,52 +0,0 @@
- import { useCallback, useEffect, useRef } from "react";
- export const useIntervalWhenFocused = (
-   callback: () => void,
-   intervalDuration: number,
-   isActive: boolean,
-   shouldExecuteImmediately = true
- ) => {
-   const intervalRef = useRef<NodeJS.Timeout | null>(null);
-   const handleFocus = useCallback(() => {
-     if (isActive) {
-       if (shouldExecuteImmediately) {
-         // Execute the callback immediately when the tab comes into focus
-         callback();
-       }
-       // Set the interval to execute the callback every `intervalDuration` milliseconds
-       intervalRef.current = setInterval(() => {
-         callback();
-       }, intervalDuration);
-     }
-   }, [isActive, intervalDuration, callback, shouldExecuteImmediately]);
-   const handleBlur = () => {
-     // Clear the interval when the tab loses focus
-     if (intervalRef.current) {
-       clearInterval(intervalRef.current);
-       intervalRef.current = null;
-     }
-   };
-   useEffect(() => {
-     // Attach focus and blur event listeners
-     window.addEventListener("focus", handleFocus);
-     window.addEventListener("blur", handleBlur);
-     // Handle initial focus
-     handleFocus();
-     // Cleanup interval and event listeners when the component unmounts or dependencies change
-     return () => {
-       if (intervalRef.current) {
-         clearInterval(intervalRef.current);
-       }
-       window.removeEventListener("focus", handleFocus);
-       window.removeEventListener("blur", handleBlur);
-     };
-   }, [isActive, intervalDuration, handleFocus]);
- };
- export default useIntervalWhenFocused;


@@ -9,7 +9,6 @@
"continue_with_saml": "Login mit SAML SSO",
"email-change": {
"confirm_password_description": "Bitte bestätige dein Passwort, bevor du deine E-Mail-Adresse änderst",
- "email_already_exists": "Diese E-Mail wird bereits verwendet",
"email_change_success": "E-Mail erfolgreich geändert",
"email_change_success_description": "Du hast deine E-Mail-Adresse erfolgreich geändert. Bitte logge dich mit deiner neuen E-Mail-Adresse ein.",
"email_verification_failed": "E-Mail-Bestätigung fehlgeschlagen",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "Bitte klicke auf den Link in der E-Mail, um dein Konto zu aktivieren.",
"please_confirm_your_email_address": "Bitte bestätige deine E-Mail-Adresse",
"resend_verification_email": "Bestätigungs-E-Mail erneut senden",
- "verification_email_successfully_sent": "Bestätigungs-E-Mail an {email} gesendet. Bitte überprüfen Sie, um das Update abzuschließen.",
+ "verification_email_resent_successfully": "Bestätigungs-E-Mail gesendet! Bitte überprüfe dein Postfach.",
"we_sent_an_email_to": "Wir haben eine E-Mail an {email} gesendet",
"you_didnt_receive_an_email_or_your_link_expired": "Hast Du keine E-Mail erhalten oder ist dein Link abgelaufen?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "Dateigröße muss weniger als 10MB sein.",
"invalid_file_type": "Ungültiger Dateityp. Nur JPEG-, PNG- und WEBP-Dateien sind erlaubt.",
"lost_access": "Zugriff verloren",
- "new_email_update_success": "Deine Anfrage zur Änderung der E-Mail wurde erhalten.",
"or_enter_the_following_code_manually": "Oder gib den folgenden Code manuell ein:",
"organization_identification": "Hilf deiner Organisation, Dich auf Formbricks zu identifizieren",
"organizations_delete_message": "Du bist der einzige Besitzer dieser Organisationen, also werden sie <b>auch gelöscht.</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "Link zu öffentlichen Ergebnissen kopiert",
"make_sure_the_survey_type_is_set_to": "Stelle sicher, dass der Umfragetyp richtig eingestellt ist",
"mobile_app": "Mobile App",
- "no_response_matches_filter": "Keine Antwort entspricht deinem Filter",
+ "no_responses_found": "Keine Antworten gefunden",
"only_completed": "Nur vollständige Antworten",
"other_values_found": "Andere Werte gefunden",
"overall": "Insgesamt",


@@ -9,7 +9,6 @@
"continue_with_saml": "Continue with SAML SSO",
"email-change": {
"confirm_password_description": "Please confirm your password before changing your email address",
- "email_already_exists": "This email is already in use",
"email_change_success": "Email changed successfully",
"email_change_success_description": "You have successfully changed your email address. Please log in with your new email address.",
"email_verification_failed": "Email verification failed",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "Please click the link in the email to activate your account.",
"please_confirm_your_email_address": "Please confirm your email address",
"resend_verification_email": "Resend verification email",
- "verification_email_successfully_sent": "Verification email sent to {email}. Please verify to complete the update.",
+ "verification_email_resent_successfully": "Verification email sent! Please check your inbox.",
"we_sent_an_email_to": "We sent an email to {email}. ",
"you_didnt_receive_an_email_or_your_link_expired": "You didn't receive an email or your link expired?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "File size must be less than 10MB.",
"invalid_file_type": "Invalid file type. Only JPEG, PNG, and WEBP files are allowed.",
"lost_access": "Lost access",
- "new_email_update_success": "Your email change request was received.",
"or_enter_the_following_code_manually": "Or enter the following code manually:",
"organization_identification": "Assist your organization in identifying you on Formbricks",
"organizations_delete_message": "You are the only owner of these organizations, so they <b>will be deleted as well.</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "Link to public results copied",
"make_sure_the_survey_type_is_set_to": "Make sure the survey type is set to",
"mobile_app": "Mobile app",
- "no_response_matches_filter": "No response matches your filter",
+ "no_responses_found": "No responses found",
"only_completed": "Only completed",
"other_values_found": "Other values found",
"overall": "Overall",


@@ -9,7 +9,6 @@
"continue_with_saml": "Continuer avec SAML SSO",
"email-change": {
"confirm_password_description": "Veuillez confirmer votre mot de passe avant de changer votre adresse e-mail",
- "email_already_exists": "Cet e-mail est déjà utilisé",
"email_change_success": "E-mail changé avec succès",
"email_change_success_description": "Vous avez changé votre adresse e-mail avec succès. Veuillez vous connecter avec votre nouvelle adresse e-mail.",
"email_verification_failed": "Échec de la vérification de l'email",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "Veuillez cliquer sur le lien dans l'e-mail pour activer votre compte.",
"please_confirm_your_email_address": "Veuillez confirmer votre adresse e-mail.",
"resend_verification_email": "Renvoyer l'email de vérification",
- "verification_email_successfully_sent": "Email de vérification envoyé à {email}. Veuillez vérifier pour compléter la mise à jour.",
+ "verification_email_resent_successfully": "E-mail de vérification envoyé ! Veuillez vérifier votre boîte de réception.",
"we_sent_an_email_to": "Nous avons envoyé un email à {email}",
"you_didnt_receive_an_email_or_your_link_expired": "Vous n'avez pas reçu d'email ou votre lien a expiré ?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "La taille du fichier doit être inférieure à 10 Mo.",
"invalid_file_type": "Type de fichier invalide. Seuls les fichiers JPEG, PNG et WEBP sont autorisés.",
"lost_access": "Accès perdu",
- "new_email_update_success": "Votre demande de changement d'email a été reçue.",
"or_enter_the_following_code_manually": "Ou entrez le code suivant manuellement :",
"organization_identification": "Aidez votre organisation à vous identifier sur Formbricks",
"organizations_delete_message": "Tu es le seul propriétaire de ces organisations, elles <b>seront aussi supprimées.</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "Lien vers les résultats publics copié",
"make_sure_the_survey_type_is_set_to": "Assurez-vous que le type d'enquête est défini sur",
"mobile_app": "Application mobile",
- "no_response_matches_filter": "Aucune réponse ne correspond à votre filtre",
+ "no_responses_found": "Aucune réponse trouvée",
"only_completed": "Uniquement terminé",
"other_values_found": "D'autres valeurs trouvées",
"overall": "Globalement",


@@ -9,7 +9,6 @@
"continue_with_saml": "Continuar com SAML SSO",
"email-change": {
"confirm_password_description": "Por favor, confirme sua senha antes de mudar seu endereço de e-mail",
- "email_already_exists": "Este e-mail já está em uso",
"email_change_success": "E-mail alterado com sucesso",
"email_change_success_description": "Você alterou seu endereço de e-mail com sucesso. Por favor, faça login com seu novo endereço de e-mail.",
"email_verification_failed": "Falha na verificação do e-mail",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "Por favor, clica no link do e-mail pra ativar sua conta.",
"please_confirm_your_email_address": "Por favor, confirme seu endereço de e-mail",
"resend_verification_email": "Reenviar e-mail de verificação",
- "verification_email_successfully_sent": "E-mail de verificação enviado para {email}. Verifique para concluir a atualização.",
+ "verification_email_resent_successfully": "E-mail de verificação enviado! Por favor, verifique sua caixa de entrada.",
"we_sent_an_email_to": "Enviamos um email para {email}",
"you_didnt_receive_an_email_or_your_link_expired": "Você não recebeu um e-mail ou seu link expirou?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "O tamanho do arquivo deve ser menor que 10MB.",
"invalid_file_type": "Tipo de arquivo inválido. Só são permitidos arquivos JPEG, PNG e WEBP.",
"lost_access": "Perdi o acesso",
- "new_email_update_success": "Sua solicitação de alteração de e-mail foi recebida.",
"or_enter_the_following_code_manually": "Ou insira o seguinte código manualmente:",
"organization_identification": "Ajude sua organização a te identificar no Formbricks",
"organizations_delete_message": "Você é o único dono dessas organizações, então elas <b>também serão apagadas.</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "Link pros resultados públicos copiado",
"make_sure_the_survey_type_is_set_to": "Certifique-se de que o tipo de pesquisa esteja definido como",
"mobile_app": "app de celular",
"no_response_matches_filter": "Nenhuma resposta corresponde ao seu filtro",
"no_responses_found": "Nenhuma resposta encontrada",
"only_completed": "Somente concluído",
"other_values_found": "Outros valores encontrados",
"overall": "No geral",

View File

@@ -9,7 +9,6 @@
"continue_with_saml": "Continuar com SAML SSO",
"email-change": {
"confirm_password_description": "Por favor, confirme a sua palavra-passe antes de alterar o seu endereço de email",
"email_already_exists": "Este email já está a ser utilizado",
"email_change_success": "Email alterado com sucesso",
"email_change_success_description": "Alterou com sucesso o seu endereço de email. Por favor, inicie sessão com o seu novo endereço de email.",
"email_verification_failed": "Falha na verificação do email",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "Por favor, clique no link no email para ativar a sua conta.",
"please_confirm_your_email_address": "Por favor, confirme o seu endereço de email",
"resend_verification_email": "Reenviar email de verificação",
"verification_email_successfully_sent": "Email de verificação enviado para {email}. Por favor, verifique para completar a atualização.",
"verification_email_resent_successfully": "Email de verificação enviado! Por favor, verifique a sua caixa de entrada.",
"we_sent_an_email_to": "Enviámos um email para {email}. ",
"you_didnt_receive_an_email_or_your_link_expired": "Não recebeu um email ou o seu link expirou?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "O tamanho do ficheiro deve ser inferior a 10MB.",
"invalid_file_type": "Tipo de ficheiro inválido. Apenas são permitidos ficheiros JPEG, PNG e WEBP.",
"lost_access": "Perdeu o acesso",
"new_email_update_success": "O seu pedido de alteração de email foi recebido.",
"or_enter_the_following_code_manually": "Ou insira o seguinte código manualmente:",
"organization_identification": "Ajude a sua organização a identificá-lo no Formbricks",
"organizations_delete_message": "É o único proprietário destas organizações, por isso <b>também serão eliminadas.</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "Link para resultados públicos copiado",
"make_sure_the_survey_type_is_set_to": "Certifique-se de que o tipo de inquérito está definido para",
"mobile_app": "Aplicação móvel",
"no_response_matches_filter": "Nenhuma resposta corresponde ao seu filtro",
"no_responses_found": "Nenhuma resposta encontrada",
"only_completed": "Apenas concluído",
"other_values_found": "Outros valores encontrados",
"overall": "Geral",

View File

@@ -9,7 +9,6 @@
"continue_with_saml": "使用 SAML SSO 繼續",
"email-change": {
"confirm_password_description": "在更改您的電子郵件地址之前,請確認您的密碼",
"email_already_exists": "此電子郵件地址已被使用",
"email_change_success": "電子郵件已成功更改",
"email_change_success_description": "您已成功更改電子郵件地址。請使用您的新電子郵件地址登入。",
"email_verification_failed": "電子郵件驗證失敗",
@@ -95,7 +94,7 @@
"please_click_the_link_in_the_email_to_activate_your_account": "請點擊電子郵件中的連結以啟用您的帳戶。",
"please_confirm_your_email_address": "請確認您的電子郵件地址",
"resend_verification_email": "重新發送驗證電子郵件",
"verification_email_successfully_sent": "验证电子邮件已发送至 {email}。请验证以完成更新。",
"verification_email_resent_successfully": "驗證電子郵件已發送!請檢查您的收件箱。",
"we_sent_an_email_to": "我們已發送一封電子郵件至 <email>'{'email'}'</email>。",
"you_didnt_receive_an_email_or_your_link_expired": "您沒有收到電子郵件或您的連結已過期?"
},
@@ -1158,7 +1157,6 @@
"file_size_must_be_less_than_10mb": "檔案大小必須小於 10MB。",
"invalid_file_type": "無效的檔案類型。僅允許 JPEG、PNG 和 WEBP 檔案。",
"lost_access": "無法存取",
"new_email_update_success": "您的 email 更改請求已收到。",
"or_enter_the_following_code_manually": "或手動輸入下列程式碼:",
"organization_identification": "協助您的組織在 Formbricks 上識別您",
"organizations_delete_message": "您是這些組織的唯一擁有者,因此它們也 <b>將被刪除。</b>",
@@ -1771,7 +1769,7 @@
"link_to_public_results_copied": "已複製公開結果的連結",
"make_sure_the_survey_type_is_set_to": "請確保問卷類型設定為",
"mobile_app": "行動應用程式",
"no_response_matches_filter": "沒有任何回應符合您的篩選器",
"no_responses_found": "找不到回應",
"only_completed": "僅已完成",
"other_values_found": "找到其他值",
"overall": "整體",

View File

@@ -12,8 +12,8 @@ vi.mock("@tolgee/react", () => ({
if (key === "auth.verification-requested.no_email_provided") {
return "No email provided";
}
if (key === "auth.verification-requested.verification_email_successfully_sent") {
return `Verification email sent to ${params?.email}`;
if (key === "auth.verification-requested.verification_email_resent_successfully") {
return `Verification email sent! Please check your inbox.`;
}
if (key === "auth.verification-requested.resend_verification_email") {
return "Resend verification email";
@@ -61,7 +61,7 @@ describe("RequestVerificationEmail", () => {
await fireEvent.click(button);
expect(resendVerificationEmailAction).toHaveBeenCalledWith({ email: mockEmail });
expect(toast.success).toHaveBeenCalledWith(`Verification email sent to ${mockEmail}`);
expect(toast.success).toHaveBeenCalledWith(`Verification email sent! Please check your inbox.`);
});
test("reloads page when visibility changes to visible", () => {

View File

@@ -31,7 +31,7 @@ export const RequestVerificationEmail = ({ email }: RequestVerificationEmailProp
if (!email) return toast.error(t("auth.verification-requested.no_email_provided"));
const response = await resendVerificationEmailAction({ email });
if (response?.data) {
toast.success(t("auth.verification-requested.verification_email_successfully_sent", { email }));
toast.success(t("auth.verification-requested.verification_email_resent_successfully"));
} else {
const errorMessage = getFormattedErrorMessage(response);
toast.error(errorMessage);

View File

@@ -510,7 +510,7 @@ describe("SegmentFilter", () => {
qualifier: {
operator: "greaterThan",
},
value: "10",
value: "hello",
};
const segmentWithArithmeticFilter: TSegment = {
@@ -527,7 +527,7 @@ describe("SegmentFilter", () => {
const currentProps = { ...baseProps, segment: segmentWithArithmeticFilter };
render(<SegmentFilter {...currentProps} connector="and" resource={arithmeticFilterResource} />);
const valueInput = screen.getByDisplayValue("10");
const valueInput = screen.getByDisplayValue("hello");
await userEvent.clear(valueInput);
fireEvent.change(valueInput, { target: { value: "abc" } });
@@ -694,7 +694,7 @@ describe("SegmentFilter", () => {
id: "filter-person-2",
root: { type: "person", personIdentifier: "userId" },
qualifier: { operator: "greaterThan" },
value: "10",
value: "hello",
};
const segmentWithPersonFilterArithmetic: TSegment = {
@@ -715,7 +715,7 @@ describe("SegmentFilter", () => {
resource={personFilterResourceWithArithmeticOperator}
/>
);
const valueInput = screen.getByDisplayValue("10");
const valueInput = screen.getByDisplayValue("hello");
await userEvent.clear(valueInput);
fireEvent.change(valueInput, { target: { value: "abc" } });

View File

@@ -236,7 +236,7 @@ function AttributeSegmentFilter({
setValueError(t("environments.segments.value_must_be_a_number"));
}
}
}, [resource.qualifier, resource.value]);
}, [resource.qualifier, resource.value, t]);
const operatorArr = ATTRIBUTE_OPERATORS.map((operator) => {
return {
@@ -327,7 +327,7 @@ function AttributeSegmentFilter({
<SelectContent>
{contactAttributeKeys.map((attrClass) => (
<SelectItem key={attrClass.id} value={attrClass.key}>
{attrClass.name}
{attrClass.name ?? attrClass.key}
</SelectItem>
))}
</SelectContent>
@@ -422,7 +422,7 @@ function PersonSegmentFilter({
setValueError(t("environments.segments.value_must_be_a_number"));
}
}
}, [resource.qualifier, resource.value]);
}, [resource.qualifier, resource.value, t]);
const operatorArr = PERSON_OPERATORS.map((operator) => {
return {

View File

@@ -23,7 +23,6 @@ const nextConfig = {
productionBrowserSourceMaps: false,
serverExternalPackages: ["@aws-sdk", "@opentelemetry/instrumentation", "pino", "pino-pretty"],
outputFileTracingIncludes: {
"app/api/packages": ["../../packages/js-core/dist/*", "../../packages/surveys/dist/*"],
"/api/auth/**/*": ["../../node_modules/jose/**/*"],
},
experimental: {},
@@ -189,7 +188,8 @@ const nextConfig = {
headers: [
{
key: "Cache-Control",
value: "public, max-age=3600, s-maxage=604800, stale-while-revalidate=3600, stale-if-error=3600",
value:
"public, max-age=3600, s-maxage=2592000, stale-while-revalidate=3600, stale-if-error=86400",
},
{
key: "Content-Type",
@@ -199,20 +199,151 @@ const nextConfig = {
key: "Access-Control-Allow-Origin",
value: "*",
},
{
key: "Vary",
value: "Accept-Encoding",
},
],
},
// headers for /api/packages/(.*) -- the api route does not exist, but we still need the headers for the rewrites to work correctly!
// Favicon files - long cache since they rarely change
{
source: "/api/packages/(.*)",
source: "/favicon/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=3600, s-maxage=604800, stale-while-revalidate=3600, stale-if-error=3600",
value: "public, max-age=2592000, s-maxage=31536000, immutable",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
],
},
// Root favicon.ico - long cache
{
source: "/favicon.ico",
headers: [
{
key: "Cache-Control",
value: "public, max-age=2592000, s-maxage=31536000, immutable",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
],
},
// SVG files (icons, logos) - long cache since they're usually static
{
source: "/(.*)\\.svg",
headers: [
{
key: "Cache-Control",
value: "public, max-age=2592000, s-maxage=31536000, immutable",
},
{
key: "Content-Type",
value: "application/javascript; charset=UTF-8",
value: "image/svg+xml",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
],
},
// Image backgrounds - medium cache (might update more frequently)
{
source: "/image-backgrounds/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=86400, s-maxage=2592000, stale-while-revalidate=86400",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
{
key: "Vary",
value: "Accept-Encoding",
},
],
},
// Video files - long cache since they're large and expensive to transfer
{
source: "/video/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=604800, s-maxage=31536000, stale-while-revalidate=604800",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
{
key: "Accept-Ranges",
value: "bytes",
},
],
},
// Animated backgrounds (4K videos) - very long cache since they're large and immutable
{
source: "/animated-bgs/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=604800, s-maxage=31536000, immutable",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
{
key: "Accept-Ranges",
value: "bytes",
},
],
},
// CSV templates - shorter cache since they might update with feature changes
{
source: "/sample-csv/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=3600, s-maxage=86400, stale-while-revalidate=3600",
},
{
key: "Content-Type",
value: "text/csv",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
],
},
// Web manifest and browser config files - medium cache
{
source: "/(site\\.webmanifest|browserconfig\\.xml)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=86400, s-maxage=604800, stale-while-revalidate=86400",
},
{
key: "Access-Control-Allow-Origin",
value: "*",
},
],
},
// Optimize caching for other static assets in public folder (fallback)
{
source: "/(images|fonts|icons)/(.*)",
headers: [
{
key: "Cache-Control",
value: "public, max-age=31536000, s-maxage=31536000, immutable",
},
{
key: "Access-Control-Allow-Origin",

View File

@@ -138,7 +138,7 @@ test.describe("JS Package Test", async () => {
const impressionsCount = await page.getByRole("button", { name: "Impressions" }).innerText();
expect(impressionsCount).toEqual("Impressions\n\n1");
await expect(page.getByRole("link", { name: "Responses (1)" })).toBeVisible();
await expect(page.getByRole("link", { name: "Responses" })).toBeVisible();
await expect(page.getByRole("button", { name: "Completed 100%" })).toBeVisible();
await expect(page.getByText("1 Responses", { exact: true }).first()).toBeVisible();
await expect(page.getByText("CTR100%")).toBeVisible();

View File

Image file changed (before: 15 KiB, after: 15 KiB)

View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80"><path fill="#000" d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/><path fill="#000" d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/></svg>

Deleted file (was 1.3 KiB)

Binary file removed (was 162 KiB)

View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="40" height="31" fill="none"><g opacity=".9"><path fill="url(#a)" d="M13 .4v29.3H7V6.3h-.2L0 10.5V5L7.2.4H13Z"/><path fill="url(#b)" d="M28.8 30.1c-2.2 0-4-.3-5.7-1-1.7-.8-3-1.8-4-3.1a7.7 7.7 0 0 1-1.4-4.6h6.2c0 .8.3 1.4.7 2 .4.5 1 .9 1.7 1.2.7.3 1.6.4 2.5.4 1 0 1.7-.2 2.5-.5.7-.3 1.3-.8 1.7-1.4.4-.6.6-1.2.6-2s-.2-1.5-.7-2.1c-.4-.6-1-1-1.8-1.4-.8-.4-1.8-.5-2.9-.5h-2.7v-4.6h2.7a6 6 0 0 0 2.5-.5 4 4 0 0 0 1.7-1.3c.4-.6.6-1.3.6-2a3.5 3.5 0 0 0-2-3.3 5.6 5.6 0 0 0-4.5 0 4 4 0 0 0-1.7 1.2c-.4.6-.6 1.2-.6 2h-6c0-1.7.6-3.2 1.5-4.5 1-1.3 2.2-2.3 3.8-3C25 .4 26.8 0 28.8 0s3.8.4 5.3 1.1c1.5.7 2.7 1.7 3.6 3a7.2 7.2 0 0 1 1.2 4.2c0 1.6-.5 3-1.5 4a7 7 0 0 1-4 2.2v.2c2.2.3 3.8 1 5 2.2a6.4 6.4 0 0 1 1.6 4.6c0 1.7-.5 3.1-1.4 4.4a9.7 9.7 0 0 1-4 3.1c-1.7.8-3.7 1.1-5.8 1.1Z"/></g><defs><linearGradient id="a" x1="20" x2="20" y1="0" y2="30.1" gradientUnits="userSpaceOnUse"><stop/><stop offset="1" stop-color="#3D3D3D"/></linearGradient><linearGradient id="b" x1="20" x2="20" y1="0" y2="30.1" gradientUnits="userSpaceOnUse"><stop/><stop offset="1" stop-color="#3D3D3D"/></linearGradient></defs></svg>

Deleted file (was 1.1 KiB)

View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 283 64"><path fill="black" d="M141 16c-11 0-19 7-19 18s9 18 20 18c7 0 13-3 16-7l-7-5c-2 3-6 4-9 4-5 0-9-3-10-7h28v-3c0-11-8-18-19-18zm-9 15c1-4 4-7 9-7s8 3 9 7h-18zm117-15c-11 0-19 7-19 18s9 18 20 18c6 0 12-3 16-7l-8-5c-2 3-5 4-8 4-5 0-9-3-11-7h28l1-3c0-11-8-18-19-18zm-10 15c2-4 5-7 10-7s8 3 9 7h-19zm-39 3c0 6 4 10 10 10 4 0 7-2 9-5l8 5c-3 5-9 8-17 8-11 0-19-7-19-18s8-18 19-18c8 0 14 3 17 8l-8 5c-2-3-5-5-9-5-6 0-10 4-10 10zm83-29v46h-9V5h9zM37 0l37 64H0L37 0zm92 5-27 48L74 5h10l18 30 17-30h10zm59 12v10l-3-1c-6 0-10 4-10 10v15h-9V17h9v9c0-5 6-9 13-9z"/></svg>

Deleted file (was 629 B)

Binary file not shown.

View File

@@ -91,6 +91,7 @@ export default defineConfig({
"packages/surveys/src/components/general/smileys.tsx", // Smiley components
"modules/analysis/components/SingleResponseCard/components/Smileys.tsx", // Analysis smiley components
"modules/auth/lib/mock-data.ts", // Mock data for authentication
"packages/js-core/src/index.ts", // JS Core index file
// Other
"**/scripts/**", // Utility scripts

View File

@@ -180,25 +180,23 @@ tls:
default:
minVersion: VersionTLS12
cipherSuites:
# TLS 1.2 Ciphers
# TLS 1.2 strong ciphers
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
# TLS 1.3 Ciphers (These are automatically used for TLS 1.3 connections)
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
# Fallback
- TLS_FALLBACK_SCSV
# TLS 1.3 ciphers are not configurable in Traefik; they are enabled by default
curvePreferences:
- CurveP521
- CurveP384
sniStrict: true
alpnProtocols:
- h2
- http/1.1
EOT
echo "💡 Created traefik.yaml and traefik-dynamic.yaml file."

View File

@@ -44,4 +44,4 @@ We currently have the following Management API methods exposed and below is thei
---
**Can't figure it out?** Get help in [GitHub Discussions](https://github.com/formbricks/formbricks/discussions).
**Need help?** Reach out in [GitHub Discussions](https://github.com/formbricks/formbricks/discussions).

Binary file added (12 KiB)

Binary file changed (15 KiB → 19 KiB)

Binary file changed (39 KiB → 29 KiB)

Binary file added (139 KiB)

View File

@@ -1,74 +1,77 @@
---
title: "Email Follow-ups"
description: "Follow-ups are a feature that allows you to send emails to your users on different survey events."
description: "Automatically send customized emails to respondents based on their survey responses or specific survey endings."
icon: "envelope"
---
## Overview
The email followup feature allows survey creators to automatically send customized emails to respondents based on their survey responses or when they reach specific survey endings. This feature is particularly useful for following up with respondents, sending thank you notes, or providing additional information.
<Note>
Email followups is a paid feature. It is only available for users on paid plans or if you have [Enterprise Edition](/self-hosting/advanced/license).
</Note>
## Key Components
## What are Email Follow-ups?
### 1. Trigger Types
Email followups allow you to automatically send customized emails to respondents based on their survey responses or when they reach specific survey endings. This feature is perfect for:
- Sending thank you notes
- Following up with respondents
- Providing additional information
- Sharing survey response data
There are two types of triggers for email followups:
### Trigger Types
- **Response-based**: Triggered when a response is submitted
- **Ending-based**: Triggered when respondents reach specific survey endings
<Card title="Response-based">
Emails are sent when a response to your survey is completed.
</Card>
### 2. Email Configuration
<Card title="Ending-based">
Emails are triggered when respondents reach specific survey endings.
</Card>
Each followup email can be configured with:
## Setting Up Email Follow-ups
- **Name**: A descriptive name for the followup
- **To**: Email recipient (sourced from):
- Open text questions with email input type
- Contact info questions
- Hidden fields
- **Reply-To**: One or more email addresses for replies
- **Subject**: Email subject line
- **Body**: HTML-formatted email content
<Steps>
<Step title="Go to Follow-ups Section and Create New Follow-up">
Navigate to the survey editor and access the Follow-ups section.
</Step>
## Setup Process
<Step title="Configure Recipients">
The "To" field can be configured to use:
1. Navigate to the survey editor
2. Access the `follow-ups` section
<ul>
<li><strong>Email Questions:</strong> Responses to question type `Open Text` of type `email`</li>
<li><strong>Contact Info:</strong> Responses to question type `Contact`</li>
<li><strong>Hidden Fields:</strong> Values from hidden fields</li>
<li><strong>Team Members:</strong> Members of your team</li>
<li><strong>Yourself:</strong> Your own email address</li>
</ul>
![Followups tab](/images/xm-and-surveys/core-features/email-followups/followups-tab.webp)
<Image src="/images/xm-and-surveys/core-features/email-followups/followup-recipient.webp" alt="Followup recipient configuration" />
</Step>
3. Click the "New follow-up" button to add a new followup
4. Fill in the required information:
<Step title="Set Up Reply-To">
- Add one or more valid email addresses
- Addresses can be added by typing and pressing space or comma
- Invalid email addresses are automatically rejected
</Step>
- Followup name
- Trigger type (response or endings)
<Step title="Configure Email Content">
<Image src="/images/xm-and-surveys/core-features/email-followups/followup-content.webp" alt="Followup content configuration" />
![Followup form](/images/xm-and-surveys/core-features/email-followups/followup-form.webp)
<ul>
<li><strong>Subject:</strong> Customize your email subject line</li>
<li><strong>Body:</strong> Supports basic HTML formatting (`p`, `span`, `b`, `strong`, `i`, `em`, `a`, `br` tags)</li>
<li>
<strong>Survey Response Data:</strong> Option to include detailed response data with support for:
<ul>
<li>File uploads</li>
<li>Images</li>
<li>Rankings</li>
<li>Translations</li>
</ul>
</li>
</ul>
</Step>
5. **Configuring Recipients**:
The "To" field can be configured to use:
- Responses from email-type open text questions
- Responses from contact info questions
- Values from hidden fields
6. **Configure the Reply-To**:
- Add one or more valid email addresses
- Addresses can be added by typing and pressing space or comma
- Invalid email addresses are automatically rejected
![Followup recipient](/images/xm-and-surveys/core-features/email-followups/followup-recipient.webp)
7. **Configuring the Email Content**:
- Subject
- Body: Supports basic HTML formatting (p, span, b, strong, i, em, a, br tags)
![Followup content](/images/xm-and-surveys/core-features/email-followups/followup-content.webp)
8. **Save and Activate**
<Step title="Save to Activate">
Once you've configured all settings, save your survey to activate the email follow-up.
</Step>
</Steps>
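The body field accepts only the basic HTML tags listed above (`p`, `span`, `b`, `strong`, `i`, `em`, `a`, `br`). A minimal sketch of a valid body, with hypothetical copy and link, could look like this:

```html
<p>Hi <b>there</b>,</p>
<p><strong>Thanks</strong> for your response! <em>We read every submission.</em></p>
<p><a href="https://example.com">Learn more</a><br>The Team</p>
```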

View File

@@ -51,4 +51,4 @@ You can export the metadata of your responses along with the response data. When
---
**Can't figure it out?**: [Get help in Github Discussions](https://github.com/formbricks/formbricks/discussions)
**Need help?** [Reach out in Github Discussions](https://github.com/formbricks/formbricks/discussions)

View File

@@ -108,4 +108,31 @@ Without the `lang` parameter, Formbricks will show the survey in the default lan
You can now start collecting responses in multiple languages!
**Can't figure it out?**: [Get help in Github Discussions](https://github.com/formbricks/formbricks/discussions)
---
## RTL Language Support
Formbricks fully supports Right-to-Left (RTL) languages such as Arabic, Hebrew, Persian, and Urdu. When you add an RTL language to your survey, the survey interface automatically adjusts to display content from right to left.
### How RTL Support Works
- Text alignment automatically switches to right-to-left
- Survey layout and UI elements adjust to RTL orientation
- Button placement and navigation flow adapt to RTL reading direction
- Form elements maintain proper RTL formatting
### Setting Up RTL Languages
No additional configuration is needed to enable RTL support. Simply:
1. Add an RTL language (like Arabic or Hebrew) in the **Survey Languages** settings
2. Create translations for your survey content in the RTL language
3. The survey will automatically display in RTL format when that language is selected
![RTL Language Support](/images/xm-and-surveys/surveys/general-features/multi-language-surveys/rtl-support.webp)
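For link surveys, RTL behavior combines with the `lang` URL parameter described earlier. For Arabic, a survey link might look like this (placeholder survey ID, assuming a Formbricks Cloud URL):

```
https://app.formbricks.com/s/<surveyId>?lang=ar
```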
---
**Need help?** [Reach out in Github Discussions](https://github.com/formbricks/formbricks/discussions)

View File

PS: If you do not see any signature settings, just use one of the methods we've
---
**Can't figure it out?**: [Get help in Github Discussions](https://github.com/formbricks/formbricks/discussions)
**Need help?** [Reach out in Github Discussions](https://github.com/formbricks/formbricks/discussions)

View File

@@ -33,6 +33,10 @@ Integrate the **Formbricks App Survey SDK** into your app using multiple options
[Use our iOS SDK to quickly integrate surveys into your iOS applications.](https://formbricks.com/docs/app-surveys/framework-guides#swift)
</Card>
<Card title="Android" icon="android" color="green" href="#android">
[Integrate surveys into your Android applications using our native Kotlin SDK.](https://formbricks.com/docs/app-surveys/framework-guides#android)
</Card>
</CardGroup>
## Prerequisites
@@ -409,6 +413,77 @@ Formbricks.cleanup(waitForOperations: true) {
| environment-id | string | Formbricks Environment ID. |
| app-url | string | URL of the hosted Formbricks instance. |
Now, visit the [Validate Your Setup](#validate-your-setup) section to verify your setup!
## Android
Install the Formbricks Android SDK using the following steps:
### Installation
Add the Maven Central repository and the Formbricks SDK dependency to your application's `build.gradle.kts`:
```kotlin
repositories {
google()
mavenCentral()
}
dependencies {
implementation("com.formbricks:android:1.0.0") // replace with latest version
}
```
Enable DataBinding in your app module's `build.gradle.kts`:
```kotlin
android {
buildFeatures {
dataBinding = true
}
}
```
### Usage
```kotlin
// 1. Initialize the SDK
val config = FormbricksConfig.Builder(
"https://your-formbricks-server.com",
"YOUR_ENVIRONMENT_ID"
)
.setLoggingEnabled(true)
.setFragmentManager(supportFragmentManager)
.build()
// 2. Setup Formbricks
Formbricks.setup(this, config)
// 3. Identify the user
Formbricks.setUserId("user123")
// 4. Track events
Formbricks.track("button_pressed")
// 5. Set or add user attributes
Formbricks.setAttribute("test@web.com", "email")
Formbricks.setAttributes(mapOf(Pair("attr1", "val1"), Pair("attr2", "val2")))
// 6. Change language (no userId required):
Formbricks.setLanguage("de")
// 7. Log out:
Formbricks.logout()
```
### Required Customizations
| Name | Type | Description |
| -------------- | ------ | -------------------------------------- |
| environment-id | string | Formbricks Environment ID. |
| app-url | string | URL of the hosted Formbricks instance. |
## Validate your setup
Once you've completed the steps above, validate your setup by checking the Setup Checklist in the Settings. The widget status indicator should change from this:
@@ -420,6 +495,10 @@ To this:
## Debugging Formbricks Integration
<Note>
The debug mode is only available in the JavaScript SDK and works exclusively in the browser. It is not supported in mobile SDKs such as React Native, iOS, or Android.
</Note>
Enabling debug mode in your browser can help troubleshoot issues with Formbricks. Here's how to activate it and what to look for in the logs.
### Activate Debug Mode

View File

@@ -375,4 +375,4 @@ And lastly, in the `updateFeedback` function
Something doesn't work? Check your browser console for the error.
**Can't figure it out?**: [Get help in GitHub Discussions](https://github.com/formbricks/formbricks/discussions)
**Need help?** [Reach out in GitHub Discussions](https://github.com/formbricks/formbricks/discussions)

View File

@@ -93,6 +93,51 @@ deployment:
nodeSelector:
karpenter.sh/capacity-type: spot
reloadOnChange: true
# Pod lifecycle management for zero-downtime deployments
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 15"]
# Health probes configuration
probes:
readiness:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
liveness:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
startup:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 12
# Pod termination grace period
terminationGracePeriodSeconds: 45
# Rolling update strategy
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 50%
autoscaling:
enabled: true
maxReplicas: 95
@@ -136,9 +181,32 @@ ingress:
alb.ingress.kubernetes.io/healthcheck-path: /health
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-Res-2021-06
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/target-type: ip
# Enhanced ALB configuration for connection handling
alb.ingress.kubernetes.io/load-balancer-attributes: |
idle_timeout.timeout_seconds=120,
connection_logs.s3.enabled=false,
access_logs.s3.enabled=false
# Target group health check optimizations
alb.ingress.kubernetes.io/target-group-attributes: |
deregistration_delay.timeout_seconds=30,
stickiness.enabled=false,
stickiness.type=lb_cookie,
stickiness.lb_cookie.duration_seconds=86400,
load_balancing.algorithm.type=least_outstanding_requests,
target_group_health.dns_failover.minimum_healthy_targets.count=1,
target_group_health.dns_failover.minimum_healthy_targets.percentage=off
# Health check configuration
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthy-threshold-count: "2"
alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
alb.ingress.kubernetes.io/success-codes: "200"
# Backend protocol and port
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
enabled: true
hosts:
- host: stage.app.formbricks.com
@@ -163,3 +231,16 @@ postgresql:
enabled: false
redis:
enabled: false
## Service Configuration
service:
type: ClusterIP
port: 80
targetPort: 3000
annotations:
# Service annotations for better ALB integration
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "120"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
# Session affinity disabled for better load distribution
sessionAffinity: None

View File

@@ -82,8 +82,6 @@ deployment:
env:
DOCKER_CRON_ENABLED:
value: "0"
RATE_LIMITING_DISABLED:
value: "1"
envFrom:
app-env:
nameSuffix: app-env
@@ -91,6 +89,51 @@ deployment:
nodeSelector:
karpenter.sh/capacity-type: on-demand
reloadOnChange: true
# Pod lifecycle management for zero-downtime deployments
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "sleep 15"]
# Health probes configuration
probes:
readiness:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
liveness:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
startup:
httpGet:
path: /health
port: 3000
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 12
# Pod termination grace period
terminationGracePeriodSeconds: 45
# Rolling update strategy
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 50%
autoscaling:
enabled: true
maxReplicas: 95
@@ -137,6 +180,39 @@ ingress:
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/target-type: ip
# Enhanced ALB configuration for connection handling
alb.ingress.kubernetes.io/load-balancer-attributes: |
idle_timeout.timeout_seconds=120,
connection_logs.s3.enabled=false,
access_logs.s3.enabled=false
# Target group health check optimizations
alb.ingress.kubernetes.io/target-group-attributes: |
deregistration_delay.timeout_seconds=30,
stickiness.enabled=false,
stickiness.type=lb_cookie,
stickiness.lb_cookie.duration_seconds=86400,
load_balancing.algorithm.type=least_outstanding_requests,
target_group_health.dns_failover.minimum_healthy_targets.count=1,
target_group_health.dns_failover.minimum_healthy_targets.percentage=off
# Health check configuration
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthy-threshold-count: "2"
alb.ingress.kubernetes.io/unhealthy-threshold-count: "3"
alb.ingress.kubernetes.io/success-codes: "200"
# Backend protocol and port
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/backend-protocol-version: HTTP1
# SSL redirect action
alb.ingress.kubernetes.io/actions.ssl-redirect: |
{
"Type": "redirect",
"RedirectConfig": {
"Protocol": "HTTPS",
"Port": "443",
"StatusCode": "HTTP_301"
}
}
enabled: true
hosts:
- host: app.k8s.formbricks.com
@@ -166,3 +242,16 @@ postgresql:
enabled: false
redis:
enabled: false
## Service Configuration
service:
type: ClusterIP
port: 80
targetPort: 3000
annotations:
# Service annotations for better ALB integration
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "120"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
# Session affinity disabled for better load distribution
sessionAffinity: None
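Taken together, these settings choreograph zero-downtime rollouts: the ALB drains targets (deregistration delay 30s) while `preStop` sleeps 15s, then the pod receives SIGTERM and has up to `terminationGracePeriodSeconds` (45s) to finish in-flight requests. A hypothetical app-side counterpart (the actual Formbricks server code is not part of this diff) might look like:

```typescript
import http from "node:http";

// Pure routing helper so the probe behavior is easy to exercise in isolation.
function route(url: string | undefined): { status: number; body: string } {
  if (url === "/health") return { status: 200, body: "ok" }; // readiness/liveness/startup target
  return { status: 200, body: "hello" };
}

const server = http.createServer((req, res) => {
  const { status, body } = route(req.url);
  res.statusCode = status;
  res.end(body);
});

// server.listen(3000); // matches targetPort: 3000 (commented out so the sketch runs anywhere)

process.on("SIGTERM", () => {
  // stop accepting new connections, let in-flight requests finish, then exit
  server.close(() => process.exit(0));
  // hard deadline safely below terminationGracePeriodSeconds (45s)
  setTimeout(() => process.exit(1), 40_000).unref();
});
```

The 15s `preStop` sleep exists so the ALB stops routing new requests before SIGTERM ever reaches the process.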


@@ -57,6 +57,62 @@ locals {
LoadBalancer = local.alb_id
}
}
ALB_HTTPCode_ELB_502_Count = {
alarm_description = "ALB 502 errors indicating backend connection issues"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 3
threshold = 20
period = 300
unit = "Count"
namespace = "AWS/ApplicationELB"
metric_name = "HTTPCode_ELB_502_Count"
statistic = "Sum"
dimensions = {
LoadBalancer = local.alb_id
}
}
ALB_HTTPCode_ELB_504_Count = {
alarm_description = "ALB 504 errors indicating timeout issues"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 3
threshold = 15
period = 300
unit = "Count"
namespace = "AWS/ApplicationELB"
metric_name = "HTTPCode_ELB_504_Count"
statistic = "Sum"
dimensions = {
LoadBalancer = local.alb_id
}
}
ALB_HTTPCode_Target_4XX_Count = {
alarm_description = "High 4XX error rate indicating client issues or misconfigurations"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 5
threshold = 100
period = 600
unit = "Count"
namespace = "AWS/ApplicationELB"
metric_name = "HTTPCode_Target_4XX_Count"
statistic = "Sum"
dimensions = {
LoadBalancer = local.alb_id
}
}
ALB_TargetConnectionErrorCount = {
alarm_description = "High target connection errors indicating backend connectivity issues"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = 3
threshold = 50
period = 300
unit = "Count"
namespace = "AWS/ApplicationELB"
metric_name = "TargetConnectionErrorCount"
statistic = "Sum"
dimensions = {
LoadBalancer = local.alb_id
}
}
ALB_TargetResponseTime = {
alarm_description = format("Average API response time is greater than %s", 5)
comparison_operator = "GreaterThanThreshold"
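Each alarm block above follows the same shape: a `Sum` statistic per `period`, compared against `threshold`, firing only after `evaluation_periods` consecutive breaching datapoints. A hypothetical helper (not part of the Terraform) makes the evaluation rule concrete:

```typescript
// Sketch of CloudWatch's default evaluation: the alarm fires when the most
// recent `evaluationPeriods` datapoints all breach the threshold
// (GreaterThanThreshold, Sum statistic).
function alarmFires(
  periodSums: number[], // one Sum datapoint per period (e.g. 300s)
  threshold: number, // e.g. 20 for the ELB 502 alarm
  evaluationPeriods: number // e.g. 3
): boolean {
  if (periodSums.length < evaluationPeriods) return false;
  const recent = periodSums.slice(-evaluationPeriods);
  return recent.every((sum) => sum > threshold);
}

// 502 counts per 5-minute window: two breaches then a healthy window → no alarm
console.log(alarmFires([25, 30, 5], 20, 3)); // false
// three consecutive breaches → alarm
console.log(alarmFires([25, 30, 21], 20, 3)); // true
```

This is why the 4XX alarm tolerates more noise: a higher threshold (100), a longer period (600s), and five evaluation periods.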


@@ -385,6 +385,38 @@ resource "kubernetes_manifest" "node_pool" {
values = ["nitro"]
}
]
# Add node startup taints to prevent traffic before nodes are ready
startupTaints = [
{
key = "karpenter.sh/startup"
value = "true"
effect = "NoSchedule"
}
]
# Add kubelet configuration for better pod lifecycle management
kubelet = {
maxPods = 110
clusterDNS = ["169.254.20.10"]
# Graceful node shutdown configuration
shutdownGracePeriod = "30s"
shutdownGracePeriodCriticalPods = "10s"
# Pod eviction settings
evictionHard = {
"memory.available" = "100Mi"
"nodefs.available" = "10%"
"imagefs.available" = "10%"
}
evictionSoft = {
"memory.available" = "500Mi"
"nodefs.available" = "15%"
"imagefs.available" = "15%"
}
evictionSoftGracePeriod = {
"memory.available" = "2m"
"nodefs.available" = "2m"
"imagefs.available" = "2m"
}
}
}
}
limits = {
@@ -392,8 +424,12 @@ resource "kubernetes_manifest" "node_pool" {
}
disruption = {
consolidationPolicy = "WhenEmptyOrUnderutilized"
consolidateAfter = "30s"
consolidateAfter = "60s" # Increased from 30s to reduce frequent disruptions
# Expiration settings for better predictability
expireAfter = "168h" # 7 days
}
# Weight for prioritizing this NodePool
weight = 100
}
}
}
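The kubelet settings above create two eviction tiers per resource: a soft threshold that waits out a grace period before evicting, and a hard threshold that evicts immediately. For memory, the rule sketches out as (hypothetical helper, thresholds copied from the config above):

```typescript
// Illustration of the two memory eviction tiers configured above:
// soft at 500Mi (after the 2m evictionSoftGracePeriod) and hard at 100Mi.
type EvictionDecision = "none" | "soft-after-grace" | "hard-immediate";

function memoryEviction(availableMi: number): EvictionDecision {
  if (availableMi < 100) return "hard-immediate"; // memory.available < 100Mi
  if (availableMi < 500) return "soft-after-grace"; // memory.available < 500Mi, 2m grace
  return "none";
}

console.log(memoryEviction(50)); // "hard-immediate"
console.log(memoryEviction(300)); // "soft-after-grace"
console.log(memoryEviction(800)); // "none"
```

The same two-tier logic applies to `nodefs.available` and `imagefs.available` at 10% / 15%.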


@@ -1,5 +1,5 @@
/* eslint-disable import/no-default-export -- required for default export*/
import { CommandQueue } from "@/lib/common/command-queue";
import { CommandQueue, CommandType } from "@/lib/common/command-queue";
import * as Setup from "@/lib/common/setup";
import { getIsDebug } from "@/lib/common/utils";
import * as Action from "@/lib/survey/action";
@@ -9,7 +9,7 @@ import * as User from "@/lib/user/user";
import { type TConfigInput, type TLegacyConfigInput } from "@/types/config";
import { type TTrackProperties } from "@/types/survey";
const queue = new CommandQueue();
const queue = CommandQueue.getInstance();
const setup = async (setupConfig: TConfigInput): Promise<void> => {
// If the initConfig has a userId or attributes, we need to use the legacy init
@@ -27,45 +27,41 @@ const setup = async (setupConfig: TConfigInput): Promise<void> => {
// eslint-disable-next-line no-console -- legacy init
console.warn("🧱 Formbricks - Warning: Using legacy init");
}
queue.add(Setup.setup, false, {
await queue.add(Setup.setup, CommandType.Setup, false, {
...setupConfig,
// @ts-expect-error -- apiHost was in the older type
...(setupConfig.apiHost && { appUrl: setupConfig.apiHost as string }),
} as unknown as TConfigInput);
} else {
queue.add(Setup.setup, false, setupConfig);
await queue.wait();
await queue.add(Setup.setup, CommandType.Setup, false, setupConfig);
}
// wait for setup to complete
await queue.wait();
};
const setUserId = async (userId: string): Promise<void> => {
queue.add(User.setUserId, true, userId);
await queue.wait();
await queue.add(User.setUserId, CommandType.UserAction, true, userId);
};
const setEmail = async (email: string): Promise<void> => {
await setAttribute("email", email);
await queue.wait();
await queue.add(Attribute.setAttributes, CommandType.UserAction, true, { email });
};
const setAttribute = async (key: string, value: string): Promise<void> => {
queue.add(Attribute.setAttributes, true, { [key]: value });
await queue.wait();
await queue.add(Attribute.setAttributes, CommandType.UserAction, true, { [key]: value });
};
const setAttributes = async (attributes: Record<string, string>): Promise<void> => {
queue.add(Attribute.setAttributes, true, attributes);
await queue.wait();
await queue.add(Attribute.setAttributes, CommandType.UserAction, true, attributes);
};
const setLanguage = async (language: string): Promise<void> => {
queue.add(Attribute.setAttributes, true, { language });
await queue.wait();
await queue.add(Attribute.setAttributes, CommandType.UserAction, true, { language });
};
const logout = async (): Promise<void> => {
queue.add(User.logout, true);
await queue.wait();
await queue.add(User.logout, CommandType.GeneralAction);
};
/**
@@ -73,13 +69,11 @@ const logout = async (): Promise<void> => {
* @param properties - Optional properties to set, like the hidden fields (deprecated, hidden fields will be removed in a future version)
*/
const track = async (code: string, properties?: TTrackProperties): Promise<void> => {
queue.add<string | TTrackProperties | undefined>(Action.trackCodeAction, true, code, properties);
await queue.wait();
await queue.add(Action.trackCodeAction, CommandType.GeneralAction, true, code, properties);
};
const registerRouteChange = async (): Promise<void> => {
queue.add(checkPageUrl, true);
await queue.wait();
await queue.add(checkPageUrl, CommandType.GeneralAction);
};
const formbricks = {


@@ -1,32 +1,68 @@
/* eslint-disable @typescript-eslint/no-explicit-any -- required for command queue */
/* eslint-disable no-console -- we need to log global errors */
import { checkSetup } from "@/lib/common/setup";
import { checkSetup } from "@/lib/common/status";
import { wrapThrowsAsync } from "@/lib/common/utils";
import type { Result } from "@/types/error";
import { UpdateQueue } from "@/lib/user/update-queue";
import { type Result } from "@/types/error";
export type TCommand = (
...args: any[]
) => Promise<Result<void, unknown>> | Result<void, unknown> | Promise<void>;
export enum CommandType {
Setup,
UserAction,
GeneralAction,
}
interface InternalQueueItem {
command: TCommand;
type: CommandType;
checkSetup: boolean;
commandArgs: any[];
}
export class CommandQueue {
private queue: {
command: TCommand;
checkSetup: boolean;
commandArgs: any[];
}[] = [];
private queue: InternalQueueItem[] = [];
private running = false;
private resolvePromise: (() => void) | null = null;
private commandPromise: Promise<void> | null = null;
private static instance: CommandQueue | null = null;
public add<A>(command: TCommand, shouldCheckSetup = true, ...args: A[]): void {
this.queue.push({ command, checkSetup: shouldCheckSetup, commandArgs: args });
public static getInstance(): CommandQueue {
CommandQueue.instance ??= new CommandQueue();
return CommandQueue.instance;
}
if (!this.running) {
this.commandPromise = new Promise((resolve) => {
this.resolvePromise = resolve;
void this.run();
});
}
public add(
command: TCommand,
type: CommandType,
shouldCheckSetupFlag = true,
...args: any[]
): Promise<Result<void, unknown>> {
return new Promise((addResolve) => {
try {
const newItem: InternalQueueItem = {
command,
type,
checkSetup: shouldCheckSetupFlag,
commandArgs: args,
};
this.queue.push(newItem);
if (!this.running) {
this.commandPromise = new Promise((resolve) => {
this.resolvePromise = resolve;
void this.run();
});
}
addResolve({ ok: true, data: undefined });
} catch (error) {
addResolve({ ok: false, error: error as Error });
}
});
}
public async wait(): Promise<void> {
@@ -37,21 +73,29 @@ export class CommandQueue {
private async run(): Promise<void> {
this.running = true;
while (this.queue.length > 0) {
const currentItem = this.queue.shift();
if (!currentItem) continue;
// make sure formbricks is setup
if (currentItem.checkSetup) {
// call different function based on package type
const setupResult = checkSetup();
if (!setupResult.ok) {
console.warn(`🧱 Formbricks - Setup not complete.`);
continue;
}
}
if (currentItem.type === CommandType.GeneralAction) {
// first check if there are pending updates in the update queue
const updateQueue = UpdateQueue.getInstance();
if (!updateQueue.isEmpty()) {
console.log("🧱 Formbricks - Waiting for pending updates to complete before executing command");
await updateQueue.processUpdates();
}
}
const executeCommand = async (): Promise<Result<void, unknown>> => {
return (await currentItem.command.apply(null, currentItem.commandArgs)) as Result<void, unknown>;
};
@@ -64,6 +108,7 @@ export class CommandQueue {
console.error("🧱 Formbricks - Global error: ", result.data.error);
}
}
this.running = false;
if (this.resolvePromise) {
this.resolvePromise();


@@ -16,10 +16,7 @@ export class Config {
}
static getInstance(): Config {
if (!Config.instance) {
Config.instance = new Config();
}
Config.instance ??= new Config();
return Config.instance;
}


@@ -1,12 +1,9 @@
/* eslint-disable no-console -- required for logging */
import { Config } from "@/lib/common/config";
import { JS_LOCAL_STORAGE_KEY } from "@/lib/common/constants";
import {
addCleanupEventListeners,
addEventListeners,
removeAllEventListeners,
} from "@/lib/common/event-listeners";
import { addCleanupEventListeners, addEventListeners } from "@/lib/common/event-listeners";
import { Logger } from "@/lib/common/logger";
import { getIsSetup, setIsSetup } from "@/lib/common/status";
import { filterSurveys, getIsDebug, isNowExpired, wrapThrows } from "@/lib/common/utils";
import { fetchEnvironmentState } from "@/lib/environment/state";
import { checkPageUrl } from "@/lib/survey/no-code-action";
@@ -24,18 +21,11 @@ import {
type MissingFieldError,
type MissingPersonError,
type NetworkError,
type NotSetupError,
type Result,
err,
okVoid,
} from "@/types/error";
let isSetup = false;
export const setIsSetup = (state: boolean): void => {
isSetup = state;
};
const migrateLocalStorage = (): { changed: boolean; newState?: TConfig } => {
const existingConfig = localStorage.getItem(JS_LOCAL_STORAGE_KEY);
@@ -99,7 +89,7 @@ export const setup = async (
}
}
if (isSetup) {
if (getIsSetup()) {
logger.debug("Already set up, skipping setup.");
return okVoid();
}
@@ -193,6 +183,7 @@ export const setup = async (
if (environmentStateResponse.ok) {
environmentState = environmentStateResponse.data;
logger.debug(`Fetched ${environmentState.data.surveys.length.toString()} surveys from the backend`);
} else {
logger.error(
`Error fetching environment state: ${environmentStateResponse.error.code} - ${environmentStateResponse.error.responseMessage ?? ""}`
@@ -257,7 +248,9 @@ export const setup = async (
});
const surveyNames = filteredSurveys.map((s) => s.name);
logger.debug(`Fetched ${surveyNames.length.toString()} surveys during sync: ${surveyNames.join(", ")}`);
logger.debug(
`${surveyNames.length.toString()} surveys could be shown to current user on trigger: ${surveyNames.join(", ")}`
);
} catch {
logger.debug("Error during sync. Please try again.");
}
@@ -303,6 +296,7 @@ export const setup = async (
}
const environmentState = environmentStateResponse.data;
logger.debug(`Fetched ${environmentState.data.surveys.length.toString()} surveys from the backend`);
const filteredSurveys = filterSurveys(environmentState, userState);
config.update({
@@ -312,6 +306,11 @@ export const setup = async (
environment: environmentState,
filteredSurveys,
});
const surveyNames = filteredSurveys.map((s) => s.name);
logger.debug(
`${surveyNames.length.toString()} surveys could be shown to current user on trigger: ${surveyNames.join(", ")}`
);
} catch (e) {
await handleErrorOnFirstSetup(e as { code: string; responseMessage: string });
}
@@ -329,35 +328,26 @@ export const setup = async (
return okVoid();
};
export const checkSetup = (): Result<void, NotSetupError> => {
const logger = Logger.getInstance();
logger.debug("Check if set up");
if (!isSetup) {
return err({
code: "not_setup",
message: "Formbricks is not set up. Call setup() first.",
});
}
return okVoid();
};
export const tearDown = (): void => {
const logger = Logger.getInstance();
const appConfig = Config.getInstance();
const { environment } = appConfig.get();
const filteredSurveys = filterSurveys(environment, DEFAULT_USER_STATE_NO_USER_ID);
logger.debug("Setting user state to default");
// clear the user state and set it to the default value
appConfig.update({
...appConfig.get(),
user: DEFAULT_USER_STATE_NO_USER_ID,
filteredSurveys,
});
// remove container element from DOM
removeWidgetContainer();
addWidgetContainer();
setIsSurveyRunning(false);
removeAllEventListeners();
setIsSetup(false);
};
export const handleErrorOnFirstSetup = (e: { code: string; responseMessage: string }): Promise<never> => {


@@ -0,0 +1,26 @@
import { Logger } from "@/lib/common/logger";
import { type NotSetupError, type Result, err, okVoid } from "@/types/error";
let isSetup = false;
export const setIsSetup = (state: boolean): void => {
isSetup = state;
};
export const getIsSetup = (): boolean => {
return isSetup;
};
export const checkSetup = (): Result<void, NotSetupError> => {
const logger = Logger.getInstance();
logger.debug("Check if set up");
if (!isSetup) {
return err({
code: "not_setup",
message: "Formbricks is not set up. Call setup() first.",
});
}
return okVoid();
};


@@ -1,13 +1,24 @@
import { CommandQueue } from "@/lib/common/command-queue";
import { checkSetup } from "@/lib/common/setup";
import { CommandQueue, CommandType } from "@/lib/common/command-queue";
import { checkSetup } from "@/lib/common/status";
import { UpdateQueue } from "@/lib/user/update-queue";
import { type Result } from "@/types/error";
import { beforeEach, describe, expect, test, vi } from "vitest";
// Mock the setup module so we can control checkSetup()
vi.mock("@/lib/common/setup", () => ({
vi.mock("@/lib/common/status", () => ({
checkSetup: vi.fn(),
}));
// Mock the UpdateQueue
vi.mock("@/lib/user/update-queue", () => ({
UpdateQueue: {
getInstance: vi.fn(() => ({
isEmpty: vi.fn(),
processUpdates: vi.fn(),
})),
},
}));
describe("CommandQueue", () => {
let queue: CommandQueue;
@@ -51,9 +62,9 @@ describe("CommandQueue", () => {
vi.mocked(checkSetup).mockReturnValue({ ok: true, data: undefined });
// Enqueue commands
queue.add(cmdA, true);
queue.add(cmdB, true);
queue.add(cmdC, true);
await queue.add(cmdA, CommandType.GeneralAction, true);
await queue.add(cmdB, CommandType.GeneralAction, true);
await queue.add(cmdC, CommandType.GeneralAction, true);
// Wait for them to finish
await queue.wait();
@@ -79,7 +90,7 @@ describe("CommandQueue", () => {
},
});
queue.add(cmd, true);
await queue.add(cmd, CommandType.GeneralAction, true);
await queue.wait();
// Command should never have been called
@@ -99,7 +110,7 @@ describe("CommandQueue", () => {
vi.mocked(checkSetup).mockReturnValue({ ok: true, data: undefined });
// Here we pass 'false' for the second argument, so no check is performed
queue.add(cmd, false);
await queue.add(cmd, CommandType.GeneralAction, false);
await queue.wait();
expect(cmd).toHaveBeenCalledTimes(1);
@@ -128,7 +139,7 @@ describe("CommandQueue", () => {
throw new Error("some error");
});
queue.add(failingCmd, true);
await queue.add(failingCmd, CommandType.GeneralAction, true);
await queue.wait();
expect(consoleErrorSpy).toHaveBeenCalledWith("🧱 Formbricks - Global error: ", expect.any(Error));
@@ -153,8 +164,8 @@ describe("CommandQueue", () => {
vi.mocked(checkSetup).mockReturnValue({ ok: true, data: undefined });
queue.add(cmd1, true);
queue.add(cmd2, true);
await queue.add(cmd1, CommandType.GeneralAction, true);
await queue.add(cmd2, CommandType.GeneralAction, true);
await queue.wait();
@@ -162,4 +173,70 @@ describe("CommandQueue", () => {
expect(cmd1).toHaveBeenCalled();
expect(cmd2).toHaveBeenCalled();
});
test("processes UpdateQueue before executing GeneralAction commands", async () => {
const mockUpdateQueue = {
isEmpty: vi.fn().mockReturnValue(false),
processUpdates: vi.fn().mockResolvedValue("test"),
};
const mockUpdateQueueInstance = vi.spyOn(UpdateQueue, "getInstance");
mockUpdateQueueInstance.mockReturnValue(mockUpdateQueue as unknown as UpdateQueue);
const generalActionCmd = vi.fn((): Promise<Result<void, unknown>> => {
return Promise.resolve({ ok: true, data: undefined });
});
vi.mocked(checkSetup).mockReturnValue({ ok: true, data: undefined });
await queue.add(generalActionCmd, CommandType.GeneralAction, true);
await queue.wait();
expect(mockUpdateQueue.isEmpty).toHaveBeenCalled();
expect(mockUpdateQueue.processUpdates).toHaveBeenCalled();
expect(generalActionCmd).toHaveBeenCalled();
});
test("implements singleton pattern correctly", () => {
const instance1 = CommandQueue.getInstance();
const instance2 = CommandQueue.getInstance();
expect(instance1).toBe(instance2);
});
test("handles multiple commands with different types and setup checks", async () => {
const executionOrder: string[] = [];
const cmd1 = vi.fn((): Promise<Result<void, unknown>> => {
executionOrder.push("cmd1");
return Promise.resolve({ ok: true, data: undefined });
});
const cmd2 = vi.fn((): Promise<Result<void, unknown>> => {
executionOrder.push("cmd2");
return Promise.resolve({ ok: true, data: undefined });
});
const cmd3 = vi.fn((): Promise<Result<void, unknown>> => {
executionOrder.push("cmd3");
return Promise.resolve({ ok: true, data: undefined });
});
// Setup check will fail for cmd2
vi.mocked(checkSetup)
.mockReturnValueOnce({ ok: true, data: undefined }) // for cmd1
.mockReturnValueOnce({ ok: false, error: { code: "not_setup", message: "Not setup" } }) // for cmd2
.mockReturnValueOnce({ ok: true, data: undefined }); // for cmd3
await queue.add(cmd1, CommandType.Setup, true);
await queue.add(cmd2, CommandType.UserAction, true);
await queue.add(cmd3, CommandType.GeneralAction, true);
await queue.wait();
// cmd2 should be skipped due to failed setup check
expect(executionOrder).toEqual(["cmd1", "cmd3"]);
expect(cmd1).toHaveBeenCalled();
expect(cmd2).not.toHaveBeenCalled();
expect(cmd3).toHaveBeenCalled();
});
});


@@ -1,13 +1,10 @@
/* eslint-disable @typescript-eslint/unbound-method -- required for testing */
import { Config } from "@/lib/common/config";
import { JS_LOCAL_STORAGE_KEY } from "@/lib/common/constants";
import {
addCleanupEventListeners,
addEventListeners,
removeAllEventListeners,
} from "@/lib/common/event-listeners";
import { addCleanupEventListeners, addEventListeners } from "@/lib/common/event-listeners";
import { Logger } from "@/lib/common/logger";
import { checkSetup, handleErrorOnFirstSetup, setIsSetup, setup, tearDown } from "@/lib/common/setup";
import { handleErrorOnFirstSetup, setup, tearDown } from "@/lib/common/setup";
import { setIsSetup } from "@/lib/common/status";
import { filterSurveys, isNowExpired } from "@/lib/common/utils";
import { fetchEnvironmentState } from "@/lib/environment/state";
import { DEFAULT_USER_STATE_NO_USER_ID } from "@/lib/user/state";
@@ -287,24 +284,8 @@ describe("setup.ts", () => {
});
});
describe("checkSetup()", () => {
test("returns err if not setup", () => {
const res = checkSetup();
expect(res.ok).toBe(false);
if (!res.ok) {
expect(res.error.code).toBe("not_setup");
}
});
test("returns ok if setup", () => {
setIsSetup(true);
const res = checkSetup();
expect(res.ok).toBe(true);
});
});
describe("tearDown()", () => {
test("resets user state to default and removes event listeners", () => {
test("resets user state to default", () => {
const mockConfig = {
get: vi.fn().mockReturnValue({
user: { data: { userId: "XYZ" } },
@@ -321,7 +302,7 @@ describe("setup.ts", () => {
user: DEFAULT_USER_STATE_NO_USER_ID,
})
);
expect(removeAllEventListeners).toHaveBeenCalled();
expect(filterSurveys).toHaveBeenCalled();
});
});


@@ -0,0 +1,41 @@
import { checkSetup, getIsSetup, setIsSetup } from "@/lib/common/status";
import { beforeEach, describe, expect, test, vi } from "vitest";
describe("checkSetup()", () => {
beforeEach(() => {
vi.clearAllMocks();
setIsSetup(false);
});
test("returns err if not setup", () => {
const res = checkSetup();
expect(res.ok).toBe(false);
if (!res.ok) {
expect(res.error.code).toBe("not_setup");
}
});
test("returns ok if setup", () => {
setIsSetup(true);
const res = checkSetup();
expect(res.ok).toBe(true);
});
});
describe("getIsSetup()", () => {
beforeEach(() => {
vi.clearAllMocks();
setIsSetup(false);
});
test("returns false if not setup", () => {
const res = getIsSetup();
expect(res).toBe(false);
});
test("returns true if setup", () => {
setIsSetup(true);
const res = getIsSetup();
expect(res).toBe(true);
});
});


@@ -8,7 +8,9 @@ import {
getDefaultLanguageCode,
getIsDebug,
getLanguageCode,
getSecureRandom,
getStyling,
handleHiddenFields,
handleUrlFilters,
isNowExpired,
shouldDisplayBasedOnPercentage,
@@ -23,7 +25,7 @@ import type {
TSurveyStyling,
TUserState,
} from "@/types/config";
import { type TActionClassPageUrlRule } from "@/types/survey";
import { type TActionClassNoCodeConfig, type TActionClassPageUrlRule } from "@/types/survey";
import { beforeEach, describe, expect, test, vi } from "vitest";
const mockSurveyId1 = "e3kxlpnzmdp84op9qzxl9olj";
@@ -61,7 +63,49 @@ describe("utils.ts", () => {
test("returns ok on success", () => {
const fn = vi.fn(() => "success");
const wrapped = wrapThrows(fn);
expect(wrapped()).toEqual({ ok: true, data: "success" });
const result = wrapped();
expect(result.ok).toBe(true);
if (result.ok) {
expect(result.data).toBe("success");
}
});
test("returns err on error", () => {
const fn = vi.fn(() => {
throw new Error("Something broke");
});
const wrapped = wrapThrows(fn);
const result = wrapped();
expect(result.ok).toBe(false);
if (!result.ok) {
expect(result.error.message).toBe("Something broke");
}
});
test("passes arguments to wrapped function", () => {
const fn = vi.fn((a: number, b: number) => a + b);
const wrapped = wrapThrows(fn);
const result = wrapped(2, 3);
expect(result.ok).toBe(true);
if (result.ok) {
expect(result.data).toBe(5);
}
expect(fn).toHaveBeenCalledWith(2, 3);
});
test("handles async function", () => {
const fn = vi.fn(async () => {
await new Promise((r) => {
setTimeout(r, 10);
});
return "async success";
});
const wrapped = wrapThrows(fn);
const result = wrapped();
expect(result.ok).toBe(true);
if (result.ok) {
expect(result.data).toBeInstanceOf(Promise);
}
});
});
@@ -561,6 +605,55 @@ describe("utils.ts", () => {
const result = handleUrlFilters(urlFilters);
expect(result).toBe(true);
});
test("returns true if urlFilters is empty", () => {
const urlFilters: TActionClassNoCodeConfig["urlFilters"] = [];
const result = handleUrlFilters(urlFilters);
expect(result).toBe(true);
});
test("returns false if no urlFilters match", () => {
const urlFilters = [
{
value: "https://example.com/other",
rule: "exactMatch" as unknown as TActionClassPageUrlRule,
},
];
// mock window.location.href
vi.stubGlobal("window", {
location: {
href: "https://example.com/path",
},
});
const result = handleUrlFilters(urlFilters);
expect(result).toBe(false);
});
test("returns true if any urlFilter matches", () => {
const urlFilters = [
{
value: "https://example.com/other",
rule: "exactMatch" as unknown as TActionClassPageUrlRule,
},
{
value: "path",
rule: "contains" as unknown as TActionClassPageUrlRule,
},
];
// mock window.location.href
vi.stubGlobal("window", {
location: {
href: "https://example.com/path",
},
});
const result = handleUrlFilters(urlFilters);
expect(result).toBe(true);
});
});
// ---------------------------------------------------------------------------------
@@ -571,12 +664,12 @@ describe("utils.ts", () => {
const targetElement = document.createElement("div");
const action: TEnvironmentStateActionClass = {
id: "clabc123abc", // some valid cuid2 or placeholder
id: "clabc123abc",
name: "Test Action",
type: "noCode", // or "code", but here we have noCode
type: "noCode",
key: null,
noCodeConfig: {
type: "pageView", // the mismatch
type: "pageView",
urlFilters: [],
},
};
@@ -590,7 +683,7 @@ describe("utils.ts", () => {
targetElement.innerHTML = "Test";
const action: TEnvironmentStateActionClass = {
id: "clabc123abc", // some valid cuid2 or placeholder
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
@@ -615,7 +708,7 @@ describe("utils.ts", () => {
targetElement.matches = vi.fn(() => true);
const action: TEnvironmentStateActionClass = {
id: "clabc123abc", // some valid cuid2 or placeholder
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
@@ -640,14 +733,35 @@ describe("utils.ts", () => {
targetElement.matches = vi.fn(() => false);
const action: TEnvironmentStateActionClass = {
id: "clabc123abc", // some valid cuid2 or placeholder
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
noCodeConfig: {
type: "click",
urlFilters: [],
elementSelector: { cssSelector },
elementSelector: {
cssSelector,
},
},
};
const result = evaluateNoCodeConfigClick(targetElement, action);
expect(result).toBe(false);
});
test("returns false if neither innerHtml nor cssSelector is provided", () => {
const targetElement = document.createElement("div");
const action: TEnvironmentStateActionClass = {
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
noCodeConfig: {
type: "click",
urlFilters: [],
elementSelector: {},
},
};
@@ -657,44 +771,240 @@ describe("utils.ts", () => {
test("returns false if urlFilters do not match", () => {
const targetElement = document.createElement("div");
const urlFilters = [
{
value: "https://example.com/path",
rule: "exactMatch" as unknown as TActionClassPageUrlRule,
targetElement.innerHTML = "Test";
// mock window.location.href
vi.stubGlobal("window", {
location: {
href: "https://example.com/path",
},
];
});
const action: TEnvironmentStateActionClass = {
id: "clabc123abc", // some valid cuid2 or placeholder
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
noCodeConfig: {
type: "click",
urlFilters,
elementSelector: {},
urlFilters: [
{
value: "https://example.com/other",
rule: "exactMatch" as unknown as TActionClassPageUrlRule,
},
],
elementSelector: {
innerHtml: "Test",
},
},
};
const result = evaluateNoCodeConfigClick(targetElement, action);
expect(result).toBe(false);
});
test("returns true if both innerHtml and urlFilters match", () => {
const targetElement = document.createElement("div");
targetElement.innerHTML = "Test";
// mock window.location.href
vi.stubGlobal("window", {
location: {
href: "https://example.com/path",
},
});
const action: TEnvironmentStateActionClass = {
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
noCodeConfig: {
type: "click",
urlFilters: [
{
value: "path",
rule: "contains" as unknown as TActionClassPageUrlRule,
},
],
elementSelector: {
innerHtml: "Test",
},
},
};
const result = evaluateNoCodeConfigClick(targetElement, action);
expect(result).toBe(true);
});
test("handles multiple cssSelectors correctly", () => {
const targetElement = document.createElement("div");
targetElement.className = "test other";
targetElement.matches = vi.fn((selector) => {
return selector === ".test" || selector === ".other";
});
const action: TEnvironmentStateActionClass = {
id: "clabc123abc",
name: "Test Action",
type: "noCode",
key: null,
noCodeConfig: {
type: "click",
urlFilters: [],
elementSelector: {
cssSelector: ".test .other",
},
},
};
const result = evaluateNoCodeConfigClick(targetElement, action);
expect(result).toBe(true);
});
});
// ---------------------------------------------------------------------------------
// getIsDebug
// ---------------------------------------------------------------------------------
describe("getIsDebug()", () => {
test("returns true if debug param is set", () => {
// mock window.location.search
vi.stubGlobal("window", {
location: {
search: "?formbricksDebug=true",
},
beforeEach(() => {
// Reset window.location.search before each test
Object.defineProperty(window, "location", {
value: { search: "" },
writable: true,
});
});
const result = getIsDebug();
expect(result).toBe(true);
test("returns true if debug parameter is set", () => {
Object.defineProperty(window, "location", {
value: { search: "?formbricksDebug=true" },
writable: true,
});
expect(getIsDebug()).toBe(true);
});
test("returns false if debug parameter is not set", () => {
Object.defineProperty(window, "location", {
value: { search: "?otherParam=value" },
writable: true,
});
expect(getIsDebug()).toBe(false);
});
test("returns false if search string is empty", () => {
Object.defineProperty(window, "location", {
value: { search: "" },
writable: true,
});
expect(getIsDebug()).toBe(false);
});
test("returns false if search string is just '?'", () => {
Object.defineProperty(window, "location", {
value: { search: "?" },
writable: true,
});
expect(getIsDebug()).toBe(false);
});
});
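The tests above pin down `getIsDebug`'s contract: only an explicit `formbricksDebug=true` query parameter enables debug mode. An implementation consistent with them (an assumption — the real `utils.ts` reads `window.location.search`; this version takes the search string as a parameter so it runs outside a browser):

```typescript
// Hypothetical sketch matching the getIsDebug() test expectations above.
function getIsDebug(search: string): boolean {
  return new URLSearchParams(search).get("formbricksDebug") === "true";
}

console.log(getIsDebug("?formbricksDebug=true")); // true
console.log(getIsDebug("?otherParam=value")); // false
console.log(getIsDebug("")); // false
console.log(getIsDebug("?")); // false
```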
// ---------------------------------------------------------------------------------
// handleHiddenFields
// ---------------------------------------------------------------------------------
describe("handleHiddenFields()", () => {
test("returns empty object when hidden fields are not enabled", () => {
const hiddenFieldsConfig = {
enabled: false,
fieldIds: ["field1", "field2"],
};
const hiddenFields = {
field1: "value1",
field2: "value2",
};
const result = handleHiddenFields(hiddenFieldsConfig, hiddenFields);
expect(result).toEqual({});
});
test("returns empty object when no hidden fields are provided", () => {
const hiddenFieldsConfig = {
enabled: true,
fieldIds: ["field1", "field2"],
};
const result = handleHiddenFields(hiddenFieldsConfig);
expect(result).toEqual({});
});
test("filters and returns only valid hidden fields", () => {
const hiddenFieldsConfig = {
enabled: true,
fieldIds: ["field1", "field2"],
};
const hiddenFields = {
field1: "value1",
field2: "value2",
field3: "value3", // This should be filtered out
};
const result = handleHiddenFields(hiddenFieldsConfig, hiddenFields);
expect(result).toEqual({
field1: "value1",
field2: "value2",
});
});
test("handles empty fieldIds array", () => {
const hiddenFieldsConfig = {
enabled: true,
fieldIds: [],
};
const hiddenFields = {
field1: "value1",
field2: "value2",
};
const result = handleHiddenFields(hiddenFieldsConfig, hiddenFields);
expect(result).toEqual({});
});
test("handles undefined fieldIds", () => {
const hiddenFieldsConfig = {
enabled: true,
fieldIds: undefined,
};
const hiddenFields = {
field1: "value1",
field2: "value2",
};
const result = handleHiddenFields(hiddenFieldsConfig, hiddenFields);
expect(result).toEqual({});
});
});
// ---------------------------------------------------------------------------------
// getSecureRandom
// ---------------------------------------------------------------------------------
describe("getSecureRandom()", () => {
test("returns a number between 0 and 1", () => {
const result = getSecureRandom();
expect(result).toBeGreaterThanOrEqual(0);
expect(result).toBeLessThan(1);
});
test("returns different values on subsequent calls", () => {
const result1 = getSecureRandom();
const result2 = getSecureRandom();
expect(result1).not.toBe(result2);
});
test("uses crypto.getRandomValues", () => {
const mockGetRandomValues = vi.spyOn(crypto, "getRandomValues");
getSecureRandom();
expect(mockGetRandomValues).toHaveBeenCalled();
mockGetRandomValues.mockRestore();
});
});
});

View File

@@ -121,7 +121,11 @@ export const filterSurveys = (
});
if (!userId) {
// exclude surveys that have a segment with filters
return filteredSurveys.filter((survey) => {
const segmentFiltersLength = survey.segment?.filters.length ?? 0;
return segmentFiltersLength === 0;
});
}
if (!segments.length) {

View File

@@ -1,12 +1,33 @@
/* eslint-disable no-console -- required for logging */
import { CommandQueue, CommandType } from "@/lib/common/command-queue";
import { Config } from "@/lib/common/config";
import { Logger } from "@/lib/common/logger";
import { TimeoutStack } from "@/lib/common/timeout-stack";
import { evaluateNoCodeConfigClick, handleUrlFilters } from "@/lib/common/utils";
import { trackNoCodeAction } from "@/lib/survey/action";
import { setIsSurveyRunning } from "@/lib/survey/widget";
import { type TEnvironmentStateActionClass } from "@/types/config";
import { type Result } from "@/types/error";
// Factory for creating context-specific tracking handlers
export const createTrackNoCodeActionWithContext = (context: string) => {
return async (actionName: string): Promise<Result<void, unknown>> => {
const result = await trackNoCodeAction(actionName);
if (!result.ok) {
const errorToLog = result.error as { message?: string };
const errorMessageText = errorToLog.message ?? "An unknown error occurred.";
console.error(
`🧱 Formbricks - Error in no-code ${context} action '${actionName}': ${errorMessageText}`,
errorToLog
);
}
return result;
};
};
const trackNoCodePageViewActionHandler = createTrackNoCodeActionWithContext("page view");
const trackNoCodeClickActionHandler = createTrackNoCodeActionWithContext("click");
const trackNoCodeExitIntentActionHandler = createTrackNoCodeActionWithContext("exit intent");
const trackNoCodeScrollActionHandler = createTrackNoCodeActionWithContext("scroll");
// Event types for various listeners
const events = ["hashchange", "popstate", "pushstate", "replacestate", "load"];
@@ -18,7 +39,8 @@ export const setIsHistoryPatched = (value: boolean): void => {
isHistoryPatched = value;
};
export const checkPageUrl = async (): Promise<Result<void, unknown>> => {
const queue = CommandQueue.getInstance();
const appConfig = Config.getInstance();
const logger = Logger.getInstance();
const timeoutStack = TimeoutStack.getInstance();
@@ -35,11 +57,7 @@ export const checkPageUrl = async (): Promise<Result<void, NetworkError>> => {
const isValidUrl = handleUrlFilters(urlFilters);
if (isValidUrl) {
await queue.add(trackNoCodePageViewActionHandler, CommandType.GeneralAction, true, event.name);
} else {
const scheduledTimeouts = timeoutStack.getTimeouts();
@@ -52,10 +70,12 @@ export const checkPageUrl = async (): Promise<Result<void, NetworkError>> => {
}
}
return { ok: true, data: undefined };
};
const checkPageUrlWrapper = (): void => {
void checkPageUrl();
};
export const addPageUrlEventListeners = (): void => {
if (typeof window === "undefined" || arePageUrlEventListenersAdded) return;
@@ -92,7 +112,8 @@ export const removePageUrlEventListeners = (): void => {
// Click Event Handlers
let isClickEventListenerAdded = false;
const checkClickMatch = async (event: MouseEvent): Promise<void> => {
const queue = CommandQueue.getInstance();
const appConfig = Config.getInstance();
const { environment } = appConfig.get();
@@ -105,28 +126,15 @@ const checkClickMatch = (event: MouseEvent): void => {
const targetElement = event.target as HTMLElement;
for (const action of noCodeClickActionClasses) {
if (evaluateNoCodeConfigClick(targetElement, action)) {
await queue.add(trackNoCodeClickActionHandler, CommandType.GeneralAction, true, action.name);
}
}
};
const checkClickMatchWrapper = (e: MouseEvent): void => {
void checkClickMatch(e);
};
export const addClickEventListener = (): void => {
@@ -144,7 +152,8 @@ export const removeClickEventListener = (): void => {
// Exit Intent Handlers
let isExitIntentListenerAdded = false;
const checkExitIntent = async (e: MouseEvent): Promise<void> => {
const queue = CommandQueue.getInstance();
const appConfig = Config.getInstance();
const { environment } = appConfig.get();
@@ -161,13 +170,14 @@ const checkExitIntent = async (e: MouseEvent): Promise<ResultError<NetworkError>
if (!isValidUrl) continue;
await queue.add(trackNoCodeExitIntentActionHandler, CommandType.GeneralAction, true, event.name);
}
}
};
const checkExitIntentWrapper = (e: MouseEvent): void => {
void checkExitIntent(e);
};
export const addExitIntentListener = (): void => {
if (typeof document !== "undefined" && !isExitIntentListenerAdded) {
@@ -189,7 +199,8 @@ export const removeExitIntentListener = (): void => {
let scrollDepthListenerAdded = false;
let scrollDepthTriggered = false;
const checkScrollDepth = async (): Promise<void> => {
const queue = CommandQueue.getInstance();
const appConfig = Config.getInstance();
const scrollPosition = window.scrollY;
@@ -216,15 +227,14 @@ const checkScrollDepth = async (): Promise<Result<void, unknown>> => {
if (!isValidUrl) continue;
await queue.add(trackNoCodeScrollActionHandler, CommandType.GeneralAction, true, event.name);
}
}
};
const checkScrollDepthWrapper = (): void => {
void checkScrollDepth();
};
export const addScrollDepthListener = (): void => {
if (typeof window !== "undefined" && !scrollDepthListenerAdded) {

View File

@@ -1,5 +1,6 @@
/* eslint-disable @typescript-eslint/unbound-method -- mock functions are unbound */
import { Config } from "@/lib/common/config";
import { checkSetup } from "@/lib/common/status";
import { TimeoutStack } from "@/lib/common/timeout-stack";
import { handleUrlFilters } from "@/lib/common/utils";
import { trackNoCodeAction } from "@/lib/survey/action";
@@ -9,12 +10,14 @@ import {
addPageUrlEventListeners,
addScrollDepthListener,
checkPageUrl,
createTrackNoCodeActionWithContext,
removeClickEventListener,
removeExitIntentListener,
removePageUrlEventListeners,
removeScrollDepthListener,
} from "@/lib/survey/no-code-action";
import { setIsSurveyRunning } from "@/lib/survey/widget";
import { TActionClassNoCodeConfig } from "@/types/survey";
import { type Mock, type MockInstance, afterEach, beforeEach, describe, expect, test, vi } from "vitest";
vi.mock("@/lib/common/config", () => ({
@@ -45,10 +48,15 @@ vi.mock("@/lib/common/timeout-stack", () => ({
},
}));
vi.mock("@/lib/common/utils", async (importOriginal) => {
// eslint-disable-next-line @typescript-eslint/consistent-type-imports -- We need this only for type inference
const actual = await importOriginal<typeof import("@/lib/common/utils")>();
return {
...actual,
handleUrlFilters: vi.fn(),
evaluateNoCodeConfigClick: vi.fn(),
};
});
vi.mock("@/lib/survey/action", () => ({
trackNoCodeAction: vi.fn(),
@@ -58,13 +66,53 @@ vi.mock("@/lib/survey/widget", () => ({
setIsSurveyRunning: vi.fn(),
}));
vi.mock("@/lib/common/status", () => ({
checkSetup: vi.fn(),
}));
describe("createTrackNoCodeActionWithContext", () => {
test("should create a trackNoCodeAction with the correct context", () => {
const trackNoCodeActionWithContext = createTrackNoCodeActionWithContext("pageView");
expect(trackNoCodeActionWithContext).toBeDefined();
});
test("should log error if trackNoCodeAction fails", async () => {
const consoleErrorSpy = vi.spyOn(console, "error");
vi.mocked(trackNoCodeAction).mockResolvedValue({
ok: false,
error: {
code: "network_error",
message: "Network error",
status: 500,
url: new URL("https://example.com"),
responseMessage: "Network error",
},
});
const trackNoCodeActionWithContext = createTrackNoCodeActionWithContext("pageView");
expect(trackNoCodeActionWithContext).toBeDefined();
await trackNoCodeActionWithContext("noCodeAction");
expect(consoleErrorSpy).toHaveBeenCalledWith(
`🧱 Formbricks - Error in no-code pageView action 'noCodeAction': Network error`,
{
code: "network_error",
message: "Network error",
status: 500,
url: new URL("https://example.com"),
responseMessage: "Network error",
}
);
});
});
describe("no-code-event-listeners file", () => {
let getInstanceConfigMock: MockInstance<() => Config>;
let getInstanceTimeoutStackMock: MockInstance<() => TimeoutStack>;
beforeEach(() => {
vi.clearAllMocks();
getInstanceConfigMock = vi.spyOn(Config, "getInstance");
getInstanceTimeoutStackMock = vi.spyOn(TimeoutStack, "getInstance");
});
@@ -76,6 +124,7 @@ describe("no-code-event-listeners file", () => {
test("checkPageUrl calls handleUrlFilters & trackNoCodeAction for matching actionClasses", async () => {
(handleUrlFilters as Mock).mockReturnValue(true);
(trackNoCodeAction as Mock).mockResolvedValue({ ok: true });
(checkSetup as Mock).mockReturnValue({ ok: true });
const mockConfigValue = {
get: vi.fn().mockReturnValue({
@@ -99,11 +148,10 @@ describe("no-code-event-listeners file", () => {
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
await checkPageUrl();
expect(handleUrlFilters).toHaveBeenCalledWith([{ value: "/some-path", rule: "contains" }]);
expect(trackNoCodeAction).toHaveBeenCalledWith("pageViewAction");
});
test("checkPageUrl removes scheduled timeouts & calls setIsSurveyRunning(false) if invalid url", async () => {
@@ -138,12 +186,11 @@ describe("no-code-event-listeners file", () => {
getInstanceTimeoutStackMock.mockReturnValue(mockTimeoutStack as unknown as TimeoutStack);
await checkPageUrl();
expect(trackNoCodeAction).not.toHaveBeenCalled();
expect(mockTimeoutStack.remove).toHaveBeenCalledWith(123);
expect(setIsSurveyRunning).toHaveBeenCalledWith(false);
});
test("addPageUrlEventListeners adds event listeners to window, patches history if not patched", () => {
@@ -262,4 +309,347 @@ describe("no-code-event-listeners file", () => {
(window.removeEventListener as Mock).mockRestore();
});
// Test cases for Click Event Handlers
describe("Click Event Handlers", () => {
beforeEach(() => {
vi.stubGlobal("document", {
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
});
});
test("addClickEventListener does not add listener if window is undefined", () => {
vi.stubGlobal("window", undefined);
addClickEventListener();
expect(document.addEventListener).not.toHaveBeenCalled();
});
test("addClickEventListener does not re-add listener if already added", () => {
vi.stubGlobal("window", {}); // Ensure window is defined
addClickEventListener(); // First call
expect(document.addEventListener).toHaveBeenCalledTimes(1);
addClickEventListener(); // Second call
expect(document.addEventListener).toHaveBeenCalledTimes(1);
});
});
// Test cases for Exit Intent Handlers
describe("Exit Intent Handlers", () => {
let querySelectorMock: MockInstance;
let addEventListenerMock: Mock;
let removeEventListenerMock: Mock;
beforeEach(() => {
addEventListenerMock = vi.fn();
removeEventListenerMock = vi.fn();
querySelectorMock = vi.fn().mockReturnValue({
addEventListener: addEventListenerMock,
removeEventListener: removeEventListenerMock,
});
vi.stubGlobal("document", {
querySelector: querySelectorMock,
removeEventListener: removeEventListenerMock, // For direct document.removeEventListener calls
});
(handleUrlFilters as Mock).mockReset(); // Reset mock for each test
});
test("addExitIntentListener does not add if document is undefined", () => {
vi.stubGlobal("document", undefined);
addExitIntentListener();
// No explicit expect, passes if no error. querySelector would not be called.
});
test("addExitIntentListener does not add if body is not found", () => {
querySelectorMock.mockReturnValue(null); // body not found
addExitIntentListener();
expect(addEventListenerMock).not.toHaveBeenCalled();
});
test("checkExitIntent does not trigger if clientY > 0", () => {
const mockAction = {
name: "exitAction",
type: "noCode",
noCodeConfig: { type: "exitIntent", urlFilters: [] },
};
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: { data: { actionClasses: [mockAction] } },
}),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
(handleUrlFilters as Mock).mockReturnValue(true);
addExitIntentListener();
expect(handleUrlFilters).not.toHaveBeenCalled();
expect(trackNoCodeAction).not.toHaveBeenCalled();
});
});
// Test cases for Scroll Depth Handlers
describe("Scroll Depth Handlers", () => {
let addEventListenerSpy: MockInstance;
let removeEventListenerSpy: MockInstance;
beforeEach(() => {
addEventListenerSpy = vi.fn();
removeEventListenerSpy = vi.fn();
vi.stubGlobal("window", {
addEventListener: addEventListenerSpy,
removeEventListener: removeEventListenerSpy,
scrollY: 0,
innerHeight: 500,
});
vi.stubGlobal("document", {
readyState: "complete",
documentElement: {
scrollHeight: 2000, // bodyHeight > windowSize
},
});
(handleUrlFilters as Mock).mockReset();
(trackNoCodeAction as Mock).mockReset();
// Reset internal state variables (scrollDepthListenerAdded, scrollDepthTriggered)
// This is tricky without exporting them. We can call removeScrollDepthListener
// to reset scrollDepthListenerAdded. scrollDepthTriggered is reset if scrollY is 0.
removeScrollDepthListener(); // Resets scrollDepthListenerAdded
window.scrollY = 0; // Resets scrollDepthTriggered assumption in checkScrollDepth
});
afterEach(() => {
vi.stubGlobal("document", undefined);
});
test("addScrollDepthListener does not add if window is undefined", () => {
vi.stubGlobal("window", undefined);
addScrollDepthListener();
// No explicit expect. Passes if no error.
});
test("addScrollDepthListener does not re-add listener if already added", () => {
addScrollDepthListener(); // First call
expect(window.addEventListener).toHaveBeenCalledWith("scroll", expect.any(Function));
expect(window.addEventListener).toHaveBeenCalledTimes(1);
addScrollDepthListener(); // Second call
expect(window.addEventListener).toHaveBeenCalledTimes(1);
});
test("checkScrollDepth does nothing if no fiftyPercentScroll actions", async () => {
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: { data: { actionClasses: [] } },
}),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
window.scrollY = 1000; // Past 50%
addScrollDepthListener();
const scrollCallback = addEventListenerSpy.mock.calls[0][1] as () => Promise<void>; // Added type assertion
await scrollCallback();
expect(handleUrlFilters).not.toHaveBeenCalled();
expect(trackNoCodeAction).not.toHaveBeenCalled();
});
test("checkScrollDepth does not trigger if scroll < 50%", async () => {
const mockAction = {
name: "scrollAction",
type: "noCode",
noCodeConfig: { type: "fiftyPercentScroll", urlFilters: [] },
};
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: { data: { actionClasses: [mockAction] } },
}),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
(handleUrlFilters as Mock).mockReturnValue(true);
window.scrollY = 200; // scrollPosition / (bodyHeight - windowSize) = 200 / (2000 - 500) = 200 / 1500 < 0.5
addScrollDepthListener();
const scrollCallback = addEventListenerSpy.mock.calls[0][1] as () => Promise<void>; // Added type assertion
await scrollCallback();
expect(trackNoCodeAction).not.toHaveBeenCalled();
});
test("checkScrollDepth filters by URL", async () => {
(handleUrlFilters as Mock).mockImplementation(
(urlFilters: TActionClassNoCodeConfig["urlFilters"]) => urlFilters[0]?.value === "valid-scroll"
);
(trackNoCodeAction as Mock).mockResolvedValue({ ok: true });
const mockActionValid = {
name: "scrollValid",
type: "noCode",
noCodeConfig: { type: "fiftyPercentScroll", urlFilters: [{ value: "valid-scroll" }] },
};
const mockActionInvalid = {
name: "scrollInvalid",
type: "noCode",
noCodeConfig: { type: "fiftyPercentScroll", urlFilters: [{ value: "invalid-scroll" }] },
};
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: { data: { actionClasses: [mockActionValid, mockActionInvalid] } },
}),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
window.scrollY = 1000; // Past 50%
addScrollDepthListener();
const scrollCallback = addEventListenerSpy.mock.calls[0][1] as () => Promise<void>; // Added type assertion
await scrollCallback();
expect(trackNoCodeAction).not.toHaveBeenCalledWith("scrollInvalid");
});
});
});
describe("checkPageUrl additional cases", () => {
let getInstanceConfigMock: MockInstance<() => Config>;
let getInstanceTimeoutStackMock: MockInstance<() => TimeoutStack>;
beforeEach(() => {
vi.clearAllMocks();
getInstanceConfigMock = vi.spyOn(Config, "getInstance");
getInstanceTimeoutStackMock = vi.spyOn(TimeoutStack, "getInstance");
});
test("checkPageUrl does nothing if no pageView actionClasses", async () => {
(handleUrlFilters as Mock).mockReturnValue(true);
(trackNoCodeAction as Mock).mockResolvedValue({ ok: true });
(checkSetup as Mock).mockReturnValue({ ok: true });
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: {
data: {
actionClasses: [
{
name: "clickAction", // Not a pageView action
type: "noCode",
noCodeConfig: {
type: "click",
},
},
],
},
},
}),
update: vi.fn(),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
vi.stubGlobal("window", { location: { href: "/fail" } });
await checkPageUrl();
expect(handleUrlFilters).not.toHaveBeenCalled();
expect(trackNoCodeAction).not.toHaveBeenCalled();
});
test("checkPageUrl does not remove timeout if not scheduled", async () => {
(handleUrlFilters as Mock).mockReturnValue(false); // Invalid URL
const mockConfigValue = {
get: vi.fn().mockReturnValue({
environment: {
data: {
actionClasses: [
{
name: "pageViewAction",
type: "noCode",
noCodeConfig: {
type: "pageView",
urlFilters: [{ value: "/fail", rule: "contains" }],
},
},
],
},
},
}),
};
getInstanceConfigMock.mockReturnValue(mockConfigValue as unknown as Config);
const mockTimeoutStack = {
getTimeouts: vi.fn().mockReturnValue([]), // No scheduled timeouts
remove: vi.fn(),
add: vi.fn(),
};
getInstanceTimeoutStackMock.mockReturnValue(mockTimeoutStack as unknown as TimeoutStack);
vi.stubGlobal("window", { location: { href: "/fail" } });
await checkPageUrl();
expect(mockTimeoutStack.remove).not.toHaveBeenCalled();
expect(setIsSurveyRunning).not.toHaveBeenCalledWith(false); // Should not be called if timeout was not present
});
});
describe("addPageUrlEventListeners additional cases", () => {
test("addPageUrlEventListeners does not add listeners if window is undefined", () => {
vi.stubGlobal("window", undefined);
addPageUrlEventListeners(); // Call the function
// No explicit expect needed, the test passes if no error is thrown
// and no listeners were attempted to be added to an undefined window.
// We can also assert that isHistoryPatched remains false if it's exported and settable for testing.
// For now, we assume it's an internal detail not directly testable without more mocks.
});
test("addPageUrlEventListeners does not re-add listeners if already added", () => {
const addEventListenerMock = vi.fn();
vi.stubGlobal("window", { addEventListener: addEventListenerMock });
vi.stubGlobal("history", { pushState: vi.fn(), replaceState: vi.fn() });
addPageUrlEventListeners(); // First call
expect(addEventListenerMock).toHaveBeenCalledTimes(5); // hashchange, popstate, pushstate, replacestate, load
addPageUrlEventListeners(); // Second call
expect(addEventListenerMock).toHaveBeenCalledTimes(5); // Should not have been called again
(window.addEventListener as Mock).mockRestore();
});
test("addPageUrlEventListeners does not patch history if already patched", () => {
const addEventListenerMock = vi.fn();
const originalPushState = vi.fn();
vi.stubGlobal("window", { addEventListener: addEventListenerMock, dispatchEvent: vi.fn() });
vi.stubGlobal("history", { pushState: originalPushState, replaceState: vi.fn() });
// Simulate history already patched
// This requires isHistoryPatched to be exported or a way to set it.
// Assuming we can't directly set isHistoryPatched from outside,
// we call it once to patch, then check if pushState is re-assigned.
addPageUrlEventListeners(); // First call, patches history
const patchedPushState = history.pushState;
addPageUrlEventListeners(); // Second call
expect(history.pushState).toBe(patchedPushState); // pushState should not be a new function
// Test patched pushState
const dispatchEventSpy = vi.spyOn(window, "dispatchEvent");
patchedPushState.apply(history, [{}, "", "/new-url"]);
expect(originalPushState).toHaveBeenCalled();
// expect(dispatchEventSpy).toHaveBeenCalledWith(event);
(window.addEventListener as Mock).mockRestore();
dispatchEventSpy.mockRestore();
});
});
describe("removePageUrlEventListeners additional cases", () => {
test("removePageUrlEventListeners does nothing if window is undefined", () => {
vi.stubGlobal("window", undefined);
removePageUrlEventListeners();
// No explicit expect. Passes if no error.
});
test("removePageUrlEventListeners does nothing if listeners were not added", () => {
const removeEventListenerMock = vi.fn();
vi.stubGlobal("window", { removeEventListener: removeEventListenerMock });
// Assuming listeners are not added yet (arePageUrlEventListenersAdded is false)
removePageUrlEventListeners();
(window.removeEventListener as Mock).mockRestore();
});
});

View File

@@ -116,7 +116,7 @@ describe("user.ts", () => {
});
describe("logout", () => {
test("successfully sets up formbricks after logout", () => {
const mockConfig = {
get: vi.fn().mockReturnValue({
environmentId: mockEnvironmentId,
@@ -129,40 +129,24 @@ describe("user.ts", () => {
const result = logout();
expect(tearDown).toHaveBeenCalled();
expect(result.ok).toBe(true);
});
test("returns error if appConfig.get fails", () => {
const mockConfig = {
get: vi.fn().mockReturnValue(null),
};
getInstanceConfigMock.mockReturnValue(mockConfig as unknown as Config);
const result = logout();
expect(result.ok).toBe(false);
if (!result.ok) {
expect(result.error).toEqual(new Error("Failed to logout"));
}
});
});

View File

@@ -13,9 +13,7 @@ export class UpdateQueue {
private constructor() {}
public static getInstance(): UpdateQueue {
UpdateQueue.instance ??= new UpdateQueue();
return UpdateQueue.instance;
}

View File

@@ -1,8 +1,8 @@
import { Config } from "@/lib/common/config";
import { Logger } from "@/lib/common/logger";
import { tearDown } from "@/lib/common/setup";
import { UpdateQueue } from "@/lib/user/update-queue";
import { type ApiErrorResponse, type Result, err, okVoid } from "@/types/error";
// eslint-disable-next-line @typescript-eslint/require-await -- we want to use promises here
export const setUserId = async (userId: string): Promise<Result<void, ApiErrorResponse>> => {
@@ -31,32 +31,22 @@ export const setUserId = async (userId: string): Promise<Result<void, ApiErrorRe
return okVoid();
};
export const logout = (): Result<void> => {
try {
const logger = Logger.getInstance();
const appConfig = Config.getInstance();
const { userId } = appConfig.get().user.data;
if (!userId) {
logger.error("No userId is set, please use the setUserId function to set a userId first");
return okVoid();
}
tearDown();
return okVoid();
} catch {
return { ok: false, error: new Error("Failed to logout") };
}
};

View File

@@ -1,6 +1,6 @@
import "@testing-library/jest-dom/vitest";
import { cleanup, fireEvent, render, screen, waitFor } from "@testing-library/preact";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import { FileInput } from "./file-input";
// Mock auto-animate hook to prevent React useState errors in Preact tests
@@ -37,7 +37,7 @@ describe("FileInput", () => {
vi.clearAllMocks();
});
test("uploads valid file and calls callbacks", async () => {
render(
<FileInput
surveyId="survey1"
@@ -62,7 +62,7 @@ describe("FileInput", () => {
});
});
test("alerts on invalid file type", async () => {
render(
<FileInput
surveyId="survey1"
@@ -82,7 +82,7 @@ describe("FileInput", () => {
expect(onUploadCallback).not.toHaveBeenCalled();
});
test("alerts when multiple files not allowed", () => {
render(
<FileInput
surveyId="survey1"
@@ -100,7 +100,7 @@ describe("FileInput", () => {
expect(onFileUpload).not.toHaveBeenCalled();
});
test("renders existing fileUrls and handles delete", () => {
const initialUrls = ["fileA.txt", "fileB.txt"];
render(
<FileInput
@@ -121,7 +121,7 @@ describe("FileInput", () => {
expect(onUploadCallback).toHaveBeenCalledWith(["fileB.txt"]);
});
test("alerts when duplicate files selected", () => {
render(
<FileInput
surveyId="survey1"
@@ -140,7 +140,7 @@ describe("FileInput", () => {
);
});
test("handles native file upload event", async () => {
// Import the actual constant to ensure we're using the right event name
const FILE_PICK_EVENT = "formbricks:onFilePick";
const nativeFile = { name: "native.txt", type: "text/plain", base64: btoa("native content") };
@@ -174,7 +174,7 @@ describe("FileInput", () => {
});
});
test("tests file size validation", async () => {
// Instead of testing the alert directly, test that large files don't get uploaded
const largeFile = createFile("large.txt", 2 * 1024 * 1024, "text/plain"); // 2MB file
const smallFile = createFile("small.txt", 500, "text/plain"); // 500B file
@@ -215,7 +215,7 @@ describe("FileInput", () => {
expect(onFileUpload).not.toHaveBeenCalled();
});
test("does not upload when no valid files are selected", async () => {
render(
<FileInput
surveyId="survey1"
@@ -235,7 +235,7 @@ describe("FileInput", () => {
expect(onFileUpload).not.toHaveBeenCalled();
});
test("does not upload duplicates", async () => {
render(
<FileInput
surveyId="survey1"
@@ -257,7 +257,7 @@ describe("FileInput", () => {
expect(onFileUpload).not.toHaveBeenCalled();
});
test("handles native file upload with size limits", async () => {
// Import the actual constant to ensure we're using the right event name
const FILE_PICK_EVENT = "formbricks:onFilePick";
@@ -297,7 +297,7 @@ describe("FileInput", () => {
);
});
test("handles case when no files remain after filtering", async () => {
// Import the actual constant
const FILE_PICK_EVENT = "formbricks:onFilePick";
@@ -331,7 +331,7 @@ describe("FileInput", () => {
expect(onUploadCallback).not.toHaveBeenCalled();
});
it("deletes a file", () => {
test("deletes a file", () => {
const initialUrls = ["fileA.txt", "fileB.txt"];
render(
<FileInput
@@ -352,7 +352,7 @@ describe("FileInput", () => {
expect(onUploadCallback).toHaveBeenCalledWith(["fileB.txt"]);
});
it("handles drag and drop", async () => {
test("handles drag and drop", async () => {
render(
<FileInput
surveyId="survey1"
@@ -389,7 +389,7 @@ describe("FileInput", () => {
});
});
it("handles file upload errors", async () => {
test("handles file upload errors", async () => {
// Mock the toBase64 function to fail by making onFileUpload throw an error
// during the Promise.all for uploadPromises
onFileUpload.mockImplementationOnce(() => {
@@ -419,7 +419,7 @@ describe("FileInput", () => {
});
});
it("enforces file limit", () => {
test("enforces file limit", () => {
render(
<FileInput
surveyId="survey1"


@@ -275,7 +275,7 @@ export function FileInput({
}, [allowedFileExtensions]);
return (
<div className="fb-items-left fb-bg-input-bg hover:fb-bg-input-bg-selected fb-border-border fb-relative fb-mt-3 fb-flex fb-w-full fb-flex-col fb-justify-center fb-rounded-lg fb-border-2 fb-border-dashed dark:fb-border-slate-600 dark:fb-bg-slate-700 dark:hover:fb-border-slate-500 dark:hover:fb-bg-slate-800">
<div className="fb-bg-input-bg hover:fb-bg-input-bg-selected fb-border-border fb-relative fb-mt-3 fb-flex fb-w-full fb-flex-col fb-justify-center fb-items-center fb-rounded-lg fb-border-2 fb-border-dashed dark:fb-border-slate-600 dark:fb-bg-slate-700 dark:hover:fb-border-slate-500 dark:hover:fb-bg-slate-800">
<div ref={parent}>
{fileUrls?.map((fileUrl, index) => {
const fileName = getOriginalFileNameFromUrl(fileUrl);


@@ -1,6 +1,6 @@
import "@testing-library/jest-dom/vitest";
import { cleanup, fireEvent, render, screen } from "@testing-library/preact";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import { TSurveyLanguage } from "@formbricks/types/surveys/types";
import { LanguageSwitch } from "./language-switch";
@@ -59,7 +59,7 @@ describe("LanguageSwitch", () => {
cleanup();
});
it("toggles dropdown and lists only enabled languages", () => {
test("toggles dropdown and lists only enabled languages", () => {
render(
<LanguageSwitch
surveyLanguages={surveyLanguages}
@@ -83,7 +83,7 @@ describe("LanguageSwitch", () => {
expect(screen.queryByText("fr")).toBeNull();
});
it("calls setSelectedLanguageCode and setFirstRender correctly", () => {
test("calls setSelectedLanguageCode and setFirstRender correctly", () => {
render(
<LanguageSwitch
surveyLanguages={surveyLanguages}


@@ -1,6 +1,6 @@
import "@testing-library/jest-dom/vitest";
import { cleanup, render, screen } from "@testing-library/preact";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import { ProgressBar } from "./progress-bar";
// Mock Progress component to capture progress prop
@@ -24,12 +24,12 @@ describe("ProgressBar", () => {
endings: [{ id: "end1" }],
};
it("renders 0 for start", () => {
test("renders 0 for start", () => {
render(<ProgressBar survey={baseSurvey} questionId="start" />);
expect(screen.getByTestId("progress")).toHaveTextContent("0");
});
it("renders correct progress for questions", () => {
test("renders correct progress for questions", () => {
// totalCards = questions.length + 1 = 3
render(<ProgressBar survey={baseSurvey} questionId="q1" />);
expect(screen.getByTestId("progress")).toHaveTextContent("0");
@@ -41,7 +41,7 @@ describe("ProgressBar", () => {
expect(screen.getByTestId("progress")).toHaveTextContent((1 / 3).toString());
});
it("renders 1 for ending card", () => {
test("renders 1 for ending card", () => {
render(<ProgressBar survey={baseSurvey} questionId="end1" />);
expect(screen.getByTestId("progress")).toHaveTextContent("1");
});


@@ -1,13 +1,13 @@
import { convertToEmbedUrl } from "@/lib/video-upload";
import { cleanup, render, screen } from "@testing-library/preact";
import { afterEach, describe, expect, it } from "vitest";
import { afterEach, describe, expect, test } from "vitest";
import { QuestionMedia } from "./question-media";
describe("QuestionMedia", () => {
afterEach(() => {
cleanup();
});
it("renders image correctly", () => {
test("renders image correctly", () => {
const imgUrl = "https://example.com/test.jpg";
const altText = "Test Image";
render(<QuestionMedia imgUrl={imgUrl} altText={altText} />);
@@ -17,7 +17,7 @@ describe("QuestionMedia", () => {
expect(img.getAttribute("src")).toBe(imgUrl);
});
it("renders YouTube video correctly", () => {
test("renders YouTube video correctly", () => {
const videoUrl = "https://www.youtube.com/watch?v=test123";
render(<QuestionMedia videoUrl={videoUrl} />);
@@ -26,7 +26,7 @@ describe("QuestionMedia", () => {
expect(iframe.getAttribute("src")).toBe(videoUrl + "?controls=0");
});
it("renders Vimeo video correctly", () => {
test("renders Vimeo video correctly", () => {
const videoUrl = "https://vimeo.com/test123";
render(<QuestionMedia videoUrl={videoUrl} />);
@@ -38,7 +38,7 @@ describe("QuestionMedia", () => {
);
});
it("renders Loom video correctly", () => {
test("renders Loom video correctly", () => {
const videoUrl = "https://www.loom.com/share/test123";
render(<QuestionMedia videoUrl={videoUrl} />);
@@ -49,14 +49,14 @@ describe("QuestionMedia", () => {
);
});
it("renders loading state initially", () => {
test("renders loading state initially", () => {
const { container } = render(<QuestionMedia imgUrl="https://example.com/test.jpg" />);
const loadingElement = container.querySelector(".fb-animate-pulse");
expect(loadingElement).toBeTruthy();
});
it("renders expand button with correct link", () => {
test("renders expand button with correct link", () => {
const imgUrl = "https://example.com/test.jpg";
render(<QuestionMedia imgUrl={imgUrl} />);
@@ -67,7 +67,7 @@ describe("QuestionMedia", () => {
expect(expandLink.getAttribute("rel")).toBe("noreferrer");
});
it("handles loading completion", async () => {
test("handles loading completion", async () => {
const imgUrl = "https://example.com/test.jpg";
const { container } = render(<QuestionMedia imgUrl={imgUrl} />);
@@ -81,14 +81,14 @@ describe("QuestionMedia", () => {
expect(loadingElements.length).toBe(0);
});
it("renders nothing when no media URLs are provided", () => {
test("renders nothing when no media URLs are provided", () => {
const { container } = render(<QuestionMedia />);
expect(container.querySelector("img")).toBeNull();
expect(container.querySelector("iframe")).toBeNull();
});
it("uses default alt text when not provided", () => {
test("uses default alt text when not provided", () => {
const imgUrl = "https://example.com/test.jpg";
render(<QuestionMedia imgUrl={imgUrl} />);
@@ -96,7 +96,7 @@ describe("QuestionMedia", () => {
expect(img).toBeTruthy();
});
it("handles video loading state", async () => {
test("handles video loading state", async () => {
const videoUrl = "https://www.youtube.com/watch?v=test123";
const { container } = render(<QuestionMedia videoUrl={videoUrl} />);
@@ -115,7 +115,7 @@ describe("QuestionMedia", () => {
expect(loadingElements.length).toBe(0);
});
it("renders expand button with correct video link", () => {
test("renders expand button with correct video link", () => {
const videoUrl = "https://www.youtube.com/watch?v=test123";
render(<QuestionMedia videoUrl={videoUrl} />);
@@ -126,7 +126,7 @@ describe("QuestionMedia", () => {
expect(expandLink.getAttribute("rel")).toBe("noreferrer");
});
it("handles regular video URL without parameters", () => {
test("handles regular video URL without parameters", () => {
const videoUrl = "https://example.com/video.mp4";
render(<QuestionMedia videoUrl={videoUrl} />);


@@ -1,6 +1,6 @@
import "@testing-library/jest-dom/vitest";
import { render } from "@testing-library/preact";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import { RenderSurvey } from "./render-survey";
// Stub SurveyContainer to render children and capture props
@@ -31,7 +31,7 @@ describe("RenderSurvey", () => {
vi.useRealTimers();
});
it("renders with default props and handles close", () => {
test("renders with default props and handles close", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "question" }] } as any;
@@ -63,7 +63,7 @@ describe("RenderSurvey", () => {
expect(onClose).toHaveBeenCalled();
});
it("onFinished skips close if redirectToUrl", () => {
test("onFinished skips close if redirectToUrl", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "redirectToUrl" }] } as any;
@@ -88,7 +88,7 @@ describe("RenderSurvey", () => {
expect(onClose).not.toHaveBeenCalled();
});
it("onFinished closes after delay for non-redirect endings", () => {
test("onFinished closes after delay for non-redirect endings", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "question" }] } as any;
@@ -108,14 +108,14 @@ describe("RenderSurvey", () => {
const props = surveySpy.mock.calls[0][0];
props.onFinished();
// after first delay (survey finish), close schedules another delay
// wait for the onFinished timeout (3s) then the close timeout (1s)
vi.advanceTimersByTime(3000);
expect(onClose).not.toHaveBeenCalled();
vi.advanceTimersByTime(1000);
expect(onClose).toHaveBeenCalled();
});
it("onFinished does not auto-close when inline mode", () => {
test("onFinished does not auto-close when inline mode", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [] } as any;
@@ -139,4 +139,103 @@ describe("RenderSurvey", () => {
vi.advanceTimersByTime(5000);
expect(onClose).not.toHaveBeenCalled();
});
test("close clears any pending onFinished timeout", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "question" }] } as any;
const { unmount } = render(
(
<RenderSurvey
survey={survey}
onClose={onClose}
onFinished={onFinished}
styling={{}}
isBrandingEnabled={false}
languageCode="en"
/>
) as any
);
const props = surveySpy.mock.calls[0][0];
// schedule the onFinished-based close
props.onFinished();
// immediately manually close, which should clear that pending timeout
props.onClose();
// manual close schedules onClose in 1s
vi.advanceTimersByTime(1000);
expect(onClose).toHaveBeenCalledTimes(1);
// advance past the original onFinished timeout (3s) + its would-be close delay
vi.advanceTimersByTime(4000);
// still only the one manual-close call
expect(onClose).toHaveBeenCalledTimes(1);
unmount();
});
test("double close only schedules one onClose", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "question" }] } as any;
render(
(
<RenderSurvey
survey={survey}
onClose={onClose}
onFinished={onFinished}
styling={{}}
isBrandingEnabled={false}
languageCode="en"
/>
) as any
);
const props = surveySpy.mock.calls[0][0];
// first close schedules user onClose at t=1000
props.onClose();
vi.advanceTimersByTime(500);
// before the first fires, call close again and clear it
props.onClose();
// advance to t=1000: first one would have fired if not cleared
vi.advanceTimersByTime(500);
expect(onClose).not.toHaveBeenCalled();
// advance to t=1500: only the second close should now fire
vi.advanceTimersByTime(500);
expect(onClose).toHaveBeenCalledTimes(1);
});
test("cleanup on unmount clears pending timers (useEffect)", () => {
const onClose = vi.fn();
const onFinished = vi.fn();
const survey = { endings: [{ id: "e1", type: "question" }] } as any;
const { unmount } = render(
(
<RenderSurvey
survey={survey}
onClose={onClose}
onFinished={onFinished}
styling={{}}
isBrandingEnabled={false}
languageCode="en"
/>
) as any
);
const props = surveySpy.mock.calls[0][0];
// schedule both timeouts
props.onFinished();
props.onClose();
// unmount should clear both pending timeouts
unmount();
// advance well past all delays
vi.advanceTimersByTime(10000);
expect(onClose).not.toHaveBeenCalled();
});
});


@@ -1,20 +1,49 @@
import { useState } from "react";
import { useEffect, useRef, useState } from "react";
import { SurveyContainerProps } from "@formbricks/types/formbricks-surveys";
import { SurveyContainer } from "../wrappers/survey-container";
import { Survey } from "./survey";
export function RenderSurvey(props: SurveyContainerProps) {
const [isOpen, setIsOpen] = useState(true);
const onFinishedTimeoutRef = useRef<NodeJS.Timeout | null>(null);
const closeTimeoutRef = useRef<NodeJS.Timeout | null>(null);
const close = () => {
if (onFinishedTimeoutRef.current) {
clearTimeout(onFinishedTimeoutRef.current);
onFinishedTimeoutRef.current = null;
}
if (closeTimeoutRef.current) {
clearTimeout(closeTimeoutRef.current);
closeTimeoutRef.current = null;
}
setIsOpen(false);
setTimeout(() => {
closeTimeoutRef.current = setTimeout(() => {
if (props.onClose) {
props.onClose();
}
}, 1000); // wait for animation to finish}
}, 1000);
};
useEffect(() => {
return () => {
if (onFinishedTimeoutRef.current) {
clearTimeout(onFinishedTimeoutRef.current);
}
if (closeTimeoutRef.current) {
clearTimeout(closeTimeoutRef.current);
}
};
}, []);
if (!isOpen) {
return null;
}
return (
<SurveyContainer
mode={props.mode ?? "modal"}
@@ -32,7 +61,7 @@ export function RenderSurvey(props: SurveyContainerProps) {
props.onFinished?.();
if (props.mode !== "inline") {
setTimeout(
onFinishedTimeoutRef.current = setTimeout(
() => {
const firstEnabledEnding = props.survey.endings?.[0];
if (firstEnabledEnding?.type !== "redirectToUrl") {
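The fix above keeps a ref to every pending timeout so a manual close cancels a scheduled auto-close and vice versa, and unmount cancels both. A minimal framework-free sketch of that pattern (the `CloseScheduler` name and its methods are illustrative assumptions, not the Formbricks API):

```typescript
type Timer = ReturnType<typeof setTimeout>;

// Sketch: track every pending timeout so no stale onClose can fire later.
class CloseScheduler {
  private onFinishedTimer: Timer | null = null;
  private closeTimer: Timer | null = null;

  constructor(private readonly onClose: () => void) {}

  // Mirrors the onFinished branch: auto-close after a delay.
  scheduleAutoClose(delayMs: number): void {
    this.onFinishedTimer = setTimeout(() => this.close(), delayMs);
  }

  // Mirrors close(): cancel anything pending, then fire onClose once.
  close(closeDelayMs = 1000): void {
    this.cancelAll();
    this.closeTimer = setTimeout(() => this.onClose(), closeDelayMs);
  }

  // Mirrors the useEffect cleanup on unmount.
  cancelAll(): void {
    if (this.onFinishedTimer) {
      clearTimeout(this.onFinishedTimer);
      this.onFinishedTimer = null;
    }
    if (this.closeTimer) {
      clearTimeout(this.closeTimer);
      this.closeTimer = null;
    }
  }
}
```

With this shape, a double close or a close racing an auto-close still invokes `onClose` exactly once, which is what the new `RenderSurvey` tests assert.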


@@ -1,5 +1,5 @@
import { cleanup, fireEvent, render, screen } from "@testing-library/preact";
import { afterEach, describe, expect, it, vi } from "vitest";
import { afterEach, describe, expect, test, vi } from "vitest";
import { type TSurveyQuestion, TSurveyQuestionTypeEnum } from "@formbricks/types/surveys/types";
import { ResponseErrorComponent } from "./response-error-component";
@@ -37,7 +37,7 @@ describe("ResponseErrorComponent", () => {
q2: "Answer 2",
};
it("renders error message and retry button", () => {
test("renders error message and retry button", () => {
render(
<ResponseErrorComponent questions={mockQuestions} responseData={mockResponseData} onRetry={() => {}} />
);
@@ -47,7 +47,7 @@ describe("ResponseErrorComponent", () => {
expect(screen.getByText("Retry")).toBeDefined();
});
it("displays questions and responses correctly", () => {
test("displays questions and responses correctly", () => {
render(
<ResponseErrorComponent questions={mockQuestions} responseData={mockResponseData} onRetry={() => {}} />
);
@@ -63,7 +63,7 @@ describe("ResponseErrorComponent", () => {
expect(answers[1].textContent).toBe("Answer 2");
});
it("calls onRetry when retry button is clicked", () => {
test("calls onRetry when retry button is clicked", () => {
const mockOnRetry = vi.fn();
render(
<ResponseErrorComponent
@@ -79,7 +79,7 @@ describe("ResponseErrorComponent", () => {
expect(mockOnRetry).toHaveBeenCalledTimes(1);
});
it("handles missing responses gracefully", () => {
test("handles missing responses gracefully", () => {
const partialResponseData = {
q1: "Answer 1",
};


@@ -1,5 +1,5 @@
import { render } from "@testing-library/preact";
import { describe, expect, it } from "vitest";
import { describe, expect, test } from "vitest";
import {
ConfusedFace,
FrowningFace,
@@ -34,7 +34,7 @@ describe("Smiley Components", () => {
components.forEach(({ name, Component }) => {
describe(name, () => {
it("renders with default props", () => {
test("renders with default props", () => {
const { container } = render(<Component />);
const svg = container.querySelector("svg");
expect(svg).to.exist;
@@ -53,7 +53,7 @@ describe("Smiley Components", () => {
expect(paths.length).to.be.greaterThan(0);
});
it("applies custom props correctly", () => {
test("applies custom props correctly", () => {
const { container } = render(
<Component {...testProps} style={{ stroke: "red", strokeWidth: 3, fill: "blue" }} />
);
@@ -65,7 +65,7 @@ describe("Smiley Components", () => {
expect(circle?.getAttribute("style")).to.include("fill: blue");
});
it("maintains accessibility", () => {
test("maintains accessibility", () => {
const { container } = render(<Component aria-label={`${name} emoji`} data-testid="smiley-svg" />);
const svg = container.querySelector("[data-testid='smiley-svg']");
expect(svg).to.exist;


@@ -1,7 +1,7 @@
import "@testing-library/jest-dom/vitest";
import { cleanup, fireEvent, render, screen, waitFor } from "@testing-library/preact";
import { JSX } from "preact";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import { afterEach, beforeEach, describe, expect, test, vi } from "vitest";
import type { TJsEnvironmentStateSurvey } from "@formbricks/types/js";
import { Survey } from "./survey";
@@ -243,7 +243,7 @@ describe("Survey", () => {
vi.clearAllMocks();
});
it("renders the survey with welcome card initially", () => {
test("renders the survey with welcome card initially", () => {
render(
<Survey
survey={mockSurvey}
@@ -272,7 +272,7 @@ describe("Survey", () => {
expect(onDisplayMock).toHaveBeenCalled();
});
it("handles question submission and navigation", async () => {
test("handles question submission and navigation", async () => {
// For this test, we'll use startAtQuestionId to force rendering the question card
render(
<Survey
@@ -317,7 +317,7 @@ describe("Survey", () => {
});
});
it("renders branding when enabled", () => {
test("renders branding when enabled", () => {
render(
<Survey
survey={mockSurvey}
@@ -345,7 +345,7 @@ describe("Survey", () => {
expect(screen.getByTestId("formbricks-branding")).toBeInTheDocument();
});
it("renders progress bar by default", () => {
test("renders progress bar by default", () => {
render(
<Survey
survey={mockSurvey}
@@ -373,7 +373,7 @@ describe("Survey", () => {
expect(screen.getByTestId("progress-bar")).toBeInTheDocument();
});
it("hides progress bar when hideProgressBar is true", () => {
test("hides progress bar when hideProgressBar is true", () => {
render(
<Survey
survey={mockSurvey}
@@ -402,7 +402,7 @@ describe("Survey", () => {
expect(screen.queryByTestId("progress-bar")).not.toBeInTheDocument();
});
it("handles file uploads in preview mode", async () => {
test("handles file uploads in preview mode", async () => {
// The createDisplay function in the Survey component calls onDisplayCreated
// We need to make sure it resolves before checking if onDisplayCreated was called
@@ -444,7 +444,7 @@ describe("Survey", () => {
expect(onFileUploadMock).toBeDefined();
});
it("calls onResponseCreated in preview mode", async () => {
test("calls onResponseCreated in preview mode", async () => {
// This test verifies that onResponseCreated is called in preview mode
// when a question is submitted in preview mode
@@ -489,7 +489,7 @@ describe("Survey", () => {
expect(onResponseCreatedMock).toHaveBeenCalled();
});
it("adds response to queue with correct user and contact IDs", async () => {
test("adds response to queue with correct user and contact IDs", async () => {
// This test is focused on the functionality in lines 445-472 of survey.tsx
// We will verify that the 'add' method of the ResponseQueue (mockRQAdd) is called.
// No need to import ResponseQueue or get mock instances dynamically here.
@@ -541,7 +541,7 @@ describe("Survey", () => {
);
});
it("makes questions required based on logic actions", async () => {
test("makes questions required based on logic actions", async () => {
// This test is focused on the functionality in lines 409-411 of survey.tsx
// We'll customize the performActions mock to return requiredQuestionIds
@@ -609,7 +609,7 @@ describe("Survey", () => {
expect(performActions).toHaveBeenCalled();
});
it("starts at a specific question when startAtQuestionId is provided", () => {
test("starts at a specific question when startAtQuestionId is provided", () => {
render(
<Survey
survey={mockSurvey}


@@ -1,6 +1,6 @@
import "@testing-library/jest-dom/vitest";
import { fireEvent, render, screen } from "@testing-library/preact";
import { afterEach, describe, expect, it, vi } from "vitest";
import { afterEach, describe, expect, test, vi } from "vitest";
import { WelcomeCard } from "./welcome-card";
describe("WelcomeCard", () => {
@@ -35,7 +35,7 @@ describe("WelcomeCard", () => {
variablesData: {},
};
it("renders welcome card with basic content", () => {
test("renders welcome card with basic content", () => {
const { container } = render(<WelcomeCard {...defaultProps} />);
expect(container.querySelector(".fb-text-heading")).toHaveTextContent("Welcome to our survey");
@@ -43,7 +43,7 @@ describe("WelcomeCard", () => {
expect(container.querySelector("button")).toHaveTextContent("Start");
});
it("shows time to complete when timeToFinish is true", () => {
test("shows time to complete when timeToFinish is true", () => {
const { container } = render(<WelcomeCard {...defaultProps} />);
const timeDisplay = container.querySelector(".fb-text-subheading");
@@ -51,14 +51,14 @@ describe("WelcomeCard", () => {
expect(timeDisplay).toHaveTextContent(/Takes/);
});
it("shows response count when showResponseCount is true and count > 3", () => {
test("shows response count when showResponseCount is true and count > 3", () => {
const { container } = render(<WelcomeCard {...defaultProps} responseCount={10} />);
const responseText = container.querySelector(".fb-text-xs");
expect(responseText).toHaveTextContent(/10 people responded/);
});
it("handles submit button click", () => {
test("handles submit button click", () => {
const { container } = render(<WelcomeCard {...defaultProps} />);
const button = container.querySelector("button");
@@ -68,7 +68,7 @@ describe("WelcomeCard", () => {
expect(defaultProps.onSubmit).toHaveBeenCalledWith({ welcomeCard: "clicked" }, {});
});
it("handles Enter key press when survey type is link", () => {
test("handles Enter key press when survey type is link", () => {
render(<WelcomeCard {...defaultProps} />);
fireEvent.keyDown(document, { key: "Enter" });
@@ -76,14 +76,14 @@ describe("WelcomeCard", () => {
expect(defaultProps.onSubmit).toHaveBeenCalledWith({ welcomeCard: "clicked" }, {});
});
it("does not show response count when count <= 3", () => {
test("does not show response count when count <= 3", () => {
const { container } = render(<WelcomeCard {...defaultProps} responseCount={3} />);
const responseText = container.querySelector(".fb-text-xs");
expect(responseText).not.toHaveTextContent(/3 people responded/);
});
it("shows company logo when fileUrl is provided", () => {
test("shows company logo when fileUrl is provided", () => {
const propsWithLogo = {
...defaultProps,
fileUrl: "https://example.com/logo.png",
@@ -96,7 +96,7 @@ describe("WelcomeCard", () => {
expect(logo).toHaveAttribute("src", "https://example.com/logo.png");
});
it("calculates time to complete correctly for different survey lengths", () => {
test("calculates time to complete correctly for different survey lengths", () => {
// Test short survey (2 questions)
const { container } = render(<WelcomeCard {...defaultProps} />);
const timeDisplay = container.querySelector(".fb-text-subheading");
@@ -121,7 +121,7 @@ describe("WelcomeCard", () => {
expect(longTimeDisplay).toHaveTextContent(/Takes 6\+ minutes/);
});
it("shows both time and response count when both flags are true", () => {
test("shows both time and response count when both flags are true", () => {
const { container } = render(
<WelcomeCard
{...defaultProps}
@@ -141,7 +141,7 @@ describe("WelcomeCard", () => {
expect(textDisplay).toHaveTextContent(/Takes.*10 people responded/);
});
it("handles missing optional props gracefully", () => {
test("handles missing optional props gracefully", () => {
const minimalProps = {
...defaultProps,
headline: undefined,
@@ -157,7 +157,7 @@ describe("WelcomeCard", () => {
expect(container.querySelector("button")).toBeInTheDocument();
});
it("handles Enter key press correctly based on survey type and isCurrent", () => {
test("handles Enter key press correctly based on survey type and isCurrent", () => {
const mockOnSubmit = vi.fn();
// Test when survey is not link type
const { rerender, unmount } = render(
@@ -177,7 +177,7 @@ describe("WelcomeCard", () => {
unmount();
});
it("prevents default on Enter key in button", () => {
test("prevents default on Enter key in button", () => {
const { container } = render(<WelcomeCard {...defaultProps} />);
const button = container.querySelector("button");
const event = new KeyboardEvent("keydown", { key: "Enter", bubbles: true });
@@ -188,7 +188,7 @@ describe("WelcomeCard", () => {
expect(event.preventDefault).toHaveBeenCalled();
});
it("properly cleans up event listeners on unmount", () => {
test("properly cleans up event listeners on unmount", () => {
const { unmount } = render(<WelcomeCard {...defaultProps} />);
const removeEventListenerSpy = vi.spyOn(document, "removeEventListener");
@@ -198,7 +198,7 @@ describe("WelcomeCard", () => {
removeEventListenerSpy.mockRestore();
});
it("handles response counts at boundary conditions", () => {
test("handles response counts at boundary conditions", () => {
// Test with exactly 3 responses (boundary)
const { container: container3 } = render(<WelcomeCard {...defaultProps} responseCount={3} />);
expect(container3.querySelector(".fb-text-xs")).not.toHaveTextContent(/3 people responded/);
@@ -208,7 +208,7 @@ describe("WelcomeCard", () => {
expect(container4.querySelector(".fb-text-xs")).toHaveTextContent(/4 people responded/);
});
it("handles time calculation edge cases", () => {
test("handles time calculation edge cases", () => {
// Test with no questions
const emptyQuestionsSurvey = {
...mockSurvey,
@@ -231,7 +231,7 @@ describe("WelcomeCard", () => {
expect(boundaryContainer.querySelector(".fb-text-subheading")).toHaveTextContent(/Takes 6 minutes/);
});
it("correctly processes localized content", () => {
test("correctly processes localized content", () => {
const localizedProps = {
...defaultProps,
headline: { default: "Welcome", es: "Bienvenido" },
@@ -247,7 +247,7 @@ describe("WelcomeCard", () => {
expect(container.querySelector("button")).toHaveTextContent("Comenzar");
});
it("handles variable replacement in content", () => {
test("handles variable replacement in content", () => {
const propsWithVariables = {
...defaultProps,
headline: { default: "Welcome #recall:name/fallback:Guest#" },


@@ -399,6 +399,7 @@ describe("StackedCardsContainer", () => {
const resizeCallback = (global.ResizeObserver as any).mock.calls[0][0];
act(() => {
resizeCallback([{ contentRect: { height: 500, width: 300 } }]);
vi.runAllTimers(); // Advance timers after resize callback to handle potential internal delays
});
// Check that cardHeight and cardWidth are passed to StackedCard instances (e.g., next card)


@@ -98,24 +98,36 @@ export function StackedCardsContainer({
// UseEffect to handle the resize of current question card and set cardHeight accordingly
useEffect(() => {
let resizeTimeout: NodeJS.Timeout;
const handleDebouncedResize = (entries: ResizeObserverEntry[]) => {
clearTimeout(resizeTimeout);
resizeTimeout = setTimeout(() => {
for (const entry of entries) {
setCardHeight(`${entry.contentRect.height.toString()}px`);
setCardWidth(entry.contentRect.width);
}
}, 50); // 50ms debounce
};
const timer = setTimeout(() => {
const currentElement = cardRefs.current[questionIdxTemp];
if (currentElement) {
if (resizeObserver.current) {
resizeObserver.current.disconnect();
}
resizeObserver.current = new ResizeObserver((entries) => {
for (const entry of entries) {
setCardHeight(`${entry.contentRect.height.toString()}px`);
setCardWidth(entry.contentRect.width);
}
handleDebouncedResize(entries);
});
resizeObserver.current.observe(currentElement);
}
}, 0);
return () => {
resizeObserver.current?.disconnect();
clearTimeout(timer);
clearTimeout(resizeTimeout);
};
}, [questionIdxTemp, cardArrangement, cardRefs]);
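The change above wraps the ResizeObserver callback in a 50ms trailing debounce, so a burst of resize entries produces one `setCardHeight`/`setCardWidth` update instead of many. A generic sketch of that debounce (the helper name is an assumption, not from the Formbricks codebase):

```typescript
// Trailing debounce: of a burst of calls within waitMs, only the last runs.
function debounceTrailing<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // drop the previous pending call
    timer = setTimeout(() => fn(...args), waitMs); // only the latest arguments survive
  };
}
```

This is also why the test change above adds `vi.runAllTimers()` after firing the resize callback: with fake timers, the debounced update only lands once the pending timeout is flushed.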

Some files were not shown because too many files have changed in this diff.