feat(backend): adding s3 supports, and also expose uploading project images #154

Merged: 11 commits, Mar 5, 2025

Changes from 9 commits
4 changes: 3 additions & 1 deletion .gitignore
@@ -16,4 +16,6 @@ models/

 */**/database.sqlite
 ./backend/src/database.sqlite
-.codefox
+.codefox
+
+.env
5 changes: 0 additions & 5 deletions backend/.env

This file was deleted.

5 changes: 0 additions & 5 deletions backend/.env.development

This file was deleted.

21 changes: 21 additions & 0 deletions backend/.env.example
Collaborator comment:

add NODE_ENV=DEV # 'PROD', 'TEST', 'DEV' here

@@ -0,0 +1,21 @@
# Server Configuration
PORT=8080

# JWT Configuration
JWT_SECRET="your_jwt_secret_here"
JWT_REFRESH="your_jwt_refresh_secret_here"
SALT_ROUNDS=10

# OpenAI Configuration
OPENAI_BASE_URI="http://localhost:3001"

# S3/Cloudflare R2 Configuration (Optional)
# If not provided, local file storage will be used
S3_ACCESS_KEY_ID="your_s3_access_key_id" # Must be 32 characters for Cloudflare R2
S3_SECRET_ACCESS_KEY="your_s3_secret_access_key"
S3_REGION="auto" # Use 'auto' for Cloudflare R2
S3_BUCKET_NAME="your_bucket_name"
S3_ENDPOINT="https://<account_id>.r2.cloudflarestorage.com" # Cloudflare R2 endpoint
S3_ACCOUNT_ID="your_cloudflare_account_id" # Your Cloudflare account ID
S3_PUBLIC_URL="https://pub-xxx.r2.dev" # Your R2 public bucket URL

3 changes: 2 additions & 1 deletion backend/.gitignore
@@ -55,4 +55,5 @@ log-*/


 # Backend
-/backend/package-lock.json
+/backend/package-lock.json
+.env
3 changes: 3 additions & 0 deletions backend/package.json
Expand Up @@ -26,6 +26,7 @@
},
"dependencies": {
"@apollo/server": "^4.11.0",
"@aws-sdk/client-s3": "^3.758.0",
"@huggingface/hub": "latest",
"@huggingface/transformers": "latest",
"@nestjs/apollo": "^12.2.0",
Expand All @@ -43,6 +44,7 @@
"@types/toposort": "^2.0.7",
"axios": "^1.7.7",
"bcrypt": "^5.1.1",
"class-transformer": "^0.5.1",
"class-validator": "^0.14.1",
"dotenv": "^16.4.7",
"eslint-plugin-unused-imports": "^4.1.4",
Expand All @@ -51,6 +53,7 @@
"gpt-3-encoder": "^1.1.4",
"graphql": "^16.9.0",
"graphql-subscriptions": "^2.0.0",
"graphql-upload-minimal": "^1.6.1",
Contributor comment (💡 Verification agent, 🧩 analysis chain):

GraphQL upload package added correctly but consider security implications.

The graphql-upload-minimal package is correctly added for handling file uploads in GraphQL, but ensure you implement proper file size limits and validation.

Make sure to:

  1. Implement file size limits
  2. Validate file types
  3. Scan uploads for malware if possible
  4. Set appropriate CORS policies for upload endpoints

Here's a shell script to check if file size limits are implemented:


🏁 Script executed:

#!/bin/bash
# Check if file size limits are implemented for uploads

echo "Checking for file size limit configurations..."

# Look for file size limit configurations in main.ts or similar files
rg -i "file.*size|maxFileSize|sizeLimit" --glob "*.ts" --no-ignore

# Check if graphqlUploadExpress is configured with maxFileSize
rg -i "graphqlUploadExpress.*maxFileSize" --glob "*.ts" --no-ignore

Length of output: 415


Action: Enhance Security for GraphQL File Uploads

The dependency in backend/package.json is added correctly and the file size limit is actively configured in backend/src/main.ts via:

graphqlUploadExpress({ maxFileSize: 50000000, maxFiles: 10 })

However, please ensure the following additional security measures are implemented:

  • File Type Validation: Verify that uploads are checked against allowed file types to prevent unwanted content.
  • Malware Scanning: Consider integrating malware scanning for uploaded files.
  • CORS Policies: Confirm that the upload endpoints enforce appropriate CORS policies.
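A rough sketch of the first two bullets (size and type checks) as a standalone helper; the `MAX_BYTES` value and the `ALLOWED_MIMETYPES` list here are illustrative choices, not values taken from this PR (whose middleware caps uploads at 50 MB):

```typescript
// Illustrative limits only; the PR's actual cap is set via
// graphqlUploadExpress({ maxFileSize: 50000000, maxFiles: 10 }).
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB
const ALLOWED_MIMETYPES = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];

// Throws when the upload violates either constraint; returns silently otherwise.
function assertValidUpload(mimetype: string, sizeBytes: number): void {
  if (!ALLOWED_MIMETYPES.includes(mimetype)) {
    throw new Error(`Unsupported file type: ${mimetype}`);
  }
  if (sizeBytes > MAX_BYTES) {
    throw new Error(`File exceeds the ${MAX_BYTES}-byte limit`);
  }
}
```

A helper like this can run in the resolver before any bytes reach the storage layer, so rejected uploads never get buffered in full.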

"graphql-ws": "^5.16.0",
"lodash": "^4.17.21",
"markdown-to-txt": "^2.0.1",
Expand Down
34 changes: 34 additions & 0 deletions backend/src/config/config.module.ts
@@ -0,0 +1,34 @@
import { Module } from '@nestjs/common';
import { ConfigModule as NestConfigModule } from '@nestjs/config';
import { AppConfigService } from './config.service';
import { EnvironmentVariables } from './env.validation';
import { plainToInstance } from 'class-transformer';
import { validateSync } from 'class-validator';

const validate = (config: Record<string, unknown>) => {
  const validatedConfig = plainToInstance(EnvironmentVariables, config, {
    enableImplicitConversion: true,
  });

  const errors = validateSync(validatedConfig, {
    skipMissingProperties: false,
  });

  if (errors.length > 0) {
    throw new Error(errors.toString());
  }

  return validatedConfig;
};

@Module({
  imports: [
    NestConfigModule.forRoot({
      validate,
      isGlobal: true,
    }),
  ],
  providers: [AppConfigService],
  exports: [AppConfigService],
})
export class AppConfigModule {}
71 changes: 71 additions & 0 deletions backend/src/config/config.service.ts
@@ -0,0 +1,71 @@
import { Injectable } from '@nestjs/common';
import { ConfigService as NestConfigService } from '@nestjs/config';
import { EnvironmentVariables } from './env.validation';

@Injectable()
export class AppConfigService {
  constructor(private configService: NestConfigService<EnvironmentVariables>) {}

  /**
   * Get server port from environment
   */
  get port(): number {
    return this.configService.get('PORT');
  }

  /**
   * Get JWT secret key for token generation
   */
  get jwtSecret(): string {
    return this.configService.get('JWT_SECRET');
  }

  /**
   * Get JWT refresh token secret
   */
  get jwtRefresh(): string {
    return this.configService.get('JWT_REFRESH');
  }

  /**
   * Get password hashing salt rounds
   */
  get saltRounds(): number {
    return this.configService.get('SALT_ROUNDS');
  }

  /**
   * Get OpenAI API base URI
   */
  get openaiBaseUri(): string {
    return this.configService.get('OPENAI_BASE_URI');
  }

Contributor comment (🛠️ Refactor suggestion) on lines +9 to +43:

Config getters can throw if environment variables are missing.

The config getters directly access environment variables without checking if they exist. This could lead to runtime errors if required environment variables are not set.

Consider adding default values or explicit error handling:

 get jwtSecret(): string {
-  return this.configService.get('JWT_SECRET');
+  const secret = this.configService.get('JWT_SECRET');
+  if (!secret) {
+    throw new Error('JWT_SECRET environment variable is not set');
+  }
+  return secret;
 }

Or use the built-in ConfigService default value capability:

 get jwtSecret(): string {
-  return this.configService.get('JWT_SECRET');
+  return this.configService.get('JWT_SECRET', 'default-secret-for-development-only');
 }

  /**
   * Get S3/Cloudflare R2 configuration object
   */
  get s3Config() {
    return {
      accessKeyId: this.configService.get('S3_ACCESS_KEY_ID'),
      secretAccessKey: this.configService.get('S3_SECRET_ACCESS_KEY'),
      region: this.configService.get('S3_REGION'),
      bucketName: this.configService.get('S3_BUCKET_NAME'),
      endpoint: this.configService.get('S3_ENDPOINT'),
      accountId: this.configService.get('S3_ACCOUNT_ID'),
      publicUrl: this.configService.get('S3_PUBLIC_URL'),
    };
  }

  /**
   * Check if S3 storage is properly configured
   */
  get hasS3Configured(): boolean {
    const config = this.s3Config;
    return !!(
      config.accessKeyId &&
      config.secretAccessKey &&
      config.region &&
      (config.endpoint || config.accountId)
    );
  }
Contributor comment (🛠️ Refactor suggestion) on lines +62 to +70:

S3 configuration validation is incomplete.

The hasS3Configured method checks for the presence of mandatory fields but doesn't verify if the bucket name is set, which is essential for S3 operations.

Add bucket name validation to the S3 configuration check:

 get hasS3Configured(): boolean {
   const config = this.s3Config;
   return !!(
     config.accessKeyId &&
     config.secretAccessKey &&
     config.region &&
+    config.bucketName &&
     (config.endpoint || config.accountId)
   );
 }

}
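For illustration, the completed check can be expressed as a standalone predicate. The `S3Config` interface below is a hypothetical mirror of the service's `s3Config` getter shape, with the `bucketName` requirement from the suggestion folded in:

```typescript
// Hypothetical mirror of AppConfigService.s3Config's return shape.
interface S3Config {
  accessKeyId?: string;
  secretAccessKey?: string;
  region?: string;
  bucketName?: string;
  endpoint?: string;
  accountId?: string;
  publicUrl?: string;
}

// True only when every field an S3 client needs is present, including the
// bucket name the review suggests adding to the check.
function hasS3Configured(config: S3Config): boolean {
  return Boolean(
    config.accessKeyId &&
      config.secretAccessKey &&
      config.region &&
      config.bucketName &&
      (config.endpoint || config.accountId),
  );
}
```

Keeping the predicate pure like this also makes the configuration check trivially unit-testable without booting the Nest application.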
51 changes: 51 additions & 0 deletions backend/src/config/env.validation.ts
@@ -0,0 +1,51 @@
import { IsOptional, IsString, IsNumber, IsIn } from 'class-validator';

export class EnvironmentVariables {
  @IsNumber()
  PORT: number;

  @IsString()
  @IsIn(['DEV', 'PROD', 'TEST'])
  NODE_ENV: string;

  @IsString()
  JWT_SECRET: string;

  @IsString()
  JWT_REFRESH: string;

  @IsNumber()
  SALT_ROUNDS: number;

  @IsString()
  OPENAI_BASE_URI: string;

  // S3/Cloudflare R2 Configuration - all optional
  @IsOptional()
  @IsString()
  S3_ACCESS_KEY_ID?: string;

  @IsOptional()
  @IsString()
  S3_SECRET_ACCESS_KEY?: string;

  @IsOptional()
  @IsString()
  S3_REGION?: string;

  @IsOptional()
  @IsString()
  S3_BUCKET_NAME?: string;

  @IsOptional()
  @IsString()
  S3_ENDPOINT?: string;

  @IsOptional()
  @IsString()
  S3_ACCOUNT_ID?: string;

  @IsOptional()
  @IsString()
  S3_PUBLIC_URL?: string;
Contributor comment (💡 Verification agent, 🧩 analysis chain) on lines +23 to +50:

S3 configuration is optional but should have local fallback implementation.

Making S3 configuration optional is good for flexibility, but ensure there's a fallback for storing files locally when S3 is not configured.

Ensure the upload service has a fallback mechanism when S3 credentials aren't provided. You should implement a strategy pattern that can switch between S3 storage and local file storage based on configuration.

Let's check if this is already implemented:


🏁 Script executed:

#!/bin/bash
# Check if there's a fallback mechanism for local file storage

echo "Checking for local file storage fallback implementation..."

# Look for conditional storage selection based on config
rg -A 10 -B 10 "S3_" --glob "*upload*.ts" --no-ignore

# Check for filesystem operations indicating local storage
rg -i "fs\.|writeFile|createReadStream|createWriteStream" --glob "*upload*.ts" --no-ignore

Length of output: 863


Action Required: Implement Conditional Storage Strategy

The environment validation in backend/src/config/env.validation.ts correctly marks all S3 configuration values as optional, and local file storage functionality is present in backend/src/upload/upload.service.ts (e.g., use of fs-extra methods for file operations). However, there isn’t any conditional logic (or strategy pattern) to select S3 storage when credentials are provided versus falling back to local storage when they’re not.

  • Ensure the upload service dynamically switches between S3 and local file storage based on your configuration.
  • Consider refactoring the service to use a strategy pattern so that when valid S3 credentials are available, file uploads are directed to S3; otherwise, the service should reliably use the local fallback implementation.
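The strategy selection described above could be sketched roughly as follows. The `StorageProvider` interface and both provider classes are hypothetical names (not from this PR), and the local provider writes to an in-memory map purely for illustration:

```typescript
// Hypothetical storage abstraction; names are illustrative, not from this PR.
interface StorageProvider {
  // Persists the bytes and returns a publicly reachable URL or path.
  save(key: string, data: Uint8Array): Promise<string>;
}

// Local fallback. An in-memory map stands in for the fs-extra writes that
// the real upload.service.ts performs.
class LocalStorageProvider implements StorageProvider {
  private files = new Map<string, Uint8Array>();

  async save(key: string, data: Uint8Array): Promise<string> {
    this.files.set(key, data);
    return `/media/${key}`;
  }
}

// S3/R2 branch. A real implementation would wrap PutObjectCommand from
// @aws-sdk/client-s3; the upload call is elided in this sketch.
class S3StorageProvider implements StorageProvider {
  constructor(private publicUrl: string) {}

  async save(key: string, _data: Uint8Array): Promise<string> {
    // S3 PutObject call elided.
    return `${this.publicUrl}/${key}`;
  }
}

// The conditional the review asks for: use S3 when the config service
// reports complete credentials, otherwise fall back to local storage.
function createStorageProvider(hasS3Configured: boolean, publicUrl = ''): StorageProvider {
  return hasS3Configured
    ? new S3StorageProvider(publicUrl)
    : new LocalStorageProvider();
}
```

In a NestJS service this selection would typically live in a provider factory so the rest of the upload code depends only on the interface.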

}
3 changes: 3 additions & 0 deletions backend/src/guard/project.guard.ts
Expand Up @@ -10,6 +10,9 @@ import { JwtService } from '@nestjs/jwt';

import { ProjectService } from '../project/project.service';

/**
 * This guard checks if the user is authorized to access a project.
 */
@Injectable()
export class ProjectGuard implements CanActivate {
  private readonly logger = new Logger('ProjectGuard');
Expand Down
7 changes: 7 additions & 0 deletions backend/src/main.ts
Expand Up @@ -3,6 +3,7 @@ import { AppModule } from './app.module';
import 'reflect-metadata';
import * as dotenv from 'dotenv';
import { Logger } from '@nestjs/common';
import { graphqlUploadExpress } from 'graphql-upload-minimal';

async function bootstrap() {
  const logger = new Logger('Bootstrap');
@@ -18,10 +19,16 @@
      'Authorization',
      'Access-Control-Allow-Origin',
      'Access-Control-Allow-Credentials',
      'Apollo-Require-Preflight',
      'x-refresh-token',
    ],
  });

  app.use(
    '/graphql',
    graphqlUploadExpress({ maxFileSize: 50000000, maxFiles: 10 }),
  );

  console.log('process.env.PORT:', process.env.PORT);
  const server = await app.listen(process.env.PORT ?? 8080);
  logger.log(`Application is running on port ${process.env.PORT ?? 8080}`);
Expand Down
12 changes: 12 additions & 0 deletions backend/src/project/dto/project.input.ts
Expand Up @@ -2,6 +2,7 @@
import { InputType, Field, ID, ObjectType } from '@nestjs/graphql';
import { IsNotEmpty, IsString, IsUUID, IsOptional } from 'class-validator';
import { Project } from '../project.model';
import { FileUpload, GraphQLUpload } from 'graphql-upload-minimal';

/**
* @deprecated We don't need project upsert
Expand Down Expand Up @@ -118,3 +119,14 @@ export class FetchPublicProjectsInputs {
@Field()
size: number;
}

@InputType()
export class UpdateProjectPhotoInput {
  @IsString()
  @Field(() => ID)
  projectId: string;

  @IsOptional()
  @Field(() => GraphQLUpload)
  file: FileUpload;
}
6 changes: 5 additions & 1 deletion backend/src/project/project.module.ts
Expand Up @@ -9,11 +9,15 @@ import { ProjectGuard } from '../guard/project.guard';
import { ChatService } from 'src/chat/chat.service';
import { User } from 'src/user/user.model';
import { Chat } from 'src/chat/chat.model';
import { AppConfigModule } from 'src/config/config.module';
import { UploadModule } from 'src/upload/upload.module';

@Module({
  imports: [
    TypeOrmModule.forFeature([Project, Chat, User, ProjectPackages]),
-   AuthModule, // Import AuthModule to provide JwtService to the ProjectGuard
+   AuthModule,
+   AppConfigModule,
+   UploadModule,
  ],
  providers: [ChatService, ProjectService, ProjectsResolver, ProjectGuard],
  exports: [ProjectService, ProjectGuard],
Expand Down
28 changes: 20 additions & 8 deletions backend/src/project/project.resolver.ts
Expand Up @@ -14,6 +14,7 @@ import {
CreateProjectInput,
FetchPublicProjectsInputs,
IsValidProjectInput,
UpdateProjectPhotoInput,
} from './dto/project.input';
import { Logger, UseGuards } from '@nestjs/common';
import { ProjectGuard } from '../guard/project.guard';
Expand Down Expand Up @@ -86,22 +87,33 @@ export class ProjectsResolver {
    return this.projectService.subscribeToProject(userId, projectId);
  }

+ @UseGuards(ProjectGuard)
  @Mutation(() => Project)
- async updateProjectPhotoUrl(
+ async updateProjectPhoto(
    @GetUserIdFromToken() userId: string,
-   @Args('projectId', { type: () => ID }) projectId: string,
-   @Args('photoUrl') photoUrl: string,
+   @Args('input') input: UpdateProjectPhotoInput,
  ): Promise<Project> {
-   this.logger.log(
-     `User ${userId} updating photo URL for project ${projectId}`,
-   );
+   const { projectId, file } = input;
+   this.logger.log(`User ${userId} uploading photo for project ${projectId}`);
+
+   // Extract the file data
+   const { createReadStream, mimetype } = await file;
+
+   // Buffer the file content
+   const chunks = [];
+   for await (const chunk of createReadStream()) {
+     chunks.push(chunk);
+   }
+   const buffer = Buffer.concat(chunks);

Contributor comment (🛠️ Refactor suggestion) on lines +100 to +109:

Consider adding file size validation and error handling.

While the file buffering implementation is correct, there are some potential improvements for robustness:

  1. No file size validation is present, which could lead to memory issues with large files
  2. There's no error handling if the file stream fails

Consider adding the following improvements:

// Extract the file data
const { createReadStream, mimetype } = await file;

// Buffer the file content
const chunks = [];
+ let totalSize = 0;
+ const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10MB limit
+ try {
    for await (const chunk of createReadStream()) {
+     totalSize += chunk.length;
+     if (totalSize > MAX_FILE_SIZE) {
+       throw new Error(`File size exceeds the maximum allowed size of ${MAX_FILE_SIZE / (1024 * 1024)}MB`);
+     }
      chunks.push(chunk);
    }
+ } catch (error) {
+   this.logger.error(`Error processing file upload: ${error.message}`);
+   throw new Error(`Failed to process file upload: ${error.message}`);
+ }
  const buffer = Buffer.concat(chunks);

    // Call the service with the extracted buffer and mimetype
    return this.projectService.updateProjectPhotoUrl(
      userId,
      projectId,
-     photoUrl,
+     buffer,
+     mimetype,
    );
Contributor comment (💡 Verification agent, ❓ verification inconclusive) on lines +110 to 116:

Consider validating file type for security.

The service now accepts a buffer and mimetype, but there's no validation to ensure the file is an acceptable image type.

Consider adding file type validation before passing to the service:

// Call the service with the extracted buffer and mimetype
+ const ALLOWED_MIMETYPES = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];
+ if (!ALLOWED_MIMETYPES.includes(mimetype)) {
+   throw new Error(`Unsupported file type: ${mimetype}. Allowed types: ${ALLOWED_MIMETYPES.join(', ')}`);
+ }

return this.projectService.updateProjectPhotoUrl(
  userId,
  projectId,
  buffer,
  mimetype,
);

🏁 Script executed:

#!/bin/bash
# Check if there's mimetype validation in the service method
rg -A 10 "updateProjectPhotoUrl.*buffer.*mimetype" --glob "*.ts" backend/src/

Length of output: 79


Security Update Required: Validate Image File Type

I verified that there’s no evidence of file type validation in the updateProjectPhotoUrl flow. The current implementation passes the buffer and mimetype directly to the service without checking if the mimetype is acceptable. Please consider adding a check against a whitelist of allowed mimetypes. For example:

// Call the service with the extracted buffer and mimetype
+ const ALLOWED_MIMETYPES = ['image/jpeg', 'image/png', 'image/gif', 'image/webp'];
+ if (!ALLOWED_MIMETYPES.includes(mimetype)) {
+   throw new Error(`Unsupported file type: ${mimetype}. Allowed types: ${ALLOWED_MIMETYPES.join(', ')}`);
+ }

return this.projectService.updateProjectPhotoUrl(
  userId,
  projectId,
  buffer,
  mimetype,
);

This update will mitigate potential security risks by ensuring that only expected image file types are processed.

}

@Mutation(() => Project)
async updateProjectPublicStatus(
@GetUserIdFromToken() userId: string,
Expand Down