# S3 Compatible Storage
The `PluginS3` adapter uploads files to S3-compatible storage using presigned URLs. This approach keeps your credentials secure on the server while allowing direct client-to-storage uploads.
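The presigned-URL flow can be sketched in two steps (the names below are illustrative, not the adapter's internal API): the server signs a URL with its credentials, then the browser PUTs the file bytes straight to storage.

```ts
type Presign = { uploadUrl: string; publicUrl: string }

// Hypothetical sketch of the flow; the signing and upload steps are
// injected as functions so they can be swapped or stubbed.
async function presignedUpload(
  file: Blob,
  presign: (contentType: string) => Promise<Presign>,
  put: (url: string, body: Blob, contentType: string) => Promise<void>,
): Promise<string> {
  const contentType = file.type || "application/octet-stream"
  // 1. Ask your backend (which holds the credentials) for a signed URL.
  const { uploadUrl, publicUrl } = await presign(contentType)
  // 2. PUT the file directly to storage; no credentials reach the browser.
  await put(uploadUrl, file, contentType)
  return publicUrl
}
```

The adapter handles this orchestration for you; you only supply the presign call.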
## Supported Services
| Service | Endpoint Configuration |
|---|---|
| AWS S3 | Default (no endpoint needed) |
| Cloudflare R2 | `https://<account-id>.r2.cloudflarestorage.com` |
| DigitalOcean Spaces | `https://<region>.digitaloceanspaces.com` |
| MinIO | Your MinIO server URL |
| Backblaze B2 | `https://s3.<region>.backblazeb2.com` |
| Wasabi | `https://s3.<region>.wasabisys.com` |
| Supabase Storage | `https://<project>.supabase.co/storage/v1/s3` |
## Installation
No client-side dependencies are required: the adapter uses the native `fetch` API and `XMLHttpRequest` for uploads.
Your backend will need the AWS SDK to generate presigned URLs:
```bash
pnpm add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```
## Usage
```ts
import { PluginS3 } from "nuxt-upload-kit/providers/s3"

const uploader = useUploadKit({
  storage: PluginS3({
    getPresignedUploadUrl: async (fileId, contentType, { fileName, fileSize }) => {
      const response = await fetch("/api/storage/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key: fileId, contentType, fileName, fileSize }),
      })
      return response.json()
    },
  }),
})
```
## Options
| Option | Type | Required | Description |
|---|---|---|---|
| `getPresignedUploadUrl` | `(fileId, contentType, metadata) => Promise<{...}>` | Yes | Function to fetch a presigned upload URL |
| `getPresignedDownloadUrl` | `(fileId) => Promise<string>` | No | Function to fetch a presigned download URL |
| `deleteFile` | `(fileId) => Promise<void>` | No | Function to delete a file via your API |
| `retries` | `number` | No | Number of retry attempts (default: `3`) |
| `retryDelay` | `number` | No | Initial retry delay in ms (default: `1000`) |
## Backend Setup

### AWS S3
```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"
import { getSignedUrl } from "@aws-sdk/s3-request-presigner"

const s3 = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
})

const BUCKET = process.env.AWS_S3_BUCKET!

export default defineEventHandler(async (event) => {
  const { key, contentType, fileName, fileSize } = await readBody(event)
  const objectKey = `uploads/${key}`

  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: objectKey,
    ContentType: contentType,
    Metadata: {
      "original-name": fileName,
      "file-size": String(fileSize),
    },
  })

  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 3600 })

  return {
    uploadUrl,
    publicUrl: `https://${BUCKET}.s3.${process.env.AWS_REGION}.amazonaws.com/${objectKey}`,
  }
})
```
### Cloudflare R2
R2 offers zero egress fees, making it cost-effective for serving files.
```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"
import { getSignedUrl } from "@aws-sdk/s3-request-presigner"

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.CF_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
})

const BUCKET = process.env.R2_BUCKET_NAME!
const PUBLIC_URL = process.env.R2_PUBLIC_URL! // https://pub-xxx.r2.dev or custom domain

export default defineEventHandler(async (event) => {
  const { key, contentType } = await readBody(event)
  const objectKey = `uploads/${key}`

  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: objectKey,
    ContentType: contentType,
  })

  const uploadUrl = await getSignedUrl(r2, command, { expiresIn: 3600 })

  return {
    uploadUrl,
    publicUrl: `${PUBLIC_URL}/${objectKey}`,
  }
})
```
### DigitalOcean Spaces
```ts
const s3 = new S3Client({
  region: "nyc3",
  endpoint: "https://nyc3.digitaloceanspaces.com",
  credentials: {
    accessKeyId: process.env.DO_SPACES_KEY!,
    secretAccessKey: process.env.DO_SPACES_SECRET!,
  },
})
```
### MinIO
```ts
const s3 = new S3Client({
  region: "us-east-1",
  endpoint: "http://localhost:9000",
  forcePathStyle: true, // Required for MinIO
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY!,
    secretAccessKey: process.env.MINIO_SECRET_KEY!,
  },
})
```
### Backblaze B2
```ts
const s3 = new S3Client({
  region: "us-west-002",
  endpoint: "https://s3.us-west-002.backblazeb2.com",
  credentials: {
    accessKeyId: process.env.B2_KEY_ID!,
    secretAccessKey: process.env.B2_APP_KEY!,
  },
})
```
## Optional: Download URL
For private buckets, create an endpoint to generate download URLs:
```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3"
import { getSignedUrl } from "@aws-sdk/s3-request-presigner"

const s3 = new S3Client({ /* ... */ })

export default defineEventHandler(async (event) => {
  const key = getRouterParam(event, "key")

  const command = new GetObjectCommand({
    Bucket: process.env.BUCKET!,
    Key: `uploads/${key}`,
  })

  return await getSignedUrl(s3, command, { expiresIn: 3600 })
})
```
## Optional: Delete File
```ts
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3"

const s3 = new S3Client({ /* ... */ })

export default defineEventHandler(async (event) => {
  const key = getRouterParam(event, "key")

  await s3.send(new DeleteObjectCommand({
    Bucket: process.env.BUCKET!,
    Key: `uploads/${key}`,
  }))

  return { success: true }
})
```
## Complete Example
```ts
import { PluginS3 } from "nuxt-upload-kit/providers/s3"

const uploader = useUploadKit({
  storage: PluginS3({
    getPresignedUploadUrl: async (fileId, contentType, metadata) => {
      const response = await fetch("/api/storage/presign", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ key: fileId, contentType, ...metadata }),
      })
      if (!response.ok) throw new Error("Failed to get upload URL")
      return response.json()
    },
    getPresignedDownloadUrl: async (fileId) => {
      const response = await fetch(`/api/storage/download/${fileId}`)
      return response.text()
    },
    deleteFile: async (fileId) => {
      await fetch(`/api/storage/delete/${fileId}`, { method: "DELETE" })
    },
    retries: 3,
    retryDelay: 1000,
  }),
})
```
## Upload Result
After a successful upload, `file.uploadResult` contains:
```ts
{
  url: 'https://my-bucket.s3.us-east-1.amazonaws.com/uploads/abc123.jpg',
  storageKey: 'abc123.jpg',
  etag: 'd41d8cd98f00b204e9800998ecf8427e' // Optional, from the S3 response
}
```
## Standalone Upload
Upload raw `Blob` or `File` data directly, bypassing the `useUploadKit` pipeline:
```ts
const storage = PluginS3({
  getPresignedUploadUrl: async (fileId, contentType, metadata) => {
    const response = await fetch("/api/storage/presign", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ key: fileId, contentType, ...metadata }),
    })
    return response.json()
  },
})

// Upload an edited image. Note that canvas.toBlob() is callback-based,
// so it is wrapped in a Promise here.
const croppedBlob: Blob = await new Promise((resolve) =>
  getCroppedCanvas().toBlob((blob) => resolve(blob!), "image/jpeg"),
)

const result = await storage.upload(croppedBlob, "edited-photo.jpg", {
  contentType: "image/jpeg",
})

console.log(result.url)        // https://bucket.s3.amazonaws.com/...
console.log(result.storageKey) // edited-photo.jpg (or path/edited-photo.jpg with a path prefix)
```
If you configure a path prefix, it is applied automatically, and the presigned URL handler receives the resolved key.
## CORS Configuration
Ensure your bucket allows uploads from your domain:
```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "HEAD"],
    "AllowedOrigins": ["https://your-domain.com"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```
Apply via AWS CLI:
```bash
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json
```
For Cloudflare R2, configure CORS in the Cloudflare Dashboard under R2 → Your Bucket → Settings.
## Error Handling
The adapter includes built-in retry logic with exponential backoff:
```ts
PluginS3({
  getPresignedUploadUrl: /* ... */,
  retries: 5,       // Try up to 5 times
  retryDelay: 2000, // Start with a 2s delay, doubling each retry
})
```
Retry sequence: 2s → 4s → 8s → 16s → fail
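The schedule above can be sketched as a small helper (illustrative only; the adapter computes this internally). With `retries: 5`, the first attempt is followed by four retries, each waiting twice as long as the last:

```ts
// Delay before the n-th retry (0-indexed) = retryDelay * 2^n.
// `retries` counts total attempts, so there are retries - 1 delays.
function backoffDelays(retries: number, retryDelay: number): number[] {
  return Array.from({ length: retries - 1 }, (_, i) => retryDelay * 2 ** i)
}

backoffDelays(5, 2000) // → [2000, 4000, 8000, 16000]
```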
## Troubleshooting

### "403 Forbidden" on Upload
- Check that your presigned URL hasn't expired
- Verify the `Content-Type` header matches what was signed
- Ensure CORS is configured correctly
### "SignatureDoesNotMatch"
The request doesn't match what was signed. Common causes:
- `Content-Type` mismatch between presign and upload
- Extra headers being sent that weren't included in signing
### No Progress Updates
Ensure you're not uploading through a proxy that buffers the request. Progress is tracked via `XMLHttpRequest.upload.onprogress`.
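Progress percentages are derived from the progress event's `loaded` and `total` byte counts; when a buffering proxy hides the request length, `total` is unknown and no meaningful percentage can be computed. A minimal sketch of that computation (illustrative, not the adapter's internals):

```ts
// Percentage from a progress event's loaded/total byte counts.
// When the length is not computable (e.g. behind a buffering proxy),
// total is 0 and we return null instead of a bogus percentage.
function progressPercent(loaded: number, total: number): number | null {
  if (total <= 0) return null
  return Math.round((loaded / total) * 100)
}

progressPercent(512, 2048) // → 25
progressPercent(100, 0)    // → null (unknown total)
```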

