# Azure Data Lake Storage
The PluginAzureDataLake adapter uploads files to Azure Data Lake Storage Gen2 using SAS (Shared Access Signature) authentication.
## Installation

Install the Azure SDK:

```bash
pnpm add @azure/storage-file-datalake
```
## Usage

```ts
import { PluginAzureDataLake } from "nuxt-upload-kit/providers/azure-datalake"

const uploader = useUploadKit({
  storage: PluginAzureDataLake({
    sasURL:
      "https://mystorageaccount.blob.core.windows.net/mycontainer?sv=2022-11-02&ss=b&srt=co&sp=rwdlaciytfx&se=2024-12-31T23:59:59Z&st=2024-01-01T00:00:00Z&spr=https&sig=...",
    path: "uploads/images",
  }),
})
```
The adapter is imported from `nuxt-upload-kit/providers/azure-datalake` to avoid bundling the Azure SDK for users who don't need it.

## Options

| Option | Type | Default | Description |
|---|---|---|---|
| `sasURL` | `string` | - | Static SAS URL for the container |
| `getSASUrl` | `(storageKey: string, operation: "upload" \| "delete" \| "read") => Promise<string>` | - | Function to dynamically fetch a SAS URL |
| `path` | `string` | - | Subdirectory path within the container |
| `metadata` | `Record<string, string>` | - | Custom metadata for uploaded files |
| `pathHttpHeaders` | `object` | - | Custom HTTP headers |
| `autoCreateDirectory` | `boolean` | `true` | Auto-create the directory if it doesn't exist |
## Authentication

### Static SAS URL

For simple use cases, provide a static SAS URL:

```ts
PluginAzureDataLake({
  sasURL: "https://mystorageaccount.blob.core.windows.net/container?sv=...",
})
```
### Dynamic SAS URL

For production, fetch SAS tokens dynamically from your server:

```ts
PluginAzureDataLake({
  getSASUrl: async (storageKey, operation) => {
    const response = await fetch("/api/storage/sas-token", {
      method: "POST",
      body: JSON.stringify({ storageKey, operation }),
    })
    const { url } = await response.json()
    return url
  },
})
```
The `operation` parameter tells your server what the SAS token will be used for:

| Operation | When Called | Recommended Permissions |
|---|---|---|
| `"upload"` | Uploading a new file | `cw` (create, write) |
| `"read"` | Loading existing files via `initialFiles` or `getRemoteFile` | `r` (read) |
| `"delete"` | Removing a file from storage | `d` (delete) |
This is especially useful when your server constructs the storage path (e.g., prepending a user/org ID). During `"upload"`, the `storageKey` is a relative filename and your server should build the full path. During `"read"` and `"delete"`, the `storageKey` is the full path previously stored in your database, and your server should use it as-is.
The adapter auto-detects the SAS type from the `sr` (signed resource) parameter:

| SAS Type | Behavior |
|---|---|
| Directory (`sr=d`) | Cached and reused for batch uploads |
| File (`sr=b`) | Called per file for granular access control |
The adapter also:
- Checks token expiration before operations (5-minute buffer)
- Refreshes expired tokens automatically
- Deduplicates concurrent refresh requests (directory mode only)
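The expiration check can be sketched as parsing the `se` (signed expiry) parameter out of the SAS URL and comparing it against the current time plus the buffer. A minimal sketch of the idea, not the adapter's actual internals:

```typescript
// Decide whether a SAS URL is still usable, leaving a 5-minute safety buffer.
const EXPIRY_BUFFER_MS = 5 * 60 * 1000

function isSasExpired(sasUrl: string, now: Date = new Date()): boolean {
  // "se" holds the signed expiry as an ISO 8601 timestamp.
  const expiry = new URL(sasUrl).searchParams.get("se")
  if (!expiry) return true // no expiry parameter: treat the token as unusable
  return new Date(expiry).getTime() - EXPIRY_BUFFER_MS <= now.getTime()
}
```

When a check like this reports the token as expired, a directory-scoped adapter would call `getSASUrl` again and cache the fresh URL for subsequent uploads.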
### Creating a SAS Token

Generate SAS tokens on your backend. Use the `operation` parameter to scope permissions and resolve paths correctly:

```ts
import {
  generateBlobSASQueryParameters,
  BlobSASPermissions,
  StorageSharedKeyCredential,
} from "@azure/storage-blob"

const permissionsByOperation = {
  upload: "cw", // create + write
  read: "r", // read
  delete: "d", // delete
} as const

export default defineEventHandler(async (event) => {
  // The client sends a POST with a JSON body, so read it with readBody.
  const { storageKey, operation } = await readBody<{
    storageKey: string
    operation: "upload" | "delete" | "read"
  }>(event)

  const userId = /* get from session */ "user123"

  // For uploads, the storageKey is a relative filename: build the full path.
  // For read/delete, the storageKey is the full path from the database: use as-is.
  const blobPath = operation === "upload" ? `${userId}/${storageKey}` : storageKey

  const accountName = process.env.AZURE_STORAGE_ACCOUNT!
  const accountKey = process.env.AZURE_STORAGE_KEY!
  const containerName = "uploads"

  const credential = new StorageSharedKeyCredential(accountName, accountKey)
  const startsOn = new Date()
  const expiresOn = new Date(startsOn.getTime() + 60 * 60 * 1000) // 1 hour

  const sasToken = generateBlobSASQueryParameters(
    {
      containerName,
      blobName: blobPath,
      permissions: BlobSASPermissions.parse(permissionsByOperation[operation]),
      startsOn,
      expiresOn,
    },
    credential,
  ).toString()

  return {
    url: `https://${accountName}.blob.core.windows.net/${containerName}/${blobPath}?${sasToken}`,
  }
})
```
## Directory Structure

Organize uploads with the `path` option:

```ts
// All files go to: container/uploads/2024/images/
PluginAzureDataLake({
  sasURL: "...",
  path: "uploads/2024/images",
})
```
### Dynamic Paths

Create dynamic paths based on the user or date:

```ts
const userId = useAuth().user.id
const date = new Date().toISOString().split("T")[0]

const uploader = useUploadKit({
  storage: PluginAzureDataLake({
    getSASUrl: () => fetchSasUrl(),
    path: `users/${userId}/${date}`,
  }),
})
```
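The path-building step can also be extracted into a small helper. A sketch, assuming a per-user, per-day layout (`datedUserPath` is not part of the library):

```typescript
// Build a prefix such as "users/u1/2024-06-01".
// Uses the UTC date from toISOString(), matching the snippet above.
function datedUserPath(userId: string, now: Date = new Date()): string {
  const date = now.toISOString().split("T")[0] // "YYYY-MM-DD"
  return `users/${userId}/${date}`
}
```

Keeping the logic in one place means the same prefix can be reused when you later query the files back out of storage.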
## Custom Metadata

Attach metadata to uploaded files:

```ts
PluginAzureDataLake({
  sasURL: "...",
  metadata: {
    uploadedBy: "user123",
    application: "my-app",
    version: "1.0.0",
  },
})
```
The adapter automatically adds:

- `mimeType` - File MIME type
- `size` - File size in bytes
- `originalName` - Original filename
## HTTP Headers

Set custom headers (e.g., for caching):

```ts
PluginAzureDataLake({
  sasURL: "...",
  pathHttpHeaders: {
    cacheControl: "public, max-age=31536000",
    contentDisposition: "inline",
  },
})
```
The `contentType` header is set automatically from the file's MIME type.

## Upload Result
After a successful upload, `file.uploadResult` contains:

```ts
{
  url: "https://mystorageaccount.blob.core.windows.net/container/path/file.jpg",
  storageKey: "path/file.jpg"
}
```
Access it after upload:

```ts
uploader.on("upload:complete", (files) => {
  files.forEach((file) => {
    console.log("Uploaded to:", file.uploadResult.url)
    console.log("Storage key:", file.uploadResult.storageKey)
  })
})
```
## Standalone Upload

Upload raw `Blob` or `File` data directly, bypassing the `useUploadKit` pipeline:

```ts
const storage = PluginAzureDataLake({
  getSASUrl: async (storageKey, operation) => {
    const response = await fetch("/api/storage/sas-token", {
      method: "POST",
      body: JSON.stringify({ storageKey, operation }),
    })
    const { url } = await response.json()
    return url
  },
})

// Upload an edited image. Note: canvas.toBlob() is callback-based,
// so wrap it in a Promise before awaiting.
const croppedBlob: Blob = await new Promise((resolve, reject) =>
  getCroppedCanvas().toBlob(
    (blob) => (blob ? resolve(blob) : reject(new Error("toBlob failed"))),
    "image/jpeg",
  ),
)

const result = await storage.upload(croppedBlob, "edited-photo.jpg", {
  contentType: "image/jpeg",
})

console.log(result.url) // https://mystorageaccount.blob.core.windows.net/...
console.log(result.storageKey) // path/edited-photo.jpg
```
The path prefix is automatically applied, and the SAS URL handler receives the resolved key.
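The key resolution amounts to joining the configured `path` prefix with the filename. A minimal sketch of the idea, not the adapter's internal code:

```typescript
// Join the configured path prefix with the filename.
// Strips trailing slashes so "uploads/" + "a.jpg" yields "uploads/a.jpg",
// and falls back to the bare filename when no prefix is configured.
function resolveStorageKey(path: string | undefined, filename: string): string {
  const prefix = path?.replace(/\/+$/, "")
  return prefix ? `${prefix}/${filename}` : filename
}
```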
## Loading Existing Files

Load previously uploaded files:

```ts
// The adapter's getRemoteFile hook fetches metadata from Azure
await uploader.initializeExistingFiles([
  { id: "path/to/file1.jpg" },
  { id: "path/to/file2.png" },
])
```

## Deleting Files

When you call `removeFile()`, the adapter automatically deletes the file from Azure:

```ts
// This deletes from Azure and removes the file from the local list
await uploader.removeFile(file.id)
```

To only remove it locally without deleting from Azure:

```ts
await uploader.removeFile(file.id, { deleteFromStorage: "never" })
```
## Troubleshooting

### "AuthorizationPermissionMismatch"

Your SAS token doesn't have the required permissions. Ensure it includes:

- `r` - Read (for `getRemoteFile`)
- `a` - Add
- `c` - Create (for directory creation)
- `w` - Write
- `d` - Delete (for remove)
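Azure expects the permission characters to appear in a fixed order (`r` before `a` before `c`, and so on). A hypothetical helper that assembles a valid permission string from the capabilities you need, for illustration only:

```typescript
// Map capability names to Azure SAS permission characters,
// declared in the order Azure expects them to appear.
const PERMISSION_CHARS = {
  read: "r",
  add: "a",
  create: "c",
  write: "w",
  delete: "d",
} as const

type Capability = keyof typeof PERMISSION_CHARS

function permissionString(ops: Capability[]): string {
  // Filter in declaration order so ["delete", "read"] still yields "rd".
  return (Object.keys(PERMISSION_CHARS) as Capability[])
    .filter((op) => ops.includes(op))
    .map((op) => PERMISSION_CHARS[op])
    .join("")
}
```

The server example in "Creating a SAS Token" achieves the same thing with `BlobSASPermissions.parse`, which validates the string for you.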
"ContainerNotFound"
The container in your SAS URL doesn't exist. Create it in Azure Portal or via Azure CLI:
az storage container create --name mycontainer --account-name mystorageaccount
### Directory Creation Fails

If your SAS token only has blob-level permissions, disable auto-create:

```ts
PluginAzureDataLake({
  sasURL: "...",
  autoCreateDirectory: false,
})
```

