I know there are plugins for offloading WordPress uploads to bucket storage like WP Offload Media or Infinite Uploads. However, I recently wondered if anyone tried using Rclone and Nginx rules. This weekend I decided to play around with that idea. It seems too simple. Maybe I’m missing something?
I’m not an expert when it comes to configuring NGINX, so for experimenting I needed a local playground. I discovered you can modify NGINX rules within a DevKinsta site by opening up ~/DevKinsta/ in VS Code and then restarting the devkinsta_nginx container within Docker (`docker restart devkinsta_nginx`) for changes to apply. After a bit of trial and error using AI and some help from Kinsta support, here is what I came up with.
```nginx
# Serve local uploads from disk.
location ^~ /wp-content/uploads/ {
    try_files $uri @b2stream;
}

# Stream the response from Backblaze if the file is missing.
location @b2stream {
    internal;
    rewrite ^/wp-content/uploads/(.*)$ /$1 break;
    proxy_pass https://f001.backblazeb2.com/file/Bucket/folder/$1;
    proxy_redirect off;
    proxy_set_header Host f001.backblazeb2.com;
    proxy_buffering off; # Disable proxy buffering.
}
```
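To make the rewrite concrete, here is the mapping those two blocks perform, sketched in plain shell for illustration (the `Bucket/folder` path and the example request URI are placeholders, matching the config above):

```shell
# A request URI as WordPress would generate it (hypothetical example).
request_uri="/wp-content/uploads/2024/05/photo.jpg"

# Mirror the nginx rewrite: strip the /wp-content/uploads/ prefix.
relative_path="${request_uri#/wp-content/uploads/}"

# Mirror proxy_pass: prepend the Backblaze download URL for the bucket path.
b2_url="https://f001.backblazeb2.com/file/Bucket/folder/${relative_path}"

echo "$b2_url"
# → https://f001.backblazeb2.com/file/Bucket/folder/2024/05/photo.jpg
```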
These NGINX rules will attempt to load the requested wp-content/uploads files from local storage. If a file is not found, NGINX will fetch it from the B2 bucket and return it directly, which means the original requested URLs are preserved. To implement this on a live site, here are the steps to take:
- Create a public B2 bucket and path to house your WordPress uploads.
- Add the above NGINX rules to Kinsta, replacing `Bucket/folder` with your newly created B2 path. Also, update `f001` according to your B2 bucket settings.
- Use `rclone` to move uploads from your WordPress uploads directory to B2.
- Enjoy!
An example of an Rclone setup might include creating a custom `rclone.conf` file with SFTP info for WordPress and credentials for B2.
```ini
[production]
type = sftp
key_file = /home/.ssh/key_file
host = xxx.xxx.xxx.xxx
user = username
port = 22
shell_type = unix
md5sum_command = md5sum
sha1sum_command = sha1sum

[offload]
type = b2
account = xxxxxxxxxx
key = xxxxxxxxxxxxxxxxxxxx
endpoint =
```
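Before moving anything, it's worth sanity-checking both remotes. Using only standard rclone subcommands (the remote and bucket names here match the placeholders above), you can list directories on each side:

```shell
# Verify the SFTP remote can reach the WordPress uploads directory.
rclone --config=rclone.conf lsd production:public/wp-content/uploads

# Verify the B2 credentials work by listing the bucket.
rclone --config=rclone.conf lsd offload:Bucket
```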
With that, the following command will selectively move certain files over to the B2 bucket. When using Rclone's `--include` flags, all other extensions are ignored. This can be run from any computer with Rclone installed and scheduled with Crontab for a very efficient offload.
```shell
rclone --config=rclone.conf move production:public/wp-content/uploads offload:Bucket/uploads/my-site/ \
    --include "*.pdf" \
    --include "*.jpg" \
    --include "*.jpeg" \
    --include "*.png" \
    --include "*.gif" \
    --include "*.mp3" \
    --include "*.mov" \
    --include "*.mp4" \
    --include "*.webp" \
    --include "*.avif" \
    --include "*.svg" \
    --include "*.apng" \
    --include "*.ogg" \
    --include "*.webm" \
    --include "*.mkv" \
    --include "*.avi"
```
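Since the move is a candidate for Crontab, a hypothetical crontab entry might look like the following. The schedule, config path, and log location are all assumptions; adjust them to your setup.

```shell
# Example crontab entry (edit with `crontab -e`) — runs the offload nightly at 2 AM.
# /path/to/rclone.conf and /path/to/offload.log are placeholders.
0 2 * * * rclone --config=/path/to/rclone.conf move production:public/wp-content/uploads offload:Bucket/uploads/my-site/ --include "*.jpg" --include "*.png" --include "*.pdf" >> /path/to/offload.log 2>&1
```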
The final missing piece is handling file deletions. Deleting an item in the WordPress media library, at this point, won't delete anything from the bucket. To track file deletions I created the following `captaincore-offload-tracker.php` must-use plugin.
```php
<?php
namespace CaptainCore;

/**
 * Plugin Name: B2 Uploads Deletion Tracker
 * Description: Tracks WordPress attachment deletions by creating .deleted files.
 * Version: 1.0.0
 * Author: Austin Ginder
 */
class B2DeletionTracker {

    /**
     * Constructor.
     */
    public function __construct() {
        add_action( 'delete_attachment', [ $this, 'handle_attachment_deletion' ] );
    }

    /**
     * Handles attachment deletion: creates a .deleted file for the main file
     * and thumbnails, ensuring the directory exists.
     *
     * @param int $post_id The ID of the attachment being deleted.
     */
    public function handle_attachment_deletion( $post_id ) {
        // Get the attachment URL.
        $attachment_url = wp_get_attachment_url( $post_id );
        if ( ! $attachment_url ) {
            // Attachment URL not found, possibly already deleted or corrupted.
            return;
        }

        // Get the path to the file in the uploads directory.
        $upload_dir = wp_upload_dir();
        $file_path  = str_replace( $upload_dir['baseurl'], $upload_dir['basedir'], $attachment_url );

        // Create .deleted file for the main file.
        $this->create_deleted_file( $file_path );

        // Get attachment metadata to find thumbnails.
        $image_meta = wp_get_attachment_metadata( $post_id );
        if ( isset( $image_meta['sizes'] ) && is_array( $image_meta['sizes'] ) ) {
            foreach ( $image_meta['sizes'] as $size => $size_data ) {
                // Construct the thumbnail file path.
                $thumbnail_file_path = str_replace(
                    basename( $file_path ),
                    $size_data['file'],
                    $file_path
                );
                // Create .deleted file for the thumbnail.
                $this->create_deleted_file( $thumbnail_file_path );
            }
        }
    }

    /**
     * Creates a .deleted file for a given file path, ensuring the directory exists.
     *
     * @param string $file_path The path to the file.
     */
    private function create_deleted_file( $file_path ) {
        // Create the .deleted file path.
        $deleted_file_path = $file_path . '.deleted';

        // Ensure the directory exists.
        $deleted_file_dir = dirname( $deleted_file_path );
        if ( ! is_dir( $deleted_file_dir ) ) {
            $result = wp_mkdir_p( $deleted_file_dir ); // Use WordPress's recursive mkdir.
            if ( ! $result ) {
                error_log( 'B2 Deletion Tracker: Failed to create directory: ' . $deleted_file_dir );
                return; // Don't proceed if directory creation fails.
            }
        }

        // Create the .deleted file.
        if ( ! file_exists( $deleted_file_path ) ) {
            $result = touch( $deleted_file_path );
            if ( $result === false ) {
                error_log( 'B2 Deletion Tracker: Failed to create .deleted file: ' . $deleted_file_path );
            }
        }
    }
}

new B2DeletionTracker();
```
It's crude, but it works. It essentially creates an empty `.deleted` file for each file that is removed. That means I can track file deletions and later remove the files from the bucket using the following bash script. This allows WordPress to operate normally without access to the storage provider.
```shell
deleted_files=$( rclone --config=rclone.conf --files-only --recursive --include "*.deleted" lsf production:public/wp-content/uploads )
for deleted_file in $deleted_files; do
    # lsf already returns paths ending in .deleted; strip the suffix
    # to recover the original upload path stored in the bucket.
    original_file="${deleted_file%.deleted}"
    rclone --config=rclone.conf delete "offload:Bucket/uploads/my-site/${original_file}"
    rclone --config=rclone.conf delete "production:public/wp-content/uploads/${deleted_file}"
done
```
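The key step is mapping each `.deleted` marker back to its bucket object by stripping the suffix, which is a simple shell parameter expansion (the path below is a hypothetical example of what `rclone lsf` would return):

```shell
# A marker path as listed by `rclone lsf --include "*.deleted"`.
deleted_file="2024/05/photo.jpg.deleted"

# Remove the trailing .deleted suffix to get the original upload path.
original_file="${deleted_file%.deleted}"

echo "$original_file"
# → 2024/05/photo.jpg
```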
While Kinsta is my primary hosting provider, you should be able to use a similar strategy for offloading WordPress uploads elsewhere. The only requirement is that the web host uses NGINX. Offloading isn't limited to B2 either, as Rclone supports many cloud providers. The nice thing about Kinsta is that their CDN just works: even though these images are being retrieved from a B2 bucket, they are still cached on the CDN. That matters because I have proxy caching disabled within NGINX, as I was concerned about how much temporary disk space it would require.

When should I offload content?
Whether to move data from your web host to a cloud provider depends on your unique situation, but a good starting threshold is 100 GB. Most folks shouldn't offload their uploads, as it adds complexity and splits backups and potential restores into two separate places. That said, if you have more than 100 GB of data, you'll most likely already be looking for a more cost-effective way of handling it. This approach is very cost-effective. In fact, I came up with this solution for one of my customers whose hosting bill was getting too expensive; after offloading their uploads, I was able to greatly reduce their costs.