How to implement a different asset backend


Silverstripe Version: 4.5

I’d like to abstract the uploaded assets away from the web server disk to a Digital Ocean Volume (not S3…).

Reading this

Flysystem is a filesystem abstraction that provides a generic API allowing you to swap from using your local filesystem storage to a distributed filesystem like Amazon S3, Google Cloud Storage, Digital Ocean, Rackspace, and many others (through Flysystem’s adapters). This means that all static assets uploaded from the CMS will now be stored on S3 instead of your server’s filesystem.

Seems doable. Same with this.

Are there any docs on how to achieve this? These docs aren’t helpful.

We don’t provide official support for alternative adapters, but it’s possible to get it working with some caveats.

The most straightforward approach is to:

  • define SilverStripe\Assets\Flysystem\ProtectedAdapter
  • define SilverStripe\Assets\Flysystem\PublicAdapter
  • override this bit of the asset config to use your own adapters.
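As a sketch, the override in the last step might look like the following. The vendor namespace, class names and constructor arguments here are hypothetical, invented for illustration; the pattern mirrors the way silverstripe/s3 swaps out the default Flysystem adapters via the Injector.

```yml
# app/_config/assets.yml
# Hypothetical adapter classes -- MyVendor\DOAdapter\* does not exist,
# it stands in for whatever adapters you write yourself.
---
Name: my-asset-backend
After:
  - '#assetsflysystem'
---
SilverStripe\Core\Injector\Injector:
  SilverStripe\Assets\Flysystem\PublicAdapter:
    class: MyVendor\DOAdapter\PublicAdapter
    constructor:
      - '`DO_VOLUME_PUBLIC_PATH`'
  SilverStripe\Assets\Flysystem\ProtectedAdapter:
    class: MyVendor\DOAdapter\ProtectedAdapter
    constructor:
      - '`DO_VOLUME_PROTECTED_PATH`'
```

The backtick syntax pulls the paths from environment variables, so each environment can point at its own mount.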

The key thing those two adapters are doing is generating URLs to the files. In theory, you could reuse SilverStripe\Assets\Flysystem\FlysystemAssetStore to serve the files if you keep the file URLs looking like http://example.com/assets/hello.jpg. However, this means the server has to fire up a PHP worker and download the file every time a user asks for it … which could be very painful if your file adapter has high latency.

The way the S3 adapter gets around this is by generating links that point directly to S3. To achieve that you will need to implement your own AssetAdapter.
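A minimal sketch of what such an adapter could look like, assuming SilverStripe 4’s Flysystem v1 integration. Extending the Local adapter and the base-URL scheme are my assumptions for a volume mounted on disk, not how silverstripe/s3 does it (that one extends the AWS S3 adapter):

```php
<?php

namespace MyVendor\DOAdapter;

use League\Flysystem\Adapter\Local;
use SilverStripe\Assets\Flysystem\PublicAdapter as PublicAdapterInterface;

/**
 * Hypothetical public adapter: files live on a mounted DO volume, but
 * URLs point straight at them (served by nginx, or a CDN in front),
 * so no PHP worker is involved in delivering assets.
 */
class PublicAdapter extends Local implements PublicAdapterInterface
{
    /** @var string Base URL the web server maps onto the volume mount */
    private $baseUrl;

    public function __construct($root, $baseUrl)
    {
        $this->baseUrl = rtrim($baseUrl, '/');
        parent::__construct($root);
    }

    /**
     * The key method: return a direct URL to the file.
     */
    public function getPublicUrl($path)
    {
        return $this->baseUrl . '/' . ltrim($path, '/');
    }
}
```

A ProtectedAdapter would follow the same shape but implement getProtectedUrl() and route through something that can enforce access control.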

Last time I looked at silverstripe/s3, I thought it would be relatively easy to update it to be a bit more generic so it works with other similar services. If I were you, I might start by forking that module.

I’ll be honest with you, what you are trying to achieve is not trivial. Hacking a solution together will probably take several days. Getting something production-ready would probably take several weeks. That’s why we don’t officially support the silverstripe/s3 module and other alternatives.

@DorsetDigital seems I’ve completely missed DO’s limitation here. I assumed their storage volumes could be attached to multiple droplets, but they cannot. That’s surprising.

NFS does look possible.

I’m thinking I should stop fighting it and just use S3. If I’m using the calculator right the price is negligible and there’s already an S3 adapter written. If the system still works the same, is performant and assets are served I don’t think I really care where they “physically” are.

The only potential issue could be collisions between admin users editing the same asset from different instances. Seems like a very rare case and versioning in SS would probably mop it up anyway.

@MaximeRainville was the issue with S3 and the versioning GET param of the URL resolved?

Thanks for the reply @MaximeRainville

It’s looking like a mountain to climb… is there a company line or best practice for running SilverStripe behind a load balancer feeding multiple servers of the same code base?

A single database is easy.
Deploying the code itself is easy.

It’s just the user uploaded content that is the block as far as I can see.

I suppose I could use rsync, but that seems a little hacky and potentially prone to problems.

Can you make the volume available via NFS or similar? (It’s been a while since I did anything with Digital Ocean). If so, just include relevant mount in your web server instances. You should be able to set your mount points so the relevant asset directories point at the network volume.
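To sketch that idea, with an NFS export already available, the mount on each web instance could be a single fstab line. The server address and paths below are examples only:

```
# /etc/fstab on each web server (address and paths are hypothetical)
10.10.0.5:/exports/assets  /var/www/mysite/public/assets  nfs  defaults,_netdev  0  0
```

The _netdev option tells the OS to wait for networking before attempting the mount, which matters for network filesystems at boot.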

They do seem to offer an S3 compatible block storage though, so you might be able to still use the DO ecosystem and the existing S3 connector.
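If you try the DO Spaces route, the AWS SDK for PHP (which silverstripe/s3 builds on) accepts a custom `endpoint` client option, so pointing the S3 client at Spaces might look something like the sketch below. Whether the module exposes the client for override exactly like this is something to verify against its config; the service name, region and env var names here are assumptions:

```yml
# app/_config/spaces.yml (illustrative, not a documented module config)
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: 'nyc3'
        version: 'latest'
        # 'endpoint' is a standard AWS SDK for PHP client option
        endpoint: 'https://nyc3.digitaloceanspaces.com'
        credentials:
          key: '`SPACES_KEY`'
          secret: '`SPACES_SECRET`'
```

If that wiring works, the rest of the module should treat the Space like any other S3 bucket.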

I’ve used the S3 backend for a few projects and have always found it reasonably well-performing and transparent to the users. For busy sites, adding a Cloudfront distribution on top allows you a bit more peace of mind too.


Silverstripe Cloud uses AWS’ EFS to share files across multiple EC2 instances. My guess is that things should work pretty well if you use another NFS implementation.

At some stage, we were planning to use S3 on the Silverstripe Cloud, which is why we started the silverstripe/s3 module. When we decided to use EFS instead the S3 module became less of a priority.

We added an API to allow users to disable the cache busting logic which should address this, but it hasn’t been released in a stable release yet and didn’t make the cut off for the 4.7 release. We probably should update the S3 module so it uses this API by default.