Silverstripe S3

I am using SilverStripe 4.4.5 and have installed the silverstripe/silverstripe-s3 module.

It’s supposed to move assets to an S3 bucket, but everything is still going to the local assets folder :frowning:

I have a bucket configured with public access, and I have entered the region, bucket name, access key, and secret access key in the config. Any ideas?

Thanks

If all is actually configured properly, I’d try running a ?flush to make sure the new config is picked up.

Additionally, if you are adding custom config to your local instance, make sure that the YML config is set to load after the module’s config, e.g.:

---
Only:
  envvarset: AWS_BUCKET_NAME
After:
  - '#assetsflysystem'
  - '#silverstripes3-flysystem'
---
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: '`AWS_REGION`'
        version: latest
        credentials:
          key: '`AWS_ACCESS_KEY_ID`'
          secret: '`AWS_SECRET_ACCESS_KEY`'

Hi,

Thanks for your reply! I think I might not have things configured properly :frowning: I’m setting AWS_BUCKET_NAME, region, access key, and secret in a _config.php file in my module, like so:

Environment::setEnv('AWS_BUCKET_NAME', 'MY-BUCKET-NAME');

So in my controller I made a function to get the environment variable AWS_BUCKET_NAME and write it to the page, just to see if it is set, and I get nothing back. Maybe I’m doing it wrong? Or maybe the _config.php isn’t read yet? I don’t know; I’m pretty new to SS4 but have worked for years with SS3.
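Roughly, the check I tried looks like this (just a sketch; it assumes SilverStripe 4’s Environment API, and the surrounding controller action is omitted):

```php
<?php

use SilverStripe\Core\Environment;

// Debug sketch: read the env var inside a controller action to confirm it
// is visible at request time. A false/empty result means the variable was
// never set, or was set after this code ran.
$bucket = Environment::getEnv('AWS_BUCKET_NAME');
var_dump($bucket);
```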

Is there a different way those env variables should be set?

I think my yaml file is correct according to the example you posted.

I would not set the env variables in your _config.php; you will want to place those in the .env file in the root of your site. This applies to any site using SS4.x.

Then, if you need to specify your access key and secret key in your local dev environment (since it isn’t running on AWS infrastructure), you can copy the YML config I pasted above and either add it as a new .yml file in the app/_config/ folder or add it to an existing YML config file in your project.
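For example, a minimal .env in the site root might look like this (all values here are placeholders):

```shell
# .env in the project root (placeholder values)
AWS_REGION="ap-southeast-2"
AWS_BUCKET_NAME="my-bucket-name"
AWS_ACCESS_KEY_ID="my-access-key-id"
AWS_SECRET_ACCESS_KEY="my-secret-access-key"
```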

So I got this working, thanks to your help!

I put the ENV variables into the .env file as you suggested, but I was still running into a 500 server error on the page. So on a whim, I took the YML configuration out of app/my_module/_config/s3.yml and moved it into app/_config/mysite.yml, and it started working. I have no idea why I can’t configure the S3 stuff in my module. It bugs me a bit, but at least it is working. Thanks for your help!

I seem to be having the same issue. Installed and (I think) configured but no change. Assets are still on the local disk.

Originally I had the config in an S3.yml file under /app/_config/S3.yml.

After the comments above I moved it into app.yml, after all the existing config, and added the extra After setting mentioned above. Still no change after a ?flush.

app.yml contents:


---
Name: <sitename>
---
SilverStripe\Core\Manifest\ModuleManifest:
  project: app
SilverStripe\Assets\File:
  allowed_extensions:
    - svg
SilverStripe\Admin\LeftAndMain:
  extra_requirements_css:
    - /css/dashboard.css
SilverStripe\View\Requirements_Backend:
  combine_in_dev: true
SilverStripe\Control\Director:
  alternate_base_url: '<site_url>'

---
Only:
  envvarset: <bucketname>
After:
  - '#assetsflysystem'
  - '#silverstripes3-flysystem'
---
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: '`ap-southeast-2`'
        version: latest
        credentials:
          key: '`<key>`'
          secret: '`<secret>`'

Anything else I can check?

So, first thing: you have this set to only include when the env variable <bucketname> is set. That needs to be the actual environment variable name, AWS_BUCKET_NAME, so don’t change that in the YML config.

Then in your .env file add:

AWS_BUCKET_NAME={youractualbucketname}

where you set {youractualbucketname} to the bucket’s actual name on Amazon. The way you have it configured now, you are telling it not to include the config at all, because the environment variable you named in your YML config doesn’t exist.


Ohhh, I see - got it.

Thank you.

@obj63mc Can I check these settings with you again?

My project isn’t hosted with AWS. Directories and images are being sent to S3, but I can’t preview or publish them. This is when uploading in the assets admin.

I read the comment here: Appending file version id to image url breaks Protected Asset previews with SilverStripe-S3 · Issue #1026 · silverstripe/silverstripe-asset-admin · GitHub about adding

SilverStripe\AssetAdmin\Forms\PreviewImageField:
  bust_cache: false

To the .yml. No change.

The error is a PutObject error with Access Denied 403.

I read your comment here: Migrating existing assets to S3 · Issue #30 · silverstripe/silverstripe-s3 · GitHub with the bucket prefixes. I added those too with no change.

The .env file is now:


SS_VENDOR_METHOD=copy
AWS_BUCKET_NAME=[bucket-name]
AWS_REGION=ap-southeast-2
AWS_ACCESS_KEY_ID=[ID]
AWS_SECRET_ACCESS_KEY=[key]
AWS_PUBLIC_BUCKET_PREFIX="public"
AWS_PROTECTED_BUCKET_PREFIX="protected"

s3.yml is


---
Only:
  envvarset: AWS_BUCKET_NAME
After:
  - '#assetsflysystem'
---
SilverStripe\Core\Injector\Injector:
  Aws\S3\S3Client:
    constructor:
      configuration:
        region: '`AWS_REGION`'
        version: latest
        credentials:
          key: '`AWS_ACCESS_KEY_ID`'
          secret: '`AWS_SECRET_ACCESS_KEY`'

The credentials indentation is different to the readme due to the PR: FIX: Fixed example YML config structure in README.md by zanderwar · Pull Request #38 · silverstripe/silverstripe-s3 · GitHub

Am I doing something wrong or?

There is a bug with SilverStripe’s asset admin, which you mentioned above, affecting the viewing of protected assets. Note this is just in the asset admin specifically. Their fix does not resolve the issue, and I don’t believe it has actually been released yet.

Check out No preview on upload · Issue #33 · silverstripe/silverstripe-s3 · GitHub for a workaround. Basically it includes a JS file that removes the extra query string value asset admin is appending to the signed AWS URLs.
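As a rough illustration of what that workaround does (this is a sketch, not the actual file from the issue; the function name is made up), it boils down to deleting the appended vid parameter before the URL is used:

```javascript
// Sketch of the idea behind the issue #33 workaround: asset admin appends
// a cache-busting "vid" query parameter, which invalidates the AWS
// signature on protected-asset URLs, so it gets stripped back off.
function stripVidParam(url) {
  const parsed = new URL(url);
  parsed.searchParams.delete('vid');
  return parsed.toString();
}
```

The real workaround injects a script into the CMS that applies this kind of rewrite to the preview image URLs.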

Now, a 403 error could also be due to your access key/secret or the permissions on your S3 bucket not being set correctly. If you can log in to the AWS console and see the file physically uploaded, then you know your configuration is fine and it is more likely due to the bug mentioned above.

They have been physically uploaded. Unlikely to add much, but on a dev/build?flush it does explode with a 403 for the error-404.html page.

Could that only be an issue with the key/secret, or are there other potential causes?

If protected assets is a problem, is there any way to just have public? All these assets are just images displayed on the front end of the site - they’re nothing special.

The 403 error:

[Emergency] Uncaught Aws\S3\Exception\S3Exception: Error executing “PutObject” on “https://.s3.ap-southeast-2.amazonaws.com/public/error-404.html”; AWS HTTP error: Client error: PUT https://<bucket>.s3.ap-southeast-2.amazonaws.com/public/error-404.html resulted in a 403 Forbidden response: AccessDeniedAccess Denied785879 (truncated…) AccessDenied (client): Access Denied - AccessDeniedAccess Denied785879884CC83735FZFjmrUMzGAJ0o89ux1kf5dvHH+UGNxcAFONwn73kHoPo55mYNiBy6aKyEhZNWFfQS8IuDXSsUE=

That error message, “Error executing PutObject”, is stating that you don’t have access to the bucket or your credentials are not correct.

Not sure if you edited the error message, but it’s showing the URL as:
“https://.s3.ap-southeast-2.amazonaws.com/public/error-404.html”
Notice there is no bucket name in the URL, so it’s not able to find your bucket. Something is wrong with your configuration somewhere; most likely your bucket name is not set in your .env file.

Not being able to see your full code base, there’s not much I can recommend other than ensuring everything is actually correct in your .env file and that the .env file is in the root of your project.

Then just run a ?flush=1 on your website and it should pull in the updated configuration.
Also, in your s3.yml I notice you only have it running after ‘#assetsflysystem’; it needs to run after both ‘#assetsflysystem’ and ‘#silverstripes3-flysystem’. Make sure your config is exactly as specified earlier in this post - Silverstripe S3
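For reference, the header block of s3.yml should match the example earlier in the thread, with both names listed:

```yaml
---
Only:
  envvarset: AWS_BUCKET_NAME
After:
  - '#assetsflysystem'
  - '#silverstripes3-flysystem'
---
```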

As to the issue with protected assets: they work just fine. Only the preview in the CMS is broken, because the React component is adding onto the AWS signed URL, which breaks the signature. In the issue I posted above there is a workaround where you can include some extra JS in the CMS to fix the preview. You can’t disable protected assets, as all files and images are versioned, and draft versions of the files are stored as protected assets.

I had removed the bucket name from the error. The creds are correct and getting picked up.

A third party has the S3 bucket not me.

I’ve added the JS you linked to and now I don’t get a 403 trying to view them once uploaded. Great!

However, the preview link (clicking the version of the image on the right side) doesn’t remove the ?vid=X. More problematic, though: trying to publish an asset, either through the asset admin or via an upload field on a page, still results in a 403 PUT error.

Message:

[Emergency] Uncaught Aws\S3\Exception\S3Exception: Error executing “PutObject” on “https://polycode.s3.ap-southeast-2.amazonaws.com/public/Backgrounds/Texture-01.png”; AWS HTTP error: Client error: PUT https://polycode.s3.ap-southeast-2.amazonaws.com/public/Backgrounds/Texture-01.png resulted in a 403 Forbidden response: AccessDeniedAccess Denied64EF9E (truncated…) AccessDenied (client): Access Denied - AccessDeniedAccess Denied64EF9ED7457B2364/K4GwCFlb5lOEXeJD0b9IbBig7JCJlCH35Kvkb58fkUI49t1+fin3hlUeUzGKQ7S5WuP91qFZx8=

The image with the ?vid=X removed is uploaded to S3 already:

https://polycode.s3.ap-southeast-2.amazonaws.com/protected/Backgrounds/3011c89aab/Texture-01.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAZPMP4BCQAZKY4FWN%2F20200730%2Fap-southeast-2%2Fs3%2Faws4_request&X-Amz-Date=20200730T192823Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Signature=ec1ce331d23bc6c3959bef3a0c12d5f029a761e16398e936c77e7a183ff78edf

That works at the time, but it will have expired by now.
Could that be the issue? Maybe some expires value is set too short, so it doesn’t have access to publish it?

So, from your error above, it shows that you do not have access to the public folder of that bucket when trying to publish.

Basically, I would start with the SilverStripe docs on the assets/file system to understand how SS handles files.

Protected assets are only accessible via signed URLs; those URLs only last for a set time, then expire. If you refresh the page, though, it should regenerate the URL. When a file is published, it is removed from the protected folder (or whatever your prefix is set to in your env configuration) and moved to the public folder. At that point the public folder on your bucket should be accessible from anywhere at all times and should not need a signed URL.

So if I had to guess, the public folder on your bucket isn’t actually allowing public access. If you look at the README of the S3 module, we provide an example bucket policy to make sure the assets are public. If you already had files in the public folder before you set up your bucket policy, you will have to manually grant public read access to those existing files.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/public/*"
        }
    ]
}