# help
d
I think there is an issue with how the current `fileOptions` works in `StaticSite`. Since `exclude` and `include` are set up as static properties (where includes always override excludes), there is no way to reproduce this command that the s3 sync CLI could…
```
aws s3 sync ./public/ "s3://${BUCKET_NAME}/" \
  --exclude "*" \
  --include "static/*" \
  --include "*.css" \
  --include "*.js" \
  --exclude "sw.js" \
  --cache-control 'public,max-age=31536000,immutable' --delete
```
Notice the second `exclude` comes after a bunch of includes and another exclude. Perhaps it would have been better to lay out these fileOptions this way…
```
{
  fileOptions: [
    {
      matching: [
        { exclude: '*' },
        { include: 'static/*' },
        { include: '*.css' },
        { include: '*.js' },
        { exclude: 'sw.js' },
      ],
      cacheControl: 'public,max-age=31536000,immutable',
    },
  ],
}
```
Or, even shorter, something like…
```
{
  fileOptions: [
    {
      matching: ['!*', 'static/*', '*.css', '*.js', '!sw.js'],
      cacheControl: 'public,max-age=31536000,immutable',
    },
  ],
}
```
Need to check that it doesn’t conflict with other uses of `!` in standard glob rules… but it appears you have to use `[!thingy]` if you want to negate, so it could work. You would just strip out the leading `!` if it exists.
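A minimal sketch of that “strip the leading `!`” idea, assuming a hypothetical `normalizeMatching` helper (the helper name and `Rule` shape are made up for illustration, not part of `StaticSite`):

```typescript
// Hypothetical helper: turn the shorthand `matching` array into ordered
// include/exclude rules, treating a leading "!" as "exclude".
// Assumption: standard globs only use "!" inside bracket expressions
// ("[!thingy]"), so a bare leading "!" is free to mean "negate".
type Rule = { type: "include" | "exclude"; pattern: string };

function normalizeMatching(matching: string[]): Rule[] {
  return matching.map((p) =>
    p.startsWith("!")
      ? { type: "exclude", pattern: p.slice(1) }
      : { type: "include", pattern: p }
  );
}
```

The uploader could then walk the normalized rules in order instead of treating `include` and `exclude` as two static properties.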
@Frank any thoughts or workaround ideas in the meantime? I can’t think of anything, unless I add another fileOptions group and hope that the uploader script uploads twice and fixes the cache headers the second time?
It also seems like being able to use ONLY `exclude` is missing, which is a perfectly acceptable command with the CLI.
```
aws s3 sync ./public/ "s3://${BUCKET_NAME}/" \
  --exclude "*.html" \
  --exclude "page-data/*.json" \
  --exclude "*.js" \
  --exclude "*.css" \
  --exclude "static/*" \
  --exclude "robot.txt" \
  --exclude "sitemap.xml" \
  --delete
```
f
Hey @Dan Van Brunt, just read through the thread. Yeah, this makes a ton of sense.
Just double-checking: the order of the `include`s and `exclude`s in the s3 sync command matters, right?
```
aws s3 sync ./public/ "s3://${BUCKET_NAME}/" \
  --exclude "*" \
  --include "static/*" \
  --include "*.css" \
  --include "*.js" \
  --exclude "sw.js" \
  --cache-control 'public,max-age=31536000,immutable' --delete
```
d
That’s what the docs say…
To be fair though, I did get it working with the current implementation; it’s just that `sw.js` gets uploaded twice. So this could be an issue if someone wanted to do this with more files.
f
Got it! Yeah, we should definitely respect the order.
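For reference, the ordering semantics `aws s3 sync` documents (filters are applied in order, the last matching filter wins, and unmatched files are included by default) can be sketched roughly like this. The `Rule` type, the function names, and the `*`-only glob matcher are all assumptions for illustration, not the actual uploader:

```typescript
// Sketch of s3-sync-style filtering: rules are evaluated in order and
// the LAST matching rule decides; files matching no rule are included.
type Rule = { type: "include" | "exclude"; pattern: string };

// Tiny glob matcher supporting only "*" wildcards (assumption for this sketch).
function globMatch(pattern: string, file: string): boolean {
  const re = new RegExp(
    "^" +
      pattern
        .split("*")
        .map((s) => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
        .join(".*") +
      "$"
  );
  return re.test(file);
}

function shouldUpload(file: string, rules: Rule[]): boolean {
  let include = true; // default: everything is included
  for (const r of rules) {
    if (globMatch(r.pattern, file)) include = r.type === "include";
  }
  return include;
}
```

With the rules from the sync command in this thread, `sw.js` comes out excluded because the later `--exclude "sw.js"` overrides the earlier `--include "*.js"`. An exclude-only rule list also falls out naturally, since the default is to include.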