# help
p
Hi, I’m having issues getting `sst-deploy` to run on our GitLab CI/CD, with the step continually failing with the following:
```
There was a problem installing nodeModules.
Error: Command failed: yarn install
  ...
There was an error synthesizing your app.
```
Full output in the thread
```
$ yarn install --check-cache
➤ YN0000: ┌ Resolution step
➤ YN0000: └ Completed in 0s 301ms
➤ YN0000: ┌ Fetch step
➤ YN0000: └ Completed in 52s 574ms
➤ YN0000: ┌ Link step
➤ YN0000: └ Completed in 53s 299ms
➤ YN0000: Done with warnings in 1m 47s
$ npx sst deploy --stage test --verbose
SST: 0.56.1
CDK: 1.132.0
Using stage: test
Preparing your SST app
synth {
  output: '.build/cdk.out',
  app: 'node .build/run.js',
  rollback: true,
  roleArn: undefined,
  verbose: 2,
  noColor: true
}
CDK toolkit version: 1.132.0 (build 5c75891)
Command line arguments: {
  _: [ 'synth' ],
  'version-reporting': false,
  versionReporting: false,
  app: 'node .build/run.js',
  a: 'node .build/run.js',
  output: '.build/cdk.out',
  o: '.build/cdk.out',
  quiet: true,
  q: true,
  color: false,
  verbose: 1,
  v: 1,
  disableVersionCheck: 'true',
  lookups: true,
  'ignore-errors': false,
  ignoreErrors: false,
  json: false,
  j: false,
  debug: false,
  ec2creds: undefined,
  i: undefined,
  'path-metadata': true,
  pathMetadata: true,
  'asset-metadata': true,
  assetMetadata: true,
  'role-arn': undefined,
  r: undefined,
  roleArn: undefined,
  staging: true,
  'no-color': false,
  noColor: false,
  validation: true,
  '$0': 'node_modules/aws-cdk/bin/cdk'
}
merged settings: {
  versionReporting: false,
  pathMetadata: true,
  output: '.build/cdk.out',
  app: 'node .build/run.js',
  context: {},
  debug: false,
  assetMetadata: true,
  toolkitBucket: {},
  staging: true,
  bundlingStacks: [ '*' ],
  lookups: true
}
Determining if we're on an EC2 instance.
Looks like an EC2 instance.
Looking up AWS region in the EC2 Instance Metadata Service (IMDS).
Attempting to retrieve an IMDSv2 token.
No IMDSv2 token: TimeoutError: Connection timed out after 1000ms
Retrieving the AWS region from the IMDS.
AWS region from IMDS: eu-west-2
Toolkit stack: CDKToolkit
Setting "CDK_DEFAULT_REGION" environment variable to eu-west-2
Resolving default credentials
...
...
[dotenv][DEBUG] did not match key and value when parsing line 1: # These variables are only available in your SST code.
[dotenv][DEBUG] did not match key and value when parsing line 2: # To apply them to your Lambda functions, checkout this doc - https://docs.serverless-stack.com/environment-variables#environment-variables-in-lambda-functions
[dotenv][DEBUG] did not match key and value when parsing line 3: 
[dotenv][DEBUG] did not match key and value when parsing line 5: 
There was a problem installing nodeModules.
Error: Command failed: yarn install
    at checkExecSyncError (node:child_process:826:11)
    at execSync (node:child_process:900:15)
    at installNodeModules (/builds/<.../.../...>/node_modules/@serverless-stack/core/dist/runtime/handler/node.js:289:38)
    at Object.bundle (/builds/<.../.../...>/node_modules/@serverless-stack/core/dist/runtime/handler/node.js:204:13)
    at Object.bundle (/builds/<.../.../...>/node_modules/@serverless-stack/core/dist/runtime/handler/handler.js:19:16)
    at new Function (/builds/<.../.../...>/node_modules/@serverless-stack/resources/src/Function.ts:359:39)
    at Function.fromDefinition (/builds/<.../.../...>/node_modules/@serverless-stack/resources/src/Function.ts:528:14)
    at Queue.addConsumer (/builds/<.../.../...>/node_modules/@serverless-stack/resources/src/Queue.ts:100:32)
    at new Queue (/builds/<.../.../...>/node_modules/@serverless-stack/resources/src/Queue.ts:75:12)
    at new ControllerStack (/builds/<.../.../...>/stacks/controller-stack.ts:87:26)
Error: Subprocess exited with error 1
    at ChildProcess.<anonymous> (/builds/<.../.../...>/node_modules/aws-cdk/lib/api/cxapp/exec.ts:127:23)
    at ChildProcess.emit (node:events:390:28)
    at ChildProcess.emit (node:domain:475:12)
    at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
There was an error synthesizing your app.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
```
f
Hey @Piers Williams does running `npx sst build --stage test` work on your local?
p
Yep. After some more debugging today I’ve got `sst build` and `sst deploy` running locally fine, and I’ve been able to get past this point in a fresh Docker container on an EC2 instance. I’m more and more convinced this is an environmental problem, but honestly not sure where to look beyond this.
Just going by the stack trace, it’s the Queue, and the nodeModules bundle within it, that’s supposedly causing the failure:
```ts
const commandQueue = new sst.Queue(this, "CommandQueue", {
  consumer: {
    function: {
      handler: "src/handlers/command.consumer",
      timeout: commandHandlerTimeout,
      environment: {
        ROLE_ARN: role.roleArn,
        NOTIFICATIONS_ARN: monitoringTopic.topicArn,
        TRACKING_TABLE: trackingTable.tableName,
        DEPLOYMENT_TABLE: deploymentTable.tableName,
      },
      bundle: {
        nodeModules: ["proxy-agent"],
      },
      permissions: [trackingTable, deploymentTable],
    },
  },
  sqsQueue: {
    fifo: true,
    visibilityTimeout: Duration.seconds(commandVisibilityTimeout),
  },
});
```
But it’s hard to take that at face value when this runs locally. Right now I’m working on the belief that this is in some way related to our GitLab environment, the builds for which run in Docker containers. Is there anything you can suggest I look into to try and rule out environmental factors?
Removing that nodeModules bundle does indeed let an `sst build` command go through correctly on GitLab. So it’s environmental, but the stack trace is not a red herring; something here must be going wrong. I’ll try this again tomorrow, manually recreating the Docker environment on the same EC2 instance GitLab uses for its environments. Anything you can think of to help debug this before then would be awesome.
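For what it’s worth, the recreation I have in mind looks roughly like this (a sketch; the image name is an assumption, and GitLab exports CI=true inside its jobs, so I’ll mirror that too):
```sh
# Hypothetical manual recreation of the GitLab job environment on the
# runner's EC2 host. CI=true mirrors the variable GitLab sets in every job.
docker run --rm -it -e CI=true node:lts bash

# Then, inside the container (repo URL elided):
# git clone <repo> app && cd app
yarn install --check-cache
npx sst build --stage test
```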
Today I got this working on the same EC2 instance GitLab runs on, in a fresh node:lts Docker container, so I’m struggling to imagine it’s anything but GitLab at this point. Looking at the code, I’m really out of ideas now as to what to try next. I’m trying to debug the Docker environment as it spins up via GitLab. Regarding the step where the package.json gets created: @Frank where would you expect this to be created, assuming we’re following the default SST file structure?
f
`targetPath` points to a folder inside `.sst/artifacts`. You can try running `sst build` locally and figure out the exact folder name. It’s a hash based off the function.
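For example, something like this should surface the generated folders (a sketch; the hashed folder names will differ per function):
```sh
# Build locally, then inspect the per-function artifact folders.
npx sst build --stage test
ls .sst/artifacts
# Each hashed folder should contain the package.json that sst generates
# for the bundle.nodeModules install.
find .sst/artifacts -name package.json
```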
Also, are you able to log into GitLab’s build container? You can change this line to `stdio: "inherit"` to get execSync to print to screen: https://github.com/serverless-stack/serverless-stack/blob/2f0f16934787b435639111c73efd39de665013c4/packages/core/src/runtime/handler/node.ts#L320
If none of the above work, let me know. I can cut a canary release with more debug info and you can give that a try.
p
Everything looks correct inside `.sst/artifacts` within the container. I tried to get execSync to print out by changing L320 to `stdio: "inherit"`, but didn’t get any more output than what was provided above.
> If none of the above work, let me know. I can cut a canary release with more debug info and you can give that a try.
This’d be much appreciated, thank you so much. We’re on sst 0.57.4 atm as the jump to CDK v2 is a little complicated for us.
f
Can you make the `stdio` change in the compiled .js file at `core/dist/runtime/handler/node.js`?
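Roughly the edit in question (a sketch only: the surrounding code here is reconstructed from the stack trace and will differ between versions; the one real change is the stdio option):
```js
// Inside installNodeModules() in
// node_modules/@serverless-stack/core/dist/runtime/handler/node.js.
// The surrounding names (installer, targetPath) are assumptions.
execSync(`${installer} install`, {
  cwd: targetPath,
  stdio: "inherit", // stream yarn's output to the job log instead of capturing it
});
```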
p
Aha!
```
➤ YN0028: │ The lockfile would have been modified by this install, which is explicitly forbidden.
➤ YN0000: └ Completed in 20s 262ms
➤ YN0000: Failed with errors in 20s 321ms
There was a problem installing nodeModules.
```
A lockfile problem, which I wouldn’t have expected. So I’m still not sure why this happens when GitLab spins up the Docker container but not when I make one on my own, but this is something at least.
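One difference I’m now suspecting (an assumption, not confirmed): Yarn 2+ turns on `enableImmutableInstalls` when it detects a CI environment, and GitLab exports CI=true in every job, which a hand-made container won’t have set. Comparing the two environments should show it:
```sh
# Compare inside the GitLab job vs. a manually started container:
echo "$CI"                               # "true" in GitLab jobs, empty in a plain docker run
yarn config get enableImmutableInstalls  # Yarn 2+ defaults this to true when CI is detected
```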
Well, this at least got worked around by just having `export YARN_ENABLE_IMMUTABLE_INSTALLS=false` in the build steps. One last issue with AWS profiles, which I’ve posted about here, but thank you so much for your help with this particular part!
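Concretely, the override sits in the job script ahead of the deploy (a sketch; the script lines are ours from the log above, and the variable could equally live in the job’s variables: block):
```sh
# In the GitLab job's script, before sst runs its internal yarn install:
export YARN_ENABLE_IMMUTABLE_INSTALLS=false  # let the install in .sst/artifacts write its lockfile
yarn install --check-cache
npx sst deploy --stage test --verbose
```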