# sst
n
It looks like somewhere between v0.48.0 and v0.52.0 the `--port` flag on `sst start` was removed. Was that intentional? Is there any new way to specify the port being used?
f
Hey @Nick Laffey, in v0.52.0 we made it so that `sst start` automatically searches for an open port if the default port 12557 is in use, since the port wasn’t user-facing.
What issue are you running up against?
n
We have our own startup script that was polling the port we specified to see when it was ready. Let me flip it over to the default port and see if that works
Are you sure it’s not 12577 instead of 12557?
I see that default in the CLI help on v0.48.0.
f
Sorry yeah, it was 12577. It’s now 12557. We were going for `ILSST` (I Love SST), and made a typo in the spelling 😅
n
lol 🙂
f
Give 12557 a try. Let me know if it works.
n
I’m thinking we might still have a problem because we run our BE and FE separately.
So I’m assuming if I start up the BE first it’s going to take the default, and then the FE is going to grab a different port.
f
hmm I see.. yeah the frontend will likely be 12558.
n
Any reason why we can’t get the `--port` flag back, so it uses that port if it’s available and otherwise searches for another open one?
f
Yeah, for sure we can. So you are polling to see if `sst start` is fired up?
Are you guys doing some sort of automation? Just curious.
n
```
"sst:start": "sst start --stage $(aws iam get-user --query 'User.UserName' --output text) --port 12557",
"sst:env": "wait-on tcp:12557 && sst-env -- craco start",
"start": "rushx rm-cdk-context && concurrently -k -r \"rushx codegen --watch\" \"rushx schema:watch\" \"rushx deps:watch\" \"rushx sst:start\" \"rushx sst:env\"",
```
@Patrick Young Might be able to chime in and give some context as to why we’re doing it this way
f
Let me also pull in @thdxr and see if he has anything to add.
t
I thought we still supported --port
f
Just to add some context, SST now uses 2 ports: one for the Lambda runtime server (12557, not user-facing), and one for the Console (4000, user-facing). It seems that it doesn’t matter which port you guys poll for. @thdxr do you think there’s a better way to check if `sst start` is up and running?
> I thought we still supported `--port`
I took out `--port` in v0.52.0 and added auto port searching 😔
I was thinking people shouldn’t care about the Runtime Server port, and if there were an option to specify a port it would be for the Console port.
p
The context is we have a monorepo that treats each package as its own "repo" so more isolated. For local development we deploy the backend in one terminal tab and the front end in another tab. The front end I didn't want to have to do another step (another terminal window) so I'm using wait-on to listen for when the sst is done before it kicks up the front end and builds. I'm open to better patterns 🙂
If the port 12557 is taken, is there a pattern for what it will take next? Or maybe is there a better way to wait before executing the react app?
"craco start" is just the react create app but faster.
n
@Frank said it’ll probably take 12558, but I think it’d be better if we could specify the port; otherwise we’ll be reliant on which package we start up first.
p
I'm also wondering if we should just invest in short circuiting the local development to not kick up SST. We really just need the variables outputted by the backend server...
f
Got it! So u guys just need the `sst-env` CLI to work and provide the REACT_APP_* environment variables, right?
n
Here’s our steps, correct me if I’m wrong Patrick:
1. We start up our backend stack via `sst start`.
2. It generates our backend API url and user pool information, which we write into SSM.
3. We run our frontend `start` script, which pulls in the backend values via SSM and pushes them into the ReactStaticSite.
4. Once we know the ReactStaticSite is deployed (by waiting on the port), we start up our create-react-app now that the environment variables are populated.
So yeah, basically I think we’re just trying to be fancy and do steps 3 and 4 via one script. Seems like having that port flag back would fix us. Or if somehow the CRA could watch for the environment variables to change and reload, but I have no idea how that would work.
f
Thanks for the details @Nick Laffey. So it seems you guys don’t need the backend stack to start up. If we made `sst-env` work with `sst deploy`, would this work for you:
```
"sst:deploy": "sst deploy --stage $(aws iam get-user --query 'User.UserName' --output text)",
"sst:env": "sst-env -- craco start",
"start": "rushx rm-cdk-context && concurrently -k -r \"rushx codegen --watch\" \"rushx schema:watch\" \"rushx deps:watch\" \"rushx sst:deploy\" && rushx sst:env",
```
Essentially, you run `sst deploy` instead of `sst start`. And you don’t have to wait on the port; you just have to wait for the command to finish deploying and quit. Then you can run `sst-env -- craco start` to start up CRA.
p
To be honest I'm not 100% sure how sst-env works but that would work 🤷 . Out of curiosity, is there a good pattern for skipping the deploy step (route53/all the other stuff). I'm wondering if I can "short circuit" it and have a script that gets the environment variables without a full deploy?
n
Yeah I think that would work… I’m trying to understand if we lose anything by deploying rather than starting… It’s just our FE so it’s not like we’re changing infrastructure.
It looks like `sst-env` just grabs values from the sst .build folder and injects them into the environment variables for CRA.
Yeah one downside of deploying is it takes forever to do the CFN stuff 😞, at least the first time you do it.
f
> I’m trying to understand if we lose anything by deploying rather than starting…
If you are not live debugging the Lambda functions, you don’t lose anything.
n
Yeah no lambda functions in this stack anyway, those are all in the BE stack
f
> I’m wondering if I can “short circuit” it and have a script that gets the environment variables without a full deploy?
We added an optimization in v0.50.0. If there are no changes in the stack, it’s going to deploy real quick.
n
I gotta run for the night, thanks for the help @Frank!
f
Np! I will push out an update for `sst deploy` to support `sst-env`, and you guys can give it a try.
If no luck, we can add the `--port` option back.
But I think waiting for `sst deploy` to finish is less hacky than waiting on the port.
p
Sorry, I'm jumping back and forth with making dinner. I'm not sure if that work is needed. We would still need to wait for the front end infrastructure to deploy. The thing we were attempting was that a new person would be able to come in and just run "npm run start", and the infra and craco would start in the same command. Craco needs to wait for the backend to kick up, and the hack was to listen for the connection to be established. I can still accomplish this by running "npm run deploy" in one tab, waiting for it manually, and after the deploy running "npm run start" (which does craco), right? Just don't want you to add complexity if it's not needed. Pulling @Erik Mikkelson into this convo as he has the same stack at a different company. Wondering how he is handling this.
How are other people setting this up? Is everyone else just using two tabs in the terminal?
I feel like I'm doing something wrong here 😆
n
@Patrick Young The difference is `sst deploy` exits after it’s done and deployed, so we don’t need to do a `wait-on`, vs `sst start`, which never exits, so we have no direct output telling us it’s complete.
p
Aight I think I'm following.
t
And `sst start` deploys stuff slightly differently to enable live debugging. `sst deploy` will get you an exact production deploy, so it's good for CI.
But if this is for local dev you do need to run `sst start`. I personally just run one after the other.
p
^^ Right, that's what I'm kind of leaning toward: for local we just switch back to running them one after another manually. "npm run infra" ...wait... "npm run craco" vs muddying up the code 🤷
@Frank and @Nick Laffey I changed it to wait-on a file that SST outputs in the ".build" folder.
```
"sst:start": "sst start --stage $(aws iam get-user --query 'User.UserName' --output text)",
"rm-cdk-context": "rm -f ./cdk.context.json ./.build/static-site-environment-output-values.json",
"sst:env": "wait-on ./.build/static-site-environment-output-values.json && sst-env -- craco start",
"start": "rushx rm-cdk-context && concurrently -k -r \"rushx codegen --watch\" \"rushx schema:watch\" \"rushx deps:watch\" \"rushx sst:start\" \"rushx sst:env\"",
```
This fixes the issue 🤷. Still kind of gross... So @Frank, if you think you can make it cleaner and it's worthwhile, go for it; otherwise we are unblocked for now.
I feel like we have enough complexity that I should be moving this stuff into a scripts folder 😆
f
Oh nice! Yeah, let us give this some thought.
s
Well-timed conversation; I was also looking for a mechanism to wait on the SST port prior to firing up my CRA instance for the dev workflow. Trying to make things as seamless as possible for the team. I'll keep a close eye on this, but judging by the complexity my personal vote is for `--port` to come back.
Hmmm, perhaps on further consideration waiting on the build output file is actually more appropriate. 🙂 I'll wrap this up in a script. Thanks all!
Is there any consideration for improving this experience somewhat? It would be great if this could all be integrated into the `sst start` command, with the ability to configure/delegate to an associated script for the frontend(s). The real kicker would be automatic restart of the frontend process(es) if and when the environment variables change. Feels much more like a get-up-and-go sort of vibe.
t
Let me think about this a bit. I'm not a huge fan of spawning multiple long running processes in the same terminal session
It makes the more common case of having to restart things slower than it needs to be, and I also don't want sst trying to solve the problem of too many terminal windows.
I think we should provide an API with events that we can expose. It's probably something we already need/have for the new sst console
f
hmm.. for the API, they’d still need to know which port SST started on, hence the `--port` flag would still be required.
t
Yeah, I'm confused because I thought I still supported the port flag
oh, maybe it changed with the auto port search when the port is in use