# sst
Luke Wyman
Hey @Frank - just an update on the python packaging thing. It looks like it's quite doable. What I did manually (raw, without sst):
• followed the aws documentation to do a `zip -r ../../../../src/my-deployment-package.zip` from inside `lib/python3.8/site-packages` in my virtual environment (does the same thing as `requirements.txt`)
• then ran over to the `handlers` directory and did a `zip -g getter.py` to add my Lambda handler
• then did a `python3 -m build` inside `libs` to build my wheel file, then a `wheel unpack [package name]` inside `dist` (created by the build), cd'd into that directory and did another `zip -r my-deployment-package.zip .` to add the decompressed wheel contents (a wheel behaves just like a zip) to the deployment package
• deployed the zip, and it worked with package names like `rest_helper.rest_helper`.
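Roughly, the full sequence looks like this (the `.venv` name, the relative paths, and the `rest_helper-0.1.0` wheel filename are placeholders, not exactly what I ran):

```sh
# 1. Zip the installed third-party dependencies from the virtual environment
#    (equivalent to installing from requirements.txt).
cd .venv/lib/python3.8/site-packages
zip -r ../../../../src/my-deployment-package.zip .

# 2. Append the Lambda handler to the same archive.
cd ../../../../handlers
zip -g ../src/my-deployment-package.zip getter.py

# 3. Build the local package as a wheel (libs/ needs setup.cfg + pyproject.toml).
cd ../libs
python3 -m build

# 4. Unpack the wheel and append its contents to the deployment package
#    (a wheel is just a zip, so its unpacked contents import cleanly).
cd dist
wheel unpack rest_helper-0.1.0-py3-none-any.whl   # package name/version are placeholders
cd rest_helper-0.1.0
zip -r ../../../src/my-deployment-package.zip .
```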
What this could look like as an sst implementation (maybe a construct kind of thing like `Script`, or better yet, a property of `defaultFunctionProps` like `srcPath` is currently for Python projects?). The big deal here is getting custom `libs` packages into the docker container at image build time:
• specify a `libs` folder in your sst/cdk code in the construct (there could be multiple `libs`-style folders, depending on monorepo size and complexity)
• either the developer does their own `python3 -m build` to make a package manually (although you'd have to remember to do this before each `npx sst deploy` if you changed your lib code), or, what I'm more inclined to do, have sst do the build each time. The `libs` folder supplied would have to have a `setup.cfg` and `pyproject.toml` for this to work (whether manual or automated).
• then, sst would "remember" and copy any wheel(s) to the docker container and install them (probably after doing the standard `requirements.txt` install):
  ◦ `COPY dist/[wheel package name].whl .`
  ◦ `RUN pip3 install --no-index --find-links=./ [wheel package name].whl --target "${LAMBDA_TASK_ROOT}"`
• it would be up to the developer to run a `pip install -e .` inside the `libs` folder so that the tests can see the packages under test using the virtual environment. `pip install -e .` installs an editable link to the `libs` folder so that the lambdas can see the packages in the IDE, and so that pytest can see the same packages at test time. Any code changes are detected without any rebuilds or reinstalls.
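Concretely, the dev-side loop would be something like this (the `tests/` folder name is just a placeholder):

```sh
# One-time, inside the activated virtual environment:
cd libs
pip install -e .      # editable install: site-packages points back at libs/

# From then on, imports like `from rest_helper import rest_helper` resolve
# in the IDE and in pytest without rebuilding the wheel after every edit.
cd ..
pytest tests/         # test directory name is a placeholder
```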
The more I think about it, the more I like having a `packages` property of `defaultFunctionProps`. `packages` would contain a list of libs-style directories. Each directory would need to have a `setup.cfg` or `setup.py` and a `pyproject.toml`. The Python build command will throw an exception if the folder structure and configuration files are missing or broken, which in turn would stop the deploy. This way, any handler or group of handlers belonging to an Api would have all the local, custom dependencies needed.
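Under the hood, for each directory in a hypothetical `packages` list, sst would effectively have to run something like this before the docker build (none of this is an existing sst API, just a sketch with placeholder names):

```sh
# Sketch of an automated per-`packages` build step.
LIB_DIR=libs

# Fail the deploy early if the packaging config is missing.
[ -f "$LIB_DIR/pyproject.toml" ] || { echo "missing pyproject.toml in $LIB_DIR"; exit 1; }
{ [ -f "$LIB_DIR/setup.cfg" ] || [ -f "$LIB_DIR/setup.py" ]; } || { echo "missing setup.cfg/setup.py in $LIB_DIR"; exit 1; }

# python3 -m build exits non-zero if the config is broken, which would stop the deploy.
python3 -m build "$LIB_DIR" --outdir "$LIB_DIR/dist"

# Stage the wheel(s) in the docker build context so the COPY/RUN lines above can install them.
mkdir -p dist
cp "$LIB_DIR"/dist/*.whl dist/
```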
Frank
Hey @Luke Wyman Thanks for pushing on my Python boundary 😉 I did some homework on Python over the weekend, and I think I have a good understanding of the approach you put together.
The reason we are packaging the local packages (ie. `rest_helper`) into a wheel file and copying that into the docker container is b/c the `rest_helper` files are not available in the container, right? Since we are only copying the `requirements.txt` file into the container.
If we copied or mounted the entire `srcPath` folder into the container, could we skip the wheel-creation step?
As a side note, not sure if you've come across the Zappa project. They were the predominant Python-specific serverless framework at one point - https://github.com/zappa/Zappa
I'm curious how they are solving the local package issue. It seems they include dependencies in this order:
• Lambda-compatible `manylinux` wheels from a local cache
• Lambda-compatible `manylinux` wheels from PyPI
• Packages from the active virtual environment
• Packages from the local project directory
We can ignore some of the advanced stuff they do, but I wonder if that last point, "Packages from the local project directory", addresses our similar issue. https://github.com/zappa/Zappa#package
It seems this function is doing all the packaging magic? https://github.com/zappa/Zappa/blob/master/zappa/core.py#L545
Luke Wyman
Hi @Frank That's a yes on only `requirements.txt` being copied into the container. I also think that the Docker container approach to building zip files should be more hidden as implementation - it's sort of counterintuitive to me that `installCommands` can't access files on my laptop, but only in the container. Mounting the entire `srcPath`? Not sure that would work. Python import statements aren't directory-aware like Node import statements are. If it's not in the same directory, you pretty much have to package. I've heard of Zappa, but haven't checked it out. "Packages from the local project directory" is the crux of what I'm after for the unit-testable Python code.
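Rough illustration of why just having the folders next to each other isn't enough (layout follows the spike's `handlers`/`libs` naming; Python resolves imports from `sys.path`, not relative to the importing file):

```sh
# Hypothetical layout:
#   handlers/getter.py     -> does: from rest_helper import rest_helper
#   libs/rest_helper/      -> plus setup.cfg / pyproject.toml at the libs/ root

# libs/ sitting next to handlers/ isn't visible to the import system:
python3 -c "from rest_helper import rest_helper"   # ModuleNotFoundError

# It resolves once the package is installed (editable, wheel, or unpacked into the zip):
pip install -e ./libs
python3 -c "from rest_helper import rest_helper"   # ok
```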
Frank
> then did a `python3 -m build` inside `libs` to build my wheel file
Btw, by `libs` are you referring to the `packaged` folder in lambda-deploy-spikes?
Luke Wyman
So, on Sunday night, I was taking a look at Poetry, and a light bulb went off. Initially, I embraced the choice that sst offered, and I chose `requirements.txt` since I like to start simple. What I have found is that `requirements.txt` becomes unmanageable pretty quickly once the project becomes more complex than a toy project. Dependencies and sub-dependencies are all listed at the same level in alpha order, so there's absolutely no discernible dependency tree. Also, there's no `dependencies` vs `devDependencies` concept as there is with npm/yarn.
Ponder these questions from the Node perspective: How would you feel if I took npm/yarn away from you? What if I said you couldn't use Lerna? You can't make something out of nothing, and I certainly wouldn't propose rewriting an npm/yarn equivalent for Python sst.
Here are the parallels that I'm thinking about taking from npm/yarn and Lerna: Lerna lets you have local packages that don't publish, as well as publishable packages (thinking of the sst template for a monorepo). Poetry does the same thing. There's a `poetry build` command that does the local package thing, and a `poetry publish` command that does the publish to a repo (pypi or private are both options). sst could potentially issue `poetry build` commands under the hood during an `sst deploy` to make the wheels before copying to the container. Poetry also does the npm/yarn thing of adding a dependency as either dev or prod with `poetry add --dev` or `poetry add -D`. Poetry also does a `poetry export` to get a `requirements.txt` that can then be copied to the container during deploy. It also seems that Poetry makes virtual environment management seamless, which is a nice way to manage things in a large, complex project.
As for what Zappa does, perhaps it's smart to start simple and not worry about publishable packages at first - just stick to the simplicity of getting unit tests, error-free imports in the IDE and deployment to work as a harmonious solution. Just use the build command to make wheels and copy them to the container. Then iterate as users clamor for more features, like publishing wheels.
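For concreteness, the Poetry commands that map onto this (real Poetry CLI, but the dependency and repo names are just examples; newer Poetry versions may need the export plugin for `poetry export`):

```sh
# Inside a libs-style folder managed by Poetry (pyproject.toml at its root):
poetry add requests               # regular (prod) dependency - example package
poetry add --dev pytest           # dev-only dependency (equivalent: poetry add -D pytest)

poetry build                      # produces dist/<package>-<version>-py3-none-any.whl
poetry export -f requirements.txt --output requirements.txt --without-hashes
                                  # requirements.txt that sst could copy into the container

# poetry publish --repository my-private-repo   # only if/when publishing is needed
```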
And that's a yes on `libs`, but I've added another spike to lambda-deploy-spikes that I just got around to pushing now (raw-lambda-whl-package-zip). It proves out that adding the contents of a wheel to a zip makes the full import work in the lambda in the cloud environment.
That latest spike uses the `handlers` and `libs` nomenclature.