happy-kitchen-89482
11/09/2025, 11:49 PM
silly-queen-7197
11/11/2025, 4:58 AM
worried-glass-66985
11/11/2025, 11:50 AM
Pants cannot infer owners for imports when a dependency is declared in several python_requirement targets. I have a small example that illustrates the problem. I would expect the dependency to be:
1. If not specified explicitly for the target, then some merge of all the other declarations, or just the exact version from the lockfile.
2. If specified explicitly, then use that dependency without any warnings.
gentle-flower-25372
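For reference, the usual way out of this ambiguity (Pants warns when several targets own the same module) is to pick the owner explicitly, since an explicit `dependencies` entry bypasses inference entirely. A minimal sketch with hypothetical names, assuming two targets both declaring `requests`:

```python
# 3rdparty/BUILD -- illustrative names, not from the original message
python_requirement(name="requests-main", requirements=["requests==2.31.0"])
python_requirement(name="requests-legacy", requirements=["requests==2.25.0"])

# src/app/BUILD -- choose one owner explicitly instead of relying on
# inference, which silences the ambiguous-ownership warning
python_sources(
    dependencies=["3rdparty:requests-main"],
)
```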
11/12/2025, 4:15 PM
brainy-sundown-66139
11/13/2025, 6:51 PM
ModuleNotFoundError: No module named 'azure.eventhub.extensions.checkpointstoreblobaio'
famous-kilobyte-26155
11/14/2025, 4:03 PM
busy-ram-14533
11/14/2025, 4:48 PM
.shellcheckrc files are not being discovered or used when running shellcheck via pants lint?
limited-potato-15054
11/14/2025, 7:39 PM
powerful-scooter-95162
11/14/2025, 8:57 PM
gray-apple-58935
11/18/2025, 4:45 PM
wide-midnight-78598
11/19/2025, 3:46 PM
worried-piano-22913
11/20/2025, 10:48 AM
famous-kilobyte-26155
11/20/2025, 5:17 PM
When I run pants lint scripts:: for the subdirectory scripts I get no errors, but when I run pants lint :: I get errors for files in that directory (mostly include order, but that might be a red herring).
Any hints?
happy-sundown-654
11/20/2025, 6:10 PM
Run pantsbuild/actions/init-pants@v9
with:
gha-cache-key: v0
setup-commit: 6f136713a46e555946a22ffb3ed49c372eea58df
base-branch: main
named-caches-location: ~/.cache/pants/named_caches
pants-ci-config: DEFAULT
cache-lmdb-store: false
lmdb-store-location: ~/.cache/pants/lmdb_store
experimental-remote-cache-via-gha: false
setup-python-for-plugins: false
gh-host: github.com
env:
AWS_REGION: us-west-2
SLACK_WEBHOOK: ***
pythonLocation: /opt/hostedtoolcache/Python/3.12.7/x64
PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.12.7/x64/lib/pkgconfig
Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.7/x64
Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.7/x64
Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.12.7/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.12.7/x64/lib
Run if ! command -v pants; then
Downloading and installing the pants launcher ...
Installed the pants launcher from <https://github.com/pantsbuild/scie-pants/releases/latest/download/scie-pants-linux-x86_64> to /home/runner/.local/bin/pants
Running `pants` in a Pants-enabled repo will use the version of Pants configured for that repo.
In a repo not yet Pants-enabled, it will prompt you to set up Pants for that repo.
Run PANTS_BOOTSTRAP_CACHE_KEY=$(PANTS_BOOTSTRAP_TOOLS=2 pants bootstrap-cache-key)
Failed to source file cpython-3.11.13+20250612-x86_64-unknown-linux-gnu-install_only.tar.gz: Failed to fetch <https://github.com/astral-sh/python-build-standalone/releases/download/20250612/cpython-3.11.13%2B20250612-x86_64-unknown-linux-gnu-install_only.tar.gz>: [22] HTTP response code said error (The requested URL returned error: 503)
Error: Failed to prepare a scie jump action: Failed to establish atomic directory /home/runner/.cache/nce/4dd2c710a828c8cfff384e0549141016a563a5e153d2819a7225ccc05a1a17c7/cpython-3.11.13+20250612-x86_64-unknown-linux-gnu-install_only.tar.gz. Population of work directory failed: The tar.gz destination /home/runner/.cache/nce/4dd2c710a828c8cfff384e0549141016a563a5e153d2819a7225ccc05a1a17c7/cpython-3.11.13+20250612-x86_64-unknown-linux-gnu-install_only.tar.gz of size 0 had unexpected hash: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
mammoth-area-83300
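Incidentally, the "unexpected hash" in that error is diagnostic on its own: it is the SHA-256 digest of empty input, which matches the reported size of 0 — the 503 left an empty file behind rather than a corrupted one. Easy to confirm:

```python
import hashlib

# SHA-256 of zero bytes: the digest reported for the size-0 tar.gz above
print(hashlib.sha256(b"").hexdigest())
# → e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```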
11/22/2025, 6:49 PM
# requirements.txt
sqlalchemy==1.3.22
pybigquery==0.5.0
# BUILD
python_requirements(
    name="root",
)
python_tests(
    name="tests0",
    dependencies=[":root#pybigquery"],
)
# test_bq.py
import sqlalchemy

def test_engine_init():
    sqlalchemy.create_engine("bigquery://test-project")
From the logs of the test run I see that the pybigquery dependency was installed in the PEX, and I verified via pdb -> interact that import pybigquery succeeds. But something seems to be wrong with the environment setup, because creating the SQL engine fails with:
...
else:
for impl in pkg_resources.iter_entry_points(self.group, name):
self.impls[name] = impl.load
return impl.load()
> raise exc.NoSuchModuleError(
"Can't load plugin: %s:%s" % (self.group, name)
)
E sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:bigquery
Is there something extra needed for Pants (or is pex the culprit?) to correctly register the pybigquery plugin for sqlalchemy?
acoustic-librarian-29560
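One way to narrow this down from the same pdb -> interact session is to list the entry points the environment actually exposes for the sqlalchemy.dialects group; sqlalchemy's plugin loader only sees what the installed dist-info metadata advertises. A stdlib-only sketch (the helper name is mine):

```python
from importlib.metadata import entry_points

def names_in_group(group):
    """Return the entry-point names visible for a group in this environment."""
    eps = entry_points()
    # Python 3.10+ exposes .select(); older versions return a dict-like object
    if hasattr(eps, "select"):
        return sorted(ep.name for ep in eps.select(group=group))
    return sorted(ep.name for ep in eps.get(group, []))

# In the failing sandbox you would expect "bigquery" to appear here if the
# pybigquery dist-info metadata made it into the PEX environment:
print(names_in_group("sqlalchemy.dialects"))
```

If the name is absent even though the import works, a commonly cited workaround (assuming pybigquery's usual layout) is to register the dialect manually before create_engine: `sqlalchemy.dialects.registry.register("bigquery", "pybigquery.sqlalchemy_bigquery", "BigQueryDialect")`.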
11/24/2025, 5:50 PM
12:49:05.91 [INFO] Initializing scheduler...
12:49:05.93 [INFO] Trying sandboxer socket path /run/pants/b85ec67a0d83283d/sandboxer.sock
12:49:05.93 [WARN] Failed to create dir for sandboxer socket at /run/pants/b85ec67a0d83283d/sandboxer.sock
12:49:05.93 [INFO] Trying sandboxer socket path /var/run/pants/b85ec67a0d83283d/sandboxer.sock
12:49:05.93 [WARN] Failed to create dir for sandboxer socket at /var/run/pants/b85ec67a0d83283d/sandboxer.sock
12:49:05.93 [INFO] Trying sandboxer socket path /var/folders/v7/qb819xs91_55px65swlk4nqr0000gp/T/run/pants/b85ec67a0d83283d/sandboxer.sock
12:49:05.93 [INFO] Using sandboxer socket path /var/folders/v7/qb819xs91_55px65swlk4nqr0000gp/T/run/pants/b85ec67a0d83283d/sandboxer.sock
It seems to work eventually, so is it just trying different paths?
busy-ram-14533
11/25/2025, 4:10 PM
[python.resolves_to_only_binary]
__default__ = [":all:", "!pyspark"]
error
```stderr:
pid 54908 -> /Users/gregory.fast/.cache/pants/named_caches/pex_root/venvs/1/4ff2f56ac3254236a25efbdabc197b860c992fbd/dfd2c09fe12009f8446c34db0afc997702969a55/bin/python /Users/gregory.fast/.cache/pants/named_caches/pex_root/venvs/1/4ff2f56ac3254236a25efbdabc197b860c992fbd/dfd2c09fe12009f8446c34db0afc997702969a55/pex --disable-pip-version-check --exists-action a --no-input --isolated --log pex-pip-download.log -q --cache-dir /Users/gregory.fast/.cache/pants/named_caches/pex_root/pip/1/25.2/pip_cache download --dest /private/var/folders/19/qfvft6053wvdm7yfbgcg08d40000gp/T/pants-sandbox-iv6uS1/.tmp/tmprbumncl8/Users.gregory.fast..pyenv.versions.3.12.10.bin.python3.12 --only-binary all psutil>=7.0.0 pyspark==3.5.4 --index-url https://depot-read-api-python.us1.ddbuild.io/magicmirror/magicmirror/@current/simple/ --retries 5 --resume-retries 3 --timeout 15 exited with 1 and STDERR:
pip: ERROR: Could not find a version that satisfies the requirement pyspark==3.5.4 (from versions: none)
pip: ERROR: No matching distribution found for pyspark==3.5.4```
(2) seeing if I could put exclusions in no_binary:
[python.resolves_to_only_binary]
__default__ = [":all:"]
[python.resolves_to_no_binary]
__default__ = ["pyspark"]
error
```stderr:
Builds were disallowed, but the following project names are configured to only allow building: pyspark```
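A way to take Pants and pex out of the loop is to express the same intent with pip directly, since both options map onto pip's --only-binary/--no-binary format control, where naming a package in --no-binary carves it out of an earlier :all:. If this hypothetical invocation succeeds against your index, the conflict above comes from how pex merges the two lists rather than from pip:

```shell
# sketch only: index URL and versions taken from the log above
pip download --only-binary :all: --no-binary pyspark \
    --dest /tmp/dl "psutil>=7.0.0" "pyspark==3.5.4" \
    --index-url https://depot-read-api-python.us1.ddbuild.io/magicmirror/magicmirror/@current/simple/
```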
ambitious-actor-36781
11/25/2025, 10:19 PM
Suppose a -> b -> c,
and a's output is a random number (i.e. a() -> uuid4()),
and b's output is a constant (i.e. b(a_out) -> 0).
What happens to c (c(b_out) -> ...)?
Would c always get a cache miss, because a isn't deterministic?
Or would c always get a cache hit, because b returns a constant value?
fresh-continent-76371
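Assuming a cache keyed purely on a function's inputs (the scheme Pants-style memoization uses), it is the latter: b misses on every run because its input changes, but c hits from the second run on because its key, b's constant output, never changes. A toy model:

```python
import uuid

calls = {"a": 0, "b": 0, "c": 0}
cache = {}

def memo(fn, *args):
    # cache key = function identity + argument values, i.e. input-keyed caching
    key = (fn.__name__, args)
    if key not in cache:
        cache[key] = fn(*args)
    return cache[key]

def a():
    calls["a"] += 1
    return uuid.uuid4().hex          # nondeterministic: new value each call

def b(a_out):
    calls["b"] += 1
    return 0                         # constant regardless of input

def c(b_out):
    calls["c"] += 1
    return f"c={b_out}"

for _ in range(3):                   # three independent "builds"
    a_out = a()                      # a is never cacheable
    c_out = memo(c, memo(b, a_out))  # b misses each run; c hits after run 1

print(calls)  # → {'a': 3, 'b': 3, 'c': 1}
```

So c's cache behavior depends only on the value b produced, not on whether a was deterministic further upstream.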
11/25/2025, 10:46 PM
~/pro/git/project
❯ rm ../../.pants.d/workdir/pants.log
~/pro/git/project
❯ pants list ::
09:41:52.68 [INFO] (pe_nailgun::nailgun_pool) Initializing Nailgun pool for 28 processes...
09:41:55.12 [INFO] (pe_nailgun::nailgun_pool) Initializing Nailgun pool for 28 processes...
.......>8....... snipped output
~/pro/git/project
❯ cat ../../.pants.d/workdir/pants.log
09:41:59.04 [WARN] (watch) File watcher exiting with: The watcher was shut down.
~/pro/git/project
❯
This happens to all developers, so it is not a local workstation problem? How do we go about debugging this?
victorious-dress-47449
11/26/2025, 10:41 PM
I have a package A under the directory company_name/lib/A that depends on a third-party package called B, and B depends on a package also called A (but not the same as my package under company_name/lib/A). In testing company_name/lib/A, when B tries to import from A, it resolves to my own package. Why would this happen if my package is nested under company_name/lib? I'm using a docker environment. My source root is just '/'. I'd expect import company_name.lib.A to resolve to my package and import A to resolve to the third-party package.
hundreds-carpet-28072
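Independent of Pants, the mechanism here is sys.path ordering: a bare import A is satisfied by the first sys.path entry that exposes a top-level A, so the question reduces to whether company_name/lib is somehow landing on sys.path in the test sandbox (an extra source root, a conftest, or the test file's own directory would do it). A self-contained demonstration with hypothetical directories standing in for that entry and for site-packages:

```python
import os
import sys
import tempfile

# Two directories each containing a top-level package named "A":
# "shadow" plays the unexpected sys.path entry (e.g. company_name/lib),
# "site" plays site-packages holding the third-party A.
root = tempfile.mkdtemp()
for role in ("shadow", "site"):
    pkg = os.path.join(root, role, "A")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write(f"ORIGIN = {role!r}\n")

sys.path.insert(0, os.path.join(root, "site"))
sys.path.insert(0, os.path.join(root, "shadow"))  # earlier entry wins

import A
print(A.ORIGIN)  # → shadow
```

Printing sys.path from inside the failing test should reveal which entry is shadowing the third-party package.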
12/01/2025, 12:37 PM
Has anyone considered uv as an optional dependency resolver in Pants? I know there are feature requests for it, but I was curious whether anybody has looked to pick it up. Faster resolution of environments would be a great improvement to have.
better-wolf-86659
12/01/2025, 6:11 PM
Question about a version-control style python_requirement (reference) and the lockfile.
Here's what I'm running into:
• we have pants version 2.28.0
• we have a python_requirement defined in 3rdparty/BUILD:
MY_DEP_VERSION = "v1.xx.xx"
MY_DEP_GIT_URL = f"my-dep@ git+https://github.com/my-repo/my-dep.git@{MY_DEP_VERSION}"
python_requirement(
    name="my_dep",
    requirements=[MY_DEP_GIT_URL],
    modules=["mydep"],
)
• after generating the lockfile, I bumped MY_DEP_VERSION to a newer version. Surprisingly, running pants check or pants package on targets that depend on my_dep didn't error out.
• when I manually run pants generate-lockfiles, the summary does show the upgraded dependency for my_dep from v1.xx.xx to the newer version v1.yy.yy.
I wonder if this is a bug? Is it known, or should I file a new one? Or am I doing something wrong with version-control requirements?
Thank you!
abundant-tent-27407
12/02/2025, 2:52 PM
poetry_requirements would produce a bunch of targets such as //:root#aiohttp, but that doesn't seem to be the case with uv_requirements. Therefore I am getting this error:
InvalidFieldException: Unused key in the `overrides` field for //:root: ['fastapi', 'redis']
Target in thread
victorious-dress-47449
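For comparison, the overrides field is validated against the per-requirement targets the macro generates, so its keys only resolve when the generator actually emits //:root#<dep> targets. A sketch of the shape that works with poetry_requirements (the dependency names are illustrative, not from the original message):

```python
# BUILD -- illustrative; `overrides` keys must match generated targets
poetry_requirements(
    name="root",
    overrides={
        "fastapi": {"dependencies": [":root#uvicorn"]},
    },
)
```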
12/03/2025, 8:47 PM
pants publish docker/python:base works fine publishing to Google Cloud, but if I then delete the local version of the image (or go to a different machine with the same user credentials), pants run server:docker (which triggers pulling docker/python:base from the repository) fails with a permission denied error: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource. Yet docker pull works. So somehow the authentication works for pushing but fails for pulling in the pants sandbox.
stale-twilight-79248
12/03/2025, 9:33 PM
$ pants generate-lockfiles
16:20:51.53 [INFO] Initializing scheduler...
16:20:53.64 [INFO] Scheduler initialized.
...
16:29:37.97 [ERROR] 1 Exception encountered:
Engine traceback:
in `generate-lockfiles` goal
IntrinsicError: Error downloading file: error sending request for url (<https://github.com/astral-sh/python-build-standalone/releases/download/20250610/cpython-3.10.18%2B20250610-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz>)
I'm on a network with no direct access to the internet. I have my http_proxy and https_proxy set correctly, and everything else on my system can reach the internet through the proxy.
I have the following in my pants.toml, as per the documentation:
[subprocess-environment]
# pull in proxy settings from environment and pass them through to all subprocesses
env_vars = [
"http_proxy",
"https_proxy",
"no_proxy",
"HTTP_PROXY",
"HTTPS_PROXY",
"NO_PROXY",
]
but still every pants operation requiring the internet fails...
square-elephant-85438
12/03/2025, 9:46 PM
acoustic-librarian-29560
12/04/2025, 4:34 PM
[WARN] Failed to read from remote cache (2 occurrences so far): Internal: "protocol error: received message with invalid compression flag: 60 (valid flags are 0 and 1) while receiving response with status: 400 Bad Request"
I'm not the first one to hit invalid compression flag: 60, but it seems the others were always due to size rather than a 400 Bad Request.
careful-postman-55465
12/04/2025, 8:13 PM
We use pytest-spark (github.com/malexer/pytest-spark). For it to work properly, PySpark must be installed. PySpark isn't a dependency of pytest-spark, though, so our tests that happen to use PySpark directly pass, while others don't. Any way to make this work?
rough-motherboard-57371
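One approach (a sketch; the target address is an assumption about where pyspark is declared in your repo) is to attach pyspark explicitly to the test targets that rely on the pytest-spark fixtures, since dependency inference cannot see a requirement that only the plugin needs at runtime:

```python
# BUILD -- hypothetical address; point it at your actual pyspark requirement
python_tests(
    name="tests",
    dependencies=["3rdparty/python:root#pyspark"],
)
```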
12/04/2025, 10:48 PM
Only when running with PANTS_LEVEL=debug did my command run successfully. I do not yet have a minimal command to repro, but in general I'm wondering if this is a known possibility and a sign of misconfiguration, etc.
many-vase-25409
12/05/2025, 8:30 PM
File "/data/project/digero/.cache/nce/8b397e34f7388bd488c1673f7080b8a63aef66cfe7d264e94f8e8438171ae5fa/tools.pex/.bootstrap/pex/cache/access.py", line 79, in _lock
OSError: [Errno 9] Bad file descriptor
Error: Failed to establish atomic directory /data/project/digero/.cache/nce/5c54a0c25255df2afb0525da856d49eb81b48cff0e32a9c868f77a296af24319/locks/configure-7d68ce086d9fb08674404fe09a2aa1798f91c7d51081d1dc782d0d758d0fef39. Population of work directory failed: Boot binding command failed: exit status: 1