Iliya R
08/26/2024, 12:41 PM
ANN family of checks: https://docs.astral.sh/ruff/rules/#flake8-annotations-ann
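For reference, those rules flag missing type annotations. A minimal illustration (rule codes taken from the linked docs):
```
# `ruff check --select ANN` flags this function with
# ANN001 (missing type annotation for arguments) and
# ANN201 (missing return type annotation for a public function):
def add(a, b):
    return a + b


# Fully annotated, it passes the ANN checks:
def add_typed(a: int, b: int) -> int:
    return a + b
```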

Charles Swartz
10/08/2024, 4:03 PM
`TaskExecutionHook`
First, some background … I have recently been creating custom `rich`-based lifecycle adapters, mainly building on the existing `PrintLn` and `ProgressBar`. I hit a little bit of a snag when using task-based parallel DAGs. Currently, for task-based DAGs, `ProgressBar` uses a bar with an unknown length because - by my assessment - the number of nodes is determined by the `execution_path` in `GraphExecutionHook.run_before_graph_execution`, and this will generally not match the number of tasks. I found `TaskExecutionHook`, and this allows me to log task information, but I do not see a way to determine the number of tasks in this hook.

Just to give this some concrete context – my idea was to create a two-level progress bar for task-based DAGs: a static one that tracks overall tasks and an ephemeral one that tracks the groups within the tasks.
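
A rough sketch of the adapter shape I have in mind (the `run_before_task_execution` / `run_after_task_execution` names are my guess from the `GraphExecutionHook` naming convention; the exact signatures and import path may differ):
```
from rich.progress import Progress

from hamilton import lifecycle  # import path assumed


class TwoLevelProgressBar(lifecycle.GraphExecutionHook, lifecycle.TaskExecutionHook):
    """Static overall task bar; the ephemeral per-group bar would hang off the task hooks."""

    def __init__(self) -> None:
        self.progress = Progress()
        self.overall = None

    def run_before_graph_execution(self, *, execution_path, **kwargs):
        self.progress.start()
        # total=None -> indeterminate bar: len(execution_path) counts nodes,
        # not tasks, so there is no reliable task total to pass here.
        self.overall = self.progress.add_task("tasks", total=None)

    def run_after_graph_execution(self, **kwargs):
        self.progress.stop()

    def run_before_task_execution(self, **kwargs):  # name/signature assumed
        pass  # where the ephemeral per-group bar would start

    def run_after_task_execution(self, **kwargs):  # name/signature assumed
        self.progress.update(self.overall, advance=1)
```
If the task count were available to the hook, `total=None` above could become a real total.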

Would you be open to altering the `TaskExecutionHook` hook in a way that makes this information available? Perhaps where the tasks are initially grouped in `TaskBasedGraphExecutor.execute`? If so, let me know and I will open an issue and/or PR. Thanks!

Piotr Bieszczad
04/01/2025, 6:19 AM
With `parameterize` / `extract` / `step(...).named([...])` a lot of strings are created, and not being able to navigate using them makes it difficult to debug.
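
For example, with `parameterize` from `hamilton.function_modifiers`, one function becomes several nodes whose names exist only as strings, so there is no symbol an IDE can jump to when one of them fails:
```
from hamilton.function_modifiers import parameterize, value


@parameterize(
    us_sales={"region": value("US")},
    eu_sales={"region": value("EU")},
)
def sales(region: str) -> str:
    # The DAG now contains nodes "us_sales" and "eu_sales" - names that
    # live only in strings, not as navigable symbols.
    return f"sales data for {region}"
```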

Charles Swartz
04/04/2025, 3:07 AM
• Add a new modifier `unpack_fields` (a cross between `extract_columns` and `extract_fields`). It would expect the decorated function to return a tuple and unpack field names corresponding to elements in that tuple. For example, the following would create two fields, `text_field="Hello"` and `int_field=42`:
```
@unpack_fields("text_field", "int_field")
def A() -> Tuple[str, int]:
    return "Hello", 42
```
• Update the existing modifier `extract_fields` so that it will accept a list of field names (in addition to the backward-compatible dict, shown for comparison after this list) and then determine the field types from the type annotation. This would only work for homogeneous dictionaries, but it would reduce some redundant keystrokes. For example, the following would extract the standard `X_train`, `X_test`, `y_train`, and `y_test` as `np.ndarray`:
```
@extract_fields(['X_train', 'X_test', 'y_train', 'y_test'])
def train_test_split_func(...) -> Dict[str, np.ndarray]:
    ...
    return {"X_train": ..., "X_test": ..., "y_train": ..., "y_test": ...}
```
Note that there may be a way to make this accept variadic field names as well, but it might be tricky to preserve complete backward compatibility. For example:
```
@extract_fields('X_train', 'X_test', 'y_train', 'y_test')
def train_test_split_func(...) -> Dict[str, np.ndarray]:
    ...
    return {"X_train": ..., "X_test": ..., "y_train": ..., "y_test": ...}
```

Mattias Fornander US
04/24/2025, 6:54 PM
`attrs_stats.py` next to `pydantic_stats.py`?