#general

- n
Naren Castellon

02/02/2024, 8:21 PM
I am trying to train a multivariate model with MLForecast. When I build the model there is no problem, and neither with the fit() method; the problem is when I use the predict() method:

```python
# fit the models
mlf.fit(train, fitted=True, static_features=[],
        prediction_intervals=PredictionIntervals(n_windows=5, h=16, method="conformal_distribution"))

# predict model
forecast_df = mlf.predict(h=12, level=[80, 95], X_df=exo)
```

It gives me the following error:

```
ValueError: Found missing inputs in X_df. It should have one row per id and date for the complete forecasting horizon
```

The problem arises when I add the X_df, but with the univariate model everything works perfectly.
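That error usually means `X_df` does not contain every (id, date) pair over the forecast horizon. A minimal pandas sketch of building a complete exogenous skeleton (the `unique_id`/`ds` column names and the horizon of 12 are assumptions based on the message):

```python
import pandas as pd

def make_future_exog(last_dates: dict, h: int, freq: str = "D") -> pd.DataFrame:
    """Build one row per id and future date covering the full horizon h."""
    frames = []
    for uid, last in last_dates.items():
        # periods=h+1 then [1:] gives the h dates strictly after the last observation
        future = pd.date_range(start=last, periods=h + 1, freq=freq)[1:]
        frames.append(pd.DataFrame({"unique_id": uid, "ds": future}))
    return pd.concat(frames, ignore_index=True)

# one row per id per future date -> 2 ids * 12 steps = 24 rows
exo_skeleton = make_future_exog(
    {"A": pd.Timestamp("2024-01-31"), "B": pd.Timestamp("2024-01-31")}, h=12
)
```

The exogenous columns can then be merged onto this skeleton before passing it as `X_df`.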


- m
Mairon Cesar Simoes Chaves

02/02/2024, 8:30 PM
Hello everyone, I hope you are all well. I'm trying to create custom variables from the date field by following the documentation at https://nixtlaverse.nixtla.io/mlforecast/docs/how-to-guides/custom_date_features.html. However, I would like the result to be dummies rather than integers; is this possible? I tried this:

```python
def month_dummies(dates):
    return pd.get_dummies(dates.month, prefix='month')

mlf = MLForecast(
    freq='D',
    models=[KNeighborsRegressor()],
    target_transforms=[Differences([1, 7])],
    lags=[1, 7, 14, 21, 28],
    date_features=[month_dummies],
    num_threads=4,
)
```

But there was an error.
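As far as I know, each `date_features` callable is expected to return a single array-like feature, not a whole frame of dummies, so one workaround is to precompute the one-hot month columns and pass them as exogenous features; a sketch under that assumption (the `unique_id`/`ds`/`y` layout is the usual MLForecast format):

```python
import pandas as pd

def add_month_dummies(df: pd.DataFrame, date_col: str = "ds") -> pd.DataFrame:
    """Attach one-hot month columns that can be used as exogenous features."""
    dummies = pd.get_dummies(df[date_col].dt.month, prefix="month", dtype=int)
    return pd.concat([df, dummies], axis=1)

df = pd.DataFrame({
    "unique_id": ["A"] * 3,
    "ds": pd.to_datetime(["2024-01-31", "2024-02-01", "2024-03-01"]),
    "y": [1.0, 2.0, 3.0],
})
out = add_month_dummies(df)  # adds month_1, month_2, month_3 columns
```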


- n
Naren Castellon

02/02/2024, 10:26 PM
I'm training an MLForecast model, **@José Morales**, and this is a different problem! It gives me the following error:

```
ValueError: the cross-validation produced fewer results than expected. Verify that the frequency set in the MLForecast constructor matches the frequency of your series and that there are no missing periods.
```

If I remove the prediction intervals parameter, everything works perfectly.
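That error typically surfaces when some series have gaps relative to the declared frequency (the conformal intervals need extra cross-validation windows, so gaps that went unnoticed before start to matter). A quick pandas check, with column names assumed:

```python
import pandas as pd

def find_gapped_series(df: pd.DataFrame, freq: str = "D") -> list:
    """Return ids whose timestamps are not a complete run at the given frequency."""
    gapped = []
    for uid, g in df.groupby("unique_id"):
        expected = pd.date_range(g["ds"].min(), g["ds"].max(), freq=freq)
        if len(expected) != len(g):
            gapped.append(uid)
    return gapped

df = pd.DataFrame({
    "unique_id": ["A", "A", "A", "B", "B"],
    "ds": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03",
                          "2024-01-01", "2024-01-03"]),  # B is missing Jan 2
    "y": [1, 2, 3, 4, 5],
})
gaps = find_gapped_series(df)  # -> ["B"]
```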


- o
Omar Shaikh

02/05/2024, 6:40 AM
Hi everyone, I hope this is the right channel to discuss this. I'm fairly new to time series analysis and was able to build a Prophet-based solution that worked reasonably well for us for anomaly detection. I tried replacing Prophet with the statsforecast Prophet adapter, but it looks like the model is overfitting the data: the blue line isn't as smooth as I would like it to be. As a beginner, are there some parameters or things I should look up to help make the blue line smoother, like Prophet's? (The future forecast also looks odd for the adapter.) The frequency of the data is 10 minutes, approx. 6000 values. Attaching pictures.

- v
Valeriy

02/05/2024, 1:59 PM
Do Nixtla libraries have the capability to create ensembles of stats/ml/dl models, for example based on performance in cross-validation?
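There is no single built-in ensembler as far as I know, but since all the libraries return forecasts as plain data frames, a simple sketch is to weight each model's forecast column by its inverse cross-validation error (the model names and MAE values here are made up for illustration):

```python
import pandas as pd

def inverse_error_ensemble(forecasts: pd.DataFrame, cv_mae: dict) -> pd.Series:
    """Combine model forecast columns, weighting each by 1 / its CV MAE."""
    weights = {m: 1.0 / e for m, e in cv_mae.items()}
    total = sum(weights.values())
    return sum(forecasts[m] * (w / total) for m, w in weights.items())

fc = pd.DataFrame({"AutoARIMA": [10.0, 12.0], "LGBMRegressor": [14.0, 16.0]})
# AutoARIMA has 3x lower CV MAE, so it gets weight 0.75 vs 0.25
ens = inverse_error_ensemble(fc, {"AutoARIMA": 1.0, "LGBMRegressor": 3.0})
```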

- v
Valeriy

02/05/2024, 4:26 PM
Is there a way to speed up hierarchicalforecast? It seems to be quite slow. Does it run on Numba?

- o
Omar Shaikh

02/08/2024, 4:58 AM
Could someone guide me on how I can replace Prophet completely using statsforecast, without using the adapter?
- p
Petteri Teikari

02/08/2024, 3:16 PM
Hello, I have a similar question to the one asked earlier by **@Taha** (https://nixtlacommunity.slack.com/archives/C031EJJMH46/p1706801837854849). I wanted to fine-tune the forecaster on my own niche biosignals (see attached image), then save the model and use it for future data (and for my validation split); is this even possible? I was mainly interested in how well the historical forecasting works for imputing the chunks missing at random from my data, which was probably not the intended use of this forecasting library. Do you have plans to make this a more "generic foundation model", with an option for fine-tuning on your own data, and for using it on "normal downstream tasks" such as outlier detection, imputation, denoising, reconstruction, and classification? Also, my df is probably too large, as I get failed attempts when fine-tuning.

- s
s k

02/10/2024, 3:21 PM
I would like to ask: will the credit be replenished to 420 per month?

- m
Miro Lavi

02/11/2024, 7:36 AM
Hi all, I'm forecasting 'y' with annual seasonality (52x7), varied promotion calendar effects, and an increasing trend beyond the training data's maximum. Using statsforecast AutoARIMA with Fourier terms and promotion dummies, I achieved a 15% MAPE, but CV results range from 5-40% monthly. I believe the issue is AutoARIMA's inability to account for interrelationships between exogenous regressors. How can I address this? Should I be using a different model? [Edit]: the data is stationary and the adfuller p-value < 0.05.
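For reference, Fourier terms like the ones mentioned can be generated directly with numpy and passed as exogenous regressors; a minimal sketch (the period of 365.25 and the number of harmonics are assumptions, not values from the message):

```python
import numpy as np

def fourier_terms(n: int, period: float, k: int) -> np.ndarray:
    """Return an (n, 2k) matrix of sin/cos seasonal regressors."""
    t = np.arange(1, n + 1)
    cols = []
    for i in range(1, k + 1):
        cols.append(np.sin(2 * np.pi * i * t / period))
        cols.append(np.cos(2 * np.pi * i * t / period))
    return np.column_stack(cols)

# two years of daily data, 3 harmonics of the yearly cycle
X = fourier_terms(n=730, period=365.25, k=3)
```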

- p
Pete

02/11/2024, 3:47 PM
Hello Time Series Forecasting experts! I've been playing around with the idea of leveraging statistical forecasting to predict outcomes in Traumatic Brain Injury (TBI) research. Our focus is on predicting critical clinical events based on biomarker trends, such as the necessity for CT scans, MRIs, and assessing mortality risks. Biomarkers here are quantifiable biological parameters measured in blood at various time points (admission, every second day for a week, and follow-up at 3 months) that serve as biochemical indicators of brain injury severity and progression.

With the StatsForecast library, we're looking to tap into its robust suite of forecasting models. However, the intricacies of our project demand not just any forecasting approach, but one adept at integrating a multifaceted array of exogenous variables, from clinical and demographic data to medical imaging metrics. These variables are pivotal, given their profound implications for TBI outcomes and the temporal dynamics they introduce into our models.

**Integrating Exogenous Variables for Enhanced Forecasts:** How can we best harness StatsForecast's capabilities to integrate these variables effectively, ensuring they enrich our models without overshadowing inherent time series patterns? Are there specific features within StatsForecast that facilitate this integration, especially for variables with significant temporal shifts?

**Adapting Time Series Forecasting for Predicting Clinically Relevant Events:** The heart of our project lies in forecasting not just continuous outcomes, but binary and multiclass clinical events. This intersection between traditional time series forecasting and classification or regression models presents a unique challenge. I'm curious about strategies that have proven effective in bridging this gap, particularly in dynamically adjusting models to the evolving datasets characteristic of TBI patient care. **Can StatsForecast be integrated with classification models, or are there best practices within the community for such hybrid forecasting approaches?**

**Dynamic Modeling and Real-Time Data Handling:** Given the dynamic nature of TBI patient data, how do we keep our models both responsive and precise over time? Insights on implementing rolling forecasts or incremental model updates within StatsForecast would be incredibly valuable, especially for accommodating real-time data streams.

**Collaboration Opportunities:** I am exploring various research questions, such as predicting recovery trajectories based on biomarker trends and the impact of various exogenous variables. If these or any other areas spark an interest, I'm keen on discussing potential collaborations, be it through direct contribution, discussions, or co-authoring research findings. Your expertise could significantly advance our understanding and innovation in TBI care and research.

Thank you for considering my inquiry. I look forward to the enriching discussions and potential partnerships this might bring.

Warm regards,
Peter

- j
Jonathan

02/12/2024, 1:59 PM
Hi, with MLForecast, if we use dynamic exogenous variables whose future values are not known and apply a direct forecasting strategy (one model per step), is it still required to pass future values of the dynamic exogenous variables to X_df, considering that a direct strategy will not use them? I assume we still need to pass the static exogenous variables.

- m
Mairon Cesar Simoes Chaves

02/12/2024, 6:30 PM
Hello team and community, I've been working on some useful functions for my last project, particularly focused on intermittent sales data, a common scenario when dealing with large datasets containing thousands of SKUs, many of which exhibit sporadic sales. These functions are implemented with Numba's `@njit` decorator to ensure fast and efficient execution, which is crucial for handling large volumes of data. I would like to share these functions with you, hoping they can be useful for others in their projects. Here are the functions:

1. **average_days_with_sales**: Calculates the average number of days with sales over a specified lag period, useful for understanding the sales frequency of each SKU.

```python
@njit
def average_days_with_sales(x, lag):
    n = len(x)
    result = np.full(n, 0.0)  # Initializes the result with 0.0 instead of NaN
    for i in range(lag - 1, n):
        sum_positive_sales = np.sum(x[i - lag + 1:i + 1] > 0)
        result[i] = sum_positive_sales / lag if lag > 0 else 0.0
    return result
```

2. **linear_log_trend**: Generates a linear logarithmic trend for a time series.

```python
@njit
def linear_log_trend(x):
    n = len(x)
    t = np.arange(1, n + 1)  # Creates a time array from 1 to n
    log_trend = np.log(t)    # Calculates the natural logarithm of each time point
    return log_trend
```

3. **rolling_correlation**: Calculates the rolling correlation of a time series with a specified lag.

```python
@njit
def rolling_correlation(x, lag):
    n = len(x)
    result = np.full(n, np.nan)  # Initializes the result with NaNs
    for i in range(lag, n):
        x1 = x[i - lag:i]
        x2 = x[i - lag + 1:i + 1]
        mean_x1 = np.mean(x1)
        mean_x2 = np.mean(x2)
        std_x1 = np.std(x1)
        std_x2 = np.std(x2)
        if std_x1 == 0.0 or std_x2 == 0.0:
            result[i] = 0.0  # Avoids division by zero
        else:
            cov = np.mean((x1 - mean_x1) * (x2 - mean_x2))
            corr = cov / (std_x1 * std_x2)
            result[i] = corr
    return result
```

4. **rolling_cv**: Calculates the rolling coefficient of variation (CV).

```python
@njit
def rolling_cv(x, window):
    n = len(x)
    result = np.full(n, 0.0)  # Initializes with 0.0 instead of NaN
    for i in range(window - 1, n):
        window_data = x[i - window + 1:i + 1]
        sum_data = 0.0
        sum_squares = 0.0
        for val in window_data:
            sum_data += val
            sum_squares += val * val
        mean = sum_data / window if window > 0 else 0.0
        if mean == 0.0:
            result[i] = 0.0  # Avoids division by zero
        else:
            std = np.sqrt(sum_squares / window - mean * mean)
            result[i] = std / mean
    return result
```

5. **rolling_mean_positive_only**: Calculates the rolling mean considering only positive sales days, ignoring the effects of zero demand.

```python
@njit
def rolling_mean_positive_only(x, window):
    n = len(x)
    result = np.full(n, 0.0)  # Initializes with 0.0 instead of NaN
    for i in range(window - 1, n):
        window_data = x[i - window + 1:i + 1]
        sum_data = 0.0
        count = 0
        for val in window_data:
            if val > 0:
                sum_data += val
                count += 1
        if count > 0:
            result[i] = sum_data / count
        else:
            result[i] = 0.0  # Window without positive values, mean is 0
    return result
```

6. **rolling_skewness**: Calculates the rolling skewness to understand the distribution of sales over a time window.

```python
@njit
def rolling_skewness(x, window):
    n = len(x)
    result = np.full(n, 0.0)  # Initializes with 0.0 instead of NaN
    for i in range(window - 1, n):
        window_data = x[i - window + 1:i + 1]
        mean = np.mean(window_data)
        std = np.std(window_data)
        if std > 0:
            skewness = np.mean((window_data - mean) ** 3) / (std ** 3)
        else:
            skewness = 0.0
        result[i] = skewness
    return result
```

7. **rolling_kurtosis**: Calculates the rolling kurtosis, helping identify the presence of outliers in sales and how the data deviates from a normal distribution.

```python
@njit
def rolling_kurtosis(x, window):
    n = len(x)
    result = np.full(n, 0.0)  # Initializes with 0.0 instead of NaN
    for i in range(window - 1, n):
        window_data = x[i - window + 1:i + 1]
        mean = np.mean(window_data)
        std = np.std(window_data)
        if std > 0:
            kurtosis = np.mean((window_data - mean) ** 4) / (std ** 4) - 3
        else:
            kurtosis = 0.0
        result[i] = kurtosis
    return result
```

I believe these functions could be extremely useful for detailed time series analysis in sales contexts, especially for dealing with the intermittent nature of many SKUs. They are designed to be flexible and can be easily adapted to meet specific analysis needs. In the attached image is the result of an XGBoost on the last panel data job I worked on.


- l
LinenBot

02/13/2024, 5:24 AM
`Mariana Menchero García` joined #general. Welcome!

- t
Toni Borders

02/13/2024, 9:16 AM
Hi Nixtla, I am hitting the following error in hierarchicalforecast when I attempt to add the prediction intervals at the various tiers:

> raise Exception(f'Please include `{model_name}` prediction intervals in `Y_hat_df`')
> Exception: Please include `{model_name}` prediction intervals in `Y_hat_df`

I am adding the intervals as follows. First, I add the level parameter when forecasting the base predictions:

```python
fcst = StatsForecast(
    df=Y_train_df,
    models=[AutoETS(season_length=season)],
    freq=data_freq,
    n_jobs=-1,
)
Y_hat_df = fcst.forecast(h=h, level=[95], fitted=True)
Y_fitted_df = fcst.forecast_fitted_values()
```

Then I add the level parameter when reconciling:

```python
reconcilers = [
    BottomUp(),
    MinTrace(method='mint_shrink', nonnegative=True),
    MinTrace(method='ols', nonnegative=True),
]
Y_rec_df = hrec.reconcile(
    Y_hat_df=Y_hat_df,
    Y_df=Y_fitted_df,  # Y_fitted_df or Y_train_df
    S=S_df,
    tags=tags,
    level=[95],
    intervals_method='permbu',
)
```

I can see that Y_hat_df has the following columns:

```
unique_id ds AutoETS AutoETS-lo-95 AutoETS-hi-95
```

The error is getting thrown from hierarchicalforecast/core.py, which seems to be looking for column names like *-lo and *-hi:

```python
pi_model_names = [name for name in model_names if ('-lo' in name or '-hi' in name)]
pi_model_name = [pi_name for pi_name in pi_model_names if model_name in pi_name]
pi = len(pi_model_name) > 0
n_series = len(Y_hat_df.index.unique())
if not pi:
    raise Exception(f'Please include `{model_name}` prediction intervals in `Y_hat_df`')
```

Is there something I am missing?


- v
Valeriy

02/13/2024, 2:32 PM
Kudos to the team: just like statsforecast, I love the speed and models in NeuralForecast. Great selection of models and blazing speed. ⚡
- m
Max (Nixtla)

02/13/2024, 2:47 PM
<!channel>, many of you asked for our take on Lag-Llama. We think the model represents an important milestone in open-source foundational models, but our reproducible benchmark showed it is almost 42% less accurate than a seasonal naive. Here is the link: https://twitter.com/azulgarza_/status/1757413888847487132

- r
Roman Nikitin

02/13/2024, 2:55 PM
- h
Haris Rashid

02/15/2024, 8:55 AM
Does Nixtla have any documentation on ways to speed up computation for multiple time series and multiple models?
- l
Luis Enrique Patiño

02/15/2024, 3:29 PM
Hi everyone, any idea why I get this error? I'm using mlforecast 0.11.6:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<command-3502381988876672> in <module>
      7
      8 from statsforecast import StatsForecast
----> 9 from mlforecast import MLForecast
     10 from mlforecast.target_transforms import Differences
     11 from sklearn.preprocessing import FunctionTransformer

/local_disk0/.ephemeral_nfs/envs/pythonEnv-5d38fe17-c8d0-4951-8ecb-9da422d71277/lib/python3.8/site-packages/mlforecast/__init__.py in <module>
      1 __version__ = "0.11.5"
      2 __all__ = ['MLForecast']
----> 3 from mlforecast.forecast import MLForecast

/local_disk0/.ephemeral_nfs/envs/pythonEnv-5d38fe17-c8d0-4951-8ecb-9da422d71277/lib/python3.8/site-packages/mlforecast/forecast.py in <module>
     16 from utilsforecast.compat import DataFrame
     17
---> 18 from mlforecast.core import (
     19     DateFeature,
     20     Freq,

/local_disk0/.ephemeral_nfs/envs/pythonEnv-5d38fe17-c8d0-4951-8ecb-9da422d71277/lib/python3.8/site-packages/mlforecast/core.py in <module>
     26 from utilsforecast.validation import validate_format, validate_freq
     27
---> 28 from .compat import CORE_INSTALLED, BaseLagTransform, Lag
     29 from .grouped_array import GroupedArray
     30 from mlforecast.target_transforms import (

/local_disk0/.ephemeral_nfs/envs/pythonEnv-5d38fe17-c8d0-4951-8ecb-9da422d71277/lib/python3.8/site-packages/mlforecast/compat.py in <module>
     10     from coreforecast.grouped_array import GroupedArray as CoreGroupedArray
     11
---> 12     from mlforecast.lag_transforms import BaseLagTransform, Lag
     13
     14     CORE_INSTALLED = True

/local_disk0/.ephemeral_nfs/envs/pythonEnv-5d38fe17-c8d0-4951-8ecb-9da422d71277/lib/python3.8/site-packages/mlforecast/lag_transforms.py in BaseLagTransform()
     24 # %% ../nbs/lag_transforms.ipynb 4
     25 class BaseLagTransform(BaseEstimator):
---> 26     _core_tfm: core_tfms.BaseLagTransform

AttributeError: module 'coreforecast.lag_transforms' has no attribute 'BaseLagTransform'
```
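The final AttributeError suggests a version mismatch: that mlforecast build expects a `BaseLagTransform` that the installed coreforecast does not expose, so upgrading the two packages together (for example `pip install -U mlforecast coreforecast`) is a plausible fix. A small, purely illustrative helper to see what is actually installed in the environment:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg: str):
    """Return the installed version string of pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Print what the environment actually resolved, to compare against release notes
for pkg in ("mlforecast", "coreforecast"):
    print(pkg, installed_version(pkg))
```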


- n
Naren Castellon

02/16/2024, 8:43 PM
Hello Team Nixtla, I have a couple of questions.

1. Generally, each NeuralForecast model gives me two outputs when making predictions, for example LSTM and LSTM-median. What is LSTM-median, and can I use LSTM-median itself as the forecast, for example for sales of a certain item?

2. In the case of the TCN model, when I make the predictions I get several outputs. In a couple of tests on the data I am training, TCN-mu-1 is more precise than TCN itself. What are TCN-mu-1 and TCN-std-1 (I know they are the mean and standard deviation), and which prediction should I take into account? The same thing happened to me with the NHITS model: NHITS-loc is more precise than NHITS, so I don't know which one to use if I want to apply it, for example, to forecast sales.

3. Can I automate several models at the same time? I have tested this and I get an error, for example:

```python
# Configuration for AutoRNN
config_RNN = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)

# Configuration for AutoLSTM
config_LSTM = dict(max_steps=2, val_check_steps=1, input_size=-1, encoder_hidden_size=8)

models = [AutoRNN(h=18, config=config_RNN, num_samples=1, cpus=1),
          AutoLSTM(h=18, config=config_LSTM, num_samples=1, cpus=1)]
```


- b
Bahador Biglari

02/17/2024, 10:18 PM
Hello all, I have the following two questions: I want to use Nixtla for cash flow forecasting and what-if analysis. How should I do it? And how can I incorporate it into a LangChain application?
- m
Mairon Cesar Simoes Chaves

02/19/2024, 8:18 PM
Hey team, I encountered an issue with one of the functions in our codebase that I'd like to discuss. The function `moving_croston`, used for estimating the interval between sales using the Croston model, seems to be causing errors when used as a moving feature in MLForecast:

```python
def moving_croston(x, window, alpha=0.1):
    """
    Calculates an estimate of the interval between sales using the Croston model
    applied in a moving manner to the time series x, returning the most recent value.

    Parameters:
        x: Time series of sales.
        window: Size of the window for the moving application of the Croston model.
        alpha: Smoothing parameter for the Croston model.

    Returns:
        The most recent estimated value of the interval between sales.
    """
    def croston_method(y, alpha):
        # Initializations
        demands = y[y > 0]
        if len(demands) == 0:
            return np.nan  # Returns NaN if there are no sales
        intervals = np.diff(np.where(y > 0))[0] + 1  # +1 to count the days correctly
        s_demands = demands[0]
        s_intervals = intervals.mean() if len(intervals) > 0 else np.nan
        # Exponential smoothing for demands and intervals
        for d in demands[1:]:
            s_demands = alpha * d + (1 - alpha) * s_demands
        for i in intervals:
            s_intervals = alpha * i + (1 - alpha) * s_intervals
        return s_intervals

    # Applies the Croston model in a moving manner
    if len(x) < window:
        return np.nan  # Returns NaN if the series is shorter than the window
    interval_estimate = croston_method(x[-window:], alpha)
    return interval_estimate
```

The function itself doesn't use Numba for optimization, but the error message points to a Numba-related issue. It's perplexing, because we're not explicitly using Numba in this function, although we do use it in other moving features without any problems. I've reviewed the code and couldn't find any obvious reason for the error. It might be related to how the function interacts with other parts of the codebase, or how it's integrated into the MLForecast pipeline. I'd appreciate it if we could take some time to troubleshoot this together and figure out the root cause. If anyone has insights or suggestions on how to approach this issue, please feel free to share.

Best regards
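For reference, the interval-smoothing step at the heart of Croston's method can be exercised on its own with a toy series. This is a condensed, self-contained rewrite (the function name is mine, not from the codebase above):

```python
import numpy as np

def smoothed_sales_interval(y: np.ndarray, alpha: float = 0.1) -> float:
    """Exponentially smooth the gaps (in periods) between positive-sales days."""
    positive_idx = np.where(y > 0)[0]
    if len(positive_idx) < 2:
        return np.nan  # need at least two sales to define an interval
    intervals = np.diff(positive_idx)  # gaps between consecutive sale days
    s = float(intervals[0])
    for gap in intervals[1:]:
        s = alpha * gap + (1 - alpha) * s
    return s

y = np.array([1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 3.0])  # a sale every 3 days
est = smoothed_sales_interval(y)  # -> 3.0 for a perfectly regular pattern
```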


- j
Jonathan

02/20/2024, 1:59 PM
Hi, with StatsForecast, MLForecast and NeuralForecast, is there a way to easily compute the MASE and CRPS when doing cross-validation? Do you have an example of that?
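For reference, MASE by its definition (the forecast MAE scaled by the in-sample MAE of a seasonal naive) is straightforward to compute directly on cross-validation output; a pure-numpy sketch:

```python
import numpy as np

def mase(y_true, y_pred, y_train, seasonality: int = 1) -> float:
    """Mean Absolute Scaled Error: forecast MAE over seasonal-naive in-sample MAE."""
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    mae = np.mean(np.abs(y_true - y_pred))
    scale = np.mean(np.abs(y_train[seasonality:] - y_train[:-seasonality]))
    return float(mae / scale)

y_train = [10, 12, 14, 16]                 # naive (seasonality=1) in-sample MAE = 2
score = mase([18, 20], [17, 21], y_train)  # forecast MAE = 1 -> MASE = 0.5
```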

- j
juanitorduz

02/20/2024, 10:55 PM
Hi! I've been a Nixtla user for a bit less than a year and I love your work! Looking forward to sharing and learning from you all!
- c
Chenghao Liu

02/21/2024, 6:47 AM
Hi, when I use TimeGPT for forecasting, I see a warning message about restricting the input. Will TimeGPT automatically truncate the input length? What is the max input length for a forecasting task?

- j
Jonathan

02/23/2024, 11:05 AMI have check the documentation of utilsforecast for evaluation metric associated with quantile score like quantile_loss, mqloss and scaled_crps. As far as I know, to calculate a quantile score you need to first predict a quantile then evaluate this quantile with the observed data. For example, in the example found in the documentation (https://nixtlaverse.nixtla.io/utilsforecast/evaluation.html) the series is generated with a 80 % high/low probabilistic interval so for quantile P10 and P90. Then to calculate the quantile score for P10, we should pass in the formulae q=0.1 and the predict quantile P10 in order to calculate the quantile score. The example and source code use only the observed data and the point forecast so I'm confused. Maybe I'm missing something here.j- 2
- 4
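For what it's worth, the quantile (pinball) loss itself is easy to state directly, which can help check what any library implementation is doing; a definitional numpy sketch:

```python
import numpy as np

def quantile_loss(y_true, y_q, q: float) -> float:
    """Pinball loss: penalizes under-prediction by q and over-prediction by 1-q."""
    y_true, y_q = np.asarray(y_true, float), np.asarray(y_q, float)
    diff = y_true - y_q
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# For q=0.1, an actual above the predicted P10 is cheap; the reverse is costly.
loss_under = quantile_loss([10.0], [8.0], q=0.1)   # 0.1 * 2 = 0.2
loss_over = quantile_loss([10.0], [12.0], q=0.1)   # 0.9 * 2 = 1.8
```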

- s
Scottfree Analytics LLC

02/25/2024, 11:44 PM
Does Nixtla support multi-horizon models, i.e., the ability to predict different horizons with one model?

- d
Dimitris Floros

02/27/2024, 2:15 PM
Hello all! I have been using NeuralForecast and MLForecast, and I have the following question: if I wish to make predictions only at, e.g., 8am every day, is there a way to train a model so that it only attempts to predict the next h hours from 8am, rather than on a rolling horizon? Or is that not beneficial, and am I better off training with rolling forecasts and only using the model at 8am? Do you have any experiments on the advantages/disadvantages of these types of forecasting needs? Thank you!

- a
Alejandro Holguin Mora

03/01/2024, 5:02 PM
Within your experiments / business cases, do you have forecast results for time series corresponding to financial markets such as EURUSD? If so, would you be able to share them with me? Or could you help me with suggestions on which tools from your portfolio (paid / open source) I should use to begin to understand how the product works?