The Watts and the Waves
05/07/2021, 7:39 AM – 2:12 PM
[Messages from Neotheyoshare, WTLNetwork (❌ Voice requests), pi, 1-fluoropentane, and zwf — message text not preserved in this export]
05/07/2021, 6:22 PM
FP16 Run: False
Dynamic Loss Scaling: True
Distributed Run: False
cuDNN Enabled: True
cuDNN Benchmark: False
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-e8a871a0e98d> in <module>()
5 print('cuDNN Benchmark:', hparams.cudnn_benchmark)
6 train(output_directory, log_directory, checkpoint_path,
----> 7 warm_start, n_gpus, rank, group_name, hparams, log_directory2)
[... 5 frames hidden by the Colab traceback viewer ...]
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in <lambda>(t)
489 Module: self
490 """
--> 491 return self._apply(lambda t: t.cuda(device))
492
493 def xpu(self: T, device: Optional[Union[int, device]] = None) -> T:
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 15.78 GiB total capacity; 14.42 GiB already allocated; 2.75 MiB free; 14.48 GiB reserved in total by PyTorch)
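A quick unpacking of the numbers in that OOM message, since they look contradictory at first glance (subtracting allocated from total suggests over a gigabyte should be free, yet a 2 MiB request failed). This is a plain-Python sketch of the arithmetic, using only the values the traceback itself reports:

```python
# Values exactly as reported in the RuntimeError above.
total_gib = 15.78        # GPU 0 total capacity
allocated_gib = 14.42    # memory held by live tensors
reserved_gib = 14.48     # held by PyTorch's caching allocator
free_mib = 2.75          # actually allocatable at failure time
request_mib = 2.00       # the allocation that failed

# Naive subtraction suggests well over a gigabyte is free:
naive_free_gib = round(total_gib - allocated_gib, 2)
print(f"total - allocated = {naive_free_gib} GiB")  # 1.36 GiB

# But the caching allocator had already reserved 14.48 GiB, and the CUDA
# context plus fragmentation consume the remainder, so only 2.75 MiB was
# left -- and even a 2.00 MiB request can fail when no single contiguous
# free block is large enough after the allocator's block rounding.
print(f"actually free = {free_mib} MiB")
```

In a Colab TTS notebook like this one, the usual first moves are to restart the runtime (to release leaked allocations) and to lower the training batch size in the hyperparameters before calling `train(...)`; the exact field name (e.g. `hparams.batch_size`) depends on the notebook's `create_hparams` and is an assumption here, not something the log confirms.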
05/07/2021, 6:23 – 6:31 PM
[Replies from WTLNetwork (❌ Voice requests) and pi — message text not preserved in this export]