# tacotron-2-support
  • mega b

    09/15/2022, 1:32 AM
    Yes
  • {K EY1} (Kei)

    09/15/2022, 2:35 AM
    Damn. What went wrong during training?
  • haru0l

    09/15/2022, 10:21 AM
    probably the model somehow didn't save properly when training ended
  • {K EY1} (Kei)

    09/15/2022, 1:42 PM
    Damn
  • mepc36

    09/15/2022, 1:48 PM
    Someone mind helping me understand what these "loss" logs mean for the accuracy/quality of the model? It's the output from a radtts model training. As in, which of these do I care about? Do these numbers indicate a bad/good model?
    ```
    iteration: 464  (3.30 s)  |  lr: 0.001  |  loss_mel: -1.067  |  loss_prior_mel: 0.517  |  loss_ctc: 3.740  |  loss_duration: 0.286  |  loss_f0: 0.005  |  loss_energy: 0.005  |  loss_vpred: 1.372  |  binarization_loss: 0.791
    ```
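A quick way to make sense of lines like this is to parse out each term and watch its trend over iterations rather than its absolute value; RADTTS is a flow-based model, so the mel term is a log-likelihood-style quantity and can legitimately be negative. A minimal sketch, assuming only the log format shown above:

```python
# Minimal sketch: pull "name: number" pairs out of a RADTTS-style log line
# (format assumed from the sample above) so each loss can be tracked/plotted.
import re

SAMPLE = ("iteration: 464  (3.30 s)  |  lr: 0.001  |  loss_mel: -1.067  |  "
          "loss_prior_mel: 0.517  |  loss_ctc: 3.740  |  loss_duration: 0.286  |  "
          "loss_f0: 0.005  |  loss_energy: 0.005  |  loss_vpred: 1.372  |  "
          "binarization_loss: 0.791")

def parse_log_line(line):
    """Return {name: float} for every 'name: number' pair in the line."""
    return {k: float(v) for k, v in re.findall(r"(\w+):\s*(-?\d+(?:\.\d+)?)", line)}

print(parse_log_line(SAMPLE))
# {'iteration': 464.0, 'lr': 0.001, 'loss_mel': -1.067, ...}
```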
  • mepc36

    09/15/2022, 1:48 PM
    It's from iteration #464 in epoch #1, so it's early yet
  • {K EY1} (Kei)

    09/15/2022, 1:52 PM
    It all looks normal to me, but I'm not sure which loss to look at in order to know when to stop. I'm not experienced with RADTTS training
  • mepc36

    09/15/2022, 2:08 PM
    That's my question too. Here's the repo: https://github.com/NVIDIA/radtts
  • mepc36

    09/15/2022, 2:08 PM
    So is there no universal measurement of loss (like 5, or 0, or negative 1 million) at which we can say, "This model is a good model"?
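There isn't one: the different terms aren't on comparable scales, and some (like flow log-likelihoods) can be negative, so in practice people watch whether a held-out validation loss is still improving rather than comparing it to a universal threshold. A hypothetical helper illustrating that idea (the names and patience value are illustrative, not from any particular repo):

```python
# Hypothetical early-stopping check: stop when validation loss has not
# improved for `patience` consecutive evaluations.
def should_stop(val_losses, patience=10):
    if len(val_losses) <= patience:
        return False
    best_earlier = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_earlier

# e.g. should_stop([3.1, 2.4, 2.0, 2.0, 2.1], patience=2) -> True
```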
  • mepc36

    09/15/2022, 2:08 PM
    I read this article but it didn't answer those questions: https://towardsdatascience.com/understanding-loss-functions-the-smart-way-904266e9393
  • {K EY1} (Kei)

    09/15/2022, 2:16 PM
    Yeah, I'm not sure
  • {K EY1} (Kei)

    09/15/2022, 2:16 PM
    Are you running tensorboard?
  • {K EY1} (Kei)

    09/15/2022, 2:17 PM
    You can maybe check the graphs, or audio inference if there is any
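If the training script writes TensorBoard event files, the curves can be viewed straight from a Colab cell; the logdir path below is an assumption, so point it at wherever your run actually logs:

```python
# In a Colab/Jupyter cell; the logdir path is an assumption about where
# the training run writes its event files.
%load_ext tensorboard
%tensorboard --logdir /content/outdir/logs
```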
  • mepc36

    09/15/2022, 3:07 PM
    Not sure, I'll check... I guess I'm really just looking for a standardized tool that will tell me how good my model is. Any ideas?
  • mepc36

    09/15/2022, 3:08 PM
    Like the graphs that the tacotron2 notebook in #841437191073955920 publishes during training: https://colab.research.google.com/drive/1CIPTTj94EocZe2w5-zCDC3G44K9OaFal
  • Odcub Rodriguez

    09/16/2022, 12:09 AM
    should I request a model? or.. what?
  • (Dawn) Will Draw Fictional Women

    09/16/2022, 12:14 AM
    no?
  • Odcub Rodriguez

    09/16/2022, 12:19 AM
    why no with a question mark?
  • OccultMC

    09/16/2022, 2:35 AM
    "He found his art never progressed when he literally used his sweat and tears."
  • OccultMC

    09/16/2022, 2:35 AM
    what are some good tips for scaling a model
  • OccultMC

    09/16/2022, 2:35 AM
    or some tips on making the voice smoother
  • PUMPKINEATER

    09/16/2022, 3:35 AM
    Is that TortoiseTTS?
  • (Dawn) Will Draw Fictional Women

    09/16/2022, 3:36 AM
    why would he post tortoise clips in fucking tacotron support
  • PUMPKINEATER

    09/16/2022, 3:37 AM
    Idk, it just sounds like TortoiseTTS
  • tylerdurdenceketi

    09/16/2022, 8:35 AM
    Hello, I get a "ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1])" error. Can you help me?
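That ValueError typically comes from a BatchNorm layer being asked to compute batch statistics over a single value per channel, most often because the effective batch size is 1 (or the last batch of an epoch came out as 1). A minimal repro of the same error:

```python
# BatchNorm1d in training mode needs more than one value per channel;
# a (1, 512, 1) input provides exactly one. Common fixes: batch_size > 1,
# drop_last=True on the DataLoader, or model.eval() during inference.
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(512).train()
x = torch.randn(1, 512, 1)  # (batch=1, channels=512, length=1)
bn(x)  # ValueError: Expected more than 1 value per channel when training, ...
```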
  • mepc36

    09/16/2022, 1:59 PM
    Anyone know if the legacy tacotron2 notebook in notebooks is trained from a warmstart or not? This one: https://colab.research.google.com/drive/1CIPTTj94EocZe2w5-zCDC3G44K9OaFal
  • mepc36

    09/16/2022, 2:03 PM
    The answer is "no", according to line 615 in the .py you get from downloading that notebook in .py format
  • {K EY1} (Kei)

    09/16/2022, 2:04 PM
    It looks to me like it's warm started. It has a warm start argument when you start the training step
  • mepc36

    09/16/2022, 2:07 PM
    yeah, all the vars are in global scope though, so I'm 99% sure that the invocation of `train(..., warmstart_arg_here)` in the training step is just passing in that `warm_start` var, whose value is `False`. It didn't get overwritten between the var definition and the `train` function call, I don't think
  • {K EY1} (Kei)

    09/16/2022, 2:08 PM
    Well, if it is warm starting, you'll see a message that says "warm starting from checkpoint {whatever}"
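For reference, that message comes from the warm-start path in NVIDIA's tacotron2 train.py; the sketch below is paraphrased from memory, so the notebook's copy may differ in detail:

```python
# Paraphrased sketch of the tacotron2-style warm-start helper (not verbatim):
# it loads only the model weights from the checkpoint, optionally dropping
# layers listed in ignore_layers, and prints the message quoted above.
import torch

def warm_start_model(checkpoint_path, model, ignore_layers):
    print("Warm starting model from checkpoint '{}'".format(checkpoint_path))
    checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
    model_dict = checkpoint_dict["state_dict"]
    if ignore_layers:
        model_dict = {k: v for k, v in model_dict.items()
                      if k not in ignore_layers}
        base = model.state_dict()
        base.update(model_dict)
        model_dict = base
    model.load_state_dict(model_dict)
    return model
```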