# machine-learning
  • w

    WeegeeFan1

    11/14/2022, 10:44 AM
    I'm saying all of this assuming it connects via the main motherboard.
  • h

    hecko

    11/14/2022, 10:58 AM
    i mean phones already have chips specifically for ai; also, assuming it can make models usable on standard architectures (which isn't clear), it could be put in the cloud and rented out, like tpus
  • w

    WeegeeFan1

    11/14/2022, 10:59 AM
    I don't think half of our training notebooks are designed to even allow TPU much less CPU only or whatever these chips are called
  • w

    WeegeeFan1

    11/14/2022, 11:00 AM
    Neither do I think that these AI chips are designed to do anything except run AIs, not train them on massive datasets
  • h

    hecko

    11/14/2022, 11:02 AM
    the official article is very vague but i think it's talking about training https://blog.seas.upenn.edu/rethinking-the-computer-chip-in-the-age-of-ai/
  • h

    hecko

    11/14/2022, 11:03 AM
    > It supports on-chip storage, or the capacity to hold the enormous amounts of data required for deep learning, parallel search, a function that allows for accurate data filtering and analysis, and matrix multiplication acceleration, the core process of neural network computing.
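The quote's last point is the most concrete: matrix multiplication really is the core operation of neural networks, since a dense layer is essentially one matmul plus a bias. A minimal NumPy sketch (the shapes here are arbitrary, just for illustration):

```python
import numpy as np

# A single dense neural-network layer is essentially one matrix multiply:
# outputs = inputs @ weights + bias. Accelerating matmul therefore speeds
# up nearly all of inference and training.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))   # batch of 32 activation vectors
w = rng.standard_normal((512, 256))  # layer weight matrix
b = np.zeros(256)                    # layer bias

y = x @ w + b                        # the matmul at the heart of the layer
print(y.shape)                       # (32, 256)
```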
  • h

    hecko

    11/14/2022, 11:04 AM
    cpu training is actually unintentionally supported, and i feel like tpu training should be easy enough but i've never done it so idk for certain
  • w

    WeegeeFan1

    11/14/2022, 9:58 PM
    Is there a way to separate individual vocalists? I know about separating vocals from instruments, but I'd think an AI that separates individual vocalists would be most useful..
  • w

    WeegeeFan1

    11/14/2022, 9:58 PM
    How do you use CPU training? I'll do literally anything, no matter how long it takes, to get around the stupid Colab limits. I haven't figured out local setup on Windows yet either
  • w

    WeegeeFan1

    11/14/2022, 9:59 PM
    I really need a visual aid. I learn better visually.
  • h

    hecko

    11/14/2022, 9:59 PM
    if you wanna train on cpu on colab it's not worth it
  • h

    hecko

    11/14/2022, 9:59 PM
    it's like 10 times slower than gpu
  • h

    hecko

    11/14/2022, 9:59 PM
    and given that you get 3-4 hours of gpu per day it's slower than just doing that
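The arithmetic behind hecko's point, as a back-of-envelope sketch (the ~10x slowdown and 3-4 hour daily quota are the rough figures from the chat, not measured numbers):

```python
# If CPU training is ~10x slower than GPU, how many CPU hours would it
# take to match one day's worth of Colab GPU time?
gpu_hours_per_day = 3.5      # midpoint of the 3-4 hour estimate above
cpu_slowdown = 10            # CPU ~10x slower (rough figure from chat)

cpu_hours_needed = gpu_hours_per_day * cpu_slowdown
print(cpu_hours_needed)      # 35.0 CPU hours -- more than 24 hours,
                             # so CPU-only training can never keep up
```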
  • h

    hecko

    11/14/2022, 9:59 PM
    but if you wanna know: Runtime → Change runtime type → None
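Switching the Colab runtime to None only removes the GPU; the training code also has to target the CPU. A common generic PyTorch pattern for that (this is a toy sketch, not taken from any particular Uberduck notebook):

```python
import torch

# Fall back to CPU when no CUDA device is available -- the usual
# device-selection idiom in PyTorch code.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # toy model for illustration
x = torch.randn(4, 10, device=device)       # batch of dummy inputs
y = model(x)
print(device, y.shape)
```

Notebooks that hard-code `.cuda()` calls instead of a `device` variable are the ones that break on CPU-only runtimes.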
  • w

    WeegeeFan1

    11/14/2022, 10:00 PM
    Most scripts in the talknet 2 training model don't work without a GPU. They'll complain that you need (1) but I only have (0)
  • w

    WeegeeFan1

    11/14/2022, 10:00 PM
    Most likely because it needs CUDA
  • w

    WeegeeFan1

    11/14/2022, 10:01 PM
    Which I don't think you'll exactly get out of a CPU
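The "(1) but I only have (0)" complaint usually comes from a check like this: PyTorch reports how many CUDA devices it can see, and on a CPU-only runtime that count is zero. A minimal sketch of the check:

```python
import torch

# GPU-gated scripts typically count CUDA devices before starting;
# torch.cuda.device_count() returns 0 on a CPU-only runtime.
n_gpus = torch.cuda.device_count()
print(f"GPUs available: {n_gpus}")
if n_gpus == 0:
    print("No CUDA device found - GPU-only scripts will refuse to run")
```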
  • h

    hecko

    11/14/2022, 10:01 PM
    oh talknet
  • h

    hecko

    11/14/2022, 10:01 PM
    i thought pipeline
  • w

    WeegeeFan1

    11/14/2022, 10:01 PM
    What's the difference between pipeline and normal?
  • w

    WeegeeFan1

    11/14/2022, 10:01 PM
    I came to the uberduck community after starting to learn talknet
  • h

    hecko

    11/14/2022, 10:05 PM
    pipeline is for tacotron 2
  • h

    hecko

    11/14/2022, 10:05 PM
    the main difference is that it supports multispeaker models
  • w

    WeegeeFan1

    11/14/2022, 10:08 PM
    Oh alright
  • w

    WeegeeFan1

    11/14/2022, 10:09 PM
    I'm just wondering: I noticed my voices coming out of the uberduck website are noticeably more compressed than they were when I was testing them. As in bitrate.
  • w

    WeegeeFan1

    11/14/2022, 10:09 PM
    Is that true? Does uberduck compress?
  • h

    hecko

    11/14/2022, 10:12 PM
    not quite
  • h

    hecko

    11/14/2022, 10:12 PM
    the default output is 22.05 kHz but the synthesis notebook upscales that to 32 kHz, so it's more like the notebook decompresses
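The rate change hecko describes can be sketched in a few lines. The notebook presumably uses a proper audio resampler; plain linear interpolation here is just a stand-in to illustrate going from a 22.05 kHz sample grid to a 32 kHz one:

```python
import numpy as np

# Illustrative upsampling: 1 second of 22.05 kHz audio interpolated
# onto a 32 kHz sample grid. (Real pipelines use band-limited
# resampling, not np.interp; this only shows the rate change.)
sr_in, sr_out = 22050, 32000
t_in = np.arange(sr_in) / sr_in              # 1 second of sample times
audio = np.sin(2 * np.pi * 440 * t_in)       # 440 Hz test tone

t_out = np.arange(sr_out) / sr_out
upsampled = np.interp(t_out, t_in, audio)    # resample onto 32 kHz grid
print(len(audio), len(upsampled))            # 22050 32000
```

Note that upsampling can't restore detail that was never there, which is why hecko says "decompresses" only loosely.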
  • w

    WeegeeFan1

    11/14/2022, 10:17 PM
    So the notebook is automatically better? I find it is anyway, because I can use the "reduce metallic noise (slow)" button, which helps me loads. AND I can run the notebook for synthesis locally
  • w

    WeegeeFan1

    11/14/2022, 10:18 PM
    Which, surprisingly, is much quicker than the online one.