134 Commits (master)
Author SHA1 Message Date
  rafaelvalle 185cd24e04 waveglow: updating waveglow submodule 4 years ago
  Rafael Valle 0102db28e2 README.md: updating waveglow published model 4 years ago
  rafaelvalle 6f435f7f29 updating waveglow submodule 4 years ago
  Rafael Valle dd49ffa850 Merge pull request #143 from taras-sereda/master 4 years ago
  Rafael Valle 2f2ed639c6 Merge pull request #279 from sih4sing5hong5/patch-1 4 years ago
  Rafael Valle 604e74de88 Merge pull request #313 from NVIDIA/dependabot/pip/tensorflow-1.15.2 4 years ago
  dependabot[bot] dbd477bd8d build(deps): bump tensorflow from 1.12.0 to 1.15.2 4 years ago
  Rafael Valle 91ae5b57b0 Update requirements.txt 4 years ago
  Rafael Valle 2583315739 Merge pull request #303 from NTT123/fix-batch-size-1 4 years ago
  Rafael Valle ca5a22a71a Merge pull request #304 from NTT123/remove-tensorboardX 4 years ago
  ntt123 438b9399ff remove tensorboardX; use torch.utils.tensorboard 4 years ago
  ntt123 14dbc37497 fix error when batch size = 1 4 years ago
  Rafael Valle 6d0635e8c1 train.py: printing correct variable 4 years ago
  Rafael Valle a513db50d0 utils.py: compatibility with new pytorch 5 years ago
  Rafael Valle 37a033de6f logger.py: compatibility with new tensorboardX 5 years ago
  薛丞宏 53a97e83df [bug-fix] pillow dependency in Dockerfile 5 years ago
  Rafael Valle 70d37f9e7d train.py: reporting the right variable 5 years ago
  Rafael Valle 131c1465b4 Merge pull request #188 from jybaek/fixed-waveglow-link 5 years ago
  jybaek d5321ff0ca Fixed link to download waveglow from inference.py 5 years ago
  rafaelvalle c76ac3b211 README.md: clarifying terminology 5 years ago
  rafaelvalle e3d2d0a5ef README.md: using proper nomenclature 5 years ago
  rafaelvalle a992aea070 README.md: updating terminology 5 years ago
  rafaelvalle eb2a171690 Merge branch 'master' of https://github.com/NVIDIA/tacotron2 5 years ago
  rafaelvalle 821bfeba5d README.md: adding instructions to install apex 5 years ago
  rafaelvalle d6670c8ed7 Dockerfile: updating to use latest pytorch and apex 5 years ago
  rafaelvalle 0274619e45 train.py: using amp for mixed precision training 5 years ago
  rafaelvalle bb20035586 inference.ipynb: adding fp16 inference 5 years ago
  rafaelvalle 1480f82908 model.py: renaming variables, removing dropout from lstm cell state, removing conversions now handled by amp 5 years ago
  rafaelvalle 087c86755f logger.py: using new pytorch api 5 years ago
  Rafael Valle ece7d3f568 train.py: changing dataloder params given sampler 5 years ago
  rafaelvalle f37998c59d train.py: shuffling at every epoch 5 years ago
  rafaelvalle bff304f432 README.md: adding explanation on training from pre-trained model 5 years ago
  rafaelvalle 3869781877 train.py: adding routine to warm start and ignore layers, e.g. embedding.weight 5 years ago
  rafaelvalle bb67613493 hparams.py: adding ignore_layers argument to ignore text embedding layers when warm_starting 5 years ago
  rafaelvalle af1f71a975 inference.ipynb: adding code to remove waveglows bias 5 years ago
  rafaelvalle fc0d34cfce stft.py: moving window_sum to cuda if magnitude is cuda 5 years ago
  Taras Sereda 5f03d07488 seed from hparams for TextMelLoader 5 years ago
  Rafael Valle f2c94d94fd Merge pull request #136 from GrzegorzKarchNV/master 5 years ago
  gkarch df4a466af2 Fixing concatenation error for fp16 ditributed training 5 years ago
  rafaelvalle 825ffa47d1 inference.ipynb: reverting fp16 inference for now 6 years ago
  rafaelvalle 4d7b04120a inference.ipynb: changing waverglow inference fo fp16 6 years ago
  rafaelvalle 6e430556bd train.py: val logger on gpu 0 only 6 years ago
  rafaelvalle 3973b3e495 hparams.py: distributed using tcp 6 years ago
  rafaelvalle 52a30bb7b6 distributed.py: replacing to avoid distributed error 6 years ago
  rafaelvalle 0ad65cc053 train.py: renaming variable to n_gpus 6 years ago
  rafaelvalle 8300844fa7 hparams.py: removing 22khz 6 years ago
  rafaelvalle f06063f746 train.py: renaming function, removing dataparallel 6 years ago
  rafaelvalle 3045ba125b inference.ipynb: cleanup 6 years ago
  rafaelvalle 4c4aca3662 README.md: layout 6 years ago
  rafaelvalle 05dd8f91d2 README.md: adding submodule init to README 6 years ago