训练不一样 (Training is different) #3

Open
wxing232 opened this issue Nov 25, 2020 · 4 comments

Comments
@wxing232

Hi, in step 6, why does your log show Iter: x/719 while mine only shows Iter: x/159? num-epochs is 80 in both cases, the other parameters are the same, and the number of utterances produced by the previous step is identical:
fix_data_dir.sh: kept all 600490 utterances.
fix_data_dir.sh: old files are kept in data/train_combined_no_sil/.backup
fix_data_dir.sh: kept all 590655 utterances.
fix_data_dir.sh: old files are kept in data/train_combined_no_sil/.backup
As a result, my final results are poor:
EER: 3.387%
minDCF(p-target=0.01): 0.3582
minDCF(p-target=0.001): 0.5667
Is one of my parameters wrong, or where is the iteration count controlled?

@qiny1012
Owner

What happens if you scale num-epochs? How does the number of iterations change?

@wxing232
Author

My iteration count is about 2× num-epochs, but yours looks like about 9× num-epochs.
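(With num-epochs = 80, that is roughly 2 × 80 = 160 ≈ 159 for me versus 9 × 80 = 720 ≈ 719 for you.)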

@qiny1012
Owner

--num-repeats 35
--trainer.optimization.num-jobs-initial=1 \
--trainer.optimization.num-jobs-final=1 \

These parameters also affect it. I'm not entirely clear on how Kaldi's neural-network training computes the number of iterations either; I usually just use Kaldi to extract features and then build the neural network in Python.
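For reference, a minimal sketch of where the iteration count comes from, assuming the standard Kaldi x-vector recipe: the nnet3 trainer (steps/nnet3/train_raw_dnn.py) derives it from the number of egs archives and the parallel-job schedule, and the archive count itself grows with --num-repeats and the amount of training data. The archive count used below (9) is an assumed illustrative value, not one taken from this repository.

```python
# Minimal sketch of how Kaldi's nnet3 trainer derives the iteration count
# (simplified: frames-per-eg expansion is ignored).

def approx_num_iters(num_archives, num_epochs, num_jobs_initial, num_jobs_final):
    # Each iteration processes one egs archive per parallel job, so the total
    # number of iterations is the number of archive passes divided by the
    # average number of parallel jobs.
    num_archives_to_process = num_archives * num_epochs
    return (num_archives_to_process * 2) // (num_jobs_initial + num_jobs_final)

# Assumed value: 9 egs archives (grows with --num-repeats and data size).
print(approx_num_iters(9, 80, num_jobs_initial=1, num_jobs_final=1))  # -> 720
print(approx_num_iters(9, 80, num_jobs_initial=4, num_jobs_final=5))  # -> 160
```

So with the same num-epochs, a different --num-repeats (which changes the archive count) or a different num-jobs-initial/num-jobs-final schedule changes the reported iteration total, which could explain x/719 versus x/159.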

@wxing232
Author

Got it, thanks a lot!
