Data Splitting

#3
by QuanDang - opened

Thank you for your work, which greatly facilitates further research. I just have one small question.

I found that YouTube-SL-25 provides no pre-defined splits, so how do you split the data for your baseline experiments?

I plan to try some models on your data, so I just want to ensure fair comparisons with your baselines.

SignerX org

@QuanDang This dataset is still being uploaded. However, I have just added the information you requested:

https://huggingface.co/datasets/SignerX/SignVerse-2M/blob/main/SignVerse-2M-metadata_split.csv

The split follows a 90:5:5 ratio across all the videos, including those with unknown language (while ensuring that the val and test splits each contain at least one video).
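For reference, the logic is roughly as in the following Python sketch. This is only an illustration of the rule above, not the actual script used to generate the CSV; the function name, seed, and `min_eval` parameter are hypothetical:

```python
import random

def split_videos(video_ids, ratios=(0.90, 0.05, 0.05), min_eval=1, seed=0):
    """Shuffle video IDs and split them 90:5:5, keeping val/test non-empty."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    # Round the eval splits, but keep at least `min_eval` videos in each.
    n_val = max(min_eval, round(n * ratios[1]))
    n_test = max(min_eval, round(n * ratios[2]))
    train = ids[: n - n_val - n_test]
    val = ids[n - n_val - n_test : n - n_test]
    test = ids[n - n_test :]
    return train, val, test
```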

If you find any mistakes, please feel free to contact me and I will correct them.

Thank you for the information 😁

QuanDang changed discussion status to closed

Hi @FangSen9000 , sorry for bothering you again.

I'm currently using your poses for sign language translation, so I would like to ask some further questions about your results on this task 😅

In the paper, you mentioned that BLEU4 is in the range 25–28. Could you clarify whether these numbers come from training on ground-truth DWPose pose sequences or on back-translated poses from SignDW-Transformer?

As I understand it, the back-translation results are in Table 2. However, the paragraph reporting 25–28 BLEU4 begins with "For pose-space back-translation ...", so I'm a bit confused about which training data produced the 25–28 BLEU4 😅.

Thank you for your time and support 😁

QuanDang changed discussion status to open
SignerX org

@QuanDang The SLT model achieved 25–28 BLEU4 across the different languages; the model is SLTUNET.

@FangSen9000 I understand it's from different languages.

I'm just wondering about the training data for this SLT model. Are you using real poses from DWPose, or back-translated poses?

SignerX org
• edited 1 day ago

@QuanDang I modified SLTUNET so that its encoder takes DWPose output directly as input. That article has not been made public yet (I expect to release it in about six months; I am currently occupied with other work).

In your case, you could simply use the original RGB encoder instead. The results might deviate slightly, but the impact should not be significant.
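Schematically, the pose front-end looks something like the sketch below. It is only an illustration, since the code is not yet public; the hidden size, module name, and layer choices are assumptions (133 keypoints corresponds to DWPose's whole-body output):

```python
import torch.nn as nn

class PoseEmbedding(nn.Module):
    """Illustrative front-end: project per-frame DWPose keypoints into the
    translation encoder's hidden space, in place of RGB visual features."""
    def __init__(self, num_keypoints=133, channels=3, hidden_size=512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_keypoints * channels, hidden_size),
            nn.ReLU(),
            nn.LayerNorm(hidden_size),
        )

    def forward(self, poses):
        # poses: (batch, time, num_keypoints, channels) -> (batch, time, hidden_size)
        b, t, k, c = poses.shape
        return self.proj(poses.reshape(b, t, k * c))
```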

FangSen9000 changed discussion status to closed
