---
license: mit
datasets:
- omarmomen/babylm_10M
language:
- en
metrics:
- perplexity
library_name: transformers
---

# Model Card for omarmomen/structroberta_s2_final

This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" (https://aclanthology.org/2023.conll-babylm.29/), published at the BabyLM workshop at CoNLL 2023.
|
**omarmomen/structroberta_s2_final** is a modified RoBERTa model that incorporates syntactic inductive bias through an unsupervised parsing mechanism.
|
This model variant places the parser network after the first 4 attention blocks.
|
The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
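A minimal usage sketch is shown below. It assumes the checkpoint ships its tokenizer and exposes a standard masked-LM interface; `trust_remote_code=True` is an assumption, since custom architectures like this one typically bundle their modeling code with the checkpoint.

```python
# Hedged usage sketch; not an official snippet from the repository.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "omarmomen/structroberta_s2_final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True is assumed to be needed for the custom architecture.
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("The children <mask> in the garden.", return_tensors="pt")
logits = model(**inputs).logits

# Decode the top prediction at the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
print(tokenizer.decode(logits[0, mask_pos].argmax().item()))
```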
|
Preprint: https://arxiv.org/abs/2310.20589
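
Since the card lists perplexity as its metric, the sketch below shows one common way to score a bidirectional masked LM: pseudo-perplexity, masking one token at a time. This is a generic recipe, not necessarily the evaluation protocol used in the paper; `model` and `tokenizer` are the objects loaded above.

```python
# Hedged sketch of masked-LM pseudo-perplexity (lower is better).
import math

import torch

def pseudo_perplexity(model, tokenizer, text: str) -> float:
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total_nll, count = 0.0, 0
    for i in range(1, input_ids.size(0) - 1):  # skip <s> and </s>
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id  # mask one position at a time
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total_nll -= log_probs[input_ids[i]].item()
        count += 1
    return math.exp(total_nll / count)

print(pseudo_perplexity(model, tokenizer, "The children play in the garden."))
```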