Contribution Challenge

Contribution Challenge 2020 Winners

The 2020 contribution challenge has closed, and we received a record nine submissions. Thanks to all who joined, and to the Fetzer Franklin Fund for their support. We are working on making the data available on MetaLab.

The three winners are:

  • Gabrielle Strouse (University of South Dakota), with a meta-analysis on whether infants learn from video just as well as from live interaction; the results are already published: doi.org/10.1111/cdev.13429

  • Z. L. Zhou (UCLA), with a meta-analysis on the time course of native-language phonotactic learning; stay tuned for a full paper.

  • Loretta Gasparini (European Master’s in Clinical Linguistics+), with a meta-analysis on infants’ preference for their native language and their ability to discriminate between languages and accents; a preprint is available here: doi.org/10.31219/osf.io/rmn5x

The Contribution Challenge 2020 Call

To celebrate the latest (ongoing) upgrade and the inclusion of 25 meta-analyses to date, the MetaLab team, with support from the Fetzer Franklin Fund, is organizing a challenge for authors of meta-analyses on cognitive development (data challenge) and for contributors of Shiny apps (code challenge). The contribution challenge offers three cash prizes of US$1,000 each, distributed among (teams of) authors or coders who contribute meta-analysis data to the MetaLab database, or who make substantial contributions to the site, for example by proposing and implementing, in coordination with the MetaLab team, a Shiny app that provides a new analysis or functionality. The deadline is 15 October 2020. You can find more information here: https://bit.ly/MetaLabChallenge2020

Contribution Challenge 2018 Winners

The MetaLab challenge calling for meta-analyses on cognitive development, supported by the Berkeley Initiative for Transparency in the Social Sciences (BITSS), has closed. We received data for 7 meta-analyses, which will be added to MetaLab in the coming months.

The winners are three early career researchers: Angeline Tsui (Ottawa / Stanford), M. Julia Carbajal (LSCP Paris), and Katie Von Holzen (LPP Paris / Maryland).

Angeline Tsui contributed a meta-analysis of the “Switch Task”, a key paradigm in language acquisition research. In a switch task, infants are taught new labels for unknown objects (such as “lif” vs. “neem”). Their knowledge is then tested by whether they can detect a switch in the word-object pairings (the object previously called “lif” is now called “neem”). Results from switch task studies raised the possibility that infants’ ability to distinguish speech sounds differs between a pure speech perception task (where no visual information cues the referent) and a word learning context, and they led to a string of follow-up studies that are synthesized in this meta-analysis. Angeline’s paper describing the meta-analysis in more detail is currently under review at Developmental Psychology (Preprint).

M. Julia Carbajal conducted a meta-analysis on infants’ ability to distinguish frequent from rare words (like “hello” versus “hallux”) when the words are presented in a speech stream without visual referents. In this type of study, researchers typically compare how long infants like to listen to different word lists (one with very frequent and one with very rare words), an easy-to-apply but very indirect measure. Studies on infants’ ability to distinguish such word lists were the first to establish when infants begin to systematically learn words in their native language, albeit with varying results across studies. It was thus a good moment to estimate the meta-analytic effect size. The paper on this meta-analysis is currently in preparation.

Katie Von Holzen’s meta-analysis (conducted in collaboration with MetaLab team member Christina Bergmann, which led to 50% of her data being discounted) addresses infants’ sensitivity to mispronunciations (for example, whether “tog” is a good label for “dog”). Dealing with mispronunciations is another key skill in language acquisition and processing, and the meta-analysis aims to show whether, with experience, infants become stricter or more lenient about how a word should sound. A short report on the meta-analysis is appearing in the Proceedings of the Cognitive Science Society Conference (Preprint); a full-length paper is in preparation.

We would also like to specifically highlight the contribution of Hugh Rabagliati, Brock Ferguson, and Casey Lew-Williams, who would have been among the winners based on their contribution but generously stepped aside to leave the prize for an early career researcher. The meta-analysis they contributed addresses how infants learn rules that are implicit in their environment. Their open access paper just appeared in the journal Developmental Science (Rabagliati, H., Ferguson, B., & Lew-Williams, C. (2018). The profile of abstract rule learning in infancy: Meta-analytic and experimental evidence. Developmental Science. DOI: 10.1111/desc.12704).

Thank you to everyone who participated in our challenge. MetaLab continues to be open for submissions; we provide further information on the Tutorials page.

Data-sharing policy

Meta-analyses will be added to the MetaLab online database, where users will be able to download your data and potentially re-use it. However, MetaLab requires anyone who uses a dataset to cite its contributors, even if there is no publication or preprint yet. The citation that users should use is available in the documentation of each dataset, potentially specifying “in preparation” and/or linking to an online repository (such as OSF). Note that we will update these entries as preprints and published papers become available. Learn more by reading our full citation policy. Unpublished meta-analyses shared on MetaLab do not count as publications.

MetaLab is dynamic: meta-analyses can be updated by adding new relevant studies as they are published. Contributors can retain control over this process for as long as they want to. Two options exist for the curation and review of data. Contributors can choose to be the curator, which means agreeing to be the person responsible for identifying new relevant papers and signaling them to the MetaLab data manager, who will add them to the database. Curators are expected to check the entered data regularly; they are part of the MetaLab team and can choose to join discussions regarding, for example, site revamping. Alternatively, contributors can choose to step down completely, and it will be MetaLab’s job to assign a new curator to the dataset.
