
MLQA - Multilingual Question-Answering

Multilingual Question-Answering Dataset

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset


About this Dataset


By mlqa (From Huggingface) [source]


About this dataset

The dataset consists of several CSV files that pair context passages or paragraphs with corresponding questions and answers. The context passages serve as the source of information from which the questions are derived, and each answer is a span taken from its context passage.

Each file covers a different language combination for evaluation purposes. For example, mlqa.es.zh_test.csv is intended for testing multilingual question-answering models on the Spanish-Chinese pair, while mlqa.hi.de_test.csv provides test data for evaluating the Hindi-German pair.

To support accurate evaluation of model performance, each file includes columns for the context, the question, and the answers, so researchers can assess how well their models generate correct answers from the given contexts.
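
As a quick illustration of the file layout described above, here is a minimal sketch that loads one of the per-language-pair CSVs with pandas and prints a few context/question/answer examples. The file name mlqa.es.zh_test.csv is taken from the description above; the column names follow the table schemas listed further down (context, question, answers, id), but the exact on-disk layout of the CSV is an assumption.

import pandas as pd

# Assumes the columns listed in the table schemas below:
# context, question, answers, id.
df = pd.read_csv("mlqa.es.zh_test.csv")

print(df.shape)
print(df.columns.tolist())

# Peek at a few context/question/answer examples.
for _, row in df.head(3).iterrows():
    print(row["question"])
    print(row["answers"])
    print(row["context"][:200], "...")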

Research Ideas

  • Evaluation of multilingual question-answering models: This dataset can be used to evaluate the performance of different models designed for multilingual question answering. By providing context, question, and answer triples in multiple languages, it allows researchers to measure the accuracy and effectiveness of their models across different language pairs (see the evaluation sketch after this list).
  • Cross-lingual transfer learning: The MLQA dataset can be utilized to develop cross-lingual transfer learning techniques. Models trained on this dataset can learn to perform question-answering tasks in one language and then transfer that knowledge to answer questions in another language.
  • Language understanding research: Researchers studying natural language processing (NLP) and language understanding can use this dataset to analyze how different languages handle questions and answers within various contexts. They can explore linguistic patterns, variations, and differences across languages by comparing the performance of NLP models trained on this dataset for both similar and dissimilar language pairs.
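
As referenced in the first idea above, a minimal evaluation sketch could run an off-the-shelf multilingual extractive-QA model over a small sample of one test split and compare its predictions against the gold answers. The pipeline API comes from the Hugging Face transformers library; the specific checkpoint (deepset/xlm-roberta-base-squad2) and the crude matching are illustrative assumptions, not part of this dataset or of the official MLQA evaluation.

import pandas as pd
from transformers import pipeline

# Illustrative multilingual extractive-QA checkpoint (assumption);
# swap in whichever model you want to evaluate.
qa = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2")

# Small sample from one language pair for a quick sanity check.
df = pd.read_csv("mlqa.hi.de_test.csv").head(50)

matches = 0
for _, row in df.iterrows():
    pred = qa(question=row["question"], context=row["context"])
    # Crude containment check; the official MLQA evaluation uses
    # language-specific normalization plus EM/F1 scripts.
    matches += int(pred["answer"].strip() in str(row["answers"]))

print(f"Rough match rate: {matches / len(df):.2%}")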

Acknowledgements

If you use this dataset in your research, please credit the original authors, mlqa (From Huggingface).
Data Source

License

License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication
No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

Columns

File: mlqa.es.zh_test.csv

  • context: The text passage or paragraph in which a question is being asked. (Text)
  • answers: The possible answers to the question, along with their start and end positions within the context passage. (Text)

File: mlqa.hi.de_test.csv

  • context: The text passage or paragraph in which a question is being asked. (Text)
  • answers: The possible answers to the question, along with their start and end positions within the context passage. (Text)

File: mlqa.zh.de_test.csv

  • context: The text passage or paragraph in which a question is being asked. (Text)
  • answers: The possible answers to the question, along with their start and end positions within the context passage. (Text)
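
Because the answers column packs the answer text together with its position inside the context (as described above), it typically needs to be parsed before use. The sketch below assumes the field is a stringified dict in the Hugging Face style ({'text': [...], 'answer_start': [...]}); if the dump uses a different serialization, the parsing step will need adjusting.

import ast
import pandas as pd

df = pd.read_csv("mlqa.zh.de_test.csv")

def parse_answers(raw):
    # Assumption: answers are serialized like
    # "{'text': ['...'], 'answer_start': [123]}" (Hugging Face style).
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return {"text": [str(raw)], "answer_start": [None]}

row = df.iloc[0]
answers = parse_answers(row["answers"])
text = answers["text"][0]
start = answers["answer_start"][0]
print(text)
if start is not None:
    # Verify that the span really points into the context passage.
    print(row["context"][start:start + len(text)])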


Tables

Mlqa Vi Zh Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_vi_zh_test
  • 1.37 MB
  • 1943 rows
  • 4 columns

CREATE TABLE mlqa_vi_zh_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Vi Zh Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_vi_zh_validation
  • 133.8 KB
  • 184 rows
  • 4 columns

CREATE TABLE mlqa_vi_zh_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Ar Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_ar_test
  • 1.05 MB
  • 1912 rows
  • 4 columns

CREATE TABLE mlqa_zh_ar_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Ar Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_ar_validation
  • 103.6 KB
  • 188 rows
  • 4 columns

CREATE TABLE mlqa_zh_ar_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh De Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_de_test
  • 866.13 KB
  • 1621 rows
  • 4 columns

CREATE TABLE mlqa_zh_de_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh De Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_de_validation
  • 110.06 KB
  • 190 rows
  • 4 columns

CREATE TABLE mlqa_zh_de_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh En Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_en_test
  • 2.67 MB
  • 5137 rows
  • 4 columns

CREATE TABLE mlqa_zh_en_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh En Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_en_validation
  • 269.21 KB
  • 504 rows
  • 4 columns

CREATE TABLE mlqa_zh_en_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Es Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_es_test
  • 1.05 MB
  • 1947 rows
  • 4 columns

CREATE TABLE mlqa_zh_es_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Es Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_es_validation
  • 95.94 KB
  • 161 rows
  • 4 columns

CREATE TABLE mlqa_zh_es_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Hi Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_hi_test
  • 900.8 KB
  • 1767 rows
  • 4 columns

CREATE TABLE mlqa_zh_hi_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Hi Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_hi_validation
  • 107.19 KB
  • 189 rows
  • 4 columns

CREATE TABLE mlqa_zh_hi_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Vi Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_vi_test
  • 1.08 MB
  • 1943 rows
  • 4 columns

CREATE TABLE mlqa_zh_vi_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Vi Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_vi_validation
  • 117.12 KB
  • 184 rows
  • 4 columns

CREATE TABLE mlqa_zh_vi_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Zh Test

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_zh_test
  • 2.66 MB
  • 5137 rows
  • 4 columns

CREATE TABLE mlqa_zh_zh_test (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);

Mlqa Zh Zh Validation

@kaggle.thedevastator_mlqa_multilingual_question_answering_dataset.mlqa_zh_zh_validation
  • 268.61 KB
  • 504 rows
  • 4 columns

CREATE TABLE mlqa_zh_zh_validation (
  "context" VARCHAR,
  "question" VARCHAR,
  "answers" VARCHAR,
  "id" VARCHAR
);
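
The per-pair tables above mirror the test and validation splits of the original Hugging Face mlqa dataset, so an alternative to the CSV files is to load a language-pair configuration directly with the datasets library. The configuration name below (mlqa.zh.en) is inferred from the table naming used in this dataset and may need adjusting.

from datasets import load_dataset

# Configuration name inferred from the table naming above (assumption).
mlqa_zh_en = load_dataset("mlqa", "mlqa.zh.en")

print(mlqa_zh_en)                 # test and validation splits
sample = mlqa_zh_en["test"][0]
print(sample["question"])
print(sample["answers"])          # {'text': [...], 'answer_start': [...]}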
