
facebook/dpr-question_encoder-single-nq-base



Table of Contents

  • Model Details
  • How To Get Started With the Model
  • Uses
  • Risks, Limitations and Biases
  • Training
  • Evaluation
  • Environmental Impact
  • Technical Specifications
  • Citation Information
  • Model Card Authors


Model Details

Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. dpr-question_encoder-single-nq-base is the question encoder trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019).

  • Developed by: See GitHub repo for model developers
  • Model Type: BERT-based encoder
  • Language(s): English
  • License: CC-BY-NC-4.0, also see Code of Conduct
  • Related Models:

    • dpr-ctx_encoder-single-nq-base
    • dpr-reader-single-nq-base
    • dpr-ctx_encoder-multiset-base
    • dpr-question_encoder-multiset-base
    • dpr-reader-multiset-base
  • Resources for more information:

    • Research Paper
    • GitHub Repo
    • Hugging Face DPR docs
    • BERT Base Uncased Model Card


How to Get Started with the Model

Use the code below to get started with the model.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Load the question tokenizer and encoder
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# Encode a question; pooler_output is the dense question embedding used for retrieval
input_ids = tokenizer("Hello, is my dog cute?", return_tensors="pt")["input_ids"]
embeddings = model(input_ids).pooler_output


Uses


Direct Use

dpr-question_encoder-single-nq-base, dpr-ctx_encoder-single-nq-base, and dpr-reader-single-nq-base can be used for the task of open-domain question answering.
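As a rough illustration of how the question and context encoders fit together, the sketch below scores a question against a toy set of passages by dot product. The passage texts, question, and variable names are illustrative assumptions, not part of the model card.

import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

# Load the question encoder and the matching context (passage) encoder
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "Who wrote Hamlet?"
passages = [  # toy passage collection for illustration only
    "Hamlet is a tragedy written by William Shakespeare around 1600.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
]

with torch.no_grad():
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output                    # shape (1, 768)
    p_emb = ctx_encoder(**ctx_tokenizer(passages, return_tensors="pt", padding=True)).pooler_output  # shape (2, 768)

# DPR ranks passages by the dot product between question and passage vectors
scores = q_emb @ p_emb.T
print(passages[scores.argmax(dim=1).item()])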


Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, so using the models to generate such content is out of scope for this model.


Risks, Limitations and Biases

CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.


Training


Training Data

This model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:

[The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.


Training Procedure

The training procedure is described in the associated paper:

Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.

Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vector and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages whose vectors are the closest to the question vector.

The authors report that for the encoders, they used two independent BERT (Devlin et al., 2019) networks (base, uncased) and used FAISS (Johnson et al., 2017) at inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.
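The following is a minimal sketch of the index-then-retrieve step described above, assuming the faiss package is installed and reusing the q_emb and p_emb tensors from the earlier snippet; an exact inner-product index stands in for the large-scale setup described in the paper.

import faiss
import numpy as np

d = p_emb.shape[1]            # embedding dimension (768 for BERT-base)
index = faiss.IndexFlatIP(d)  # exact maximum-inner-product index
index.add(p_emb.numpy().astype(np.float32))  # offline: index all M passage vectors

k = 2
scores, ids = index.search(q_emb.numpy().astype(np.float32), k)  # run-time: top-k closest passages
print(ids[0], scores[0])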


Evaluation

The following evaluation information is extracted from the associated paper.


Testing Data, Factors and Metrics

The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.
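Top-k retrieval accuracy here is the fraction of questions for which at least one of the top k retrieved passages contains an answer string. The helper below is an illustrative sketch of that metric, not the authors' evaluation code; it uses simple case-insensitive substring matching rather than the answer normalization applied in the paper.

def top_k_retrieval_accuracy(retrieved, answers, k):
    # retrieved: ranked list of passage strings per question
    # answers:   list of acceptable answer strings per question
    hits = 0
    for passages, golds in zip(retrieved, answers):
        if any(gold.lower() in passage.lower()
               for passage in passages[:k] for gold in golds):
            hits += 1
    return hits / len(answers)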


Results

           NQ     TriviaQA   WQ     TREC   SQuAD
Top 20     78.4   79.4       73.2   79.8   63.2
Top 100    85.4   85.0       81.4   89.1   77.2

