Tag: Feature Extraction

sonoisa/sentence-luke-japanese-base-lite

This is a Japanese Sentence-LUKE model, trained with the same dataset and settings as the Japanese Sentence-BERT model. In local...
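Sentence-embedding models in this family typically produce a fixed-size vector by mean-pooling the encoder's token embeddings under the attention mask. A minimal pure-Python sketch of that pooling step, with toy vectors standing in for real encoder outputs (no model download involved):

```python
# Masked mean pooling: average only the token embeddings whose
# attention-mask entry is 1, ignoring padding positions.

def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    totals = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            totals = [t + v for t, v in zip(totals, vec)]
            count += 1
    return [t / count for t in totals]

embeddings = [[1.0, 2.0, 3.0],   # real token
              [3.0, 4.0, 5.0],   # real token
              [0.0, 0.0, 0.0],   # padding (masked out)
              [9.0, 9.0, 9.0]]   # padding (masked out)
mask = [1, 1, 0, 0]
print(mean_pool(embeddings, mask))  # → [2.0, 3.0, 4.0]
```

With a real checkpoint, the embeddings would come from the encoder's last hidden state and the mask from the tokenizer; the pooling arithmetic is the same.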

gogamza/kobart-base-v2

KoBART is a Korean BART language model trained on 40 GB of text with the text-infilling objective used in the BART paper. Shared by [optional]: Jeon Hee-won (...

asapp/sew-tiny-100k

SEW-tiny: SEW by ASAPP Research. The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is al...

google/canine-s

CANINE-s (CANINE pre-trained with subword loss) Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was...

gogamza/kobart-base-v1

KoBART-base-v1

from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-base-v1')
...

allenai/specter2

SPECTER 2.0 SPECTER 2.0 is the successor to SPECTER and is capable of generating task specific embeddings for scientific tasks when paired with ad...

sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco

DistilBERT for Dense Passage Retrieval, trained with Balanced Topic-Aware Sampling (TAS-B). We provide a retrieval-trained DistilBERT-based model (w...
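The "dot" in the model name refers to dense retrieval by dot-product: queries and passages are encoded into vectors, and passages are ranked by their inner product with the query. A toy sketch with made-up embeddings (a real system would get these vectors from the encoder):

```python
# Rank passages by dot product with the query embedding.
# All vectors here are invented for illustration only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

query = [0.2, 0.9, 0.1]
passages = {
    "p1": [0.1, 0.8, 0.0],   # points in a similar direction to the query
    "p2": [0.9, 0.1, 0.3],   # points in a dissimilar direction
}
ranked = sorted(passages, key=lambda p: dot(query, passages[p]), reverse=True)
print(ranked)  # → ['p1', 'p2']
```

At serving time, the same dot-product scoring is usually delegated to an approximate nearest-neighbor index rather than a Python loop.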

GanjinZero/UMLSBert_ENG

CODER: Knowledge-infused cross-lingual medical term embedding for term normalization. English version. Despite the old repository name, this model is not UMLSBert! Gith...

AiLab-IMCS-UL/lvbert

Latvian BERT-base-cased model. @inproceedings{Znotins-Barzdins:2020:BalticHLT, author = 'A. Znotins and G. Barzdins', title = 'LVBERT: Transformer-...

microsoft/xclip-base-patch16-zero-shot

X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained on Kinetics-400. It was introduced in the paper Expanding Lang...
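Zero-shot classification in CLIP-style models works by embedding the video and each candidate label text into a shared space, then softmaxing the similarities. A self-contained sketch with toy vectors (the temperature value is illustrative, not the model's trained logit scale):

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return num / (na * nb)

def zero_shot_probs(video_emb, label_embs, temperature=0.07):
    # Softmax over scaled video-text similarities, CLIP-style.
    sims = [cosine(video_emb, l) / temperature for l in label_embs]
    m = max(sims)                                # subtract max for stability
    exps = [math.exp(s - m) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

video = [1.0, 0.0]                   # toy video embedding
labels = [[0.9, 0.1], [0.1, 0.9]]    # toy text embeddings for two labels
probs = zero_shot_probs(video, labels)
print(probs.index(max(probs)))  # → 0
```

The predicted class is simply the label whose text embedding is most similar to the video embedding; no task-specific fine-tuning is required.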