michellejieli/NSFW_text_classifier
Fine-tuned DistilRoBERTa-base for NSFW Classification
Model Description
The model is a fine-tuned version of DistilRoBERTa-base, a distilled transformer model. I fine-tuned it on Reddit posts to classify not-safe-for-work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts two classes: NSFW and safe for work (SFW).
It was fine-tuned on 14,317 Reddit posts pulled via the [Reddit API (PRAW)](https://praw.readthedocs.io/en/stable/).
How to Use
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")
```
Output:
```
[{'label': 'NSFW', 'score': 0.998853325843811}]
```
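The pipeline returns a list of dicts, each with the predicted `label` and its confidence `score`. If you want to trade off false positives against false negatives differently, you can apply your own threshold to that score instead of taking the predicted label as-is. A minimal sketch (the `is_nsfw` helper and its threshold are illustrative, not part of the model or library):

```python
def is_nsfw(result: dict, threshold: float = 0.5) -> bool:
    """Flag a single pipeline result as NSFW using a custom threshold.

    The pipeline's `score` is the confidence in the *predicted* label,
    so an 'SFW' prediction is converted to an NSFW probability of
    1 - score before comparing against the threshold.
    """
    nsfw_prob = result["score"] if result["label"] == "NSFW" else 1.0 - result["score"]
    return nsfw_prob >= threshold

# Example: output from the classifier call above.
sample = {"label": "NSFW", "score": 0.998853325843811}
print(is_nsfw(sample))                  # True
print(is_nsfw(sample, threshold=0.999))  # False: below the stricter cutoff
```

Raising the threshold makes the filter more conservative (fewer texts flagged as NSFW); lowering it flags more borderline content.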
Contact
Please reach out to michelle.li851@duke.edu if you have any questions or feedback.