
michellejieli/NSFW_text_classifier



Fine-tuned DistilRoBERTa-base for NSFW Classification


Model Description

DistilRoBERTa-base is a transformer model that performs sentiment analysis. I fine-tuned the model on Reddit posts to classify not safe for work (NSFW) content, specifically text that is considered inappropriate and unprofessional. The model predicts 2 classes: NSFW or safe for work (SFW).
The model is a fine-tuned version of DistilRoBERTa-base.
It was fine-tuned on 14,317 Reddit posts pulled from the [Reddit API](https://praw.readthedocs.io/en/stable/).


How to Use

from transformers import pipeline
classifier = pipeline("sentiment-analysis", model="michellejieli/NSFW_text_classifier")
classifier("I see you’ve set aside this special time to humiliate yourself in public.")

Output:
[{'label': 'NSFW', 'score': 0.998853325843811}]
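In practice you often want to act on the prediction, e.g. drop texts the model flags as NSFW. Below is a minimal sketch of such a filter; the `filter_sfw` helper and the `threshold` parameter are my own illustration (not part of the model card), and the mock predictions simply mimic the pipeline's output format shown above.

```python
def filter_sfw(texts, predictions, threshold=0.5):
    """Keep texts that are labeled SFW, or whose NSFW score is below threshold."""
    kept = []
    for text, pred in zip(texts, predictions):
        # pred is a dict like {'label': 'NSFW', 'score': 0.99}, as returned by the pipeline
        if pred["label"] != "NSFW" or pred["score"] < threshold:
            kept.append(text)
    return kept

# Mock predictions in the pipeline's output format (for illustration only):
texts = ["have a nice day", "I see you've set aside this special time to humiliate yourself in public."]
preds = [{"label": "SFW", "score": 0.97}, {"label": "NSFW", "score": 0.998}]
print(filter_sfw(texts, preds))  # -> ['have a nice day']
```

With real data you would obtain `preds` by calling `classifier(texts)`; raising `threshold` makes the filter more permissive toward low-confidence NSFW predictions.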


Contact

Please reach out to michelle.li851@duke.edu if you have any questions or feedback.


