
mrm8488/codebert-base-finetuned-detect-insecure-code

CodeBERT fine-tuned for Insecure Code Detection

codebert-base fine-tuned on the CodeXGLUE Defect Detection dataset for the insecure code detection downstream task.
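For a quick start, here is a minimal usage sketch with the transformers pipeline API. The example input is illustrative, and the LABEL_0/LABEL_1 naming is an assumption that holds when the checkpoint's config does not define custom label names:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
classifier = pipeline(
    "text-classification",
    model="mrm8488/codebert-base-finetuned-detect-insecure-code",
)

# Assumed mapping: LABEL_1 = insecure code, LABEL_0 = secure code
print(classifier("strcpy(buffer, user_input);"))
```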


Details of CodeBERT

We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing and evaluate it in a zero-shot setting where the parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
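As a sketch of what these general-purpose representations look like in practice, the snippet below loads the publicly released base checkpoint (microsoft/codebert-base) and encodes an NL-PL pair. The example strings and the use of the first-token vector as a joint representation are illustrative assumptions, not the paper's exact evaluation setup:

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load the bimodal pre-trained CodeBERT base checkpoint
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# Encode an NL-PL pair: a natural-language query and a code snippet.
# The tokenizer inserts the separator tokens between the two segments.
nl = "return the maximum of two numbers"
pl = "def max(a, b): return a if a > b else b"
inputs = tokenizer(nl, pl, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the first-token vector as a joint NL-PL representation,
# e.g. for scoring candidates in natural language code search.
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # torch.Size([1, 768])
```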


Details of the downstream task (code classification) – Dataset

Given a piece of source code, the task is to identify whether it is insecure code that could be used to attack software systems, for example through resource leaks, use-after-free vulnerabilities, or DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code (an inference sketch is given after the table below).
The dataset comes from the paper Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. All projects are combined and split 80%/10%/10% into training/dev/test sets.
Data statistics for the dataset are shown in the table below:

Split   #Examples
Train      21,854
Dev         2,732
Test        2,732
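The following is a minimal inference sketch, assuming the standard transformers sequence-classification API and that class index 1 corresponds to insecure code as defined above; the C snippet used as input is illustrative only:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "mrm8488/codebert-base-finetuned-detect-insecure-code"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example input: a C snippet with a use-after-free bug (illustrative only)
code = """
char *p = malloc(16);
free(p);
strcpy(p, data);  /* use after free */
"""

inputs = tokenizer(code, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes: index 1 = insecure, index 0 = secure
probs = torch.softmax(logits, dim=-1).squeeze()
print(f"secure: {probs[0]:.3f}  insecure: {probs[1]:.3f}")
```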

