
vinvino02/glpn-nyu



GLPN fine-tuned on NYUv2

Global-Local Path Networks (GLPN) model trained on NYUv2 for monocular depth estimation. It was introduced in the paper Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Kim et al. and first released in this repository.
Disclaimer: The team releasing GLPN did not write a model card for this model, so this model card has been written by the Hugging Face team.


Model description

GLPN uses SegFormer as its backbone and adds a lightweight head on top for depth estimation.
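
To see this split in the implementation, you can load the checkpoint and list the model's top-level modules. This is a minimal sketch; the exact module names depend on the transformers version installed.

from transformers import GLPNForDepthEstimation

model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")

# Print the top-level modules: the SegFormer-style hierarchical encoder sits
# inside the base model, with a lightweight decoder and depth head on top
for name, module in model.named_children():
    print(name, "->", type(module).__name__)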


Intended uses & limitations

You can use the raw model for monocular depth estimation. See the model hub to look for fine-tuned versions on a task that interests you.
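
If you prefer to query the hub programmatically, the sketch below uses the huggingface_hub client to list GLPN checkpoints; it assumes the huggingface_hub package is installed.

from huggingface_hub import HfApi

api = HfApi()
# Search the Hugging Face Hub for GLPN checkpoints
for info in api.list_models(search="glpn"):
    print(info.id)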


How to use

Here is how to use this model:
from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-nyu")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-nyu")

# prepare image for the model
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth

# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)

# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
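
Alternatively, recent versions of transformers expose a depth-estimation pipeline that wraps the pre- and post-processing above in a single call; this is a sketch assuming such a version is available.

from transformers import pipeline

# The pipeline handles preprocessing, inference, and resizing internally
pipe = pipeline("depth-estimation", model="vinvino02/glpn-nyu")
result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"]  # PIL image of the predicted depth map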

For more code examples, we refer to the documentation.


BibTeX entry and citation info

@article{DBLP:journals/corr/abs-2201-07436,
  author     = {Doyeon Kim and
                Woonghyun Ga and
                Pyunghwan Ahn and
                Donggyu Joo and
                Sehwan Chun and
                Junmo Kim},
  title      = {Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth},
  journal    = {CoRR},
  volume     = {abs/2201.07436},
  year       = {2022},
  url        = {https://arxiv.org/abs/2201.07436},
  eprinttype = {arXiv},
  eprint     = {2201.07436},
  timestamp  = {Fri, 21 Jan 2022 13:57:15 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2201-07436.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}

