Posts in category "multimodal" (1)
Sangmun
LayoutLM: Pre-training of Text and Layout for Document Image Understanding (https://arxiv.org/abs/1912.13318)

> Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pre-training models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and …

Introduction: Document A..
Paper Review
2023-03-05 21:26
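As a quick illustration of the paper's core idea (this sketch is not from the original post), the snippet below pairs each token with a normalized bounding box, which is how LayoutLM consumes layout alongside text. It uses the Hugging Face `transformers` LayoutLM classes; the checkpoint name, example words, and box coordinates are illustrative assumptions.

```python
# Minimal sketch: feeding text + layout into LayoutLM.
# Each token carries a (x0, y0, x1, y1) box normalized to a 0-1000 page grid,
# so the model sees where a word sits on the page, not just what it says.
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

words = ["Invoice", "Total", "$120.00"]          # assumed example content
word_boxes = [[60, 50, 200, 80],                  # assumed box per word
              [60, 700, 150, 730],
              [400, 700, 520, 730]]

tokens, boxes = [], []
for word, box in zip(words, word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    boxes.extend([box] * len(word_tokens))        # subwords share the word's box

# Add special tokens with their conventional boxes.
input_ids = tokenizer.convert_tokens_to_ids(
    [tokenizer.cls_token] + tokens + [tokenizer.sep_token])
boxes = [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]]

outputs = model(input_ids=torch.tensor([input_ids]),
                bbox=torch.tensor([boxes]))
print(outputs.last_hidden_state.shape)            # (1, seq_len, 768)
```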