Post list: Transformer (1)
Sangmun

https://arxiv.org/abs/1706.03762

Attention Is All You Need: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new…"

Overview: Earlier sequence-to-sequence work proposed a machine translation method built on LSTMs, but its performance degrades as sequences grow longer..
Naver AI Boostcamp, 4th cohort
2022. 10. 3. 19:30
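
The excerpt contrasts LSTM-based sequence-to-sequence models with the attention mechanism of the linked paper. For reference, here is a minimal sketch of the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, in PyTorch; the function name and tensor shapes are illustrative choices, not taken from the post.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Scaled dot-product attention from "Attention Is All You Need".

    q, k, v: (batch, seq_len, d_k) tensors (shapes are illustrative).
    mask:    optional boolean tensor; True marks positions to hide.
    """
    d_k = q.size(-1)
    # Score every query against every key, scaled by sqrt(d_k)
    # so the softmax stays well-conditioned for large d_k.
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                       # weighted sum of values

# Toy usage: one batch, 5 tokens, 8-dimensional keys/values.
q = k = v = torch.randn(1, 5, 8)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 5, 8])
```

Because the attention weights relate every position to every other position directly, the path length between distant tokens is constant, which is the paper's answer to the long-sequence degradation the overview mentions.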