Sangmun
[NLP] Transformer
https://arxiv.org/abs/1706.03762

Attention Is All You Need: "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely."

Overview: Earlier Sequence-to-Sequence work proposed an LSTM-based approach to machine translation, but its performance degrades as the input sequence grows longer…
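The overview contrasts the length bottleneck of LSTM-based Seq2Seq with the attention mechanism the paper proposes. As a minimal PyTorch sketch of the scaled dot-product attention at the core of the Transformer (an illustration, not code from the original post; the function name and toy tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  (Eq. 1 in the paper)."""
    d_k = q.size(-1)
    # Similarity of every query against every key, scaled by sqrt(d_k)
    # so large dot products do not push the softmax into tiny gradients.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        # Positions where mask == 0 are excluded from attention.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                   # weighted sum of value vectors

# Toy usage: batch of 2 sequences, length 5, model dimension 8.
q = k = v = torch.randn(2, 5, 8)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 5, 8])
```

Because every position attends to every other position directly, the path length between any two tokens is constant rather than growing with sequence length, which is what lets the Transformer avoid the degradation described above.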
Naver AI Boostcamp, 4th cohort
2022. 10. 3. 19:30