Conqrean security blog
[PyTorch KServe] Related links
KFServing has been renamed to KServe, and KServe is now a standalone open-source project.
Reference links
KServe documentation website
https://kserve.github.io/website/0.8/
KServeClient SDK example
https://kserve.github.io/website/0.7/sdk_docs/docs/KServeClient/#example_3
Deploy PyTorch model with TorchServe InferenceService
https://kserve.github.io/website/modelserving/v1beta1/torchserve/
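For reference, the TorchServe example linked above centers on an InferenceService manifest along these lines (a minimal sketch; the resource name and `storageUri` below are placeholders, not the exact values from the docs, so substitute the location of your own TorchServe model store):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: torchserve-mnist   # hypothetical name
spec:
  predictor:
    pytorch:
      # storageUri must point to a bucket or PVC containing a TorchServe
      # model store: a .mar model archive plus config/config.properties
      storageUri: gs://your-bucket/torchserve-model-store   # placeholder path
```

Applying the manifest with `kubectl apply -f` creates the service, and the docs then walk through sending prediction requests to the model's `:predict` endpoint.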