...
| Code block |
|---|
addgroup pod --gid **POD GROUP ID, IDENTICAL ON ALL SYSTEMS (FRONTEND, ENCODERS, NFS, ETC.)** [ONLY IF THE GROUP ID DIFFERS FROM THE USER ID]
adduser pod --uid **POD USER ID, IDENTICAL ON ALL SYSTEMS (FRONTEND, ENCODERS, NFS, ETC.)** --gid **POD GROUP ID, IDENTICAL ON ALL SYSTEMS (FRONTEND, ENCODERS, NFS, ETC.)**
adduser pod sudo |
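So that files shared over NFS keep consistent ownership, the pod user and group must resolve to the same numeric IDs everywhere; a quick check, to run on each machine and compare, could look like this:
| Code block |
|---|
id pod
getent passwd pod
getent group pod |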
Installation/deployment of Pod on Debian 12 (the "virtualenvs" part changes slightly compared to previous distributions)
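On Debian 12 the system Python is "externally managed" (PEP 668), so a plain system-wide pip install is refused and the virtualenv tooling has to be installed per user (or via pipx/venv). A minimal sketch, assuming virtualenvwrapper and the django_pod3 environment used later on this page:
| Code block |
|---|
sudo apt install python3-pip
su - pod
# per-user install; --break-system-packages works around the PEP 668 refusal
python3 -m pip install --user --break-system-packages virtualenvwrapper
export WORKON_HOME=~/.virtualenvs
source ~/.local/bin/virtualenvwrapper.sh
mkvirtualenv -p /usr/bin/python3 django_pod3 |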
...
| Code block |
|---|
deb http://ftp.fr.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb-src http://ftp.fr.debian.org/debian/ bookworm main contrib non-free non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
# bookworm-updates, to get updates before a point release is made;
# see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
deb http://ftp.fr.debian.org/debian/ bookworm-updates main contrib non-free non-free-firmware
deb-src http://ftp.fr.debian.org/debian/ bookworm-updates main contrib non-free non-free-firmware |
| Code block |
|---|
sudo apt update
sudo apt upgrade |
Adding the "NVIDIA" sources to install the latest versions from the distribution
| Code block |
|---|
su - pod
mkdir nvidia
cd nvidia/
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install nvidia-kernel-open-dkms
sudo apt install nvidia-driver
sudo apt install cuda-drivers
sudo reboot
su - pod
sudo apt install nvidia-cuda-toolkit
sudo reboot
su - pod
## TO CHECK THE INSTALLED NVIDIA AND CUDA VERSIONS ##
nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A6000               On  | 00000000:3B:00.0 Off |                  Off |
| 30%   26C    P8              15W / 300W |      5MiB / 49140MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+ |
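Once the driver and toolkit are in place (and ffmpeg is installed), the encoding chain can be sanity-checked; the NVENC encoders and CUDA filters must show up in ffmpeg for the GPU settings below to work:
| Code block |
|---|
## QUICK CHECKS OF THE GPU ENCODING CHAIN ##
nvcc --version                               # CUDA toolkit version
ffmpeg -hide_banner -encoders | grep nvenc   # h264_nvenc / hevc_nvenc must be listed
ffmpeg -hide_banner -filters | grep cuda     # scale_cuda / thumbnail_cuda filters |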
...
(Optimized) ffmpeg settings to use in Pod for GPU encoding
| Code block |
|---|
""" FFMPEG SETTINGS LOCATE IN : pod/video/encoding_settings.py pod/video/encoding_gpu_settings.py all settings can be overwritte in your settings locale """ FFMPEG_USE_GPU = True FFMPEG_CMD_GPU = "ffmpeg -hwaccel_device 0 -hwaccel_output_format cuda -hwaccel cuda" cuda -hwaccel cuda" FFMPEG_PRESET_GPU = "p6" FFMPEG_LEVEL_GPU = 0 FFMPEG_INPUT_GPU = '-hide_banner -threads %(nb_threads)s -i "%(input)s" ' FFMPEG_LEVEL = 0 FFMPEG_PRESET = "p6" FFMPEG_PROFILE = "high" FFMPEG_LIBXLIBX_GPU = "h264_nvenc" FFMPEG_MP4_ENCODE_GPU = ( '%(cut)s -map 0:v:0 %(map_audio)s -c:v %(libx)s -vf "scale_cuda=-2:%(height)s:interp_algo=bicubic:format=yuv420p" ' + "-preset %(preset)s -profile:v %(profile)s " + "-level %(level)s " + "-forced-idr 1 " + "-b:v %(maxrate)s -maxrate %(maxrate)s -bufsize %(bufsize)s -rc vbr -rc-lookahead 20 -bf 1 " + '-force_key_frames "expr:gte(t,n_forced*1)" ' + '-c:a aac -ar 48000 -b:a %(ba)s -movflags faststart -y -fps_mode passthrough "%(output)s" ' ) FFMPEG_HLS_COMMON_PARAMS_GPU = ( "%(cut)s " + "-c:v %(libx)s -preset %(preset)s -profile:v %(profile)s " + "-level %(level)s " + "-forced-idr 1 " + '-force_key_frames "expr:gte(t,n_forced*1)" ' + "-c:a aac -ar 48000 " ) FFMPEG_HLS_ENCODE_PARAMS_GPU = ( '-vf "scale_cuda=-2:%(height)s:interp_algo=bicubic:format=yuv420p" -maxrateb:v %(maxrate)s -b:a:0maxrate %(bamaxrate)s -bufsize %(maxratebufsize)s -rc constqp -b:v 0kb:a:0 %(ba)s -rc vbr -rc-lookahead 20 -bf 1 -qp 32 -2pass 1 -multipass 2 -spatial-aq 1 -aq-strength 11-bf 1 ' + "-hls_playlist_type vod -hls_time %(hls_time)s -hls_flags single_file " + '-master_pl_name "livestream%(height)s.m3u8" ' + '-y "%(output)s" ' ) FFMPEG_CREATE_THUMBNAIL_GPU = ( '-vf "select=between(t\,50\,10%(duration)s)*eq(pict_type\,PICT_TYPE_I),thumbnail_cuda=2,scale_cuda=-2:720:interp_algo=bicubic:format=yuv420p,hwdownload,format=yuv420p" -frames:v 3%(nb_thumbnail)s -vsync vfr "%(output)s_%%04d.png"' ) |
| Info |
|---|
Here, the FFMPEG_CREATE_THUMBNAIL_GPU command only keeps thumbnails (3) taken between seconds 5 and 10, on full frames (PICT_TYPE_I). Adapt it to your needs. |
...
| Code block |
|---|
cd /etc/init.d/
sudo -E wget https://raw.githubusercontent.com/celery/celery/main/extra/generic-init.d/celeryd
sudo chmod u+x /etc/init.d/celeryd
sudo vi /etc/default/celeryd
CELERYD_NODES="worker5" # Nom du/des worker(s). Ajoutez autant de workers que de tache à executer en paralelle.
DJANGO_SETTINGS_MODULE="pod.settings" # settings de votre Pod
CELERY_BIN="/home/pod/.virtualenvs/django_pod3/bin/celery" # répertoire source de celery
CELERY_APP="pod.main" # application où se situe celery
CELERYD_CHDIR="/usr/local/django_projects/podv3" # répertoire du projet Pod (où se trouve manage.py)
CELERYD_OPTS="--time-limit=86400 --concurrency=1 --max-tasks-per-child=1 --prefetch-multiplier=1" # options à appliquer en plus sur le comportement du/des worker(s)
CELERYD_LOG_FILE="/var/log/celery/%N.log" # fichier log
CELERYD_PID_FILE="/var/run/celery/%N.pid" # fichier pid
CELERYD_USER="pod" # utilisateur système utilisant celery
CELERYD_GROUP="pod" # groupe système utilisant celery
CELERY_CREATE_DIRS=1 # si celery dispose du droit de création de dossiers
CELERYD_LOG_LEVEL="INFO" # niveau d'information qui seront inscrit dans les logs
sudo /etc/init.d/celeryd start
workon django_pod3
celery -A pod.main worker -l info
tail -f /var/log/celery/worker5.log -n200 |
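A quick way to confirm the worker is up and answering (worker name and log path follow the /etc/default/celeryd values above):
| Code block |
|---|
sudo /etc/init.d/celeryd status
workon django_pod3
celery -A pod.main status            # lists the nodes that answer (e.g. worker5@hostname)
celery -A pod.main inspect active    # tasks currently being executed |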
...