The Beelink EQ14, powered by Intel’s N150 processor, is a small, relatively inexpensive mini PC that can be a great little home media server. With Intel Quick Sync Video (QSV) support, it’s capable of efficient hardware-accelerated transcoding using very little power. I picked one of these up a little while ago, but only just recently had a moment to sit down with it and try to get things set up and running.
I’ll admit, while the setup process wasn’t bad, I hit a number of roadblocks along the way. Not because it’s especially difficult, but mostly because I’ve never messed around with any of these things before. Assuming that there will be some subset of people in the same boat, I figured that it might be helpful to write up the steps that got things working for me.
This guide walks you through setting up Jellyfin on Ubuntu with Docker from scratch — focusing primarily on ensuring that it’s actually doing hardware-accelerated transcoding. I haven’t done any testing that pushes it to its limits, and I’m not savvy enough to say exactly which video formats/encodings it can actually handle.
For this write-up, I’m focusing on the most straightforward steps to get things up and running in a fashion consistent with basic security principles — no Portainer or reverse proxy; just a clean, focused setup. Of course, you can absolutely do the later parts of this process in something like Portainer (that’s actually what I prefer), and you can set up Nginx Proxy Manager, and lots of other things, but I wanted to focus on just getting from Point A to Point B so that I could get Jellyfin up and running on this bad boy.
So, with that, here’s how I got a Jellyfin server up and running on this little box. There was a lot of trial-and-error in-between these steps, but this is ultimately what worked for me.
1. Install Ubuntu with the Right Kernel
Jellyfin’s hardware transcoding features rely on proper kernel and driver support. Initially, I installed Ubuntu 24.04 LTS, but I couldn’t even get the OS to find the iGPU for hardware acceleration until I reinstalled it using the HWE (Hardware Enablement) version.
Why HWE? The HWE kernel provides newer drivers and support for more recent hardware. The N150 iGPU in the EQ14 needs these updates to function properly with VAAPI and/or QSV. Why Ubuntu 24.04 LTS instead of trying a newer version (e.g., 24.10)? Well, I wanted that long-term support, which (I believe) you don’t get if you simply pull the latest upstream kernel.
I did a clean install of 24.04 LTS with HWE, but I believe that you can install or switch to HWE like this:
sudo apt update
sudo apt install --install-recommends linux-generic-hwe-24.04
sudo reboot
After rebooting, verify your kernel version:
uname -r
You should see something like 6.11.x or newer.
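Since the whole reason for the HWE kernel here is the iGPU, you can also do a quick, entirely optional check (just my habit, not a required step) that the kernel now exposes the Intel GPU’s DRM devices:
ls -l /dev/dri
If the iGPU is being picked up, you should see entries like card0 (or card1) and renderD128 listed.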
2. Install Docker and Configure Permissions
Docker lets you run isolated containers — perfect for Jellyfin, since you can install and upgrade it independently of your system.
To install Docker and all of the Docker-related bits and pieces that you’ll need:
sudo apt update
sudo apt install \
ca-certificates \
curl \
gnupg \
lsb-release -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
After Docker is installed, allow your user to run Docker commands without sudo:
sudo usermod -aG docker $USER
newgrp docker
Why this step matters: running Docker without sudo improves convenience and avoids permission issues when mounting local directories. Plus, from a security perspective, it’s generally just a good idea to use the lowest set of permissions necessary.
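If you want to confirm the group change took effect (a quick optional check), try running a container without sudo; the standard hello-world image works fine for this:
docker run --rm hello-world
If it pulls the image and prints its greeting without any permission-denied errors, you’re set. You may need to log out and back in if newgrp didn’t apply to your current shell.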
3. Install Intel GPU Drivers (VAAPI)
Intel’s iGPU needs proper drivers to support VAAPI, which is the Linux interface for accessing hardware-accelerated video features like encoding and decoding.
Install the drivers:
sudo apt install intel-media-va-driver-non-free i965-va-driver vainfo
Then test it:
vainfo
Even if this throws errors, transcoding will likely still work. On my system, vainfo reports errors every single time, but VA-API and QSV still work perfectly inside Docker. So treat this as a useful check rather than a definitive one, especially since I’m running this entire server in a headless fashion.
This is one of the things that I fought against the most — I wanted to be extra sure that the system could actually see/access the iGPU before going through a more elaborate setup with Docker containers — we’ve all probably had experience with wasting hours upon hours setting up a stack only to realize that the config just wasn’t going to work.
I honestly never got vainfo to give me anything other than errors about failures to connect, e.g.,
error: can't connect to X server!
error: failed to initialize display
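Those connection errors are vainfo looking for an X display, which a headless box won’t have. Depending on your libva-utils version, you may be able to point vainfo straight at the DRM render node instead (I can’t promise this resolves it in every setup, so treat it as a suggestion rather than a guaranteed fix):
vainfo --display drm --device /dev/dri/renderD128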
I ran out and bought an HDMI dummy plug to see if that would solve it, but nothing worked. So, I moved on and went to see whether I could actually access the iGPU from within Docker containers, and it worked perfectly – everything was beautiful. So, that’s what we’re doing next.
4. Confirm Hardware Acceleration with FFmpeg in Docker
You want to be absolutely sure that Docker has access to your iGPU. This test confirms whether QSV is available to containers. Note that the code in this section will temporarily create and run Docker containers that are removed upon completion.
Run this:
docker run --rm ghcr.io/linuxserver/ffmpeg ffmpeg -hwaccels
You should see qsv listed, along with several other acceleration methods (e.g., vaapi):
ffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (Ubuntu 13.3.0-6ubuntu2~24.04)
configuration: --disable-debug --disable-doc --disable-ffplay --enable-alsa --enable-cuda-llvm --enable-cuvid --enable-ffprobe --enable-gpl --enable-libaom --enable-libass --enable-libdav1d --enable-libfdk_aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libkvazaar --enable-liblc3 --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libplacebo --enable-librav1e --enable-librist --enable-libshaderc --enable-libsrt --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-nonfree --enable-nvdec --enable-nvenc --enable-opencl --enable-openssl --enable-stripping --enable-vaapi --enable-vdpau --enable-version3 --enable-vulkan
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
Hardware acceleration methods:
vdpau
cuda
vaapi
qsv
drm
opencl
vulkan
Just to be extra sure, you can run a test transcode within a Docker container to see QSV in action:
docker run --rm \
--device /dev/dri:/dev/dri \
--entrypoint ffmpeg \
ghcr.io/linuxserver/ffmpeg \
-init_hw_device qsv=hw:/dev/dri/renderD128 \
-hwaccel qsv -hwaccel_device hw -hwaccel_output_format qsv \
-f lavfi -i testsrc=duration=3:size=1280x720:rate=30 \
-vf 'format=nv12,hwupload=extra_hw_frames=64' \
-c:v h264_qsv -f null -
If it runs without errors and shows that it’s outputting frames, you’re good. Here’s my output:
ffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (Ubuntu 13.3.0-6ubuntu2~24.04)
configuration: --disable-debug --disable-doc --disable-ffplay --enable-alsa --enable-cuda-llvm --enable-cuvid --enable-ffprobe --enable-gpl --enable-libaom --enable-libass --enable-libdav1d --enable-libfdk_aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-libkvazaar --enable-liblc3 --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libplacebo --enable-librav1e --enable-librist --enable-libshaderc --enable-libsrt --enable-libsvtav1 --enable-libtheora --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpl --enable-libvpx --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-nonfree --enable-nvdec --enable-nvenc --enable-opencl --enable-openssl --enable-stripping --enable-vaapi --enable-vdpau --enable-version3 --enable-vulkan
libavutil 59. 39.100 / 59. 39.100
libavcodec 61. 19.101 / 61. 19.101
libavformat 61. 7.100 / 61. 7.100
libavdevice 61. 3.100 / 61. 3.100
libavfilter 10. 4.100 / 10. 4.100
libswscale 8. 3.100 / 8. 3.100
libswresample 5. 3.100 / 5. 3.100
libpostproc 58. 3.100 / 58. 3.100
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/local/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/local/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
Input #0, lavfi, from 'testsrc=duration=3:size=1280x720:rate=30':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: wrapped_avframe, rgb24, 1280x720 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 30 tbn
Stream mapping:
Stream #0:0 -> #0:0 (wrapped_avframe (native) -> h264 (h264_qsv))
Press [q] to stop, [?] for help
[h264_qsv @ 0x64a2714357c0] Using the constant quantization parameter (CQP) by default. Please use the global_quality option and other options for a quality-based mode or the b option and other options for a bitrate-based mode if the default is not the desired choice.
Output #0, null, to 'pipe:':
Metadata:
encoder : Lavf61.7.100
Stream #0:0: Video: h264, qsv(tv, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 30 fps, 30 tbn
Metadata:
encoder : Lavc61.19.101 h264_qsv
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/null @ 0x64a271437880] video:41KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
frame= 90 fps=0.0 q=33.0 Lsize=N/A time=00:00:02.93 bitrate=N/A speed= 6.8x
Note: As mentioned, since I’m running this server completely headless, I have an HDMI dummy plug inserted into one of the Beelink’s HDMI ports. Some headless systems don’t expose the GPU properly unless a display is connected, so it’s worth trying if you hit roadblocks. I have no idea how necessary this is, as I had already installed the dummy plug and haven’t tested without it, so your mileage may vary here.
5. Create Docker Directory Structure for Jellyfin
Organizing your Docker containers under a common folder structure keeps things tidy and makes backups easy. I like to keep these all in my home directory, but you certainly don’t have to — you do you.
mkdir -p ~/docker/jellyfin/config
mkdir -p ~/docker/jellyfin/cache
The config folder holds Jellyfin’s settings, metadata, plugins, etc. The cache folder helps Jellyfin with things like thumbnails and transcode buffers.
6. Find Your User and Group IDs
To make sure Jellyfin runs with the correct permissions (and can access the GPU), get your UID and GIDs:
id
Look for:
- uid=1000(...) → that’s your PUID (e.g., 1000)
- gid=110(...) → that’s your PGID (e.g., 110)
- Any group labeled video → this is your video GID (e.g., 44)
- Any group labeled render → this is your render GID (e.g., 993; this might not show up under this command, and it may not actually be necessary)
These IDs ensure your container has access to the necessary devices, but there’s an important nuance here worth elaborating on. If you’re running your Jellyfin container as the root user, it will generally already have the permissions it needs to access hardware devices like the iGPU — meaning you wouldn’t typically need to worry about what groups the user belongs to.
However, running containers as root is often discouraged from a security standpoint. In my setup, I’m explicitly running the Jellyfin container as a normal, non-root user (specifically using my user’s UID and GID), which means I need to explicitly provide access to any hardware or system resources it should use. For Jellyfin, that includes the video and render groups, since those are tied to access control for the GPU’s VAAPI and QSV interfaces.
When we ran those ffmpeg hardware tests earlier, the containers were running as root by default, which is why they had no trouble using the iGPU without needing to join the video or render groups. But in a production container running under a normal user account, the absence of those group permissions prevents transcoding from working, as Jellyfin won’t have access to the necessary iGPU interface. So if you’re seeing transcoding failures even though the test containers worked — double-check your UID, GID, and those group memberships.
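To see those group IDs directly (optional, but these are the numbers the group_add entries in the next step are built from), you can ask the system for them:
getent group video render
ls -l /dev/dri/renderD128
The getent output shows each group’s numeric GID, and the ls output shows which group owns the render node that the container ultimately needs to reach.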
7. Write the Docker Compose File for Jellyfin
Now you can define how Jellyfin should run, what resources it uses, and how it connects to your media.
Save this as ~/docker/jellyfin/docker-compose.yml:
version: "3.8"

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    user: "1000:110"  # UID:GID of your non-root user
    group_add:
      - "44"   # Your video GID (check with `getent group video`)
      - "993"  # Your render GID (check with `getent group render`)
    volumes:
      - /home/<your-username>/docker/jellyfin/config:/config
      - /home/<your-username>/docker/jellyfin/cache:/cache
      - /etc/localtime:/etc/localtime:ro
    devices:
      - /dev/dri:/dev/dri  # Intel iGPU for VAAPI
    ports:
      - 8096:8096
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - MKNOD
      - AUDIT_WRITE
This compose file sets up Jellyfin securely, with your user’s permissions and hardware access enabled. A few important notes:
- security_opt and cap_drop: These options help harden the container. security_opt: no-new-privileges:true prevents the container from gaining additional privileges, even if binaries inside try to. The cap_drop section removes specific Linux capabilities from the container that Jellyfin doesn’t need — like MKNOD (creating device files) and AUDIT_WRITE (writing audit logs). By stripping these away, you’re reducing the attack surface in case the container were ever compromised.
- Replace user info: Where it says user: "1000:110", you’ll want to substitute those numbers with the actual UID and GID of your system user. The same is true for the group_add lines: you’ll want to include the correct video GID and render GID there, otherwise the container won’t have access to the hardware for transcoding.
- Replace <your-username> with the correct path: That includes updating the volume paths (e.g., /home/ryanboyd/docker/jellyfin/cache and /home/ryanboyd/docker/jellyfin/config).
- Media mounts: If your media live on an external drive or a NAS share, you’ll need to mount that first (e.g., via /etc/fstab). I won’t get into mounting external storage in a persistent fashion here (there are a zillion other guides online), but once that’s done, you’ll want to include the mounted path as a read-only volume in the compose file. For example:
volumes:
  - /mnt/nas-media:/media:ro
This makes your media library available to Jellyfin while keeping it safe from modification, since the mount is read-only.
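Before starting anything, it doesn’t hurt to let Docker validate the file for you; this catches YAML indentation mistakes, which are easy to make when editing compose files by hand:
cd ~/docker/jellyfin
docker compose config
If the file parses cleanly, it prints the merged configuration; if not, it tells you roughly where the problem is.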
8. Start Jellyfin
Launch the container:
cd ~/docker/jellyfin
docker compose up -d
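Optionally, before heading to the browser, you can check that the container came up and can actually see the GPU devices (the container name here assumes the compose file above):
docker compose ps
docker exec jellyfin ls -l /dev/dri
You should see the jellyfin container listed as running, and the same card*/renderD128 entries from earlier visible inside it.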
Then visit Jellyfin in your browser:
http://<your-local-ip>:8096
From here, Jellyfin will guide you through its initial setup wizard, where you can create your user account and point it to your media folders.
Once you get through the initial setup wizard, remember to:
- Turn on hardware transcoding in the admin dashboard under Playback > Transcoding.
- Once enabled, you can verify that it’s working by playing a video and clicking the gear icon during playback to change the bitrate/resolution.
- If hardware acceleration isn’t configured correctly, changing playback quality will likely trigger a Jellyfin pop-up message saying that there was some kind of playback error.
- You can also inspect the Dashboard > Activity Log in the Jellyfin interface (or look at the logs in /config/log inside the container volume) to confirm whether hardware transcoding is being used. You can also check from the system terminal while a transcode is happening by looking at resource usage via something like sudo intel_gpu_top (you’ll need to install Intel’s GPU tools first; see the snippet below) — you should see your iGPU doing some heavy lifting, but only during transcoding 🙂
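If you want to go the intel_gpu_top route, this is all it takes on Ubuntu (the tool lives in the intel-gpu-tools package):
sudo apt install intel-gpu-tools
sudo intel_gpu_top
While a hardware transcode is running, the Video engine rows should show noticeable load; during direct play they’ll sit near zero.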
Conclusions
So, there you have it: a simple and secure Jellyfin install on the Beelink EQ14 with an N150 processor, running completely locally without any reverse proxies or third-party dashboards. You can absolutely layer in those features later — things like Nginx Proxy Manager, Portainer, or a full-fledged homepage dashboard. But if you’re just getting started and want a minimal, functional stack, this will get you there fast and clean.