Problem Description
CPU was pegged at 100% on my server. I found 24 leftover 1Panel processes running docker logs --tail 30 against every container and grepping the output for error keywords. dockerd sat at 35-88% CPU doing ~637k read syscalls/sec with near-zero actual I/O, i.e. polling log file descriptors in a tight loop. Killing those processes dropped CPU from 100% to ~6% instantly.
The processes were spawned on Apr 24 with no active dashboard session at the time. Their PPID was 1 (orphaned after the parent web handler exited). They simply accumulate and are never cleaned up.
Steps to Reproduce
Not sure how to reliably trigger it; I wasn't actively doing anything when it happened. The command chain 1Panel spawns looks roughly like the sketch below (reconstructed from the process tree in the log output; the grep keywords are my assumption):
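```bash
# Reconstruction of the per-container log scan 1Panel appears to spawn.
# Container names are taken from the process tree below; the grep keywords
# are an assumption (I could not capture the actual script body).
for name in $(docker ps --format '{{.Names}}'); do
    docker logs --tail 30 "$name" 2>&1 | grep -iE 'error|fatal|fail'
done
```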
It is probably triggered by a dashboard load or by background monitoring. Either way, the problem is that these processes pile up and never terminate properly.
The expected correct result
Don't scan container logs automatically unless the user explicitly asks, or
Make sure the spawned processes actually terminate and close their file descriptors (e.g. by bounding each scan with a timeout; see the sketch after this list), or
Give users a setting to disable this — on a single-core VPS, running docker logs --tail 30 against all containers simultaneously is destructive.
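A minimal sketch of the second option, assuming the scan stays a shell pipeline (the 10-second budget is an arbitrary choice):

```bash
# Bound each log scan with coreutils timeout(1) so a stuck `docker logs`
# can never outlive its parent: SIGTERM after 10s, SIGKILL 5s later.
timeout --signal=TERM --kill-after=5 10 \
    docker logs --tail 30 "$name" 2>&1 | grep -iE 'error|fatal|fail'
```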
Related log output
# dockerd at peak load
PID %CPU COMMAND
3145844 35-88 dockerd (fluctuates, spikes when logs processes are active)
# System-level CPU breakdown
%Cpu(s): 53.4 us, 46.6 sy, 0.0 ni, 0.0 id, 0.0 wa
# 46.6% system CPU with 0% idle — dockerd was consuming the entire core

# dockerd I/O stats (3-second sample, observed around 20:35 CST)
syscr: 135357722361 -> 135359633625 (+1,911,264 read syscalls = ~637k/sec)
rchar: 117951608 -> 117951641 (+33 bytes actually read = near-zero)
read_bytes: 233037824 (unchanged, no actual disk I/O)
# This confirms dockerd was polling container log file descriptors
# in a tight loop, not doing real I/O

# Process tree of the orphaned workers
PID 4141078 (PPID=1, orphaned — original parent was 1Panel web handler)
└─ PID 4141088
   ├─ docker logs --tail 30 1Panel-bitwarden-Zkt1
   ├─ docker logs --tail 30 1Panel-maddy-mail-LCO7
   ├─ docker logs --tail 30 1Panel-roundcube-EXLr
   └─ ... (one per container, 16 containers total)
# 24 such processes were alive, some already 2 days old (since Apr 24)
# All in S (sleeping) state but dockerd still burning CPU on their fds

# Server environment
1 vCPU, 1.9GB RAM, 1GB swap (814MB used)
Debian 12 (kernel 6.1.0-44), Docker 29.4.1, overlay2 storage
json-file log driver (max-size 50m, max-file 3)
16 Docker containers running
# 1Panel DB settings
MonitorStatus = enable
MonitorInterval = 5 (default)
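For reference, the syscr/rchar/read_bytes deltas above come straight from /proc/<pid>/io, sampled twice 3 seconds apart, roughly like this:

```bash
# Sample dockerd's kernel I/O counters twice, 3 seconds apart.
# syscr = read() syscall count, rchar = bytes returned, read_bytes = real disk I/O.
pid=$(pidof dockerd)
grep -E '^(syscr|rchar|read_bytes)' "/proc/$pid/io"; sleep 3
grep -E '^(syscr|rchar|read_bytes)' "/proc/$pid/io"
# A huge syscr delta with a near-zero rchar delta means millions of read()
# calls returning no data, i.e. polling rather than real I/O.
```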
Additional Information
Workaround: a cron job running pkill -f "docker logs.*--tail" every 5 minutes drops CPU from 100% to ~6% immediately (exact crontab entry below).
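The workaround as an /etc/cron.d entry (the file name is arbitrary; the pkill pattern is verbatim from above):

```bash
# /etc/cron.d/kill-1panel-logscan
*/5 * * * * root pkill -f "docker logs.*--tail"
```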
I also changed MonitorInterval from 5 to 60 in the database, but I'm not sure whether that helps with this specific issue.
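For completeness, the database change was along these lines; the DB path, table, and key names here are assumptions, so verify against your own install first:

```bash
# Hypothetical sketch: the actual 1Panel DB path and schema may differ.
sqlite3 /opt/1panel/db/1Panel.db \
  "UPDATE settings SET value = '60' WHERE key = 'MonitorInterval';"
```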
Contact Information
wran@outlook.it
1Panel Version
v1.10.34-lts