Notes
Containers installed through Portainer
- Immich
Containers installed through command line
Located in /srv/dev-disk-by-uuid-31275b2a-43bc-4c99-b8cb-6672862bf771
- Nextcloud
- Diun
- Paperless-ngx
- Hugo (Hugo is in /configs)
Located in /opt/compose in one big stack:
- Gluetun
- Flaresolverr
- Prowlarr
- QbitTorrent
```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      # - 8888:8888/tcp # HTTP proxy; remove if not using
      - 8080:8080 # qBittorrent web interface
      - 6081:6881
      - 6081:6881/udp
      - 6011:6011
      - 9696:9696 # Prowlarr
      - 8191:8191 # FlareSolverr
      # - 7878:7878 # Radarr
    volumes:
      - ${CONFIGS}Gluetun:/config
    labels:
      - "diun.enable=true"
    environment:
      - FIREWALL_OUTBOUND_SUBNETS=172.17.0.0/16
      # - HTTPPROXY=on # Remove if not using the HTTP proxy
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=[REDACTED]
      - WIREGUARD_ADDRESSES=[REDACTED]
      - SERVER_CITIES="Chicago IL"
    healthcheck:
      test: ping -c 1 www.google.com || exit 1
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 45s
    restart: unless-stopped
  flaresolverr:
    # DockerHub mirror: flaresolverr/flaresolverr:latest
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    network_mode: "container:gluetun"
    labels:
      - "diun.enable=true"
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
      - LOG_HTML=${LOG_HTML:-false}
      - CAPTCHA_SOLVER=${CAPTCHA_SOLVER:-none}
      - TZ=America/New_York
    # ports:
    #   - 8191:8191
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
  prowlarr:
    image: linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - ${CONFIGS}Prowlarr/config:/config
      - ${ARRPATH}Prowlarr/backup:/data/Backup
      - ${ARRPATH}Downloads:/downloads
    labels:
      - "diun.enable=true"
    # ports:
    #   - 9696:9696
    network_mode: "container:gluetun"
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
  qbittorrent:
    image: linuxserver/qbittorrent:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
      - "diun.enable=true"
    volumes:
      - ${CONFIGS}Qbittorrent/config:/config
      - ${ARRPATH}Downloads:/downloads
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
      - WEBUI_PORT=8080
    network_mode: "container:gluetun"
    healthcheck:
      start_period: 15s
    depends_on:
      gluetun:
        condition: service_healthy
```
Install OMV
Handle Lid and Screen Blank
- In `/etc/systemd/logind.conf`, set `HandleLidSwitch=ignore`
- In `/etc/default/grub`, set `GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=100"` (100 is in seconds)
- Run `sudo update-grub`
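If you prefer to script the grub change, here's a minimal sketch; it edits a scratch copy with hypothetical contents so nothing system-wide is touched (for the real file, point sed at `/etc/default/grub` and then run `sudo update-grub`):

```shell
# Demo on a scratch copy; adapt the path for real use.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub-demo
# Append consoleblank=100 (console blanks after 100 seconds) to the kernel cmdline.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=100"/' /tmp/grub-demo
cat /tmp/grub-demo
```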
Static IP
Comcast Residential Internet fix
- Go to `10.0.0.1` in your browser (the default login is User: admin, Password: password)
- Go to Connected Devices and find your server
- Edit the server entry
- Choose Reserved IP and note the address (in my case it was something like `10.0.0.222`). This address is how you connect to your server from another device within your network (next step).
SSH
- Now that we have a static IP, go to your main machine (not the server), open a terminal, and type `ssh root@10.0.0.222`. The first time, you'll be prompted to answer yes before continuing.
- Enter your password and now you’re in!
- You’ll need to use this for many of the following steps.
First steps
- SSH into server
- `apt update`
- `apt install neovim ranger file git curl unzip -y`
NEOVIM
- Create or edit `/etc/profile.d/editor.sh` (`nvim /etc/profile.d/editor.sh`) with:
  - `export EDITOR="nvim"`
  - `export VISUAL="nvim"`
- `source /etc/profile.d/editor.sh`
- `update-alternatives --install /usr/bin/editor editor /usr/bin/nvim 60`
- `update-alternatives --config editor` and choose the nvim option
- Check that `echo $EDITOR` prints `nvim`
.BASHRC (aliases etc)
`nvim .bashrc` (it's in the /root folder)
```bash
# Added by openmediavault (https://www.openmediavault.org).
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi

# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022

export TERM=xterm-256color

# You may uncomment the following lines if you want `ls' to be colorized:
export LS_OPTIONS='--color=auto'
eval "$(dircolors)"
alias ls='ls $LS_OPTIONS'
alias ll='ls $LS_OPTIONS -l'
alias l='ls $LS_OPTIONS -lA'

# Some more alias to avoid making mistakes:
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
alias f='ranger --choosedir=$HOME/.rangerdir; LASTDIR=`cat $HOME/.rangerdir`; cd "$LASTDIR"'
alias v='nvim'
alias ..='cd ..'
alias ...='cd ../..'
alias ....='cd ../../..'
```
RANGER
- All the above, plus:
- `ranger --copy-config=all` to copy the config files
- Open ranger and type `gR` to jump to ranger's config directory
- Change the root preview settings to `True`:

```python
if fm.username == 'root':
    fm.settings.preview_files = True
    fm.settings.use_preview_script = True
    fm.log.appendleft("Running as root, file previews enabled.")
```
OMV-Extras
- We need to install omv-extras in order to install docker.
- On your server (which you ssh into), type:
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/install | bash
SnapRaid
https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:snapraid
MergerFS
https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:mergerfs
VERY IMPORTANT!!!
Since we’re using MergerFS, we need to make sure to:
- Use an absolute path for container configs (see the .env section below). If we use a mergerfs path, there's a risk that the server tries to load the configs before the mergerfs pool is mounted, which can lead to a lot of trouble.
- Use an absolute path for the compose shared folder that we use to configure Docker, for the same reason.
- In the OMV WebGUI, go to Storage -> Shared Folders and create a shared folder on one of the drives.
- In Services -> Compose -> Settings, point Compose to the shared folder you created.
.env
Create a global.env file on OMV:
```env
ARRPATH=/srv/mergerfs/Merger1/data/Arr/
CONFIGS=/srv/dev-disk-by-uuid-31275b2a-43bc-4c99-b8cb-6672862bf771/configs/
```
MEDIA SERVER
*Arr Stack
Bazarr
```yaml
services:
  bazarr:
    image: lscr.io/linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
      - UMASK_SET=022
    volumes:
      - ${CONFIGS}Bazarr/config:/config
      - ${ARRPATH}Radarr/movies:/data/Movies
      - ${ARRPATH}Sonarr/tvshows:/data/TVShows
    ports:
      - 6767:6767
    labels:
      - "diun.enable=true"
    restart: unless-stopped
```
I had to adjust the paths in Bazarr so it could “see” the Sonarr and Radarr folders, even though I had them set in the environment. After that, it worked like a charm.
Prowlarr
```yaml
services:
  prowlarr:
    image: linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - ${CONFIGS}Prowlarr/config:/config
      - ${ARRPATH}Prowlarr/backup:/data/Backup
      - ${ARRPATH}Downloads:/downloads
    labels:
      - "diun.enable=true"
    # ports:
    #   - 9696:9696
    network_mode: "container:gluetun"
    restart: unless-stopped
```
In Settings -> Apps -> Add App -> Radarr/Sonarr:
- Use the Gateway address from above in the “Prowlarr Server” and “Radarr/Sonarr Server” address fields, like this: `http://172.17.0.1:9696` and `http://172.17.0.1:8989`
Radarr
```yaml
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - ${CONFIGS}Radarr/config:/config
      - ${ARRPATH}Radarr/movies:/data/movies
      - ${ARRPATH}Radarr/backup:/data/Backup
      - ${ARRPATH}Downloads:/downloads
    labels:
      - "diun.enable=true"
    ports:
      - 7878:7878
    restart: unless-stopped
```
Sonarr
```yaml
services:
  sonarr:
    image: linuxserver/sonarr:latest
    container_name: sonarr
    volumes:
      - ${CONFIGS}Sonarr/config:/config
      - ${ARRPATH}Sonarr/backup:/data/Backup
      - ${ARRPATH}Sonarr/tvshows:/data/tvshows
      - ${ARRPATH}Downloads:/downloads
    labels:
      - "diun.enable=true"
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    ports:
      - 8989:8989
    restart: unless-stopped
```
Huntarr
```yaml
services:
  huntarr:
    image: huntarr/huntarr:latest
    container_name: huntarr
    restart: always
    ports:
      - "9705:9705"
    volumes:
      - ${CONFIGS}Huntarr:/config
    labels:
      - "diun.enable=true"
    environment:
      - TZ=America/New_York
```
Cleanuparr
```yaml
services:
  cleanuparr:
    image: ghcr.io/cleanuparr/cleanuparr:latest
    container_name: cleanuparr
    restart: unless-stopped
    ports:
      - "11011:11011"
    volumes:
      - ${CONFIGS}Cleanuparr/config:/config
    environment:
      - PORT=11011
      - BASE_PATH=
      - PUID=1000
      - PGID=100
      - UMASK=022
      - TZ=America/New_York
    labels:
      - "diun.enable=true"
    # Health check configuration
    healthcheck:
      test: ["CMD", "curl", "-f", "http://10.0.0.90:11011/health"]
      interval: 30s # Check every 30 seconds
      timeout: 10s # Allow up to 10 seconds for response
      start_period: 30s # Wait 30 seconds before first check
      retries: 3 # Mark unhealthy after 3 consecutive failures
```
Flaresolverr
```yaml
services:
  flaresolverr:
    # DockerHub mirror: flaresolverr/flaresolverr:latest
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    network_mode: "container:gluetun"
    labels:
      - "diun.enable=true"
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
      - LOG_HTML=${LOG_HTML:-false}
      - CAPTCHA_SOLVER=${CAPTCHA_SOLVER:-none}
      - TZ=America/New_York
    # ports:
    #   - 8191:8191
    restart: unless-stopped
```
VPN etc
Gluetun
```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      # - 8888:8888/tcp # HTTP proxy; remove if not using
      - 8080:8080 # qBittorrent web interface
      - 6081:6881
      - 6081:6881/udp
      - 6011:6011
      - 9696:9696 # Prowlarr
      - 8191:8191 # FlareSolverr
      # - 7878:7878 # Radarr
    volumes:
      - ${CONFIGS}Gluetun:/config
    labels:
      - "diun.enable=true"
    environment:
      - FIREWALL_OUTBOUND_SUBNETS=[address]
      # - HTTPPROXY=on # Remove if not using the HTTP proxy
      - VPN_SERVICE_PROVIDER=[VPN Provider]
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=[REDACTED]
      - WIREGUARD_ADDRESSES=[REDACTED]
      - SERVER_CITIES="Chicago IL"
```
To find FIREWALL_OUTBOUND_SUBNETS:
- SSH into the OMV server
- Run `docker network inspect bridge`
- Use the address after "Subnet", e.g.: `"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"`
IMPORTANT: We will use the Gateway address in prowlarr because of the VPN stuff.
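If you'd rather not eyeball the JSON, `docker network inspect` also accepts a Go-template `--format` flag. The runnable part below demonstrates the same extraction offline against a saved sample of the inspect output; the real commands (commented) must run on the server with Docker up:

```shell
# On the server (needs docker running):
#   docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
#   docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
# Offline illustration with a saved sample of the inspect output:
cat > /tmp/bridge.json <<'EOF'
[{"IPAM": {"Config": [{"Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1"}]}}]
EOF
grep -o '"Subnet": "[^"]*"' /tmp/bridge.json
grep -o '"Gateway": "[^"]*"' /tmp/bridge.json
```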
qBitTorrent
```yaml
services:
  qbittorrent:
    image: linuxserver/qbittorrent:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
      - "diun.enable=true"
    volumes:
      - ${CONFIGS}Qbittorrent/config:/config
      - ${ARRPATH}Downloads:/downloads
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
      - WEBUI_PORT=8080
    network_mode: "container:gluetun"
    healthcheck:
      start_period: 15s
```
When first installed, the user will be admin, and the password will be in the logs.
On Settings -> Advanced, change the “Network Interface” to tun0.
Very important if using MergerFS:
- In the OMV7 WebGUI, go to Storage -> mergerfs -> edit
- In Options, change to `defaults,cache.files=partial`
- Add this to the “exclude files list”: exclude_list
Soulseek
```yaml
services:
  slskd:
    environment:
      - SLSKD_REMOTE_CONFIGURATION=true
      - "SLSKD_SHARED_DIR=/music"
    ports:
      - 5030:5030/tcp
      - 5031:5031/tcp
      - 50300:50300/tcp
    volumes:
      - ${ARRPATH}Soulseek:/app:rw
      - "/srv/mergerfs/Music/Popular Music:/music:rw"
    user: "1000:100"
    image: slskd/slskd:latest
```
For Soulseek, in the WebGUI, uncomment only the settings you want to change in the config file. Make sure to change the username so that it connects to the server.
Jellyfin
```yaml
services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - ${CONFIGS}Jellyfin/config:/config
      - ${ARRPATH}Radarr/movies:/data/Movies
      - ${ARRPATH}Sonarr/tvshows:/data/TVShows
    labels:
      - "diun.enable=true"
    ports:
      - 8096:8096
      - 7359:7359/udp # Service Discovery
      - 1900:1900/udp # Client Discovery
    restart: unless-stopped
```
After installing Jellyfin, when opening the WebGUI, you might be faced with a login page instead of the setup wizard. If that’s the case, just refresh the page (maybe more than once), and you’ll get the setup wizard.
An important point: don't point the /config folder at the MergerFS path, but at an actual disk path (like /srv/xxxxxx/).
Plugins
- Fanart
- Intro Skipper
- LanguageTags
  - Movies get tagged with their language
- Smart Collections
  - Creates collections from tags. Paired with LanguageTags, it makes it easy to create collections by country.
FileBot
```yaml
services:
  filebot:
    image: jlesage/filebot
    ports:
      - "5800:5800"
    environment:
      - PUID=1000
      - PGID=100
      - DARK_MODE=1
    volumes:
      - ${CONFIGS}Filebot/config:/config
      - ${ARRPATH}Filebot:/storage
```
You need to pay for FileBot, but from my experience, it’s the best tool to rename files to be Jellyfin compliant.
Nextcloud AIO
To have Nextcloud AIO up and running, you need to get a domain and do some DNS stuff with Cloudflare and Nginx Proxy Manager.
I used TechHut’s youtube guide: https://youtu.be/DFUmfHqQWyg
- The gist of it is to have Cloudflare handle your domain (I bought a 1.111b xyz domain on Namecheap for $0.85/year).
- In Cloudflare, go to DNS
- create A record, name is your root address (or just @), content is the IP of the machine you host Nextcloud (in my case is the tailscale network so I can access it remotely), no proxy, DNS only.
- create CNAME record for wildcard, name is *, content is the root address, again no proxy, DNS only.
- Go to Account -> API Tokens -> Create Token -> use the “Edit zone DNS” template -> change Zone Resources to All zones -> Continue to summary -> Create Token
- on Nginx Proxy Manager CONTINUE
I had to do this to get rid of some errors:
/etc/docker/daemon.json
```json
{
  "data-root": "/var/lib/docker",
  "dns": ["1.1.1.1", "8.8.8.8"]
}
```
```yaml
services:
  nextcloud-aio-mastercontainer:
    image: ghcr.io/nextcloud-releases/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed.
    network_mode: bridge
    labels:
      - "diun.enable=true"
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed.
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8181:8080
    environment:
      # AIO_COMMUNITY_CONTAINERS: "local-ai memories" # Community containers https://github.com/nextcloud/all-in-one/tree/main/community-containers
      APACHE_PORT: 11000 # Use this port in Nginx Proxy Manager
      # NC_TRUSTED_PROXIES: 172.18.0.3 # this is the NPM proxy ip address in the docker network!
      FULLTEXTSEARCH_JAVA_OPTIONS: "-Xms1024M -Xmx1024M"
      NEXTCLOUD_DATADIR: /srv/mergerfs/Merger1/NEXTCLOUD_data # ⚠️ Warning: do not set or adjust this value after the initial Nextcloud installation is done!
      NEXTCLOUD_MOUNT: /srv/mergerfs/Merger1/data # Allows the Nextcloud container to access the chosen directory on the host.
      NEXTCLOUD_UPLOAD_LIMIT: 1028G
      NEXTCLOUD_MAX_TIME: 7200
      NEXTCLOUD_MEMORY_LIMIT: 1028M
      NEXTCLOUD_ENABLE_DRI_DEVICE: true # Intel QuickSync
      SKIP_DOMAIN_VALIDATION: false # This should only be set to true if things are correctly configured.
      # TALK_PORT: 3478 # This allows to adjust the port that the talk container is using which is exposed on the host. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed.
```
Nginx Proxy Manager (NPM)
```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    labels:
      - "diun.enable=true"
    volumes:
      - ${CONFIGS}nginxproxymanager/data:/data
      - ${CONFIGS}nginxproxymanager/letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    labels:
      - "diun.enable=true"
    volumes:
      - ${CONFIGS}nginxproxymanager/mysql:/var/lib/mysql
```
- In the WebGUI, go to Hosts -> Proxy Hosts
Immich
I installed Immich with Portainer.
Create a stack and add the compose file:
```yaml
name: immich
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - stack.env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - stack.env
    restart: always
    healthcheck:
      disable: false
  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8-bookworm@sha256:ff21bc0f8194dc9c105b769aeabf9585fea6a8ed649c0781caeac5cb3c247884
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always
  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0
    # image: ghcr.io/immich-app/postgres:14-vectorchord0.3.0-pgvectors0.2.0@sha256:fa4f6e0971f454cd95fec5a9aaed2ed93d8f46725cc6bc61e0698e97dba96da1
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
      # Uncomment the DB_STORAGE_TYPE: 'HDD' var if your database isn't stored on SSDs
      DB_STORAGE_TYPE: 'HDD'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    restart: always
  power-tools:
    container_name: immich_power_tools
    image: ghcr.io/varun-raj/immich-power-tools:latest
    ports:
      - "8001:3000"
    env_file:
      - stack.env
volumes:
  model-cache:
```
Add the environment variables:
```env
UPLOAD_LOCATION=/srv/mergerfs/Merger1/data/portainer/library
DB_DATA_LOCATION=/srv/dev-disk-by-uuid-8d480631-d323-4e8e-92a1-42a85272b03c/data/portainer/postgres
IMMICH_VERSION=v1.137.0
DB_PASSWORD=[REDACTED]
DB_USERNAME=[REDACTED]
DB_DATABASE_NAME=immich
IMMICH_URL=http://10.0.0.90:2283
DB_HOST=database
DB_PORT=5432
```
IMPORTANT:
`UPLOAD_LOCATION` can be in the mergerfs pool; `DB_DATA_LOCATION` needs to be an absolute path.
Tailscale
```yaml
services:
  ts-authkey-test:
    image: tailscale/tailscale:latest
    container_name: ts-authkey-test
    hostname: omv-server
    environment:
      - TS_AUTHKEY=tskey-auth-[REDACTED]
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_USERSPACE=false
      - TS_EXTRA_ARGS=--accept-dns=false
    volumes:
      - ${CONFIGS}Tailscale/ts-authkey-test/state:/var/lib/tailscale
    labels:
      - "diun.enable=true"
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
    restart: unless-stopped
    network_mode: host
```
This is for remote access to the server.
You need a Tailscale account (it's free). Generate an auth key: it's under Settings -> Keys -> Generate auth key.
Use that auth key on the environment TS_AUTHKEY on our docker compose file.
You also need to install and run Tailscale on the device you want to use.
With tailscale running everywhere, use the server address that shows on the tailscale account page or app, and add the port you want to use, like 100.0.12.2:8096 for Jellyfin.
Tools
Omni-tools
```yaml
services:
  omni-tools:
    image: iib0011/omni-tools:latest
    container_name: omni-tools
    labels:
      - "diun.enable=true"
    restart: unless-stopped
    ports:
      - "8484:80"
```
Stirling-PDF
```yaml
services:
  stirling-pdf:
    image: docker.stirlingpdf.com/stirlingtools/stirling-pdf:latest
    ports:
      - '8083:8080'
    volumes:
      - ${CONFIGS}StirlingPDF/trainingData:/usr/share/tessdata # Required for extra OCR languages
      - ${CONFIGS}StirlingPDF/extraConfigs:/configs
      - ${CONFIGS}StirlingPDF/customFiles:/customFiles/
      - ${CONFIGS}StirlingPDF/logs:/logs/
      - ${CONFIGS}StirlingPDF/pipeline:/pipeline/
    environment:
      - DISABLE_ADDITIONAL_FEATURES=false
      - LANGS=en_US
```
WeTTY
```yaml
networks:
  my-net:
    external: true
services:
  wetty:
    image: wettyoss/wetty:latest
    container_name: wetty
    networks:
      my-net:
    cap_add:
      - NET_ADMIN
    labels:
      - "diun.enable=true"
    environment:
      - SSHHOST=10.0.0.90 # Your NAS IP here
      - SSHPORT=22 # Your SSH port here
      - SSHUSER=root
    ports:
      - 3000:3000
    restart: unless-stopped
```
After doing this, you need to go to OMV7 -> Services -> Compose -> Networks and create a new empty bridge network by clicking the plus button.
Name it my-net, as shown in the docker-compose file above.
To run it, the address needs to be followed by /wetty like this: http://10.0.0.90:3000/wetty
DumbPad
```yaml
services:
  dumbpad:
    image: dumbwareio/dumbpad:latest
    container_name: dumbpad
    restart: unless-stopped
    ports:
      - 3003:3000
    volumes:
      # Where your notes will be stored
      - /srv/mergerfs/Merger1/data/DumbPad:/app/data
    labels:
      - "diun.enable=true"
    environment:
      # The title shown in the web interface
      SITE_TITLE: DumbPad
      # Optional PIN protection (leave empty to disable)
      DUMBPAD_PIN:
      # The base URL for the application
      BASE_URL: http://10.0.0.90:3003
```
Paperless-ngx
Located in /srv/dev-disk-by-uuid-31275b2a-43bc-4c99-b8cb-6672862bf771/PAPERLESS
```yaml
networks:
  backend:
    driver: bridge
services:
  paperless:
    container_name: paperless
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    restart: unless-stopped
    networks:
      - backend
    ports:
      - 8050:8000
    labels:
      - "diun.enable=true"
    security_opt:
      - no-new-privileges:true
    depends_on:
      - paperless-redis
      - paperless-postgres
    volumes:
      - ./paperless/paperless/data:/usr/src/paperless/data
      - /srv/mergerfs/Merger1/data/paperless/paperless/media:/usr/src/paperless/media
      - /srv/mergerfs/Merger1/data/paperless/paperless/export:/usr/src/paperless/export
      - ./paperless/paperless/consume:/usr/src/paperless/consume
    environment:
      USERMAP_UID: ${USERMAP_UID}
      USERMAP_GID: ${USERMAP_GID}
      PAPERLESS_TIME_ZONE: ${PAPERLESS_TIME_ZONE}
      PAPERLESS_OCR_LANGUAGE: ${PAPERLESS_OCR_LANGUAGE}
      PAPERLESS_ENABLE_UPDATE_CHECK: ${PAPERLESS_ENABLE_UPDATE_CHECK}
      PAPERLESS_REDIS: redis://paperless-redis:6379
      PAPERLESS_DBHOST: ${PAPERLESS_DBHOST}
      PAPERLESS_DBNAME: ${PAPERLESS_DBNAME}
      PAPERLESS_DBUSER: ${PAPERLESS_DBUSER}
      PAPERLESS_DBPASS: ${PAPERLESS_DBPASS}
      PAPERLESS_SECRET_KEY: ${PAPERLESS_SECRET_KEY}
      PAPERLESS_FILENAME_FORMAT: "// "
      PAPERLESS_URL: ${PAPERLESS_URL}
      PAPERLESS_ALLOWED_HOSTS: ${PAPERLESS_ALLOWED_HOSTS}
      PAPERLESS_ADMIN_USER: ${PAPERLESS_ADMIN_USER}
      PAPERLESS_ADMIN_PASSWORD: ${PAPERLESS_ADMIN_PASSWORD}
  paperless-postgres:
    container_name: paperless-postgres
    image: postgres:16.0-alpine
    restart: unless-stopped
    networks:
      - backend
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./paperless/postgres:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  paperless-redis:
    container_name: paperless-redis
    image: redis:7.2-alpine
    restart: unless-stopped
    networks:
      - backend
    security_opt:
      - no-new-privileges:true
    volumes:
      - ./paperless/redis:/data
    environment:
      REDIS_ARGS: "--save 60 10"
```
.env
```env
# User mapping
USERMAP_UID=1000
USERMAP_GID=100

# Paperless settings
PAPERLESS_TIME_ZONE=America/New_York
PAPERLESS_OCR_LANGUAGE=eng
PAPERLESS_ENABLE_UPDATE_CHECK=true
PAPERLESS_URL=https://paperless.86798788.xyz
PAPERLESS_ALLOWED_HOSTS=paperless.86798788.xyz,localhost,127.0.0.1,100.95.189.126,10.0.0.90

# Database settings
POSTGRES_USER=[USER]
POSTGRES_DB=paperless-db-omk
POSTGRES_PASSWORD=[REDACTED]
PAPERLESS_DBHOST=paperless-postgres
PAPERLESS_DBNAME=paperless-db-omk
PAPERLESS_DBUSER=[USER]
PAPERLESS_DBPASS=[REDACTED]

# Secret key (generate a long random one in production!)
PAPERLESS_SECRET_KEY=[REDACTED]

# Initial admin setup (remove after first login)
PAPERLESS_ADMIN_USER=[USER]
PAPERLESS_ADMIN_PASSWORD=[REDACTED]
```
MotionEye
```yaml
services:
  motioneye:
    image: ccrisan/motioneye:master-amd64 # official motionEye build
    container_name: motioneye
    restart: unless-stopped
    volumes:
      - ${CONFIGS}motioneye/config:/etc/motioneye
      - ${ARRPATH}motioneye/storage:/var/lib/motioneye
    ports:
      - "8765:8765" # Web UI
    devices:
      - /dev/video0:/dev/video0 # USB webcam
      - /dev/video1:/dev/video1
    labels:
      - "diun.enable=true"
    environment:
      - TZ=America/New_York # change to your timezone
```
FMHY
- Create a folder somewhere (absolute path)
- `git clone https://github.com/fmhy/edit.git`
- `cd edit`
- `docker compose up --build`
- To run:
  - `cd /srv/dev-disk-by-uuid-31275b2a-43bc-4c99-b8cb-6672862bf771/FMHY/edit`
  - `docker compose up --detach`
- To update:
  - `cd edit`
  - `docker compose down`
  - `git reset --hard origin/main`
  - `git pull`
  - `docker compose up --build -d`
Email Notifications
On OMV’s WebGUI:
System -> Notification -> Settings
- Enabled
- smtp.gmail.com
- 587
- STARTTLS
- USERNAME@gmail.com
- Authentication required
- USERNAME
- Password
- You need to generate an app password (just google how to do it). Google will generate one with spaces; delete the spaces.
- Add the recipients
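A quick way to strip the spaces from the generated app password (the password shown is obviously a made-up example):

```shell
# Google shows the app password in groups like "abcd efgh ijkl mnop";
# tr drops the spaces so the result can be pasted into OMV.
printf 'abcd efgh ijkl mnop' | tr -d ' '
# -> abcdefghijklmnop
```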
Glance (Dashboard)
```yaml
services:
  glance:
    container_name: glance
    image: glanceapp/glance
    restart: unless-stopped
    volumes:
      - ${CONFIGS}Glance:/app/config
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "diun.enable=true"
    ports:
      - 3030:8080
```
Glance was a bit tricky to get working. (Otavio from the future, I think it was tricky because my first server had some mistakes with the whole mergerfs configuration and config files being on the mergerfs pool which was a big issue)
The glance.yml file should go to an absolute path, according to the volumes path in the compose file:
/srv/dev-disk-by-uuid-f7b0c05a-e445-402d-a0c0-b8f2a4f9c677/configs/glance/glance.yml
The default port mapping is 8080:8080, but that conflicts with qBittorrent, so you have to reroute the port; I used 3030:8080.
All the configuration for the dashboard is made through the glance.yml file.
My glance.yml file: glance.yml
Website
Using Jekyll
- Install Jekyll: `apt install jekyll`
- Follow the instructions on the Jekyll website

After the site is built with `bundle exec jekyll build`, create and deploy a simple container (compose.yml) in the website folder:
```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "4000:80"
    volumes:
      - ./_site:/usr/share/nginx/html:ro
```
docker compose up -d
Website should be available at http://localhost:4000
If you make changes and run a Jekyll build again, they'll show up on the website; no need to restart the container.
I mainly use it to host icons for Glance.
DIUN
Notification for container updates
compose.yml
```yaml
name: diun
services:
  diun:
    image: crazymax/diun:latest
    command: serve
    volumes:
      - "./data:/data"
      - "./diun.yml:/diun.yml:ro"
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - "TZ=America/New_York"
      - "LOG_LEVEL=info"
      - "LOG_JSON=false"
    restart: always
```
diun.yml
```yaml
db:
  path: diun.db
watch:
  workers: 10
  schedule: "0 */6 * * *"
  jitter: 30s
  firstCheckNotif: false
  runOnStartup: true
defaults:
  watchRepo: false
  notifyOn:
    - new
    - update
  sortTags: reverse
notif:
  mail:
    host: smtp.gmail.com
    port: 587
    ssl: false
    insecureSkipVerify: false
    localname: localhost
    username: EMAILHERE@gmail.com
    password: APP_PW_HERE
    from: EMAILHERE@gmail.com
    to:
      - OTHEREMAIL@gmail.com
    templateTitle: " released"
    templateBody: |
      Docker tag which you subscribed to through provider has been released.
providers:
  docker:
    watchStopped: true
  # file:
  #   directory: ./imagesdir
```
Add to containers:
```yaml
labels:
  - "diun.enable=true"
```
Not in use
Pi-Hole
```yaml
services:
  pihole:
    hostname: pihole
    container_name: pihole
    image: pihole/pihole:latest
    network_mode: "host"
    environment:
      TZ: 'America/New_York'
      FTLCONF_webserver_api_password: [REDACTED]
    volumes:
      - '/srv/dev-disk-by-uuid-f7b0c05a-e445-402d-a0c0-b8f2a4f9c677/data/pihole/etc-pihole:/etc/pihole'
      - '/srv/dev-disk-by-uuid-f7b0c05a-e445-402d-a0c0-b8f2a4f9c677/data/pihole/etc-dnsmasq.d:/etc/dnsmasq.d'
    restart: unless-stopped
```
Integration with Tailscale
On the Tailscale admin page, go to the tab DNS
- Add a custom nameserver with the OMV server's tailscale address (something like `100.111.55.11`)
- Enable “Override DNS servers”
On the OMV machine, disable systemd-resolved:

```shell
sudo systemctl disable systemd-resolved --now
sudo systemctl stop systemd-resolved
```
Because Pi-Hole is now using the host network, you need to change OMV's own port.
- On the OMV WebGUI, go to System -> Workbench and switch port 80 to something else, like 81
- The WebGUI will drop, so reopen it using the new port; in my case, `10.0.0.90:81`
With all the containers up, you can access the Pi-Hole WebGUI with 10.0.0.66/admin
Nextcloud
First create a network for nextcloud and mariadb:
- SSH into the server
- Run `docker network create nextcloud-net`
MariaDB
```yaml
services:
  mariadb:
    image: linuxserver/mariadb
    container_name: mariadb
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=[REDACTED]
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - /srv/dev-disk-by-uuid-d657f449-9e05-43a7-840f-756115e9855c/data2/nextcloud/db:/config
    networks:
      - nextcloud-net
    restart: unless-stopped
networks:
  nextcloud-net:
    external: true
```
Old Nextcloud config
```yaml
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - /srv/dev-disk-by-uuid-d657f449-9e05-43a7-840f-756115e9855c/data2/nextcloud/config:/config
      - /srv/dev-disk-by-uuid-d657f449-9e05-43a7-840f-756115e9855c/data2/nextcloud/data:/data
    ports:
      - 8022:80
    networks:
      - nextcloud-net
    restart: unless-stopped
networks:
  nextcloud-net:
    external: true
```
Nextcloud Office
- First you need to create a Collabora docker container:
```yaml
services:
  collabora:
    image: collabora/code
    container_name: collabora
    environment:
      - domain=10\\.0\\.0\\.90:8022\\|[REDACTED]:8022
      - extra_params=--o:ssl.enable=false
      - username=[REDACTED]
      - password=[REDACTED]
    volumes:
      - /srv/dev-disk-by-uuid-d657f449-9e05-43a7-840f-756115e9855c/data2/collabora-config/coolwsd.xml:/etc/coolwsd/coolwsd.xml
    ports:
      - 9980:9980
    networks:
      - nextcloud-net
    restart: unless-stopped
networks:
  nextcloud-net:
    external: true
```
If using tailscale, you’ll have to edit the coolwsd.xml file. That’s why I defined a custom path to it under volumes:.
In the coolwsd.xml, edit the alias_groups section to look like this:
```xml
<alias_groups desc="default mode is 'first' it allows only the first host when groups are not defined. set mode to 'groups' and define group to allow multiple host and its aliases" mode="groups">
  <group>
    <host desc="[REDACTED]" allow="true">http://[REDACTED]:8022</host>
  </group>
  <group>
    <host desc="10.0.0.90" allow="true">http://10.0.0.90:8022</host>
  </group>
</alias_groups>
```
Make sure to change to mode="groups".
In each <group> section, add the addresses you need. In my case, 10.0.0.90:8022 for my local host, and [REDACTED]:8022 for my tailscale network.
Restart the collabora container.
- Next you install the app in the Nextcloud GUI.
- Go to Settings -> Administration -> Office
- Put your local Collabora address under “Use your own server”: `http://10.0.0.90:9980`
- Save. It will tell you if everything is correct.
Make sure the “Collaborative tags” app is enabled.
It should work now.
Nextcloud config.php
Located in /srv/dev-disk-by-uuid-d657f449-9e05-43a7-840f-756115e9855c/data2/nextcloud/config/www/nextcloud/config
```php
<?php
$CONFIG = array (
  'datadirectory' => '/data',
  'instanceid' => 'REDACTED',
  'passwordsalt' => 'REDACTED',
  'secret' => 'REDACTED',
  'trusted_domains' =>
  array (
    0 => '10.0.0.90:8022',
    1 => '[REDACTED]:8022',
  ),
  'dbtype' => 'mysql',
  'version' => '31.0.5.1',
  'overwrite.cli.url' => 'http://10.0.0.90:8022',
  'maintenance_window_start' => 1,
  'log_type' => 'file',
  'logfile' => 'nextcloud.log',
  'loglevel' => 3,
  'logdateformat' => 'F d, Y H:i:s',
  'dbname' => 'nextcloud',
  'dbhost' => 'mariadb',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'nextcloud',
  'dbpassword' => 'REDACTED',
  'installed' => true,
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'filelocking.enabled' => true,
  'memcache.locking' => '\\OC\\Memcache\\APCu',
  'upgrade.disable-web' => true,
  'richdocuments' =>
  array (
    'wopi_url' => 'http://10.0.0.90:9980',
    'wopi_allowlist' =>
    array (
      0 => '10.0.0.90:8022',
      1 => '[REDACTED]:8022',
    ),
  ),
  'maintenance' => false,
);
```
Cron jobs
Fix slowness
On host machine:
```shell
docker exec -it nextcloud bash
crontab -e
```

Append:

```
*/5 * * * * php /app/www/public/cron.php
```
To find where cron.php is: `find / -name cron.php 2>/dev/null`
Check that Nextcloud is using the cron job instead of AJAX:
- In the WebGUI, go to Administration Settings -> Basic Settings