A Quick and Not-So-Easy Nextcloud Setup
Nextcloud is pretty straightforward to deploy, especially using the official Docker image, but there are a few things you are likely to run into that are spread all over the docs and forums. This is a relatively short guide on deploying a working Nextcloud Docker container.
The goal in my case is to run a reasonably modular, upgradeable and usable container, proxied through the host server’s NGINX and with SSL/TLS via Let’s Encrypt.
Prerequisites
- An up-to-date GNU/Linux distro,
- NGINX,
- Certbot,
- Docker and docker-compose,
- A few GiBs of disk space, and some more for your files.
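As a quick sanity check before starting, you can verify everything is on your PATH (the tool names here are assumptions; on newer installs, `docker compose` may replace `docker-compose`):

```shell
# Report which of the required tools are installed.
for tool in nginx certbot docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```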
The most convenient way to configure the container is via `docker-compose`. Ideally you would pass the passwords via Docker secrets, which requires running a swarm, or via a HashiCorp vault, but that would overcomplicate this tutorial and will be discussed in a future post. For now, we'll pass them via environment files.

To begin, create a directory somewhere and add your `docker-compose.yaml` file. It should look like the one below.

docker-compose.yaml
```yaml
version: '3'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - ${MYSQLDIR}:/var/lib/mysql
    env_file:
      - DB_ENV_FILE # CHANGE

  app:
    image: nextcloud
    restart: always
    ports:
      - PORT:80 # CHANGE 'PORT' to a proxy port number, e.g. 9999
    links:
      - db
    volumes:
      - ${WEBDIR}:/var/www/html
      - ${DATADIR}:/var/www/html/data
    env_file:
      - APP_ENV_FILE # CHANGE
```
As stated above, the containers read your passwords from environment variables, which come from the files listed under `env_file`. Here's an example of what those files could look like.

DB_ENV_FILE
```
MYSQL_ROOT_PASSWORD=DB_ROOT_PASS # CHANGE
MYSQL_PASSWORD=DB_USER_PASS      # CHANGE
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud             # Optionally change
MYSQLDIR=/mysql/dir              # CHANGE
TZ=Europe/Berlin                 # Recommended, change to your timezone
```
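Don't reuse the placeholder passwords. One quick way to generate a random one is the following sketch, using only coreutils (`openssl rand -base64 32` works too, if you have OpenSSL installed):

```shell
# Print a random 32-character alphanumeric password
tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32; echo
```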
APP_ENV_FILE
```
MYSQL_HOST=db
OVERWRITEHOST=nextcloud.example.com # CHANGE
OVERWRITEPROTOCOL=https
MYSQL_PASSWORD=DB_USER_PASS         # CHANGE
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud                # Optionally change, see DB_ENV_FILE
WEBDIR=/nextcloud/web/dir           # CHANGE
DATADIR=/nextcloud/data/dir         # CHANGE
VIRTUAL_HOST=nextcloud.example.com  # CHANGE
TZ=Europe/Berlin                    # Timezone, optional but recommended
```
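Since these files hold plaintext passwords, it's worth restricting them to your user (assuming they sit next to your `docker-compose.yaml`):

```shell
# Make the env files readable/writable by the owner only
chmod 600 DB_ENV_FILE APP_ENV_FILE
ls -l DB_ENV_FILE APP_ENV_FILE
```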
NGINX
Nextcloud is proxied through NGINX, which requires a few config modifications. Most of it is copied directly from their documentation, but the headers are slightly modified, as I include `proxy_intercept_errors` for custom error pages.

nginx.conf
```nginx
server {
    server_name nextcloud.example.com; # CHANGE
    access_log /var/log/nginx/nextcloud_access.log; # Optionally change

    location / {
        auth_basic off; # CHANGE
        # PROXY_URL: typically 127.0.0.1
        # PORT: see `docker-compose.yaml` (e.g. 9999)
        proxy_pass http://PROXY_URL:PORT/;

        # Proxy headers
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        ## Proxy timeouts, buffering
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
        proxy_max_temp_file_size 0;
        proxy_buffering off;
        proxy_request_buffering off;

        ## OPTIONAL: Custom error pages
        proxy_intercept_errors on;

        ## Headers
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers *;
        add_header Access-Control-Expose-Headers "content-range, content-length, accept-ranges";
        add_header Access-Control-Allow-Methods "GET";
        add_header X-XSS-Protection "1; mode=block";
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

        client_max_body_size 32M; # You MAY need to make this bigger. Depends on use-case.
    }

    ## Needed for CalDAV/CardDAV sync
    location /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
    }
    location /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
    }

    ## OPTIONAL. See "Custom error pages"
    error_page 500 501 502 503 504 /errorpage.html;
    location /errorpage.html {
        root /some/location/here; # CHANGE
    }
}
```
What's next? Your Let's Encrypt certificate. Not much to explain here:

```shell
certbot --nginx -d nextcloud.example.com # CHANGE
```
and ensure your NGINX SSL config uses the recommended (or better) settings. Certbot should have added a cron job to renew the certificate automatically.
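If you want to double-check that issuance worked, you can inspect the certificate's expiry date directly (the path below assumes certbot's default layout):

```shell
# Print the expiry date of the live certificate
openssl x509 -noout -enddate \
  -in /etc/letsencrypt/live/nextcloud.example.com/fullchain.pem
```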
Logging
You should enable logrotate for your `access_log` file, as well as for `/var/www/html/data/nextcloud.log`. Keep in mind that Nextcloud has built-in log size limiting, which you can set as follows in `/var/www/html/config/config.php`. Note: this will overwrite the log file, and not 'rotate' it in the traditional sense.

```php
...
// 10 MiB limit
'log_rotate_size' => 10 * 1024 * 1024,
...
```
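For the NGINX access log, a minimal logrotate sketch could look like the following (drop it into `/etc/logrotate.d/`; the retention values are just assumptions, tune them to taste):

```
/var/log/nginx/nextcloud_access.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```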
Caching
This could significantly improve performance. I will cover this at a later date, so refer to the documentation for now.
Useful Commands
```shell
## Create and run your container from docker-compose.yaml
## You can also update this way
docker-compose pull
docker-compose up -d

## Update docker app
docker-compose exec app bash -c "apt-get update && apt-get -y upgrade"

## Install imagemagick, required for SVG support
docker-compose exec app bash -c "apt-get install -y imagemagick"

## Run an occ command. Remember this for later!
docker-compose exec --user www-data app php occ <command>
```
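Not in the list above, but worth having: a backup. A minimal sketch might look like this; the service name, database credentials, and paths are assumptions carried over from the compose file above, so adjust them to your setup.

```shell
# Dump the database from the db container (prompts for DB_USER_PASS)
docker-compose exec db mysqldump -u nextcloud -p nextcloud > nextcloud-db.sql

# Archive the data directory with a date-stamped name
tar czf "nextcloud-data-$(date +%F).tar.gz" /nextcloud/data/dir
```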
Common Issues
Failing Regular Cron Jobs
You can replace the built-in AJAX/Webcron with either a host cron job or a systemd timer. For cron, append the following to the crontab of a Docker-privileged user via `crontab -e`:

```shell
## run every 5 minutes
*/5 * * * * docker exec --user www-data nextcloud_app_1 php -f cron.php
```
or if you prefer systemd timers, add the following files to a systemd service path:
nextcloudcron.service
```ini
[Unit]
Description=Nextcloud cron
Requires=docker.service

[Service]
# CHANGE: a user that can run docker commands on the server.
User=DOCKER_PRIVILEGED_USER
ExecStart=/usr/bin/docker exec --user www-data nextcloud_app_1 php -f cron.php
KillMode=process
```
nextcloudcron.timer
```ini
[Unit]
Description=Run Nextcloud cron every 5 minutes

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloudcron.service

[Install]
WantedBy=timers.target
```
and enable the service as follows in your shell:
```shell
# either with sudo or as root
systemctl daemon-reload
systemctl enable --now nextcloudcron.timer
```

Note that only the timer needs to be enabled; the service unit has no `[Install]` section and is triggered by the timer.
Finally, be sure to switch the background jobs setting from AJAX to Cron under `nextcloud.example.com/settings/admin`.

Current Issues
- Nextcloud updates sometimes need a few days before they are available via docker, since the workflow they use is very strange.
- Standard server-side encryption has to be manually enabled, and it does not encrypt file names or folder structures, only the file data itself.
- Confirm that you have enabled it with `occ encryption:status`.
- Once it is enabled, you have to run `occ encryption:encrypt-all` to encrypt files uploaded before it was enabled, as per the docs:
Encrypt all data files for all users. For performance reasons, when you enable encryption on a Nextcloud server only new and changed files are encrypted. This command gives you the option to encrypt all files.
- The server-side encryption implementation is somewhat questionable:
- Keys are stored on the server.
- If user keys are enabled, keys are encrypted with users’ passwords and the private keys are stored on the server. Given most Nextcloud users run apps which regularly contact the server, this increases the attack surface.
- The docs recommend storing data on third-party or external storage with eCryptFS or LUKS, which adds complexity and cost, and requires trusting any third parties involved (distrust of which is the main reason people switch to Nextcloud).
- Some data, including calendars and tasks, is still unencrypted anyway.
- File size increases by ~35%.
- End-to-end encryption severely limits usability:
- Many Nextcloud Apps do not support E2E.
- It will be impossible to use certain features such as CalDAV sync, sharing, etc.
Transactional File Locking
So far, the only issue I have had (disregarding the delayed 23.0.4 update) was an inaccessible file due to transactional locking right after upgrading to version 24. You usually do not need to run `occ maintenance:repair`, since this is done automatically after an update, but feel free to try.

If you are running Redis, flushing might work, as per this forum post:
```shell
# Log in to redis via its socket
redis-cli -s /var/run/redis/redis.sock
# Authenticate, then flush
$> auth YOUR_REDIS_PASSWORD
$> flushall
```
If you do not run Redis, or the above doesn't work, try to rescan first by running the following `occ` commands:

```shell
occ files:scan --all
occ files:repair-tree
```
Some exceptions may be printed during or after the process. Reading through should confirm that locking is the issue. If not, then the suggestions below likely won’t help you.
If the suggestions above don't work, you might need to delete the lock entries in MariaDB. Log in to MariaDB via:

```shell
# CHANGE MYSQL_USER to your mysql database user
docker-compose exec db mariadb -u MYSQL_USER -p
```
and run the following commands in its interpreter. The broken entries should have a `lock` value of -1. Keep in mind: this suggestion comes with no warranty. I am not responsible if your kittens die or babies explode, so back up your files.

Nuke from orbit
```sql
-- select your nextcloud database
use nextcloud;

-- search for broken locks
select * from oc_file_locks where oc_file_locks.lock <> 0;

-- Delete ALL broken file locks. These files may not be accessible anymore.
-- (`lock` is a reserved word, hence the backticks.)
delete from oc_file_locks where `lock` = -1;

-- Alternatively, delete the files by ID one by one:
delete from oc_file_locks where id = KNOWN_ID_HERE;
```
The issue SHOULD be solved now, and the locked files can be deleted and/or reuploaded if you need to.
Anything Else
I’ll write a guide on how to enable redis and either docker secrets or vaults post-installation.
Javascript concurrency and linting
Papaparse, Promise, and JSHint
Few people will have to deal with concurrency issues with JS. As of two days ago, I became one of those people.
A project I am working on at the moment is a Geocharts implementation. To spare you the details: we have to fetch some data from a remote server, parse it (using Papaparse), and throw it into the Charts array.
Work with Papaparse long enough and you will learn that it does not play well with asynchronous code by default, and that getting async/await to work with it is a pain in the backside. After a few hours of work, I was able to solve the issue with a JS Promise, with the help of our good friend Stack Overflow.
My solution looks like this:
```javascript
urls = [...]

Promise.all(
  urls.map(url => new Promise((resolve, reject) =>
    Papa.parse(url, {
      download: true,
      complete: resolve,
      error: reject
    })
  ))
).then(function (results) {
  // results is an array with an object for every URL.
  ...
  google.charts.setOnLoadCallback(drawMap);
})
```
You then continue, as you would, for your Geocharts code.
In the process of working on this project, I've started using a JavaScript linter called JSHint, integrated with VSCode. This comes in useful as you begin to define what you consider good and bad coding practices, since it helps you keep your code consistent. What you define in your `.jshintrc` is up to you, but the default example is a decent starter. A few modifications I made were:

```javascript
{
  ...
  "moz"       : true,  // Allow Mozilla-specific syntax
  "esversion" : 6,     // ECMAScript version 6
  "lastsemic" : false, // Require last semicolon in {} statements
  "eqnull"    : false, // Do not allow == null
  "strict"    : true,  // Require all functions to run in ES5 strict mode
  ...
}
```
Happy hacking.