How to Build a Personal Overpass Server on a Tiny Budget

Posted by Kai Johnson on 3/30/2023

The GNIS matching project I’ve been working on uses a lot of Overpass queries to find things in OSM. At some point during the project, I needed a faster, more reliable Overpass server than the public servers. So I built a local Overpass server as cheaply as I could. It’s working well. This is how you can build one for yourself.

Why Would I Build My Own Overpass Server?

If you’re using the Overpass API for software development, you’re going to be running a lot of queries. You could use a public Overpass instance, but it’s more polite and a lot more efficient to run one locally. Also, public Overpass servers have query limits that you may not like. And sometimes they go down or flake out, and then there’s nothing you can do but wait until the operators fix them. If you run your own server, your fate is in your own hands!

For most use cases, a cheap local Overpass server can be significantly faster than using one of the public Overpass servers. The setup described here is a lot smaller with a lot less computing power than those big public servers. But it doesn’t have the entire world hammering on it constantly. Also, Overpass queries can return huge amounts of data. The network latency and throughput are a lot better on your own local network segment than if you’re downloading results from halfway across the world.

I’d like to give a special thanks to Kumi Systems for hosting the public Overpass server that I abused until I set up my own server. They’re providing a great service for the OSM community!

Do I Really Want to Do This?

Running an Overpass server is not for the faint of heart. The software is really finicky and not easy to maintain. You need to have some good experience with Linux system administration and the will and patience to deal with things that don’t work the way they’re supposed to.

What’s in this guide?

There are four useful guides to setting up an Overpass server, and you should read all of them:

  1. The Overpass quick installation guide
  2. The Overpass complete installation guide
  3. The Overpass API Installation guide on the Wiki
  4. And by far the most excellent of the four, ZeLonewolf’s Overpass Installation Guide

These four guides describe how to set up the Overpass software. This blog entry describes how to set up the hardware on a very small budget. It also has tips that will make the other four guides easier to use.

Getting the Hardware

Overpass likes to use a fair amount of memory, a huge amount of disk space, and a fair amount of CPU time. We’re going to make some compromises to get a working Overpass server with reasonable performance on a tiny budget. Memory and storage are relatively cheap, so we’ll remove those bottlenecks and your system will end up being CPU bound.

Here are the specs you’re looking for:

  • A PC that can run Ubuntu
  • 16GB of RAM
  • A primary SSD big enough for Ubuntu and some scratch files; 256GB is plenty
  • An unused M.2 slot or PCIe X4 (or X8/X16) slot
  • A DVD-R drive (if that’s what you’re using for the Ubuntu installation)

To that, you’ll add:

  • A 1TB M.2 NVMe PCIe SSD
  • An M.2 to PCIe adapter (if needed)

As of early 2023, there are plenty of cheap refurbished Dell desktop computers for sale on Amazon in the U.S.: https://www.amazon.com/s?k=dell+sff. Start with a cheap Dell and you should be able to get all the hardware for under $200.

If you have options, look for a PC with the most RAM and fastest CPU that fits your budget. You’re going to be looking at computers with CPUs that are a few generations behind the latest processors. You don’t have to get the latest CPU, but try not to get one of the oldest ones. As I’m writing this, that means you’re looking at a 6th or 7th gen Core i5 or i7 processor.

You’ll also want some spare hardware around for the setup and backups:

  • A 4GB or larger USB flash drive or a blank DVD-R for the Ubuntu install
  • A 1TB or larger USB drive for backups
  • A monitor with an appropriate monitor cable that you can plug in for the initial setup
  • An Ethernet cable you can plug into a spare port on your hub or router

The cheap refurbished computers on Amazon often don’t have Wi-Fi, but this setup is better with wired Ethernet anyway. If you’re going to use Wi-Fi and the machine doesn’t have it built in, add a Linux-compatible USB Wi-Fi adapter to your shopping list.

About Network Quotas

The initial setup for Overpass is going to download a couple hundred gigabytes of data for the database. If you mess up, you might have to download the data twice. If you’re doing this on a home network connection, make sure you’re not going to get billed for going over your monthly quota.

After you have the server up and running, the update files are relatively small. So they’re not likely to push you over the limit.

Setting up the Hardware

Dell has nice owner’s manuals for their systems. Google the model name of your system and “owner’s manual” and download the PDF file to your daily-use computer for reference.

Plug in that cheap computer with the monitor, keyboard and mouse and boot it up. It likely has Windows 10 preinstalled and likely won’t ever run Windows 11.

Give the system a once over to make sure everything looks like it’s working normally, then download the Ubuntu installer. You can choose either Ubuntu Desktop if you’d like to have the GUI, or Ubuntu Server if you’re going to run headless and only log in via SSH. Pick whatever you prefer.

Ubuntu has very good installation instructions. Follow them, download the installer image, and burn it onto the DVD or USB flash drive. From there you can boot up the installer and install Ubuntu on your system. I chose to delete the Windows NTFS partition and replace it with a fresh ext4 partition, but you can make other decisions about how you want to manage partitions on your main SSD. Follow the Ubuntu instructions! They’re great!

Check out your new Ubuntu installation and make sure it looks good, including checking out the network connection. If you’re running headless, confirm that you can access the system using SSH.

Power down, and if you’re running headless, get rid of the keyboard, monitor, and mouse.

Crack open the case and install the 1TB M.2 NVMe PCIe SSD drive, using the PCIe X4 adapter if needed. This is where that owner’s manual you downloaded helps. Most hardware is pretty easy to work on, but sometimes it’s not obvious how to remove the system components to get at the slots on the motherboard. The owner’s manual will show you how to pop out all the parts.

Put the system back together and rack it where it’s going to live permanently.

Now you need to format and mount that new SSD drive. This is a pretty good reference for what you need to do: https://gist.github.com/keithmorris/b2aeec1ea947d4176a14c1c6a58bfc36

You can use a DOS/MBR partition table for the drive, but since you’re starting from scratch, you might choose a GPT partition table instead. This page describes how to set that up in fdisk: https://ostechnix.com/create-linux-disk-partitions-with-fdisk/

This SSD drive is going to hold your Overpass database, which is huge. So you just want one big partition with an ext4 filesystem.

Once you have that set up, decide where you want to keep Overpass in your file system. ZeLonewolf uses /opt/op, which is as good as anywhere, but you can choose a different location if you like. Create that directory and mount the NVMe SSD there. This is a pretty good reference for permanently mounting the NVMe SSD drive: https://help.ubuntu.com/community/Fstab
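Putting the last few steps together, here’s a sketch of the whole partition-format-mount dance. The device name /dev/nvme0n1 and the placeholder UUID are assumptions; check lsblk and blkid on your own system before running anything, because the parted and mkfs commands are destructive.

```shell
# CAUTION: this wipes the target drive. Assumes the new SSD appears as
# /dev/nvme0n1 -- confirm with lsblk before running anything.
sudo parted /dev/nvme0n1 --script mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/nvme0n1p1

# Mount it at /opt/op now and on every boot. Use the UUID that blkid
# reports in place of the placeholder below.
sudo mkdir -p /opt/op
sudo blkid /dev/nvme0n1p1
echo 'UUID=<uuid-from-blkid> /opt/op ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount -a
```

If the mount -a succeeds without errors, the drive will come back at /opt/op on every reboot.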

Setting up the Software

There are already four good guides to setting up the Overpass software. I’m not going to reproduce them, but I’ll add some commentary.

First, DO NOT BLINDLY COPY AND PASTE COMMANDS FROM THE GUIDES! Take a close look at what each step is doing and make sure the parameters match your setup and your use cases.

Second, NONE OF THE GUIDES (INCLUDING THIS ONE) ARE PERFECT! Read all of them thoroughly to get the best understanding of how to install and manage the Overpass software.

The Overpass quick installation guide - This is really a cheat sheet for someone who already knows how to run Overpass. It cuts a lot of corners and leaves out a lot of things you’re already supposed to know. I wouldn’t suggest trying to follow this guide literally.

The Overpass complete installation guide - This is an expanded version of the “quick” installation guide, but it’s also a cheat sheet for someone who already knows how to run Overpass. It still cuts some corners and leaves out things you’re already supposed to know. It’s probably not enough for a first-time user.

The Overpass API Installation document on the Wiki - This is a reference guide that fills in many of the blanks in the “quick” and “complete” installation guides. It’s written more to cover specific cases, so it’s not always linear. But it is a good reference.

ZeLonewolf’s Overpass Installation Guide - This guide covers everything you need to get Overpass up and running, from start to finish, with some good explanation. This guide is set up for one particular configuration, which may or may not be exactly what you want.

ZeLonewolf’s guide is really the only one that’s usable start to finish, but for this configuration there are some changes we want to consider. And there are some places where you might want things to be a little different for your use case or your personal preferences. Going step by step through ZeLonewolf’s sections:

Configure Overpass User & Required Dependencies

ZeLonewolf gets this right where the other guides are missing some important information. Specifically, you must have the liblz4-dev package if your Overpass server is going to index areas. None of the other guides will tell you that.

I like to give the Overpass user a standard home directory in /home and keep the source code and build scripts there, but deploy the software builds to a directory in /opt. ZeLonewolf puts everything in /opt, which is fine. But I find that having a separate home directory keeps things cleaner.

I also put the Overpass user in nogroup, didn’t assign sudo privileges, and didn’t set a login password. This makes the Overpass user account somewhat more restricted, just in case any of the Overpass software components gets compromised.

We already made the /opt/op directory as the mount point for the NVMe SSD, so we can skip that step. My user and dependency setup looks like this:

```shell
sudo su
# mkdir -p /opt/op      (already done when we mounted the SSD)
# groupadd op
# usermod -a -G op user
useradd -d /home/op -g nogroup -m -s /bin/bash op   # no password set, so password logins stay disabled
chown -R op:nogroup /opt/op
apt-get update
apt-get install g++ make expat libexpat1-dev zlib1g-dev apache2 liblz4-dev
a2enmod cgid
a2enmod ext_filter
exit
```

Of course, you can use ZeLonewolf’s setup as-is or make your own modifications.

Web Server Configuration

ZeLonewolf’s web server setup is great. Rather than editing the 000-default.conf file in /etc/apache2/sites-available/, I prefer to put an overpass.conf file in /etc/apache2/sites-enabled/ and leave the default example alone.
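For reference, here’s roughly what such an overpass.conf can look like, assuming the /opt/op install location used throughout this guide. The ScriptAlias layout and gzip filter follow the standard Overpass Apache configuration; treat this as a sketch to adapt, not a drop-in file.

```apache
# /etc/apache2/sites-enabled/overpass.conf -- sketch only, adjust paths
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /opt/op/html

    # Long personal queries need a generous timeout
    TimeOut 600

    # Compress responses on the way out (requires mod_ext_filter)
    ExtFilterDefine gzip mode=output cmd=/bin/gzip

    ScriptAlias /api/ /opt/op/cgi-bin/
    <Directory "/opt/op/cgi-bin/">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```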

Since this is your own personal Overpass instance, you can run really long queries. Setting TimeOut to 600 is plenty because it’s hard to keep the rest of the software stack happy for longer than that.

ZeLonewolf configures the full path for the log files, but it probably makes sense to use the ${APACHE_LOG_DIR} prefix.

Compile and Install Overpass

Don’t copy the URL for the Overpass tarball from ZeLonewolf’s script. Either use https://dev.overpass-api.de/releases/osm-3s_latest.tar.gz or browse to the https://dev.overpass-api.de/releases/ directory and pick the release you want.
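As a sketch, the download-and-build steps follow the standard autotools flow, run as the op user. The wildcard cd is an assumption; the extracted directory name depends on which release you picked.

```shell
# Build Overpass from source and install it under /opt/op
mkdir -p ~/src && cd ~/src
wget https://dev.overpass-api.de/releases/osm-3s_latest.tar.gz
tar -xzf osm-3s_latest.tar.gz
cd osm-3s_v*                          # directory name varies by release
./configure CXXFLAGS="-O2" --prefix=/opt/op
make install                          # this takes a while on a small CPU
```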

Download the Planet

When ZeLonewolf says this will take a long time, he means it! It took me about 7 hours to download a clone of the Overpass database on my 1 Gbps connection. The first time I tried the download was on Patch Tuesday, and the Windows machine I had SSH’d in from rebooted halfway through. That killed my shell on the Overpass server and aborted the download. So I had to start over from scratch. Don’t make that mistake. Use nohup and run the download_clone.sh command in the background.

Also, my use case didn’t require attic data. You can adjust the --meta option as you like for your use case.

```shell
cd /opt/op
nohup bin/download_clone.sh --db-dir=db --source=http://dev.overpass-api.de/api_drolbr/ --meta=yes >/dev/null &
```

Backup

ZeLonewolf casually says, “Now would be an excellent time to backup your downloaded database folder.” That’s not a suggestion. You don’t want to download the database a third time and bump up against your network quota. Plug in and mount that spare USB drive and make a backup of the database NOW.

ZeLonewolf uses cp for the backup. I like to use rsync. It doesn’t matter so much this time, but it will be better when you want to make an incremental update to your backup later.

```shell
rsync -rtv /opt/op/db /media/op/usb-drive
```

Modify that with the right paths for your setup.

Configure Launch Scripts

ZeLonewolf is right that the scripts that come with Overpass are not ideal. He has some good scripts that work for his use case, but I had to make some significant changes for this low-powered server. Here’s what’s going on in my modified launch.sh script.

First, Overpass is really finicky about paths and working directories. You always want to start overpass from the /opt/op directory, and you have to have all the directory aliases in this script set up right.

Second, whenever Overpass goes down (or is shut down), it leaves a bunch of semaphore files around and it will refuse to start up until these files are cleaned up. So, before you start Overpass, you always have to delete these files.
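Concretely, the cleanup boils down to removing the dispatcher socket files, one copy in /dev/shm and one in the database directory. These are the default file names used throughout this guide’s scripts, and -f makes the commands safe to run even when the files are already gone.

```shell
# Remove stale dispatcher sockets left behind by an unclean shutdown
rm -fv /dev/shm/osm3s_osm_base /dev/shm/osm3s_areas
rm -fv /opt/op/db/osm3s_osm_base /opt/op/db/osm3s_areas
```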

Third, there are several separate processes that make up the Overpass server:

  • The osm-base dispatcher, which is the core process for the server
  • The areas dispatcher, which is used for area updates
  • The fetch_osc.sh script which polls for and downloads changeset data
  • The apply_osc_to_db.sh script which reads the changeset data and imports it into the database

Then there’s ZeLonewolf’s area_updater.sh script that runs in a loop, continuously updating the index of areas. That’s a replacement for the rules_loop.sh script that comes with Overpass and basically does the same thing.

The first change you have to make to the launch.sh script is to put a short sleep (a few seconds) after the startup of the osm-base dispatcher. Apparently there’s a race condition between the startup of this dispatcher and the rest of the components, because if the other components get running before the dispatcher is ready, they get stuck and don’t do anything. That probably doesn’t show up on the high-powered public Overpass servers, but we’re running on pennies here.

To have a healthy Overpass server, you need the fetch_osc.sh script getting regular changeset updates and the apply_osc_to_db.sh script importing them promptly. ZeLonewolf describes this in his guide, but if the updates aren’t keeping up with real time, things get bad fast.

On this low-powered server, the process kicked off by the area_updater.sh script is a problem for that. The area indexing is both CPU and I/O intensive and the script runs it continuously. That can get in the way of the regular changeset updates.

There are two changes you can make to keep this from being a problem. First, the ionice and nice parameters in ZeLonewolf’s script give the area indexing more priority than the changeset updates. We want to swap that around.

```shell
#!/usr/bin/env bash
# updated to work with Overpass v0.7.61.4

EXEC_DIR="/opt/op/bin"
DB_DIR="/opt/op/db"
DIFF_DIR="/opt/op/diff"
LOG_DIR="/opt/op/log"

BASE_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --osm-base.*/\2/ p')
if [ $BASE_DISPATCHER_PID ]
then
    echo "WARNING: dispatcher --osm-base is already running" $BASE_DISPATCHER_PID
else
    rm -fv /dev/shm/osm3s_osm_base
    rm -fv $DB_DIR/osm3s_osm_base
    ionice -c 2 -n 7 nice -n 17 nohup "$EXEC_DIR/dispatcher" --osm-base --meta --space=10737418240 --db-dir="$DB_DIR" >> "$LOG_DIR/osm_base.out" &
    echo "INFO: started dispatcher --osm-base"
    sleep 3
fi

AREA_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --areas.*/\2/ p')
if [ $AREA_DISPATCHER_PID ]
then
    echo "WARNING: dispatcher --areas is already running" $AREA_DISPATCHER_PID
else
    rm -fv /dev/shm/osm3s_areas
    rm -fv $DB_DIR/osm3s_areas
    ionice -c 3 nice -n 19 nohup "$EXEC_DIR/dispatcher" --areas --allow-duplicate-queries=yes --db-dir="$DB_DIR" >> "$LOG_DIR/areas.out" &
    echo "INFO: started dispatcher --areas"
fi

APPLY_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]pply_osc_to_db.sh.*/\2/ p')
if [ $APPLY_PID ]
then
    echo "WARNING: apply_osc_to_db.sh is already running" $APPLY_PID
else
    ionice -c 2 -n 7 nice -n 17 nohup "$EXEC_DIR/apply_osc_to_db.sh" "$DIFF_DIR" `cat "$DB_DIR/replicate_id"` --meta=yes >> "$LOG_DIR/apply_osc_to_db.out" &
    echo "INFO: started apply_osc_to_db.sh"
fi

FETCH_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[f]etch_osc.sh.*/\2/ p')
if [ $FETCH_PID ]
then
    echo "WARNING: fetch_osc.sh is already running" $FETCH_PID
else
    ionice -c 3 nice -n 19 nohup "$EXEC_DIR/fetch_osc.sh" `cat "$DB_DIR/replicate_id"` "https://planet.openstreetmap.org/replication/minute" "$DIFF_DIR" >> "$LOG_DIR/fetch_osc.out" &
    echo "INFO: started fetch_osc.sh"
fi

echo "INFO: verifying startup"
sleep 3

BASE_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --osm-base.*/\2/ p')
if [ $BASE_DISPATCHER_PID ]
then
    echo "INFO: dispatcher --osm-base is running" $BASE_DISPATCHER_PID
else
    echo "ERROR: dispatcher --osm-base is not running"
    echo "INFO: shutting down all components"
    $EXEC_DIR/shutdown.sh
    exit
fi

AREA_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --areas.*/\2/ p')
if [ $AREA_DISPATCHER_PID ]
then
    echo "INFO: dispatcher --areas is running" $AREA_DISPATCHER_PID
else
    echo "ERROR: dispatcher --areas is not running"
    echo "INFO: shutting down all components"
    $EXEC_DIR/shutdown.sh
    exit
fi

APPLY_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]pply_osc_to_db.sh.*/\2/ p')
if [ $APPLY_PID ]
then
    echo "INFO: apply_osc_to_db.sh is running" $APPLY_PID
else
    echo "ERROR: apply_osc_to_db.sh is not running"
    echo "INFO: shutting down all components"
    $EXEC_DIR/shutdown.sh
    exit
fi

FETCH_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[f]etch_osc.sh.*/\2/ p')
if [ $FETCH_PID ]
then
    echo "INFO: fetch_osc.sh is running" $FETCH_PID
else
    echo "ERROR: fetch_osc.sh is not running"
    echo "INFO: shutting down all components"
    $EXEC_DIR/shutdown.sh
    exit
fi
```

The osm-base dispatcher and apply_osc_to_db.sh script run at ionice class 2 for best effort, and the areas dispatcher runs at ionice class 3 so it only gets I/O scheduling when the system is idle. The nice values for CPU scheduling line up with this too.

We’re going to take the area_updater.sh script out of launch.sh entirely. Replace it with a script that doesn’t loop, and install it as a cron job. That way we can update the area index less frequently, rather than running it non-stop.

```shell
#!/usr/bin/env bash

DB_DIR="/opt/op/db"
EXEC_DIR="/opt/op/bin"
LOG_DIR="/opt/op/log"

pushd "$EXEC_DIR"

echo "`date '+%F %T'`: update started" >> "$LOG_DIR/area_update.out"
ionice -c 3 nice -n 19 "$EXEC_DIR/osm3s_query" --progress --rules < "$DB_DIR/rules/areas.osm3s" >> "$LOG_DIR/area_update.out" 2>&1
echo "`date '+%F %T'`: update finished" >> "$LOG_DIR/area_update.out"

popd
```

This also runs the osm3s_query process for area updates at ionice class 3 with low CPU priority. The osm3s_query process also seems to grumble to stderr, so I’m forwarding that to the log as well.

On a small server like this, re-indexing all the areas takes 2-3 hours. I’m running the area indexing once a day. You could run it more frequently, but I wouldn’t run it more than every four hours.

If you’d prefer to keep the area_updater.sh script and not use a cron job, edit the script to change sleep 3 to sleep 1h.

Log File Management

ZeLonewolf tried to use symbolic links to move all the Overpass log files to a single directory, but logrotate really doesn’t like that. Instead, we’ll just leave the logs where they are and rotate them in place. Here’s what the modified configuration in /etc/logrotate.d/overpass looks like.

```
/opt/op/diff/*.log /opt/op/state/*.log /opt/op/db/*.log /opt/op/log/*.out {
        daily
        missingok
        copytruncate
        rotate 3
        compress
        delaycompress
        notifempty
        create 644 op nogroup
}
```

Server Automation

ZeLonewolf has a crontab entry that deletes old changeset files. That’s really important, but it’s possible that the command could delete the replicate_id and state.txt files that are crucial to keeping Overpass running. Let’s keep those files safe. Here’s what the modified crontab entries look like.

```
0 1 * * * find /opt/op/diff -mtime +2 -type f -regex ".*[0-9]+.*" -delete
0 18 * * * /opt/op/bin/single_pass_area_updater.sh
```

This is also where I run my area updates.

I do not recommend using @reboot in the crontab entry to launch Overpass at startup.

An uncontrolled shutdown of the Overpass processes can corrupt the database files. Sometimes this can’t be avoided, e.g. when there’s a power outage. If the database is corrupted, you need to restore a backup or start over with a fresh clone.

For regular administration, I use this shutdown.sh script to bring down Overpass safely:

```shell
#!/usr/bin/env bash
# updated to work with Overpass v0.7.61.4

EXEC_DIR="/opt/op/bin"
DB_DIR="/opt/op/db"
DIFF_DIR="/opt/op/diff"
LOG_DIR="/opt/op/log"

FETCH_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[f]etch_osc.sh.*/\2/ p')
if [ $FETCH_PID ]
then
    kill $FETCH_PID
    echo "INFO: killed fetch_osc.sh" $FETCH_PID
    sleep 1
else
    echo "WARNING: fetch_osc.sh is not running"
fi

FETCH_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[f]etch_osc.sh.*/\2/ p')
if [ $FETCH_PID ]
then
    echo "ERROR: unable to kill fetch_osc.sh - other processes may still be running"
    exit
fi

APPLY_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]pply_osc_to_db.sh.*/\2/ p')
if [ $APPLY_PID ]
then
    for (( i=0; i<100; i++ ))
    do
        APPLY_IDLE_COUNT=$(tail -n 5 $DB_DIR/apply_osc_to_db.log | grep -c from)
        if [ $APPLY_IDLE_COUNT -eq 5 ]
        then
            kill $APPLY_PID
            echo "INFO: killed apply_osc_to_db.sh" $APPLY_PID
            break
        else
            echo "INFO: waiting for apply_osc_to_db.sh to finish updates"
            sleep 6
        fi
    done
else
    echo "WARNING: apply_osc_to_db.sh is not running"
fi

for (( i=0; i<100; i++ ))
do
    APPLY_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]pply_osc_to_db.sh.*/\2/ p')
    if [ $APPLY_PID ]
    then
        echo "INFO: waiting for apply_osc_to_db.sh to die"
        sleep 3
    else
        break
    fi
done

APPLY_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]pply_osc_to_db.sh.*/\2/ p')
if [ $APPLY_PID ]
then
    echo "ERROR: unable to kill apply_osc_to_db.sh - other processes may still be running"
    exit
fi

AREA_SCRIPT_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]rea_updater.sh.*/\2/ p')
if [ $AREA_SCRIPT_PID ]
then
    kill $AREA_SCRIPT_PID
    echo "INFO: killed area_updater.sh" $AREA_SCRIPT_PID
    sleep 1
else
    echo "WARNING: area_updater.sh is not running"
fi

AREA_SCRIPT_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[a]rea_updater.sh.*/\2/ p')
if [ $AREA_SCRIPT_PID ]
then
    echo "ERROR: unable to kill area_updater.sh - other processes may still be running"
    exit
fi

AREA_UPDATER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[o]sm3s_query --progress --rules.*/\2/ p')
while [ $AREA_UPDATER_PID ]
do
    echo "INFO: waiting for area updater to finish - this may take a L-O-N-G time"
    sleep 15
    AREA_UPDATER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[o]sm3s_query --progress --rules.*/\2/ p')
done

AREA_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --areas.*/\2/ p')
if [ $AREA_DISPATCHER_PID ]
then
    $EXEC_DIR/dispatcher --areas --terminate
    echo "INFO: terminated dispatcher --areas" $AREA_DISPATCHER_PID
    sleep 1
else
    echo "WARNING: dispatcher --areas is not running"
fi

AREA_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --areas.*/\2/ p')
if [ $AREA_DISPATCHER_PID ]
then
    echo "ERROR: unable to terminate dispatcher --areas - other processes may still be running"
    exit
fi

BASE_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --osm-base.*/\2/ p')
if [ $BASE_DISPATCHER_PID ]
then
    $EXEC_DIR/dispatcher --osm-base --terminate
    echo "INFO: terminated dispatcher --osm-base" $BASE_DISPATCHER_PID
    sleep 1
else
    echo "WARNING: dispatcher --osm-base is not running"
fi

BASE_DISPATCHER_PID=$(ps -ef | sed -n 's/\([[:alpha:]]\+\) \+\([[:digit:]]\+\).*[d]ispatcher --osm-base.*/\2/ p')
if [ $BASE_DISPATCHER_PID ]
then
    echo "ERROR: unable to terminate dispatcher --osm-base - other processes may still be running"
    exit
fi
```

Note that if the area update query process is running, this script may wait an hour or more for that process to finish a full pass over the area creation rules.
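Once everything is up, a quick smoke test from another machine on your LAN confirms the whole stack is answering. This assumes the standard /api/ ScriptAlias from the web server setup; “overpass.local” is a placeholder for your server’s actual IP or hostname.

```shell
# Fetch a handful of cafe nodes near Bonn as JSON. Any small, fast result
# means Apache, the CGI interpreter, and the osm-base dispatcher are healthy.
curl -s "http://overpass.local/api/interpreter" \
  --data-urlencode 'data=[out:json][timeout:60];node["amenity"="cafe"](50.73,7.09,50.75,7.11);out 5;'
```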

Performance Verification

This section is crucial for keeping tabs on the health of your Overpass server. If something starts to go wrong, it will show up in the fetch_osc.out, apply_osc_to_db.log, and area_update.out files. Check them regularly to make sure everything looks normal.
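A quick way to spot-check those files, using the log locations from the scripts above (paths assume the /opt/op layout used throughout this guide):

```shell
# fetch_osc.out should show a new minutely diff arriving about once a minute
tail -n 3 /opt/op/log/fetch_osc.out
# apply_osc_to_db.log timestamps should track close to real time
tail -n 3 /opt/op/db/apply_osc_to_db.log
# area_update.out should show area passes starting and finishing
tail -n 3 /opt/op/log/area_update.out
```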

What if something goes wrong?

If you have Overpass running and it dies, you might have to start over with a clean database. The easiest way to do this is to restore the db directory from a backup. Make sure you update your backups frequently!

If you’re having trouble getting Overpass up and running in the first place, go back to those four guides and look for clues. US users can also try the #overpass channel on the OpenStreetMap US Slack.

May your queries be fast and your results accurate!