
MongoDB data loss avoided courtesy of AWS EBS & Snapshots

Published by tuxninja

Cross Region MongoDB Across A Slow Network (Napster) Bad, AWS Snapshots (Metallica) Good!

I recently found myself in a bit of a pickle. My team and I had deployed a 3-node MongoDB cluster with two nodes in us-east-1 and one node in us-west-2, to maximize availability while minimizing cost. Ultimately, there were two problems with this approach. First, for reasons mostly outside of our control, the rest of our application stack above the database was deployed in us-east-1, drastically reducing any availability benefit the tertiary node in us-west-2 was buying us. Second, though we were not aware of it at the time we made this choice, our cluster/replication traffic was going across a VPN with very limited bandwidth that frequently suffered network partitions due to network maintenance and a lack of redundancy. Our MongoDB cluster failed over frequently after losing communication with its members, and when it did, it had a difficult time recovering because replication couldn't catch up across the VPN.

After restarting Mongo several times, including removing the data directory and starting over fresh, it was clear replication was going to take days to sync, and we could not afford to wait that long. We needed to restore cluster health ASAP so we could move all nodes to us-east-1 and get off the VPN that was introducing so much pain.

Now, the system I am referring to is production: it cannot lose data, and it cannot take downtime or a maintenance window. Given these constraints, I started googling ways to catch up a MongoDB cluster when it will not catch up on its own. I tried some of the things I found, like rsync, before realizing nothing was any faster across that slow VPN link. Ultimately, I decided I was going to try a snapshot. The document I read warned me that a live snapshot may result in potentially inconsistent data, but given the constraints I mentioned before, I had few options and had to try it. In the end it worked perfectly, and in under an hour I had my entire cluster healthy. Using the AWS CLI utility, here is how I did it…

Step 1, take a snapshot of the healthy node

I actually took this first snapshot in the GUI, so it is not shown here. For the record: to create a snapshot, go to your volume under EC2 → Volumes, click Actions → Create Snapshot, and save the snapshot ID. (Or do it with the CLI like I did for everything else.)
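The CLI equivalent looks something like this — the volume ID is a placeholder for the healthy node's data volume, and --region should be whichever region that node lives in (us-west-2 in these sketches):

$ aws ec2 create-snapshot --region us-west-2 --volume-id vol-1111aaaa \
      --description "mongo healthy node data"
# note the SnapshotId field in the JSON that comes back, e.g. snap-1111aaaa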

Step 2, copy the snapshot from your source region to your destination region

Make sure you copy the returned snapshot ID to your clipboard…
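Roughly like so; note that copy-snapshot runs against the destination region and pulls from --source-region (IDs are the placeholders from Step 1):

$ aws ec2 copy-snapshot --region us-east-1 --source-region us-west-2 \
      --source-snapshot-id snap-1111aaaa \
      --description "mongo data for us-east-1 restore"
# returns a new SnapshotId for the us-east-1 copy, e.g. snap-2222bbbb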

Step 3, Create a new volume from the copied snapshot
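Something like the following — watch the availability zone, because this is exactly where I tripped up (volume type and IDs are illustrative placeholders):

$ aws ec2 create-volume --region us-east-1 --availability-zone us-east-1c \
      --snapshot-id snap-2222bbbb --volume-type gp2
# returns the new VolumeId, e.g. vol-3333cccc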


Step 4, Attach the volume to the system
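The attach call itself is a one-liner; the instance ID below is a placeholder, and /dev/xvdc is the device name we'll look for on the node:

$ aws ec2 attach-volume --region us-east-1 --volume-id vol-3333cccc \
      --instance-id i-5555eeee --device /dev/xvdc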

Oh No We Got An Error!

Ah, OK, simple fix: we created the volume in a different AZ than the node we were attaching it to.

Delete the old volume (as shown below), then…
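With the placeholder ID from above:

$ aws ec2 delete-volume --region us-east-1 --volume-id vol-3333cccc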

Step 5, create a new volume from the snapshot, but this time specify the same AZ (us-east-1b instead of us-east-1c) as the node we wish to attach it to
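Same create-volume command as before, with the AZ corrected:

$ aws ec2 create-volume --region us-east-1 --availability-zone us-east-1b \
      --snapshot-id snap-2222bbbb --volume-type gp2
# returns a fresh VolumeId, e.g. vol-4444dddd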

Step 6, try attaching the new volume (cross your fingers)
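Same attach call, pointed at the new volume:

$ aws ec2 attach-volume --region us-east-1 --volume-id vol-4444dddd \
      --instance-id i-5555eeee --device /dev/xvdc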


Sweet, it worked… Now it's time to do some work on the node we attached this volume to.

Step 7, check if the new attachment is visible to the system
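On the node, lsblk (or fdisk -l) lists the block devices; look for the new device with an empty MOUNTPOINT column:

$ lsblk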

Yup, it sure is: we can see our new device 'xvdc' is a 300G disk that has no mount point. We can also see 'xvdb', our original Mongo data volume, mounted under /mnt.

Step 8, create mount point and mount the new device
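The mount point name is arbitrary; I'll use /mnt/snap in these sketches:

$ sudo mkdir /mnt/snap
$ sudo mount /dev/xvdc /mnt/snap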

Step 9, shut down Mongo if it's running
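Depending on your distro and packaging, the service is called mongod or mongodb:

$ sudo service mongod stop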

Step 10, copy the snapshot data to the existing MongoDB data directory
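A sketch, assuming the live dbpath is /mnt/mongo and the snapshot's copy of it sits at /mnt/snap/mongo — adjust both paths to your layout (the trailing slashes matter to rsync):

$ sudo rsync -a /mnt/snap/mongo/ /mnt/mongo/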

Step 11, fix permissions for the copied data
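The Mongo user name varies by distro (mongod on RedHat-flavored systems, mongodb on Debian/Ubuntu); same assumed dbpath as the previous step:

$ sudo chown -R mongodb:mongodb /mnt/mongo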

NOTE: Do not forget this step or you will get errors starting the MongoDB service

Step 12, start Mongo back up
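Same service-name caveat as the stop:

$ sudo service mongod start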

Step 13, Check Mongo Cluster Status
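A quick one-liner against the mongo shell does it; every member should report PRIMARY or SECONDARY, with optimes close together:

$ mongo --eval 'printjson(rs.status())'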

For contrast, here is what it looked like before; pay close attention to the state of each node/member.

Now that our DB is verified healthy, it's time to clean up.

Step 14, clean up our now-unnecessary waste (and thank the gods)

Umount & Delete

Detach Volume

Delete Volume & Snapshots
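Putting those three together — the umount runs on the node, the rest from wherever you use the AWS CLI, using the placeholder IDs from earlier:

$ sudo umount /mnt/snap && sudo rmdir /mnt/snap
$ aws ec2 detach-volume --region us-east-1 --volume-id vol-4444dddd
$ aws ec2 delete-volume --region us-east-1 --volume-id vol-4444dddd
$ aws ec2 delete-snapshot --region us-east-1 --snapshot-id snap-2222bbbb
$ aws ec2 delete-snapshot --region us-west-2 --snapshot-id snap-1111aaaa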


When I ran into this issue and googled around a bit, I really didn't find a detailed account of how anyone got out of it. Thus, I was inspired by the opportunity to help others in the future, and the result is this post. I hope it finds someone, someday, facing a similar scenario and graciously lifts them out of the depths! Godspeed, happy clouding.