
How to Set Up Flask and Apache on an Ubuntu VM in DigitalOcean with a Custom Domain

In this video I show how to set up Flask and Apache on an Ubuntu VM in DigitalOcean with a custom domain. I made it after someone in the comments on my other DigitalOcean video requested it. If there is something else anyone would like to see, please let me know; I am happy to provide these walkthroughs.

Note: I hit a number of challenges with DNS in this one; I think it's fun to watch me struggle. Enjoy!
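For readers who prefer text to video, here is a minimal sketch of one common way to wire Flask to Apache with mod_wsgi on Ubuntu. Everything in it is an assumption for illustration (the app directory /var/www/myapp, a Flask object named app in /var/www/myapp/app.py, and the domain example.com), not a transcript of the video.

# Install Apache, mod_wsgi and Flask (package names as on Ubuntu 14.04/16.04)
sudo apt-get update
sudo apt-get install -y apache2 libapache2-mod-wsgi python-flask

# Hypothetical app directory; assumes a Flask object named "app" in /var/www/myapp/app.py
sudo mkdir -p /var/www/myapp

# WSGI entry point that hands the Flask app to mod_wsgi
sudo tee /var/www/myapp/myapp.wsgi > /dev/null <<'EOF'
import sys
sys.path.insert(0, '/var/www/myapp')
from app import app as application
EOF

# Virtual host answering for the custom domain (example.com is a placeholder)
sudo tee /etc/apache2/sites-available/myapp.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess myapp user=www-data group=www-data threads=5
    WSGIScriptAlias / /var/www/myapp/myapp.wsgi
    <Directory /var/www/myapp>
        WSGIProcessGroup myapp
        WSGIApplicationGroup %{GLOBAL}
        Require all granted
    </Directory>
</VirtualHost>
EOF

sudo a2ensite myapp.conf
sudo service apache2 reload

The custom-domain half is then just an A record in DigitalOcean's DNS panel pointing the domain at the droplet's public IP.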


Consul for Service Discovery

Why Service Discovery?

Service discovery effectively replaces the process of manually assigning (or automating) your own DNS entries for the nodes on your network. It moves us further from treating VMs like pets and toward treating them like cattle by getting rid of the age-old practice of hostnames and FQDNs carrying contextual value. Instead, when using service discovery, nodes are automatically registered by an agent, and DNS is configured automatically for both the nodes and the services running on them.

Consul

Consul by HashiCorp is becoming the de facto standard for service discovery. Consul's full feature set and simple deployment model make it an optimal choice for organizations looking to quickly deploy service discovery in their environments.

Components of Consul

  1. The Consul Agent
  2. An optional JSON config file for each service, located under /etc/consul.d/<service>.json
    1. If you do not specify a JSON file, Consul will still start and will provide discovery for the nodes (they will get DNS entries as well)

A Quick Example of Consul

How easy is it to deploy Consul?

  1. Download, decompress, and install the Consul agent – https://www.consul.io/downloads.html
  2. Define services in a JSON file (if you want) – https://www.consul.io/intro/getting-started/services.html
  3. Start the agent on the nodes – https://www.consul.io/intro/getting-started/join.html
  4. Make one node join any other node (it does not matter which) to form the cluster, which gets you access to all of the cluster's metadata

Steps 1 and 2 Above

  1. After downloading the Consul binary to each machine and decompressing it, copy it to /usr/local/bin/ so it’s in your path.
  2. Create the directory
    sudo mkdir /etc/consul.d
  3. Optionally, run the following to create a JSON file defining a fake service (a variant with a health check is sketched just after this list)
echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' \
    | sudo tee /etc/consul.d/web.json
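Service definitions can also carry a health check, which Consul will run and expose through the same DNS and API interfaces. Here is a hedged variant of the file above; the check URL and the 10s interval are my assumptions, not part of the original walkthrough:

echo '{"service": {"name": "web", "tags": ["rails"], "port": 80, "check": {"http": "http://localhost:80/", "interval": "10s"}}}' \
    | sudo tee /etc/consul.d/web.json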

Step 3 Above

Run the agent on each node, changing the node name and bind IP accordingly; a sketch for the second node follows the command below.

tuxninja@consul-d415:~$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=agent-one -bind=10.73.172.110 -config-dir /etc/consul.d
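For the second node (agent-two on consul-dcb3 at 10.73.172.108, matching the catalog output further down), the Consul getting-started guide linked above simply runs the agent in client mode, which is enough for it to register and be discoverable. This is a sketch, not output from the original session:

# Client-mode agent for the second node (no -server flag)
consul agent -data-dir /tmp/consul -node=agent-two -bind=10.73.172.108 -config-dir /etc/consul.d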

Step 4 Above

tuxninja@consul-d415:~$ consul join 10.73.172.108
Successfully joined cluster by contacting 1 nodes.

Wow, simple… OK, now for the examples.

Join the cluster from the second node

tuxninja@consul-dcb3:~$ consul join 10.73.172.110
Successfully joined cluster by contacting 1 nodes.
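With both nodes joined, you can list the cluster members from any node (output omitted here because it was not part of the original session):

consul members

Each node should show up with its address and an alive status.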

Look up DNS for a node

tuxninja@consul-dcb3:~$ dig @127.0.0.1 -p 8600 agent-one.node.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 agent-one.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2450
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;agent-one.node.consul.		IN	A
;; ANSWER SECTION:
agent-one.node.consul.	0	IN	A	10.73.172.110
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:43:47 UTC 2016
;; MSG SIZE  rcvd: 76
tuxninja@consul-dcb3:~$

Look up DNS for a service

tuxninja@consul-dcb3:~$  dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55798
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul.		IN	A
;; ANSWER SECTION:
web.service.consul.	0	IN	A	10.73.172.110
;; Query time: 2 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:46:54 UTC 2016
;; MSG SIZE  rcvd: 70
tuxninja@consul-dcb3:~$
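DNS can also hand back the service port via an SRV record; a quick sketch of that query against the same Consul DNS interface (output omitted):

dig @127.0.0.1 -p 8600 web.service.consul SRV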

Query the REST API for Nodes

tuxninja@consul-dcb3:~$ curl localhost:8500/v1/catalog/nodes
[{"Node":"agent-one","Address":"10.73.172.110","TaggedAddresses":{"wan":"10.73.172.110"},"CreateIndex":3,"ModifyIndex":1311},{"Node":"agent-two","Address":"10.73.172.108","TaggedAddresses":{"wan":"10.73.172.108"},"CreateIndex":1338,"ModifyIndex":1339}]

Query the REST API for Services

tuxninja@consul-dcb3:~$ curl http://localhost:8500/v1/catalog/service/web
[{"Node":"agent-one","Address":"10.73.172.110","ServiceID":"web","ServiceName":"web","ServiceTags":["rails"],"ServiceAddress":"","ServicePort":80,"ServiceEnableTagOverride":false,"CreateIndex":5,"ModifyIndex":772}]
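The HTTP API also has a health endpoint for services, which becomes useful once you attach checks to your service definitions (see the optional check example earlier). A sketch of that call, output omitted:

curl http://localhost:8500/v1/health/service/web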


Preventing (bind9) DNS Naughty-ness (named.conf & iptables/ufw) on Ubuntu

If you run a DNS server on the Internet with a default configuration, many people and robots will take advantage of you. The same is true for mail, but that is another article. Needless to say, if you are running a service on the Internet, the naughty goblins will find you. To thwart these dirty criminals, all that's necessary is to configure your named.conf properly. However, since these robots are being naughty, there is a high degree of certainty they are infected endpoints, and as such I really don't want them coming anywhere near me or my machines. After all, for humanity's sake we don't want to be infected by the deadly plague! This article is short and sweet: here is how to protect your DNS server and your server as a whole, using named.conf and ufw (iptables).


Named.conf.options

Nowadays named.conf is really just a file that includes three other files: named.conf.local, named.conf.options, and named.conf.default-zones. The one we are going to fix is named.conf.options. The configuration below should only be applied when you want to run an authoritative nameserver and a caching nameserver on the same box, but only want to allow clients you know (or that are you) to query the cache, rather than the entire Internet, because then bad things happen. If this is not the setup you are going for, don't do this 🙂 But if it is, follow along.

Add the following section, with the proper IPs, to the top of the file:

acl "trusted" {
        192.241.206.98;
        localhost;
        localnets;
};

Note: you can also add a CIDR block for a subnet, like 192.168.0.0/16.

After that's done, make the options {} section look like this:

        allow-query { any; };
        allow-recursion { trusted; };
        allow-query-cache { trusted; };
        allow-transfer { 202.157.182.142; };

Note: allow-transfer is only necessary if you have a secondary nameserver that needs to receive zone transfers. Now restart bind9:

tuxninja@tlprod1:/etc/bind$ sudo service bind9 restart
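If you want to sanity-check the syntax before (or after) restarting, bind ships a checker for exactly this in the bind9utils package; a quick sketch:

sudo named-checkconf /etc/bind/named.conf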

Now cache and recursion queries from non-trusted clients will be denied (queries for zones you are authoritative for are still answered for anyone, per allow-query { any; }). If it is working, check your /var/log/syslog and you will see some denials like this:

Nov 11 16:00:31 tlprod1 named[952]: client 192.163.221.224#80 (hehehey.ru): query (cache) 'hehehey.ru/ANY/IN' denied
Nov 11 16:00:31 tlprod1 named[952]: client 192.163.221.224#80 (hehehey.ru): query (cache) 'hehehey.ru/ANY/IN' denied
Nov 11 16:00:31 tlprod1 named[952]: client 104.37.29.110#4761 (hehehey.ru): query (cache) 'hehehey.ru/ANY/IN' denied

The above is from my actual log file. I was quite annoyed that clients were basically abusing the hell out of hehehey.ru… so I decided I don't want to talk to those people at all; to them, I should be a black hole. To do this I used UFW, which is short for Uncomplicated Firewall and essentially makes dealing with iptables much, much nicer. It's only my second time using UFW, but I've been using iptables for well over a decade. Anyway, here is the simple UFW setup I came up with.

tuxninja@tlprod1:/etc/bind$ sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)

tuxninja@tlprod1:/etc/bind$ sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)

tuxninja@tlprod1:/etc/bind$ sudo ufw allow ssh
Rules updated
Rules updated (v6)

tuxninja@tlprod1:/etc/bind$ sudo ufw allow 80
Rules updated
Rules updated (v6)

So we are configuring the default policy to deny all incoming traffic and allow all outgoing traffic, and then allowing SSH and web (Apache) traffic. Next I created a script called block.sh that adds a ufw deny rule for each bad actor I parsed out of my log; here's what block.sh looks like:

# cat block.sh 
#!/bin/bash

# Read IP addresses from stdin, one per line, and add a ufw deny rule for each
while read -r line; do
	ufw deny from "$line"
done

Don’t forget to chmod +x your shell script. Then I did this… blocking all bad actors…

root@tlprod1:~# cat /var/log/syslog | grep hehehey.ru | grep -v repeated | awk -F ' ' '{print $7}' | cut -d '#' -f 1 | ./block.sh

Note: use sudo if you don't run this as root. This goes through my log, finds all of these bad requests, and blocks each requestor. It's quite aggressive, so be careful: thoroughly limit your parsing with grep so that you only block things you really don't want talking to your server, because this blocks ALL traffic from the requestor to your server, not just DNS.
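One small refinement (my assumption, not part of the original command): dedupe the IP list with sort -u before piping it to block.sh, so the same address does not get a deny rule added repeatedly:

cat /var/log/syslog | grep hehehey.ru | grep -v repeated | awk -F ' ' '{print $7}' | cut -d '#' -f 1 | sort -u | ./block.sh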

Once that is complete, you finally need to permit legitimate DNS requests by running

ufw allow 53

And then finally enable your firewall

ufw enable

If you are successful you should see entries in your log that look like this

Nov 11 15:10:35 tlprod1 kernel: [1652178.544292] [UFW BLOCK] IN=eth0 OUT= MAC=04:01:63:57:8a:01:3c:8a:b0:0d:3f:f0:08:00 SRC=65.60.18.103 DST=192.241.206.198 LEN=72 TOS=0x00 PREC=0x00 TTL=247 ID=31303 PROTO=UDP SPT=20225 DPT=53 LEN=52

You can also view all your firewall rules by running

sudo ufw status numbered
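The numbers in that output are handy because ufw can delete a rule by its number; for example (the rule number 3 here is just an illustration):

sudo ufw delete 3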

Happy Blocking!

