HTTPS - Let’s Encrypt
While I have mucked around with self-signed certs in the past, I have never actually owned a personal DV cert - mainly because it was never worth the cost. As such, I never bothered to set up HTTPS on my web server. But with Let’s Encrypt, the awesome new certificate authority backed by the EFF and Mozilla among others, I can now get my own free, real DV cert. This is just a document of what it took to go from HTTP to HTTPS on my static site.
1. Installing
At the time of this writing, my web server is running on Ubuntu 14.04.4 with nginx 1.8.1. For me, installing Let’s Encrypt meant installing git and bc, then doing a git clone to download it locally.
sudo apt-get -y install git bc
sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
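If you want a quick sanity check that the clone worked, you can just ask the client for its help text. Note that letsencrypt-auto bootstraps its own dependencies on first run, so this may take a minute or two.
cd /opt/letsencrypt
./letsencrypt-auto --help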
2. Request cert using webroot plugin
Webroot is a plugin that requests the cert by putting a particular file on the website, proving that you do in fact own the domain. The first step is to allow remote access to the folder it wants to use; that happens in the nginx site config file. Mine is called www, but obviously yours might not be - the default file is simply called default.
sudo nano /etc/nginx/sites-available/www
In the server block, I am just adding a small change for the well-known location.
location ~ /.well-known {
    allow all;
}
and then reloading the server.
sudo service nginx reload
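To make sure the challenge path is actually reachable before requesting anything, you can drop a throwaway file into the webroot’s .well-known folder and fetch it. This assumes the /var/www/html webroot used in the request below; adjust to match yours.
sudo mkdir -p /var/www/html/.well-known
echo ok | sudo tee /var/www/html/.well-known/test.txt
curl http://aptprojects.net/.well-known/test.txt   # should print "ok"
sudo rm /var/www/html/.well-known/test.txt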
Then we are going to request the actual cert. Notice that I am requesting one cert covering both the site and the two sub-domains I use on this server. Obviously you will need to edit this to match where your webroot is, as well as which domains you are requesting.
cd /opt/letsencrypt
./letsencrypt-auto certonly -a webroot --webroot-path=/var/www/html -d aptprojects.net -d www.aptprojects.net -d [hostname].aptprojects.net
After providing your email address and signing the agreement, it hopefully spits out a congratulatory note. This mentions that you should probably back up the /etc/letsencrypt folder at this point. Sounds good to me. On my desktop, I simply scp that folder down (once I assign my user the rights to read it).
scp -r -P [port] [user]@[hostname].aptprojects.net:/etc/letsencrypt /home/aaron/Documents/Certs/
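If you would rather not touch the permissions on /etc/letsencrypt itself, one alternative (just a sketch) is to tar it up as root on the server and pull the archive down instead:
# on the server
sudo tar czf /tmp/letsencrypt-backup.tar.gz /etc/letsencrypt
sudo chown [user] /tmp/letsencrypt-backup.tar.gz
# then from the desktop
scp -P [port] [user]@[hostname].aptprojects.net:/tmp/letsencrypt-backup.tar.gz /home/aaron/Documents/Certs/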
Next I use openssl to create a strong Diffie-Hellman group. This takes a few minutes.
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
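Once it finishes, a quick way to confirm the file contains what you expect:
openssl dhparam -in /etc/ssl/certs/dhparam.pem -text -noout | head -n 1   # should report a 2048 bit group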
3. Configure nginx
This is to set up the website to use the new certs and push all users over to HTTPS.
sudo nano /etc/nginx/sites-available/www
Comment out the normal section in the server config that sets up the port 80 listeners.
#listen 80 default_server;
#listen [::]:80 default_server;
Then add in an HTTPS listener.
listen 443 ssl;
server_name aptprojects.net www.aptprojects.net;
ssl_certificate /etc/letsencrypt/live/aptprojects.net/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/aptprojects.net/privkey.pem;
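For reference, that live directory holds symlinks to the current versions of four files - cert.pem, chain.pem, fullchain.pem and privkey.pem - and the symlinks are updated on every renewal:
sudo ls -l /etc/letsencrypt/live/aptprojects.net/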
Next add in all of the appropriate TLS options
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security max-age=15768000;
Lastly, add in a redirect in case anyone tries to connect over plain HTTP on port 80. This needs to be outside of the normal server block.
server {
    listen 80;
    server_name aptprojects.net www.aptprojects.net;
    return 301 https://$host$request_uri;
}
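Before reloading, it doesn’t hurt to make sure nginx is happy with the new config:
sudo nginx -t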
Then I’m just reloading the nginx service.
sudo service nginx reload
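Since port 80 was already open, a quick curl check confirms the redirect is working even before touching the firewall:
curl -sI http://aptprojects.net | head -n 1   # expect HTTP/1.1 301 Moved Permanently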
4. Configure ufw
Quickly running a status check on ufw will show that it is not yet configured to allow any 443 traffic.
sudo ufw status verbose
So I go ahead and allow it.
sudo ufw allow https
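With 443 now open, I can also confirm the site answers over HTTPS and is sending the HSTS header:
curl -sI https://aptprojects.net | grep -i strict-transport-security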
That’s it - I am now fully HTTPS, at least for the next three months. My next steps are to test it in my browsers and also over at the SSL Server Test at SSL Labs. This all worked perfectly for me.
5. Setup the autorenew of the domain certs
To renew the certs we just got, we simply run the following command.
/opt/letsencrypt/letsencrypt-auto renew
Running it now shows that my certs are not due for renewal yet. So instead of just running it, I am going to set it up as a cron job to run once a week.
sudo crontab -e
And then I add in the following two lines at the bottom of the file. This sets up Let’s Encrypt to attempt to renew the certs every Monday at 2:30am and then reload the web server at 2:35am.
30 2 * * 1 /opt/letsencrypt/letsencrypt-auto renew >> /var/log/le-renew.log
35 2 * * 1 /etc/init.d/nginx reload
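If you ever want to confirm that a renewal actually took, you can check the dates on the cert the server is presenting:
echo | openssl s_client -connect aptprojects.net:443 -servername aptprojects.net 2>/dev/null | openssl x509 -noout -dates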
6. Updating Let’s Encrypt
Because this wasn’t installed in a normal apt-get kind of way, to keep it current I have to run the following commands. As you may be aware, this software is still in beta, and they have been making a lot of changes to it. Keeping it updated is certainly for the best.
cd /opt/letsencrypt
sudo git pull
Credits
Most of this information came from one very, very good tutorial on how to set this up over at DigitalOcean. I also stole some great ideas from Raymii. In fact, the only thing I did here was document what worked and change parameters to match my installation.
7. Follow up
So, while this worked great in dev, it didn’t exactly work for my production servers. I ran into two different problems that kept me from getting or renewing certs after following this guide.

The first issue was the 301 redirect I had you install in step 3. Because we had already requested the cert at that point, everything seemed fine, but on renew this will fail: the webroot plugin correctly installs the right stuff into .well-known, but when the ACME server reaches for it on port 80, it gets a 301 redirect back instead of the challenge file. The second issue was that I had different sub-domains pointed to different web roots as different virtual hosts (nginx calls them server blocks), while my Let’s Encrypt request only specified one web root. One option for that problem is to specify a different web root for each sub-domain in the Let’s Encrypt request (there is a sketch of that after the config below). That said, the easiest way to resolve both problems for me was to edit the server block that handles the redirect and serve .well-known from one particular web root, regardless of which sub-domain it is requested from.
server {
    listen 80;
    listen [::]:80;
    server_name sub-example.aptprojects.net www.aptprojects.net aptprojects.net;

    location '/.well-known/' {
        default_type "text/plain";
        root /var/www/html;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
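For completeness, here is a sketch of the multiple-webroot approach mentioned above, assuming your version of the client supports repeating the -w flag (short for --webroot-path); each -w applies to the -d domains that follow it. The /var/www/sub-example path is just a placeholder for wherever that sub-domain’s web root actually lives.
cd /opt/letsencrypt
./letsencrypt-auto certonly -a webroot -w /var/www/html -d aptprojects.net -d www.aptprojects.net -w /var/www/sub-example -d sub-example.aptprojects.net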
Other than these two issues, I did want to mention that when playing with your ./letsencrypt-auto command line options, you should really start with the --test-cert option set. By adding this option at the end, you will get a working certificate - just not a valid one. With this cert, you can test your site and your config and make sure everything is perfect before requesting a real cert. The reason you want to do this is that Let’s Encrypt limits the number of certs you can request per week - see the Let’s Encrypt rate limits. Those limits don’t apply to the test server, so test away, and once you are 100% happy with your config, it is super simple to just overwrite your test cert with an actual production cert.
cd /opt/letsencrypt
./letsencrypt-auto certonly -a webroot --webroot-path=/var/www/html -d aptprojects.net -d www.aptprojects.net -d [hostname].aptprojects.net --test-cert