WordPress and cookie-free domains

One of the tips you will get from Google’s PageSpeed Insights is to host your static content on a cookie-free domain.

I always ignored this tip, but today I decided to find out how to accomplish this with WordPress. And it turned out to be relatively easy.

The first thing you have to do is create a new domain and point it to the wp-content directory of your website.

Let’s assume the domain of your website is yourdomain.com. You can choose to host your static content on a subdomain, for example static.yourdomain.com. The other option is to just use a completely different domain, for example static-yourdomain.com.

After you have your new domain pointing to the wp-content directory of your website, you have to add the following two lines of code to your wp-config.php file:

define("WP_CONTENT_URL", "http://static.yourdomain.com"); 
define("COOKIE_DOMAIN", "www.yourdomain.com");

And that’s it! You should now see all your assets served from your cookie-free domain.
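If you want to verify this, one option is to check that the static domain sends no Set-Cookie headers. Here is a minimal sketch (the helper name, domain and file path are placeholders, not part of WordPress):

```shell
# Succeeds when the HTTP response headers on stdin contain no Set-Cookie
# header, i.e. the domain really serves cookie-free responses.
check_cookie_free() {
    ! grep -qi '^set-cookie:'
}

# Usage against your static domain (placeholder URL):
#   curl -sI http://static.yourdomain.com/uploads/logo.png | check_cookie_free \
#       && echo "no cookies set"
```

Note that this only checks cookies set by the server; cookies your browser sends with the request are the separate subdomain issue discussed below.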

However, in my case I ran into some trouble when trying to log in to the WordPress backend again: somehow I ended up in a redirect loop.

The fix was easy: I just removed the COOKIE_DOMAIN constant from my wp-config.php file and everything worked again.

There is one other thing you should be aware of if you are using a subdomain as your cookie-free domain.

If there are cookies set for yourdomain.com, your browser will also send them with every request to your subdomain, and then you went to all the trouble of setting up a cookie-free domain for nothing. So make sure there are no such cookies in your case!

However, if you force the www subdomain for your website, there should not be a problem, because WordPress will set the domain of the cookies to www.yourdomain.com. But if your site can be reached on both www.yourdomain.com and yourdomain.com, you should check this.

February 5, 2016

Add swap space to Ubuntu VPS

A few days ago a cron job did not finish. After some digging through the log files I found out it was killed by the kernel because the server had run out of memory.

I did not want to upgrade to a plan with more memory for this VPS, because the current amount of memory is more than enough for normal operation. So I decided to add a swap file for this server. I found a good explanation on how to do this in this DigitalOcean article.

However, if you want to quickly add a swap file yourself without reading the complete article, here are the commands I used to create a 4 GB swap file on my Ubuntu 14.04 VPS:

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo "/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab
sudo sysctl vm.swappiness=10
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
sudo sysctl vm.vfs_cache_pressure=50
echo "vm.vfs_cache_pressure=50" | sudo tee -a /etc/sysctl.conf

If you want to create a swap file of some other size, change the size passed to fallocate in the first line. For more details on the other commands in the example above, read the DigitalOcean article; it explains them all quite well.
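Afterwards you can verify that the swap file is active and that the tuning values took effect. These commands only read state, so they do not need sudo:

```shell
# List active swap areas; /swapfile should show up here with its size.
cat /proc/swaps

# Confirm the kernel tuning values that were set above.
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure
```

If /swapfile does not appear in /proc/swaps after a reboot, double-check the line that was appended to /etc/fstab.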

January 14, 2016

Laravel migration and renaming a column

For a Laravel project I created a migration in which I added several attributes to a database table and also renamed an existing attribute. I used the following code in the up() method of the migration:

Schema::table('table', function (Blueprint $table) {
    $table->string('new_column');
    $table->renameColumn('existing_column', 'renamed_column');
});
I ran the migration for my SQLite test database, but after running my tests it turned out the rename was not executed. However, the new attribute was added to the table.

After some experimenting I found out that you should do column renames in separate calls to the Schema facade. After changing the code in the up() method of my migration to the following everything worked as expected:

Schema::table('table', function (Blueprint $table) {
    $table->string('new_column');
});

Schema::table('table', function (Blueprint $table) {
    $table->renameColumn('existing_column', 'renamed_column');
});
I don’t know if this issue is specific to SQLite or whether it also occurs on other databases such as MySQL or PostgreSQL. I also have not spent time looking into the cause of this issue. If you know, or have an idea, please let me know in the comments 🙂
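A quick way to see what a migration actually did to an SQLite database is to inspect the schema with the sqlite3 command-line tool. This sketch uses throwaway names and a plain SQL rename instead of the Schema builder, but the same .schema check works against a real test database:

```shell
# Throwaway database; remove any leftover from a previous run first.
rm -f /tmp/demo.sqlite

# Create a table, rename a column (SQLite >= 3.25 supports RENAME COLUMN
# directly), and print the resulting schema to confirm the rename happened.
sqlite3 /tmp/demo.sqlite "CREATE TABLE demo (existing_column TEXT);"
sqlite3 /tmp/demo.sqlite "ALTER TABLE demo RENAME COLUMN existing_column TO renamed_column;"
sqlite3 /tmp/demo.sqlite ".schema demo"
```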

January 4, 2016

Get the name of the current view in Laravel

To fix a problem in a Laravel application I needed access to the name of the current view in the master layout. I found a nice solution in this post on Stack Overflow:

View::composer('*', function ($view) {
    View::share('view_name', $view->getName());
});
I added this code to my AppServiceProvider, and I can now use the $view_name variable in all my views. It will contain the name of the view as you called it from your controller, for example pages.home.
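As a hypothetical usage example (the class naming scheme is made up, not part of Laravel), this lets the master layout vary per view, for example by turning pages.home into a body class:

```php
<body class="view-{{ str_replace('.', '-', $view_name) }}">
```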

One thing to note: a comment on this solution on Stack Overflow mentions that it works, but that $view_name will be overwritten if you use @include for partials in your views. In my case this is not an issue, but it is something to keep in mind if you run into problems.

December 16, 2015

Automatically update multiple VPS with Ansible

Over time the number of VPSes I manage has increased. Until recently I logged in manually to each server to update the software, but with the increasing number of VPSes this started to get tedious. So I decided to automate this with Ansible. In this post I will share my current setup. I will assume you are somewhat familiar with Ansible, but if you are not you can take a look at this tutorial on serversforhackers.com to learn more. If you’d rather have a video tutorial they also have you covered; in that case you can take a look at this free series.

Because all my servers run Ubuntu 14.04 I can update them all with the same commands, which makes things easy: a single playbook can do the job. Because it seemed likely that somebody else had already tackled this problem, I turned to Google to look for a playbook, and I found this post by Chao Huang. In his post he describes an Ansible playbook that updates all the software on Debian/Ubuntu based systems and reboots the server afterwards if necessary. So that’s what I use:

# Upgrade Debian/Ubuntu based systems and reboot if necessary.
- hosts: all
  sudo: yes
  tasks:
    - name: Check if there are packages available to be installed/upgraded
      command: /usr/lib/update-notifier/apt-check --package-names
      register: packages
    - name: Upgrade all packages to the latest version
      apt: update_cache=yes upgrade=dist
      when: packages.stderr != ""
    - name: Check if a reboot is required
      stat: path=/var/run/reboot-required get_md5=no
      register: file
    - name: Reboot the server
      command: /sbin/reboot
      when: file.stat.exists == true

On all my servers I use the same user to login, but this user does have to use sudo to run some of the commands in the playbook above. My servers are also configured so that a user has to provide a password to use sudo. So I needed a way to provide this password to Ansible. However, because I am not that familiar with Ansible this turned out to be more difficult than I expected, mostly because I did not want to store the sudo passwords in cleartext. But after some experimenting I figured out an approach that works quite well. In my current setup I use host variables to specify the sudo password for each of my servers, and I use Ansible Vault to encrypt the passwords.

If you want to specify host variables for the host example.com you can create the file /etc/ansible/host_vars/example.com.yml and add your variables to it. These variables will then be used for that host in any playbook you run. So in my setup I create a host variables file for each of my servers and specify the sudo password in it:

ansible_become_pass: PASSWORD

To encrypt these files with Ansible Vault you have to create them with the following command:

ansible-vault create /etc/ansible/host_vars/example.com.yml

This command will ask you for a password. Make sure you use the same password for all your host variables files, because you can specify only one Ansible Vault password when running a playbook.

With sudo passwords specified for all my servers I can now update them all with a single command:

ansible-playbook playbook.yml --ask-vault-pass
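For completeness: the playbook runs against the hosts in your Ansible inventory. A minimal inventory (with made-up hostnames) could look like this:

```ini
# /etc/ansible/hosts — hypothetical example
[vps]
server1.example.com
server2.example.com
```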

If you have any suggestions or questions just leave a comment below.

December 12, 2015

Spatie Laravel Backup and Envoyer

To set up automated backups for a Laravel application I decided to try out the Spatie Laravel Backup package. It can back up your application to any of the Laravel filesystems you have configured.

With this package you can configure which files you want to include in and exclude from your backups. The default configuration includes the base_path() directory and excludes the storage_path() and base_path('vendor') directories.

However, I have some files under the storage_path() directory that I do want to back up, so I modified my configuration to include those files. When I ran the backup locally everything worked fine, but when I tested it on my production server the files in the storage_path() directory were missing.

After some digging around I found out that this came from the fact that I use Envoyer to deploy my application. Envoyer stores the last four releases and uses a symlink to point to the current release. It also uses symlinks for files/directories that should be shared between releases, for example the .env file and the storage_path() directory. The directory structure used by Envoyer looks something like this:

    /path/to/application/
        current -> /path/to/application/releases/20151209090624/
        releases/
            20151209090624/
                .env -> /path/to/application/.env
                storage -> /path/to/application/storage/
        .env
        storage/

If you configure your backups to include the base_path() directory with the directory layout above, the files in the storage directory will not be included. This is because the call to base_path() will output the real path to the current release, which will be /path/to/application/releases/20151209090624/ for the example above. However, the storage_path() directory is not included in this path, since it is located at /path/to/application/storage/. That is why the files in the storage_path() directory were not included in my backups.

The fix for this problem is easy though: simply add the storage_path() directory to the list of included directories in your backup configuration.
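With that fix in place, the relevant part of the package’s configuration could look something like this (the exact keys depend on the version of the package you use, so treat this as a sketch):

```php
'include' => [
    base_path(),
    storage_path(),
],
```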

December 9, 2015

Laravel absolute URLs from background queue

For a project I use background queues to send confirmation emails. In the Blade template of this email I use the link_to_route() helper function. But when the emails were sent through the background queue, the domain for this link defaulted to localhost.

Normally Laravel falls back to the domain of the request, but that is not available when code runs in the background queue. I fixed this by adding the following line of code to the register() method of my AppServiceProvider:

URL::forceRootUrl('http://www.yourdomain.com');

In my case I also added an APP_URL environment variable, because I want to set the correct URL for all environments of the app, and changed the code to the following:
URL::forceRootUrl(env('APP_URL'));

December 7, 2015

Nginx, Let’s Encrypt and Firefox untrusted connection

Today I received my invite to the Let’s Encrypt beta, so I decided to try it out for one of my websites. I got everything working in Google Chrome, but when I tried to view the site in Firefox I got a warning that the connection was untrusted. It took me some time to find out what the issue was.

If you generate a new certificate with Let’s Encrypt for a domain (example.com in this post) the following files are created:

cert.pem
chain.pem
fullchain.pem
privkey.pem

I used the file cert.pem for the ssl_certificate directive (and privkey.pem for ssl_certificate_key) in my Nginx virtual host configuration. This works without a problem in Google Chrome, but in Firefox it leads to the untrusted connection warning, because cert.pem does not include the intermediate certificate and Firefox does not fetch missing intermediates on its own. To fix this you should use the fullchain.pem file instead of cert.pem for the ssl_certificate directive. If you do that, everything works as it should in Firefox.

I will finish this post with a list of all the SSL related variables I use in my Nginx virtual host configuration:

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;

December 1, 2015

FSEvents not compatible with your operating system or architecture

While installing the React Native examples I ran into the following error when running the npm install command:


npm ERR! notsup Unsupported
npm ERR! notsup Not compatible with your operating system or architecture: fsevents@1.0.5
npm ERR! notsup Valid OS: darwin
npm ERR! notsup Valid Arch: any
npm ERR! notsup Actual OS: linux
npm ERR! notsup Actual Arch: x64

This error message says that the fsevents package does not support my operating system, which makes sense: fsevents is macOS-only ("Valid OS: darwin") and I am running Linux. In my case the solution was to upgrade npm to the latest version. I was still using npm 2.x, and after upgrading to npm 3.3.12 the error was gone.
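If you script your machine setup, a small guard like the following can catch this early. The check takes the version string as an argument, so the logic itself does not depend on npm being installed (npm_too_old is a made-up helper name):

```shell
# Succeeds (exit 0) when the given npm version is older than major version 3.
# npm 2.x fails hard on platform-specific optional dependencies such as
# fsevents, while newer npm versions skip them.
npm_too_old() {
    [ "${1%%.*}" -lt 3 ]
}

# Usage in a setup script:
#   npm_too_old "$(npm --version)" && echo "please upgrade npm (npm install -g npm)"
```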

November 30, 2015