A Flexible Web Development Setup on macOS

This is a simple way to configure a macOS system for web development. It allows one to develop multiple sites locally, without reconfiguring anything when changing sites/customers.

There are two key components to the strategy: dnsmasq and the httpd daemon's configuration.

Local Network Configuration

On my local network, where I do most of my development, I have a router that is configured to provide IP addresses via DHCP. I also have a Raspberry Pi running Pi-hole that acts as the local network's DNS server (for both static and dynamic addresses). The DHCP server on the router is configured to offer the Pi-hole as the sole DNS server. The Pi-hole is configured to forward only fully qualified queries to external recursive DNS servers, when a hostname is not local.

This means that typical clients on the local net are given an IP by the router and directed to use the Pi-hole for DNS, thereby giving everyone free ad blocking with zero manual configuration effort on my part.
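For reference, Pi-hole is dnsmasq-based, and the forward-only-fully-qualified-names behavior corresponds to the following dnsmasq options (a sketch; Pi-hole normally manages these through its own settings interface):

```
# Never forward plain names (without dots) upstream:
domain-needed
# Never forward reverse lookups for private IP ranges:
bogus-priv
```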

There are only a handful of TLDs that are acceptable for exclusive use on a local network. .local is one example, but multicast DNS and Apple's Bonjour networking have mostly appropriated that. .dev is another TLD suitable for this purpose. Rather than use my true public TLD, I have chosen to use .dev for all devices on my private local network. Thus, my desktop (named desk) can be accessed on the local network either as desk or by its fully qualified .dev name.

The remainder of this note describes how to configure a DNS server local to a development machine for the purposes of web application development.

The typical scenario is anticipated to be something like the following: I have deployed a production web site for a client. I am doing local development on new features for the client, and I'd like to be able to access them by browsing to a corresponding local hostname, whether I'm on my local network, in a Starbucks, or at the client site.

If my goal were a solution that worked only while I was on my local network, the easiest approach would be to simply add static host entries on the Pi-hole which point back to my development machine. Because this approach won't work when I'm off my local network, I have taken the more complex approach of running dnsmasq on my development machine.


Configure dnsmasq

dnsmasq will be used to handle domain name resolution on the development machine for the sites under development. From the dnsmasq site:

Dnsmasq provides network infrastructure for small networks: DNS,
DHCP, router advertisement and network boot. It is designed to
be lightweight and have a small footprint, suitable for resource
constrained routers and firewalls. It has also been widely used
for tethering on smartphones and portable hotspots, and to
support virtual networking in virtualisation frameworks.

The TLD .dev is commonly used by developers for this purpose. As I've already used that TLD for my local domain, I've chosen to use .ldev for use here.

I wish to configure dnsmasq on my development machine so that any domain name ending in .ldev resolves to the development machine. For example, the client's public name will continue to resolve to the IP address of the client's production site, while its .ldev counterpart will resolve to my local IP. This arrangement makes it quite simple to develop code locally and then deploy remotely.

Note that most instructions for using dnsmasq in this manner focus on changes to /opt/local/etc/dnsmasq.conf. That is a valid technique; however, I wanted the dnsmasq on my development machine to handle only resolution of hosts in the .ldev TLD, while allowing macOS to handle resolution of all remaining TLDs normally (i.e., as directed by the DNS configuration provided by the router via DHCP).

Here are the steps:

  • Use macports to install dnsmasq. At the end of the installation, macports will emit a message about how to configure dnsmasq so that it is started automatically at boot time. Make a note of this for later use.

  • Configure dnsmasq so that any name in the .ldev TLD is resolved to the local machine at (the loopback address).

    • Copy the example dnsmasq.conf configuration provided by macports to the correct location:

      $ sudo cp `port contents dnsmasq | fgrep dnsmasq.conf` \
          /opt/local/etc/dnsmasq.conf
    • Update the configuration file to resolve .ldev as described:

      $ sudo -s -- "echo 'address=/ldev/' >> \
          /opt/local/etc/dnsmasq.conf"
    • Use macports to configure launchd to start dnsmasq at boot time:

      $ sudo port load dnsmasq
    • Test the configuration of your local DNS server (i.e., dnsmasq):

      # Should resolve to
      $ dig @ foo.ldev
      # Should both resolve to the same correct public IP
      # (substitute a real public hostname)
      $ dig @ <public-hostname>
      $ dig <public-hostname>

At this point, dnsmasq is working properly.

In the future, if changes are made to the dnsmasq configuration, restart the daemon. (A SIGHUP makes dnsmasq clear its cache and re-read /etc/hosts, but it does not re-read dnsmasq.conf.) To restart:

$ sudo launchctl stop org.macports.dnsmasq
$ sudo launchctl start org.macports.dnsmasq

Configure macOS

We now need to tweak macOS so that it uses dnsmasq for name resolution. There are two options:

  • Send all DNS requests to dnsmasq, or
  • Send only queries for the .ldev TLD to dnsmasq.

As already mentioned, I chose the latter. To do this:

$ sudo mkdir /etc/resolver
$ sudo -s -- "echo 'nameserver' > /etc/resolver/ldev"

In the second command, note that the name of the file being created is ldev, which is the name of the TLD we wish to have dnsmasq handle. In other words, macOS uses dnsmasq to resolve only names matching the file's name (i.e., names ending in .ldev).
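Conceptually, macOS's per-domain resolver selection can be sketched as follows. This is an illustrative shell function, not Apple's implementation; it hard-codes the single /etc/resolver/ldev entry created above:

```shell
# Illustrative sketch: names under a /etc/resolver/<file> domain are sent
# to that file's nameserver; everything else uses the default resolver.
pick_resolver() {
    case "$1" in
        *.ldev|ldev) echo "" ;;  # from /etc/resolver/ldev
        *)           echo "default" ;;      # normal DHCP-provided DNS
    esac
}
```

So a query for foo.ldev goes to dnsmasq on the loopback, while a name like desk.dev still follows the normal resolution path.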

Note that macOS does not use /etc/resolv.conf to choose DNS servers. Do not edit the file -- it is generated automatically when System Preferences changes related information. It exists so that software which does use resolv.conf will continue to function properly.

Test your new configuration. 'dig foo.ldev' should resolve to, and 'dig' against the client's public hostname should resolve to the site's correct address.

You say you want to undo what you've done? Not a problem:

$ sudo launchctl stop org.macports.dnsmasq
$ sudo rm /opt/local/etc/dnsmasq.conf
$ sudo rm -rf /etc/resolver
$ sudo port uninstall dnsmasq
$ sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

Configuring Apache

dnsmasq was configured with a scheme that can be used for any development site you work with -- there are no configuration changes when you add or remove sites. Let's configure Apache in the same manner.


We will create a new directory which will contain all sites. Under this directory there will be additional directories -- one for each site. These subdirectories can be added/removed at will.

# First, set up the container directory
$ sudo mkdir -p /www/sites

# Make it mine
$ sudo chown -R `whoami` /www

Let's set up two dummy sites for testing, one for customer foo and one for customer bar. The wwwroot directory of each will contain the directory structure for all of the files to be served for that site.

$ mkdir -p /www/sites/foo/wwwroot
$ echo 'hello, world' > /www/sites/foo/wwwroot/index.html

$ mkdir -p /www/sites/bar/wwwroot
$ echo 'hello, world' > /www/sites/bar/wwwroot/index.html


To configure Apache, you'll need to edit two configuration files. This is for 10.11.6, YMMV.

File: /etc/apache2/httpd.conf
  1. Find the line that begins with #LoadModule and ends with Uncomment the line by removing the #. This enables mod_vhost_alias, which provides the VirtualDocumentRoot directive used below.
  2. Find the line that begins with #Include and ends with httpd-vhosts.conf. Uncomment the line by removing the #. This tells Apache where to find the vhost configuration information.
  3. Save the changes.
File: /etc/apache2/extra/httpd-vhosts.conf

Comment out the configuration that is present and add the following:

<Directory "/www">
     Options Indexes MultiViews FollowSymLinks
     AllowOverride All
     Order allow,deny
     Allow from all
     Require all granted
</Directory>

<VirtualHost *:80>
     VirtualDocumentRoot "/www/home/wwwroot"
     UseCanonicalName Off
</VirtualHost>

<VirtualHost *:80>
     VirtualDocumentRoot "/www/sites/%1/wwwroot"
     ServerAlias *.ldev
     UseCanonicalName Off
</VirtualHost>

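For context on the %1 in VirtualDocumentRoot: mod_vhost_alias expands it to the first dot-separated label of the requested hostname, which is what maps each .ldev name onto its site directory. A rough sketch of that interpolation (an illustrative shell function, not Apache's code):

```shell
# Approximate mod_vhost_alias %1 interpolation: %1 becomes the first
# dot-separated label of the Host header.
vhost_docroot() {
    local first="${1%%.*}"
    echo "/www/sites/$first/wwwroot"
}
```

A request for foo.ldev is thus served from /www/sites/foo/wwwroot.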
Final Steps

Restart Apache so that it loads the new config: sudo apachectl restart.

Now, test your sites:

# Both should print 'hello, world'
$ curl http://foo.ldev/
$ curl http://bar.ldev/

At this point, you can add and remove sites under /www with impunity, never having to reconfigure or restart anything again.
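Since adding a site is now just a directory and an index file, a small helper can script it. The function name newsite and the SITES_ROOT override are my own conveniences, not part of the setup above:

```shell
# Hypothetical helper: scaffold a new site under the container directory.
# SITES_ROOT defaults to /www/sites but may be overridden.
SITES_ROOT="${SITES_ROOT:-/www/sites}"

newsite() {
    mkdir -p "$SITES_ROOT/$1/wwwroot"
    echo "hello from $1" > "$SITES_ROOT/$1/wwwroot/index.html"
}
```

After running newsite baz, http://baz.ldev/ is immediately servable, with no Apache restart required.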

Django Addendum

After writing the preceding, I wanted to do a similar thing, except running a Django WSGI application from a local repository while supporting SSL. This addendum covers that scenario.

The approach I took was to create a separate VHost configuration for the client and simply Include it from httpd.conf. Here are the salient parts; note that some of this veers into Django/WSGI territory.

I created a httpd_macOS.conf file, which contained:

# Configuration for client's Django/WSGI application to be run locally on OS-X.
# Assumes:
#     * SSL not otherwise in use locally. I.e., /etc/apache2/extra/httpd-ssl.conf
#       has not been Included via httpd.conf.
#     * You are running dnsmasq locally, configured so that any DNS name
#       ending in '.ldev' resolves to
#     * 'mod_wsgi' was installed via 'pip install mod_wsgi'.
# To use this:
#   1) In /etc/apache2/httpd.conf, uncomment the following lines:
#         '#LoadModule socache_shmcb_module libexec/apache2/'
#         '#LoadModule ssl_module libexec/apache2/'
#   2) In /etc/apache2/httpd.conf, append a line with an Include directive
#      to Include this file.
#      'Include /Users/khe/repos/clientname/ecommerce/apache/httpd_macOS.conf'

# Next two lines are the output from 'mod_wsgi-express module-config'
LoadModule wsgi_module "/Users/khe/.virtualenvs/ecom/lib/python2.7/site-packages/mod_wsgi/server/"
WSGIPythonHome "/Users/khe/.virtualenvs/ecom"

# Grant httpd access to application html/etc. files.
<Directory "/Users/khe/repos/clientname/ecommerce/ecom/store">
    Options All
    AllowOverride All
    Order allow,deny
    Allow from all
    Require all granted
</Directory>

Alias /static                /Users/khe/repos/clientname/ecommerce/ecom/store/static-collect
Alias /static-collect/admin  /Users/khe/repos/clientname/ecommerce/ecom/store/static-collect/admin
Alias /media                 /Users/khe/repos/clientname/ecommerce/ecom/store/media

# Grant httpd access to WSGI script.
<Directory "/Users/khe/repos/clientname/ecommerce/apache">
   <Files "django.wsgi">
      Order allow,deny
      Allow from all
      Satisfy any
   </Files>
</Directory>

# LogLevel info    # helpful for debugging WSGI problems

<VirtualHost *:80>
    VirtualDocumentRoot "/Users/khe/repos/clientname/ecommerce/ecom/store"
    UseCanonicalName Off
    WSGIScriptAlias  /  /Users/khe/repos/clientname/ecommerce/apache/django.wsgi
</VirtualHost>

# These are OK for local/dev use.  Do not use these for production.
Listen 443
SSLPassPhraseDialog builtin
SSLSessionCache         "shmcb:/private/var/run/ssl_scache(512000)"
SSLSessionCacheTimeout  300
SSLStaplingCache        "shmcb:/private/var/run/ssl_stapling(32768)"
SSLStaplingStandardCacheTimeout 3600
SSLUseStapling On

<VirtualHost *:443>
    VirtualDocumentRoot "/Users/khe/repos/clientname/ecommerce/ecom/store"
    UseCanonicalName Off
    WSGIScriptAlias  /  /Users/khe/repos/clientname/ecommerce/apache/django.wsgi
    SSLEngine on
    # Create your own credentials and refer to them here
    # (the filenames below are placeholders).
    SSLCertificateFile     /Users/khe/repos/clientname/ecommerce/apache/ssl/server.crt
    SSLCertificateKeyFile  /Users/khe/repos/clientname/ecommerce/apache/ssl/server.key
</VirtualHost>

Note that instead of using /www as before, the above allows httpd to access files in my local directory structure, specifically the repo for a client. This is acceptable for local isolated development, but one should never do this on a server that is publicly accessible, as the security risk is large.
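The SSLCertificateFile/SSLCertificateKeyFile directives need local credentials. One way to mint a self-signed pair for local development (the output directory and the CN client.ldev are placeholders; adjust to your layout):

```shell
# Generate a self-signed certificate/key for local development only.
# SSL_DIR and the CN below are placeholders.
SSL_DIR="${SSL_DIR:-./ssl}"
mkdir -p "$SSL_DIR"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=client.ldev" \
    -keyout "$SSL_DIR/server.key" \
    -out "$SSL_DIR/server.crt"
```

Browsers will warn about the self-signed certificate; for local development that is expected.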

Since I was asked about it, here is a sample WSGI application (i.e., django.wsgi) that could be used with the above.

import os
import sys

# Update sys.path to include the Django application.  Here is
# how to do it relative to the location of the WSGI script. YMMV.
wsgi_dir = os.path.dirname(__file__)
app_dir = os.path.dirname(wsgi_dir)
sys.path.append(os.path.join(app_dir, 'ecom'))
sys.path.append(os.path.join(app_dir, 'ecom', 'store'))

os.environ['DJANGO_SETTINGS_MODULE'] = 'store.settings'

# Note: on modern Django, use django.core.wsgi.get_wsgi_application()
# instead; WSGIHandler is fine for this Python 2.7 era project.
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

Original: December 27, 2015 Updated: June 10, 2018