There are many ways of devising a workflow for development, and they are usually judged on their efficacy in resolving bottlenecks. Naturally, this process is sensitive to what is being developed, and this note outlines one such workflow, which I use when developing with Grav.
Certainly, any workflow can benefit from version control to maintain a stream of backups and an easy way of undoing mistaken steps. Version control has been around for years, popularised through Git, Subversion, and Mercurial. In more recent years development environments have also become popular to simplify workflows, especially applying changes locally, sharing them for testing, and finally deploying them to a live server.
The workflow suggested is general enough to apply to the organization of pages, themes, and plugins between environments, but could also easily be ported to a different file-based CMS. It implements version control, various environments, and a semi-automatic way of deploying changes between environments.
There are various philosophies on how many environments should be used for going from development to production, but common to many methodologies are a local one for development, an external one for testing, and finally a live one for production. Simply put, we want to conduct development and initial testing locally – on our own devices – before anything is put through extensive tests. When we reach a goal or milestone in this development, we want others to test it for errors and inconsistencies – thus we “stage” it in an environment that operates equivalently to the live server.
These goals or milestones should, in my opinion, hold some level of significance for reaching the final goal of the project. More pragmatically, for developing with Grav, it entails a feature or a stage of development which warrants more extensive testing than is usually done in local development.
For Grav, my basic workflow looks like this:
- Testing: Develop templates and styling (on a non-cached Grav installation), using mock pages and content.
- Pages: Format and spellcheck actual pages, generate responsive images (non-destructively, through Gulp).
- Staging: Test site with actual pages on current templates and styles (cached Grav).
- Production: Live, optimized site with actual pages.
And this works well enough, but we also want the benefit of version control to keep close track of development, so that unexpected errors on Live can be undone at a moment’s notice. I do this by pushing template and style files from Testing, as well as Pages, to Staging; when they check out, they are pushed to BitBucket, which automatically deploys to the Live server.
First of all, I have set up my local environment to accommodate Grav’s Multisite Setup. This is for one single, important reason: I test the site with both mockup (placeholder text and images) and actual content. The former is especially important in theme development, to account for varied pages and types of content.
In comparison to a regular Grav installation, this requires very little. We only want a few things: separated pages, potentially separated configurations, and separate caches. In local development I do not generally have the cache enabled, and configurations are mostly shared. We achieve this with a setup.php file in the root of Grav.
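A minimal setup.php for this kind of domain-based multisite, adapted from Grav’s multisite documentation, might look like the sketch below. Treat it as an assumption about the actual file: it maps the requested hostname to its own folder under user/sites/.

```php
<?php
// setup.php (sketch, based on Grav's multisite documentation):
// map the requested hostname, e.g. grav.dev or test.grav.dev,
// to its own folder under user/sites/.
$environment = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : 'localhost';

return [
    'environment' => $environment,
    'streams' => [
        'schemes' => [
            'user' => [
                'type' => 'ReadOnlyStream',
                'prefixes' => [
                    '' => ["user/sites/{$environment}"],
                ],
            ],
        ],
    ],
];
```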
In grav/user/sites/ we need two folders: grav.dev and test.grav.dev. I have these set up in the VirtualHost configuration of my server, so that grav.dev uses the mockup content and test.grav.dev the actual content. Each site’s config folder holds its system.yaml, which remains mostly untouched. However, I use a shared config/plugins folder for the plugin configurations. This is easily achieved with a symbolic link (on Windows, a directory junction):
mklink /J grav/user/sites/test.grav.dev/config/plugins grav/user/sites/grav.dev/config/plugins
This tells Windows that I want a test.grav.dev/config/plugins folder which virtually mirrors the contents of grav.dev/config/plugins. Thus, any changes made in the former are directly available in the latter.
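On Unix-like systems, the same sharing can be done with a symbolic link. The paths below mirror the Windows example above (and are assumptions about your layout):

```shell
# Unix equivalent of the mklink junction above: share the plugin
# configuration between the two sites with a symbolic link.
# (Paths assume the same layout as in the article.)
mkdir -p grav/user/sites/grav.dev/config/plugins \
         grav/user/sites/test.grav.dev/config

ln -s "$(pwd)/grav/user/sites/grav.dev/config/plugins" \
      grav/user/sites/test.grav.dev/config/plugins
```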
Building Pages from Source
At the root of my local WAMP installation I generate the pages folder from a Source folder, which holds all pages (.md) and their images. I do this because I use responsive images on the sites I create (loading smaller images on smaller screens, etc.), which is done through Gulp in gulpfile.js. The structure of this root folder (www) looks like this:
node_modules/: Holds NPM packages
pages/: Holds the generated pages, temporarily
responsive/: Temporarily holds responsive images
Source/: Holds the pages, structured hierarchically as normal pages in Grav
output.log: (Optional) Log from the default Gulp task
gulpfile.js: The Gulp tasks
package.json: The NPM packages to use
Now, when the default Gulp task (just gulp in the www directory) is run, a few things happen:
- The current contents of pages/ are deleted.
- Pages are copied from Source/ to pages/.
- Responsive images are generated from the images in pages/ and placed in responsive/.
- Responsive images are minimized and copied from responsive/ back into pages/, and the responsive folder is cleaned.
During this process, the console will be quite busy spewing out details of what it is doing. Afterwards, running gulp move will delete the current pages in grav/user/sites/test.grav.dev/pages and move the updated ones there. The optional log is generated by running the Gulp task as such: gulp 2>&1 | tee output.log. I keep the www/pages folder intact so that changes to pages during testing can be quickly undone by running gulp move again.
With this setup, I can regenerate files on the fly and test the site with both mockup and actual content. Since the two sites share everything except pages, changes to themes or plugins do not need to be manually updated. The test.grav.dev site is essentially the Testing Environment, and holds everything that will later be pushed to the Staging Environment.
To deploy Grav you essentially just need to move the user folder to another server, but this is where we want to add a layer of version control. In my case, I simply commit the relevant files to Git when a goal or milestone is reached, and then push to a remote repository like BitBucket or GitHub, either through SourceTree or the CLI.
Managing Git can be a hassle in terms of avoiding superfluous files and resolving conflicts down the line, so a good .gitignore file is handy.
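As an illustration, a .gitignore along these lines keeps generated and vendored files out of the repository. The entries are assumptions based on a typical Grav setup, not the author’s actual file; adapt them to your own:

```
# Grav caches and logs (regenerated on each server)
cache/
logs/
backup/
tmp/
# Vendored and local tooling
vendor/
node_modules/
output.log
```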
As mentioned, the remote repository automatically deploys to the Staging server. This is actually the same server as the Live one; the files are simply deployed to a different location than the Live domain. Thus, extensive testing – specifically of optimizations to assets like CSS, images, and JS – is done here. Since Staging and Live are on the same server, there is complete consistency in how the site operates.
At the end of a deployment the cache is, of course, cleared. I typically use the Private plugin or .htaccess rules to keep this environment available only to testers.
As the Live Environment is a virtual replica of the Staging Environment, it only needs to be updated by another deployment. The only difference is that this deployment is manual rather than automatic. The key here is that test reports are reviewed, bugs resolved, and changes made before the Live site is updated. Inconsistencies are not critical in Staging, as it operates more as a place for continuous testing and rapid updates, whereas on Live they are inadmissible. The final workflow with regard to Grav looks like this:
The workflow described advocates three environments: Testing, Staging, and Live. Implicitly, it also favors a centralized location for code – the remote repository which the Staging and Live Environments mirror. The benefit of this is that code generally improves from extensive testing and review by other users, but more importantly that code is kept up to date in one place rather than developed in a decentralized fashion – leading to fewer failed “builds”. Also, the server used for Staging perfectly represents the conditions of Live, so migration is painless as long as the master branch is kept clean.
Further, the Testing Environment allows for a variety of content so that the most harrowing of errors in style, structure or functionality are ironed out before this extensive testing. Finally, the focus on limiting manual tasks such as copying files speeds up the development process and allows semi-automatic deployments with integrated version control.
The process could potentially benefit from continuous integration and deployment – a system wherein changes are not deployed if SASS, JS, PHP, or Twig returns a fatal error – but most such errors (apart from PHP) are usually discovered in local development.
Do you have any questions about this approach, or feedback on it? Contact me.