More and more Magento 2 projects are reaching their final stage all across the world. And with that, the question of what the ideal deployment procedure would be also pops up. During trainings I have discussed the right steps over and over again. Even though I do not build shops myself, here's a writeup of what I think a proper Magento 2 deployment looks like.
Magento 2 has a couple of principles which make deployment a bit more complex than a simple git pull. First of all, there are multiple modes that Magento can run in (default, developer, production), and in production mode certain code needs to be generated manually:
First, there is the concept of static view file deployment, with which various files are copied to the pub/static folder. The command to run here is bin/magento setup:static-content:deploy.
Second, there is the concept of Dependency Injection, which requires some code to be compiled. For this, classes are generated and stored in the var/ folder (though this will change in future Magento versions). The command to run is bin/magento setup:di:compile.
There are a few more steps in deployment that are discussed below.
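Put together, the two generation steps look roughly like this (a sketch; the exact flags, such as which languages or themes to process for the static content deploy, vary per project and Magento version):

```shell
# Copy the static view files (CSS, JS, images, ...) to pub/static:
bin/magento setup:static-content:deploy

# Generate and compile the Dependency Injection code:
bin/magento setup:di:compile
```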
The annoying thing about both processes is that they take time to complete, a lot of time. There have been various attempts to make this faster: at Magento DevParadise 2016, I joined a hackathon project to add flags to the static view file processing command, allowing deployment to be more specific; it is now included in the core. Also, Magento 2.1 improved the overall speed of things. And more improvements are on the way. However, the fact remains: it is slow.
Instead of complaining about it, deal with it. Make sure your deployment procedure incorporates the fact that deployment is lengthy. Run all the steps through some scripted procedure (bash, Robo, Capistrano, Fabric, etcetera) and make sure the ordering of these steps is correct: this will be the main conclusion of this article.
When deploying to production, the command composer install should be used to roll out all Magento packages (and dependencies) with the specific versions defined in the composer.lock file. Running composer install might be slow, especially when the Magento servers are busy handling traffic. If this becomes a bottleneck, you could set up your own Satis mirror of the Magento Marketplace.
Updating composer packages as part of the go-live procedure might prove dangerous: it might be that Packagist or the Magento Marketplace is down. If this worries you, run these commands when you are preparing the deployment, not when the deployment itself takes place: for instance, run composer install and the two generation commands on a staging environment and copy the generated files (the DI compilation output and the vendor folder) to the production environment. Don't include those generated files in your main git repository; it will mess things up quickly.
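As a sketch of that copy step, assuming an rsync-based transfer, hypothetical host and path names (deploy@production, /var/www/shop) and the Magento 2.1 layout of the generated-code folders:

```shell
# Run on the staging machine, after composer install and both
# generation commands have completed successfully there.
rsync -az --delete vendor/ deploy@production:/var/www/shop/vendor/
rsync -az --delete var/generation/ deploy@production:/var/www/shop/var/generation/
rsync -az --delete var/di/ deploy@production:/var/www/shop/var/di/
rsync -az --delete pub/static/ deploy@production:/var/www/shop/pub/static/
```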
One other important command during deployment is bin/magento setup:upgrade, which checks for new or updated modules and runs their Setup procedures. The command runs quickly and there is not much to say about it, except of course that it should be tested properly. However, once you start running this command, make sure you are in Maintenance Mode.
A caveat here is that by default the upgrade command will wipe out all of the generated DI files and the copied view files. To prevent this from happening (because we already generated these files in the steps above), you can add the flag --keep-generated to the setup:upgrade command.
Entering Maintenance Mode is vital to make sure customers don't see ugly errors while you finish up the deployment. Disaster is most likely to strike when queries are made against database tables while the database structure itself is being updated at the same time. So make sure Maintenance Mode is enabled when running the database updates.
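A minimal sketch of that sequence (these are standard bin/magento commands; the --keep-generated flag is discussed above):

```shell
# Block visitors while the database structure changes:
bin/magento maintenance:enable

# Run the module Setup procedures, keeping the files generated earlier:
bin/magento setup:upgrade --keep-generated

# Open up the shop again:
bin/magento maintenance:disable
```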
Now that we have gone through all of the steps, we can summarize things by listing the steps in their right order. However, instead of coming up with one list, I would like to suggest two lists: one for code deployment, one for database changes. The reason for this is that all of the steps that require a lot of time deal with code. The database changes only take seconds. If we would run all of these steps in one go, we would take down the shop for a really long time, minutes even.
Instead, a proper approach would be to create a copy of the production environment, preferably in the same root folder as the production site. Next, we can update that copy, which I prefer to call a shadow environment; this might take minutes. Once we are done, we replace all of the files of the production site with the new files of the shadow environment and finalize the deployment to production.
To update the files, we run the lengthy steps discussed above inside the shadow environment: a composer install plus the two generation commands.
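As a sketch, to be run inside the shadow copy and not the live docroot (the --no-dev flag is my assumption for a production build):

```shell
# All of this happens in the shadow environment, not the live site.
composer install --no-dev
bin/magento setup:static-content:deploy
bin/magento setup:di:compile
```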
Again, this might take minutes. It might even go wrong. However, if all of this takes place in a shadow environment, there is no risk to the production site. Only once all of the steps have completed successfully do we move to the second stage: actually putting these file updates into production.
Switching shadow and production could be done through simple folder operations (mv b c; mv a b; mv c a) or symlinking. Actually, the procedure of working with symlinks is part of the approach of Capistrano, so if you are looking for a better tool to deploy Magento 2, opt for Capistrano. However, to me, a tool like Capistrano, Fabric or whatever is never much more than the execution of clever commands in a scripted way: you can use many tools to get to the same end result.
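As a minimal sketch of the symlink variant, with hypothetical folder names (releases/shadow for the new build, current for the link the web server's document root points at):

```shell
# Point a temporary link at the freshly built shadow environment ...
ln -sfn releases/shadow current_tmp

# ... then rename it over the live link. The GNU mv -T flag replaces
# the old symlink in a single rename, so the web server never sees a
# missing document root.
mv -T current_tmp current
```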
So now we can swap all of the files of the actual production site with the files that we gathered in the shadow environment. The only thing that remains is to run the database changes, which is done in Maintenance Mode.
bin/magento setup:upgrade --keep-generated
The previous code-gathering step might have taken minutes to complete. This database step usually takes one or two seconds. Meaning: with this procedure you never have more than a few seconds of downtime, unless you mess up.
On our own Yireo sites we run PHP 7 with Zend OPcache enabled. However, we also disable the setting opcache.validate_timestamps to allow for maximum performance. The benefit is that PHP no longer needs to check the timestamps of PHP files on disk; it simply uses the compiled PHP code in memory instead. However, this also means that if you make changes to the PHP files (as with a Magento 2 deployment), the OPcache needs to be flushed. There are various ways to do this, but the most reliable method for me is restarting (or reloading) the PHP-FPM process.
So our own deployment also makes sure that the PHP-FPM instance is reloaded after the rest of the deployment has finished: in our environment this is done through the command service php-fpm reload.
An alternative is to take all of the commands mentioned above, which would otherwise take place in a shadow environment alongside the production site, and run them in a separate environment (a separate server) instead. This could be a CI environment, or call it a remote shadow copy. Whenever deployment succeeds there, the files can be wrapped up in a tar file, copied to the production environment and unpacked there in the right folder. The principle is the same: don't run all the commands directly on the production site. Instead, work with a temporary copy in which you can take all actions that affect the files, and then swap this copy with the actual production site.
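A sketch of that pack-and-ship step, using placeholder locations (the transfer itself, via scp or rsync, is omitted; on a real setup BUILD_DIR would be the CI build output and RELEASE_DIR a fresh folder next to the live docroot, e.g. /var/www/shop-next):

```shell
# Hypothetical locations, overridable through the environment:
BUILD_DIR="${BUILD_DIR:-/tmp/shadow-build}"
RELEASE_DIR="${RELEASE_DIR:-/tmp/shop-next}"
mkdir -p "$BUILD_DIR" "$RELEASE_DIR"

# On the CI / remote shadow machine: pack the verified build ...
tar -czf release.tar.gz -C "$BUILD_DIR" .

# ... transfer release.tar.gz to production (scp, rsync, ...), then
# unpack it into the fresh release folder:
tar -xzf release.tar.gz -C "$RELEASE_DIR"

# Finally, swap RELEASE_DIR with the live folder as described above.
```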
Deployment might be lengthy. But that absolutely does not mean the shop has to be down for minutes. A proper deployment procedure needs to be in place to minimize downtime. I hope you can put the above to good use.