<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[automation - fe84]]></title><description><![CDATA[> notes to self _]]></description><link>https://blog.foureight84.com/</link><image><url>https://blog.foureight84.com/favicon.png</url><title>automation - fe84</title><link>https://blog.foureight84.com/</link></image><generator>Ghost 4.8</generator><lastBuildDate>Mon, 13 Apr 2026 02:27:44 GMT</lastBuildDate><atom:link href="https://blog.foureight84.com/tag/automation/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[DIY Home Automation (Part 1)]]></title><description><![CDATA[Home automation with Zigbee, MQTT, and Home Assistant using Docker.]]></description><link>https://blog.foureight84.com/home-automation/</link><guid isPermaLink="false">629f57d887242e000198048b</guid><category><![CDATA[automation]]></category><category><![CDATA[zigbee]]></category><category><![CDATA[traefik]]></category><category><![CDATA[docker]]></category><category><![CDATA[home assistant]]></category><category><![CDATA[tasmota]]></category><dc:creator><![CDATA[foureight84]]></dc:creator><pubDate>Thu, 09 Jun 2022 19:50:25 GMT</pubDate><media:content url="https://blog.foureight84.com/content/images/2022/06/ha-logo-pretty.svg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.foureight84.com/content/images/2022/06/ha-logo-pretty.svg" alt="DIY Home Automation (Part 1)"><p>I&apos;ve been using Philips Hue lightbulbs in my house for a few years. The hardware has been reliable. The software on the other hand is not the best. Being tied to a mobile app as the only source of control can be cumbersome. While my Philips Hue bulbs are also connected to Google Home, I don&apos;t have one set up in every room of the house. 
Plus, the Philips Hue app is lackluster when it comes to creating automation rules, and most of the time its GPS-based triggers do not work properly.</p><p>I&apos;ve also been wanting to build more home automation &#x2013; especially with my lights. Being able to dim or turn off the lights completely when no one is in the room is extremely useful and can save a few dollars on the electricity bill. Buying a Philips Hue motion sensor is an option, but they are quite expensive ($45&#x2013;65 USD). There are many cheaper options available, but those would require buying another gateway and maybe even a whole set of new lights, since there&apos;s a lack of cross-brand support.</p><p>The IoT market is very segmented. At the lowest level, there are several competing wireless protocols: Zigbee, Z-Wave, LoRa, and Wi-Fi, to name a few. Sticking with one protocol still presents compatibility issues resulting from proprietary implementations across different brands. A Philips Hue hub/bridge will work with Hue lightbulbs and a handful of partnered brands, but even with partner support, there&apos;s a high chance of limited features when mixing. A typical off-the-shelf smart home today means multiple pieces of proprietary hardware and cloud services with no guarantee of direct integration with one another. This makes for a very expensive setup and requires workarounds and third-party services, like Google Home and Samsung SmartThings, to connect them all. Luckily, Home Assistant is a cost-effective solution if you&apos;re willing to put in the work to set it up. </p><p>With an underutilized thin client already acting as a <a href="https://blog.foureight84.com/swarm-your-pihole/">local DNS and Adblock (Pi-hole)</a> server, Home Assistant seems like the perfect solution to all of the above-mentioned issues by eliminating the need to buy into different IoT ecosystems. 
Home Assistant not only acts as a unifying abstraction layer, but it also has a large community-driven set of tools and plugins to accomplish tasks such as creating automation rules for supported devices.</p><h3 id="getting-started">Getting Started</h3><figure class="kg-card kg-image-card kg-card-hascaption"><a href="https://blog.foureight84.com/content/images/2022/06/Zigbee-Home-Assistant-2.svg"><img src="https://blog.foureight84.com/content/images/2022/06/Zigbee-Home-Assistant-2.svg" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1443" height="703"></a><figcaption>IoT stack topology</figcaption></figure><p><strong>Bill of Materials</strong></p><ul><li><strong>Zigbee-enabled devices</strong> (pre-existing or just to test), e.g. light bulbs, motion sensors, temperature sensors, etc.</li><li><strong>Zigbee coordinator</strong>. This could be a USB dongle like the <a href="https://www.phoscon.de/en/conbee2">ConBee II</a> or <a href="https://sonoff.tech/product/diy-smart-switch/sonoff-dongle-plus/">SONOFF Zigbee 3.0</a>. For this project, I chose the <a href="https://sonoff.tech/product/smart-home-security/zbbridge/">Sonoff ZBBridge</a>. At the time of writing, the <a href="https://itead.cc/product/sonoff-zigbee-bridge-pro/">Pro</a> version of the Sonoff ZBBridge was released a few weeks ago (it supports up to 128 devices). </li><li><strong>(Optional) <a href="https://www.amazon.com/gp/product/B07K76Q2DX">FTDI Programmer</a></strong>. Necessary if you want to use the Sonoff ZBBridge route to load a custom firmware called <a href="https://tasmota.github.io/docs/">Tasmota</a>.</li><li><strong>A dedicated server with Docker installed</strong>. This can be a Raspberry Pi or an old computer. The idea is to reduce power consumption, so low-powered is important. The Dell Wyse 5070 thin client I am using has a Pentium Silver J5005, which consumes roughly 10 W at most. 
If you are planning on using a Raspberry Pi, then the <a href="https://www.home-assistant.io/installation/raspberrypi/">Home Assistant Pi image</a> is better suited than a Docker setup (you can still use the configuration files in this guide).</li><li>If you&apos;re using an Intel-based system, I highly recommend <a href="https://clearlinux.org/downloads">Intel Clear Linux Server</a>. It&apos;s highly optimized for Intel CPUs.</li><li><strong>Reference</strong>: <a href="https://zigbee.blakadder.com/">https://zigbee.blakadder.com/</a> is a great place to check device support for Zigbee2MQTT or ZHA (alternative Zigbee integration for Home Assistant).</li></ul><p>I chose to use the Sonoff ZBBridge instead of a USB Zigbee dongle because my server is sitting in a closet at one end of the house. While Zigbee is a mesh protocol, I wasn&apos;t too sure if there would be a strong signal to the first device. Plus, having an untethered coordinator makes it easier to place it almost anywhere in the house.</p><p>The Sonoff ZBBridge is an ESP MCU-based controller that ships with Sonoff&apos;s firmware for their managed IoT solution. Thankfully, there are pin-outs on the board that allow for reflashing. You can choose either <a href="https://github.com/thegroove/esphome-zbbridge">ESPHome</a> or <a href="https://tasmota.github.io/docs/">Tasmota</a>. I went with Tasmota. </p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p><strong>Follow the guides below to flash your Sonoff ZBBridge before proceeding:</strong></p><ul><li><strong>Sonoff ZBBridge</strong> <a href="https://zigbee.blakadder.com/Sonoff_ZBBridge.html">Tasmota flashing guide</a></li><li><strong>Sonoff ZBBridge Pro</strong> <a href="https://notenoughtech.com/home-automation/tasmota-on-sonoff-zb-bridge-pro/">Tasmota flashing guide</a> (source via <a href="https://github.com/arendst/Tasmota/discussions/14419">GitHub</a>)</li></ul><!--kg-card-begin: markdown--><blockquote>
<p>Tasmota flashing and setup are required before proceeding.</p>
</blockquote>
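Once Tasmota is flashed, the linked guides configure the bridge to expose the Zigbee module's serial port over TCP so that Zigbee2MQTT can reach it over the network. As an illustration only (follow the linked guides for the authoritative template), a commonly documented Tasmota console rule for this looks like:

```
Backlog Rule1 ON System#Boot DO TCPStart 8888 ENDON ; Rule1 1
```

Port 8888 here matches the `tcp://<ip>:8888` address used in the Zigbee2MQTT configuration later in this guide; treat the exact commands as an assumption and defer to the flashing guides above.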
<!--kg-card-end: markdown--><p>Flashing is a two-step process: 1) flash the ESP MCU with Tasmota, and 2) flash a custom Zigbee module firmware once Tasmota is up and running. Lastly, remember to follow the guide&apos;s configuration template to set it up for <strong>Zigbee2Tasmota</strong>. ZHA (Home Assistant&apos;s Zigbee plugin) is an alternative that provides a direct connection without requiring an MQTT broker such as Eclipse Mosquitto. However, in my experience, ZHA has less device support and is a bit more complicated when adding custom devices.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>Once you&apos;ve completed the steps above:</p><ul><li>Make note of your Tasmota Sonoff ZBBridge&apos;s IP address.</li><li>Update your router&apos;s DHCP settings to assign the Tasmota Sonoff ZBBridge a static IP address. Without this configuration, your setup will break when Tasmota&apos;s IP address changes.</li></ul><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>On your server, clone the project template:</p><pre><code class="language-bash">git clone https://github.com/foureight84/ha_zigbee_docker.git &amp;&amp; cd ha_zigbee_docker</code></pre><!--kg-card-begin: markdown--><h4 id="docker">Docker</h4>
<!--kg-card-end: markdown--><p>The templates repository is located here: <a href="https://github.com/foureight84/ha_zigbee_docker">https://github.com/foureight84/ha_zigbee_docker</a></p><p>If you are already using Traefik, modifications will need to be made before running docker-compose.yaml / docker-swarm.yaml. The same applies when importing the templates into Portainer.</p><p>The scaffolded Docker volume folders use &quot;iot&quot; as the service name for the Docker instances. If you wish to use a different name, make sure to edit the folder name prefix accordingly.</p><!--kg-card-begin: markdown--><h4 id="docker-swarm-setup">Docker Swarm Setup</h4>
<!--kg-card-end: markdown--><p><a href="#docker-standalone-setup">Skip to the next section if you&apos;re using standalone Docker.</a></p><ul><li>Create &quot;traefik&quot; overlay network</li></ul><pre><code>docker network create --driver=overlay --attachable --subnet=48.84.0.0/16 --gateway=48.84.0.1 traefik</code></pre><!--kg-card-begin: markdown--><blockquote>
<p>Make sure to update <code>iot_homeassistant/_data/configuration.yaml</code> if you set a different subnet for the &quot;traefik&quot; network. This is necessary to access Home Assistant&apos;s web UI.</p>
</blockquote>
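For reference, the setting in question is Home Assistant's `http` block, whose `trusted_proxies` entry must cover the Traefik network's subnet so that forwarded requests are accepted. A sketch (assuming the 48.84.0.0/16 subnet from the command above; check the file in the template repo for the exact contents):

```yaml
# Sketch of the relevant section of iot_homeassistant/_data/configuration.yaml.
# trusted_proxies must cover the subnet of the "traefik" overlay network.
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 48.84.0.0/16   # change this if you created the network with a different subnet
```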
<!--kg-card-end: markdown--><ul><li>Update <code>iot_zigbee2mqtt/_data/configuration.yaml</code> with your Tasmota Sonoff ZBBridge&apos;s IP address</li></ul><pre><code class="language-yaml">serial:
  port: tcp://&lt;&lt;tasmota ip address&gt;&gt;:8888 # update to match the static IP of your Tasmota Sonoff ZBBridge
  adapter: ezsp</code></pre><ul><li>Copy configuration to docker volumes storage location:</li></ul><pre><code class="language-bash">sudo cp -a iot_* /var/lib/docker/volumes/</code></pre><ul><li>Run the Home Assistant stack:</li></ul><pre><code>docker stack deploy -c docker-swarm.yaml iot</code></pre><!--kg-card-begin: markdown--><blockquote>
<p>The stack name needs to match the volume folders&apos; prefix &quot;iot_&quot;. Rename the volume folders if you wish to use a different stack name.</p>
</blockquote>
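To illustrate the naming rule in the note above, here is a small shell sketch showing where Docker Swarm will look for a stack's volume data (the two volume names are assumptions based on the template's folder prefixes):

```shell
# Docker Swarm prefixes volume names with the stack name, so deploying the
# stack as "iot" makes Docker look for folders like iot_homeassistant_data
# under /var/lib/docker/volumes/.
stack="iot"    # must match the name given to `docker stack deploy`
for vol in homeassistant_data zigbee2mqtt_data; do
  echo "/var/lib/docker/volumes/${stack}_${vol}"
done
```

If you deploy under a different stack name, every copied folder's prefix must be renamed to match before the services start.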
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h4 id="docker-standalone-setup">Docker Standalone Setup</h4>
<!--kg-card-end: markdown--><p><a href="#dns">Ignore this step if you are using Docker Swarm.</a></p><ul><li>Update <code>iot_zigbee2mqtt/_data/configuration.yaml</code> with your Tasmota Sonoff ZBBridge&apos;s IP address</li></ul><pre><code class="language-yaml">serial:
  port: tcp://&lt;&lt;tasmota ip address&gt;&gt;:8888 # update to match the static IP of your Tasmota Sonoff ZBBridge
  adapter: ezsp</code></pre><ul><li>Copy configuration to docker volumes storage location:</li></ul><pre><code class="language-bash">sudo cp -a iot_* /var/lib/docker/volumes/</code></pre><ul><li>Run the Home Assistant stack:</li></ul><pre><code class="language-bash">docker-compose -p iot up -d</code></pre><!--kg-card-begin: markdown--><h4 id="dns">DNS</h4>
<!--kg-card-end: markdown--><p>You will need to create DNS entries in your router to access the running services. For my instance, 192.168.1.2 is my Docker server and these are my router DNS records:</p><pre><code class="language-hosts">192.168.1.2 traefik.home
192.168.1.2 home-assistant.home
192.168.1.2 node-red.home
192.168.1.2 zigbee2mqtt.home</code></pre><!--kg-card-begin: markdown--><h4 id="home-assistant-initial-setup">Home Assistant Initial Setup</h4>
<!--kg-card-end: markdown--><ul><li>Browse to <code>http://home-assistant.home</code> and create your account. At the end of the account creation wizard, you should see a screen similar to this:</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="468" height="354"></figure><ul><li>Click on <code>mqtt</code>, enter <code>mosquitto</code> as the broker, and submit. You should get a &quot;Success&quot; confirmation. Instead of using the Docker IP of the Eclipse Mosquitto container, we are providing its hostname.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-1.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="486" height="439"></figure><!--kg-card-begin: markdown--><h4 id="installing-hacs-home-assistant-community-store">Installing HACS (Home Assistant Community Store)</h4>
<!--kg-card-end: markdown--><p>HACS (Home Assistant Community Store), as the name suggests, is a repository of community-developed plugins for Home Assistant. While many plugins are well maintained, they are not officially supported. Be aware of this when installing any plugin, as it may cause unwanted behavior in Home Assistant.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>We will need HACS in order to complete the Node-RED integration.</p><!--kg-card-begin: markdown--><blockquote>
<p>A <a href="https://github.com">Github</a> account is required to install HACS.</p>
<p><strong>Why?</strong></p>
<p>HACS uses the GitHub API to gather information about all available and downloaded repositories. This API is rate limited to 60 requests every hour for unauthenticated requests, which is not enough. So HACS needs to make authenticated requests to that API. (<a href="https://hacs.xyz/docs/faq/github_account#:~:text=HACS%20uses%20the%20GitHub%20API,authenticated%20requests%20to%20that%20API.">source</a>)</p>
</blockquote>
<!--kg-card-end: markdown--><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://hacs.xyz/docs/setup/download"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Download | HACS</div><div class="kg-bookmark-description">HACS download steps</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://hacs.xyz/favicon.ico" alt="DIY Home Automation (Part 1)"><span class="kg-bookmark-author">HACS</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://assets.hacs.xyz/logo.svg" alt="DIY Home Automation (Part 1)"></div></a></figure><ul><li>You&apos;ll need to go inside the Home Assistant container:</li></ul><pre><code class="language-bash">docker exec -it $(docker ps -q -f name=iot_homeassistant) bash</code></pre><ul><li>Once inside the Home Assistant container:</li></ul><pre><code class="language-bash">wget -O - https://get.hacs.xyz | bash -</code></pre><ul><li>Restart Home Assistant from the web UI (http://home-assistant.home) by going to <code>Settings &gt; System &gt; Restart</code>.</li></ul><!--kg-card-begin: html--><br><!--kg-card-end: html--><ul><li>After Home Assistant restarts, complete the installation by adding the HACS integration. To do so, go to <code>Settings &gt; Devices &amp; Services &gt; + Add Integration</code> and type <code>HACS</code>. 
You will need to acknowledge all the checkboxes and follow the instructions to link HACS to your GitHub account.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-2.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="626" height="467" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-2.png 600w, https://blog.foureight84.com/content/images/2022/06/image-2.png 626w"></figure><p>You will see <code>HACS</code> as one of the left-menu items in the Home Assistant web UI.</p><!--kg-card-begin: markdown--><h4 id="setting-up-node-red-for-creating-automation-rules">Setting up Node-RED for creating Automation Rules</h4>
<!--kg-card-end: markdown--><p>Node-RED is a flow-based development tool for visual programming. We will be installing a specific community-developed &quot;palette&quot; for Home Assistant.</p><ul><li>From the left-menu in the Home Assistant web UI, click on <code>Node Red</code>.</li></ul><!--kg-card-begin: markdown--><blockquote>
<p>The Node Red and Zigbee2MQTT menu items are custom entries added via the Home Assistant <code>configuration.yaml</code>. See the <a href="https://www.home-assistant.io/integrations/panel_iframe/">iframe Panel documentation</a>.<br>
These are essentially iframe links to our Node-RED and Zigbee2MQTT web UI instances.</p>
</blockquote>
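For reference, Home Assistant's `panel_iframe` integration uses `title`, `icon`, and `url` keys per panel. A sketch of what such entries can look like, assuming the DNS names set up earlier (check the template repo's configuration.yaml for the actual values):

```yaml
# Hypothetical panel_iframe excerpt from Home Assistant's configuration.yaml,
# embedding the Node-RED and Zigbee2MQTT web UIs as sidebar items.
panel_iframe:
  nodered:
    title: Node Red
    icon: mdi:sitemap
    url: http://node-red.home
  zigbee2mqtt:
    title: Zigbee2MQTT
    icon: mdi:zigbee
    url: http://zigbee2mqtt.home
```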
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-3.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1177" height="686" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-3.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-3.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-3.png 1177w" sizes="(min-width: 720px) 720px"></figure><ul><li>Access the &quot;Palette Manager&quot; by pressing <code>alt + shift + p</code> or go to the hamburger menu on the top right of the iframe then choose <code>Manage palette</code>.</li><li>Click on the <code>Install</code> tab and search for <code>home-assistant</code> and look for <code>node-red-contrib-home-assistant-websocket</code>.</li><li>After the palette installs, scroll to the bottom of the Node-RED nodes list on the left and you should see the <code>home assistant</code> section with all of the associated nodes.</li><li>Drag the &apos;API&apos; node into the flow workspace and double-click to edit its properties.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-4.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1398" height="696" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-4.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-4.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-4.png 1398w" sizes="(min-width: 720px) 720px"></figure><ul><li>Open a new browser tab and head to your Home Assistant web UI (http://home-assistant.home). We will be creating a long-lived token for Node-RED to connect with HA. Click on your profile on the left-menu and scroll to the bottom. Click <code>Create Token</code> and call it <code>Node RED</code>. 
Copy the entire token string; it should look like this: <code>eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJlNmY1Nzk5YjAzNjg0MjY1ODRkYTc5Yjc0YTVhMTI1ZCIsImlhdCI6MTY1NDgwMTM4NiwiZXhwIjoxOTcwMTYxMzg2fQ.vjbD8T378Da7nMzgAKotVbzMd-77pAVGI6c7wfcKf6U</code></li><li>Go back to the tab with Node-RED open and the API node properties dialog showing. Click to add a new server. Fill in <code>http://home-assistant:8123</code> for the Base URL field and paste your access token.</li></ul><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-5.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1151" height="704" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-5.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-5.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-5.png 1151w" sizes="(min-width: 720px) 720px"></figure><ul><li>After the server has been added, the API node can be deleted. Make sure to click <code>Deploy</code> to save your configuration.</li></ul><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>You&apos;re all set to create automation rules.</p><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-8.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1339" height="1005" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-8.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-8.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-8.png 1339w" sizes="(min-width: 720px) 720px"></figure><!--kg-card-begin: markdown--><h4 id="pairing-new-zigbee-devices">Pairing New Zigbee Devices</h4>
<!--kg-card-end: markdown--><ul><li>In your HA web UI, click on <code>Zigbee2MQTT</code>. By default, via <code>iot_zigbee2mqtt/_data/configuration.yaml</code>, all Zigbee devices in pairing mode are allowed to join. This can be disabled from the Zigbee2MQTT UI (top-right button).</li><li>Put your device into pairing mode. Depending on the device, this means either holding down the <code>reset</code> button on the device or, for light bulbs, toggling the power switch on and off in a set sequence. See the manufacturer&apos;s instructions for details.</li></ul><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>The device will show up in Zigbee2MQTT a few seconds after entering pairing mode. You&apos;ll want to name the device appropriately.</p><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-6.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1333" height="793" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-6.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-6.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-6.png 1333w" sizes="(min-width: 720px) 720px"></figure><p>These devices should also appear in Home Assistant under <code>Settings &gt; Devices &amp; Services &gt; mosquitto MQTT</code>.</p><figure class="kg-card kg-image-card"><img src="https://blog.foureight84.com/content/images/2022/06/image-7.png" class="kg-image" alt="DIY Home Automation (Part 1)" loading="lazy" width="1335" height="794" srcset="https://blog.foureight84.com/content/images/size/w600/2022/06/image-7.png 600w, https://blog.foureight84.com/content/images/size/w1000/2022/06/image-7.png 1000w, https://blog.foureight84.com/content/images/2022/06/image-7.png 1335w" sizes="(min-width: 720px) 720px"></figure><p>Home Assistant also has Android and iOS companion apps that provide additional tracking data such as your location. 
All of this data is fed to your local instance, allowing for GPS-based automation rules such as turning the lights on when you come home at night, or setting up motion- and time-based light triggers.</p><p>In the next guides, I&apos;ll walk through a WireGuard VPN setup that maintains a tunnel to your home network so you can access Home Assistant without exposing it to the public internet. </p>]]></content:encoded></item><item><title><![CDATA[Terraforming Ghost on Linode with Google Drive Backup Using Rclone]]></title><description><![CDATA[Using Terraform to set up Ghost on Linode with automatic daily backups to Google Drive using Rclone.]]></description><link>https://blog.foureight84.com/deploying-ghost-on-linode-with-cheap-remote-backup-using-terraform/</link><guid isPermaLink="false">60d92ea4883663000189ce70</guid><category><![CDATA[ghost]]></category><category><![CDATA[blog]]></category><category><![CDATA[terraform]]></category><category><![CDATA[hcl]]></category><category><![CDATA[docker]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[linode]]></category><category><![CDATA[automation]]></category><category><![CDATA[rclone]]></category><category><![CDATA[backup]]></category><dc:creator><![CDATA[foureight84]]></dc:creator><pubDate>Thu, 01 Jul 2021 08:20:58 GMT</pubDate><media:content url="https://blog.foureight84.com/content/images/2021/07/ghost_network_diagram.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="table-of-contents">Table of contents</h3>
<ul>
<li><a href="#terraform">Terraform</a>
<ul>
<li><a href="#linode">Linode</a></li>
<li><a href="#cloudflare">Cloudflare</a></li>
<li><a href="#variables-and-definitions">Variables &amp; Definitions</a></li>
</ul>
</li>
<li><a href="#docker">Docker</a>
<ul>
<li><a href="#ghost-stack">Ghost Stack</a></li>
<li><a href="#rclone">Rclone</a></li>
</ul>
</li>
<li><a href="#deployment">Deployment</a></li>
</ul>
<!--kg-card-end: markdown--><hr><img src="https://blog.foureight84.com/content/images/2021/07/ghost_network_diagram.png" alt="Terraforming Ghost on Linode with Google Drive Backup Using Rclone"><p>At the beginning of the pandemic, I decided to build a custom keyboard (post on this later). The project led to me wanting to add a BlackBerry trackball, which would require a special way to mount it to the keyboard PCB. After weeks of trying to jury-rig a mount from common parts to attach the trackball to the keyboard, I realized it would be much easier to design and 3D print the necessary part. </p><p>I also needed a better way to document my projects, and starting a blog is better for visibility and accessibility than a markdown README file on GitHub. Ghost is a good candidate, offering a lightweight, straightforward writing platform (with markdown support). There&apos;s also a robust list of third-party integrations, which would be good down the road if I needed to scale.</p><p>I also wanted to be able to easily back up my data in case I needed to move to a different host. At first, I was eyeing an AWS S3 bucket or perhaps Linode&apos;s offering called &quot;Object Storage.&quot; At the moment, I don&apos;t need that much backup space, so a free option would be best. This is where Rclone comes in handy, as it allows me to tarball essential Ghost files for a daily backup. I could also deploy Rclone on another server, such as a local NAS, as an added backup destination. </p><p>I chose Terraform to automate the setup tasks so that I can easily switch hosts in the future. One plus side of Terraform is that it uses a declarative language to define tasks, which is, to me, a lot easier to read and understand later. If you haven&apos;t looked at your code in over six months, it may as well have been written by someone else.</p><!--kg-card-begin: markdown--><h3 id="project-overview">Project Overview</h3>
<ul>
<li>Terraform deployment</li>
<li>Cloudflare proxied DNS</li>
<li>Docker
<ul>
<li>Nginx reverse proxy</li>
<li>Let&apos;s Encrypt for automatic SSL</li>
<li>Ghost 4</li>
<li>Rclone backup blog data to Google Drive</li>
</ul>
</li>
</ul>
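The overview above maps onto a single Docker Compose stack. As an illustrative sketch only — the real file is scripts/linode/docker-compose.yaml in the repo, and the image tags and environment variables below are assumptions, not its actual contents:

```yaml
# Illustrative sketch of the Ghost stack; ${ghost_blog_url} is filled in by
# Terraform's template_file before the file reaches the server.
version: "3"
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy          # reverse proxy, auto-configured from container env vars
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs
  acme:
    image: nginxproxy/acme-companion       # obtains Let's Encrypt certificates
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
  ghost:
    image: ghost:4
    environment:
      url: https://${ghost_blog_url}
      VIRTUAL_HOST: ${ghost_blog_url}      # tells nginx-proxy where to route
      LETSENCRYPT_HOST: ${ghost_blog_url}
volumes:
  certs:
```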
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.foureight84.com/content/images/2021/06/ghost_network_diagram.png" class="kg-image" alt="Terraforming Ghost on Linode with Google Drive Backup Using Rclone" loading="lazy" width="1661" height="1252" srcset="https://blog.foureight84.com/content/images/size/w600/2021/06/ghost_network_diagram.png 600w, https://blog.foureight84.com/content/images/size/w1000/2021/06/ghost_network_diagram.png 1000w, https://blog.foureight84.com/content/images/size/w1600/2021/06/ghost_network_diagram.png 1600w, https://blog.foureight84.com/content/images/2021/06/ghost_network_diagram.png 1661w" sizes="(min-width: 720px) 720px"><figcaption>Network Diagram</figcaption></figure><p>This project can be found on my <a href="https://github.com/foureight84/ghost-linode-terraform">GitHub</a>. Clone and follow along:</p><pre><code class="language-bash">git clone https://github.com/foureight84/ghost-linode-terraform.git &amp;&amp; cd ghost-linode-terraform</code></pre><hr><h2 id="terraform">Terraform</h2><p>Terraform is a declarative, configuration-based infrastructure-as-code API wrapper. There are a few pros and cons to consider. Keep in mind that this is my first time using Terraform, so these are surface-level observations.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p><strong>Pros:</strong></p><ul><li>Quick and straightforward (standardized) configuration syntax declaring what tasks to perform.</li><li>Easy to read and double-check my work; Terraform&apos;s &apos;plan&apos; feature checks for syntax errors and previews the final result.</li><li>Configurations can be changed and applied quickly without having to go through service consoles.</li><li>All pieces in my stack can be managed through Terraform (see cons).</li></ul><p><strong>Cons:</strong></p><ul><li>Providers often offer web APIs to perform the same tasks, and at best, Terraform will have feature parity with them. 
But there is a chance that feature parity in the Terraform provider is a second priority for the vendor, and support lags behind.</li><li>Not all services may have Terraform support, which could lead to the additional complexity of having to manage multiple tools. This is often the case in a real-world scenario.</li></ul><!--kg-card-begin: html--><aside class="note">Linode does allow for changing a node&apos;s root password after it has been shut down. This is done through their web console.</aside><!--kg-card-end: html--><ul><li>Applying a Terraform configuration update doesn&apos;t always behave as expected. The limitations depend greatly on the provider. For example, after a Linode is created, applying changes to certain properties such as the root password will result in Terraform destroying the instance and creating a new node with the updated configuration.</li></ul><p>Before diving into specific files in the project, here is a brief overview and explanation of the project structure:</p><pre><code class="language-Treeview">[..root]
&#x2502;  cloudflare.tf                   //resource module for CloudFlare rules
&#x2502;  data.tf                         //templates ex. docker-compose.yaml, bash scripts, etc...
&#x2502;  linode.tf                       //resource module for Linode instance setup
&#x2502;  outputs.tf                      //handle specific data to display after terraforming
&#x2502;  providers.tf                    //tokens, auth keys, etc required by service providers
&#x2502;  terraform.tfvars.example        //example answers for input prompts 
&#x2502;  variables.tf                    //defined inputs required for terraforming
&#x2502;  versions.tf                     //declaration of providers and versions to use
&#x2502;
&#x2514;&#x2500;[scripts]
  &#x251C;&#x2500;[linode]
  &#x2502;       docker-compose.yaml      //main stack. nginx proxy, letsencrypt, ghost
  &#x2502;       stackscript.sh           //boot time script specific to Linode to setup env
  &#x2502;
  &#x2514;&#x2500;[rclone]
    &#x2502;     backup.sh                //cron script for backing up ghost blog directory
    &#x2502;     docker-compose.yaml      //rclone docker application
    &#x2502;
    &#x2514;&#x2500;&#x2500;[config]
            rclone.conf.example    //rclone configuration for cloud storage</code></pre><hr><h3 id="linode">Linode</h3><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/linode.tf">linode.tf</a>, <a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/scripts/linode/stackscript.sh">stackscript.sh</a>, <a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/data.tf">data.tf</a></p><p>This is a straightforward configuration that creates a Linode using Linode&apos;s Stackscript feature, which is essentially a run-once bash script executed on the first boot. </p><p>The stackscript.sh lives in the <code>ghost-linode-terraform/scripts/linode/</code> directory and is parsed by Terraform as a data template. Terraform&apos;s templating syntax needs to be taken into consideration when parsing text files. Most notable are variables and their escape characters: any $string or ${string} notation will be regarded as a template variable, while $$string and $${string} are escaped and passed through to bash. <a href="https://www.terraform.io/docs/language/expressions/strings.html">More about strings and templates</a>.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><figure class="kg-card kg-code-card"><pre><code class="language-HCL">script = &quot;${data.template_file.stackscript.rendered}&quot;</code></pre><figcaption>linode.tf, example of data template being referenced</figcaption></figure><p>Alongside the Stackscript, the &quot;linode_instance&quot; resource block also includes the <code>stackscript_data</code> property. This is a way of providing data to the one-time boot script. The key-value assignments within this block correspond with the &apos;User-defined fields&apos; at the top of stackscript.sh.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">#!/bin/sh
# &lt;UDF name=&quot;DOCKER_COMPOSE&quot; label=&quot;Docker compose file&quot; default=&quot;&quot; /&gt;
# &lt;UDF name=&quot;ENABLE_RCLONE&quot; label=&quot;(Bool) Flag to turn on RClone Support&quot; default=&quot;false&quot; /&gt;
# &lt;UDF name=&quot;RCLONE_DOCKER_COMPOSE&quot; label=&quot;RClone docker compose file&quot; default=&quot;&quot; /&gt;
# &lt;UDF name=&quot;RCLONE_CONFIG&quot; label=&quot;RClone configuration file&quot; default=&quot;&quot; /&gt;
# &lt;UDF name=&quot;BACKUP_SCRIPT&quot; label=&quot;Backup script&quot; default=&quot;&quot; /&gt;</code></pre><figcaption>stackscript.sh, user-defined fields declaration</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-HCL">stackscript_data = {
    &quot;DOCKER_COMPOSE&quot; = &quot;${data.template_file.docker_compose.rendered}&quot;
    &quot;ENABLE_RCLONE&quot; = var.enable_rclone
    &quot;RCLONE_DOCKER_COMPOSE&quot; = &quot;${data.template_file.rclone_docker_compose.rendered}&quot;
    &quot;RCLONE_CONFIG&quot; = &quot;${data.template_file.rclone_config.rendered}&quot;
    &quot;BACKUP_SCRIPT&quot; = &quot;${data.template_file.backup_script.rendered}&quot;
  }</code></pre><figcaption>linode.tf, assigning data template string to UDF</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-HCL">data &quot;template_file&quot; &quot;docker_compose&quot; {
  template = &quot;${file(&quot;${path.module}/scripts/linode/docker-compose.yaml&quot;)}&quot;

  vars = {
    &quot;ghost_blog_url&quot; = &quot;${var.ghost_blog_url}&quot;
    &quot;letsencrypt_email&quot; = &quot;${var.letsencrypt_email}&quot;
  }
}</code></pre><figcaption>data.tf, example of parsing a file into a data template</figcaption></figure><p>In the snippet above, docker-compose.yaml for the Ghost stack is parsed as a data template; the <code>ghost_blog_url</code> and <code>letsencrypt_email</code> variables are evaluated and then passed as a string to the Stackscript at runtime.</p><p>At the time of deployment, the Stackscript will be created before the Linode instance. Terraform allows directly referencing named values. This can be seen in the &quot;linode_instance&quot; block where <code>linode_stackscript.ghost_deploy.id</code> is assigned as the stackscript ID to include when creating the Linode instance. Finally, other parsed data templates such as the docker-compose.yaml are assigned to stackscript user-defined fields and sent as <code>stackscript_data</code>.</p><h3 id="cloudflare">Cloudflare</h3><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/cloudflare.tf">cloudflare.tf</a></p><p>The Cloudflare configuration is relatively straightforward. Since the foureight84.com domain is already managed by Cloudflare, a lookup is performed and the named value is referenced in each resource call as the <code>zone_id</code> property.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><figure class="kg-card kg-code-card"><pre><code class="language-HCL">data &quot;cloudflare_zones&quot; &quot;ghost_domain_zones&quot; {
  filter {
    name   = var.cloudflare_domain
    status = &quot;active&quot;
  }
}</code></pre><figcaption>cloudflare.tf managed domain (foureight84.com)</figcaption></figure><p>An &apos;A&apos; record is created for the blog (blog.foureight84.com) and end-to-end HTTPS encryption is enforced (<code>ssl=&quot;strict&quot;</code>), along with requiring that all HTTPS origin pull requests come only from Cloudflare. Nginx-proxy-companion will generate a validation file under the path blog.foureight84.com/.well-known/&lt;some random string&gt; for SSL certificate requests. Since Let&apos;s Encrypt validation of this generated file accepts both HTTP and HTTPS (ports 80 and 443), a page rule is created to ensure that the request does not get blocked.</p><h3 id="variables-and-definitions">Variables and Definitions</h3><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/variables.tf">variables.tf</a></p><p>As seen in <code>linode.tf</code>, <code>cloudflare.tf</code>, and especially <code>data.tf</code>, <code>${var.&lt;some string&gt;}</code> references are used throughout. These are references to declared variables in <code>variables.tf</code>, such as API tokens for our services, domain names, etc. These variables show up as input prompts when performing Terraform plan, apply, or destroy actions. Variables contain properties such as data types, descriptions, default values, and custom validation rules. <a href="https://www.terraform.io/docs/language/values/variables.html">More about input variables</a>.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/terraform.tfvars.example">terraform.tfvars.example</a></p><p>This Terraform deployment requires 17 data inputs in order to perform its tasks, and that&apos;s 17 possible chances to introduce errors. Luckily, Terraform supports dictionary referencing in the form of <code>.tfvars</code> files. 
Tfvars are key-value files where the key is the variable name. A <code>.tfvars</code> file is referenced at runtime to provide the required inputs. For example:</p><pre><code class="language-bash">terraform plan -var-file=&quot;defined.tfvars&quot;</code></pre><hr><h2 id="docker">Docker</h2><p>This project has two separate docker-compose environments. The first is our blog stack, and the second is Rclone to perform data backup. I decided to separate the Rclone docker service from the primary Ghost stack for two reasons:</p><ol><li>I want to be able to mount the cloud storage prior to running the Ghost stack so that data restore can be performed should a backup exist (see <code><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/scripts/linode/stackscript.sh#L51">stackscript.sh</a></code>). </li><li>The cloud drive mount should always stay active. Changes to my Ghost stack should not impact Rclone&apos;s availability.</li></ol><h3 id="ghost-stack">Ghost Stack</h3><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/scripts/linode/docker-compose.yaml">docker-compose.yaml</a></p><p>This runs 3 containers using the following images:</p><ul><li>jwilder/nginx-proxy</li><li>jrcs/letsencrypt-nginx-proxy-companion</li><li>ghost:4-alpine</li></ul><p>The default directory is <code>/root/ghost</code>. This can be changed via Terraform during deployment from the tfvars file. Since this has a set default value, Terraform does not prompt for the input value.</p><h4 id="jwildernginx-proxy"><em>jwilder/nginx-proxy</em></h4><p>This is the front-facing container on the origin server (Linode) sitting behind Cloudflare. 
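Routing is driven by each backend container&apos;s <code>VIRTUAL_HOST</code> environment variable; as a hedged sketch (the service name and hostname here are illustrative, not the project&apos;s exact compose file):</p><pre><code class="language-YAML">ghost:
  environment:
    # nginx-proxy watches running containers and proxies any request whose
    # Host header matches this value to the container below
    - VIRTUAL_HOST=blog.foureight84.com</code></pre><p>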
Visitors&apos; incoming requests for <em>https://blog.foureight84.com</em> will go through Cloudflare, which forwards them to our reverse proxy, where each request is relayed to the running Docker service matching the <code>VIRTUAL_HOST</code> value. Finally, nginx-proxy will collect the served content from the Ghost service and send it back to the visitor.</p><p>The reverse proxy listens on exposed ports 80 and 443.</p><h4 id="jrcsletsencrypt-nginx-proxy-companion"><em>jrcs/letsencrypt-nginx-proxy-companion</em></h4><p>Nginx-proxy-companion handles automatic SSL registration for the running Docker services. The important environment variables to keep in mind are:</p><pre><code class="language-YAML">environment:
  - LETSENCRYPT_HOST=${ghost_blog_url}
  - LETSENCRYPT_EMAIL=${letsencrypt_email}</code></pre><!--kg-card-begin: html--><aside class="note">${ghost_blog_url} is an example of a Terraform templating variable. These variables will be replaced with proper values during deployment.</aside><!--kg-card-end: html--><p>The above environment variables are added to Docker services that require SSL certificates (the Ghost container in this example). In the future, if I need to add additional subdomains, such as <em>www.foureight84.com</em>, then that service will require <code>LETSENCRYPT_HOST=www.foureight84.com</code>. I can avoid having to repeatedly define <code>LETSENCRYPT_EMAIL</code> by setting the <code>DEFAULT_EMAIL</code> environment variable on the nginx-proxy-companion instead.</p><p>By default, a production certificate will be requested. Let&apos;s Encrypt has a limit of 10 cert requests every 7 days for normal users. All requested certs are stored in the path <code>/etc/acme.sh</code>, which is mounted to the <code>acme</code> Docker volume.</p><!--kg-card-begin: html--><aside class="note">The current setup creates a full end-to-end encryption path: traffic from visitors to Cloudflare as well as communication between the origin and Cloudflare is encrypted.</aside><!--kg-card-end: html--><p>The nginx-proxy and nginx-proxy-companion share the <code>certs</code>, <code>vhost.d</code>, and <code>nginx.html</code> volumes. If I need to move to a different host, then the files stored in the <code>certs</code> and <code>acme</code> docker volumes will need to be backed up. The former is where the generated private key used for SSL certificate requests is stored, whereas the latter contains the generated certificates, which are checked against upon service startup. Without the original private key, the SSL certificate cannot be renewed, and without the certificate, a new request would be forced. </p><!--kg-card-begin: html--><aside class="note">Let&apos;s Encrypt certificates have a 90-day lifespan, which has pros and cons. 
Read more about the discussion on their <a href="https://community.letsencrypt.org/t/pros-and-cons-of-90-day-certificate-lifetimes/4621">forum thread</a> relating to the matter. This should be evaluated carefully for production usage.</aside><!--kg-card-end: html--><p>Avoid performing a <code>docker volume prune</code> or <code>docker system prune</code> without a proper backup. These volumes are technically not essential unless the weekly certificate request limit has been reached. To get the path to these volumes, run the terminal command <code>docker volume inspect &lt;volume name&gt;</code>.</p><h4 id="ghost4-alpine"><em>ghost:4-alpine</em></h4><p>This is an all-in-one image. I believe prior versions used MySQL. With Ghost v4, SQLite is now the recommended DB. This works out better for my requirement as it is easier to back up.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><pre><code class="language-YAML">ports:
  - 127.0.0.1:8080:2368</code></pre><p>I am just remapping the default port 2368 to 8080. This mapping is internal and not strictly required, as the Ghost service will not be directly exposed to public traffic. As mentioned earlier, this is handled by the reverse proxy, where incoming requests are directed to the matching <code>VIRTUAL_HOST</code>.</p><h3 id="rclone">Rclone</h3><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/scripts/rclone/docker-compose.yaml">docker-compose.yaml</a></p><p>The default directory is <code>/root/rclone</code> but can be changed using Terraform tfvars at runtime.</p><p>Rclone is a command-line application that enables cloud storage to be mounted on the host filesystem. I believe it supports over 30 well-known services such as Google Drive, Dropbox, AWS S3, Amazon Drive, etc. I decided to stick with Google Drive since I have 100GB of underutilized storage. I plan on incorporating Rclone in a <a href="https://www.truenas.com/truenas-scale/">TrueNAS SCALE</a> self-built NAS as a redundant backup in the future.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p>Here are the steps to set up Google Drive with Rclone:</p><!--kg-card-begin: markdown--><ul>
<li>Start with this: <a href="https://rclone.org/drive/#making-your-own-client-id">https://rclone.org/drive/#making-your-own-client-id</a>
<ul>
<li>Don&apos;t forget to submit the app for verification; approval is automatic. The generated auth token will not renew if the app remains in development mode.</li>
</ul>
</li>
<li>Then follow this guide to attach the Google Drive account to Rclone: <a href="https://rclone.org/drive/">https://rclone.org/drive/</a></li>
</ul>
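<p>The steps above end in an interactive <code>rclone config</code> session; a hedged sketch of the commands involved (the remote name &apos;gdrive&apos; matches this setup, and the client ID/secret come from the Google console):</p><pre><code class="language-bash"># run the interactive wizard, choosing &quot;drive&quot; as the storage type and
# pasting in the client_id / client_secret created in the Google console
rclone config

# confirm the remote exists and is reachable
rclone listremotes
rclone lsd gdrive:

# print the location of the generated config file; this is the file to copy
# into ghost-linode-terraform/scripts/rclone/config/
rclone config file</code></pre>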
<!--kg-card-end: markdown--><!--kg-card-begin: html--><br><!--kg-card-end: html--><figure class="kg-card kg-code-card"><pre><code class="language-YAML">volumes:
  - ${rclone_dir}/config:/config/rclone
  - ${rclone_dir}/mount:/data:shared
  - /etc/passwd:/etc/passwd:ro
  - /etc/group:/etc/group:ro</code></pre><figcaption>Rclone docker-compose.yaml</figcaption></figure><!--kg-card-begin: html--><aside class="note">One caveat to using Rclone is that backups will count against a VPS&apos;s monthly traffic quota, whereas using block storage from the same host usually does not.</aside><!--kg-card-end: html--><p><code>/etc/passwd</code> and <code>/etc/group</code> mounts are required for <a href="https://en.wikipedia.org/wiki/Filesystem_in_Userspace">FUSE</a> to work properly inside the container. Additionally, a premade configuration from another Rclone instance is needed. This can be done by completing an Rclone setup wizard for Google Drive. Make sure to copy the configuration to the project&apos;s folder: <code>ghost-linode-terraform/scripts/rclone/config/</code></p><p>In my setup, the Google Drive configuration is called &apos;gdrive.&apos; This needs to be reflected in the docker-compose&apos;s command block:</p><pre><code class="language-YAML">command: &quot;mount gdrive: /data&quot;</code></pre><p>The default mount path is <code>/root/rclone/mount</code>.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p><a href="https://github.com/foureight84/ghost-linode-terraform/blob/a7fe63d1945162968eeb88a85a4452e3a3b640fc/scripts/rclone/backup.sh">backup.sh</a></p><p>This script is responsible for creating tarballs of the ghost blog directory on the host machine. By default, the script maintains a rolling 7-day backup, with <code>latest.tgz</code> being the most recent archive.</p><p>A crontab entry is added through Linode&apos;s Stackscript and is set to run daily at 11PM (system time). 
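The core of such a rolling backup can be sketched as follows (a simplified illustration, not the repository&apos;s exact backup.sh; the paths and the <code>backup_blog</code> helper name are assumptions):</p><pre><code class="language-bash"># backup_blog SRC DEST: tar SRC into DEST/ghost-WEEKDAY.tgz and refresh
# DEST/latest.tgz. Weekday-named archives yield a rolling 7-day window,
# because each new week overwrites the same seven file names.
backup_blog() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    day="$(date +%a)"    # Mon, Tue, ... Sun
    tar -czf "$dest/ghost-$day.tgz" -C "$(dirname "$src")" "$(basename "$src")"
    cp -f "$dest/ghost-$day.tgz" "$dest/latest.tgz"
}

# invoked daily by a crontab entry along the lines of:
# 0 23 * * * /root/backup.sh</code></pre><p>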
Keep in mind that UTC is the default system time zone.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.foureight84.com/content/images/2021/07/Rclone_backup_gdrive_view.JPG" class="kg-image" alt="Terraforming Ghost on Linode with Google Drive Backup Using Rclone" loading="lazy" width="647" height="445" srcset="https://blog.foureight84.com/content/images/size/w600/2021/07/Rclone_backup_gdrive_view.JPG 600w, https://blog.foureight84.com/content/images/2021/07/Rclone_backup_gdrive_view.JPG 647w"><figcaption>Google Drive view of blog backups</figcaption></figure><hr><h2 id="deployment">Deployment</h2><p>If you have not done so already, install the Terraform CLI on your local machine. <a href="https://learn.hashicorp.com/tutorials/terraform/install-cli">Follow this installation guide</a>.</p><!--kg-card-begin: html--><br><!--kg-card-end: html--><p><strong>Clone the project</strong></p><pre><code class="language-bash">git clone https://github.com/foureight84/ghost-linode-terraform.git &amp;&amp; cd ghost-linode-terraform</code></pre><p><strong>Initialize the Terraform workspace</strong></p><figure class="kg-card kg-code-card"><pre><code class="language-bash">terraform init</code></pre><figcaption>This will download the required provider modules</figcaption></figure><p><strong>Create your tfvars definition file</strong></p><pre><code class="language-bash">cp terraform.tfvars.example defined.tfvars</code></pre><p><strong>Open <code>defined.tfvars</code> and fill in all required values</strong></p><pre><code class="language-HCL">
//project_dir = &quot;&quot;                                // default /root/ghost
//rclone_dir = &quot;&quot;                                 // default /root/rclone

linode_api_token = &quot;&quot;
linode_label = &quot;&quot;
linode_image = &quot;linode/ubuntu20.04&quot;               // see terraform linode provider documentation for these values
linode_region = &quot;us-west&quot;                         // see terraform linode provider documentation for these values
linode_type = &quot;g6-nanode-1&quot;                       // see terraform linode provider documentation for these values
linode_authorized_users = [&quot;&quot;]                    // user profile created on linode with associated ssh pub key. https://cloud.linode.com/profile/keys
linode_group = &quot;blog&quot;
linode_tags = [ &quot;ghost&quot;, &quot;docker&quot; ]

linode_root_password = &quot;&quot;

cloudflare_domain = &quot;&quot;                           // requires that your domain is already managed by cloudflare. value ex: foureight84.com
cloudflare_email = &quot;&quot;
cloudflare_api_key = &quot;&quot;                          // not to be mistaken with cf api token

letsencrypt_email = &quot;&quot;

ghost_blog_url = &quot;&quot;                              // ex. blog.foureight84.com

enable_rclone =                                  // boolean (default false). change to true if using rclone. see README.md in the rclone directory on how to set up the config beforehand</code></pre><p><strong>Double-check that everything is correct and get a deployment preview</strong></p><pre><code class="language-bash">terraform plan -var-file=&quot;defined.tfvars&quot;</code></pre><p><strong>Apply the Terraform changes to production</strong></p><pre><code class="language-bash">terraform apply -var-file=&quot;defined.tfvars&quot;</code></pre>]]></content:encoded></item></channel></rss>