{"pageProps":{"posts":[{"dateModified":"2019-10-01 22:15","datePublished":{"formatInWords":"9 months ago","formatDate":"01 October 2019","value":"2019-10-01 22:15"},"disqusIdentifier":"5bc9ff9949ac5006c0a84f00","excerpt":"A guide to creating JavaScript monorepos with Lerna and Yarn Workspaces: what a monorepo is, what it's useful for, and how to create one with a code example","html":"
The term monorepo is a compound of \"mono\", from the Ancient Greek \"mónos\", meaning \"single\", and \"repo\", short for \"repository\".
\n\n\nA monorepo is a strategy for storing code and projects in a single repository.
\n
Monorepos allow you to reuse packages and code from other modules while keeping them independent and isolated. This is particularly useful when you have a ton of code that you keep repeating across different projects.
\nDependencies are hoisted to the root level of the project, which means you can share dependencies across all the packages in your monorepo. This reduces the overhead of updating and managing multiple versions of the same dependency.
\nMaking changes across different repositories is painful: it typically involves manual coordination between teams and repos. For example, let's say you have an API that is used by many clients and you want to make a breaking change to its contract. It's not trivial to apply the update to all the clients and then coordinate the deployment of the projects. With a monorepo it's easier, since everything is contained in a single unit.
\nBefore adopting a monorepo architecture, make sure you actually have the problems that this concept solves ⚠️. There's no need to overengineer a project. Remember: keep it simple ✨
\nNow that we know what a monorepo is, the tools we're going to use and what they're useful for, let's create a real example to see how it works.
\nLet's begin by creating our monorepo. The first thing we need to do is define the structure of the project. In this example I created two directories:
\npackages/
: This directory will contain the isolated modules that we are going to reuse across all the applications.
applications/
: This directory will contain all the applications of our monorepo.\n.\n└── src\n    ├── applications\n    └── packages\n
\nAfter that, we're going to create package.json
to define the workspaces
and dependencies of our monorepo.
The workspaces
field is what Yarn uses to symlink our code to the node_modules
so we can reuse and import the code; we'll see this in action later on.
Finally we install lerna
as a devDependency
to manage the monorepo.
{\n \"private\": true,\n \"engines\": {\n \"yarn\": \">=1.17.3\"\n },\n \"name\": \"monorepo-example\",\n \"workspaces\": [\n \"src/applications/*\",\n \"src/packages/*\"\n ],\n \"scripts\": {},\n \"devDependencies\": {\n \"lerna\": \"latest\"\n }\n}\n
\nNow, let's define how Lerna is going to manage our monorepo in a lerna.json
configuration file.
packages
: The directories that we defined as workspaces
in the package.json
.
npmClient
: The client used to run the commands.
useWorkspaces
: This flag tells Lerna that we're going to use Yarn workspaces.{\n  \"lerna\": \"latest\",\n  \"packages\": [\n    \"src/applications/*\",\n    \"src/packages/*\"\n  ],\n  \"version\": \"1.0.0\",\n  \"npmClient\": \"yarn\",\n  \"useWorkspaces\": true\n}\n
\nWe finished our setup! Let's add some simple code to see how we can manage and reuse packages in our monorepo.
\nA package, in our monorepo context, is an isolated and reusable piece of code. That means every time we want to create a new package, we're going to create a new independent directory.
\n.\n└── packages\n    └── sayHello\n        ├── index.js\n        └── package.json\n
\nEach package needs to have a package.json
with the name
and version
fields defined. This is important because it describes how we're going to import and use this package across the code base. You can also add dependencies to your package if you need them. In this example I'm writing a simple package called sayHello
.
{\n \"name\": \"@packages/sayHello\",\n \"version\": \"1.0.0\",\n}\n
\nThink of every directory inside the packages/
folder as an isolated module, with its own tests, dependencies and code.
const sayHello = (name) => {\n  console.log(`Hello ${name} 👋🏼`)\n\n  return name\n}\n\nmodule.exports = sayHello\n
\nThis was pretty simple, right? Now let's say that we have an application called cli
. In order to use the sayHello
package we should add it as a dependency
in the package.json
file. To do that we have a fancy yarn
command:
yarn workspace @applications/cli add @packages/sayHello@1.0.0\n
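After running that command, the cli application's package.json would end up with the dependency listed. This is a sketch: the name and version fields are taken from the post, while any other fields your real manifest has are assumptions.

```json
{
  "name": "@applications/cli",
  "version": "1.0.0",
  "dependencies": {
    "@packages/sayHello": "1.0.0"
  }
}
```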
\nNow from our cli
application we can import and use the package! 💯
const sayHello = require('@packages/sayHello')\n\nsayHello('Carlos')\n
\nFinally, we run our cli
application from the command line using Lerna:
You can find the example explained on the post on this GitHub repository ๐. I know this was pretty simple, but there are a ton of things you can do with monorepos! For example you can share react components in different applications while keeping them isolated. But take a look below ๐ to see monorepos on big open source projects!
\nHere's a list of well known open source projects that are using the monorepo architecture:
\n\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png"}},"slug":"javascript-monorepos-lerna-yarn-workspaces","title":"JavaScript monorepos with Lerna and Yarn Workspaces"},{"dateModified":"2019-09-12 17:00","datePublished":{"formatInWords":"10 months ago","formatDate":"12 September 2019","value":"2019-09-12 17:00"},"disqusIdentifier":"5d78d0fcf942665cd6becd9a","excerpt":"I'm going to explain why it's important and how you can use error boundaries in a React-Native application to improve error resiliency ๐จโ๐ป","html":"React 16 released a new concept called Error Boundary. This concept introduces a new way to catch JavaScript errors ๐ in a React project.
\nIn this post I'm going to explain why it's important and how you can use error boundaries in a React-Native application to improve error resiliency, so let's get into it! 👨‍💻
\nAccording to the official React docs:
\n\n\nAs of React 16, errors that were not caught by any error boundary will result in unmounting of the whole React component tree 😱.
\n
Unmounting the whole React component tree means that if you don't catch errors at all, the user will see an empty white screen, most of the time without any feedback. That's not a great UX ❌; fortunately, you can fix this by using Error Boundaries ✅.
\nTo benefit from Error Boundaries, we'll have to create a stateful component that uses the following lifecycle methods ♻️:
\ngetDerivedStateFromError
: This method is going to update the component state to display a fallback UI.
componentDidCatch
: This method should be used to log the error to an external service.
So let's create the component that will catch errors in our application:
\nclass ErrorBoundary extends React.Component {\n state = { hasError: false }\n\n static getDerivedStateFromError (error) {\n return { hasError: true }\n }\n\n componentDidCatch (error, info) {\n logErrorToService(error, info.componentStack)\n }\n\n render () {\n return this.state.hasError\n ? <FallbackComponent />\n : this.props.children\n }\n}\n
\nPretty simple, right? With a few lines of code, you can catch errors in your React-Native app.
\nTo use it, all you need to do now is to wrap it around any component that could throw an error.
\nconst App = () => (\n <ErrorBoundary>\n <Children />\n </ErrorBoundary>\n)\n
\nThis component will catch any error thrown by any of its children. A common approach is to use it at the top level of your application to catch anything, without having to add it to every screen or route.
\nThat's how our FallbackComponent
looks whenever an error is thrown by our application:
⚠️ Error Boundaries only catch JavaScript errors; native crashes that your application might have are not handled.
\nreact-native-error-boundary
A few months ago, I created a simple, flexible and reusable React-Native error boundary component. Take a look at it if you're thinking about adding error boundaries to your app!
\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/handling-react-native-errors-with-error-boundaries.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/handling-react-native-errors-with-error-boundaries.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/handling-react-native-errors-with-error-boundaries.png"}},"slug":"managing-react-native-crashes-with-error-boundaries","title":"Managing React-Native crashes with Error Boundaries"},{"dateModified":"2018-10-16 12:10","datePublished":{"formatInWords":"over 1 year ago","formatDate":"16 October 2018","value":"2018-10-16 12:10"},"disqusIdentifier":"5b6c646126d36606d1805ab3","excerpt":"Creating scalable React components using the folder pattern. A simple way to organize and structure React Components.","html":"It's been a while since I started working with React and React-Native in production. One of the greatest things about React is the flexibility the library gives you, meaning that you are free to decide how you want to implement almost every detail of your project, for example its architecture and structure.
\nHowever, in the long term this freedom could lead to a complex and messy codebase, especially if you don't follow a pattern. In this post I'll explain a simple way to organize and structure React Components.
\n\n\nA Component is a JavaScript function or class that returns a piece of UI.
\n
We're going to create an EmojiList
component and then refactor it, breaking it up into smaller isolated pieces by applying the folder pattern. Here's what our component looks like:
As I mentioned before, we can start really simple and small, without following any pattern. This is our EmojiList
component, contained in a single file.
If you open the CodeSandbox sidebar you'll see that our file tree looks like this:
\n.\n├── components\n│   ├── EmojiList.js\n│   └── styles.js\n└── index.js\n
\nThere's nothing wrong with this approach. But in larger codebases that kind of component becomes hard to maintain, because there are a lot of things in it: state, UI, data... Take a look at our component code below.
\nEmojiList.js
import React from \"react\"\n\nimport styles from \"./styles\"\n\nclass EmojiList extends React.Component {\n state = {\n searchInput: \"\",\n emojis: []\n }\n\n render() {\n const emojis = this.state.emojis.filter(emoji =>\n emoji.code.includes(this.state.searchInput.toLowerCase())\n )\n\n return (\n <ul style={styles.list}>\n <input\n style={styles.searchInput}\n placeholder=\"Search by name\"\n type=\"text\"\n value={this.state.searchInput}\n onChange={event => this.setState({ searchInput: event.target.value })}\n />\n {emojis.map((emoji, index) => (\n <li key={index} style={styles.item}>\n <div style={styles.icon}>{emoji.emoji}</div>\n <div style={styles.content}>\n <code style={styles.code}>{emoji.code}</code>\n <p style={styles.description}>{emoji.description}</p>\n </div>\n </li>\n ))}\n </ul>\n )\n }\n}\n\nexport default EmojiList\n
\nA step toward improving this code would be to create separate components in the same file and then use them in the main component. However, you'd be sharing styles among other things, and that could get confusing.
\nLet's start refactoring the single component into multiple ones by breaking up the UI into a component hierarchy.
\nIf we take a look at the image, it's easy to identify that we can break our UI into three different components:
\nEmojiList
: Combines the smaller components and shares the state down.
SearchInput
: Receives user input and displays the search bar.
EmojiListItem
: Displays the list item for each emoji, with the icon, name and description.
We're going to create a folder for each component, with two files: an index.js
that is going to hold all the code for the component and the styles.js
. That's one of the good things about this pattern: every component defines its own UI and styles, isolating this piece of code from other components that don't need to know anything about it.
Notice that inside the EmojiList
folder (which is itself a component), we add two nested components that will only be used within the EmojiList
component. Again, that's because these two components aren't going to be used outside of that context. This helps reduce visual clutter a lot.
.\n├── EmojiList\n│   ├── EmojiListItem\n│   │   ├── index.js\n│   │   └── styles.js\n│   ├── SearchInput\n│   │   ├── index.js\n│   │   └── styles.js\n│   ├── index.js\n│   └── styles.js\n└── index.js\n
\nNow let's split the code into the three components, going from the smallest to the biggest one:
\nEmojiListItem/
This component renders every emoji item that will appear on the list.
\nimport React from \"react\"\n\nimport styles from \"./styles\"\n\nconst EmojiListItem = (props) => (\n <li style={styles.item}>\n <div style={styles.icon}>{props.emoji}</div>\n <div style={styles.content}>\n <code style={styles.code}>{props.code}</code>\n <p style={styles.description}>{props.description}</p>\n </div>\n </li>\n)\n\nexport default EmojiListItem\n
\nSearchInput/
This component receives the user input and updates the state of the parent component.
\nimport React from \"react\"\n\nimport styles from \"./styles\"\n\nconst SearchInput = (props) => (\n <input\n style={styles.searchInput}\n placeholder=\"Search by name\"\n type=\"text\"\n value={props.value}\n onChange={props.onChange}\n />\n)\n\nexport default SearchInput\n
\nEmojiList/
This is the top level component, holds the state and data of our example and imports the other components to recreate the whole UI of our tiny application. Isolating components makes the render method more readable and easier to understand โจ.
\nimport React from \"react\"\n\nimport SearchInput from \"./SearchInput\"\nimport EmojiListItem from \"./EmojiListItem\"\nimport styles from \"./styles\"\n\nclass EmojiList extends React.Component {\n state = {\n searchInput: \"\",\n emojis: []\n }\n\n render() {\n const emojis = this.state.emojis.filter(emoji =>\n emoji.code.includes(this.state.searchInput.toLowerCase())\n )\n\n return (\n <ul style={styles.list}>\n <SearchInput\n onChange={(event) => this.setState({ searchInput: event.target.value })}\n value={this.state.searchInput}\n />\n {emojis.map((emoji, index) => (\n <EmojiListItem\n key={index}\n code={emoji.code}\n description={emoji.description}\n emoji={emoji.emoji}\n />\n ))}\n </ul>\n )\n }\n}\n\nexport default EmojiList\n
\nThat's basically the architecture we use at the company I work at. I'm pretty satisfied with the experience of using this pattern: our components turned out a lot easier to maintain and use. Anyway, there are no silver bullets in software engineering, so figure out what works best for you or your team!
\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/scalable-react-components.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/scalable-react-components.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/scalable-react-components.png"}},"slug":"scalable-react-components-architecture","title":"Scalable React Components architecture"},{"dateModified":"2018-10-15 09:53","datePublished":{"formatInWords":"almost 2 years ago","formatDate":"19 September 2018","value":"2018-09-19 09:53"},"disqusIdentifier":"5ae59dcdb3211a06522ad69b","excerpt":"The process of continuously delivering React Native apps with Fastlane and Travis CI automatically. ","html":"A year ago I wrote a post about how Fastlane could help us improve our React Native apps' shipping process. At that moment, even though everything was automated, the deployment relied on one of us with a provisioned machine in order to launch the rocket 🚀. We could easily improve that process by continuously delivering our apps through a CI machine. That's when Travis CI comes to the rescue! 👷🏻‍♂️
\nBefore explaining the problem, it's important to understand the complexity of our deployment process.
\nIn a nutshell, we have two platforms, iOS and Android, and each platform compiles two applications: a beta testing app, also known as Canary, and a Production one.
\nBasically, every platform goes sequentially through a lane that looks like this:
\nNow let's go through every step of the deployment process in depth to understand what we do.
\nSigning native applications is scary, especially when you come from the JavaScript ecosystem. Certificates, provisioning profiles, keys... You have to be extremely organized when using them in a development team.
\nWe adopted the codesigning.guide concept through Fastlane. Basically, the idea is to have a specific git repository to store and distribute certificates across a development team. We store both iOS and Android code signing files in an encrypted private git repository that lives on GitHub.
\nThen, on every deploy, our CI machine clones the repository and installs the decrypted certificates. On iOS, the CI creates an OS X Keychain where the certificates are installed.
\nNative builds and stores require code version bumps.
\nEvery platform has its own way to manage versions and build numbers. The difference between the two is that the version is the public store number that identifies a new release, while the build number is an incremental identifier bumped on every build.
\nAndroid
\nversionName
VERSION_CODE
iOS
\nCFBundleShortVersionString
CFBundleVersion
and CURRENT_PROJECT_VERSION
Those attributes are stored on .plist
, .pbxproj
, .properties
and .gradle
files. To automate version management, we use the package.json version number as the source of truth for our public version numbers 💯. This allows us to use the npm version
CLI command to manage bumps.
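As an illustration of how a single semver source of truth can drive the native identifiers, here's a hypothetical helper (this is one common convention, not the exact scheme from the post) that derives a numeric Android versionCode from a package.json version string:

```javascript
// Hypothetical helper: derive an incremental Android versionCode from a
// semver string, using the common major*10000 + minor*100 + patch scheme
const versionCodeFromSemver = (version) => {
  const [major, minor, patch] = version.split('.').map(Number)
  return major * 10000 + minor * 100 + patch
}

console.log(versionCodeFromSemver('2.3.1')) // 20301
```

Because the result grows monotonically with each release, it satisfies the "incremental identifier" requirement that Android's VERSION_CODE imposes.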
We need to provision two machines to build and compile our native applications.
\nFor iOS we set up a macOS system with Xcode, because it's the only way to compile and sign the application. For Android we provision a Linux system with Android Studio and all the packages and tools that we need.
\nThose machines are created by our CI, which means every build runs in a new, fresh and clean environment.
\nTo distribute the application to beta testers we use TestFlight on iOS and HockeyApp for Android. We tried Google Play Beta, but its app rollout was too slow compared to HockeyApp.
\nTo distribute the application to the stores we upload the production build to TestFlight on iOS and Google Play Store for Android. The release is done manually by a human being.
\nTo get human-readable information about crashes and errors, we use a service called Bugsnag. Every time we deploy a new build, we need to upload the debug symbols .dSYM
and sourcemaps to Bugsnag.
Finally, when the apps are deployed, we need to inform our beta testers, release manager and developers, that we have a new version. We use Slack with a bot that sends alerts to some channels.
\nEvery time we wanted to do a release, we had to manually fire the Fastlane deployment lanes. That meant the human factor was needed. This was a time-consuming process that often failed due to code signing, biased environments, software updates, native platform dependencies...
\n\n\nMachines should work, people should think.
\n
We decided to put an end to those problems by automating all the things!
\nThe solution was to implement this automated process in a system that continuously delivers our master
branch pushes up to the stores magically, giving the manager the freedom to decide when a new release goes out. Finally, we could forget about everything and be happy! ❤️
Now let's take a look at how we integrated Travis and Fastlane to automate the deployment of our apps.
\nWe have two deployment
lanes, one for Android and one for iOS. I've simplified the lanes a little bit so the explanation focuses on their important parts. First we deploy the Android platform and then iOS.
The lane receives a version number that comes from the package.json
; as I said before, this allows us to do versioning through npm.
The first thing we do is bump the public version number and the build number. On the iOS lane, we need to setup_certificates
, to save the certificates in the Keychain and be able to sign the apps.
After that we start the canary
and production
lanes. Those two are the ones that build the native apps.
Canary
: Beta testing build, ships to TestFlight and HockeyApp.
Production
: Production build, ships to TestFlight and Google Play Store.
Then, we upload all the sourcemaps and debug symbol files to Bugsnag.
\nNext, we create a git branch where the version bumps will be committed, through the commit_and_push_version_bump
lane. Later, on the iOS lane we merge the created git branch on top of master
using the git_flow_merge
lane. We need to commit the bumps in order to keep the version in sync across deployments. Otherwise the stores would throw an error because the uploaded version already exists!
Finally we reach out Slack, to communicate both deployments.
\nAndroid
\nlane :deployment do |version: version|\n bump_version_number(version: version)\n canary\n production\n sh 'npm run repositories:upload:android'\n commit_and_push_version_bump\n slack_notification(platform: 'Android', version: version)\nend\n
\niOS
\nlane :deployment do |version: version|\n setup_certificates\n bump_version_number(version: version)\n canary\n production\n sh 'npm run repositories:upload:ios'\n commit_and_push_version_bump\n git_flow_merge(version: version)\n slack_notification(platform: 'iOS', version: version)\nend\n
\nSo, here's what our git log looks like after merging a branch into master
and making a deploy:
We use build stages to run our deployment process in three sequential steps. This allows us to deploy our apps only on the master
branch, and only when our tests pass ✅
.
Let's take a look at the build stages ๐
\nEvery build stage has its own provisioning and environment. For instance, Deploy iOS
runs on a macOS machine with Xcode and Node.js installed, while Deploy Android
uses an Ubuntu machine with JDK, AndroidSDK and Node.js.
Test stage
\nIn the first stage we execute the linters and test suites, to ensure everything is working as expected. If something fails here, we automatically stop the deploy.
\n- stage: Test and lint โ
\n language: node_js\n node_js: 8.5.0\n install: yarn\n script: npm run test:lint && npm run test:unit\n
\nAndroid stage
\nThe Android stage creates a provisioned Ubuntu machine with all the software and dependencies needed. Then we build the Canary and Production apps, and after that we deploy them. In around 15 minutes our Android apps ship.
\n- stage: Deploy Android ๐ค\n if: branch = master AND type = push\n language: android\n jdk: oraclejdk8\n android:\n components:\n - tools\n - platform-tools\n - android-26\n - extra-google-m2repository\n - extra-google-google_play_services\n before_install:\n - nvm install 8.5.0\n - gem install bundler\n - bundle install\n before_script:\n - ./internals/scripts/travis/gitconfig.sh\n install: yarn\n script: npm run deployment:android\n
\niOS stage
\nThe iOS stage creates a provisioned macOS machine with Xcode and all the dependencies needed. Then we build the Canary and Production apps, and after that we deploy them. In around 20 minutes our iOS apps ship.
\n- stage: Deploy iOS ๐\n if: branch = master AND type = push\n language: node_js\n node_js: 8.5.0\n os: osx\n osx_image: xcode9.2\n before_install: bundle install\n before_script:\n - ./internals/scripts/travis/gitconfig.sh\n install: yarn\n script: npm run deployment:ios\n
\nReactiveConf is a two-day conference about functional and reactive programming that takes place in Bratislava 🇸🇰, at the Old Market Hall. I attended the conference with three workmates from Ulabox, @sospedra_r, @juanmaorta and @markcial. The venue was organized in two stages: the Main Stage and the Discovery Stage.
\nThe conference day started with the registration; a backpack with a cool t-shirt, socks and stickers was given to every attendee.
\nThe Old Market was a comfortable and big place to watch the Main Stage talks. Upstairs there was a big space with TVs to follow the live stream, a ton of bean bags and a few Xboxes to play with 🎮.
\nThe Discovery Stage was located in another place called Ateliér Babylon, a 5-minute walk from the Old Market Hall. This stage was too small; on the second day we had to head back to the Old Market due to the lack of space, losing the opportunity to watch the \"Understanding Webpack from inside out\" talk.
\nThe food was very nice. A few serving points were distributed along the Market Hall. Also, the schedule was very well planned; the number of breaks helped us follow the presentations easily!
\nIn terms of talks, the quality and technical level were good. But with some speakers I felt they were selling us their product instead of teaching us. I also noticed that some talks were, IMHO, off-topic for the reactive paradigm.
\nHaving said that, these are my favourite talks ⭐️:
\nAll the talks were recorded and live streamed on the ReactiveConf YouTube channel. Be sure to take a look!
\n\nI learnt a lot of things at the venue and I really enjoyed the great atmosphere of the conference. I also met a ton of friendly people!
\nHuge thanks to Ulabox for giving us the opportunity to attend ReactiveConf! 🇸🇰 ✈️
\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/v1504460420/nmm99etv7j32h5lsulgx.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/v1504460420/nmm99etv7j32h5lsulgx.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/v1504460420/nmm99etv7j32h5lsulgx.png"}},"slug":"reactiveconf-2017","title":"ReactiveConf 2017"},{"dateModified":"2017-10-18 10:15","datePublished":{"formatInWords":"over 2 years ago","formatDate":"18 October 2017","value":"2017-10-18 10:15"},"disqusIdentifier":"59cfbaac613ac70679db193e","excerpt":"The process of moving my website and Ghost blog from Heroku to DigitalOcean. Provisioning up the server with Node.js, Nginx, LetsEncrypt and PM2.","html":"Last weekend I moved my website and blog to DigitalOcean. When I built this website back in 2015, I chose Heroku as the platform to host my application, because I didn't want to deal with server provisioning and maintenance.
\nHeroku is probably the easiest way to ship your application to production. In my use case, the GitHub repository that hosts the code for my website was connected to Heroku, and I magically managed to ship my application using continuous deployment through Travis CI and GitHub.
\nHowever, I knew that at some point I would switch to an IaaS, since I would need more control over the infrastructure of my application.
\nMy website is a Node.js application built with Express, Pug and SCSS. The blog runs on a self-hosted Ghost.
\nSince I started using Heroku, I wanted to use a single Dyno to keep it simple. But every dyno is attached to a process, so I managed to start Ghost as a module from the Express application. This workaround wasn't the best solution, but it worked for more than a year and a half.
\nRecently, Ghost went out of beta and released the 1.0.0
with a ton of breaking changes. Since then, it was nearly impossible to keep using Heroku with my needs.
I decided to make the move considering it an opportunity to learn and improve my infrastructure.
\n\n\nIf you don't have a DigitalOcean account, use this link to register and get $10 for free!
\n
After spinning up a $5 droplet, the first thing I did was disable root login and password authentication. That means only SSH keys can be used to connect to the server.
\n$ sudo ufw allow 'Nginx Full' && sudo ufw allow OpenSSH\n$ sudo ufw enable && sudo ufw status\n\nTo Action From\n-- ------ ----\nOpenSSH ALLOW Anywhere\nNginx Full ALLOW Anywhere\nOpenSSH (v6) ALLOW Anywhere (v6)\nNginx Full (v6) ALLOW Anywhere (v6)\n
\nWhen I was on Heroku, I used Cloudflare to obtain SSL. But LetsEncrypt is way better, simply because you get end-to-end encryption.
\nTo get your SSL certificate, you first need to install certbot.
\n$ sudo add-apt-repository ppa:certbot/certbot\n$ sudo apt-get update && sudo apt-get install python-certbot-nginx\n
\nThen, open your Nginx configuration file, find the server_name
directive and set your domain.
server_name example.com www.example.com;\n
\nVerify the configuration and if you have no errors, reload it.
\n$ sudo nginx -t\n$ sudo systemctl reload nginx\n
\nNow it's time to obtain our SSL certificates for the domain specified in the Nginx config file.
\n$ sudo certbot --nginx -d example.com -d www.example.com\n
\n\n\nโ ๏ธ Before obtaining the certs, you'll have to point the domain to your DigitalOcean IP. That's the way Certbot verifies that you control the domain you're requesting a certificate for.
\n
If that's successful, certbot will ask how you'd like to configure your HTTPS. Finally the certificates will be downloaded, installed, and loaded.
\nLetsEncrypt certificates are only valid for ninety days; however, the Certbot CLI includes an option to renew our SSL certificates, and we can automate this process with a crontab.
\n$ sudo crontab -e\n
\nAdd a new line inside the crontab file and save it. Basically, you're asking your server to run the certbot renew --quiet
command every day at 04:00.
0 4 * * * /usr/bin/certbot renew --quiet\n
\nBoth applications are started as daemons on the server, so even if the server is restarted, both apps will come up automatically.
\ncarloscuesta.me
: Uses PM2, a production process manager for Node.js.
carloscuesta.me/blog
: Ghost uses ghost-cli to update and run the blog.
I use Nginx as a reverse proxy in front of the applications that are running on localhost
.
The first block of my configuration file redirects all requests to HTTPS.
\nserver {\n listen 80;\n listen [::]:80;\n server_name carloscuesta.me www.carloscuesta.me;\n return 301 https://$server_name$request_uri;\n}\n
\nAfter enforcing HTTPS, we use another server block to set our locations
. Those locations will define how Nginx should handle the requests to specific resources.
As an example, if your make a request to carloscuesta.me
, Nginx will match our /
location and is going to proxy_pass the request to http://localhost:PORT
where the Node.js application is started.
Also, we're enabling HTTP2 and SSL for our server, providing the certificates and keys needed.
\nserver {\n listen 443 ssl http2 default_server;\n listen [::]:443 ssl http2 default_server;\n\n ssl_certificate ...;\n ssl_certificate_key ...;\n ssl_dhparam ...;\n\n server_name carloscuesta.me www.carloscuesta.me;\n\n location / {\n proxy_pass http://localhost:PORT;\n proxy_set_header Connection \"upgrade\";\n proxy_set_header Host $http_host;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_set_header X-NginX-Proxy true;\n proxy_set_header X-Real-IP $remote_addr;\n }\n\n location ^~ /blog {\n # Same as / with another port\n }\n}\n
\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/moving-to-digitalocean.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/moving-to-digitalocean.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/moving-to-digitalocean.png"}},"slug":"moving-to-digitalocean","title":"Moving to DigitalOcean"}],"repositories":[{"description":"An emoji guide for your commit messages. ๐ ","language":"javascript","name":"gitmoji","stars":7468,"url":"https://github.com/carloscuesta/gitmoji"},{"description":"A gitmoji interactive command line tool for using emojis on commits. ๐ป","language":"javascript","name":"gitmoji-cli","stars":2260,"url":"https://github.com/carloscuesta/gitmoji-cli"},{"description":"A material design theme for your terminal. โจ","language":"shell","name":"materialshell","stars":710,"url":"https://github.com/carloscuesta/materialshell"},{"description":"A Front End development Gulp.js based workflow. ๐","language":"javascript","name":"starterkit","stars":84,"url":"https://github.com/carloscuesta/starterkit"},{"description":"A material design theme for Hyper based on materialshell. โจ","language":"javascript","name":"hyper-materialshell","stars":69,"url":"https://github.com/carloscuesta/hyper-materialshell"},{"description":"A simple and reusable React-Native error boundary component ๐","language":"javascript","name":"react-native-error-boundary","stars":47,"url":"https://github.com/carloscuesta/react-native-error-boundary"}]},"__N_SSG":true}