{"pageProps":{"posts":[{"dateModified":"2019-10-01 22:15","datePublished":{"formatInWords":"9 months ago","formatDate":"01 October 2019","value":"2019-10-01 22:15"},"disqusIdentifier":"5bc9ff9949ac5006c0a84f00","excerpt":"A guide to create JavaScript monorepos with Lerna and Yarn Workspaces. Explaining what is a monorepo, what are they useful for and how to create one with a code example","html":"

What is a monorepo?

The term monorepo is a compound of "mono", from Ancient Greek "mónos", meaning "single", and "repo", a shorthand for "repository".

A monorepo is a strategy for storing code and projects in a single repository.

What are they useful for?

♻️ Reusing isolated pieces of code

Monorepos allow you to reuse packages and code from other modules while keeping them independent and isolated. This is particularly useful when you have a ton of code that you keep repeating across different projects.

🧰 Simplifying dependency management

Dependencies are hoisted to the root level of the project, which means you can share dependencies across all the packages in your monorepo. This reduces the overhead of updating and managing multiple versions of the same dependency.
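
To picture the hoisting, here's a sketch of the resulting layout on disk (the shared lodash dependency is hypothetical; the @packages name matches the example built later in this post):

```
.
├── node_modules
│   ├── lodash                  # hoisted once, shared by every workspace
│   └── @packages
│       └── sayHello -> ../../src/packages/sayHello   # symlinked workspace
└── src
    ├── applications
    └── packages
```
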
🛠 Refactoring cross-project changes

Making changes across different repositories is painful: it typically involves manual coordination between teams and repos. For example, let's say you have an API that is used by many clients and you want to make a breaking change to its contract. It's not trivial to apply the update to all the clients and then coordinate the deploys of the projects, and so on. With a monorepo it's easier, since everything is contained in a single unit.

Before considering a monorepo architecture, make sure you have the problems that this concept solves ⚠️. There's no need to overengineer a project. Remember: keep it simple ✨

The tools

To build the monorepo we're going to use two tools:

- Lerna: a tool for managing JavaScript projects with multiple packages, used to run commands across the monorepo.
- Yarn Workspaces: a Yarn feature that links the packages of the monorepo together and hoists their shared dependencies.

Now that we know what a monorepo is, the tools we're going to use and what they're useful for, let's create a real example to see how it works.

Creating the monorepo

Setup

Let's begin creating our monorepo 👍. The first thing we need to do is define the structure of the project. In this example I created two directories:

```
.
└── src
    ├── applications
    └── packages
```

After that, we're going to create a package.json to define the workspaces and dependencies of our monorepo.

The workspaces field is what Yarn uses to symlink our code into node_modules so that it can be imported and reused; we'll see this later on.

Finally, we install lerna as a devDependency to manage the monorepo.
{\n  \"private\": true,\n  \"engines\": {\n    \"yarn\": \">=1.17.3\"\n  },\n  \"name\": \"monorepo-example\",\n  \"workspaces\": [\n    \"src/applications/*\",\n    \"src/packages/*\"\n  ],\n  \"scripts\": {},\n  \"devDependencies\": {\n    \"lerna\": \"latest\"\n  }\n}\n
\n

Now, let's define how Lerna is going to manage our monorepo in a lerna.json configuration file.

```json
{
  "lerna": "latest",
  "packages": [
    "src/applications/*",
    "src/packages/*"
  ],
  "version": "1.0.0",
  "npmClient": "yarn",
  "useWorkspaces": true
}
```

We finished our setup 🙌! Let's add some simple code to see how we can manage and reuse packages in our monorepo.

Creating packages

A package, in our monorepo context, is an isolated and reusable piece of code. That means every time we want to create a new package, we create a new independent directory.

```
.
└── packages
    └── sayHello
        ├── index.js
        └── package.json
```

Each package needs a package.json with the name and version fields defined. This is important because it describes how we're going to import and use the package in the codebase. You can also add dependencies to your package if you need them. In this example I'm writing a simple package called sayHello.

```json
{
  "name": "@packages/sayHello",
  "version": "1.0.0"
}
```

Think of every directory inside the packages/ folder as an isolated module, with its own tests, dependencies and code.

```js
const sayHello = (name) => {
  console.log(`Hello ${name} 👋🏼`)

  return name
}

module.exports = sayHello
```

Using packages

This was pretty simple, right? Now let's say we have an application called cli. In order to use the sayHello package, we add it as a dependency in the application's package.json. To do that we have a fancy yarn command 🎉

```sh
yarn workspace @applications/cli add @packages/sayHello@1.0.0
```

Now from our cli application we can import and use the package! 💯

```js
const sayHello = require('@packages/sayHello')

sayHello('Carlos')
```

Finally, we run our cli application from the command line using Lerna 🚀
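
As a sketch (assuming the cli workspace defines a start script in its package.json, which the original post doesn't show), that could look like:

```sh
# run the cli workspace's "start" script through Lerna
npx lerna run start --scope @applications/cli

# or, equivalently, through Yarn Workspaces
yarn workspace @applications/cli start
```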

\"monorepo-workspaces\"

\n

You can find the example explained in this post on this GitHub repository 👀. I know this was pretty simple, but there are a ton of things you can do with monorepos! For example, you can share React components across different applications while keeping them isolated. Take a look below 👇 to see monorepos in big open source projects!

Open source monorepo projects

Here's a list of well-known open source projects that use the monorepo architecture:

\n\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/JavaScript-monorepos-with-Lerna-and-Yarn-Workspaces.png"}},"slug":"javascript-monorepos-lerna-yarn-workspaces","title":"JavaScript monorepos with Lerna and Yarn Workspaces"},{"dateModified":"2019-09-12 17:00","datePublished":{"formatInWords":"10 months ago","formatDate":"12 September 2019","value":"2019-09-12 17:00"},"disqusIdentifier":"5d78d0fcf942665cd6becd9a","excerpt":"I'm going to explain why it's important and how you can use error boundaries in a React-Native application to improve error resiliency ๐Ÿ‘จโ€๐Ÿ’ป","html":"

React 16 introduced a new concept called Error Boundary, a new way to catch JavaScript errors 🐛 in a React project.

In this post I'm going to explain why it's important and how you can use error boundaries in a React-Native application to improve error resiliency, so let's get into it! 👨‍💻

Why should you use them?

According to the official React docs 📘:

As of React 16, errors that were not caught by any error boundary will result in unmounting of the whole React component tree 😱.

Unmounting the whole React component tree means that if you don't catch errors at all, the user will see an empty white screen 💥, most of the time without getting any feedback. This is not a great UX ❌; fortunately, you can fix it by using Error Boundaries ✅.

\"React-Native

\n

How to use Error Boundaries

To benefit from Error Boundaries, we'll have to create a stateful component that uses the following lifecycle methods ♻️:

- static getDerivedStateFromError(): renders a fallback UI after an error has been thrown.
- componentDidCatch(): logs the error information to a service.

So let's create the component that will catch errors in our application:

```jsx
class ErrorBoundary extends React.Component {
  state = { hasError: false }

  static getDerivedStateFromError (error) {
    return { hasError: true }
  }

  componentDidCatch (error, info) {
    logErrorToService(error, info.componentStack)
  }

  render () {
    return this.state.hasError
      ? <FallbackComponent />
      : this.props.children
  }
}
```
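
The logErrorToService call and the FallbackComponent are up to you. As a minimal sketch of the fallback, assuming a plain React-Native screen (the copy and markup here are placeholders, not from the original post):

```jsx
import React from 'react'
import { SafeAreaView, Text } from 'react-native'

// Minimal fallback UI rendered when the boundary has caught an error
const FallbackComponent = () => (
  <SafeAreaView>
    <Text>Oops, something went wrong 🙈</Text>
  </SafeAreaView>
)

export default FallbackComponent
```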

Pretty simple, right? With a few lines of code you can catch errors on your React-Native app 🎉

To use it, all you need to do now is wrap it around any component that could throw an error.

```jsx
const App = () => (
  <ErrorBoundary>
    <Children />
  </ErrorBoundary>
)
```

This component will catch all the errors thrown by any of its children. A common pattern is to use it at the top level of your application 🔝 to catch anything, without having to wrap every screen or route 👍

That's how our FallbackComponent looks whenever an error is thrown by our application 😍

[Image: react-native-error-boundary]

⚠️ Error Boundaries only catch JavaScript errors; any native crashes your application might have are not handled.

Introducing react-native-error-boundary

A few months ago I created a simple, flexible and reusable React-Native error boundary component. Take a look at it 👀 if you're thinking about adding error boundaries to your app!
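
Basic usage looks along these lines (a sketch based on the library's README; check the package docs for the current API):

```jsx
import ErrorBoundary from 'react-native-error-boundary'

const App = () => (
  <ErrorBoundary>
    <Children />
  </ErrorBoundary>
)
```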

\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/handling-react-native-errors-with-error-boundaries.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/handling-react-native-errors-with-error-boundaries.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/handling-react-native-errors-with-error-boundaries.png"}},"slug":"managing-react-native-crashes-with-error-boundaries","title":"Managing React-Native crashes with Error Boundaries"},{"dateModified":"2018-10-16 12:10","datePublished":{"formatInWords":"over 1 year ago","formatDate":"16 October 2018","value":"2018-10-16 12:10"},"disqusIdentifier":"5b6c646126d36606d1805ab3","excerpt":"Creating scalable React components using the folder pattern. A simple way to organize and structure React Components.","html":"

It's been a while since I started working with React and React-Native in production. One of the greatest things about React is the flexibility the library gives you, meaning you are free to decide how to implement almost every detail of your project, for example the architecture and structure.

However, in the long term this freedom can lead to a complex and messy codebase, especially if you don't follow a pattern. In this post I'll explain a simple way to organize and structure React Components.

A Component is a JavaScript function or class that returns a piece of UI.

We're going to create an EmojiList component and then refactor it, breaking it up into smaller isolated pieces by applying the folder pattern. Here's how our component looks:

[Image: emojilist]

EmojiList

As I mentioned before, we can start really simple and small, without following any pattern. This is our EmojiList component, contained in a single file.

If you open the CodeSandbox sidebar you'll see that our file tree looks like this:

```
.
├── components
│   ├── EmojiList.js
│   └── styles.js
└── index.js
```

There's nothing wrong with this approach. But on larger codebases that kind of component becomes hard to maintain, because there are a lot of things in it: state, UI, data... Take a look at our component code below 👇

EmojiList.js

```jsx
import React from "react"

import styles from "./styles"

class EmojiList extends React.Component {
  state = {
    searchInput: "",
    emojis: []
  }

  render() {
    const emojis = this.state.emojis.filter(emoji =>
      emoji.code.includes(this.state.searchInput.toLowerCase())
    )

    return (
      <ul style={styles.list}>
        <input
          style={styles.searchInput}
          placeholder="Search by name"
          type="text"
          value={this.state.searchInput}
          onChange={event => this.setState({ searchInput: event.target.value })}
        />
        {emojis.map((emoji, index) => (
          <li key={index} style={styles.item}>
            <div style={styles.icon}>{emoji.emoji}</div>
            <div style={styles.content}>
              <code style={styles.code}>{emoji.code}</code>
              <p style={styles.description}>{emoji.description}</p>
            </div>
          </li>
        ))}
      </ul>
    )
  }
}

export default EmojiList
```

A step towards improving this code would be to create separate components inside the same file and then use them from the main component. However, you'd be sharing styles among other things, and that could get confusing.

Refactor

Let's start refactoring the single component into multiple ones by breaking up the UI into a component hierarchy.

[Image: emojilist-breakdown]

If we take a look at the image, it's easy to identify that we can break up our UI into three different components: 🛠

- EmojiList: the top-level component, which holds the state and renders the list.
- SearchInput: the input used to filter the list.
- EmojiListItem: each emoji item rendered on the list.

We're going to create a folder for each component, with two files: an index.js that holds all the code for the component, and a styles.js (sketched below). That's one of the good things about this pattern: every component defines its own UI and styles, isolating this piece of code from other components that don't need to know anything about it.
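
As a rough sketch of what one of those styles.js files could contain (the style values here are illustrative, not from the original post):

```js
// SearchInput/styles.js — illustrative values
export default {
  searchInput: {
    border: "1px solid #eee",
    borderRadius: 4,
    padding: 8,
    width: "100%"
  }
}
```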

Notice that inside the EmojiList folder (which is a component) we add two nested components that will only be used within the EmojiList component. Again, that's because these two components aren't going to be used outside that context. This helps reduce the visual clutter a lot.

```
.
├── EmojiList
│   ├── EmojiListItem
│   │   ├── index.js
│   │   └── styles.js
│   ├── SearchInput
│   │   ├── index.js
│   │   └── styles.js
│   ├── index.js
│   └── styles.js
└── index.js
```

Now let's isolate and separate the code into the three components, from the smallest to the biggest one:

EmojiListItem/

This component renders every emoji item that appears on the list.

```jsx
import React from "react"

import styles from "./styles"

const EmojiListItem = (props) => (
  <li style={styles.item}>
    <div style={styles.icon}>{props.emoji}</div>
    <div style={styles.content}>
      <code style={styles.code}>{props.code}</code>
      <p style={styles.description}>{props.description}</p>
    </div>
  </li>
)

export default EmojiListItem
```

SearchInput/

This component receives the user input and updates the state of the parent component.

```jsx
import React from "react"

import styles from "./styles"

const SearchInput = (props) => (
  <input
    style={styles.searchInput}
    placeholder="Search by name"
    type="text"
    value={props.value}
    onChange={props.onChange}
  />
)

export default SearchInput
```

EmojiList/

This is the top-level component: it holds the state and data of our example, and imports the other components to recreate the whole UI of our tiny application. Isolating components makes the render method more readable and easier to understand ✨.

```jsx
import React from "react"

import SearchInput from "./SearchInput"
import EmojiListItem from "./EmojiListItem"
import styles from "./styles"

class EmojiList extends React.Component {
  state = {
    searchInput: "",
    emojis: []
  }

  render() {
    const emojis = this.state.emojis.filter(emoji =>
      emoji.code.includes(this.state.searchInput.toLowerCase())
    )

    return (
      <ul style={styles.list}>
        <SearchInput
          onChange={(event) => this.setState({ searchInput: event.target.value })}
          value={this.state.searchInput}
        />
        {emojis.map((emoji, index) => (
          <EmojiListItem
            key={index}
            code={emoji.code}
            description={emoji.description}
            emoji={emoji.emoji}
          />
        ))}
      </ul>
    )
  }
}

export default EmojiList
```

That's basically the architecture I use at the company I'm working at. I'm pretty satisfied with the experience of using this pattern: our components turned out a lot easier to maintain and use. Anyway, there are no silver bullets in Software Engineering, so figure out what works best for you or your team!

\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/scalable-react-components.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/scalable-react-components.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/scalable-react-components.png"}},"slug":"scalable-react-components-architecture","title":"Scalable React Components architecture"},{"dateModified":"2018-10-15 09:53","datePublished":{"formatInWords":"almost 2 years ago","formatDate":"19 September 2018","value":"2018-09-19 09:53"},"disqusIdentifier":"5ae59dcdb3211a06522ad69b","excerpt":"The process of continuously delivering React Native apps with Fastlane and Travis CI automatically. ","html":"

A year ago I wrote a post about how Fastlane could help us improve our React Native apps' shipping process. At that moment, even though everything was automated, the deployment relied on one of us having a provisioned machine in order to launch the rocket 🚀. We could easily improve that process by continuously delivering our apps through a CI machine. That's when Travis CI comes to the rescue! 👷🏻‍♂️

The process

Before explaining the problem, it's important to understand the complexity of our deployment process.

In a nutshell, we have two platforms: iOS 🍏 and Android 🤖, and every platform compiles two applications: the beta testing app, also known as Canary 🐤, and the Production 🚀 one.

Basically, every platform goes through a lane sequentially that looks like this 👇

1. Code sign setup ✍️
2. Version management 🔖
3. Native builds 📦
4. Beta testing distribution 🐤
5. Stores distribution 🚀
6. Sourcemaps 🗺
7. Communication 🗣

Now let's look at every step of the deployment process in depth, to understand what we do.

Code sign setup ✍️

Signing native applications is scary 😱, especially when you come from the JavaScript ecosystem. Certificates, provisioning profiles, keys... You have to be utterly organized when using them in a development team.

We adopted the codesigning.guide concept through Fastlane. Basically, this idea comes down to having a specific git repository to store and distribute certificates across a development team. We store both iOS and Android code signing files on an encrypted private git repository that lives on GitHub.
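
In Fastlane, that concept is implemented by the match action. A sketch of what the setup_certificates lane used later in this post could look like (the exact options depend on the project):

```ruby
lane :setup_certificates do
  # Pulls the certificates and provisioning profiles from the encrypted
  # git repository and installs them into the local keychain
  match(type: 'appstore', readonly: true)
end
```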

Then, on every deploy, our CI machine clones the repository and installs the decrypted certificates. On iOS, the CI creates an OS X Keychain where the certificates are installed.

Version management 🔖

Native builds and stores require code version bumps.

Every platform has its own way of managing versions and build numbers. The difference between the two is that the version should be used as the public store number that identifies a new release, while the build number is an incremental identifier that bumps on every build.

Android 🤖

- versionName: the public version number.
- versionCode: the internal build number.

iOS 🍏

- CFBundleShortVersionString: the public version number.
- CFBundleVersion: the internal build number.

Those attributes are stored in .plist, .pbxproj, .properties and .gradle files. To automate version management, we use the package.json version number as the source of truth for our public version numbers 💯. This allows us to use the npm version CLI command to manage bumps.
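
A release bump then boils down to something like this (a sketch, assuming the deployment lanes read the version from package.json):

```sh
# bump package.json (e.g. 1.2.3 -> 1.3.0) and create the matching git tag
npm version minor

# the public version number the lanes will receive
node -p "require('./package.json').version"
```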

Native builds 📦

We need to provision two machines to build and compile our native applications.

For iOS we set up a macOS system with Xcode, because it's the only way to compile and sign the application. For Android we provision a Linux system with Android Studio and all the packages and tools we need.

Those machines are created by our CI, which means every build runs on a fresh and clean environment 💻.

Beta testing distribution 🐤

To distribute the application to beta testers we use TestFlight on iOS and HockeyApp for Android. We tried Google Play Beta, but its app roll-out was too slow compared to HockeyApp.

Stores distribution 🚀

To distribute the application to the stores, we upload the production build to TestFlight on iOS and the Google Play Store for Android. The release is done manually by a human being.

Sourcemaps 🗺

To get human-readable information about crashes and errors, we use a service called Bugsnag. Every time we deploy a new build, we need to upload the .dSYM debug symbols and the sourcemaps to Bugsnag.

Communication 🗣

Finally, when the apps are deployed, we need to inform our beta testers, release manager and developers that we have a new version. We use Slack with a bot that sends alerts to some channels.
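
Fastlane ships a slack action for exactly this; our slack_notification lane could be sketched like so (the message format is illustrative, and the webhook URL comes from an environment variable):

```ruby
lane :slack_notification do |platform: platform, version: version|
  # Posts a deployment alert to our Slack channels via an incoming webhook
  slack(
    message: "#{platform} apps #{version} shipped! 🚀",
    slack_url: ENV['SLACK_URL']
  )
end
```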

The problem

Every time we wanted to do a release, we had to manually fire 🔥 the Fastlane deployment lanes, which meant the human factor was needed. This was a time-consuming process that often failed due to code signing, biased environments, software updates, native platform dependencies...

Machines should work, people should think.

So we decided to end those problems by automating all the things!

The solution

The solution is to implement this automated process in a system that continuously delivers our master branch pushes up to the stores magically 🎉, giving the manager the freedom to decide when a new release comes up. Finally, we can forget about everything and be happy! ❤️

Now we're going to take a look at how we integrated Travis and Fastlane to automate the deployment of our apps 👍.

Fastlane

We have two deployment lanes: one for Android and one for iOS. I've simplified the lanes a little bit so the explanation can focus on the important parts. First we deploy the Android platform, then iOS.

The lane receives a version number that comes from the package.json; as I said before, this allows us to do versioning through npm.

The first thing we do is bump the public version number and the build number. On the iOS lane we also need to setup_certificates, to save them in the Keychain and be able to sign the apps.

After that we start the canary 🐤 and production 🚀 lanes. Those two are the ones that build the native apps.

Then, we upload all the sourcemaps and debug symbol files to Bugsnag.

Next, we create a git branch where the version bumps will be committed, through the commit_and_push_version_bump lane. Later, on the iOS lane, we merge the created git branch on top of master using the git_flow_merge lane. We need to commit the bumps in order to keep the version in sync with the deployments; otherwise the stores would throw an error saying that the uploaded version already exists!

Finally, we reach out to Slack to communicate both deployments.

Android 🤖

```ruby
lane :deployment do |version: version|
  bump_version_number(version: version)
  canary
  production
  sh 'npm run repositories:upload:android'
  commit_and_push_version_bump
  slack_notification(platform: 'Android', version: version)
end
```

iOS 🍏

```ruby
lane :deployment do |version: version|
  setup_certificates
  bump_version_number(version: version)
  canary
  production
  sh 'npm run repositories:upload:ios'
  commit_and_push_version_bump
  git_flow_merge(version: version)
  slack_notification(platform: 'iOS', version: version)
end
```

So, here's what our git log looks like after merging a branch to master and making a deploy 🙌:

[Image: GitHub]

Travis CI

We use build stages to run our deployment process in three sequential steps. This allows us to deploy our apps only on the master branch, and only when our tests have passed ✅.

Let's take a look at the build stages 👇

\"travis-build-stages\"

\n

Every build stage has its own provisioning and environment. For instance, Deploy iOS runs on a macOS machine with Xcode and Node.js installed, while Deploy Android uses an Ubuntu machine with the JDK, Android SDK and Node.js.

Test stage ✅

In the first stage we execute the linters and test suites, to ensure everything is working as expected. If something fails here, we automatically stop the deploy.

```yaml
- stage: Test and lint ✅
  language: node_js
  node_js: 8.5.0
  install: yarn
  script: npm run test:lint && npm run test:unit
```

Android stage 🤖

The Android stage creates a provisioned Ubuntu machine with all the software and dependencies needed. Then we build the Canary 🐤 and Production 🚀 apps, and after that we deploy them. In around 15 minutes ⏰ our Android apps ship 👍

```yaml
- stage: Deploy Android 🤖
  if: branch = master AND type = push
  language: android
  jdk: oraclejdk8
  android:
    components:
      - tools
      - platform-tools
      - android-26
      - extra-google-m2repository
      - extra-google-google_play_services
  before_install:
    - nvm install 8.5.0
    - gem install bundler
    - bundle install
  before_script:
    - ./internals/scripts/travis/gitconfig.sh
  install: yarn
  script: npm run deployment:android
```

iOS stage 🍏

The iOS stage creates a provisioned macOS machine with Xcode and all the dependencies needed. Then we build the Canary 🐤 and Production 🚀 apps, and after that we deploy them. In around 20 minutes ⏰ our iOS apps ship 👍

```yaml
- stage: Deploy iOS 🍏
  if: branch = master AND type = push
  language: node_js
  node_js: 8.5.0
  os: osx
  osx_image: xcode9.2
  before_install: bundle install
  before_script:
    - ./internals/scripts/travis/gitconfig.sh
  install: yarn
  script: npm run deployment:ios
```

Lessons learned

\n\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/shipping-react-native-fastlane-travis.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/shipping-react-native-fastlane-travis.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/shipping-react-native-fastlane-travis.png"}},"slug":"shipping-react-native-fastlane-travis","title":"Shipping React Native apps with Fastlane and Travis"},{"dateModified":"2017-10-30 10:00","datePublished":{"formatInWords":"over 2 years ago","formatDate":"30 October 2017","value":"2017-10-30 10:00"},"disqusIdentifier":"23","excerpt":"ReactiveConf 2017 review. A two days conference about functional and reactive programming that takes places in Bratislava.","html":"

ReactiveConf is a two-day conference about functional and reactive programming that takes place in Bratislava 🇸🇰, at the Old Market Hall. I attended the conference with three workmates from Ulabox: @sospedra_r, @juanmaorta and @markcial. The venue was organized in two stages: the Main Stage and the Discovery Stage.

Getting started

The conference day started with the registration: a backpack with a cool t-shirt, socks and stickers was given to every attendee.

[Image: ReactiveConf]

Venue and talks

The Old Market Hall was a big and comfortable place to watch the Main Stage talks. Upstairs there was a big space with TVs to follow the live stream, a ton of bean bags and a few Xboxes to play with 🎮.

The Discovery Stage was located in another place called Ateliér Babylon, within a 5-minute walk from the Old Market Hall. This stage was too small: on the second day we had to head back to the Old Market due to the lack of space, losing the opportunity to watch the "Understanding Webpack from inside out" talk. 👎

The food was very nice 🍗, with a few serving points distributed along the Market Hall. Also, the schedule was very well planned; the number of breaks helped us follow the presentations easily!

In terms of talks, the quality and technical level were good. But with some speakers I felt they were selling us their product instead of teaching us. I also noticed that some talks were, IMHO, off-topic with regard to the reactive paradigm.

Having said that, these are my favourite talks ⭐️:

All talks were recorded and live-streamed through the ReactiveConf YouTube channel. Be sure to take a look at it! 👇

Final thoughts

I learnt a lot of things at the venue and really enjoyed the great atmosphere of the conference. I actually met a ton of friendly people!

Huge thanks to Ulabox 💖 for giving us the opportunity to attend ReactiveConf! 🇸🇰 ✈️

\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/v1504460420/nmm99etv7j32h5lsulgx.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/v1504460420/nmm99etv7j32h5lsulgx.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/v1504460420/nmm99etv7j32h5lsulgx.png"}},"slug":"reactiveconf-2017","title":"ReactiveConf 2017"},{"dateModified":"2017-10-18 10:15","datePublished":{"formatInWords":"over 2 years ago","formatDate":"18 October 2017","value":"2017-10-18 10:15"},"disqusIdentifier":"59cfbaac613ac70679db193e","excerpt":"The process of moving my website and Ghost blog from Heroku to DigitalOcean. Provisioning up the server with Node.js, Nginx, LetsEncrypt and PM2.","html":"

Last weekend I moved my website and blog to DigitalOcean. At the time of building this website, back in 2015, I chose Heroku as the platform to host my application because I didn't want to deal with server provisioning and maintenance.

Heroku is probably the easiest way to ship your application into production 🚀. In my use case, the GitHub repository hosting the code for my website was connected to Heroku, and magically I managed to ship my application using continuous deployment through Travis CI and GitHub.

However, I knew that at some point I would switch to an IaaS, considering I would need more control over the infrastructure of my application.

The problem

My website is a Node.js application built with Express, Pug and SCSS. The blog runs on a self-hosted Ghost.

Since I started using Heroku, I wanted to use a single Dyno to keep it simple. But every dyno is attached to a single process, so I managed to start Ghost as a module from the Express application. This workaround wasn't the best solution, but it worked for more than a year and a half.
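
A sketch of that workaround, based on the pre-1.0 "Ghost as an npm module" pattern (the exact wiring in my app differed; treat this as illustrative):

```js
const express = require('express')
const ghost = require('ghost')

const app = express()

// Boot Ghost and mount its app as middleware of the main Express app
ghost().then((ghostServer) => {
  app.use(ghostServer.config.paths.subdir, ghostServer.rootApp)
  ghostServer.start(app)
})
```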

Recently, Ghost went out of beta and released 1.0.0 with a ton of breaking changes. Since then, it was nearly impossible to keep using Heroku with my needs.

Switching to DigitalOcean

I decided to make the move, considering it an opportunity to learn and improve my infrastructure.

If you don't have a DigitalOcean account, use this link to register and get $10 for free!

Requirements

To run the website and the blog, the server needs to be provisioned with:

- Node.js
- Nginx
- LetsEncrypt
- PM2

Basic security

After spinning up a $5 droplet, the first thing I did was disable root login and password authentication. That means only SSH keys can be used to connect to the server.
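
Both changes live in the SSH daemon configuration; a sketch (remember to restart sshd after editing):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
```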

UFW

```
$ sudo ufw allow 'Nginx Full' && sudo ufw allow OpenSSH
$ sudo ufw enable && sudo ufw status

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
```

LetsEncrypt

When I was on Heroku, I used Cloudflare to obtain SSL 🔒. But LetsEncrypt is way better, simply because you get end-to-end encryption.

To get your SSL certificate, you first need to install certbot.

```sh
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update && sudo apt-get install python-certbot-nginx
```

Then, open your Nginx configuration file, find the server_name directive and set your domain.

```nginx
server_name example.com www.example.com;
```

Verify the configuration and, if you have no errors, reload it.

```sh
$ sudo nginx -t
$ sudo systemctl reload nginx
```

Now it's time to obtain our SSL certificates for the domain specified in the Nginx config file.

```sh
$ sudo certbot --nginx -d example.com -d www.example.com
```

⚠️ Before obtaining the certs, you'll have to point the domain to your DigitalOcean IP. That's the way Certbot verifies that you control the domain you're requesting a certificate for.

If that's successful, certbot will ask how you'd like to configure HTTPS. Finally, the certificates will be downloaded, installed and loaded.

Auto renewal

LetsEncrypt certificates are only valid for ninety days. However, the Certbot CLI includes a command to renew our SSL certificates, and we can automate this process with a crontab.

```sh
$ sudo crontab -e
```

Add a new line inside the crontab file and save it. Basically, you're asking your server to run the certbot renew --quiet command every day at 04:00.

```
0 4 * * * /usr/bin/certbot renew --quiet
```

Apps management

Both applications are started as daemons on the server, so even if the server is restarted, both apps come back up automatically.

carloscuesta.me: uses PM2, a production process manager for Node.js (see the sketch below).

carloscuesta.me/blog: Ghost uses ghost-cli to update and run the blog.
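
The PM2 side boils down to a few commands (a sketch; the entry point and process name are assumptions):

```sh
# start the website and give the process a name
pm2 start index.js --name carloscuesta.me

# generate an init script so PM2 itself starts on boot,
# then freeze the current process list
pm2 startup
pm2 save
```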

Nginx

I use Nginx as a reverse proxy in front of the applications that are running on localhost.

The first block of my configuration file redirects all requests to HTTPS.

```nginx
server {
  listen         80;
  listen    [::]:80;
  server_name    carloscuesta.me www.carloscuesta.me;
  return         301 https://$server_name$request_uri;
}
```

After enforcing HTTPS, we use another server block to set our locations. Those locations define how Nginx should handle requests to specific resources.

As an example, if you make a request to carloscuesta.me, Nginx will match our / location and proxy_pass the request to http://localhost:PORT, where the Node.js application is listening.

We're also enabling HTTP/2 and SSL for our server, providing the certificates and keys needed.

```nginx
server {
  listen 443 ssl http2 default_server;
  listen [::]:443 ssl http2 default_server;

  ssl_certificate ...;
  ssl_certificate_key ...;
  ssl_dhparam ...;

  server_name carloscuesta.me www.carloscuesta.me;

  location / {
    proxy_pass http://localhost:PORT;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
  }

  location ^~ /blog {
    # Same as / with another port
  }
}
```
\n","images":{"featured":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/moving-to-digitalocean.png"},"preview":{"src":"https://res.cloudinary.com/carloscuesta/image/upload/w_500/moving-to-digitalocean.png","lqpi":"https://res.cloudinary.com/carloscuesta/image/upload/t_lqpi-post-preview/moving-to-digitalocean.png"}},"slug":"moving-to-digitalocean","title":"Moving to DigitalOcean"}],"repositories":[{"description":"An emoji guide for your commit messages. ๐Ÿ˜œ ","language":"javascript","name":"gitmoji","stars":7468,"url":"https://github.com/carloscuesta/gitmoji"},{"description":"A gitmoji interactive command line tool for using emojis on commits. ๐Ÿ’ป","language":"javascript","name":"gitmoji-cli","stars":2260,"url":"https://github.com/carloscuesta/gitmoji-cli"},{"description":"A material design theme for your terminal. โœจ","language":"shell","name":"materialshell","stars":710,"url":"https://github.com/carloscuesta/materialshell"},{"description":"A Front End development Gulp.js based workflow. ๐Ÿš€","language":"javascript","name":"starterkit","stars":84,"url":"https://github.com/carloscuesta/starterkit"},{"description":"A material design theme for Hyper based on materialshell. โœจ","language":"javascript","name":"hyper-materialshell","stars":69,"url":"https://github.com/carloscuesta/hyper-materialshell"},{"description":"A simple and reusable React-Native error boundary component ๐Ÿ›","language":"javascript","name":"react-native-error-boundary","stars":47,"url":"https://github.com/carloscuesta/react-native-error-boundary"}]},"__N_SSG":true}