Controlled vs Uncontrolled Components

In this post we’re going to discuss Controlled and Uncontrolled components in React.

Controlled

A Controlled Component is one that takes its current value through props and notifies changes through callbacks like onChange. A parent component “controls” it by handling the callback, managing its own state, and passing the new values back down as props to the controlled component. You could also call this a “dumb component”.

// Controlled
class Form extends Component {
  constructor() {
    super();
    this.state = {
      name: '',
    };
  }

  handleNameChange = (event) => {
    this.setState({ name: event.target.value });
  };

  render() {
    return (
      <div>
        <input
          type="text"
          value={this.state.name}
          onChange={this.handleNameChange}
        />
      </div>
    );
  }
}

Uncontrolled

An Uncontrolled Component is one that stores its own state internally; you query the DOM using a ref to find its current value when you need it. This is a bit more like traditional HTML.

// Uncontrolled
class Form extends Component {
  handleSubmitClick = () => {
    const name = this._name.value;
    // do something with `name`
  }

  render() {
    return (
      <div>
        <input type="text" ref={input => this._name = input} />
        <button onClick={this.handleSubmitClick}>Sign up</button>
      </div>
    );
  }
}

In most cases you should use controlled components, but uncontrolled components are a reasonable choice when:

  • Your form is very simple and doesn’t need any instant validation.
  • Any validation that is needed runs only when the form is submitted.
  • Values need to be retrieved only on “submit”, and no field depends on any other field(s).

JavaScript Promises

As defined by MDN:

A Promise is a proxy for a value not necessarily known when the promise is created. It allows you to associate handlers with an asynchronous action’s eventual success value or failure reason. This lets asynchronous methods return values like synchronous methods: instead of immediately returning the final value, the asynchronous method returns a promise to supply the value at some point in the future.

It has three states:

  • pending: initial state, neither fulfilled nor rejected.
  • fulfilled: meaning that the operation completed successfully.
  • rejected: meaning that the operation failed.

Intro

A promise is basically a wrapper around a value that may or may not be known when the object is first instantiated (created), and provides a method for handling a value after it is known (resolved), or when it’s unavailable on failure (rejected).

Breakdown

Let’s look at some code:

function getCurrentTime(onSuccess, onFail) {
  // Get the current 'global' time from an API using Promise
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      // randomly decide if the date is retrieved or not
      var didSucceed = Math.random() >= 0.5;
      didSucceed ? resolve(new Date()) : reject('Error');
    }, 2000); // delay by 2000, 2 seconds before responding.
  })
}

Here we have a function getCurrentTime that returns a new Promise. The Promise executor is handed two functions, resolve and reject, which it calls to report success or failure back to the caller (the onSuccess and onFail parameters are unused leftovers from a callback style).

We use setTimeout and a random generator to flip between resolve and reject for our example.

var didSucceed = Math.random() >= 0.5;

Then we use a ternary expression to report a success (resolve) or failure (reject) state back to the caller.

didSucceed ? resolve(new Date()) : reject('Error');

Then, as we want to simulate an API call, we add a delay of 2 seconds.

setTimeout(function() {
  // ... code ...
}, 2000); // delay by 2000, 2 seconds before responding.

Usage

Now how to use it?

To catch the value on success, we’ll use the then() method available on the Promise instance. then() is called with whatever value the promise resolved with. For instance, in the example above, getCurrentTime() resolves with the current time (on successful completion); each then() call returns another promise, so further then() calls can be chained onto it, and so on and so forth.

To catch an error that occurs anywhere in the promise chain, we can use the catch() method.

getCurrentTime()
  .then(currentTime => getCurrentTime())
  .then(currentTime => {
    console.log('The current time is: ' + currentTime);
    return true;
  })
  .catch(err => console.log('There was an error:' + err))

.then Breakdown

So when we do,

.then(currentTime => getCurrentTime())

The callback receives the resolved time (onSuccess) and kicks off another getCurrentTime() call; whatever that new promise resolves with is passed to the next then() in the chain.

.then(currentTime => {
  console.log('The current time is: ' + currentTime);
  return true;
})

Which takes that value currentTime and prints it to the console.

The important thing to remember is each time we call .then() it operates on the previously returned value in the chain. So if we do something with the previous value the next .then() call will be handed that value, and so on.
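
A quick standalone illustration of that hand-off:

Promise.resolve(1)
  .then(value => value + 1)          // receives 1, returns 2
  .then(value => value * 10)         // receives 2, returns 20
  .then(value => console.log(value)) // logs 20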

fetch() method & React

Moving on, the fetch() method returns a Promise, so we can wire in a call to a backend API and handle it in the same way:

fetch("/api/getuser")
  .then(resp => resp.json())
  .then(resp => {
    const fullname = resp.fullname;
    this.setState({ fullname: fullname })
  })

First we make a call to /api/getuser that returns:

{
  id: "424324234343ffer3",
  fullname: "John Smith"
}

We parse the response with json(), pass the result to the next .then() call, pull the fullname out into a constant, and then use setState() to store fullname in the component’s local state.
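
If the request can fail, the same chain can be extended with a response check and a catch. This is only a sketch; the error state key is an example, not something from the code above:

fetch("/api/getuser")
  .then(resp => {
    // surface HTTP errors rather than trying to parse them
    if (!resp.ok) throw new Error('Request failed: ' + resp.status);
    return resp.json();
  })
  .then(resp => this.setState({ fullname: resp.fullname }))
  .catch(err => this.setState({ error: err.message }));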

References

Many thanks to these articles, especially Ari Lerner’s great series under Full Stack React.

Webpack 4 & Babel 7 Setup

Last year I had to upgrade Webpack to 4 and Babel to 7. Even though it sounds trivial, the upgrade included a lot of breaking changes: the new mode option amalgamating the production and development setups into one, switching libraries because some did not make the transition to 4, and a general reworking of the original layout into what is presented below.

Afterwards I’ll break down what each part does.

webpack.config.js
// webpack.config.js
const path = require('path');

const HtmlWebpackPlugin = require("html-webpack-plugin");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
const CopyWebpackPlugin = require('copy-webpack-plugin');
const UglifyJsPlugin = require('uglifyjs-webpack-plugin')
const WorkboxPlugin = require('workbox-webpack-plugin');

const devMode = process.env.NODE_ENV !== 'production';

const BUILD_DIR = path.resolve(__dirname, 'build');
const SRC_DIR = path.resolve(__dirname, 'src');

const isHot = path.basename(require.main.filename) === 'webpack-dev-server.js';

console.log(isHot)

console.log('BUILD_DIR', BUILD_DIR);
console.log('SRC_DIR', SRC_DIR);

const cssPlugin = new MiniCssExtractPlugin({
  filename: "style.css"
});

const htmlPlugin = new HtmlWebpackPlugin({
  template: "./public/index.html",
  filename: "./index.html"
});

const copyWebpack = new CopyWebpackPlugin([
    {from: './public/images', to: 'images'},
    {from: './public/fonts', to: 'fonts'},
    {from: './node_modules/font-awesome/fonts', to: 'fonts'},
    {from: './public/images/favicon.ico' },
    {from: './public/images/apple-icon.png' },
    {from: './public/robots.txt' }
  ],
  {copyUnmodified: false}
);

const uglifyJs = new UglifyJsPlugin({
  parallel: 4
});

const workbox = new WorkboxPlugin.GenerateSW({
  // these options encourage the ServiceWorkers to get in there fast
  // and not allow any straggling "old" SWs to hang around
  clientsClaim: true,
  skipWaiting: true
})

module.exports = {
    target: 'web',
    entry: {
      index: [SRC_DIR + '/index.js']
    },
    output: {
      publicPath: '/',
      path: BUILD_DIR,
      filename: '[name].bundle.js'
    },
    devtool: 'source-map',
    devServer: {
      port: 3000,
      disableHostCheck: true,
      host: 'localhost',
      contentBase: BUILD_DIR,
      historyApiFallback: true,
      compress: true,
      hot: true,
      open: true,
      proxy: {
        '/api/*': 'http://localhost:5000',
        '/media/*': 'http://localhost:5000'
      }
    },
    module : {
        rules : [
            {
                test: /\.s?[ac]ss$/,
                use: [
                    isHot ? "style-loader" : MiniCssExtractPlugin.loader,
                    { loader: 'css-loader', options: { url: false, sourceMap: true } },
                    { loader: 'sass-loader', options: { sourceMap: true } }
                ],
            },
            {
                test: /\.js$/,
                exclude: /node_modules/,
                use: {
                  loader: "babel-loader"
                }
            }
        ]
    },
    plugins: devMode
     ? [cssPlugin, htmlPlugin, copyWebpack, workbox]
     : [cssPlugin, htmlPlugin, copyWebpack, workbox, uglifyJs]
    ,
    mode : devMode ? 'development' : 'production'
};
.babelrc
// .babelrc
{
  "presets": [
    "@babel/preset-react",
    [ "@babel/preset-env", {
      "targets": {
        "browsers": [
          ">0.25%",
          "not op_mini all"
        ]
      }
    }]
  ],
  "plugins": [
    "@babel/plugin-proposal-object-rest-spread",
    "@babel/plugin-proposal-class-properties",
    "@babel/plugin-transform-runtime"
  ]
}

Breakdown

Build & Src Directories

First I define constants to store where the source is (/src) and compiled build (via webpack-dev-tools & webpack, in /build), we’ll need those later.

const BUILD_DIR = path.resolve(__dirname, 'build');
const SRC_DIR = path.resolve(__dirname, 'src');

Hot Loading

For hot reloading, when the code changes refresh the client, I had a problem with MiniCssExtractPlugin not handling it. So I setup a constant that if dev-server is loaded it will run the appropriate styling libraries.

const isHot = path.basename(require.main.filename) === 'webpack-dev-server.js';

use: [
    isHot ? "style-loader" : MiniCssExtractPlugin.loader,
    { loader: 'css-loader', options: { url: false, sourceMap: true } },
    { loader: 'sass-loader', options: { sourceMap: true } }
],

CSS & HTML Plugins

Next defining the css extractor and html plugins, again defining the setup inside a constant. A really handy method that cleans up how they’re implemented later on under ‘plugins’.

const cssPlugin = new MiniCssExtractPlugin({
  filename: "style.css"
});

const htmlPlugin = new HtmlWebpackPlugin({
  template: "./public/index.html",
  filename: "./index.html"
});
plugins: devMode
 ? [cssPlugin, htmlPlugin, copyWebpack, workbox]
 : [cssPlugin, htmlPlugin, copyWebpack, workbox, uglifyJs]
,

I don’t use uglifyjs in development as there’s no need, and it’ll massively slow down hot-loading.

CopyWebpackPlugin

For CopyWebpackPlugin I setup the usual copy definitions for images, fonts and font-awesome so everything’s self hosted.

Then, for the favicons and the robots.txt file, which usually reside in the root, I set the from but not the to, so they default to being copied to the root of the build output (correct me if I’m wrong).

const copyWebpack = new CopyWebpackPlugin([
    {from: './public/images', to: 'images'},
    {from: './public/fonts', to: 'fonts'},
    {from: './node_modules/font-awesome/fonts', to: 'fonts'},
    {from: './public/images/favicon.ico' },
    {from: './public/images/apple-icon.png' },
    {from: './public/robots.txt' }
  ],
  {copyUnmodified: false}
);

copyUnmodified defaults to false; setting it to true “copies files, regardless of modification when using watch or webpack-dev-server. All files are copied on first build, regardless of this option”.

uglifyJs

Next we set up UglifyJsPlugin, which minifies and mangles the bundle, which as a side effect also makes it harder for others to reverse engineer.

As I usually have more than one core available, I set it up to use 4 parallel processes, as minification is a very CPU-intensive task.

const uglifyJs = new UglifyJsPlugin({
  parallel: 4
});

WorkboxPlugin

WorkboxPlugin is a tool to implement service workers inside your web app and enable caching, so your application has a chance to keep operating while offline and then gracefully reconnect when back online.

const workbox = new WorkboxPlugin.GenerateSW({
  // these options encourage the ServiceWorkers to get in there fast
  // and not allow any straggling "old" SWs to hang around
  clientsClaim: true,
  skipWaiting: true
})

You can read more here https://webpack.js.org/guides/progressive-web-application/#adding-workbox.

devServer

While in development I have the API server running as a separate process on port 5000, with the client at 3000, so I set up proxies for the /api and /media paths.

I enable compression and hot loading and set the hostname. With that, as soon as the client starts up, the browser opens the site on http://localhost:3000 and hot-reloads it as the code changes.

devServer: {
  port: 3000,
  disableHostCheck: true,
  host: 'localhost',
  contentBase: BUILD_DIR,
  historyApiFallback: true,
  compress: true,
  hot: true,
  open: true,
  proxy: {
    '/api/*': 'http://localhost:5000',
    '/media/*': 'http://localhost:5000'
  }
},

modes

Webpack 4 introduced the concept of build modes, so there’s only one webpack config file to maintain.

I define a constant that derives the current mode from the process.env.NODE_ENV environment variable.

const devMode = process.env.NODE_ENV !== 'production';

Which I can then use for mode, plugins and whatever else needs it.

plugins: devMode
 ? [cssPlugin, htmlPlugin, copyWebpack, workbox]
 : [cssPlugin, htmlPlugin, copyWebpack, workbox, uglifyJs]
,
mode : devMode ? 'development' : 'production'
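
Since the config reads NODE_ENV, the npm scripts need to set it. A minimal sketch of how that could look (cross-env is an assumption here, used so the variable also works on Windows):

// package.json (sketch)
"scripts": {
  "start": "webpack-dev-server",
  "build": "cross-env NODE_ENV=production webpack"
}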

Babel

For Babel, version 7 required a few changes from the old setup.

// .babelrc 6
{
  "presets": [
    "react",
    "env"
  ],
  "plugins": [
    "transform-object-rest-spread",
    "transform-class-properties",
    "transform-runtime"
  ]
}

We still load support for object rest/spread (@babel/plugin-proposal-object-rest-spread), class properties (@babel/plugin-proposal-class-properties) and the runtime transform, which deduplicates Babel’s common helpers across your compiled app to help with code size (@babel/plugin-transform-runtime).

// .babelrc 7
{
  "presets": [
    "@babel/preset-react",
    [ "@babel/preset-env", {
      "targets": {
        "browsers": [
          ">0.25%",
          "not op_mini all"
        ]
      }
    }]
  ],
  "plugins": [
    "@babel/plugin-proposal-object-rest-spread",
    "@babel/plugin-proposal-class-properties",
    "@babel/plugin-transform-runtime"
  ]
}

The juicy part is:

"@babel/preset-react",
[ "@babel/preset-env", {
  "targets": {
    "browsers": [
      ">0.25%",
      "not op_mini all"
    ]
  }
}]

Rather than compiling to support every browser, which increases code size, this limits support to browsers with more than 0.25% global usage share.

It also removes support for Opera Mini across all versions:

not op_mini all

These are browserslist queries; you can read more about them here: https://github.com/browserslist/browserslist.

The GROW Model

I was introduced to the GROW model on my first cohort with the Mozilla Open Leaders program using it as a method of setting goals and navigating the problems you experience when trying to achieve those goals.

Originally developed in the UK, it was used extensively in corporate coaching in the late 1980s and ’90s. We used it as a framework for our meetings, and it is broken into five parts.

G. Goal

First establish the end point, where the client wants to be, defined in such a way that it is clear when that point has been reached.

To reference Avengers, this would be when Thanos is defeated and everyone who died in the snap is resurrected. What’s good about this goal is that it’s clear and easy to identify when that point has been reached.

R. Reality

The reality is where the client is now. What are the issues, what are the challenges, and how far away are they from the goal?

To get an unbiased opinion this should come from the client but you can help by asking questions to make sure the reality is fully explored.

  • What is happening now? 
  • What is the effect or result of this?
  • Have you already taken any steps towards your goal?
  • Does this goal conflict with any other goals or objectives?

So continuing our Avengers example, what is our reality? Half the universe has been wiped out, no one knows if they can be brought back and the glove was destroyed.

O. Obstacles

The third part can be broken down into two sides, Obstacles and Options. 

These obstacles are what’s stopping the client from getting to where they want to be:

  • What are they?
  • Have they had success in the past, what made the difference?
  • What might you need to change in order to achieve the goal? 

In reality if there were no obstacles, then they’d already be at their goal.

Diving back to the movies, the steps they already took were to bring the fight to Thanos on the world of Titan, but that wasn’t the grand success they hoped for. What were the obstacles in their way? What could they have done differently to achieve their goal?

O. Options

Now we’ve identified our Goal, Reality and Obstacles what are our Options? What things can we do to get to our Goal?

Things you can do here are,

  • Take time to brainstorm the full range of options.
  • Offer suggestions carefully; remember, you don’t want to be seen as the expert in their situation. You’re trying to unlock valuable nuggets of information that are available to the client but that they might have overlooked.
  • Are there any constraints we can remove?
  • Are we repeating some action, without noticing, that we could potentially remove?

Once we’re happy with the options available to us we can move onwards.

I understandably don’t know what happens next with Avengers. However, that doesn’t mean we can’t break down what their options are and make sure they don’t repeat the same mistakes again.

W. Way Forward

So now we’ve identified the obstacles and uncovered our options we can move on to deciding on a path forward.

We can go ahead and convert those options into action points, attach time frames to the relevant ones, and start building a plan for getting to our end goal.

Things we can do,

  • Commit to action.
  • Set a long-term aim if appropriate.
  • Make specific steps and define timing.
  • Ensure choices are made.

Epilogue

One of the takeaways from this process is to understand that it’s really focused on asking questions. 

You don’t have to be an expert in their situation; you’re more of a facilitator in the conversation.

JEST & Nock Testing

Let’s go through the process of setting up JEST with our React application, writing our first test and mocking an API call.

Setup

First we’ll need to install dependencies via yarn

yarn add enzyme
yarn add enzyme-adapter-react-16
yarn add jest
yarn add jest-enzyme
yarn add nock

And add jest to our package.json to fire up JEST

"scripts": {
  "start": "webpack-dev-server --mode development --open --hot --progress",
  "build": "webpack --mode production",
  "test": "jest"
},

We’ll also need a JEST setup file to load the enzyme adapter for React

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });
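
Jest also needs to be told about that setup file. One way, assuming the file lives in the project root as jest.setup.js, is to point the jest section of package.json at it:

// package.json
"jest": {
  "setupFiles": ["./jest.setup.js"]
}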

File Layout

__mockData__/[mock json files]
__tests__/[test files]
jest.setup.js
package.json

Mock Data

Rather than interact with live data we need to mock any data we give our tests so we have a staple ground with which to interact with our application.

We’ll do this quite simply by storing our mock data in JSON files and pulling it in much like how our api would send it.

// __mockData__/videos.json
[
  {
    "id": "01",
    "title": "Avengers Infinity War"
  },
  {
    "id": "02",
    "title": "Avengers Endgame"
  }
]

First Test

Now let’s move on to writing our first test. For this example we’re going to make a simple async request and then use Nock to intercept the GET request and answer it with the data from our videos.json file.

First our sample API call

//../src/videoRequest.js
import request from 'superagent';

async function listVideos(category) {
  return await request
    .get(`/v1/videos/category/${category}`)
    .set('accept', 'json')
    .then(function(response) {
      return response.body
  });
}
export {
  listVideos
}

And now our Test

// __tests__/testAPI.js
import React, { Component } from 'react';
import { mount, shallow } from 'enzyme';
import toJSON from 'enzyme-to-json';
import nock from 'nock';

// load helper get method for video categories
import { listVideos } from './../src/videoRequest';

// load mock data
let mockVideos = require('./../__mockData__/videos.json');

// setup get intercepts,
// specifying the API path and data to return.
nock('http://localhost')
  .get('/v1/videos/')
  .reply(200, { mockVideos });

nock('http://localhost')
  .get('/v1/videos/category/all')
  .reply(200, { mockVideos });

// describe tests
describe('videoRequest Component', () => {
  describe('when helpers fired', () => {
    // define test, applying async similar to the live version
    it('return videos in category all', async () => {
      // fire get request
      const results = await listVideos('all')
      // define expected returned data
      expect(results.mockVideos[0].title).toEqual("Avengers Infinity War")
    })
  })
});

Notice at the start we load the mock data into memory.

let mockVideos = require('./../__mockData__/videos.json');

Then, knowing the API route the request will use, we tell nock to intercept that very path and method, returning that very same data.

nock('http://localhost')
  .get('/v1/videos/category/all')
  .reply(200, { mockVideos });

That way, when we test against the data given, we have predictable results.

expect(results.mockVideos[0].title).toEqual("Avengers Infinity War")

We can use this same method to mock other GET requests, POST requests and error responses, so we’re able to test our components without them touching live data.
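
As a rough sketch of that idea (the route and payload below are made up for illustration), a POST and an error response can be intercepted in the same way:

// mock a POST, replying with the created record
nock('http://localhost')
  .post('/v1/videos', { title: 'Avengers Endgame' })
  .reply(201, { id: '03', title: 'Avengers Endgame' });

// mock a server error to exercise the failure path
nock('http://localhost')
  .get('/v1/videos/category/unknown')
  .reply(500, { error: 'Something went wrong' });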

React AutoComplete, Async & Aggregates

Recently I had to create an autocomplete component for a React project. The difference here is that even though the data was coming from the popular NoSQL datastore MongoDB, I had to pull it from several sources.

With a Mongo and Node.js API you really only have one round trip in which to request all the data you need for the response.

Now, you can use async with Mongoose to query several collections at once:

Server: With Async

var team = require('../../models/team.js');
var user = require('../../models/user.js');
var async = require('async');

try {
  async.parallel({
    teams: function(cb){
      team.find({}, '_id title')
      .lean() // cut out virtuals
      .exec(cb)
    },
    users: function(cb){
      user.find({}, '_id email')
      .lean() // cut out virtuals
      .exec(cb)
    }
  }, function(err, results){

    const teams = results.teams // store teams data
    const users = results.users // store users data

    // and then process the results
  });
} catch (err) {
  // handle any error thrown while wiring up the queries
}

But for autocomplete that approach seems impractical, so we’ll be using Mongoose’s aggregate feature to return just the data we need in the format we want.

Client: AutoComplete Component

First we’ll create a React component to autocomplete team names. We’ll use the isomorphic-fetch library to trigger the call, React-Select to render the results, and props to pass the selected data back and forth to the parent.

Here the autocomplete component returns all teams with their assigned user.

import React, { Component } from 'react';
import Select from 'react-select';
import fetch from 'isomorphic-fetch';

class AutoCompleteTeams extends Component {
  // define initial props
  static defaultProps = {
    className: '',
    update: null,
    initialValue: { _id: '', title: '' }
  }

  // define initial state
  constructor(props) {
    super(props);
    this.state = {
      selectedArray: new Object()
    }
  }

  componentWillReceiveProps(newProps){
    if (newProps === this.props) {
      this.setState({
        selectedArray: new Object()
      }, () => {
        return;
      })
    }

    this.setState({
      selectedArray: newProps.initialValue || new Object()
    }, function() {
    })
  }

  // onChange (item selected), return value to parent
  // => { title: 'Alexandria (john@smith.com)' }
  onChange = (value) => {
    this.setState({
      selectedArray: value,
    }, function() {
      this.props.update(value);
    });
  }

  // on autocomplete, perform GET request.
  // We'll also send credentials as we're operating
  // within a secure session
  autocomplete = (input) => {
    if (!input) {
      return Promise.resolve({ options: [] });
    }

    return fetch(`/v1/users/autocomplete/teams?q=${input}`, {
      method: 'GET',
      credentials: 'include' })
      .then((response) => response.json())
      .then((json) => {
        return { options: json };
      });
  }

  // render autocomplete using Select.Async displaying
  render() {
    const { className} = this.props
    return (
       <div className={`form-control ${className}`}>
         <Select.Async
           value={this.state.selectedArray}
           onChange={this.onChange}
           valueKey="_id" // define which value to use when item selected
           labelKey="title" // define which field to display in select
           loadOptions={this.autocomplete}
           backspaceRemove={true}
         />
       </div>
    )
  }
}

export default AutoCompleteTeams;

Client: Parent Usage

We can then add it to our parent component via:

import React, { Component } from 'react';
import AutoCompleteTeams from './../sharedComponents/AutoCompleteTeams'

class AddTeamLeader extends Component {
  // keep the selected team in local state so it can be
  // passed back down to the autocomplete as initialValue
  constructor(props) {
    super(props);
    this.state = { team: { _id: '', title: '' } };
  }

  setTeam = (value) => {
    this.setState({ team: value })
  }

  render() {
    return (<div>
      <AutoCompleteTeams
        className="col-xs-12 col-sm-6 col-md-8"
        name="title"
        id="title"
        initialValue={this.state.team}
        update={this.setTeam}
        />
      </div>)
  }
}

Server: Aggregate Query

Now we’ll define the GET request, using aggregation to return a list of teams with their respective leader’s email in brackets.

var team = require('../../models/team.js');

exports.autocompleteTeams = function(req, res) {
  const q = req.query.q || ''
  if (q) {
    team.aggregate([
      // Order is important, each extra match drills down the results available.
      {$lookup: {from: 'users', localField: 'leader', foreignField: '_id', as: 'leader'} },
      {$project: {"title" :
        { $concat : [
          "$title",
          " (",
          { $arrayElemAt:["$leader.email", 0] },
          ")"
        ] }
      }},
      {$match: {"title": { "$regex": q, "$options": "i" }}},
      {$sort: {"title": 1}}
    ])
    .limit(25)
    .then((records) => res.send(records))
  } else {
    team.aggregate([
      {$lookup: {from: 'users', localField: 'leader', foreignField: '_id', as: 'leader'} },
      {$project: {"title" :
        { $concat : [
          "$title",
          " (",
          { $arrayElemAt:["$leader.email", 0] },
          ")"
        ] }
      }},
      {$sort: {"title": 1}}
    ])
    .limit(25)
    .then((records) => res.send(records))
  }
}

Let’s drill down into the aggregate action. As we cannot access virtuals inside Mongoose queries, we can’t pull this data in via a populate() call and pass it into our records in one go, so we do a lookup connecting team to users via its assigned ‘leader’ field, which matches the user’s _id field.

{$lookup: {from: 'users', localField: 'leader', foreignField: '_id', as: 'leader'} },

Next, with the user connected to the team, we project (create) the title value, concatenating the email address onto the end in brackets.

{$project: {"title" :
  { $concat : [
    "$title",
    " (",
    { $arrayElemAt:["$leader.email", 0] },
    ")"
  ] }
}},

Then we match the result to the API request value.

{$match: {"title": { "$regex": q, "$options": "i" }}},

Sort the results.

{$sort: {"title": 1}}

Giving us an end output along these lines:
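
(The _id values and emails below are illustrative; the real values come from your teams and users collections.)

[
  { "_id": "5c0a1b2c3d4e5f6a7b8c9d0e", "title": "Alexandria (john@smith.com)" },
  { "_id": "5c0a1b2c3d4e5f6a7b8c9d0f", "title": "Bravo Team (jane@doe.com)" }
]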

React Extract & Upload to S3 (2 of 2)

In this next part we’ll break down how I set up the Agenda task worker, how I defined the ‘upload ready’ task, and the recursive uploader.

Logging a Task

Agenda is a background task scheduler similar to Ruby on Rails’ ‘Whenever’ plugin. Rather than hosting the tasks in a separate datastore like Redis, we opted to keep it light by letting Agenda create the ‘agendaJobs’ collection within our primary MongoDB database, then using the Agenda methods to push new tasks to it for processing.

In the previous post we did this via

const Agenda = require('agenda');
var agenda = new Agenda({db: {address: process.env.MONGODB_URI}});

var job = agenda.create('upload ready', {
  extract_location: record.extract_location,
  story_id: record.id
});
job.save(); // persist the job so the worker can pick it up

In the first two lines we require Agenda and create an Agenda instance connected to our Mongo database. We then use the .create method with the name of the task and an object of any attributes it may need for processing; here it’s the extract location and the id of the record it will eventually need to update. Calling job.save() persists the job so the worker can pick it up.

The Worker

So we’ve logged the task into our agendaJobs collection for processing. How do we build the process that watches for new tasks and handles them when they come in?

For this we’ll create a simple js script that we can run with pm2 on our server, or a Procfile on Heroku.

// worker.js
'use strict';

const Agenda = require('agenda');
const mongoose = require('mongoose');
var { uploadDirToS3 } = require('./lib/workers/s3uploader');
require('dotenv').config();

// setup mongoose
var mongooseOptions = {
  reconnectInterval: 500, // Reconnect every 500ms
  reconnectTries: 30, // max number of retries
  keepAlive: true, // keep alive for long running connections
  poolSize: 10, // Maintain up to 10 socket connections
  bufferMaxEntries: 0, // If not connected, return errors immediately
  useNewUrlParser: true
};
mongoose.Promise = global.Promise;
mongoose.connect(process.env.MONGODB_URI || 'mongodb://localhost/et', mongooseOptions)

// setup delayed job worker
var agenda = new Agenda({db: {address: process.env.MONGODB_URI}});

// -----------------------

// define story uploader
agenda.define('upload ready', {priority: 'highest', concurrency: 1}, function(job, done) {
  var data = job.attrs.data;
  uploadDirToS3(data.extract_location, data.story_id, job, done)
});

// start job runner
agenda.on('ready', function() {
  agenda.start();
  console.log("Worker started")
});

agenda.on('success:upload ready', function(job) {
  console.log('Successfully uploaded story');
});

From the top we:

  • Require Agenda.
  • Require our s3uploader (which we’ll talk about next).
  • Setup our connection to MongoDB via the library Mongoose.
  • Give Agenda a connection to our mongo database server.
  • Define the ‘upload ready’ task, set its priority and concurrency (how many of these jobs can run at once), then use job.attrs.data to access the attributes we defined when we created the task and pass them to the uploadDirToS3 method.
  • Next we’ll start the job runner.
  • And finally on success we’ll notify the console.

In our package.json we can define the worker with a ‘worker’ script.

"scripts": {
  "start": "nodemon --ignore media/ --ignore client/ --exec 'node server.js'",
  "worker": "node worker.js",
  "test": "jest"
},
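
On Heroku the equivalent is a Procfile next to the web process. A sketch (the web command depends on how your server is started):

web: node server.js
worker: node worker.js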

‘upload ready’ Script

So far we’ve built the client and server side to get the zip file there, unzipped it, and set up a task worker to operate on our uploads when they get logged. But how are we going to handle the actual uploads?

Here we need to build a method that, given a directory, can find all the files inside it and then walk from directory to directory, pushing the content up to S3.

Now we can do this in one of two ways.

Depth first, where we go down each directory as far as we can, pushing content to S3 before moving on to the next.

Breadth first, where we stay as close to the top as possible, working one level at a time until we reach the bottom of the tree.
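
As a minimal depth-first sketch (assuming the aws-sdk v2 client and a BUCKET_NAME environment variable; content types and retries are left out), uploadDirToS3 could look something like this:

// lib/workers/s3uploader.js (sketch)
const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// entry point used by the Agenda job: walk the extract
// directory depth-first and upload every file it finds
async function uploadDirToS3(dir, storyId, job, done) {
  try {
    await walk(dir, dir, storyId);
    done();
  } catch (err) {
    done(err);
  }
}

async function walk(root, current, storyId) {
  for (const entry of fs.readdirSync(current)) {
    const fullPath = path.join(current, entry);
    if (fs.statSync(fullPath).isDirectory()) {
      // depth first: descend before touching siblings
      await walk(root, fullPath, storyId);
    } else {
      await s3.upload({
        Bucket: process.env.BUCKET_NAME, // assumed env var
        Key: `${storyId}/${path.relative(root, fullPath)}`,
        Body: fs.createReadStream(fullPath)
      }).promise();
    }
  }
}

module.exports = { uploadDirToS3 };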

React Extract & Upload to S3 (1 of 2)

For a recent project I had to build a service where the user uploads a zip file; the service would then unpack it, process it and upload it to S3. On other platforms like Rails this could be achieved with Paperclip or a similar library, but as we were building everything in Node.js this had to be done from the ground up.

Upload

To build the uploader I used a React package called react-dropzone, which creates an area of the form a user can drag and drop a file onto, or use the native file select.

import Dropzone from 'react-dropzone';

render() {
  return (<div>
    <Dropzone
      disableClick={true}
      multiple={false}
      accept={'application/zip'}
      onDrop={this.onDrop}>
      <div className="dropzone-text">Drop your zip file here</div>
    </Dropzone>
    { (this.state.percent > 0) &&
       <div className="progress-upload">
         <Progress
            bar
            color="success"
            value={this.state.percent}
            className="upload-story-bar">
              {Math.round(this.state.percent)}%
          </Progress>
       </div>
     }
  </div>
)}

As you can see, the Dropzone component is wired to only accept files with a MIME type of ‘application/zip’, so only zip files will be allowed for upload.

I also use the Progress component from Reactstrap (a React wrapper for Bootstrap) to render a progress bar while the upload is taking place.

When the onDrop action fires, here is the resulting handler:

onDrop = (file) => {
  this.setState({ percent: 0 });
  let data = new FormData();
  let singleFile = file[0];
  data.append('media', singleFile);
  var that = this;

  SuperAgent.put(`/v1/story/upload_video/${this.state.id}`)
    .send(data)
    .on('progress', e => {
      if(e.direction === "upload") {
        this.setState({
          percent: e.percent,
          percent_message: `${e.direction} is done ${e.percent}%`
        });
      }
    })
    .end(function(err, resp) {
      if (err) {
        // `this` isn't the component inside this callback, use `that`
        return that.handleError(err);
      }
      that.setState({
        upload_filename: resp.body.upload_filename,
        upload_filepath: resp.body.upload_filepath,
        extract_location: resp.body.extract_location,
        complete: true
      });

      return resp;
    });
}

As you can see, we’re using the SuperAgent library to handle the PUT request to the server; the .on(‘progress’) handler updates the percent state, which in turn updates the progress bar on render.
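
On the server side, the route behind that PUT has to accept the multipart upload. A minimal sketch with Express and multer (both assumptions here; the unzip and Agenda steps are only stubbed as comments) might look like:

// routes/story.js (sketch)
const express = require('express');
const multer = require('multer');

const router = express.Router();
// store the incoming zip under media/uploads
const upload = multer({ dest: 'media/uploads' });

router.put('/v1/story/upload_video/:id', upload.single('media'), (req, res) => {
  // req.file.path is where multer stored the zip
  const extractLocation = `media/extracted/${req.params.id}`;

  // 1. unzip req.file.path into extractLocation (with a zip library)
  // 2. create the 'upload ready' Agenda job described in part 2

  res.json({
    upload_filename: req.file.originalname,
    upload_filepath: req.file.path,
    extract_location: extractLocation
  });
});

module.exports = router;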

Functional Programming

My partner started watching Mattias Johansson’s FunFunFunction series on Functional Programming, so to help and also give myself a refresher I’ve put together this massive post on the series.

Hope you find these useful.

Anonymous functions

Anonymous functions are functions that are dynamically declared at runtime. They are declared as a function expression rather than with a function declaration.

// named function
function triple(x) {
  return x * 3
}
// anonymous function
var triple = function(x) {
  return x * 3
}
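
ES6 arrow functions are another, terser way to write an anonymous function:

// es6 arrow function
var triple = (x) => x * 3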

Higher Order functions

Higher Order Functions help us with composition by allowing us to compose lots of little functions into bigger functions, and so on; we break the task down into little pieces, which helps make it more readable and easier to understand.

var animals = [
  { name: 'peter', species: 'rabbit'},
  { name: 'james', species: 'dog'}
]

// function stored in a variable
var isDog = function(animal) {
  return animal.species === 'dog'
}

var dogs = animals.filter(isDog) // [{ name: 'james', species: 'dog' }]
var otherAnimals = animals.filter(animal => !isDog(animal)) // [{ name: 'peter', species: 'rabbit' }]

// to return what we need,
// breaking down the problem.

Map function

var new_array = arr.map(function callback(currentValue[, index[, array]]) {
    // Return element for new_array
}[, thisArg])

Map is a higher order function that iterates through an array; unlike filter, it doesn’t throw away items that don’t match a predicate, it transforms each item.

var animals = [
  { name: 'peter', species: 'rabbit'},
  { name: 'james', species: 'dog'}
]

// es5 version
var names = animals.map(function(x) { return x.name })

// es6 version
var names = animals.map((x) => x.name)

// result
// > ['peter', 'james']

Reduce function

arr.reduce(callback[, initialValue])

The reduce() method executes a reducer function (that you provide) on each member of the array resulting in a single output value.

Unlike other transforms you provide a function to operate on each member of the array, returning a single value.

var orders = [
  { amount: 250 },
  { amount: 250 }
]
var totalAmount = orders.reduce((sum, order) => {
  return sum + order.amount
}, 0) // start the sum at 0

// result
// 500

We can go further with it:

// data.txt (tab-separated fields)
// john smith   blender   80   2

var fs = require('fs')

var output = fs.readFileSync('data.txt', 'utf8')
  .trim()
  .split('\n')
  .map(line => line.split('\t'))
  .reduce((customers, line) => {
    // if this customer hasn't been seen yet, start with an empty array
    customers[line[0]] = customers[line[0]] || []
    // attach this purchase to the customer's array
    customers[line[0]].push({
      name: line[1],
      price: line[2],
      quantity: line[3]
    })
    return customers
  }, {}) // start reducing from an empty object

// create json string from output with 2 spaces indentation
console.log(JSON.stringify(output, null, 2))

// result
{
  "john smith": [
    {
      "name": "blender",
      "price": "80",
      "quantity": "2"
    }
  ]
}

Closures

A closure is the combination of a function and the lexical environment within which that function was declared.

Languages such as Java provide the ability to declare methods private, meaning that they can only be called by other methods in the same class.

JavaScript does not provide a native way of doing this, but it is possible to emulate private methods using closures. Private methods aren’t just useful for restricting access to code: they also provide a powerful way of managing your global namespace, keeping non-essential methods from cluttering up the public interface to your code.

For every closure we have three scopes:

  • Local Scope (Own scope)
  • Outer Functions Scope
  • Global Scope

// global scope
var e = 10;
function sum(a){
  return function(b){
    return function(c){
      // outer functions scope
      return function(d){
        // local scope
        return a + b + c + d + e;
      }
    }
  }
}

console.log(sum(1)(2)(3)(4)); // log 20

It is unwise to unnecessarily create functions within other functions if closures are not needed for a particular task, as it will negatively affect script performance both in terms of processing speed and memory consumption.

Currying

Currying is the process of taking a function with multiple arguments and returning a series of functions that take one argument and eventually resolve to a value. The original function volume takes three arguments, but once curried we can instead pass in each argument to three nested functions.

Alternatively, it’s a function that doesn’t take all of its arguments up front. It expects you to give it the first argument, and then returns another function which you call with the second argument, which in turn returns a new function which you call with the third argument, and so on… until all the arguments have been provided, at which point the function at the end of the chain returns the value you actually want.

let dragon =
  name =>
    size =>
      element =>
        name + ' is a ' +
        size + ' dragon that breathes ' +
        element + '!'

let fluffy = dragon('fluffy')
let tiny = fluffy('tiny')

console.log(tiny('lightning'))

// => fluffy is a tiny dragon that breathes lightning

Recursion

Recursion is when a function calls itself until it’s done.

let categories = [
  {id: 'animals', parent: null},
  {id: 'mammals', parent: 'animals'},
  {id: 'cats', parent: 'mammals'},
  {id: 'dogs', parent: 'mammals'},
  {id: 'chihuahua', parent: 'dogs'},
  {id: 'labrador', parent: 'dogs'},
  {id: 'persian', parent: 'cats'},
  {id: 'siamese', parent: 'cats'},
  {id: 'ghosts', parent: null},
  {id: 'casper', parent: 'ghosts'}
]

let makeTree = (categories, parent) => {
  let node = {};
  categories
    // keep every category whose
    // parent matches the one given
    .filter(c => c.parent === parent)
    .forEach(c =>
      // then for each matching category run makeTree
      // again, using its id as the parent
      node[c.id] = makeTree(categories, c.id))
  // then return the node once we're done
  return node;
}

console.log(
  JSON.stringify(
    makeTree(categories, null)
    , null, 2));

// output
{
  "animals": {
    "mammals": {
      "cats": {
        "persian": {},
        "siamese": {}
      },
      "dogs": {
        "chihuahua": {},
        "labrador": {}
      }
    }
  },
  "ghosts": {
    "casper": {}
  }
}

Promises

new Promise(executor);

The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.

A Promise is in one of these states:

  • pending: initial state, neither fulfilled nor rejected.
  • fulfilled: meaning that the operation completed successfully.
  • rejected: meaning that the operation failed.

We did this in a previous article but to recall the code:

function getCurrentTime(onSuccess, onFail) {
  // Get the current 'global' time from an API using Promise
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      // randomly decide if the date is retrieved or not
      var didSucceed = Math.random() >= 0.5;
      didSucceed ? resolve(new Date()) : reject('Error');
    }, 2000); // delay by 2000, 2 seconds before responding.
  })
}

getCurrentTime()
  // onSuccess or resolve
  .then(currentTime => getCurrentTime())
  // manipulates the previously returned value
  .then(currentTime => {
    console.log('The current time is: ' + currentTime);
    return true;
  })
  // onFail or reject
  .catch(err => console.log('There was an error:' + err))

Functors

Functors are simply something that can be mapped over.

Why is this useful? Because it allows us to build functions that can operate on data that they were not originally designed to work with.

The map and filter methods aren’t themselves functors; it’s the objects that implement map (arrays, promises, and so on) that are functors.

console.log([ 2, 4, 6 ].map(x => x + 3))
// => [ 5, 7, 9 ]

With arrays

const array = [ 2, 4, 6 ]
const addThree = (x) => x + 3
const mappedArray = array.map(addThree)

console.log(mappedArray)
// => [ 5, 7, 9 ]

Or objects

const dragons = [
  { name: 'Fluffykins', health: 70  },
  { name: 'Deathlord', health: 65000 },
  { name: 'Little pizza', health: 2 },
]

const names =
  dragons.map(dragon => dragon.name)

console.log(names)

// result
[
    'Fluffykins',
    'Deathlord',
    'Little pizza'
]

Functor laws:

1. Transformation of contents

The basic idea is that the map method of the functor takes the contents of the Functor and transforms each of them using the transformation callback passed to map.

In this case, the transformation callback is the function that transforms a dragon object into just a dragon name.
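
In code that looks like:

// the transformation callback turns each dragon object into just its name
dragons.map(dragon => dragon.name)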

2. Maintain structure

The second thing that Array#map does in order to qualify Array for the title of Functor is that it maintains structure.

If you call .map on an array that is three long, it returns an array that is three long. It never changes the length of the array, it doesn’t return null.

3. Returns a new functor

The third and final thing that Array#map does in order to be functor-material is that the value map returns must be a functor of the same type.

Because of this, we can chain map calls like this:

const dragons = [
  { name: 'Fluffykins', health: 70  },
  { name: 'Deathlord', health: 65000 },
  { name: 'Little pizza', health: 2 }
]
const nameLengths =
    dragons
        .map(dragon => dragon.name)
        .map(dragonName => dragonName.length)

console.log(nameLengths)
// [ 10, 9, 12 ]

Here we have the same array of dragons, but after we extract the names, we get the length of each name. Because the first map function returns a functor, we can keep calling map on it. You can also do map map map map chaining with promises, or any other functor.

Streams

A stream is a flow of values that will eventually arrive.

// load file handler
const fs = requre('fs')
// load stream library supporting promises
const highland = require('highland')
// read file in utf8 format
highland(fs.createReadStream('customers.csv', 'uft8'))
  //split into each line
  .split()
  // break out each line into single items by ,
  .map(line => line.split(','))
  // take that array and return an object
  .map(parts => ({
    name: parts[0],
    numPurchases: parts[1]
  }))
  // filter out purchases above 2
  .filter(customer => customer.numPurchases > 2)
  // map thru customers returning name
  .map(customer => customer.name)
  // console log each result using the previous response
  .each(x => console.log('each ', x))

// customers.csv
Mattias,2
King Midas,4

// Result
each: King Midas

Monads

In Brief

A monad is a design pattern that allows structuring programs generically while automating away boilerplate code needed by program logic.

Monads achieve this by providing their own data type, which represents a specific form of computation, along with one procedure to wrap values of any basic type within the monad (yielding a monadic value) and another to compose functions that output monadic values (called monadic functions).

The essence of monads:

  • Functions map: a => b which lets you compose functions of type a => b
  • Functors map with context: Functor(a) => Functor(b), which lets you compose functions F(a) => F(b)
  • Monads flatten and map with context: Monad(Monad(a)) => Monad(b), which lets you compose lifting functions a => F(b)

Array, Stream, Tree and Promise are all Functors (as we can map over them), but Stream and Promise are also Monads, because we can flatMap them.

Remember, functors are simply something that can be mapped over. Why is that useful? Because it allows us to build functions that can operate on data they were not originally designed to work with.

let capitalizePortuguese =
  portuguese.map(_.capitalize)

Streams will map new stream, arrays will map new arrays, trees will map new trees, etc.

What’s a flatMap?

flatMap is just like map, except flatMap does NOT expect the mapper to return a plain value.

Instead, flatMap expects the mapper to return a functor containing the value, and flatMap will take that functor and flatten it into its actual value.

A flatMap maps each element of an array using a mapping function, then flattens the result into a new array.

let arr = [1, 2, 3, 4]

arr.map((x) => [x * 2])
// [[2], [4], [6], [8]]

arr.flatMap(x => [x * 2])
// [2, 4, 6, 8]

Promises are Monads

let getInEnglish = require('./getInEnglish')
let _ = require('lodash')

let whenFood = new Promise(function(resolve) {
  setTimeout(() => resolve('vaca'), 2000)
})

whenFood
  // get english of 'vaca' via the Promise
  .then(getInEnglish)
  // capitalize it
  .then(_.capitalize)
  // print it to the console
  .then(food => console.log(food))

When we look at the code above it’s similar to flatMap, which takes one thing but doesn’t necessarily return the same.

Here, even though .then doesn’t look like .flatMap, it’s doing the same thing: taking one value in the chain and returning something else.

If we use map on an array it will still return something like an array, same thing with trees, streams & promises.

But if we use flatMap it will return a flattened array, not the same structure as was given and that’s why Promises are monads.

You can map the data, or you can flatMap on the data.

fetch('/api/user')
  // get response
  // flatMap: flatten it by returning response.json()
  .then(response => response.json())
  // hand the parsed json to the next in the chain
  .then(json => {
    console.log(json)
  })

Testing Terms

Software Testing

Test Harness

In software testing, a Test Harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.

Test harnesses allow for the automation of tests. They can call functions with supplied parameters and print out and compare the results to the desired value. The test harness is a hook to the developed code, which can be tested using an automation framework.

Regression Testing

Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes.

Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged.

Unit Testing

Unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method; which may belong to a base/super class, abstract class or derived/child class.

Unit testing is the cornerstone of Extreme Programming (XP), which relies on an automated unit testing framework. This automated unit testing framework can be either third party, e.g. xUnit, or created within the development group.

Engineer Tests

Black box Testing

Black box testing treats the software as a black-box without any understanding as to how the internals behave. Thus, the tester inputs data and only sees the output from the test object.

This level of testing usually requires thorough test cases to be provided to the tester who then can simply verify that for a given input, the output value (or behavior), is the same as the expected value specified in the test case.

White Box Testing

When the tester has access to the internal data structures, code, and algorithms.

For this reason, unit testing and debugging can be classified as white-box testing and it usually requires writing code, or at a minimum, stepping through it, and thus requires more skill than the black-box tester. If the software in test is an interface or API of any sort, white-box testing is almost always required.

Grey Box Testing

Grey box testing could be used in the context of testing a client-server environment where the tester has control over the input, inspects the value in a SQL database and the output value, and then compares all three (the input, SQL value and output) to determine if the data got corrupted on database insertion or retrieval.

Acceptance Testing

Alpha Testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta Testing

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Links

Software Testing – Wikipedia