Sunday, 5 November 2017

Using web component polyfills with template tags

I've been playing around with <template> tags to see how well they work with the current Web Component (Custom Elements) polyfills.

My main motivation for choosing Web Components over something like React or Angular is that I'm currently developing a chrome extension. I wanted the code base to be as small as possible so that it didn't slow down devtools or increase the frequency of hangs. Plus I think they're the natural progression from the current React/Angular/etc components - especially with HTTP/2's server push of dependent files removing the need for tools like webpack, by allowing all dependent files to be automatically sent in response to one request.

I immediately hit problems using custom elements in a chrome extension, as they're disabled by default. So in order to use them I had to forcefully polyfill the existing API. It took a bit of fiddling but now works with both libraries I looked at.

Next, using template tags in an imported html file (via a link rel="import") caused me a bit of grief. Templates are a key part of making web components easy to build. The contents of a template tag is parsed, but it's not considered part of the document. This means that when the web components are defined, any tags inside the template are not instantiated (upgraded) until they are attached to the document tree.
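
A quick way to see this (a minimal sketch, assuming a my-div element backed by a MyDiv class has already been registered, as in the test code below):
var template = document.querySelector('#my-template');
// the <my-div> inside the template is parsed but inert - it hasn't been upgraded
console.log(template.content.querySelector('my-div') instanceof MyDiv); // false
// once a copy is attached to the document tree it gets upgraded
document.body.appendChild(document.importNode(template.content, true));
console.log(document.body.querySelector('my-div') instanceof MyDiv); // true natively; polyfills may defer the upgrade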

There are also 2 types of components (a minimal definition of each is sketched after this list):
  • Autonomous custom element
    These are basically any tags that extend only HTMLElement (or a parent class that does). All behaviour and rendering needs to be done by the implementer. They are defined in html as <my-tag></my-tag>
  • Customized built-in element
    These are components that extend existing elements such as a button, adding to existing functionality. They are defined in html as <button is="my-button"></button>
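
Defining one of each might look like this (a minimal sketch; the class and tag names are just examples):
// autonomous custom element - extends HTMLElement directly
class MyTag extends HTMLElement {}
customElements.define('my-tag', MyTag);

// customized built-in element - extends an existing element
class MyButton extends HTMLButtonElement {}
customElements.define('my-button', MyButton, {extends: 'button'});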

Importing the elements

In the process of getting the polyfill working in my chrome extension, I came across 2 different ways of adding nodes from an external document. Both were recommended.
cloneNode
This gave me issues with document-register-element, which I had to patch to get working until I found the other suggested way of doing it. cloneNode creates a new copy of the node that isn't attached to any document until it is appended to a tree.
var link = document.querySelector('link[rel="import"]');
var template = link.import.querySelector('#my-template');
var dest = document.getElementById("insertion-point");

// uses cloneNode
var instance = template.content.cloneNode(true);
dest.appendChild(instance);
importNode
Using this made the polyfills behave a bit better. importNode creates a copy of the nodes that is owned by the destination document (but not yet placed in its tree).
// ....
// uses importNode
var instance = document.importNode(template.content, true);
dest.appendChild(instance);

Libraries compared

webcomponents.js

This is the one promoted by Polymer / Google's developer site as the polyfill to use. However I discovered it doesn't support customized built-in elements. I don't have an immediate need for those, but as I get more familiar with web components I'm sure I'll want to use them.
One benefit this library did have: when I forced the polyfill inside the chrome extension's content scripts, I was able to use basic (autonomous) custom elements correctly - but not built-in extensions.

document-register-element

This is a more lightweight implementation and the better one to use anywhere other than a chrome extension. Both types of components are supported and the callback methods are called at the right points.
However, using the forced polyfill, the constructors and callback methods were called in the next event loop. This means that at the moment you insert an element you can't use it, and you definitely shouldn't set any properties on it, as they would overwrite the functionality that hasn't been applied yet.
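
If you're stuck with the forced polyfill, one workaround (my own sketch, reusing the variables from the earlier example) is to defer any interaction until the next tick:
dest.appendChild(instance);
// constructors/callbacks haven't run yet under the forced polyfill, so defer property access
window.setTimeout(function () {
    var myDiv = dest.querySelector('my-div');
    myDiv.someProperty = 'value'; // someProperty is a hypothetical property on the component
}, 0);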

The comparison

I did a comparison using chrome only as that was my target browser. The files were just locally hosted from my workspace using node express / connect.

Chrome 62.0.3202

customized built-in element:
Library                                     | constructor called | createdCallback called | connectedCallback called
document-register-element.js v1.7.0         | After import       | After constructor      | After append
document-register-element.js force polyfill | After event loop   | After constructor      | After callback
webcomponents.js v1.0.17                    | NO                 | NO                     | NO
webcomponents.js forced polyfill            | NO                 | NO                     | NO

autonomous custom element:
Library                                     | constructor called | createdCallback called | connectedCallback called
document-register-element.js v1.7.0         | After import       | NO                     | After append
document-register-element.js force polyfill | After event loop   | After constructor      | After callback
webcomponents.js v1.0.17                    | After import       | NO                     | After append
webcomponents.js forced polyfill            | After import       | NO                     | After append

So in summary:
  • Use document-register-element for most cases
  • If you're forcing the polyfill, use webcomponents.js instead, at the sacrifice of being able to extend the built-in elements

The test code

<!DOCTYPE html>
<html>
<head>
    <script>
        window.module = {}; // for pony version of document-register-element.js
        // uncomment below for forcing webcomponentsjs polyfill
        //if (window.customElements) window.customElements.forcePolyfill = true;
    </script>
    <script src="/node_modules/@webcomponents/webcomponentsjs/webcomponents-lite.js"></script>

    <!-- enable below for document-register-element -->
    <!--<script src="/node_modules/document-register-element/pony/index.js"></script>-->
    <script>
      // force polyfill
      //window.module.exports(window, 'force-all');
      // apply with defaults
      //window.module.exports(window);
    </script>

    <script>
        class MyButton extends HTMLButtonElement {
            constructor() {
                super();
                console.log("MyButton:init  -- customized built-in element");
            }
            createdCallback() {
                console.log("MyButton:createdCallback -- customized built-in element");
                this.textContent = 'button';
            }
            connectedCallback() {
                console.log("MyButton:connectedCallback -- customized built-in element");
            }
            customMethod() {
            }
        }
        // customized built-in element
        customElements.define('my-button', MyButton, {extends: 'button'});

        class MyDiv extends HTMLElement {
           constructor() {
              super();
              console.log("MyDiv:init");
           }
           createdCallback() {
              console.log("MyDiv:createdCallback");
              this.innerHTML = 'button';
           }
           connectedCallback() {
              console.log("MyDiv:connectedCallback");
           }
           customMethod() {
           }
        }
        // autonomous custom element
        customElements.define('my-div', MyDiv);
    </script>
  <!--<link rel="import" href="template-tag-import.html"/>-->
</head>

<body>
  <p>
      There should be a "button" text inside the button.
  </p>
  <div id="insertion-point">
  </div>

  <template id="my-template">
      <button is="my-button">Something</button>
      <my-div>Something</my-div>
  </template>

  <script>
      var template = document.querySelector('#my-template');
      // if you want to try out the linked document try these statements instead:
      // var link = document.querySelector('link[rel="import"]');
      // var template = link.import.querySelector('#my-template');


      console.log("************* Import node");
      var instance = document.importNode(template.content, true);

      var dest = document.getElementById("insertion-point");
      console.log("************* Appending child");
      dest.appendChild(instance);
      console.log("*************");

      window.setTimeout(function() {
        var myButton = dest.querySelector("button");
        console.log("\n\nmyButton has customMethod %s -- customized built-in element", !!myButton.customMethod);
        console.log(myButton);

        var myDiv = dest.querySelector("my-div");
        console.log("\n\nmyDiv constructor == MyDiv %s -- autonomous custom element", !!myDiv.customMethod);
        console.log(myDiv);
      }, 1);
  </script>
</body>
</html>

Wednesday, 26 April 2017

How to chain an ES6 Promise

Node.js uses async functions extensively, as it's based around non-blocking I/O. Each function takes a callback function parameter, which can result in some messy, deeply nested callbacks if you have to call a bunch of async functions in sequence. Promises make these callbacks a lot cleaner.

ES6 (or ES2015) Promises are a great way of chaining together asynchronous functions so that they read like a series of synchronous statements.

There are already some great posts on Promises - 2ality has a good intro to async, then detail of the API - so I won't rehash that article. However, after starting to use them for cases more complicated than most examples, it's easy to make a few mistaken assumptions or make things difficult for yourself.

So here is a more complicated example showing a pattern I like to follow. In this example, I'll use the new Javascript Fetch API, which returns a Promise, letting you make async HTTP calls without having to muck around with XMLHttpRequest.

First off, there are 3 ways to start the chaining. The most obvious one is this (taken from MDN, updated to use arrow functions):
function callPromise() {
  return fetch('flowers.jpg');
}
var myImage = document.querySelector('img');
callPromise()
  .then(response => response.blob())
  .then(myBlob => {
    var objectURL = URL.createObjectURL(myBlob);
    myImage.src = objectURL;
  });
A slightly different way:
function callPromise() {
  return fetch('/data');
}
function handler1(body) {
  console.log("Got body", body);
}
Promise.resolve()
  .then(callPromise)
  .then(response => response.json())
  .then(handler1);
It starts the chain with a promise that immediately calls the next then in the stack. The benefit of this is that all the statements that perform an action are in the 'then' invocations, so your eye can follow it more easily. I personally prefer this way, as I think it's easier to read, but both are effectively equivalent.
There is one minor difference to be aware of when doing it this way. In the first example, callPromise is called immediately when the javascript engine gets to that line. In the 2nd example, callPromise is not called until the javascript engine gets to the end of the current call stack - it gets called from the event loop.
The 3rd way is by creating a new Promise(). For this article let's mostly stick to consuming promises, but a quick sketch of it is below.
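
Here's a minimal sketch wrapping a callback-style API (Node's fs.readFile) in a new Promise; the function name is my own:
const fs = require('fs');

function readFilePromise(path) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, 'utf8', (err, data) => {
      if (err) reject(err); // puts the promise in a rejected state
      else resolve(data);   // puts the promise in a fulfilled state
    });
  });
}

readFilePromise('./config.json')
  .then(data => console.log("Got %s characters", data.length));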

The return value of each 'then' can be any object or undefined, which is then passed as the only argument to the next function in the chain. You can also return a promise (or a 'thenable') whose final output is used as the parameter for the next function. So you don't have to resolve any response you return; the promise library automatically normalises this behaviour for you.
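
For example (a sketch reusing the /data endpoint from above; the items field is made up):
Promise.resolve()
    .then(() => fetch('/data'))        // returns a promise - the chain waits for it to settle
    .then(response => response.json()) // also returns a promise
    .then(body => body.items)          // returns a plain value (items is a made-up field)
    .then(items => console.log("Got %s items", items.length));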

A Catch Gotcha 

Once a promise is in a rejected state it will call all 'catch' handlers from that point forward. The 'then' function can take both an onFulfilled and an onRejected parameter, and it's easy to mistake which handlers will be called. Looking at the following example, if fetch throws an Error then errorHandler1, errorHandler2 and errorHandler3 will all be called.
Promise.resolve()
    .then(() => fetch('url1'), errorHandler1)
    .then(response => fetch('url2'), errorHandler2)
    .then(response => fetch('url3'), errorHandler3);
So how do you achieve what you were intending in the above? The answer is to add the error handler to the promise returned by each fulfilled handler, before it gets added to the outer promise chain. An example explains it a lot better; applying it to the above:
Promise.resolve()
    .then(() => {
        return fetch('url1')
            .catch(errorHandler1);
    })
    .then(response => {
        return fetch('url2')
            .catch(errorHandler2);
    })
    .then(response => {
        return fetch('url3')
            .catch(errorHandler3)
    })
    .catch(finalErrorHandler);
In the above, each of errorHandlers 1-3 only gets called if its individual fetch call fails. finalErrorHandler is only called if one of those handlers rethrows (or something else in the chain fails), so you could use it as a single place to return an error response up the async stack.

Continuing a Promise After an Error

Usually if a Promise chain goes into an error state, it will call all the error handlers and never call any more of the fulfilled handlers. If you want the fulfilled handlers to continue to be called when the error can be recovered from, then you need to return a Promise in a fulfilled state.
Promise.resolve()
    .then(() => {
        throw new Error();
    })
    .catch(err => {
        // handle recoverable error
        return Promise.resolve(); // returns a Promise in a fulfilled state
    })
    .then(() => {
        // this handler will be called
    });

Making Testable Promises

Using arrow functions in promises makes for pretty code in blog posts, but if the promises are implemented this way it makes them hard to test. In order to test each handler function, you have to call the entire chain, which for anything other than the most trivial chain is tedious and makes for bad unit tests. So to make them testable, my recommendation is to not use arrow functions or function expressions in a promise chain. Instead, create a named function for each handler and export it so it can be invoked from a test case. There are cases where it makes sense to use a one-line arrow function, but use them sparingly. I'll post a complete example in another post.
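
As a sketch of the idea (the module and handler names are just examples):
// handlers.js - each step is a named export so a test can invoke it directly
export function fetchData() {
  return fetch('/data');
}
export function parseResponse(response) {
  return response.json();
}
export function handleBody(body) {
  console.log("Got body", body);
}

// chain.js - the chain itself contains no logic of its own
import {fetchData, parseResponse, handleBody} from './handlers';

Promise.resolve()
  .then(fetchData)
  .then(parseResponse)
  .then(handleBody);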

Using a Promise to Once-only Init

Promises can only settle once, and the attached handler stack is called only once, when it resolves. Additionally, calling 'then' or 'catch' on an already settled promise will immediately call the passed function. This makes them handy for implementing asynchronous init events - for instance an AWS Lambda function that queries DynamoDB as soon as it boots up and then finishes the initialisation on the first request that it handles, using environment variables stored in API Gateway.
import aws from 'aws-sdk';
import SpotifyWebApi from 'spotify-web-api-node';
const dynamoDB = new aws.DynamoDB.DocumentClient({region: 'ap-southeast-2'});

var apiLoadedResolve;
var apiLoadedPromise = new Promise((resolve, reject) => {
    apiLoadedResolve = resolve;
});

export function lambdaHandler(request, lambdaContext, callback) {
    let api = getApi(request);
    // handle individual request
    return {}; // response object
}

var spotifyApi;
function getApi(request) {
    if (spotifyApi)
        return spotifyApi;

    spotifyApi = new SpotifyWebApi({
        clientId: request.env.SPOTIFY_CLIENT_ID,
        clientSecret: request.env.SPOTIFY_SECRET,        
    });
    // resolve only has an effect the first time it's called - the promise settles once
    apiLoadedResolve(spotifyApi);

    return spotifyApi;
}

// access dynamo db on script load before lambdaHandler is called
dynamoDB
    .get({
        TableName: "TestTable",
        Key: {name: "initData"}
    })
    .promise()
    .then(item => {
        let conf = item.Item;
        apiLoadedPromise = apiLoadedPromise.then(spotifyApi => {
            // passed in on first request
            spotifyApi.setAccessToken(conf.access_token);
            spotifyApi.setRefreshToken(conf.refresh_token);
        });
    });

This implementation doesn't require the config from the database to be set before being able to process requests. If you need the config to be set before processing requests, add a conditional check to see if it's init'ed; if not, execute the rest of the function as a then callback on apiLoadedPromise, as sketched below.
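
That might look something like this (a sketch; handleRequest is a hypothetical function holding the rest of the handler's logic):
export function lambdaHandler(request, lambdaContext, callback) {
    let api = getApi(request); // also triggers apiLoadedResolve on the first request
    // don't process the request until the DB config has been applied
    apiLoadedPromise
        .then(() => handleRequest(api, request))
        .then(response => callback(null, response), err => callback(err));
}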

Saturday, 15 April 2017

Adding MDC headers to every Spring MVC request

Mapped Diagnostic Context (MDC) logging allows you to set attributes associated with the current thread, so that SLF4J (via your logger implementation library) can log those attributes with every logging statement without them having to be passed in explicitly.

For example, you could configure logback to log the sessionId on every statement, which is really handy when using a log indexer such as Splunk. This allows you to easily see all the requests made by a user for a given session. To use it with logback, you'd set the pattern to %-4r [%thread] %-5level sessionId=%X{sessionId} - %msg%n

Setting these attributes at each entry point would be a pain, so one way is to implement a ServletRequestListener, which allows setting the attributes at the start of the request and removing them again at the end. (Note: it's important to remove the attributes afterwards, as threads are re-used by application servers and stale values would give misleading logging statements.)
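
A minimal sketch of that listener approach (not the implementation I ended up using) could look like this:
import org.slf4j.MDC;

import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;
import javax.servlet.http.HttpServletRequest;

public class MdcRequestListener implements ServletRequestListener {
   @Override
   public void requestInitialized(ServletRequestEvent sre) {
      HttpServletRequest request = (HttpServletRequest) sre.getServletRequest();
      MDC.put("sessionId", request.getHeader("X-Session-Id"));
   }

   @Override
   public void requestDestroyed(ServletRequestEvent sre) {
      // threads are pooled by the app server, so always clean up
      MDC.remove("sessionId");
   }
}
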
If you're using Spring MVC, then an alternative is to implement a HandlerInterceptor. How to configure it using annotations instead of XML is not immediately obvious, so here it is, so that I don't have to work it out again next time I need it.

This implementation just pulls values out of the request object without much manipulation. If you want to add the method parameters of an annotated handler method then you'll need to use Spring AOP instead.

HandlerInterceptor Implementation

import org.slf4j.MDC;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import java.util.HashSet;
import java.util.Set;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Adds logging.
 */
public class LoggingHandlerInterceptor extends HandlerInterceptorAdapter {
   /**
    * set of keys added to MDC so can be removed
    */
   private ThreadLocal<Set<String>> storedKeys = ThreadLocal.withInitial(() -> new HashSet<>());

   @Override
   public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
      addKey("sessionId", request.getHeader("X-Session-Id"));
      addKey("url", request.getRequestURI());
      if (request.getHeader("X-Request-Id") != null) {
         addKey("requestId", request.getHeader("X-Request-Id"));
      }
      return true;
   }

   private void addKey(String key, String value) {
      MDC.put(key, value);
      storedKeys.get().add(key);
   }

   @Override
   public void afterConcurrentHandlingStarted(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
      // request handling has left the current thread, so remove the properties from it
      removeKeys();
   }

   @Override
   public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex)
           throws Exception {
      removeKeys();
   }

   private void removeKeys() {
      for (String key : storedKeys.get()) {
         MDC.remove(key);
      }
      storedKeys.remove();
   }
}

Spring Java Annotation Configuration

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class BeanConfiguration {

   @Bean
   public LoggingHandlerInterceptor loggingHandlerInterceptor() {
      return new LoggingHandlerInterceptor();
   }

   @Bean
   public WebMvcConfigurerAdapter webConfigurer() {
      return new WebMvcConfigurerAdapter() {
         @Override
         public void addInterceptors(InterceptorRegistry registry) {
            registry.addInterceptor(loggingHandlerInterceptor());
         }
      };
   }
}

And if you're using logback and Spring Boot, here is the configuration to output all MDC keys (using Spring's default formatting):

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <property name="CONSOLE_LOG_PATTERN" value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m %mdc%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
  <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %m %mdc%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>
  <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
  <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="FILE"/>
  </root>
</configuration>


Tuesday, 14 March 2017

Populating stored procs into a HSQL DB

I recently encountered a problem trying to load stored procedures into a HSQL DB used for testing. The problem was caused by the script runner provided by Spring, which separates the statements in a script file by semicolons. If a stored proc has statements inside it (which most do), then the proc isn't executed as a single statement. This is further compounded by the fact that each statement executed must be understandable by JDBC. For example, the following stored proc causes issues:
CREATE PROCEDURE MY_PROC(IN  param1 VARCHAR(30), OUT out_param VARCHAR(100))
  READS SQL DATA
  BEGIN ATOMIC
    SELECT the_value INTO out_param FROM my_table WHERE field = param1;
  END
.;
This problem is solved by using the script runners provided by HSQL in the "org.hsqldb:sqltool" dependency, as they can correctly parse scripts containing stored procedures. Here is a Spring Boot test, using an in-memory database but with HSQL's script runners:
@RunWith(SpringRunner.class)
@AutoConfigureJdbc
@Slf4j
public class MyDaoTest {

  @Autowired
  private MyDao dao;

  @Test
  public void myTest() throws Exception {
    String response = dao.invokeProc("1234");
    assertThat(response, notNullValue());
  }

  @Configuration
  @ComponentScan("net.devgrok.sample.dao")
  public static class Config {

    @Bean
    public EmbeddedDatabaseFactoryBean dataSource() {
      EmbeddedDatabaseFactoryBean factory = new EmbeddedDatabaseFactoryBean();
      factory.setDatabaseType(EmbeddedDatabaseType.HSQL);
      factory.setDatabasePopulator(databasePopulator("./src/db/sql/stored_proc.sql"));
      return factory;
    }

    @Bean
    public HsqlDbPopulator databasePopulator(String... scripts) {
      return new HsqlDbPopulator(scripts);
    }
  }

  public static class HsqlDbPopulator implements DatabasePopulator {
    private final String[] scriptFiles;

    public HsqlDbPopulator(String[] scripts) {
      this.scriptFiles = scripts;
    }

    @Override
    public void populate(Connection connection) throws SQLException, ScriptException {
      FileSystemResourceLoader resourceLoader = new FileSystemResourceLoader();
      for (String scriptFile : scriptFiles) {
        try {
          SqlFile file = new SqlFile(resourceLoader.getResource(scriptFile).getFile(), null, false);

          log.info("Running script {}", scriptFile);
          file.setConnection(connection);
          file.execute();
        } catch (IOException | SqlToolError e) {
          log.error("Error executing script {}", scriptFile, e);
          throw new UncategorizedScriptException("Error executing script " + scriptFile, e);
        }
      }
    }
  }
}
Note: this test uses an in-memory database. If you want to use another variation, you'll need to create a custom EmbeddedDatabaseConfigurer instance, which is slightly painful due to Spring's insistence on making everything either private or final, or both.

Friday, 10 February 2017

Dev Setup of a Mac

After working at my 2nd consecutive company that uses Macs for developers, and having forgotten everything I learnt the first time, I thought I'd better write down all the tweaks, workarounds and config changes that got me using my mac efficiently.

Remap Fn-C to copy

Macs use Cmd-C instead of Ctrl-C to copy (and X, V to cut and paste). This is really annoying if you switch between a mac and a Windows machine a lot. Fortunately you can use Karabiner to map Fn-C to copy, as the Fn key is located where Ctrl is on a windows keyboard. If you're on macOS Sierra you'll need to use Karabiner Elements for now.

Git

Just run git from the command line and it will prompt you to install xcode tools.

homebrew

This needs to be installed first, as it's used to install most of the dev tools below.
Note: if you're behind a corporate proxy you'll need to run export HTTPS_PROXY=http://yourproxy:port first.
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

A better terminal

brew cask install iterm2

Colour ls 

From this Stackexchange post, edit ~/.bash_profile:
export CLICOLOR=1
export LSCOLORS=gxBxhxDxfxhxhxhxhxcxcx

Colour vim 

edit ~/.vimrc:
syntax on

AWS etc

awslogs is a cloudwatch log watcher (you may have to run the install as sudo)

brew install python
# this is needed as brew installs the commands as python2, pip2 etc
echo 'export PATH="/usr/local/opt/python/libexec/bin:$PATH"' >> ~/.bash_profile
pip install awscli
pip install awslogs

Tunneling

sshuttle - a handy python ssh tunnelling tool that allows you to easily route ranges of IPs over ssh.

pip install sshuttle

Ruby

brew install rbenv
Add the following to ~/.bash_profile
eval "$(rbenv init -)"
Then run
rbenv install 2.2.3
rbenv global 2.2.3

Java

brew cask install java
brew install gradle
Then add this to ~/.bash_profile:
export JAVA_HOME=$(/System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/java_home)

node.js / nvm

To install node, it's best to use nvm on its own to handle the installations.
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
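
Then install and select a node version, for example the latest LTS:
nvm install --lts
nvm use --lts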

Cntlm

cntlm is a local proxy that makes dealing with a corporate proxy a bit easier.
brew install cntlm

Corkscrew

Corkscrew is an ssh proxy tunneler that allows ssh through https proxies.
brew install corkscrew