A Peek Into the Open Source Technologies Behind CapitalOne.com

(This is a post I made on Capital One’s tech blog.)

Ever wonder what goes into selecting technologies for one of the most trafficked banking websites in the world?

According to Alexa.com, the CapitalOne.com home page is ranked the 49th most popular site in the US and the 253rd in the world. To maintain and enhance the functionality of this mission-critical site, we at Capital One have partnered with the open source community to address the incredible technical challenges associated with being a high-profile web destination.

I oversee the team implementing the new technology platform behind CapitalOne.com. During the 2016 ng-conf keynote (I speak at the 56-minute mark), we were invited to present our cutting-edge solution that leverages popular technologies such as the Cloud, Node.js, Angular 2, TypeScript, and Angular Universal. This post is based on the content of that talk.

Enterprise Constraints

Our website engineering teams must balance a number of difficult technical issues, which can generally be grouped into:

· Performance

· SEO

· Accessibility

· Distributed Nature of a Global Engineering Team

The first two issues — Performance and SEO — have clear revenue implications for any website. As a financial institution, we have additional regulatory requirements when it comes to the third issue — Accessibility for all users. Finally, with dozens of engineering offices across the United States, building complex JavaScript applications can be unwieldy as code bases grow and teams are not collocated.

Looking at Angular 1

The good news is that “Performance” is a relatively straightforward problem to address. In our case, that meant moving as much of the application as possible to the front end and leveraging the full power of a CDN. In building the first iteration of our website solution, we started here, with Angular 1. However, the limitations of Angular 1 became immediately apparent as they impacted our SEO and Accessibility goals.

The source code of a typical Angular application is boilerplate “loading” text/image and some JavaScript tags. This “initial state” HTML is not what the user ultimately interacts with and tells you nothing about what content will be on the page. Our problem emerges when screen readers and crawlers — often not as robust as a modern browser — get confused and do not “see” what we, the developers, expect.

In short — the HTML is a mess.

For example, if your team uses Slack and you go to send a link, Slack will crawl the page to create a preview. If your application uses traditional Angular boilerplate, the crawler will not know how to interpret the JavaScript.

As a result, the preview might break if the page title is determined in the JavaScript code. You would see this same problem if you tried to share the link on Facebook.

Ideally, crawlers and screen readers would be able to see the “end state” — or what the user ultimately sees. To address this, the ideal solution is for the cached source to reflect the “end state” and the page to retain the Angular application behavior. An advantage of this is that performance is dramatically improved since JavaScript doesn’t need to download and run for the page to appear correctly.

Building It Ourselves

To address this, our team decided to build a solution that combined Angular 1 with server-side Node. This would involve building a pre-rendering service that would determine the HTML “end state” and cache it.

We looked at every page we wished to migrate and built two versions of it: a “fragment” version and a “full page” version.

The “fragment” is the content (images, text) for a URL and the “full page” is that same content with the menu/header/footer surrounding it. We served the appropriate file based on the context of the user: “full page” for a first time visit and “fragments” as users click around.

The “fragment” experience would swap out the contents and leave the surrounding menu/header/footer in place. From a load time perspective, this performed extremely well.
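
To make the idea concrete, here is a deliberately simplified sketch of how such a service might decide which cached version to serve. The header check, cache structure, and route handling below are hypothetical stand-ins for illustration, not our production implementation:

import * as express from 'express';

const app = express();

// Hypothetical caches populated by a pre-rendering service:
// full pages include the header/menu/footer; fragments are the page content only.
const fullPageCache = new Map<string, string>();
const fragmentCache = new Map<string, string>();

app.get('*', (req, res) => {
  // Assumption for illustration: in-app navigation requests mark themselves
  // with a header, while first-time visits do not.
  const isInAppNavigation = req.get('X-Requested-With') === 'XMLHttpRequest';
  const cache = isInAppNavigation ? fragmentCache : fullPageCache;
  const html = cache.get(req.path);

  if (html) {
    res.send(html);
  } else {
    res.status(404).send('Not found');
  }
});

app.listen(3000);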

But this solution was limited. Of Angular 1’s many features, our pre-render service managed to handle only one: route (URL) changes. Other parts of Angular functionality, such as directives, filters, and services, weren’t being evaluated by our pre-render service.

This meant major parts of any given page could not be determined server-side. We concluded that Angular 1 had limitations that ensured it could never fully support our specific use case.

We needed a more robust solution.

Enter Angular 2 and Angular Universal

Thanks to a major rewrite, Angular 2 with Angular Universal fully supports server-side rendering.

A rough analogy is that the browser is emulated on your server — allowing you to pre-render the HTML and cache it. When we heard this, we immediately started planning to leverage this capability.

With Angular 2, the source code resembles normal HTML and if you link to a cached page, everything works as expected. Every feature of the Angular framework codebase gets run server-side.

We still use the same “fragment” and “full page” caching strategy from earlier, but the remaining drawbacks are eliminated. This strategy has some additional major performance benefits. Normally, the user cannot interact with a web application until a number of serial processes are finished:

· Download the HTML
· Download the JavaScript
· Run the JavaScript

While this is happening, the user often sees just a white screen or a loading spinner.

In the Angular Universal flow, the user can see and interact with the page as soon as the HTML downloads. While the JavaScript downloads and boots up, we can capture user inputs and play them back via a nifty library called preboot. This yields a performance boost of hundreds of milliseconds, making the application feel much snappier and more responsive.

Best of all, we now have a cache of exactly what our visitors see, and our developers can build their systems using standard Angular 2.

Don’t Forget TypeScript

JavaScript has a problematic reputation with some enterprise developers due to issues like inconsistent browser support, lack of classical OOP, and weak type support. Whether these criticisms have merit or not — or apply to your project — the Angular community listened to enterprise developers and proposed a solution.

Angular 2 leverages an amazing new programming language called TypeScript. When this was first announced, the decision was widely criticized. Fortunately, the community has come around as developers have pushed the limits of Angular and JavaScript.

The benefits of TypeScript to enterprise — and large code bases — cannot be overstated.

This technology adds key new functionality to JavaScript — it forces dependencies to be called out explicitly, enables types, supports stronger OOP through the addition of interfaces and generics, and provides compile-time error checking.

The main takeaway: TypeScript provides many of the benefits of a language like Java while avoiding the baggage. It works well for large, distributed teams, where code needs to be self-documenting and bugs need to be caught early (at compile time). Best of all, it runs on Node.js.
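
As a small, self-contained illustration of these checks (the types, names, and values here are invented purely for this example):

// In a real project, any dependency this file used would have to be imported explicitly at the top.

interface Account {
  id: string;
  balance: number;
}

// Generics let a function stay type-safe without being tied to one concrete type.
function firstOrNull<T>(items: T[]): T | null {
  return items.length > 0 ? items[0] : null;
}

const accounts: Account[] = [{ id: 'a-1', balance: 125.5 }];
const first = firstOrNull(accounts);

if (first) {
  console.log(first.balance.toFixed(2));
  // console.log(first.blance); // a misspelled property is caught at compile time, not in production
}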

Wrapping Up

The CapitalOne.com team is grateful for the hard work that the Angular teams have put into making these great technologies. Together, Angular 2 and Angular Universal significantly advance web development for the enterprise community. We look forward to remaining a part of the Angular community and working with these technologies on additional projects.


For more on APIs, open source, community events, and developer culture at Capital One, visit DevExchange, our one-stop developer portal. https://developer.capitalone.com/

In-Depth Tutorial on Writing a Slackbot

(This is a repost of an article I wrote on Monsoon’s blog prior to Capital One acquiring us.)

At Monsoon (my employer), we are avid users of Slack. It’s a great collaboration tool, and it adds a new social dimension to the office. We just crossed 500k messages sent over the platform and we’ve only been on it for a few months!

We recently held a 4-hour slackathon at Monsoon where people were tasked with writing the most useful Slack bot they could think up. The winner was a secret polling script that we use to vote on controversial topics such as what to name our teams or who the coolest person in the office is. We chose to do our event using Hubot, a popular open source bot framework written by GitHub. Half of the participants were mobile developers, so JavaScript, the Hubot scripting language, was foreign to them. We spent a few minutes prior to the event training everybody on how to write scripts to ensure an even playing field.

I’d like to share our Slack tutorial with the rest of the community.

Setting it up takes 5 minutes

To get started with the tutorial, you’ll need to set up your machine by installing Hubot. The most important steps are the first two: clone the repo and then run:

$ npm install

Once you’ve done that, you’re ready to write scripts! In the scripts folder you’ll find a file called slackbot-examples.coffee. This example script is a more feature-rich set of examples than the default that comes with Hubot. We’ll go over these examples in greater detail below.

The first thing to notice is that this is a “coffee” file. CoffeeScript is a language that compiles into JavaScript. It is popular with some communities due to its terseness compared to JavaScript. If you don’t like it, you’re welcome to write scripts in JavaScript by naming your file .js instead of .coffee.

Talking to the Bot – robot.respond

In the next step, we start working with a bot.  All bot behaviors start with a listener. The first one we’ll review listens for messages directed at the bot.
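
Here is roughly what that listener looks like, shown in plain-JavaScript style rather than the CoffeeScript of the example file, with the trigger text taken from the description below:

module.exports = (robot) => {
  // robot.respond only matches messages addressed to the bot,
  // e.g. "@botname sleep it off" in a room, or "sleep it off" in a DM.
  robot.respond(/sleep it off/i, (msg) => {
    msg.send('zzz…');
  });
};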

When you mention the bot directly in a room via @botname, followed by the command, the bot will execute the above block of code. In this case, it will look for the text “@botname sleep it off.” This behavior will also trigger if you privately message the bot with the text “sleep it off.”

Either of these will trigger the bot to run the command msg.send ‘zzz…’

Making the Bot Say Stuff – msg.send

Now that the bot is listening for messages directed at it, let’s see if we can get it to talk back. The msg.send command tells the bot to send a message to the current chat room (the one where it was told to “sleep it off”). In this case, the bot will say “zzz…” publicly. msg.send always replies in the same channel where it detected the original message.

It Sees Everything – robot.hear

We’ve already programmed a bot to respond to messages directed at it, but you can program a bot to listen to conversation anywhere in the office, and respond to a specific word or phrase.  The second type of listener is robot.hear, a blanket listener that reacts to a phrase regardless of who it is directed at.
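
Roughly, the “up” listener looks like this (again in JavaScript style; the exact regular expression is an educated guess based on the description that follows):

module.exports = (robot) => {
  // robot.hear matches messages in any room the bot is in,
  // whether or not they are addressed to the bot.
  robot.hear(/\bup\b/i, (msg) => {
    // Quietly increment a counter in the brain (explained in the next section);
    // nothing is sent back to the room.
    const count = robot.brain.get('everything_uppity_count') || 0;
    robot.brain.set('everything_uppity_count', count + 1);
  });
};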

In this example, we use a regular expression that looks for the word “up” (and not just a phrase containing it). If anybody says “up,” this block will trigger. It will also trigger if you direct “up” at the bot; both of these would trigger the behavior:

Michi: up
Michi: @botname up

It Can Remember – robot.brain

Bots can also store information for retrieval later. In the “up” example above, the robot initializes and/or increments a value which we save as everything_uppity_count. It does nothing more. In this case, a user can say “up” all they want and nothing will seem to happen, while the counter quietly increases. This is done through the “brain,” which is a simple key-value store.

Note that the “brain” uses Redis to store its contents. This way, if the bot restarts, the data is preserved. If Redis is not running, the bot will still function, but all data is lost next time the bot restarts.
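
The example script also includes a second robot.hear listener that reads that counter back. Roughly, in JavaScript style (the trigger phrase comes from the prose below; the wording of the reply is invented for illustration):

module.exports = (robot) => {
  // Report the counter stored by the "up" listener.
  robot.hear(/are we up\?/i, (msg) => {
    const count = robot.brain.get('everything_uppity_count') || 0;
    msg.send(`Things have gone up ${count} times.`);
  });
};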

In the second example of robot.hear, the bot retrieves the current value of everything_uppity_count and displays it via msg.send. As a reminder, this means the robot would just reply in the chat room that it heard the “are we up?” statement.

Calling People Out – msg.reply

Bots can add tailored prefixes to their responses. You can use the command msg.reply for this. msg.reply probably does *not* do what you think it does. Rather, it acts similarly to msg.send, except that it prefixes its response with the name of whoever authored the message it is replying to.
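
Roughly, in JavaScript style (the trigger phrase is borrowed from the exchange shown below):

module.exports = (robot) => {
  // msg.reply behaves like msg.send, but prefixes the sender's name.
  robot.hear(/what('|’)s up!/i, (msg) => {
    // Echo the matched text back at whoever said it.
    msg.reply(msg.match[0]);
  });
};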

In the above example, the script will simply reply to the original sender as illustrated in the following theoretical exchange:

Michi: What’s up!
Bot: @Michi What’s up!

Note that the reply is in the channel where you sent the original message. If this was a private chat room between you and the bot, the reply would have appeared there.

Replying to Private Messages – Advanced msg.send

Handling private messages is a little trickier. This is because Hubot doesn’t treat private messages differently from any other type of message. Instead, you have to examine the room that the message was sent in.

In order to reply to a private message, you need to check whether the message arrived in a direct message channel rather than a public room. With the Slack adapter, a direct message shows up as a “room” whose name matches the user you are chatting with, so comparing the room name with the message author’s name tells you whether the conversation is private. We’ve provided helper methods in the example script to accomplish this:
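
Roughly, such a helper might look like the following (in JavaScript style; the exact check is an assumption on our part, and the trigger phrase is invented):

// Assumption: with the classic Slack adapter, a direct-message "room"
// carries the same name as the user you are chatting with.
const isPrivateMessage = (msg) => msg.message.room === msg.message.user.name;

module.exports = (robot) => {
  robot.respond(/how are you\?/i, (msg) => {
    if (isPrivateMessage(msg)) {
      // Already in a DM; msg.send replies in that same private channel.
      msg.send('Just between us: doing great.');
    } else {
      msg.send('Doing great!');
    }
  });
};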

Starting New Private Conversations – robot.messageRoom

Sending unsolicited private messages is more straightforward. Just remember that a private message is just another room named after a user. To accomplish this, simply tell the bot to message that room:
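
Roughly, in JavaScript style (the trigger phrase and message text are invented; note the try/catch discussed below):

module.exports = (robot) => {
  robot.respond(/dm me/i, (msg) => {
    try {
      // A direct message is just a "room" named after the user.
      robot.messageRoom(msg.message.user.name, 'Here is your private message!');
    } catch (err) {
      // If the channel is invalid, an error can be thrown; catch it
      // so the bot keeps running.
      robot.logger.error(`Could not send the private message: ${err}`);
    }
  });
};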

Notice that an error can be thrown if the channel is invalid. In that case, it’s a good idea to catch the error so that the bot does not crash.

More Examples!

The example script file also includes an example of how to run a web service in the bot to listen for external data sources (such as a GitHub webhook) and how to trigger and watch custom events; a rough sketch of both follows below. Take a look – and when you’ve finished, hopefully you’ll have as much fun designing bots and expanding your office interactions and conversations as we have had here at Monsoon!
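
As a rough, JavaScript-style sketch of those two ideas (the URL path, room names, event name, and payload are all invented):

module.exports = (robot) => {
  // robot.router is an Express app, so the bot can accept webhooks.
  robot.router.post('/hubot/build-status', (req, res) => {
    robot.messageRoom('engineering', `Build update received: ${JSON.stringify(req.body)}`);
    res.send('OK');
  });

  // Custom events: one script can emit an event and another can react to it.
  robot.on('lunch-time', (data) => {
    robot.messageRoom('general', `Lunch is here: ${data.menu}`);
  });
  robot.emit('lunch-time', { menu: 'tacos' });
};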