A Peek Into the Open Source Technologies Behind CapitalOne.com

(This is a post I made on Capital One’s tech blog.)

Ever wonder what goes into selecting technologies for one of the most trafficked banking websites in the world?

According to Alexa.com, the CapitalOne.com home page is ranked the 49th most popular site in the US and the 253rd in the world. To maintain and enhance the functionality of this mission-critical site, we at Capital One have partnered with the open source community to address the incredible technical challenges associated with being a high-profile web destination.

I oversee the team implementing the new technology platform behind CapitalOne.com. We were invited to present our cutting-edge solution during the 2016 ng-conf keynote (I speak at the 56-minute mark); it leverages popular technologies such as the cloud, Node.js, Angular 2, TypeScript, and Angular Universal. This post is based on the content of that talk.

Enterprise Constraints

Our website engineering teams must balance a number of difficult technical issues, which can be broadly grouped into:

· Performance

· SEO

· Accessibility

· Distributed Nature of a Global Engineering Team

The first two issues, Performance and SEO, have clear revenue implications for any website. As a financial institution, we face additional regulatory requirements when it comes to the third issue, Accessibility for all users. Finally, with dozens of engineering offices across the United States, building complex JavaScript applications can become unwieldy as code bases grow and teams are not co-located.

Looking at Angular 1

The good news is that “Performance” is a relatively straightforward problem to address. In our case: move as much of the application as possible to the front end and leverage the full power of a CDN. In building the first iteration of our website solution, we started here, with Angular 1. However, the limitations of Angular 1 became immediately apparent when it came to our SEO and Accessibility issues.

The source code of a typical Angular application is boilerplate “loading” text or imagery plus some JavaScript tags. This “initial state” HTML is not what the user ultimately interacts with, and it tells you nothing about what content will be on the page. Our problem emerges when screen readers and crawlers, which are often not as robust as a modern browser, get confused and do not “see” what we, the developers, expect.

In short — the HTML is a mess.

For example, if your team uses Slack and you go to send a link, Slack will crawl the page to create a preview. If your application uses traditional Angular boilerplate, the crawler will not know how to interpret the JavaScript.

As a result, the preview might break if the page title is determined in the JavaScript code. You would see this same problem if you tried to share the link on Facebook.
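
To make the problem concrete, here is a minimal sketch (illustrative only, not our production code) of a classic single-page setup: every URL is served the same shell, and the real title and content appear only after the JavaScript runs.

```typescript
import express from "express";

const app = express();

// Every route returns the same SPA shell: a generic <title>, a loading
// spinner, and script tags. A crawler that doesn't execute JavaScript
// sees only this, never the real page title or content, which the
// Angular code sets later (e.g., via document.title).
const shell = `<!DOCTYPE html>
<html ng-app="app">
  <head><title>Loading...</title></head>
  <body>
    <div class="spinner">Loading...</div>
    <script src="/angular.js"></script>
    <script src="/app.bundle.js"></script>
  </body>
</html>`;

app.get("*", (_req, res) => res.send(shell));

app.listen(3000);
```

Link-preview crawlers like Slack’s read the static title and meta tags from exactly this response, which is why the preview breaks.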

Ideally, crawlers and screen readers would be able to see the “end state,” or what the user ultimately sees. The solution is for the cached source to reflect that “end state” while the page retains the Angular application behavior. An added advantage is that performance improves dramatically, since JavaScript doesn’t need to download and run before the page appears correctly.

Building It Ourselves

To address this, our team decided to build a solution that combined Angular 1 with server-side Node. This would involve building a pre-rendering service that would determine the HTML “end state” and cache it.

We looked at every page we wished to migrate and built two versions of it: a “fragment” version and a “full page” version.

The “fragment” is the content (images, text) for a URL, and the “full page” is that same content with the menu/header/footer surrounding it. We served the appropriate file based on the context of the user: “full page” for a first-time visit and “fragments” as users click around.

The “fragment” experience would swap out the contents and leave the surrounding menu/header/footer in place. From a load time perspective, this performed extremely well.
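
Here is a rough sketch of the serving logic; the header check and in-memory caches are illustrative assumptions rather than our exact implementation.

```typescript
import express from "express";

const app = express();

// Hypothetical caches of pre-rendered HTML, keyed by URL path.
// In practice these would live in a CDN or key-value store.
const fullPageCache = new Map<string, string>(); // content + menu/header/footer
const fragmentCache = new Map<string, string>(); // content only

app.get("*", (req, res) => {
  // First-time visits are full browser navigations and get the complete
  // page; in-app navigations fetch fragments over XHR and swap them into
  // the existing shell. Here we distinguish the two with a request header.
  const isInAppNavigation = req.get("X-Requested-With") === "XMLHttpRequest";
  const cache = isInAppNavigation ? fragmentCache : fullPageCache;
  const html = cache.get(req.path);

  if (html) {
    res.send(html);
  } else {
    res.sendStatus(404); // URL was never pre-rendered
  }
});

app.listen(3000);
```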

But this solution was limited. Out of Angular 1’s many features, our pre-render service managed to port over only one: route (URL) changes. Other parts of Angular functionality, such as directives, filters, and services, weren’t being evaluated by our pre-render service.

This meant major parts of any given page could not be determined server-side. We concluded that Angular 1 had limitations that ensured it could never fully support our specific use case.

We needed a more robust solution.

Enter Angular 2 and Angular Universal

Thanks to a major rewrite, Angular 2 with Angular Universal fully supports server-side rendering.

A rough analogy: it is as if the browser were emulated on your server, allowing you to pre-render the HTML and cache it. When we heard this, we immediately started planning to leverage this capability.

With Angular 2, the source code resembles normal HTML, and if you link to a cached page, everything works as expected. Every feature of the Angular framework runs server-side.
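
As a minimal sketch of the idea, here is what server-side rendering looks like with today’s @angular/platform-server package (in 2016 the same capability shipped as the angular2-universal module); AppServerModule and cachePage are hypothetical stand-ins.

```typescript
import "zone.js";
import { renderModule } from "@angular/platform-server";
import { AppServerModule } from "./app.server.module"; // hypothetical app module

// Stand-in for whatever cache/CDN the rendered pages are pushed to.
declare function cachePage(url: string, html: string): void;

const template = `<!DOCTYPE html>
<html>
  <head><title>Capital One</title></head>
  <body><app-root></app-root></body>
</html>`;

// Run the full Angular application on the server for a given URL and
// capture the resulting "end state" HTML, ready to cache and serve.
renderModule(AppServerModule, { document: template, url: "/credit-cards" })
  .then((html) => cachePage("/credit-cards", html));
```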

We still use the same “fragment” and “full page” caching strategy from earlier, but the remaining drawbacks are eliminated. This strategy has some additional major performance benefits. Normally, the user cannot interact with a web application until a number of serial processes are finished:

· Download the HTML

· Download the JavaScript

· Run the JavaScript

While this is happening, the user often sees just a white screen or a loading spinner.

In the Angular Universal flow, the user can see and interact with the page as soon as the HTML downloads. While the JavaScript downloads and boots up, we can capture user inputs and play them back via a nifty library called preboot. This yields a performance boost of hundreds of milliseconds, making the application feel much snappier and more responsive.
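
Conceptually, the record-and-replay trick looks something like this. This is a toy illustration of the idea, not preboot’s actual API.

```typescript
// Events that arrive before the framework has bootstrapped.
const buffer: Event[] = [];
let booted = false;

// Phase 1: a tiny inline script starts capturing input immediately,
// long before the main JavaScript bundle has downloaded and run.
for (const type of ["click", "keydown", "input"]) {
  document.addEventListener(
    type,
    (event) => {
      if (!booted) buffer.push(event);
    },
    true // capture phase, so we see events before any app handler
  );
}

// Phase 2: once Angular has bootstrapped, replay the buffered events
// against their original targets so no user input is lost.
export function replayBufferedEvents(): void {
  booted = true;
  for (const event of buffer) {
    event.target?.dispatchEvent(event);
  }
  buffer.length = 0;
}
```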

Best of all, we now have a cache of exactly what our visitors see, and our developers can build their systems using standard Angular 2.

Don’t Forget TypeScript

JavaScript has a problematic reputation with some enterprise developers due to issues like inconsistent browser support, lack of classical OOP, and weak type support. Whether these criticisms have merit or not — or apply to your project — the Angular community listened to enterprise developers and proposed a solution.

Angular 2 leverages an amazing new programming language called TypeScript. When this was first announced, the decision was widely criticized. Fortunately, the community has come around as developers have pushed the limits of Angular and JavaScript.

The benefits of TypeScript to enterprise — and large code bases — cannot be overstated.

This technology adds key new functionality to JavaScript — it forces dependencies to be called out explicitly, enables types, supports stronger OOP through the addition of interfaces and generics, and provides compile-time error checking.
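
A small, self-contained example (invented for this post) shows those features together:

```typescript
// Dependencies are explicit: nothing comes in via implicit globals.
import { strict as assert } from "assert";

// Interfaces give data a declared shape...
interface Account {
  id: string;
  balance: number;
}

// ...and generics keep utility code type-safe across many such shapes.
function findById<T extends { id: string }>(items: T[], id: string): T | undefined {
  return items.find((item) => item.id === id);
}

const accounts: Account[] = [
  { id: "chk-001", balance: 1250.75 },
  { id: "sav-002", balance: 9800.0 },
];

const checking = findById(accounts, "chk-001");
assert.equal(checking?.balance, 1250.75);

// Compile-time error checking: uncommenting the next line fails the build,
// because "ballance" is not a property of Account.
// const oops = checking?.ballance;
```

Compile it with tsc, and the emitted JavaScript runs directly under Node.js.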

The main takeaway: TypeScript provides many of the benefits of a language like Java while avoiding the baggage. It works well for large, distributed teams, where code needs to be self-documenting and bugs need to be caught early (at compile time). Best of all, it runs on Node.js.

Wrapping Up

The CapitalOne.com team is grateful for the hard work that the Angular teams have put into making these great technologies. Together, Angular 2 and Angular Universal significantly advance web development for the enterprise community. We look forward to remaining a part of the Angular community and working with these technologies on additional projects.


For more on APIs, open source, community events, and developer culture at Capital One, visit DevExchange, our one-stop developer portal: https://developer.capitalone.com/