Three years ago, Thermo Fisher Scientific (previously Life Technologies) set out to enhance its original mobile experience, which leveraged Moovweb’s cloud platform. Moovweb was a good solution for introducing the domain to browsers beyond the desktop, specifically mobile and tablet devices, but it certainly came with its own set of constraints.
While Moovweb was an excellent platform for improving mobile conversion rates, it came at the cost of scalability and ease of maintenance. The platform required maintaining two separate codebases: the desktop HTML and Moovweb’s SDK, which transformed that HTML into mobile-friendly elements. It was also a page-by-page affair. Every page required its own transformation, and every transformation was its own subset of code. The first two waves of those earlier projects focused strictly on mobile-optimizing eCommerce workflows (e.g. checkout flows) for all regions, but in English only. It was not enough. The domain needed a web property flexible enough to display well on a majority of legacy and evergreen browsers while remaining agnostic of device type.
Because of these limitations, the Responsive Project for ThermoFisher.com was designed to address the following goals:
- Reduce dependency on additional vendor code.
- Generate mobile, tablet and desktop experiences from one codebase.
- Scale solutions globally rather than page-by-page.
- Deliver responsive interfaces for all regions and all languages.
- Extend the experience beyond eCommerce into content-authored areas and channels outside of B2C.
The domain is huge, and its attached ecosystem is a collection of multiple applications owned by multiple teams, each with its own standard of implementation. Adding to that complexity, not every application in the domain was a candidate for the new responsive experience. From June 2016 onward, our team had to break features down into chunks.
It was agreed that the header, navigation and footer would be addressed first, as they were considered the foundation for everything else to work. Each application would incorporate this subset of components into its own back-end constructs (e.g. Java, ColdFusion). This took over three months. Not an easy feat when you consider the amount of tooling offered from these areas: a series of controls for search tools, commerce and deep links into areas such as login and Order Support.
Real estate doesn’t come cheap at smaller screen sizes, so the basics needed to be in order first: an organized CSS media query strategy for an agreed set of breakpoints, and a baseline approach for DOM manipulation in the event that choice ever needed to be made.
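As a rough illustration of that baseline approach, the breakpoint check can be kept in a single helper so that any conditional DOM manipulation branches on one shared definition. The function name and exact pixel values below are illustrative (they loosely follow Bootstrap 2.x’s default breakpoints), not the project’s actual agreed-upon set:

```javascript
// Hypothetical helper: map a viewport width to a named breakpoint.
// Thresholds loosely follow Bootstrap 2.x defaults (767px / 979px);
// the real project breakpoints may have differed.
function getBreakpoint(width) {
  if (width <= 767) {
    return "phone";
  }
  if (width <= 979) {
    return "tablet";
  }
  return "desktop";
}

// Any DOM manipulation can then branch on one shared definition, e.g.:
// if (getBreakpoint($(window).width()) === "phone") { /* collapse nav */ }
```

Centralizing the check this way avoids scattering magic pixel values across each application that consumes the shared header, navigation and footer.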
Listed below are some of the saving-grace plugins, libraries and frameworks from the open source world that allowed us to get the job done:
- Bootstrap 2.3.2 – the overall HTML, CSS and JS framework for delivering responsive-enabled user interfaces. Keep in mind that support for legacy browsers down to IE8 was a priority. Also, since the domain is driven from a desktop-first approach, this framework was still considered standard.
- jQuery 1.8.1 – a minor upgrade for better DOM selector performance. Originally, 1.8.0 was elected as the global jQuery library, but it caused problems for older Internet Explorer browsers, with .ready() firing prematurely. See jQuery defect 12282.
- Responsive HTML Tables – a CSS technique that turns table elements into div blocks, with labels controlled through data attributes.
- TableHeadFixer – a jQuery plugin, modified in-house, that gives content authors more flexibility in drafting HTML tables, with features such as locked table headers or columns.
In the summer of 2014, I was contracted and assigned to develop the mobile experience for Life Technologies.
Life Technologies Homepage in Mobile
Partnered with Moovweb, we were able to generate mobile-optimized workflows for high-traffic areas like the Homepage, Cart/Checkout and PDP. Several mobile-optimized workflows were developed fairly quickly, and we successfully launched in all regions (English only) in January of this year.
Client and vendor teams were geographically split, yet worked in unison to deliver this project by the expected go-live date. The desktop experience is transformed with Moovweb’s SDK and deployed to their cloud infrastructure. This allowed front-end development teams to start the project fairly quickly without much reliance on back-end development support.
The next challenge lies in building and maintaining the experience while integrating Moovweb’s tech stack within Thermo Fisher Scientific standards.
Visit m.lifetechnologies.com on an iOS or Android mobile browser.
I’ve been itching to get this launched for a long time. Thankfully, it’s finally here.
Visit allWebSD.com for more information.
After exposing the Solr endpoint with a reverse proxy, it’s important to note that the Solr admin panel is also exposed to the end-user. This is not desired.
Flowchart of a RewriteRule directive that rests on website.com’s httpd.conf file.
- Solr’s admin panel becomes exposed from the reverse proxy.
RewriteRule ^/solr/$ / [R=301,L,DPI]
It’s encouraged that you secure your Solr instance by placing the application on a different file server and behind a firewall. That becomes an issue, however, if you are trying to consume data from the Solr instance leveraging AJAX techniques.
Flowchart of a reverse proxy directive that rests on website.com’s httpd.conf file.
- www.website.com and Apache Solr live on separate boxes.
- With a firewall protecting Apache Solr, plus the cross-domain issue, the necessary endpoint is not exposed for consumption via AJAX.
- Depending on your sys admin’s setup, Solr may not live on a fully qualified domain (i.e. http://18.104.22.1689:8983/solr/#/)
- An AJAX call to consume the Solr instance’s JSON/XML won’t work cross-domain.
- Reverse Proxy directive, mod_proxy – Apache HTTP Server
- This allows for an endpoint that is visible to the browser and we can consume the JSON/XML that rests within the Solr instance.
ProxyPass /solr http://22.214.171.1249:8983/solr/
ProxyPassReverse /solr http://126.96.36.1999:8983/solr/
Don’t forget to apply a RewriteRule directive to protect the Solr admin panel once you’ve exposed it to the browser!
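Putting the two pieces together, a minimal httpd.conf sketch might look like the following. The upstream host and port are placeholders for wherever your Solr box actually lives; adapt them to your own setup:

```apache
# Hypothetical httpd.conf excerpt: proxy Solr while hiding its admin panel.
RewriteEngine On

# Send anyone browsing to the bare /solr/ admin path back to the homepage.
RewriteRule ^/solr/$ / [R=301,L]

# Expose Solr's HTTP API (e.g. /solr/collection1/select) to the browser,
# so AJAX calls stay same-origin on www.website.com.
ProxyPass        /solr http://solr-internal:8983/solr
ProxyPassReverse /solr http://solr-internal:8983/solr
```

With this in place, the browser only ever talks to www.website.com, the RewriteRule bounces direct visits to the admin path, and deeper API paths still pass through to Solr.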
While reviewing the key/value structure of JSON, I came across this discussion on parsing JSON with hyphenated key names and thought the same would hold true for mine. That said, I augmented the Stack Overflow suggestion slightly to leverage underscores versus dot syntax and came up with the following:
<!-- For schema.xml on Nutch and Solr -->
<field name="metatag_description" type="text_general" stored="true" indexed="true"/>
<field name="metatag_keywords" type="text_general" stored="true" indexed="true"/>
<!-- For solrindex-mapping.xml on Nutch -->
<field dest="metatag_description" source="metatag.serptitle"/>
<field dest="metatag_keywords" source="metatag.serpdescription"/>
This was implemented on Nutch 1.7 on a Solr 4.5.0 instance.
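The motivation for underscores comes down to how JavaScript property access works: a hyphenated key forces bracket notation, while an underscored key can be read with plain dot syntax. A quick sketch with made-up Solr document fragments (the field values below are invented for illustration):

```javascript
// Hypothetical Solr doc fragments illustrating the two key styles.
var hyphenated = { "metatag-description": "A page about field mappings" };
var underscored = { "metatag_description": "A page about field mappings" };

// doc.metatag-description would parse as (doc.metatag - description),
// so a hyphenated key can only be reached via bracket notation:
var viaBrackets = hyphenated["metatag-description"];

// An underscored key works with ordinary dot syntax:
var viaDots = underscored.metatag_description;
```

Both reads return the same value; the underscored form just keeps downstream JavaScript tidier.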
Please refer to the following for context:
- Extracting HTML meta tags in Nutch 2.x and having Solr 4 index it
- Parsing JSON with hyphenated key names
- Nutch – Parse Metatags
In regard to my post on Stack Overflow, my resolution to this problem was to update search.js and check the window.location object:
//Old code - from reuters.js example
//Custom query by end-user for my search.js file
var userQuery = window.location.search.replace( "?query=", "" );
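One caveat with the straight string replace above: it leaves URL-encoded characters (e.g. %20 for spaces) sitting in the query. A slightly more defensive sketch, assuming the same ?query= parameter name (the helper name here is hypothetical, not part of the original search.js):

```javascript
// Hypothetical helper: pull the "query" value out of a location.search
// string and decode it, rather than doing a raw string replace.
function getUserQuery(search) {
  var match = /[?&]query=([^&]*)/.exec(search);
  // Treat "+" as a space before decoding, then percent-decode the rest.
  return match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : "";
}
```

In search.js this would be called as getUserQuery(window.location.search), and it degrades to an empty string when the parameter is absent.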
In regard to my post on Stack Overflow, I pointed my crawl and index to the location of my collection. In this case:
$ bin/nutch crawl urls -solr http://localhost:8983/solr/rockies -depth 1 -topN 5
$ bin/nutch solrindex http://localhost:8983/solr/rockies crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
Additionally, I set -depth to 1 (which specifies how many links deep to crawl from the seed page; in this case, one link from the main page) and -topN to 5 (how many documents will be retrieved at each level).