Month: February 2014

Fun with JavaScript

I recently came across two nifty JavaScript frameworks by the same developer, Ian Lunn. Sequence is a framework for creating basic animations by defining the elements in your HTML and the animation effects in CSS. Parallax is a script for creating a background-image parallax effect. Both are solid and do what they should. Good job, Ian.
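The core idea behind frameworks like Sequence is that JavaScript only decides which step is active and CSS transitions do the actual animating. Here's a minimal sketch of that pattern (the general idea, not Sequence's actual API):

```javascript
// Sketch of the "elements in HTML, effects in CSS" pattern:
// JavaScript moves an "active" class between steps, and CSS
// transitions on that class do the animating.
// (This illustrates the pattern, not Sequence's actual API.)

// Pure helper: advance to the next step, wrapping at the end.
function nextStep(current, total) {
  return (current + 1) % total;
}

// In the browser, the glue code is just class toggling:
//   var steps = document.querySelectorAll('.sequence .step');
//   var active = 0;
//   setInterval(function () {
//     steps[active].classList.remove('animate-in');
//     active = nextStep(active, steps.length);
//     steps[active].classList.add('animate-in');
//   }, 3000);
```

All the visual work lives in your stylesheet, which is what makes this approach so easy to theme.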

Here is another cool parallax framework that I haven’t played with yet, but it looks pretty interesting.

The practical use of these effects is probably limited, but they might add a little eye candy to your otherwise boring webpage.

Keep coding!

What did you do last weekend?

Last weekend I participated in GlobalHack. I say “participated” rather than “competed” because I never had a real chance at winning. Why? I don’t actually work in the Saint Louis tech scene. My current team is global and distributed, so I don’t know many local developers. I decided that rather than risk trying to build an effective team out of ten strangers during a two-day event, I would see just how far I could get going it alone. And although the $50,000 prize would have been nice, my reward was in taking up this more personal challenge.

The problem presented was to “create an application that scores and weights sales opportunities according to an algorithm and then displays the ranked and scored opportunities in a graphical user interface”. Although I had no experience with Salesforce, I felt I understood the problem domain better than many people at the event.

Here’s how I approached the challenge:

The problem, to me anyway, wasn’t about how to create a pretty app with a ton of features. It was about how to analyze data and make predictions based on past experience, so I focused my efforts on building a kick-ass Analytics Engine.

As I said, I knew very little about Salesforce. I had read a bit and gone through some tutorials prior to the event, but that was it. So rather than try to build my app inside of Salesforce, I would build it as a standalone application using tools I was already familiar with and connect to Salesforce as needed through its APIs. I also saw some distinct advantages to building a product that was independent of Salesforce and extensible to other CRM systems in the future.

We were given a tiny amount of sample data to work from: 30 opportunities currently in the sales pipeline. There were no closed opportunities and no history of previous opportunities won or lost. I knew right away that this wasn’t going to work for what I was envisioning. I needed more data, a lot more data. But first, I needed clarification on the requirements to see just how far I could push things. I asked the sponsor of the event, Jim Eberlin, how tied I was to the data he provided. Luckily, he understood where I was trying to go and said to make up any additional data I needed to demonstrate my idea. So I did: I created another 150 opportunities won and lost over the past year. I then set out to analyze that data and predict how the current opportunities might unfold.

The opportunities moved through several stages, with attributes such as customer pain rating and product fit assigned during each stage. So I first analyzed the closure rate of past opportunities based on their stage: how many were lost or never progressed beyond stage 1, stage 2, stage 3, and so on. This gave me a historical “rate of decay” that I could apply to the current opportunities to determine their odds of closure based on their current stage. The next step was to look at the attributes assigned during each stage to see how they affected the odds of moving to the next stage and factor that into my algorithm. Given that I was still working with a relatively tiny amount of data, I had to make some assumptions: opportunities with a low customer pain rating or product fit would be far less likely to progress to the next stage than those with a higher rating, and so on.
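The “rate of decay” math is simple enough to sketch. Here's a minimal JavaScript illustration; the field names are made up for the example, and the real engine factored in the per-stage attributes as well:

```javascript
// From historical opportunities, work out what fraction that
// reached each stage went on to reach the next one.
// "maxStageReached" is an illustrative field name.
function stageConversionRates(history, numStages) {
  var reached = new Array(numStages).fill(0);
  history.forEach(function (opp) {
    // An opportunity that got to stage s also reached stages 1..s.
    for (var s = 0; s < opp.maxStageReached; s++) reached[s]++;
  });
  var rates = [];
  for (var s = 0; s < numStages - 1; s++) {
    rates.push(reached[s] ? reached[s + 1] / reached[s] : 0);
  }
  return rates; // rates[s] = fraction surviving stage s+1 -> s+2
}

// A current opportunity's odds of surviving to the final stage:
// the product of the conversion rates for the stages still ahead.
function closureOdds(currentStage, rates) {
  var odds = 1;
  for (var s = currentStage - 1; s < rates.length; s++) odds *= rates[s];
  return odds;
}
```

For example, if three out of four stage-1 opportunities historically reached stage 2, a fresh stage-1 opportunity starts with a 75% chance of surviving that first hop, and the hops multiply from there.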

The last variable was to factor in the sales rep’s projected odds of closure: was the opportunity “Likely”, “Upside”, “Commit”, etc.? Knowing that sales reps are typically very optimistic, I made this a small component of the algorithm, choosing to focus on historical performance over gut feel.

I applied the algorithm to the current opportunities to determine the “opportunity odds” for each one and ranked them by that percentage. I also applied the percentage to the opportunity amount to produce a sales forecast from the opportunities projected to close during the quarter.
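Putting the pieces together looks roughly like this. The 80/20 split and field names here are illustrative, not exact values; the point is that history gets most of the weight and the rep's gut feel gets a little:

```javascript
// Blend historical odds with the rep's own projection.
// The 0.8 / 0.2 weighting is illustrative.
function opportunityOdds(historicalOdds, repOdds) {
  return 0.8 * historicalOdds + 0.2 * repOdds;
}

// Rank by odds and sum (odds * amount) over opportunities
// projected to close this quarter to get the forecast.
function forecast(opps) {
  var ranked = opps.slice().sort(function (a, b) { return b.odds - a.odds; });
  var total = ranked.reduce(function (sum, o) {
    return sum + o.odds * o.amount;
  }, 0);
  return { ranked: ranked, forecast: total };
}
```

So a $200k deal at 90% odds contributes $180k to the forecast, while a $100k long shot at 50% contributes only $50k.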

This data provided the VP of Sales of our fictitious company with a clear picture of the opportunities in the pipeline, their projected odds, a sales forecast based on those odds, and the gap between that forecast and the sales goal for the quarter. Simple, sweet, to the point.

Having worked with salespeople, I know how difficult it is to get them to input or manipulate data, so I wanted the Analytics Engine to also give the VP recommendations on how to close any gap between the sales forecast and the sales goal. This was achieved by adding one button to my user interface. When pressed, the Analytics Engine cycled through the top opportunities and highlighted which ones must close in order to achieve the sales goal. It also flagged any risks associated with an opportunity that might hinder its closure.
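The logic behind that button boils down to a greedy walk down the ranked list, something like this simplified sketch (it fills the gap with face-value amounts rather than odds-weighted ones, which the real engine accounted for):

```javascript
// Walk the ranked opportunities from best odds down, flagging
// deals until their combined value covers the shortfall between
// the forecast and the goal. Simplified: uses face-value amounts.
function mustCloseToHitGoal(rankedOpps, forecastTotal, goal) {
  var gap = goal - forecastTotal;
  var picks = [];
  for (var i = 0; i < rankedOpps.length && gap > 0; i++) {
    picks.push(rankedOpps[i]); // each pick can carry its risk notes
    gap -= rankedOpps[i].amount;
  }
  return picks;
}
```

If the forecast already meets the goal, the gap is zero or negative and the function returns an empty list, so the button only nags the VP when there is actually something to chase.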

So that’s what I was able to achieve during the event. There are several ways to further improve the analytics, such as looking at closure rates by sales rep (some reps are better closers than others), by geography, by industry, and by company, all factors that could be applied to future predictions.

Needless to say, I didn’t walk away $50,000 richer.  But I had fun, met some amazing people, and am extremely proud of what I accomplished.  Not a bad weekend.