When it comes to AI and automated machine learning, more data is good — location data is even better.

At Data Con LA 2019, I had the pleasure of co-presenting a tutorial session with Pitney Bowes Technical Director Dan Kernaghan. We told an audience of data analysts and budding data scientists about the evolution of location data for big data and how location intelligence can add significant new value to a wide range of data science and machine learning business use cases.

Speeding model runs by using pre-processed data

What Pitney Bowes has done is take care of the heavy lifting of processing GIS-based data so that it comes ready to be used with machine learning algorithms. Through a process called reverse geocoding, locations expressed as latitude/longitude are converted to addresses, dramatically reducing the time it takes to prepare the data for analysis.
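As a rough illustration of what reverse geocoding does (not of the Pitney Bowes tooling itself), the open-source geopy library can resolve a latitude/longitude pair to a street address:

```python
from geopy.geocoders import Nominatim

# Illustrative only: geopy with the public Nominatim service stands in for a
# production geocoding engine such as the one Pitney Bowes operates.
geolocator = Nominatim(user_agent="location-data-demo")

# Reverse geocode a latitude/longitude pair (downtown Boston) to an address.
location = geolocator.reverse((42.3601, -71.0589))
print(location.address)
```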

With this approach, each address is then associated with a unique and persistent identifier, the pbKey™, and put into a plain text file along with 9,100 attributes tied to that address. Depending on your use case, you can then enrich your analysis with subsets of this information, such as crime data, fire or flood risk, building details, mortgage information, and demographics like median household income, age or purchasing power.
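In practice, enrichment is then a straightforward join: once each record carries a pbKey, the relevant slice of the flat file can be merged in with a few lines of pandas. The file and column names below are hypothetical and only sketch the pattern:

```python
import pandas as pd

# Hypothetical file and column names, shown only to illustrate the join pattern.
listings = pd.read_csv("boston_listings.csv")                      # includes a "pbkey" column from geocoding
attributes = pd.read_csv("pb_property_attributes.txt", sep="|")    # flat file keyed by "pbkey"

# Pull in only the attribute subset the use case needs.
cols = ["pbkey", "flood_risk", "crime_index", "median_household_income"]
enriched = listings.merge(attributes[cols], on="pbkey", how="left")
print(enriched.head())
```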

Surfacing predictors of summer rental demand: location-based attributes

For Data Con LA, we designed a use case that we could enrich with location data: a machine learning model to predict summer revenue for a fictional rental property in Boston. We started with first-party data on 1,070 rental listings in greater Boston, sourced from an online property booking service. That data included attributes about the properties themselves (type, number of bathrooms/bedrooms, text description, etc.), the hosts, and summer booking history.

Then we layered in location data from Pitney Bowes for each rental property, based on its address: distance to nearest public transit, geodemographics (CAMEO), financial stress of city block, population of city block, and the like.

Not surprisingly, the previous year’s summer bookings and the scores derived from the listing description ranked as the most important features of a property. More unexpectedly, distance to the nearest airport ranked third in importance. Other location-based features that surfaced as important predictors of summer demand included distance to Amtrak stations, highway exits and MBTA stations; block population and density measures; and block socio-economic measures.
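The talk did not prescribe a particular algorithm, but the workflow can be sketched with a scikit-learn regressor and its built-in feature importances. The dataset and column names here are illustrative, not the actual conference data:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical enriched dataset: one row per listing, with property attributes,
# description-derived scores, and location-based features already encoded as
# numeric columns. File and column names are illustrative.
df = pd.read_csv("boston_listings_enriched.csv")
target = "summer_days_booked"
features = [c for c in df.columns if c != target]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

# Rank features by importance to see which location attributes drive demand.
importances = pd.Series(model.feature_importances_, index=features)
print(importances.sort_values(ascending=False).head(10))
```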

By adding location data to our model, we increased the accuracy of our prediction of how frequently “our” property would be rented. Predicting that outcome is valuable, but more valuable still is determining what we can do to change it. In this scenario, for example, we can change the price and rerun the model until we find the combination of price and number of days rented that meets our revenue objective.
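That what-if loop can be as simple as sweeping the price feature and re-scoring the model. The sketch below reuses the illustrative model above and assumes a hypothetical nightly_price feature and revenue target, neither of which comes from the talk:

```python
import numpy as np

# What-if analysis, reusing "model" and "X_test" from the sketch above.
scenario = X_test.iloc[[0]].copy()   # one listing to experiment with
revenue_target = 20_000              # hypothetical summer revenue goal

for price in np.arange(100, 401, 25):
    scenario["nightly_price"] = price
    predicted_days = float(model.predict(scenario)[0])
    revenue = price * predicted_days
    marker = "  <-- meets target" if revenue >= revenue_target else ""
    print(f"${price:.0f}/night: {predicted_days:.0f} days, ${revenue:,.0f}{marker}")
```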

Building effective use cases for data science

As a Pitney Bowes Business Partner since 2015, Ironside Group often incorporates Pitney Bowes data (both pbKey flat-file data and traditional GIS-based datasets such as geofences) into customized data science solutions built to help companies grow revenue, maximize efficiency, or understand and minimize risk. Here are some examples of use cases that incorporate location-based data into the model design.

Retail loss prevention. A retailer wanting to analyze shortages, cash loss and safety risks expected that store location would be a strong predictor of losses or credit card fraud. However, models using historical store data and third-party crime risk data found that crime in the area was not a predictor of losses. Instead, the degree of manager training in loss prevention was the most significant predictor — a finding that influenced both store location decisions and investments in employee training programs.

Predictive policing. A city police department wanted a data-driven, data science-based approach to complement its fledgling “hot spot” policing system. The solution leverages historical crime incident data combined with weather data to produce an accurate crime forecast for each patrol shift. Patrol officers are deployed in real time to “hot spots” via a map-based mobile app. Over a 20-week study, the department saw a 43% reduction in targeted crime types.

Utility demand forecasting. A large natural gas and electricity utility needed a better way to anticipate demand in different areas of its network to avoid supply problems and service gaps. The predictive analytics platform developed for the utility uses cleaned and transformed first-party data from over 40 geographic points of delivery, enriched with geographic and weather data to improve the model’s demand predictions. The result is a forecasting platform that triggers alerts automatically and allows proactive energy supply adjustments based on predicted trends.
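The shape of that solution can be sketched in a few lines: join first-party load history with weather data, fit a demand model, and flag delivery points whose predicted demand exceeds contracted supply. Everything here (file names, columns, thresholds) is hypothetical and meant only to illustrate the pattern:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative sketch only; file and column names are hypothetical.
load = pd.read_csv("delivery_point_load.csv", parse_dates=["date"])        # delivery_point, date, demand_mwh
weather = pd.read_csv("weather_observations.csv", parse_dates=["date"])    # delivery_point, date, temp_f, wind_mph

df = load.merge(weather, on=["delivery_point", "date"], how="left")
df["day_of_week"] = df["date"].dt.dayofweek
df["month"] = df["date"].dt.month

features = ["temp_f", "wind_mph", "day_of_week", "month"]
model = GradientBoostingRegressor().fit(df[features], df["demand_mwh"])

# Score next-day weather forecasts (assumed to carry the same feature columns
# plus each point's contracted supply) and flag delivery points at risk.
forecast = pd.read_csv("next_day_weather_forecast.csv")
forecast["predicted_demand_mwh"] = model.predict(forecast[features])
alerts = forecast[forecast["predicted_demand_mwh"] > forecast["contracted_supply_mwh"]]
print(alerts[["delivery_point", "predicted_demand_mwh", "contracted_supply_mwh"]])
```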

About Ironside Group and Pitney Bowes

Ironside Group was founded in 1999 as an enterprise data and analytics solution provider and system integrator. Our data science practice is built on helping clients organize, enrich, report on, and predict outcomes from their data. Our partnership with Pitney Bowes leads to client successes as we combine our use case-based approach to data science with Pitney Bowes data sets and tools.

In today’s “Big Data” era, data of enormous volume and variety is continuously generated across channels within the enterprise and in the cloud. To drive exploratory analysis and make accurate predictions, we need to connect, collate, and consume all of this data so that clean, consistent data is quickly and easily available to analysts and data scientists.
