Posts

Be sure to check out our Take30 webinar this Thursday on the approach and features of Ironside’s AscentAI.

Artificial Intelligence, at its core, is a wide-ranging tool that enables us to think differently about how to integrate information, analyze data, and use the resulting insights to improve decision making.

With the current shift to digitization (accelerated by the pandemic), customer behavior has changed significantly, along with expectations around the accuracy of AI-based predictions. We are all used to “What to Watch Next” recommendations from streaming services like Netflix or “Suggested Products” from Amazon. But now that most businesses offer an expected wait time before your haircut, or a pickup time for the food you ordered online, it is critical to manage the queue to ensure timely service that begets customer satisfaction and retention.

Many organizations want to leverage AI but are unable to, mainly for the following reasons:

  • High cost and long time to market
  • Complexity and lack of expertise
  • Uncertainty about the success of AI outcomes

Because one of our goals is to help our customers benefit from the AI revolution and derive real business value, all in a timely fashion, we at Ironside have introduced a product-style approach to Data Science called AscentAI.

As with any project, the first steps are to identify and prioritize the business use case, define the objective, and clearly specify the goals. Next, we offer a rapid viability assessment (RVA), which determines, in under three weeks, whether the model provides sufficient signal to justify productionizing it.

The activities during RVA involve:

  • Data collection & preparation
    • The quality and quantity of the data dictate the accuracy of the model
    • Split the data into two distinct datasets for training and evaluation
  • Feature engineering
    • Identify & define features for the models
  • Model training & evaluation
    • Choose different models and identify the best one for the defined requirements
  • Make & validate predictions
    • Measure prediction accuracy against real data sets
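The activities above can be sketched end to end. The following is a minimal illustration using synthetic data and two toy candidate models, not Ironside’s actual RVA tooling; the dataset, features, and candidates are placeholder assumptions.

```python
import random

# End-to-end sketch of the RVA activities: prepare data, split it,
# train two candidate models, evaluate, and pick the best.
random.seed(42)

# Data collection & preparation: synthetic "wait time" observations,
# roughly linear in queue length, with noise.
data = [(q, 3.0 * q + random.gauss(0, 2)) for q in range(200)]

# Split the data into two distinct datasets for training and evaluation.
random.shuffle(data)
cut = int(0.8 * len(data))
train, holdout = data[:cut], data[cut:]

def mae(predict, rows):
    """Mean absolute error of a predictor over (feature, target) rows."""
    return sum(abs(predict(x) - y) for x, y in rows) / len(rows)

# Candidate 1: naive baseline that always predicts the training mean.
mean_y = sum(y for _, y in train) / len(train)

def baseline(x):
    return mean_y

# Candidate 2: one-feature least-squares fit (closed form).
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def linear(x):
    return slope * x + intercept

# Model training & evaluation: keep the candidate with the lowest
# held-out error; this score is the "signal" the RVA gate examines.
scores = {"baseline": mae(baseline, holdout), "linear": mae(linear, holdout)}
best = min(scores, key=scores.get)
print(best)   # the linear model easily beats the naive baseline here
```

In a real engagement the candidates would be full machine-learning models and the evaluation metric would be chosen per use case, but the gate logic is the same: compare held-out accuracy and keep only models with enough signal.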

The RVA is a decision phase: if the AI provides more noise than signal, there is no value in developing production-ready machine learning models. This gate-based approach gives the customer clear visibility into expected outcomes, so they can make an informed decision to either pursue the current use case or move on to the next one.

If the RVA provides meaningful insights, the next step is to productionize the best model, integrating it into existing business processes so the organization can start consuming its predictions.

The final step includes a performance monitoring dashboard, provided to monitor the models’ performance, identify the need for tuning, and optimize the models as they naturally drift over time. Finally, we strongly recommend retraining the models at a predefined frequency to ensure the AI consistently delivers the expected ROI.
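As a sketch of the kind of check such a dashboard might automate, the function below compares recent prediction error against the error measured at launch and flags the model for retraining when it degrades past a threshold. The tolerance, window, and numbers here are illustrative assumptions, not AscentAI’s actual monitoring logic.

```python
# Drift check sketch: flag a model for retraining when its recent
# mean absolute error exceeds the launch-time baseline by a factor.

def needs_retraining(baseline_mae, recent_errors, tolerance=1.5):
    """Return True when recent MAE exceeds `tolerance` x the baseline MAE."""
    recent_mae = sum(abs(e) for e in recent_errors) / len(recent_errors)
    return recent_mae > tolerance * baseline_mae

# At launch the model's MAE was 2.0 minutes; compare two recent windows.
print(needs_retraining(2.0, [0.5, -1.8, 2.1, -0.9]))   # healthy: prints False
print(needs_retraining(2.0, [4.2, -5.6, 3.9, -4.8]))   # drifted: prints True
```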

Below is a snapshot of a real AscentAI implementation for a customer with 1,800+ stores, predicting accurate wait times in real time in a very high-volume setting on the AWS cloud platform.

Your data needs are different from those of any other client we’ve worked with. Plus, they’re ever-changing. 

That’s why we’re fluid in our approach to creating your framework and why we ensure fluidity in the framework itself. 

[Diagram]

Whether your current investment in assessments, governance, and technology is heavy or light, we can meet you where you are, optimize what you have, and help you move confidently forward. 

These steps are all necessary, but they don’t happen in a strict sequence. Each is an iterative process — take small steps, look at the results, then choose the next improvement. Unless you already have some progress in those areas, start with assessment and governance.

Analytics are constantly evolving, and the Modern Analytics Framework is designed to evolve more readily as users discover new insights, new data, and new value for existing data. There will be constant re-assessment of the desired future state, modifications to your data governance goals and policies, design of data zones, and implementation of analytics and automated data delivery. Making these changes small and manageable is a key goal of the Modern Analytics Framework.

Can we ask you a few questions?

The better we understand your current state, the better we can speak to your specific needs. 

If you’d like to gain some insight into how your organization can move most effectively toward a Modern Analytics Framework, please schedule a time with Geoff Speare, our practice director.

Geoff’s Calendar
GSpeare@IronsideGroup.com
O 781-652-5758  |  484-553-1814

Get our comprehensive guide.

Learn about our proven, streamlined approach to taking your current analytics framework from where it is to where it needs to be, for less cost and in less time than you might imagine.

Download the eBook now

Check out the rest of the series.

In recent years, the field of data science has been advancing in leaps and bounds. In most enterprises, executives are aware that they need to be doing more with both internal data and external data by leveraging advanced analytics. They understand that machine learning and artificial intelligence will be rich sources for competitive advantage in the years ahead. The challenge for many is the question of exactly how to get started.

Many companies have begun to collect their data and ensure that it is stored where it can be put to use at some point in the future. That often includes text data, semi-structured and unstructured data including service tickets, user reviews, and social media posts, as well as more traditional sources like ERP transactional data. Simply by gathering, organizing, and ensuring that this information is preserved in such a way that it can be used later, these businesses are laying the foundation to gain future advantages from data analytics.

Many of them may already be well-positioned to gain significant business value from their existing data. Data enrichment makes that possible. It provides the “low hanging fruit” that produces powerful business insights, which, in turn, can drive immediate value.

According to a 2021 survey by Transforming Data With Intelligence (TDWI), approximately 30% of enterprises are already using external data, and just as many intend to begin using it within the next year. Trends around geospatial data show a similar pattern: about one-third of companies are using location data in one way or another, and more than 25% of the remaining enterprises responding to the TDWI survey intend to start using geospatial data within the next year.

Those trends illustrate a growing awareness that for companies seeking to elicit value from their corporate information, data enrichment provides a natural starting point. In a recent webinar co-sponsored by Precisely, experts from TDWI, Ironside, and Precisely discussed these trends, including how companies can get started leveraging data science more effectively to produce better business results.

This post was originally written and shared by our partner Precisely.

Ironside’s Take30 with a Data Scientist series was typically targeted towards business leaders, with topics focused on strategy, including use-case development advice, de-risking AI with Data Science-as-a-Service, and ways to overcome common barriers to AI adoption. We also covered technical concepts like Model Evaluation and Feature Store Development. On top of that, we took several deep dives into technology partners including IBM Watson Auto AI, AWS Sagemaker Studio, Snowflake and DataRobot. Finally, we had a couple of industry spotlights where we explored common use cases in Higher Education and Insurance.

Several attendees have shared that these sessions bridge the gap between the technical world of Machine Learning and that of their business, which in turn has helped them to know how to bridge that gap within their own organizations. For technicians, it has helped them to understand how to talk to the business and draw out use cases and help the business adopt solutions. For the business leaders, it’s helped them know what to ask of the data science team or what to look for in building a team. 

Overcoming the Most Common Barriers to AI Adoption (2/25/21)

Because so many organizations are in the early stages of AI Adoption, this is likely the most important topic to CIOs and business leaders in the Data Science series. This session discusses the challenges with people, infrastructure, and data that every organization faces and offers sound advice on how to overcome them.

Is Data Science-as-a-Service Right for your Organization? (5/19/20)

AscentAI, Ironside’s Data Science-as-a-Service, provides many benefits to organizations that are in the early or mid-stages of AI Adoption. Learn more about Ironside’s offering and how it could reduce your time to ROI to as little as 12 weeks.

How Snowflake Breaks the Chains Holding Your Data Science Team Back (9/10/20)

We hosted a number of technology-related sessions with partners such as Snowflake. This session dove a bit deeper than Data Science Best Practices: Feature Stores. Other technology-related sessions covered Watson Studio and AWS Sagemaker, along with a data enrichment session with Precisely, titled More Data, More Insight: The Value of Data Enrichment for Analytics.

Data Science work requires infrastructure that is scalable, cost-effective, and with easy access to multiple data sources. Snowflake provides this and much more to a data science tech stack. It also integrates easily with other machine learning platforms like DataRobot, AWS, and Azure. Snowflake is particularly valuable for data sharing with external data sources.

Leveraging Data for Predicting Outcomes in Higher Ed (6/30/20)


We hosted an industry session on how Higher Education is leveraging machine learning in very creative ways; it ended up being one of our top-attended sessions of the Take30 series. In this webinar, we reviewed some of the ways higher ed is using machine learning, such as enrollment management, space planning, and student retention, and discussed use cases that are helping universities cope with the challenges and nuances of COVID-19. We also hosted another industry-specific session on Insurance.

______

As we continue our Take30 with a Data Scientist series, we’ll keep partnering with experts in Machine Learning technology to offer demos and successful solutions as well as strategic sessions for business leaders. We also hope to spotlight some of our clients this year and the exciting AI-driven applications we are developing for them in Retail, Insurance, Higher Ed, and Manufacturing. Coming up on May 20th, we will host an industry focus on Banking.

We’d love to have 1-on-1 conversations to discuss any challenges you may be facing with AI adoption. Please feel free to sign up for a spot with Pam Askar, our Director of Data Science.

As we enter the Independence Day holiday weekend, I wanted to drop a quick note. It’s hard to believe three months have passed since my last letter. The world is evolving daily, and technology continues to play a critical role in how we all connect, track information, and communicate worldwide. Despite the shift toward working remotely, the executive team and I continue to be impressed with the productivity, cohesiveness, employee engagement, and organizational strength our team has demonstrated.

Here are a few highlights:

Teamwork. My team has pointed out how their interaction with each other has expanded — collaboratively tackling projects, sharing knowledge to prepare for webinars, and helping clients deal with COVID-19’s impact on their data and business analytics. We’ve hosted weekly Town Hall forums and internal Step competitions that have promoted teamwork company-wide. Under our Strategies for Success free content offerings to our clients, we’ve rallied around our Take30 Series.

These sessions, hosted by Ironside’s Data Science, Data Advisor, and Business Intelligence leads, have brought Senior Consultants, Partners, and Clients together to offer the best of our thinking. The planning and delivery have been mutually beneficial for our team and for the ever-growing number of participants with whom we have shared 30 minutes, multiple times per week, since the start of the pandemic.

Education. This unprecedented period, when we are not traveling to clients, has given our consultants the opportunity to learn additional skill sets and expand their certifications. One of the greatest values we offer our clients is understanding best practices for integration between our key partners: IBM, AWS, Precisely, Microsoft, Trifacta, Tableau, DataRobot, Snowflake, Alteryx, Matillion, and Alation. To continue to excel with these partners, cross-training between our Business Intelligence, Information Management, and Data Science practices on our most utilized tools has created many “a-ha” moments toward streamlining our delivery services.

Client Engagements. Despite the sunsetting of “business as usual” for now, Ironside’s business is strong. COVID-19 has impacted businesses in various ways, whether they are operating and accessing data differently or needing to measure the impact of the global environment on their businesses’ analytics. Perhaps now, more than ever, the demand for information and analytics is a “must-have” versus a “nice-to-have.” Some of our clients have found themselves busier than ever and racing to keep up with the demand for new analytics and reports. Other clients are compelled to be more hands-on with analytics that used to be automated by machine learning models that have since been rendered invalid. In these cases and beyond, Ironside’s Analytics Assurance Service is here to help. Our team’s expertise is being leveraged for immediate, short-term assistance that keeps organizations running as efficiently as possible, freeing clients to apply their own skills to other tasks and avoid stifling tradeoffs.

Thank you for your continued relationship with our team. From myself and my team to you and yours, we wish you a wonderful Independence Day. Stay safe and remain strong, both in business and in health.

Best,
Tim

When it comes to AI and automated machine learning, more data is good — location data is even better.

At Data Con LA 2019, I had the pleasure of co-presenting a tutorial session with Pitney Bowes Technical Director Dan Kernaghan. We told an audience of data analysts and budding data scientists about the evolution of location data for big data and how location intelligence can add significant and new value to a wide range of data science and machine learning business use cases.

Speeding model runs by using pre-processed data

What Pitney Bowes has done is take care of the heavy lifting of processing GIS-based data so that it comes ready to be used with machine learning algorithms. Through a process called reverse geocoding, locations expressed as latitude/longitude pairs are converted to addresses, dramatically reducing the time it takes to prepare the data for analysis.
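Conceptually, reverse geocoding maps a coordinate pair to the closest known address. The toy lookup below illustrates the idea; the reference table and the simple Euclidean distance are stand-ins for a real geocoding service, which uses proper geodesic matching against a full address database.

```python
import math

# Toy reverse geocoder: map a lat/lon pair to the nearest known address.
# The reference table and distance metric are illustrative stand-ins.
reference = [
    (42.3601, -71.0589, "1 City Hall Sq, Boston, MA"),
    (42.4430, -71.2290, "1661 Massachusetts Ave, Lexington, MA"),
]

def reverse_geocode(lat, lon):
    """Return the address of the closest reference point (Euclidean toy metric)."""
    return min(reference, key=lambda r: math.hypot(r[0] - lat, r[1] - lon))[2]

print(reverse_geocode(42.36, -71.06))   # → 1 City Hall Sq, Boston, MA
```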

With this approach, each address is then associated with a unique and persistent identifier, the pbKey™, and put into a plain text file along with 9,100 attributes associated with that address. Depending on your use case, then, you can enrich your analysis with subsets of this information, such as crime data, fire or flood risk, building details, mortgage information, and demographics like median household income, age or purchasing power.  
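The enrichment step amounts to a join on that persistent key, pulling in only the attribute subset a given use case needs. The sketch below shows the shape of that join; the key names, attribute names, and values are hypothetical, not the actual pbKey schema.

```python
# Sketch of address-level enrichment: join first-party records to an
# enrichment table by a persistent address key, then pull in only the
# attributes the use case needs. All names/values are illustrative.

enrichment = {
    "KEY-001": {"flood_risk": "low", "median_income": 91000, "crime_index": 12},
    "KEY-002": {"flood_risk": "high", "median_income": 58000, "crime_index": 34},
}

listings = [
    {"listing_id": 1, "address_key": "KEY-001", "bedrooms": 2},
    {"listing_id": 2, "address_key": "KEY-002", "bedrooms": 3},
]

# Enrich each record with the subset of attributes the model needs.
wanted = ("flood_risk", "median_income")
for row in listings:
    attrs = enrichment.get(row["address_key"], {})
    row.update({k: attrs.get(k) for k in wanted})

print(listings[0]["flood_risk"], listings[1]["median_income"])
```

At production scale this would be a database or warehouse join rather than an in-memory dictionary lookup, but the pattern is the same: one stable key per address unlocks whichever of the thousands of attributes the model calls for.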

Surfacing predictors of summer rental demand: location-based attributes

For Data Con LA, we designed a use case that we could enrich with location data: a machine learning model to predict summer revenue for a fictional rental property in Boston. We started with “first person” data on 1,070 rental listings in greater Boston that we sourced from an online property booking service. That data included attributes about the properties themselves (type, number of bathrooms/bedrooms, text description, etc.), the hosts, and summer booking history.

Then we layered in location data from Pitney Bowes for each rental property, based on its address: distance to nearest public transit, geodemographics (CAMEO), financial stress of city block, population of city block, and the like.

Not surprisingly, the previous year’s summer bookings and scores based on the property description ranked as the most important features. Unexpectedly, however, distance to the nearest airport ranked third in importance. Other location-based features that surfaced as important predictors of summer demand included distance to Amtrak stations, highway exits, and MBTA stations; block population and density measures; and block socio-economic measures.

By adding location data to our model, we increased the accuracy of our prediction of how frequently “our” property would be rented. Predicting that future is an important outcome, but more important is determining what we can do to change future results. In this scenario, we can change the price, for example, and rerun the model until we find the combination of price and number of days rented that we need to meet our revenue objective.
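That what-if loop can be sketched as follows. The demand function here is a hypothetical stand-in for the trained model’s prediction, and the candidate prices and revenue objective are illustrative numbers, not figures from the Data Con LA model.

```python
# Sketch of the price what-if loop: rerun a fitted demand model across
# candidate prices and keep the (price, days, revenue) combination that
# best meets the revenue objective. The demand curve is a stand-in.

def predicted_days_rented(price):
    """Stand-in for the trained model: demand falls as nightly price rises."""
    return max(0, 90 - 0.3 * price)

target_revenue = 6500
best = None
for price in range(100, 301, 10):            # candidate nightly prices
    days = predicted_days_rented(price)
    revenue = price * days
    if revenue >= target_revenue and (best is None or revenue > best[2]):
        best = (price, days, revenue)

print(best)   # the price/days combination with the highest qualifying revenue
```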

Building effective use cases for data science

As a Business Partner since 2015, Ironside Group often incorporates Pitney Bowes data — both pbKey flat-file data and traditional GIS-based datasets like geofences — into customized data science solutions built to help companies grow revenue, maximize efficiency, or understand and minimize risk. Here are some examples of use cases that incorporate some element of location-based data into the model design.

Retail loss prevention. A retailer wanting to analyze shortages, cash loss and safety risks expected that store location would be a strong predictor of losses or credit card fraud. However, models using historical store data and third-party crime risk data found that crime in the area was not a predictor of losses. Instead, the degree of manager training in loss prevention was the most significant predictor — a finding that influenced both store location decisions and investments in employee training programs.

Predictive policing. A city police department wanted a data-driven, data science-based approach to complement its fledgling “hot spot” policing system. The solution leverages historical crime incident data combined with weather data to produce an accurate crime forecast for each patrol shift. Patrol officers are deployed in real time to “hot spots” via a map-based mobile app. Over a 20-week study, the department saw a 43% reduction in targeted crime types.

Maximize efficiencies for utilities demand forecasting. A large natural gas and electricity utilities provider needed a better way to anticipate demand in different areas of their network to avoid supply problems and service gaps. The predictive analytics platform developed for the utility uses cleaned and transformed first-party data from over 40 different geographic points of delivery, enriched with geographic and weather data to improve the model’s predictions of demand. The result is a forecasting platform that triggers alerts automatically and allows proactive energy supply adjustments based on predictive trends.

About Ironside Group and Pitney Bowes

Ironside Group was founded in 1999 as an enterprise data and analytics solution provider and system integrator. Our data science practice is built on helping clients to organize, enrich, report and predict outcomes with data. Our partnership and collaboration with Pitney Bowes lead to client successes as we combine our use case-based approach to data science with Pitney Bowes data sets and tools.

The day-to-day work of an Underwriter ranges from research, to data entry, to pricing a risk, to ultimately negotiating that premium value with an agent. At the core, they need to accurately gauge risk, on a case by case basis. But their job doesn’t stop there. Even if we were to codify all the significant risk factors (as actuarial tables do), this doesn’t translate directly to how much the insurance firm ultimately charges for a given premium. Underwriters need to create an offer that they can justify to their customers, and keep an eye on the prevailing market dynamics.

Read more

LEXINGTON, MA, May 10, 2019 – Ironside, an enterprise data and analytics firm, was featured in a Wall Street Journal article about AI consultants that enable their clients to be self-sufficient with AI and not have to rely on their consulting counterparts to manage the model.

Read more

LEXINGTON, MA, May 3, 2019 – Ironside, an enterprise data and analytics firm, was recognized in the Wall Street Journal this morning for AI work being done at one of our clients, Coverys, a Boston-based provider of medical professional liability insurance.

Read more

At Ironside, we believe that data science is a team sport, and should be accessible to and enable as many players as possible. We work with clients on a regular basis to make data science accessible within their organization. But we also do this within our own company. Meet Tom Clancy – hear about his journey and what he has learned along the way.

Read more
