Ironside’s Take30 with a Data Scientist series was typically targeted toward business leaders, with topics focused on strategy, including use-case development advice, de-risking AI with Data Science-as-a-Service, and ways to overcome common barriers to AI adoption. We also covered technical concepts like Model Evaluation and Feature Store Development. On top of that, we took several deep dives into technology partners, including IBM Watson AutoAI, Amazon SageMaker Studio, Snowflake, and DataRobot. Finally, we had a couple of industry spotlights where we explored common use cases in Higher Education and Insurance.

Several attendees have shared that these sessions bridge the gap between the technical world of Machine Learning and that of their business, which in turn has helped them bridge that gap within their own organizations. Technicians have learned how to talk to the business, draw out use cases, and help the business adopt solutions. Business leaders have learned what to ask of their data science team and what to look for in building one.

Overcoming the Most Common Barriers to AI Adoption (2/25/21)

Because so many organizations are in the early stages of AI Adoption, this is likely the most important topic to CIOs and business leaders in the Data Science series. This session discusses the challenges with people, infrastructure, and data that every organization faces and offers sound advice on how to overcome them.

Is Data Science-as-a-Service Right for your Organization? (5/19/20)

AscendAI, Ironside’s Data Science-as-a-Service, provides many benefits to organizations that are in the early or mid-stages of AI Adoption. Learn more about Ironside’s offering and how it could reduce your time to ROI to as little as 12 weeks.

How Snowflake Breaks the Chains Holding Your Data Science Team Back (9/10/20)

We hosted a number of technology-related sessions with partners such as Snowflake. This session dove deeper than our Data Science Best Practices: Feature Stores session. Other technology-related sessions covered Watson Studio, AWS SageMaker, and data enrichment with Precisely, titled More Data, More Insight: The Value of Data Enrichment for Analytics.

Data Science work requires infrastructure that is scalable, cost-effective, and able to access multiple data sources easily. Snowflake provides this and much more to a data science tech stack. It also integrates easily with other machine learning platforms like DataRobot, AWS, and Azure. Snowflake is particularly valuable for sharing data with external data sources.

Leveraging Data for Predicting Outcomes in Higher Ed (6/30/20)

We hosted an industry-focused session sharing how Higher Education is leveraging machine learning in very creative ways; it ended up being one of the most-attended sessions in the Take30 series. In this webinar, we reviewed some of the ways higher ed is using machine learning, such as enrollment management, space planning, and student retention, and discussed use cases that are helping universities cope with the challenges and nuances of COVID-19. We also hosted another industry-specific session on Insurance.

______

As we continue our Take30 with a Data Scientist series, we’ll keep partnering with experts in Machine Learning technology to offer demos and successful solutions as well as strategic sessions for business leaders. We also hope to spotlight some of our clients this year and the exciting AI-driven applications we are developing for them in Retail, Insurance, Higher Ed, and Manufacturing. Coming up on May 20th, we will be hosting an industry focus for Banking.

We’d love to have 1-on-1 conversations to discuss any challenges you may be facing with AI adoption. Please feel free to sign up for a spot with Pam Askar, our Director of Data Science.

The Ironside Take30 webinar series premiered on April 16th, 2020, with the goal of sharing expert dialog across a variety of data and analytics topics with a wide range of audiences. The series has three primary dialog categories, each hosted by a BI Expert, Data Scientist, or Data Advisor. In the past year, we’ve shared best practices with over 200 companies ranging from the Fortune 50 to small businesses. Our success has been measured by participants returning and telling a colleague; on average, each unique company attended over six Take30 sessions.

While some Data Advisor sessions are more technical, the focus is on describing concepts and tools at a less detailed level. We want to give people a sense of how rapidly the data and analytics environment is changing. To that end, the Data Advisor series worked with our partners, including IBM, AWS, Snowflake, Matillion, Precisely and Trifacta, to bring demonstrations of their tools and discuss the impact of their capabilities. We talked about the rapid expansion both of data and of the solution space to move, structure, and analyze that data. 

Most importantly, we had a special series on the Modern Analytics Framework, Ironside’s vision for a unified approach to insight generation that puts the right data in the hands of the right people. Regardless of your industry, your tools, or your use cases, you need a way to keep your data, users, and processes organized.

“What do you need in your data warehouse?” used to be the chief question asked when thinking about data for analytics. That time is past. Now, a data warehouse is just one possible source of analytics. Most organizations have so much data that building a warehouse to contain all of it would be impossible. At the same time, data lakes have emerged as a popular option. They can easily hold vast amounts of data regardless of structure or source. But that doesn’t mean the data is easy to analyze. 

And just as there’s no longer a single question to ask about structuring data, there’s no longer just one voice asking the question. Data scientists and data engineers are among the many personas that have emerged as consumers of data. Each has their own toolset(s) and preferences for how data should look for their purposes. 

All of this diversity demands a more distributed approach to ingesting, transforming and storing data – as well as robust and flexible governance to manage it. All of the topics covered in the past year of Take30 with a Data Advisor touch on these points, and on Ironside’s goal to help you make better decisions using your data.  Here are five of the 27 Data Advisor sessions we hosted this year: 

  • Modern Analytics Framework: Series Summary (7/30/20-9/1/21) – This 6-part series covers all aspects of Ironside’s Modern Analytics Framework: overall concepts, assessment and design, governance, identification of user personas, implementation, and usage. If you are looking to upgrade your existing analytics environment, or to create one for the first time, this is an essential series, and one that Ironside will be expanding on in 2021.

  • Snowflake as a Data Wrangling Sandbox (6/3/20) – Snowflake is a tremendous cloud database platform for data storage and transformation. Its complete separation of compute and storage allows for many usage scenarios and, most importantly, for easy scalability based on data volume and consumption. Nirav Valia (from Ironside’s Information Management practice) presents one common Snowflake use case: using Snowflake as a data wrangling sandbox. Data wrangling typically involves unpredictable consumption patterns and the creation of new data sets, as an analyst seeks to discover new insights or answer new questions by manipulating data. Snowflake’s power and flexibility easily handle these activities without requiring up-front investment or significant recurring costs. It’s easy to create transformations, let them run, then let the data sit until it is needed again. (If you are interested in Snowflake, also consider our later Take30, Snowflake: Best Practices (9/24/20), which includes commentary from a Snowflake engineer.)

  • What is Analytics Modernization? How can Data Prep Accelerate It? (with Trifacta) (2/4/21) – Toward the end of our first year of Take 30s, we held a panel, hosted by Monte Montemayor of Trifacta, around data prep and accelerating analytics modernization. As I mentioned earlier, there is a tremendous amount of data available today – but getting it analytics-ready is a huge challenge. Tools like Trifacta (known as Advanced Data Prep, or ADP, in the IBM world) are extremely useful for giving analysts and business users the ability to visualize and address data quality issues in an automated fashion. This is useful for data science, dashboarding, data warehouses – any place where data is consumed. (If you are interested in Data Prep, check out IBM Advanced Data Prep in Action (7/8/20) and Data Wrangling made Simple and Flexible with Trifacta (5/6/20))

  • A Data Warehousing Perspective: What is IBM Cloud Pak™ for Data? (5/27/20) – IBM has created a single platform for data and analytics that works across cloud vendors and on-premise. If you want to be able to shift workload between local nodes and the cloud easily, this is the solution for you. In this Take30, we provide an overview of the technologies that make Cloud Pak for Data possible, and how you can take advantage of them. (We also have a session Netezza is back, and in the cloud (7/23/20) discussing Netezza, one of the many technologies available on the Cloud Pak platform)

  • A Data Strategy to Empower Analytics in Higher Ed (7/1/20) – Occasionally, we have the opportunity to host an industry-specific Take30, and where possible, we have clients join us. Northeastern University joined this Higher Ed-focused Take30 to discuss the approach they took with Ironside in developing a multi-year roadmap, geared toward increasing the “democratization of data and analytics” by establishing the organizational foundation, technology stack, and governance plan necessary to grow self-service throughout the institution. Our discussion highlighted the particular challenges of a decentralized, highly autonomous structure, and shared the value of a data science pilot in the admissions area, executed during the strategy engagement, that generated tangible results.

2020 was a unique year. At Ironside, it gave us the opportunity to reach out to customers in a new way – one that we are continuing into 2021. We look forward to more detailed sessions on the Modern Analytics Framework, and on trends and tools that we see gaining prominence. 

After a year of delivering these sessions, we’ve realized that customers are not only looking for specific solutions, but for a sense of where the analytics world is going. Which cloud platforms make the most sense? What transformation and data wrangling tools are the most useful? Should I redesign my warehouse or just add a data lake? We look forward to exploring those and other questions with you.

So you’re thinking that 2021 is the year to infuse Artificial Intelligence/Machine Learning (AI/ML) into your business. You’ve read about the difference it’s making in other organizations. You want to beat — or keep pace with — your competitors. But where should you begin? 

Should you license AI/ML software? How do you find the right business problem to solve? And if you’re like most organizations, your data is imperfect. Should you focus there first? 

Ironside can help. We’re a data and analytics consulting firm with a track record of helping companies get started with AI.

Let’s start with four things we think every organization should consider on their AI journey. They’re not the only four things you need to know, but we know your time is valuable, so let’s start here:

  1. Develop an AI Use Case Catalog – One of the first steps is to develop a list of business problems, opportunities, or challenges that AI might improve. We help our customers build that list by talking to executives and functional-area leaders to understand the organization’s strategic goals and how they are measured, and then considering current pain points and what missing information would improve decision making. The catalog of use cases should be enhanced with a thorough data analysis: Is there data to support the use case? Is there enough of it? What’s the data quality? Can you forecast improvement of an important metric? What’s the return on investment?
  2. Involve a variety of stakeholders – Executive sponsorship in some form is critical to funding and executing an AI project. But building a culture of AI across an organization starts by involving as many stakeholders as possible across functional areas. Even if the use cases surfaced by some stakeholders are not immediately pursued, people want to be included and to have a voice. Broader involvement will avoid roadblocks and seed a culture of AI, and organizational success will grow over time.
  3. Start small – Rank the use case catalog and find one or two use cases to test. Identify the relevant business sponsor and data, and prepare a limited set of data, or features. Build a simple machine learning model to see whether the results show that AI could improve a desired outcome. If not, move on to the next use case in the catalog (see, that’s why we need a catalog). If the early results are promising, move on to building a more advanced machine learning model.
  4. Limit your investment – For as low a cost as possible, get a model deployed and start using it in the business to begin realizing the benefit. You’ll inevitably iterate on that model, but expediting that process and limiting the investment — and the risk — is the goal. Now here’s where we answer the questions about hiring a data scientist or buying software.

Sometimes the answer is yes, but for many organizations the answer is “no.” They’re just not sophisticated enough yet, and big, costly failures could sour your organization on pursuing AI and set you back years.
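The “start small” step above can be sketched as a quick viability check: train a simple model on a limited feature set and see whether it shows any signal before committing to a fuller build. This is an illustrative sketch using synthetic stand-in data, not a prescribed implementation; the threshold for “promising” would depend on your use case.

```python
# Hypothetical viability check for one use case: can a simple model beat
# a naive baseline on a limited feature set? (Illustrative sketch only.)
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for the small, curated feature set prepared for the pilot
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# Rough decision rule: AUC near 0.5 means no signal (move to the next use
# case in the catalog); meaningfully above 0.5 suggests building a more
# advanced model is worth the investment.
print(f"Pilot AUC: {auc:.2f}")
```

The point is speed and cheapness: a logistic regression on a handful of features is enough to decide whether a use case deserves further investment.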

Ascend AI 

So what should you do? One option is Ascend AI, a data science-as-a-service solution that Ironside developed. Ascend AI takes the risk out of diving into AI on your own. It is underpinned by a custom-configured, scripted, cloud-based architecture as well as our highly skilled data scientists.

We bring the data scientists and engineers and the technology. You provide the data and the business problems. 

We start with your leading use cases, or help you develop them in a use case catalog. Then we perform rapid viability assessments on the selected use cases and, if signs are good, build out full machine learning models. Finally, we can deploy, host, and manage those models. At any point, depending on customer preference and maturity, we hand the IP back to our customers and help them develop AI competency in house. We’re not a black box.

Of course, there’s more to getting started with AI than these four points, and data science as a service might not always be the answer. The thing to remember is that AI should be consumed in bite-sized chunks and is attainable for even the most technologically immature organizations.

By 2022, 35% of large organizations will be either sellers or buyers of data via formal online data marketplaces, up from 25% in 2020. With AI and ML supplementing existing data sources, there is always more value to be derived from large quantities of data.

For years, the data management industry has been talking about the ever-growing volume, velocity, and variety of data. For traditional analytics, the challenge has been how to reduce the data used in reporting and BI: how to separate the signal from the noise, how to prioritize the most relevant and accurate data, and how to make a company’s universe of data usable to an increasingly self-service user population. This notion of having too much data is well-founded – much of the data in an organization isn’t readily useful for traditional analytics. Data may be incomplete, inaccurate, too granular, unavailable, or simply not useful for a particular use case. However, when implementing AI and ML, it turns out that having more data available, from as many sources as possible, is one of the most important ingredients in building a successful model.

In traditional analytics, the user decides which data is most useful to their analysis and, in so doing, taints the results through their own intentional omissions and unintentional biases. But in AI/ML (and especially when we’re leveraging Automated Machine Learning (AML) technologies), we really can’t have too much good data. We can throw massive amounts of data at the problem and let AML ascertain what’s relevant and helpful, and what isn’t. We want lots of data, and unfortunately we usually don’t have enough.
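To make the “let the model ascertain what’s relevant” idea concrete, here is a hedged sketch: mix genuinely predictive columns with irrelevant ones and let a model’s feature importances separate them. The data, column counts, and weights are invented for illustration; a real AML platform does this across many model types automatically.

```python
# Sketch: feed the model both useful and noisy columns and let it rank relevance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
signal = rng.normal(size=(n, 3))   # columns that actually drive the target
noise = rng.normal(size=(n, 5))    # irrelevant "extra" data thrown in anyway
y = signal @ np.array([3.0, 2.0, 1.0]) + rng.normal(scale=0.1, size=n)
X = np.hstack([signal, noise])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Importances concentrate on the first three columns; the noise scores near zero.
print(np.round(model.feature_importances_, 3))
```

Nothing is lost by including the extra columns: the model simply assigns them negligible importance, which is why more candidate data is rarely a liability in this setting.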

In a recent project, we worked with a customer who (like most) believed they had all the data they needed to accurately predict insurance loss risk – they knew their customers, their properties, various demographics, payment histories, and so on. So we built a loss prediction model for them and got good results. The customer was very pleased.

Then we decided to train the model with a combination of internal and third-party data to see whether there would be a difference. We loaded several data sets that significantly enriched the customer’s already voluminous customer and property data. The result was a 25% increase in the efficacy of the AI model – which, as any Data Scientist will tell you, is a massive improvement. And the cost of that data was a drop in the bucket relative to the larger budget.

My message to customers facing these issues has evolved; I now encourage them to seek out more data than they already have. The inclusion of external data at marginal cost can drive substantial improvements in the quality of models and outputs. And many data vendors have made it easier to test, acquire, and parse data for where it is most impactful. The bottom line is that, in the area of AI, more is definitely better, and you can never be too data-rich. 

Ironside and our partner Precisely recently published a white paper on data enrichment for data science, which you can download here.

The world has changed dramatically over the course of a single month, and companies are struggling even more with things that have historically challenged them:

  • Finding the best people to run, build and innovate on their analytics tools and data
  • Making these environments accessible to employees in a work-at-home model

In this Forbes article, Louis Columbus cites a recent Dresner survey that shows up to 89% of companies are seeing a hit to their BI and Analytics budgets due to COVID-19. The survey includes these two recommendations:

Recommendation #1

Invest in business intelligence (BI) and analytics as a means of understanding and executing within the changing landscape.

Recommendation #2

Consider moving BI and analytical applications to third-party cloud infrastructure to accommodate employees working from home.


89% of companies are seeing a hit to their BI and Analytics budgets due to COVID-19.


We’re here to help you explore your options.

Now that the role of analytics is more important than ever to a company’s success, analytics leaders are again being asked to do much more with much less — all while companies are experiencing staff reductions, navigating the complexities of moving to a work-from-home model, and struggling to onboard permanent hires.

To address these short-term shortages (and potentially longer-term budget impacts), companies are naturally evaluating whether leveraging a managed-service approach — wholly or even just in part — can help them fill their skills gap while also reducing their overall spend.

As they weigh this decision, cost, technical expertise, market uncertainty and the effectiveness of going to a remote-work model are all top-of-mind. Here’s how these factors might affect your plans going forward:

Factor 1: Cost

As the Dresner number showed, most analytics teams need to reduce spend. Doing this mid-year is never easy, and it usually comes at the expense of delayed or canceled projects, delayed or canceled hiring, and possibly even staff reductions. All of these decrease a company’s analytics capabilities, which in turn decreases its ability to make the right business decisions at a critical time. A managed services approach to meeting critical analytics needs, even just to address a short-term skills gap, can provide valuable resources in a highly flexible way, while saving companies significant money over hiring staff and traditional consulting models.

Factor 2: Technical Expertise

A decade ago, your options for analytics tools and platforms were limited to a handful of popular technologies. Today, even small departments use many different tools. We have seen organizations utilizing AWS, Azure, and private datacenters. Oracle, SQL Server, and Redshift all at the same company? Yes, we have seen that as well. Some of our customers maintain more than five BI tools. At some point you have to ask: Can we hire and support the expertise necessary to run all these tools effectively? Can we find and hire a jack-of-all-trades?

In a managed services model, companies can leverage true experts across a wide range of technology while varying the extent to which they use those resources at any particular time. As a result, companies get the benefit of a pool of resources in a way that a traditional hiring approach simply cannot practically provide.

Factor 3: Effectiveness of Remote Work Engagement

If you weren’t working remotely before, you probably are now. Companies are working to rapidly improve their processes and technologies to adjust to a new normal while maintaining productivity.

Managed service resourcing models have been delivering value remotely for years, using tools and processes that ensure productivity. Current events have not affected these models, making them an ideal solution for companies trying to figure out the best way to work from home.

Times are changing. We’re ready!

Ironside has traditionally offered Managed Services, to care for and maintain customer platforms and applications, and consulting services, to assist in BI and Analytics development.

Companies can leverage our Analytics Assurance Services on a temporary basis, for a longer period to address specific skills gaps, or to establish a cloud environment that supports remote analytics processes.

With Ironside, you can improve your data analytics within your new constraints, while reducing your costs. We’d love to show you how.

Contact us today at: Here2Help@IronsideGroup.com

Over the past week, I’ve spoken to a number of customers and partners who are adjusting to the ever-evolving reality of life during COVID-19. Beyond the many ways it has affected their personal lives and families, we’ve also discussed how it has impacted their jobs, and the role of analytics in the success of their organizations.

During these conversations, a few consistent themes have emerged from the people responsible for delivering reporting and analytics to their user communities:

  • Reliability: Continuing to deliver business as usual content despite a suddenly remote workforce
  • Resiliency: Hardening existing systems and processes to ensure continuity and security
  • Efficiency: Delivering maximum value even in the midst of a short-term business downturn
  • Innovation: Finding new ways to leverage data to address emerging challenges in areas such as supply chain, customer service, pricing optimization, marketing, and others.

While none of these topics are new to those of us in analytics, the new reality brought on by COVID-19 has made it even more important for us to succeed in every area. In an excellent Forbes article, Alteryx CEO Dean Stoecker discusses the importance and relevance of analytics professionals in driving success for their organizations in these trying times.

As he correctly concludes,

“If anyone is prepared to tackle the world’s most complex business and societal challenges—in this case, a global pandemic—it is the analytic community.”

We’re all in this together.

At Ironside, we’re taking that challenge to heart and looking at how we, too, can refocus our talents to better help our customers. Our upcoming series, Strategies for Success During Uncertain Times, will cover the steps we’re taking to help our partners weather this storm.

As of today, we’re:

  • Holding on-demand “Coffee Breaks” with some of our most experienced SMEs
  • Increasing remote trainings on key technologies
  • Rolling out short-term hosted platforms to accelerate model development, especially for predictive analytics
  • Expanding our managed-services capabilities for platforms and applications, even for short-term demand
  • Increasing our investment in offshore capabilities to reduce costs and expand our coverage models

Additionally, we are offering more short-term staffing options to our customers. Read Depend on Ironside for your data and analytics bench for short- and long-term success for more about these services.

We’re here to help.

At Ironside, we agree that the analytics community is uniquely positioned to help our organizations weather the COVID-19 storm, and we’re committed to making our customers and partners as successful as possible.

We look forward to speaking with you about your immediate needs, and continuing the conversation on these and other timely topics.

Contact us today at: Here2Help@IronsideGroup.com

Ironside has historically focused on longer-term, project- or services-based staffing. However, we understand that what you may need most now is immediate, on-demand access to highly experienced professionals.

To address that critical need, we’re making some of our top people available for short-term work. If you have even the most temporary and immediate need to address capacity constraints, delayed hiring, budget limits, or just to knock a few items off of your to-do list, we can assist with a flexible, remote, talented, and cost-effective pool of professionals. Our areas of expertise include:

  • IBM Analytics portfolio including Cognos, Watson, Netezza and others
  • BI tools including Tableau, Power BI, QuickSight and others
  • Cloud-native technologies on AWS and Microsoft
  • Leading data wrangling, management, and catalog tools
  • Top AI and AML technologies from DataRobot, AWS, and more

The world may be up in the air, but we understand that it has to be business as usual for our clients. We’re here to help you with that.

Contact us today at: Here2Help@IronsideGroup.com

Ironside is pleased to announce the release of a new packaged service, Ascend AI. For nearly 10 years, Ironside has offered data science expertise and advisory services to organizations seeking to establish AI within their enterprise. Now Ironside offers a powerful new service for organizations that are earlier in their AI journey.

AI is becoming more of a necessity for businesses to retain their competitive edge, keep internal costs low and manage risk. But getting started can be overwhelming. What technologies should we invest in? Should we hire data scientists and how many? What use cases should they work on? Where would they get started? How much will this cost us?  Can our current infrastructure support this? Is our data mature enough? Is our organization ready?

Ironside’s strong history of helping organizations get started on their AI journey allows us to understand common pitfalls and how to pivot around them, have a valuable point of view on AI/ML technology and infrastructure options, and provide a highly skilled data science team. We understand that many organizations can’t jump in feet first and need a way to quickly and easily prove value with rapid cost-effective sprints before they begin to think about hiring or large technology purchases. That is why we created Ascend AI.

What is Ascend AI?

Ascend AI is a packaged service, delivered in progressive modules to allow you to scale up at your own pace.

Ascend AI's Progressive 4-Step Solution

Ironside provides the data science team – including solution architects, developers, experience designers, data engineers, and of course experienced data scientists – and leverages the infrastructure and AI IP it has developed over the years. You provide your data and business subject matter experts, who work closely with the Ironside team to develop a customized AI solution that delivers measurable results.

We bring the technology and expertise so you can focus on putting the results to work for your organization.

Who is Ascend AI for?

Ascend AI is right for any organization that says:

  • We need to test out AI use cases before investing in technology and people.
  • We want to build a business case to gain executive support for further AI funding.
  • We want to become an AI-driven organization, but without investing in capital expenses or building an internal center of excellence.
  • We need a trusted AI partner, not an off-the-shelf solution.
  • We have unique business problems and use cases that don’t fit any solution on the market.

Get started, today!

If you want all the benefits of implementing artificial intelligence to analyze, act on, and manage your data, without any of the hassles and headaches, Ascend AI delivers.

When it comes to AI and automated machine learning, more data is good — location data is even better.

At Data Con LA 2019, I had the pleasure of co-presenting a tutorial session with Pitney Bowes Technical Director Dan Kernaghan. We told an audience of data analysts and budding data scientists about the evolution of location data for big data and how location intelligence can add significant and new value to a wide range of data science and machine learning business use cases.

Speeding model runs by using pre-processed data

What Pitney Bowes has done is take care of the heavy lifting of processing GIS-based data so that it comes ready to be used with machine learning algorithms. Through a process called reverse geocoding, locations expressed as latitude/longitude are converted to addresses, dramatically reducing the time it takes to prepare the data for analysis.

With this approach, each address is then associated with a unique and persistent identifier, the pbKey™, and put into a plain text file along with 9,100 attributes associated with that address. Depending on your use case, you can enrich your analysis with subsets of this information, such as crime data, fire or flood risk, building details, mortgage information, and demographics like median household income, age, or purchasing power.
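The key-based enrichment workflow described above amounts to a simple join: first-party records and third-party attributes share a persistent identifier, so a single merge adds the enrichment columns as candidate model features. The column names and values below are hypothetical stand-ins, not the actual pbKey schema.

```python
# Hedged sketch of key-based data enrichment (hypothetical columns, not the
# real pbKey attribute set).
import pandas as pd

# First-party records, each carrying a persistent address-level key
policies = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "address_key": ["K1", "K2", "K3"],  # stand-in for a pbKey-style identifier
    "premium": [1200.0, 950.0, 1430.0],
})

# Third-party enrichment attributes keyed on the same identifier
enrichment = pd.DataFrame({
    "address_key": ["K1", "K2", "K3"],
    "flood_risk": [0.02, 0.31, 0.07],
    "median_income": [72000, 54000, 88000],
})

# One merge on the shared key attaches the enrichment columns as features
enriched = policies.merge(enrichment, on="address_key", how="left")
print(enriched.columns.tolist())
```

Because the identifier is persistent, the same join works for any subset of attributes you license, which is what makes testing enrichment data on an existing model cheap.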

Surfacing predictors of summer rental demand: location-based attributes

For Data Con LA, we designed a use case that we could enrich with location data: a machine learning model to predict summer revenue for a fictional rental property in Boston. We started with “first person” data on 1,070 rental listings in greater Boston that we sourced from an online property booking service. That data included attributes about the properties themselves (type, number of bathrooms/bedrooms, text description, etc.), the hosts, and summer booking history.

Then we layered in location data from Pitney Bowes for each rental property, based on its address: distance to nearest public transit, geodemographics (CAMEO), financial stress of city block, population of city block, and the like.

Not surprisingly, the previous year’s summer bookings and scores based on the property description ranked as the most important features. Unexpectedly, though, distance to the nearest airport ranked third in importance. Other location-based features that surfaced as important predictors of summer demand included distance to Amtrak stations, highway exits, and MBTA stations; block population and density measures; and block socio-economic measures.

By adding location data to our model, we increased the accuracy of our prediction of how frequently “our” property would be rented. Predicting that future is an important outcome, but more important is determining what we can do to change future results. In this scenario, we can change the price, for example, and rerun the model until we find the combination of price and number of days rented that we need to meet our revenue objective.
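The price-and-rerun loop described above amounts to a scenario sweep: feed candidate prices through the trained model and keep the best combination that meets the revenue objective. The linear demand function and target figure below are invented stand-ins for the real model, just to show the shape of the loop.

```python
# Illustrative scenario sweep over price (stand-in demand model, not the
# actual trained rental model).
def predicted_days_rented(price):
    # Pretend model output: predicted summer days rented falls as price rises
    return max(0.0, 90 - 0.25 * price)

target_revenue = 8000  # hypothetical summer revenue objective
best = None
for price in range(100, 401, 10):
    revenue = price * predicted_days_rented(price)
    # Keep the highest-revenue scenario that still meets the objective
    if revenue >= target_revenue and (best is None or revenue > best[1]):
        best = (price, revenue)

print(best)
```

In practice each candidate price would be scored by the deployed model rather than a closed-form function, but the decision loop is the same: simulate, compare against the objective, and pick the actionable combination.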

Building effective use cases for data science

As a Business Partner since 2015, Ironside Group often incorporates Pitney Bowes data — both pbKey flat-file data and traditional GIS-based datasets like geofences — into customized data science solutions built to help companies grow revenue, maximize efficiency, or understand and minimize risk. Here are some examples of use cases that incorporate some element of location-based data into the model design.

Retail loss prevention. A retailer wanting to analyze shortages, cash loss and safety risks expected that store location would be a strong predictor of losses or credit card fraud. However, models using historical store data and third-party crime risk data found that crime in the area was not a predictor of losses. Instead, the degree of manager training in loss prevention was the most significant predictor — a finding that influenced both store location decisions and investments in employee training programs.

Predictive policing. A city police department wanted a data-driven, data science-based approach to complement its fledgling “hot spot” policing system. The solution leverages historical crime incident data combined with weather data to produce an accurate crime forecast for each patrol shift. Patrol officers are deployed in real time to “hot spots” via a map-based mobile app. Over a 20-week study, the department saw a 43% reduction in targeted crime types.

Utilities demand forecasting. A large natural gas and electricity utility needed a better way to anticipate demand in different areas of its network to avoid supply problems and service gaps. The predictive analytics platform developed for the utility uses cleaned and transformed first-party data from over 40 geographic points of delivery, enriched with geographic and weather data to improve the model’s demand predictions. The result is a forecasting platform that triggers alerts automatically and allows proactive energy supply adjustments based on predicted trends.

About Ironside Group and Pitney Bowes

Ironside Group was founded in 1999 as an enterprise data and analytics solution provider and system integrator. Our data science practice is built on helping clients organize, enrich, report on, and predict outcomes with data. Our partnership and collaboration with Pitney Bowes leads to client successes as we combine our use case-based approach to data science with Pitney Bowes data sets and tools.

In today’s “Big Data” era, a lot of data, in volume and variety, is being continuously generated across various channels within an enterprise and in the Cloud. To drive exploratory analysis and make accurate predictions, we need to connect, collate, and consume all of this data to make clean, consistent data easily and quickly available to analysts and data scientists.
