Public data is everywhere, and if you know where to look, you’d be surprised at the insights it can give you. In fact, when paired with the right tools, this freely available information can enrich and complement your internal data resources to reveal compelling patterns of behavior and trends that you can act on to drive growth at your organization. To showcase what public data can do in the hands of professional analysts, we’re kicking off the Ironside Public Data Powered article series. These publications will periodically take you behind the scenes to show you how our consultants think about and interact with public data using the skills and technologies at their disposal. In this inaugural article, we’ll explore what it takes to start understanding patterns and relationships within a combined public and internal data set through IBM Watson Analytics. Read more

ELT is a term heard increasingly in today’s analytic environments. Understanding what it means, and how you can make use of it, requires knowing how data warehouses have traditionally been loaded and how data movement tools work. That’s why we’ve pulled this article together: to break down the ETL vs. ELT divide and show you where the similarities and differences lie. Read more
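The distinction can be sketched in a few lines of Python, using an in-memory SQLite database as a stand-in for a warehouse. All table and column names here are illustrative assumptions, not taken from any particular product:

```python
import sqlite3

raw_rows = [("2015-01-03", "acme", "149.99"), ("2015-01-04", "acme", "75.00")]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# --- ETL: transform in the data-movement tool, THEN load the result ---
cur.execute("CREATE TABLE sales_etl (sale_date TEXT, customer TEXT, amount REAL)")
transformed = [(d, c.upper(), float(a)) for d, c, a in raw_rows]  # transform first
cur.executemany("INSERT INTO sales_etl VALUES (?, ?, ?)", transformed)

# --- ELT: load the raw data as-is, THEN transform inside the warehouse ---
cur.execute("CREATE TABLE sales_raw (sale_date TEXT, customer TEXT, amount TEXT)")
cur.executemany("INSERT INTO sales_raw VALUES (?, ?, ?)", raw_rows)  # load first
cur.execute("""
    CREATE TABLE sales_elt AS
    SELECT sale_date, UPPER(customer) AS customer,
           CAST(amount AS REAL) AS amount
    FROM sales_raw
""")  # the warehouse's own SQL engine does the transformation

etl = cur.execute("SELECT * FROM sales_etl ORDER BY sale_date").fetchall()
elt = cur.execute("SELECT * FROM sales_elt ORDER BY sale_date").fetchall()
print(etl == elt)  # both approaches produce the same result set
```

The end result is identical; what differs is where the transformation work runs, which is exactly the trade-off the article examines.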

In 2014, cloud data warehousing services led the information management category in increased adoption rate, jumping from 24% to 34% according to surveys by InformationWeek. For organizations challenged by data urgency needs that can be difficult to meet with traditional data warehouse infrastructures, cloud services offer an alternative that can provide value at the pace of business, often supplementing existing, on-premises data warehouses. With new technologies and advancements in the cloud data warehousing space, 2015 should prove to be an exciting year for those looking to build out or implement new cloud-based DW programs. Whether you are in the midst of a cloud DW initiative, looking to start one soon, or just getting to know the technology, the five trends that we discuss below are items you will want to keep in mind for the coming year. Read more

Cognos makes extensive use of data warehousing concepts. Most data warehouses are built using dimensional modeling techniques (also known as the “Kimball style”). Data is divided into fact and dimension tables, which are joined together in star schemas. Restructuring data in this fashion takes a great deal of effort, both in planning and implementation. These types of changes are only done because they are necessary for high-quality analytics. Understanding more about how they work and why they are important can help make Cognos a more efficient and effective reporting tool. Read more
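As a rough illustration of the star-schema idea described above, here is a toy fact table joined to two dimension tables in SQLite. The names (`fact_sales`, `dim_date`, `dim_product`) are hypothetical, but the shape — facts keyed to dimensions, queried by dimension attributes — is the Kimball pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    -- Dimension tables hold descriptive attributes.
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    -- The fact table holds measures plus foreign keys into each dimension.
    CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, quantity INTEGER, amount REAL);

    INSERT INTO dim_date    VALUES (20150101, 2015, 1), (20150201, 2015, 2);
    INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gadget', 'Hardware');

    INSERT INTO fact_sales VALUES (20150101, 1, 3, 30.0);
    INSERT INTO fact_sales VALUES (20150101, 2, 1, 25.0);
    INSERT INTO fact_sales VALUES (20150201, 1, 2, 20.0);
""")

# A typical dimensional query: slice and total the facts by dimension attributes.
rows = cur.execute("""
    SELECT d.year, d.month, p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key    = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, d.month, p.category
    ORDER BY d.month
""").fetchall()
print(rows)  # [(2015, 1, 'Hardware', 55.0), (2015, 2, 'Hardware', 20.0)]
```

Every analytic question becomes a variation on this one join pattern, which is what makes star schemas so friendly to tools like Cognos.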

Today’s big data challenges, for both transactions and analytics, are placing increasing demands on data systems. Traditional data warehouses often struggle because they were not designed to meet the demands of advanced analytics on big data. That’s where solutions like Netezza come in.

IBM PureData for Analytics (formerly Netezza) is a data warehouse appliance with a purpose-built analytics engine and an integrated database, server, and storage. With simple deployment, out-of-the-box optimization, no tuning, and minimal ongoing maintenance, the IBM Netezza data warehouse appliance offers the industry’s fastest time-to-value and lowest total cost of ownership. Read more

Planning an information management infrastructure is a fun and exciting activity for everyone, whether they work with information systems or not. What? It isn’t? Then why are so many people building information management infrastructure solutions in Access and Excel? That’s not IM, you say? Well, in a way, it is, though we can agree that it is neither sufficient nor sustainable for just about any organization. So why do it? Read more

Reporting should start with a “single source of truth” and end with a “single version of truth.” Delivering accurate reporting information is paramount in the day-to-day operations of every organization. Methods for maintaining data integrity within the data warehouse include applying referential integrity constraints and maintaining primary keys on tables. In addition, the associated ETL processes can use checksums or record counts to validate each load of the warehouse. Once the data is in the warehouse, the next step is to model it for exposure to end users. Read more
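The two load-time checks mentioned above, record counts and checksums, can be sketched in a few lines of Python against SQLite. The table name and the checksum scheme here are illustrative assumptions; real ETL tools typically offer equivalent built-in audit steps:

```python
import hashlib
import sqlite3

# Pretend this is the extract handed to the ETL process.
source_rows = [(1, "alpha", 10.0), (2, "beta", 20.5), (3, "gamma", 7.25)]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dw_target (id INTEGER PRIMARY KEY, name TEXT, amount REAL)")
cur.executemany("INSERT INTO dw_target VALUES (?, ?, ?)", source_rows)

def checksum(rows):
    """Order-insensitive checksum over the stringified rows."""
    digest = hashlib.sha256()
    for row in sorted(rows):
        digest.update(repr(row).encode())
    return digest.hexdigest()

loaded_rows = cur.execute("SELECT id, name, amount FROM dw_target").fetchall()

# Check 1: the record counts match between source and target.
assert len(loaded_rows) == len(source_rows)
# Check 2: the checksums match, catching truncated or altered values.
assert checksum(loaded_rows) == checksum(source_rows)
print("load validated")
```

A count catches dropped or duplicated rows cheaply; the checksum additionally catches rows whose values were mangled in transit.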

With the introduction of Active Report, IBM Cognos 10 report designers can now quickly and effectively create the dynamic, powerful, interactive, self-contained reports that consumers are asking for — reports available offline and on the go.

Active Report’s intuitive, drag-and-drop interface makes it incredibly easy to add interactivity and functionality to reports, including variable text items, tab controls, and data decks.

To illustrate the ease of use, this tutorial will walk you through the steps for creating a simple report containing a data deck in Active Report. Data decks allow report developers to add modern and visually appealing animations to reports. Using master-detail relationships, a variety of data containers can be inserted into a deck and updated to show only values selected in an Active Report control.  Read more

A Cognos report is only as good as its data. How that data is organized will affect performance, accuracy, and ease of authoring. There is no single solution — before you can get your data right, you need to know how it is going to be used.

Know Your Requirements

In an ideal world, databases would produce the data for any kind of report with equal speed and ease. Unfortunately, this is not the case. Business intelligence queries in particular frequently push the envelope of what databases can be expected to deliver. To keep performance acceptable, design compromises need to be made. Usually, this means making certain types of queries faster at the cost of making others slower. If the faster queries are the ones the users care about, the compromise will be a success. Read more
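One concrete form of that compromise is indexing: an index speeds up the queries it matches while adding work to every insert and update. A small SQLite sketch (the table and index names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10.0), ("west", 20.0), ("east", 5.0)])

def plan(sql):
    """Return SQLite's query-plan description as a single string."""
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM sales WHERE region = 'east'"

print(plan(query))  # before: a full scan of the table (exact wording varies by version)

cur.execute("CREATE INDEX idx_sales_region ON sales (region)")

print(plan(query))  # after: a search using idx_sales_region -- but every
                    # future INSERT now pays to keep the index current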

As consultants, we have found that the majority of struggling IBM Cognos implementations we encounter are due to either poor framework model design, or more often, a flawed database architecture. The case of the latter can present itself in a number of ways, but in the worst cases, we’ve discovered reporting applications built upon highly normalized OLTP systems that are ineffective and detrimental to both analytical and operational performance of an organization’s information systems. Another common case is an implementation of Cognos upon an existing data warehouse where users are provided with unfettered ad-hoc access to the data source for the first time, exposing previously unforeseen or unknown data quality issues. Read more