For most Microsoft IT professionals, migrating or updating a native mode SQL Server Reporting Services (SSRS) installation from one version to another is a rare, if not once-in-a-lifetime, event – and probably one you would prefer a root canal to. Because software upgrades of all types tend to get postponed as long as possible, if you find yourself finally tasked with such an upgrade, several unpleasant things are likely true:
The effort is in crisis mode, driven by software (SSRS, OS) going off support, hardware becoming unreliable, or a line-of-business application that must itself be upgraded but cannot be until SSRS is.
The current installation was not done by you, and whoever did it is long gone, so you are not that familiar with it and would frankly rather not be. SSRS is not your “thing”.
The current installation is poorly documented, if at…
SQL Server’s AlwaysOn technology (available since SQL Server 2012) provides high-availability and disaster-recovery database functionality that largely supplants mirroring and log shipping – in fact, mirroring is now deprecated. Exactly what functionality is available, and how robust it is, varies by release (2012+), edition (Standard versus Enterprise) and the physical infrastructure devoted to it. AlwaysOn is fairly easy to set up (though it requires cooperation from both Windows Server and networking admins) and, relative to the required effort, provides exceptional durability for SQL Server databases. AlwaysOn is not a complete OOTB durability solution and has significant gaps – e.g. it does not keep SQL Server logins and roles synchronized across replicas – but it is an excellent start for the needs it caters to.
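As a quick illustration of the kind of monitoring an availability group calls for, a query along the following lines – run against the AlwaysOn dynamic management views on any participating replica – reports each replica’s role and synchronization health. This is a sketch, not a complete monitoring solution; the DMV and catalog view names are real, but adapt the query to your own environment.

```sql
-- One row per replica per availability group: current role (PRIMARY/SECONDARY),
-- synchronization health, and connection state. Useful as a first check that
-- data movement is healthy before relying on AlwaysOn for DR.
SELECT ag.name                          AS availability_group,
       ar.replica_server_name           AS replica,
       ars.role_desc                    AS current_role,
       ars.synchronization_health_desc  AS sync_health,
       ars.connected_state_desc         AS connection_state
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar ON ar.replica_id = ars.replica_id
JOIN sys.availability_groups   AS ag ON ag.group_id   = ars.group_id
ORDER BY ag.name, ars.role_desc;
```

Note this only covers the database side; the login/role synchronization gap mentioned above still has to be handled separately (e.g. by scripted transfer of logins between replicas).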
This post assumes the reader has at least a basic familiarity with SQL Server backups, as well…
Introduction – Part 2
Part 1 of this post focused on the first category of how the Analysis Services Multidimensional (MD) duplicate attribute key error can arise. It reflects the perspective of an atomic attribute – an attribute having no attribute relationships other than with the dimension key attribute.
This post focuses on the second category of this error, which can arise when an attribute does have attribute relationships besides the (required) one with the dimension key attribute.
As is well known, creating attribute relationships is a best practice in Analysis Services MD for improving query performance. The most common reason attribute relationships are created is to support a natural hierarchy – so your data model has to have one for this to arise. The next most common reason is to support attribute properties, such as a sort order – i.e. when the…
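An attribute-relationship violation of the kind described above can usually be found in the relational source before processing. The sketch below (table and column names are assumed examples, not from the original post) looks for months that map to more than one quarter – which would break a Month → Quarter attribute relationship and surface as the duplicate attribute key error:

```sql
-- Hypothetical diagnostic against an assumed date dimension source table.
-- An attribute relationship Month -> Quarter asserts each MonthKey has
-- exactly one Quarter; any row returned here violates that assertion
-- and will fail dimension processing.
SELECT MonthKey,
       COUNT(DISTINCT Quarter) AS DistinctQuarters
FROM dbo.DimDate
GROUP BY MonthKey
HAVING COUNT(DISTINCT Quarter) > 1;
```

The same pattern applies to any related attribute pair: group by the “many” side and count distinct values on the “one” side.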
Introduction – Part 1
The most common and dreaded error that may occur when processing a dimension in Analysis Services Multidimensional (MD) is undoubtedly “Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: ….” ‘Common’ because of both the poor data integrity frequently found in cube data sources and improper modeling of the data in MD. ‘Dreaded’ because a number of distinct situations all give rise to this error, and it can be difficult to diagnose which one is the actual problem. There is enough to explore around this error that it warrants two parts; this is Part 1, with Part 2 to follow.
The full error message looks like “Errors in the OLAP storage engine: A duplicate attribute key has been found when processing: Table: ‘TABLENAME’, Column: ‘COLUMN’, Value: ‘VALUE’. The attribute is ‘ATTRIBUTE’.” TABLENAME will…
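For the atomic-attribute case, one frequent culprit is key values that differ only in case or trailing whitespace: SSAS may treat them as distinct where a case-insensitive SQL Server collation treats them as equal (or vice versa). A diagnostic along these lines – table and column names are assumed placeholders – surfaces such collisions in the source:

```sql
-- Hypothetical check against an assumed source table/column: find values
-- that collide once trailing spaces are trimmed and case is ignored.
-- COLLATE Latin1_General_BIN2 forces a binary comparison so that raw
-- case/whitespace variants are counted separately even under a
-- case-insensitive database collation.
SELECT UPPER(RTRIM(ColumnName)) AS NormalizedValue,
       COUNT(DISTINCT ColumnName COLLATE Latin1_General_BIN2) AS RawVariants
FROM dbo.SourceTable
GROUP BY UPPER(RTRIM(ColumnName))
HAVING COUNT(DISTINCT ColumnName COLLATE Latin1_General_BIN2) > 1;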
It is no secret that Microsoft is moving away from the original “Multidimensional” (MD) flavor of Analysis Services, instead focusing all efforts on the newer “Tabular” (Tab) flavor. This drives the standard advice that Tab should be the basis of most new cube projects. Another idea driving this is that for 60-70% of projects, either technology will likely work fine – so it is prudent to go with the newer one, which is much earlier in its lifecycle.
However, there are many differences between the two technologies. In particular cases, one or the other is clearly the better fit, but without understanding these differences it is difficult even to assess whether your project is in the 60-70% where it doesn’t matter, or the 30-40% where it may. Even if you would elect Tabular regardless, simply to stay earlier in the product lifecycle, you should be sure you understand the tradeoffs.
Since MD is…
This post is the first of a series providing an overview of “bi-temporal data management” and contrasting it with the current standard approach in data warehousing for tracking historical changes in data (“temporal data management”), developed by Ralph Kimball. Bi-temporal data management is a concept that has been under development since the 1980s, but it is familiar to few data professionals even today. Reasons for this include:
in many applications it just isn’t needed, or its complexity is not justified;
the Kimball methodology is predominant for temporal data management in the data warehouse context – and it does not handle or even consider bi-temporality other than via several half-baked patterns;
without built-in support in the major relational database engines, the necessary SQL to implement bi-temporality is devilishly complex; for all practical purposes it is not feasible from perspectives of understandability, maintenance and performance, especially…
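To make the complexity concrete, the following sketch shows just the *simplest* bi-temporal operation – a point-in-time read – against a hypothetical hand-rolled table (all table, column and variable names are assumptions for illustration). Each row carries a valid-time interval (when the fact was true in the business world) and a transaction-time interval (when the database asserted it), both modeled as half-open intervals [From, To):

```sql
-- "What did we believe on @AssertedAsOf about the customer's address
--  as of @ValidAsOf?" Even this read needs two interval predicates;
-- updates, joins and constraint enforcement compound the complexity
-- described above.
DECLARE @ValidAsOf    date     = '2014-06-30';
DECLARE @AssertedAsOf datetime = '2015-01-15';

SELECT CustomerID, Address
FROM dbo.CustomerBitemporal
WHERE ValidFrom    <= @ValidAsOf    AND @ValidAsOf    < ValidTo
  AND AssertedFrom <= @AssertedAsOf AND @AssertedAsOf < AssertedTo;
```

Every query, update and integrity rule against such a table must carry both pairs of predicates correctly, which is exactly the maintainability burden the bullet above refers to.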
The following observations come from the NYC SQL Server Meetup 9/1/2015 presented by Devin Knight of Pragmatic Works, augmented with my own discoveries. They assume the reader has basic familiarity with the prior PowerPivot, PowerQuery and PowerView products.
• This release debuts PowerBI Desktop (PBID). The significance of this is that PowerBI is no longer tied to Excel or SharePoint, so it no longer requires IT involvement (aside from granting access to data sources). It is a stand-alone app à la (but not part of) Office 365. It is basically a bundling, divorced from Excel, of the prior PowerQuery, PowerPivot and PowerView products, and its look and feel is very similar to theirs. It can be downloaded for free from www.powerbi.com.
• The disconnect from Excel means it can have its own release cycle and not be constrained by having to render through Excel. There have reportedly…
I’ll be giving a presentation on the somewhat unlikely subject of the title at the next NYC SQL Saturday at Microsoft HQ on May 30, 2015. It will be a largely non-technical, non-demo presentation about accounting and bookkeeping concepts developers are likely to encounter if they have to interact with financial recordkeeping applications, particularly in data warehousing and business intelligence contexts. I will also offer reasons why learning about this (in more depth than is possible in this session) represents great career development for the average developer.
There will be one tech hook wherein I will demonstrate how Microsoft’s Analysis Services directly supports some of the mechanisms of bookkeeping.
The all-day event is free (except for lunch) and has a host of valuable sessions on a wide variety of mostly technical SQL Server topics. Here is the link: http://www.sqlsaturday.com/380/EventHome.aspx
This is Part 2 of Performance Point – the Good and the Bad – Part 1, which can be found at http://blog.tallan.com/2012/12/16/performance-point-the-good-and-the-bad/
That article focused on the good features of Performance Point (PP) out-of-the-box (OOTB). This one focuses on the not-so-good features, or lack thereof – and offers some tips for dealing with some of these issues. Given its focus, it is somewhat more technical and requires more familiarity with the product than Part 1.
This posting presumes the reader has at least a basic understanding of what Microsoft’s Performance Point (PP) offering is – at minimum, at the level of wanting to assess whether it is appropriate for your requirements. Basic familiarity with Microsoft’s Analysis Services is also required to get the most out of the observations I provide. With that background, this two-part posting should prove helpful in assessing whether PP might suit your requirements, in whole or in part.