Tallan Blog


Performance Point – the Good and the Bad – Part 2

This is Part 2 of Performance Point – the Good and the Bad.  Part 1 can be found at http://blog.tallan.com/2012/12/16/performance-point-the-good-and-the-bad/

That article focused on the good features of Performance Point (PP) out-of-the-box (OOTB).  This one focuses on the not-so-good features, or lack thereof – and offers some tips for dealing with some of these issues.  Given its focus, it is somewhat more technical and requires more familiarity with the product than Part 1.

As an introduction, the three most problematic aspects of PP in my experience are:

  • Writing a custom MDX query for a “report” object (analytic chart or grid) is often the only way to achieve non-trivial functionality, yet doing so disables all the built-in ad hoc navigation and decomposition tree functionality that PP provides (importantly, this is not true for MDX written for Filter and KPI objects).  While this is understandable when one considers the complexity that would be involved for PP to provide these capabilities for an arbitrary query, it should be technically possible.  PP takes the easy way out here.
  • Options for formatting and structuring all objects are extremely limited.
  • Charting capabilities are very simplistic.

Performance Point OOTB Bad Features, in no special order:

1 – Analytic Grids do not provide row or column totals or a grand total.  The only way to achieve them, if possible at all, is either with fairly complex MDX in the query, or by driving calculated measures and members (after developing them) back into the cube, so that the designer alone can be used.  When row or column order is important (as it usually is!), the latter approach may further require creation of named sets in the cube.
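As a sketch of the MDX route, a query-scoped calculated member can act as a grand-total row.  All cube, dimension, and measure names below are hypothetical, and (per #2 below) editing the query this way disables the built-in navigation:

```mdx
WITH
  -- Calculated member on the row hierarchy that aggregates the
  -- displayed members, serving as a grand-total row
  MEMBER [Product].[Category].[All Products].[Total] AS
    Aggregate([Product].[Category].[Category].Members)
SELECT
  { [Measures].[Sales Amount] } ON COLUMNS,
  { [Product].[Category].[Category].Members,
    [Product].[Category].[All Products].[Total] } ON ROWS
FROM [Sales]
```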

2 – Tied with #1 as a major problem: as soon as you modify the query MDX of an analytic grid or chart via the provided client designer tool, Dashboard Designer (DD), which is frequently necessary to work around or extend OOTB functionality, the object becomes totally static: you lose the drilldown and drill-across OOTB functionality mentioned as the #2 good feature in the prior post.  In PP2007 you could edit the MDX directly in the dashboard file and frequently the drilldown would still work (not recommended, but possible).  However, in 2010 the query MDX is no longer stored in the file, so this is not an option.

3 – If you modify the MDX of a query, you can still link filters to it, but must wire them up partly by hand using the Parameters functionality in the Query pane.  See:  http://blogs.msdn.com/b/performancepoint/archive/2008/10/20/mapping-dashboard-filters-to-analytic-charts-and-grids.aspx
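Following the linked article, parameter placeholders in custom query MDX are angle-bracket tokens that you declare in the Parameters section of the Query pane and then wire to a dashboard filter.  A minimal sketch, with invented names:

```mdx
-- <<GeographyParam>> must also be added to the Parameters list in the
-- Query pane and connected to a dashboard filter; PP substitutes the
-- filter's value (usually an MDX Unique Name) at run time
SELECT
  { [Measures].[Sales Amount] } ON COLUMNS,
  { Descendants(<<GeographyParam>>, [Geography].[City]) } ON ROWS
FROM [Sales]
```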

4 – Formatting and layout options for analytic grids and charts are severely limited – to the point of embarrassment.  You cannot add static text, and there is virtually no control over chart, legend and label formatting.  You cannot rename measures (typically, column labels).  You have perhaps 1% of the degree of control you have in Reporting Services on charts and grids.  Therefore, presenting an RS report through PP may be your best option in many cases.

5 – MDX-based filters take a set expression, *not* a full-blown query.  Thus, you cannot use query-scoped constructs, such as query-specific measure and set definitions.  This often forces you to define the calculations and sets you need in the cube, just to work around this limitation.  At least it is an option, assuming you have free rein with the cube.  If you don’t, your options are that much more limited.
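In other words, a filter’s MDX must be a bare set expression such as the hypothetical one below; a full SELECT with query-scoped WITH definitions is not accepted:

```mdx
-- Acceptable as an MDX filter: a set expression only
Filter(
  [Date].[Calendar Year].[Calendar Year].Members,
  [Measures].[Sales Amount] > 0
)
-- Not acceptable: WITH MEMBER/SET ... SELECT ... FROM [Cube]
```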

6 – DD does not display cube objects which are hidden – thus, since it is a drag-drop UI, you cannot use hidden objects.  It does seem to be OK to hide objects after incorporating them into your PP objects, as you generally would wish to do if they exist solely to support PP.  It is very inconvenient to have to use such a two-step process.

7 – There is no drag/drop or other assistance in the MDX editors, nor any ability to test fragments – it is just plain text entry.  This is cumbersome and error-prone.  You are highly advised to develop and test all queries and expressions separately before incorporating them into PP.

8 – In general, building dashboards is not intuitive, and some terminology is confusing.  For instance, in industry parlance most would consider a scorecard a specialized type of dashboard.  In PP a scorecard, as with all other object types except what it calls “dashboards”, is not a stand-alone entity, but only functions within the context of a “dashboard”.  Dashboards “package” all other PP types, and must be deployed to the PP server (Sharepoint as of 2010) to function.  In DD they cannot be previewed (certain other types can be, such as analytic grids and KPIs).  Also, the advanced drill-anywhere functionality is only available on dashboards which have been deployed to the server.

9 – The “remembering” of filter choices between sessions cannot, it seems, be disabled (except by setting a very short timeout, which is server-wide).  While this functionality can be useful in some scenarios, it can be equally irritating in others.

10 – When exposing a Reporting Services report object that has parameters in PP, even if all parameters have defaults defined in the report definition and the PP report object is set to use the default on each parameter, it does not work in preview mode in DD – you get an error that a parameter is missing a value.  And even when deployed to the server within a dashboard, the report only starts working once you hook up a filter to each parameter.  Thus, you cannot use a report on the strength of its parameter defaults alone; you still have to attach dashboard filters to it.

11 – Named-set-based filters can only use named sets defined in the cube – you cannot write a set expression.  Moreover, the set may contain only a single hierarchy or attribute; if it contains more than one, DD will not show it as an option to use.  This can come up if the named set uses the EXISTING function, since the resulting set will contain attributes from both of the sets passed to it.  To use such a set with PP you must wrap it with the EXTRACT function to extract the single user hierarchy or attribute that you need members from.
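As a sketch with hypothetical names: if a cube-defined named set mixes two hierarchies because of EXISTING, a second cube-level set can wrap it with EXTRACT so that DD will offer it:

```mdx
-- Cube calculation script: [Active Customer Regions] mixes two
-- hierarchies, so EXTRACT pulls out just the one PP can use
CREATE SET CURRENTCUBE.[Active Customer Regions For PP] AS
  EXTRACT(
    [Active Customer Regions],
    [Customer].[Customer Geography]
  );
```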

12 – In an analytic chart or grid, when you do a right-click drilldown (which can only be done on dimension members), it shows *all* dimensions in the cube, not just the ones that could be related at that point.  This is very dysfunctional if you have many dimensions and each is only related to certain measure groups.  In contrast and much better, when you right-click to a decomp tree (which can only be done on measures), it only shows the related dimensions, even on calculated measures.

13 – Every time you expand *or* contract a drillpath, a new cube query is issued.

14 – If something is wrong with a connection formula, you get no indication until the dashboard is deployed, at which time you’ll get an unhelpful “You do not have permission to access” error.

15 – In the prior case and others, most error messages you receive are useless, and you must access the PP server log for usable detail.  However, this log is typically located in the Sharepoint (SP) hive, and in a locked-down environment you, as a BI developer, probably will not have direct access to it.  This can make debugging almost impossible if you must ask the SP administrator for the data every time.

16 – It is not clear how committed Microsoft really is to PP, or whether it is getting any further investment, given the current emphasis on PowerPivot, PowerView and the Tabular model.

17 – PP is a component of the Enterprise SKU of Sharepoint (SP).  Not everyone has, wants or can afford that.

18 – As a component of SP, PP suffers from all the same limitations as SP does on mobile devices.

Some tips and observations, also in no special order:

  • The “Connection Formula” option of a Filter connection can provide crucial functionality.  It enables you to write an expression based on the value being passed by the filter, which allows you to modify that value before it reaches the connected object’s endpoint.  With cube sources, the value passed by the filter is usually the MDX Unique Name of the member selected in the filter.  The Connection Formula allows you to do things like:
    a) use the LinkMember function to “shift” the passed-in dimension reference to a different one required by the connected object.  For instance, suppose the cube has many role-playing dimensions based on date.  The Date filter on the dashboard can be based on only a single one of these.  If an object on the dashboard requires its Date parameter to be a different role-playing one, the Connection Formula can use LinkMember to shift it on the way in.
    b) use “.Children” to pass the children of the selected member to the connected object.
    c) apply pretty much any transform you can think of that is based on a member.
    d) it is TBD what a Connection Formula can do (if anything) when the filter is multi-valued.
    NOTE, however, that the MDX Unique Name value can be used in only one filter connection.  If you wanted to connect the same filter to, say, two different role-playing dimensions on a PP object, you cannot.  Once you’ve used the MDX Unique Name choice, the only other choice (for a filter) is Display Name.  I tried to write connection formulae using Display Name to construct the relevant unique name, but it just doesn’t work.  Display Name seems to be most suitable for passing as a parameter to an RS report object.
  • Dashboard Designer will not recognize structural changes to a cube until restarted.
  • On a PP Analytic grid with a hierarchy on rows, and a multi-selectable filter on the same hierarchy connected to it, aggregated values will reflect “visual totals” in the manner expected when only a single member is selected in the filter.  However, if more than one member on the same hierarchy level is selected, then the grid displays “non-visual totals”, apparently by design – though it can be extremely confusing-looking.  This article has the details:  http://stackoverflow.com/questions/9162856/inconsistent-roll-up-behavior-in-a-performancepoint-grid-object
  • If you use a connection formula to change dimensions (i.e. with LinkMember), be sure the result is compatible with the type of object the target parameter expects.  A classic error would be that the source is an attribute hierarchy and the target expects a user hierarchy, or vice versa.
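To illustrate the LinkMember case from the first bullet above: assuming the token PP substitutes for the filter’s value is <<SourceValue>> (dimension names here are hypothetical), a connection formula shifting an Order Date member to the role-playing Ship Date dimension might look like this:

```mdx
-- Shift the filter's member (its MDX Unique Name) from the
-- [Order Date] hierarchy to the matching [Ship Date] hierarchy
LinkMember(<<SourceValue>>, [Ship Date].[Calendar])

-- Or pass the selected member's children instead:
-- <<SourceValue>>.Children
```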

Additional Information

Others have written in even more detail about PP’s shortcomings and, in some cases, possible workarounds.  Here is a good one:


Also, PP2013 is, or soon will be, generally available.  It makes a number of important improvements, though all address deficiencies I didn’t even mention (or in some cases wasn’t aware of).  I have not yet had a chance to look at this release in depth, but as far as I can tell from the advance notices, it does not appear to address any of the deficiencies I’ve brought up here.  If I am wrong I will update this post.  For more information:  http://blogs.msdn.com/b/performancepoint/archive/2012/08/03/what-s-new-in-performancepoint-services-2013.aspx
