FAST Search Server 2010 ships with a web application for executing FQL queries against the FAST index, called QRServer. One of our clients was having an issue executing FQL queries: they were getting back only data crawled by the FAST Web Crawler, and none of the SharePoint data crawled by the FAST Content SSA.
The issue stemmed from permissions on the FAST server. Our client did not wish to add all of the developers’ accounts to the FASTSearchAdministrators group, and he found a solution/workaround on the TechNet forums:
To disable the Security FQL stage in the query pipeline, follow the instructions below. For those of you who are not familiar with this stage: the securityfql stage checks the security rights of the logged-in user against the docacls in each search result and filters out any results the current user does not have rights to see. Also,…
In a recent client solution we were using a multi-valued Managed Metadata field to classify documents in an organizational taxonomy. Our search and navigation relied on being able to retrieve documents by supplying any one of the (possibly multiple) values stored in the Managed Metadata field.
We discovered, however, that even though many documents had multiple values in this field, only one value was being returned in the managed property.
We examined the crawled property feeding the managed property, found IsMultiValued = false, and figured that was the issue. It turns out this property is not used in FAST.
From Multi-Value Property Support in FAST Search Server for SharePoint (en-us):
Configuring multi-value properties
All crawled properties support multi-value data. Note that the crawled property configuration in the index schema contains a property named IsMultiValued. This is not used.
When you map to a managed property …
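In our case the fix was to enable merging of crawled property values on the managed property. A minimal sketch using the FAST Search for SharePoint PowerShell cmdlets (run in the FAST shell on the FAST admin server; the managed property name “DocTaxonomy” is a placeholder for your own property):

```powershell
# Sketch: enable the merge option so multiple crawled values are returned.
# "DocTaxonomy" is a placeholder -- substitute your managed property name.
$mp = Get-FASTSearchMetadataManagedProperty -Name "DocTaxonomy"
$mp.MergeCrawledProperties = $true
$mp.Update()
```

A full crawl is typically needed afterward for the change to show up in results.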
Crawls of our FAST Search content all of a sudden started “hanging,” reporting 0 successful crawls along with some errors.
We were seeing no errors to speak of in the event logs or in the ULS logs on either SharePoint server, and the FAST Server was reporting no errors.
Expired Self Signed Certificate for Content SSA Communication
Like many people, we followed the setup instructions on TechNet for setting up the FAST Search Server 2010 for SharePoint 2010 Farm found here:
During that setup process, we used the supplied FASTSearchCert.pfx self-signed certificate to secure the Content SSA communication. That certificate has a one-year expiration. Exactly one year after we configured the servers, crawling ceased to function.
One solution is to simply generate a new self-signed certificate via Microsoft’s instructions here:
This will, however, land you in the same situation a year from…
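Until you replace the certificate with a longer-lived one, it is worth keeping an eye on the expiration date. A quick check from PowerShell on the server (the subject filter is an assumption; adjust it to match the certificate you installed):

```powershell
# List candidate certificates and their expiry dates; the subject filter
# is an assumption -- adjust it to match your FAST certificate.
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*FASTSearchCert*" } |
    Select-Object Subject, NotAfter
```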
When we were attempting to crawl an RSS feed for a client using the FAST Web Crawler, we found the following error in our RSS crawl logs (located at \FASTSearch\var\log\crawler\node\fetch\<content collection name>):
“Invalid RSS MIME type: text/xml”
We found that the MIME type being indicated by the server was indeed “text/xml”. However, we were able to crawl another feed that used ‘text/xml’.
See the screenshot below. The feed on the left crawled properly; the feed on the right did not.
The only difference we were able to find was that the working feed had the XML version declaration, while the non-working one did not.
In order to ensure FAST can properly crawl your RSS feeds, make sure they are being returned as MIME type: application/rss+xml.
We were able to have our client change the MIME type of the RSS feed, and although it was formatted exactly it was…
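For reference, a feed response that FAST accepts should carry the RSS MIME type in its Content-Type header and include the XML declaration; roughly (the body below is a placeholder):

```
HTTP/1.1 200 OK
Content-Type: application/rss+xml

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>...</channel>
</rss>
```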
While attempting to set up the FAST Web Crawler for a client, I was unable to find a way to force FAST to re-crawl data immediately. The web crawler uses an internal scheduler to schedule fetch operations.
There are a large number of options for the crawleradmin.exe command line tool that looked like they might work:
(from the crawleradmin.exe reference on TechNet)
but none of them forced an immediate crawl.
I found that using
crawleradmin -d <content collection name>
to delete the data in the FAST Web Crawler content collection, then using
crawleradmin -f <crawler config .xml file>
forced an immediate recrawl.
Note: Deleting the data from the FAST Web Crawler content collection DOES NOT delete all of the data from the FAST content collection with the same name. Any data crawled through your FAST Content SSA will still be intact after running crawleradmin -d.
The most effective way to crawl RSS content in a SharePoint/FAST farm is to use the FAST Web Crawler. The FAST Web Crawler is a component supplied with FAST that is administered completely outside of SharePoint, on the FAST server itself.
In order to configure an RSS Crawl, you must first set up some XML configuration files.
First, copy CrawlerCollectionDefaults.xml.generic.template from \FASTSearch\META\config\profiles\default\templates\installer to your FASTSearch\etc folder.
Next, make a copy of \FASTSearch\etc\CrawlerConfigTemplate-RSS.xml and save it in \FASTSearch\etc with some unique name. (I use \FASTSearch\etc\rss.xml.)
Now, open the file in a text editor.
Find the Domain Specification line. Ensure the name property is the same as the Content Collection you crawl into (“sp” by default for most people) or you will be unable to query your results.
Next, set the RSS URL(s) to crawl. In the ‘rss’ section, add <member> lines as seen below…
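The original screenshot is not reproduced here, but as a rough sketch the result looks something like the following (element names follow the usual FAST crawler config XML format; the attrib name and feed URL are assumptions, so use the names already present in the rss section of your template):

```xml
<!-- Sketch only: the attrib name and feed URL are assumptions. -->
<section name="rss">
  <attrib name="start_uris" type="list-string">
    <member>http://www.example.com/news/feed</member>
  </attrib>
</section>
```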
During a recent server migration we attempted to bring up a client application that utilizes Search.asmx in a SharePoint/FAST Search installation. The application calls GetMetadataProperties() at startup to create a cache of properties for a search based web application. We were getting errors that we hadn’t seen previously.
The .NET Code executing the GetMetadataProperties() call was receiving the following error:
There was no inner exception and no string in the Message property.
We examined the ULS logs on the Central Admin server and got the following:
SearchServiceApplication::GetProperties–Exception: Microsoft.SharePoint.Search.Extended.Administration.Common.AdminException: Failed to communicate with the WCF service. —> System.ServiceModel.Security.SecurityAccessDeniedException: Access is denied. Server stack trace:
Unable to retrieve FAST schema context — Exception: Failed to communicate with the WCF service. Microsoft.SharePoint.Search.Extended.Administration.Common.AdminException: Failed to communicate with the WCF service. —> System.ServiceModel.Security.SecurityAccessDeniedException: Access is denied. Server stack trace:
SearchServiceApplicationProxy::GetProperties–Error occured: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: Failed to communicate with the WCF service. (Fault…
During part of the setup process, you secure SSL communications between the SharePoint server and the FAST server by running the SecureFASTSearchConnector.ps1 script. You supply the certPath, the username of the account running the OSearch14 service, and the name of the FAST Content SSA.
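Based on those parameters, the invocation looks something like this (every value below is a placeholder for your environment):

```powershell
# Run on the SharePoint server, logged in as the account that runs the
# OSearch14 service; all values are placeholders.
.\SecureFASTSearchConnector.ps1 -certPath "C:\certs\FASTSearchCert.pfx" `
                                -ssaName "FAST Content SSA" `
                                -username "DOMAIN\spfarm"
```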
In my case, it was an account called SP Farm. I did this multiple times, and even ran Ping-SPEnterpriseSearchContentService to verify the communication; all looked good. However, crawling was unsuccessful, stating “Unable to resolve ContentDistributor….” on port 13391, even though SecureFASTSearchConnector.ps1 reported “connection to <fastserver>.domain.local:13391 successfully verified.”
You must be logged into the SharePoint 2010 application server as the account running the OSearch14 service when you run the SecureFASTSearchConnector.ps1 script. We had initially run this script under a different account.
Log into the box as that user, re-run the script, run IIS reset, and the…
We’ve been doing some work for multiple clients using FAST Search for SharePoint 2010 and indexing content surfaced using SharePoint’s Business Connectivity Services (BCS) and External Content Types (ECTs).
I was recently looking for how to do incremental crawls of external data and found these two MSDN articles, which are extremely detailed and informative on the subject. They cover:
Configuring BCS ECTs for Search
Customizing Search Results
Using Associations for Master/Detail Relationships
BDC Model XML
Modifying ECTs to support incremental crawls (2 different methods, with pros and cons)
I highly recommend reading them:
Configuring SharePoint Server 2010 Search for External Content Types (Part 1 of 2)
Configuring SharePoint Server 2010 Search for External Content Types (Part 2 of 2)