You take the good... you take the bad... you take them both and there you have... the facts of... Search?
(High-five for anyone who can name the TV show without Google)
That's right. In a way, Search is just like that. It crawls and indexes content, both good and bad, and tries its best to provide fast and relevant results. But, as with most things, there are trade-offs. In this case, it's usually relevancy vs. speed. Over the past several major iterations of the Jive platform, Jive's engineers have maintained a constant focus on delivering faster, more relevant search results. Most recently, in Jive 5, the search architecture was completely revamped to set the stage for conversations such as this one:
How can we make search relevancy better?
Problem: Search relevancy is often considered a qualitative metric. In order for Jive engineers to properly tune and tweak the search relevancy algorithms that ship with the product, we need use-cases that we can test against. Most importantly, those use-cases need a common data-set, which in most cases is very difficult to share.
- If only there were a way we could articulate use-cases against a common data-set.
- If only feedback could be gathered prior to product release from a live system, not mock data.
Solution: The Jive Community. It's a live Jive 5 instance with a large data sample, and it houses many of the same use-cases customers see in their own instances.
How to Help using the Jive Community:
We are asking customers to share their search relevancy grievances with Jive in this conversation, and to provide the following details where available.
- How are you searching? (@mentions, spotlight search, search page, other)
- Example of the Search Query Typed
- Example of the Data Scenario
- For example, where were you expecting the match to hit? Subject? Body? Attachment? Tags? Comments? Binary contents? Inline comments? Hidden meta-data?
- Also, specifics about the context. Were you searching across containers? A single container? With filters? etc.
- Example of the Search Results Returned
- For example, a screenshot of the Jive Community results (redacted if need be) is extremely helpful.
- What is your "result sentiment"? Are the results returned fair, unexpected, right on, or random?
- Please also indicate whether this is negative or positive search behavior.
Note: If a search use-case is already listed, please Like it so we can gauge the reach of a use-case.
How will this feedback be used?
For each piece of reproducible feedback our engineers receive that is articulated in terms of the Jive Community, Jive will add the scenario to a suite of regression tests used to validate future relevancy tuning initiatives.
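To illustrate what such a regression case could look like, here is a minimal sketch in Python. All names here (`RelevancyCase`, `passes`, the document IDs) are hypothetical illustrations, not Jive's actual test harness: the idea is simply that each reported scenario captures the query, the search context, and the documents the reporter expected near the top of the results.

```python
from dataclasses import dataclass


@dataclass
class RelevancyCase:
    """One reproducible search-relevancy scenario reported by a customer."""
    query: str                 # the exact query string typed
    context: str               # e.g. "all containers" or a single container
    expected_doc_ids: list     # documents the reporter expected to match
    max_rank: int = 5          # expected docs should appear within this rank


def passes(case: RelevancyCase, ranked_results: list) -> bool:
    """Return True if every expected document appears within the top max_rank results."""
    top = ranked_results[:case.max_rank]
    return all(doc_id in top for doc_id in case.expected_doc_ids)


# A hypothetical reported scenario: a match expected from an attachment.
case = RelevancyCase(
    query="quarterly roadmap",
    context="all containers",
    expected_doc_ids=["DOC-1234"],
)

print(passes(case, ["DOC-9", "DOC-1234", "DOC-7"]))                    # True
print(passes(case, ["A", "B", "C", "D", "E", "DOC-1234"]))             # False: rank 6
```

A suite of such cases can be re-run after every tuning change, turning the qualitative feedback gathered here into a repeatable pass/fail check.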
If you have any questions about this effort, please feel free to ask either me or Karl Rumelhart (who is leading this effort from Engineering). We look forward to your feedback.