You've heard it before -- especially in a public setting seeded with unfamiliar faces: "There are no stupid questions."
Usually the moderator who says this is responding to a lack of feedback -- especially when the presentation they've just given is alien or controversial to at least some of the participants.
In all honesty, the stupidity lies with the moderator for boxing themselves into an exchange-proof presentation. But if we were even more honest about the kinds of questions that drive search analysts and KM folks batty, it's the misinformed question: one built on unfounded assertions, urban legends, and generalized assumptions stretched well past the point where they fit.
For example, it's entirely understandable that a rocket scientist raised on Google believes he can pepper his query with the names of propellants and launchers and then truncate on a few choice biological weapons. What's misinformed about that? Nothing, if you're on the web. But run it on your firm's SharePoint server, where rockets are not what you sell and maintain, and you hit two walls right away:
1. Complex question +
2. Uncommon terms =
3. Dumb question
Of course, the site admin who sees it is no more likely to point this out than the search tool itself. Can you imagine buying the Google appliance and having every zero-hit set of search results come back with "Did you mean to search this on public Google?" The problem, metaphorically, is that Rocket Star is sticking to his guns by running an ocean-sized search request inside the information pond that is my intranet.
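A search layer could at least tell Rocket Star *why* he got zero hits. Here's a minimal sketch of that idea, assuming a hypothetical `search` callable and a set of terms known to the local index (both stand-ins, not any real appliance's API): on an empty result, report which query terms simply don't exist in the collection, rather than returning a bare zero.

```python
# Hedged sketch of a zero-hit fallback for an intranet search layer.
# `search` and `index_vocabulary` are hypothetical stand-ins for
# whatever engine and term dictionary your shop actually runs.

def fallback_suggestion(query, index_vocabulary, search):
    """Run the query; on zero hits, say which terms the local
    collection has never seen instead of failing silently."""
    hits = search(query)
    if hits:
        return hits, None
    unknown = [t for t in query.lower().split() if t not in index_vocabulary]
    if unknown:
        note = ("No results. These terms do not occur anywhere in this "
                "collection: " + ", ".join(unknown) +
                ". Try the public web for those.")
    else:
        note = "No results for this combination of terms; try fewer of them."
    return [], note
```

The design point is modest: the tool stops pretending the pond is an ocean, and the user learns which words belong to the web rather than to the intranet.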
Here's a QA framework I developed that illustrates the response range in terms of the battles worth fighting (stay with the upper quadrants):
Short of remedial information literacy classes, the best workaround is to focus on one or two unique terms, so that my user can see the lay of the Rocket land in my shop before plowing ahead with anything more esoteric or complex. I can also engineer a search outcome that breaks the question down by the topic addressed. But that works best for blank, receptive brains -- not for domain experts.
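The "one or two unique terms" tactic can be sketched mechanically: given a document-frequency table for the local collection (a hypothetical input here, not part of the original post), keep only the query terms that actually occur in the collection, then lead with the rarest of them.

```python
# Hedged sketch: pick the one or two query terms that are rarest in
# the local collection but still present in it. `doc_freq` (term ->
# number of local documents containing it) is an assumed input.
import math

def distinctive_terms(query, doc_freq, n_docs, keep=2):
    """Return the `keep` most distinctive local terms in the query,
    ranked by inverse document frequency within this collection."""
    present = [t for t in query.lower().split() if doc_freq.get(t, 0) > 0]
    # Higher IDF means rarer in this shop, hence more distinctive.
    present.sort(key=lambda t: math.log(n_docs / doc_freq[t]), reverse=True)
    return present[:keep]
```

Terms absent from the collection are dropped entirely; they are the ocean-sized part of the query that no amount of ranking will rescue.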
Ultimately, the best run around the "no bad questions" mindset is to connect people and dispense with relevancy scoring for documents. Once we're past that, we can actually prove what a good question can be. But only by providing a sound answer, and people deliver those better than PowerPoints do.
- Marc Solomon
- attentionSpin is a consulting practice formed in 1990 to create, automate, and apply a universal scoring system ("The Biggest Picture") to brands, celebrities, events, and policy issues in the public eye. In the Biggest Picture, attentionSpin applies the principles of market research to the process of media analytics to score the volume and nature of media coverage. The explanatory power of this research model:
1. Allows practitioners to understand the requirements for managing the quality of attention they receive
2. Shows influencers the level of authority they hold in forums where companies, office-seekers, celebrities, and experts sell their visions, opinions, and skills
3. Creates meaningful standards for measuring the success and failure of campaigns and their connection to marketable assets