Saturday, March 26, 2011

Yes, Some Questions are Better Left Unasked


We've been told, since we were old enough to test our merits and theories, that there are no dumb questions. The urgency of this instruction tends to increase not with the caliber of the questioning but with the size of the group. Frowning on any form of inquiry is akin to the inquisition of Oliver Twist and his empty gruel bowl. Nobody wants to be the public silencer of curiosity. No one wants to discourage debate -- at least while the cameras are rolling. That kind of discouragement happens more naturally behind closed doors (and minds too, one insinuates). That said, most public forums for open questioning happen virtually these days. From email lists to Facebook walls, there's a whole lot of prodding and posturing around what we're trying to pursue -- questions we feel more comfortable asking our fellow humans than the sequestered confession chamber we have come to know as the Google search box.

These garden-variety crowd-sourcing drills are routine behind my firewall, where domain experts and junior staff alike are more comfortable receiving pointers and attachments from peers than plowing through an unvetted pile of search results. Sometimes an uninformed question is so basic that tree-in-the-forest physics kicks in: no response is not the same as being ignored, though admittedly it's hard to spot the difference in cyberspace.

One way to avoid the open stares of disbelieving colleagues is to press for currency -- a great face-saving defense that implies we know our stuff, just not the latest stuff. Another is to watch the zealousness of the responders. Are they dispatching their personal stashes of best practices or giving the requester a number to call? Are the responders as secure in their answers as they are in their positions? There is a tendency for junior-level people to overshare expertise because: (1) they're eager to jump in, (2) they prefer texting to face-to-face problem-solving, or (3) they need to get with the post-merger realities of a changing power structure.

But enough about me, the responder. How can we make our questions more informed without becoming too pointed? How can they expand and advance existing discussions without becoming too open-ended -- too tenuous to invite follow-up?

For starters, let the question breathe a little. Don't put practitioners on the spot with "is it this?" or "is it that?" Reducing all the complexities to a multiple-choice outcome tells the gallery you really can't draw the distinction between talking to machines and talking to people. Secondly, how the question gets asked supersedes the topic or our learning objectives. Owning up to where we've been, the walls we've hit, and the loops we're trying to cycle through means we're paying our dues and our respects -- not to any one obstacle or expert but to the community of practice we're addressing for its collective problem-solving intelligence. As hacker Eric Raymond writes in his online manual How to Ask Questions the Smart Way, there is more than a subtle difference between demanding "an answer" and requesting collaborators. The presumption is that there is a scarcity of definitive answers and a wealth of lessons waiting to be drawn from a pool of experience.

Another crowdsourcing pleaser is to summarize the responses, both as a form of gratitude to the participants and respect for the process. A brief write-up of the investigation also validates the community's commitment to building know-how, not merely revisiting the same knowledge held by our domain experts. That's because the summary integrates the responses into a shared output. It's the interplay of an unfolding conversation -- not a scripted, one-sided, and static one. Unpacking responses this way lends itself to wikis, so long as the formatting remains simple enough to foster and contain further problem-solving as the community evolves.

Then there's the powerful allure of motivation: why I'm asking. Divulging one's incentives for knowing will disarm the most overconfident know-it-all. That means not competing to be the smartest guy in the room. The burden of proof shifts toward a common purpose built on a shared understanding. Of course, it will take more than case summaries and deference to practice members for junior-level requesters to pick the brains of the more seasoned practitioners. The purpose of the question is key. It's not that answering a question makes us instant and equal partners in the same outcome. It's the faith in knowing that the wheel will turn. As casually as a requester can speak their inquiring mind, the sincerest way to complete this virtuous loop is through reciprocation.

In sum, the most incisive question can lose its smarts if the questioner:

  • asks unblinking yes-or-no questions that require a back story to move forward

  • is not forthcoming about the path that led to their request, and

  • fails to disclose what they hope to gain by involving the larger community.


Conversely, the burden falls on the responder to: (a) trust that the collaboration can run both ways, or (b) rediscover the rapture of learning in the same advice they're giving -- that giving is its own reward. The teaching becomes the end as well as the means.

As an online research educator, I consider myself lucky to fall into this second camp.

Any questions?

 



